Infrastructure deterioration models rely critically on accurate sub-models to predict the rate at which elements will deteriorate and the impact treatment interventions will have on condition parameters. These two types of sub-models are typically referred to as deterioration rate and reset models. Traditionally, developing these sub-models has depended on large-scale research efforts, often multi-year, multi-stakeholder projects. Examples include the various iterations of the World Bank's Highway Design and Maintenance (HDM) models for road deterioration. These projects usually result in documents detailing regression models for different parameters and infrastructure situations.
While these landmark studies have significantly benefited asset management, the research-intensive and sporadic release cycles of this approach do not fully reflect the modern era of Big Data. Enterprise asset management systems now hold as much information as, or more than, these landmark studies. In the era of machine learning models, the abundance of infrastructure inventory and condition data, coupled with historic maintenance information, provides an opportunity for a different approach to developing and implementing sub-models for infrastructure deterioration.
In this presentation, we present details of a tool-set in which machine learning models are automatically selected and trained using up-to-date historical condition data extracted as part of a multi-step data processing pipeline. The trained models are then implemented in the Juno Cassandra deterioration modelling framework and run over a 10-year historical period for final calibration. We present a proof-of-concept for this approach, trialled using actual historical data gathered on New Zealand State Highways. Key steps in the methodology are presented and discussed, along with advantages and disadvantages of the approach. Finally, we discuss lessons learned during the development process.
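To illustrate the kind of automatic model selection and training step described above, the sketch below compares a small set of candidate regressors by cross-validation and refits the best one as a deterioration-rate sub-model. This is a minimal, hedged example, not the authors' implementation: the candidate model types, feature columns, and scoring choice are assumptions introduced here for illustration only.

```python
# Minimal sketch (illustrative, not the authors' tool-set): select a
# deterioration-rate sub-model by cross-validated comparison of candidates.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

def select_and_train(X, y, cv=5):
    """Return the candidate with the lowest cross-validated RMSE, refit on all data."""
    candidates = {
        "linear": make_pipeline(StandardScaler(), LinearRegression()),
        "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
        "gradient_boosting": GradientBoostingRegressor(random_state=0),
    }
    scores = {
        name: -cross_val_score(model, X, y, cv=cv,
                               scoring="neg_root_mean_squared_error").mean()
        for name, model in candidates.items()
    }
    best_name = min(scores, key=scores.get)
    best_model = candidates[best_name].fit(X, y)
    return best_name, best_model, scores

# Usage with synthetic data standing in for extracted condition histories;
# the feature columns (e.g. pavement age, traffic loading, prior condition)
# are hypothetical placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 0.4 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.2, size=500)
name, model, scores = select_and_train(X, y)
print(name, scores)
```

In a pipeline of this kind, the refit model would then be exported for use in the deterioration modelling framework and checked against a historical period, as described above for the Juno Cassandra calibration run.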