Introduction
In the past two decades, a large effort has gone into the development of maintenance optimization systems for bridge management purposes. With road networks mostly complete and bridges aging, a systematic approach to keeping the infrastructure maintained was needed. Several systems have been developed, the most notable being Pontis, a bridge management system (BMS) developed in the United States at the request of the U.S. Federal Highway Administration (FHWA). Pontis was developed under contract by two consulting companies in collaboration with six U.S. states. After a trial implementation in California and testing in several states, the system was adopted by the American Association of State Highway and Transportation Officials (AASHTO) (Golabi & Shepard, 1997). In 1991, the U.S. Congress mandated that state Departments of Transportation develop and implement comprehensive bridge management systems. Since the national bridge inventory was to be based on Pontis, many states elected to adopt it as their BMS.
Pontis was designed along the lines of Kamal Golabi's proposal, and Golabi was asked to head the team that developed it. In March 1988, at the suggestion of Bill Hyman of the Urban Institute, Kamal Golabi and Dan O'Connor, an official with the U.S. Federal Highway Administration, began a series of meetings to discuss the modeling and optimization approach to bridge management that Golabi had been advocating for a number of years (Golabi & Shepard, 1997). Golabi had earlier developed a pavement management system for the State of Arizona that produced optimal maintenance policies for its 7,400-mile highway network (Golabi, Kulkarni, & Way, 1982). At the heart of that system is a Markov decision model, and a similar Markov chain model was chosen to drive the Pontis bridge management system (Golabi & Shepard, 1997; Thompson, Small, Johnson, & Marshall, 1998).

In the Markov model of Pontis, deterioration is represented by discrete condition states observed visually, and transitions from one state to another are modeled with a Markov chain. The transition probabilities are determined from expert judgment and empirical observations. The Markov formulation is appealing because it provides a framework that accounts for uncertainty, and optimal policies can be obtained by solving simple mathematical programming problems (Thompson, Neumann, Miettinen, & Talvitie, 1987; van Winden & Dekker, 1998; Golabi & Pereira, 2003). However, a number of criticisms have been raised against the model (Madanat, Mishalani, & Ibrahim, 1995; Frangopol, Kong, & Gharaibeh, 2001). Among the issues raised is the Markov chain's restrictive stationarity assumption, under which the effect of time is not introduced effectively.
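To make the modeling idea concrete, the following is a minimal sketch, not the actual Pontis formulation or data, of a stationary Markov deterioration model for a single bridge element. The four condition states, the "do nothing" transition matrix, and all probabilities are hypothetical; the sketch also makes the criticized stationarity assumption visible, since the same transition matrix is applied every year regardless of the element's age.

```python
import numpy as np

# Hypothetical example: four visually observed condition states,
# 1 (good) through 4 (poor). Under a "do nothing" policy, each year an
# element either stays in its current state or degrades to the next one.
# Row i gives the probability distribution over next year's state.
P_do_nothing = np.array([
    [0.90, 0.10, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],  # worst state is absorbing without repair
])

def propagate(state_dist, P, years):
    """Propagate a condition-state distribution forward in time.

    The stationarity assumption is explicit here: the same matrix P is
    applied in every year, so deterioration depends only on the current
    state, not on how long the element has been in service.
    """
    d = np.asarray(state_dist, dtype=float)
    for _ in range(years):
        d = d @ P
    return d

# An element (or a population of elements) starting in the best state.
start = np.array([1.0, 0.0, 0.0, 0.0])
after_10 = propagate(start, P_do_nothing, 10)
```

In a decision-model extension of this sketch, each maintenance action would carry its own transition matrix and cost, and an optimal policy could be sought by mathematical programming over the resulting Markov decision process, which is the general approach the paragraph above describes.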