Innovations in Information Systems for Business Functionality and Operations Management


John Wang (Montclair State University, USA)
Release Date: April 2012 | Copyright: © 2012 | Pages: 404
DOI: 10.4018/978-1-4666-0933-4
ISBN13: 9781466609334|ISBN10: 1466609338|EISBN13: 9781466609341
Hardcover:
Available
$185.00
Benefits
  • Free shipping on orders $395+
  • Printed-On-Demand (POD)
  • Usually ships within one day of order
  • 20% discount on 5+ titles*
E-Book:
(Multi-User License)
Available
$185.00
Benefits
  • Multi-user license (no added fee)
  • Immediate access after purchase
  • No DRM
  • ePub with PDF download
  • 20% discount on 5+ titles*
Hardcover + E-Book:
(Multi-User License)
Available
$220.00
Benefits
  • Free shipping on orders $395+
  • Printed-On-Demand (POD)
  • Usually ships within one day of order
  • Multi-user license (no added fee)
  • Immediate access after purchase
  • No DRM
  • ePub with PDF download
  • 20% discount on 5+ titles*
OnDemand:
(Individual Chapters)
Available
$37.50
Benefits
  • Purchase individual chapters from this book
  • Immediate PDF download after purchase or access through your personal library
  • 20% discount on 5+ titles*
Description & Coverage
Description:

Tailoring information systems development to operations management and business strategy is an important way to bolster the effectiveness and efficiency of any institution's management.

Innovations in Information Systems for Business Functionality and Operations Management offers a vital compendium of the latest research in IS/IT applications to business and operations management. With contributions from experts around the world in the form of case studies, methodologies, best practices, frameworks, and research, this critical collection of cutting-edge and state-of-the-art technological references will serve practitioners and academics alike.

Coverage:

The many academic areas covered in this publication include, but are not limited to:

  • Analytics
  • Business process mapping
  • Customer Relationship Management
  • Forecasting
  • Inventory management software
  • Lean Manufacturing
  • Operations Research
  • Project Management
  • Strategy
  • Supply Chain Management
Editor Biographies
John Wang is a professor in the Department of Information & Operations Management at Montclair State University, USA. Having received a scholarship award, he came to the USA and completed his PhD in operations research at Temple University. In recognition of contributions beyond those expected of a tenured full professor, Dr. Wang was honored with a special range adjustment in 2006. He has published over 100 refereed papers and seven books. He has also developed several computer software programs based on his research findings.

He is the Editor-in-Chief of the International Journal of Applied Management Science, the International Journal of Operations Research and Information Systems, and the International Journal of Information Systems and Supply Chain Management. He is the Editor of Data Warehousing and Mining: Concepts, Methodologies, Tools, and Applications (six volumes) and of the Encyclopedia of Data Warehousing and Mining, first edition (two volumes) and second edition (four volumes). His long-term research goal is the synergy of operations research, data mining, and cybernetics.
Editorial Review Board
Editorial Advisory Board

  • Sungzoon Cho, Seoul National University, South Korea
  • Theodore Glickman, The George Washington University, USA
  • Eva K. Lee, Georgia Institute of Technology, USA
  • Panos Pardalos, University of Florida, USA
  • Roman Polyak, George Mason University, USA
  • Jasenka Rakas, University of California at Berkeley, USA
  • Ravi Ravindran, Pennsylvania State University, USA
  • Kathryn Stecke, University of Texas at Dallas, USA

Peer Review Process
The peer review process is the driving force behind all IGI Global books and journals. All IGI Global reviewers maintain the highest ethical standards, and each manuscript undergoes a rigorous double-blind peer review process, which is backed by our full membership in the Committee on Publication Ethics (COPE).
Ethics & Malpractice
IGI Global book and journal editors and authors are provided written guidelines and checklists that must be followed to maintain the high value that IGI Global places on the work it publishes. As a full member of the Committee on Publication Ethics (COPE), all editors, authors, and reviewers must adhere to specific ethical and quality standards, which include IGI Global’s full ethics and malpractice guidelines and editorial policies. These apply to all books, journals, chapters, and articles submitted and accepted for publication. To review our full policies, conflict of interest statement, and post-publication corrections, view IGI Global’s Full Ethics and Malpractice Statement.

Preface

INNOVATIONS IN INFORMATION SYSTEMS FOR BUSINESS FUNCTIONALITY AND OPERATIONS MANAGEMENT

Innovations in Information Systems for Business Functionality and Operations Management belongs to the Advances in Operations Research and Information Systems book series. The book contains nine sections and 19 chapters.

SECTION 1: COMPUTING AND INFORMATION TECHNOLOGIES

Section 1 consists of two chapters. Chapter 1, entitled “The Open System for Master Production Scheduling: Information Technology for Semantic Connections between Data and Mathematical Models,” is written by Lee, Schuster, Allen, and Kar. The OSMPS (Open System for Master Production Scheduling) presented in the chapter is also a research program at the MIT Laboratory for Manufacturing and Productivity, with the goal of offering sophisticated modeling capabilities for manufacturing systems. Specifically, the M Language, in conjunction with other Web standards, enables the Modified Dixon-Silver Heuristic (MODS), developed by Lee, Schuster, Allen, and Kar. After more than ten years of industry-based testing, they delivered MODS to users through a Software as a Service (SaaS) model. This approach lets anyone access the MODS algorithm on a remote server using only a Microsoft Excel spreadsheet. There is no implementation of MODS on local computing systems, and access is immediate. Essentially, the algorithm serves as a calculator and does not store any data from the spreadsheet on the server.
An important feature of the approach is its unique way of providing machine-understandable semantics to describe the inputs, outputs, and attributes of a mathematical model. The M Language dictionary provides machine-understandable semantics for describing the data fields that are inputs to MODS. In this way, there can be no misunderstanding regarding the type of data needed to run the model or the meaning of the various calculations required for input data. The M Language is also the standard for XML data transfers between the spreadsheet and the algorithm or any other target. This makes the data interoperable. Ultimately, the goal of the OSMPS is to have many models loaded on a server, accessible through a simple interface distributed to users. This provides central control over versions of the code while giving users the flexibility of experimenting with different models for a specific problem. Currently, ERP systems employ a single model to solve practical problems like production scheduling, which limits the ability to experiment.
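To make the “algorithm as a calculator” idea concrete, the following minimal Python sketch posts a few spreadsheet rows, tagged with M Language-style field names, to a remote scheduling service and reads back the response. The endpoint URL, field names, and payload layout are hypothetical illustrations, not the actual OSMPS interface, which exchanges XML defined by the M Language dictionary.

```python
import urllib.request
import xml.etree.ElementTree as ET

def run_remote_mps(rows, url="https://example.org/osmps/mods"):   # placeholder URL
    """Send spreadsheet rows as XML to a remote scheduling service and parse the reply."""
    root = ET.Element("MasterScheduleRequest")
    for r in rows:
        item = ET.SubElement(root, "Item")
        for field, val in r.items():                # e.g. "ItemNumber", "DemandQty"
            ET.SubElement(item, field).text = str(val)
    req = urllib.request.Request(url, data=ET.tostring(root),
                                 headers={"Content-Type": "application/xml"})
    with urllib.request.urlopen(req) as resp:       # nothing is stored server-side
        return ET.fromstring(resp.read())

# The rows would normally come straight from the Excel sheet the users already maintain:
# plan = run_remote_mps([{"ItemNumber": "A-100", "DemandQty": 250, "SetupCost": 40}])
```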
Their upcoming article in Computers and Electronics in Agriculture, published by Reed Elsevier, builds on the chapter. It is often a challenge to get practitioners in agriculture to use data and mathematical models for everyday decision making. Agriculture lags behind other economic segments such as manufacturing in the use of data, models, and statistics to improve quality and productivity. They hope their work brings the agriculture industry a step forward.
Lee et al. encourage the readers of Operations Research and Information Systems to pay close attention to three areas related to computing semantics: cyber-physical systems, cloud computing, and the Internet of Things. In all cases, these technologies will increase the need for Internet connections between data and mathematical models, something that their research directly addresses. In the future, a vital part of producing business value will involve the increased effectiveness of mathematical modeling in practice. Their approach is a systematic advancement in the practice of modeling. The technology described in Chapter 1 was disclosed to the MIT Technology Licensing Office in 2008 and 2009, and remains under copyright protection.
In Chapter 2, Emiris and Marentakis present “A Unified Classification Ecosystem for Auctions.” Auctions constitute an extremely interesting area of study for both researchers and practitioners, since they are a powerful trading tool with many advantages for all market participants. Nowadays, advances in electronic and mobile infrastructure have boosted the use of auctions in almost any market setting. During the 2000s, several research efforts dealt with the engineering of auctions, the behavior of participants, the marketing and technological perspectives, and their integration with other research and business disciplines. This led to a vast number of research publications examining several issues, the most important of which is the design of auction mechanisms. Since the range of auction mechanisms and corresponding parameters is quite wide, the need for a parametric auction classification scheme seems inevitable. This need is further amplified by the fact that auctions are used widely in almost every type of marketplace, engaging businesses, consumers, governments, and various combinations among them, especially in the current era of Internet business.
Emiris and Marentakis begin with an extensive analysis of the major auction types and auction mechanism parameters based on a wide review of the literature on auctions, especially from the 2000s. Their review focuses on the major auction mechanisms and parameters, their use, and their allowable values. In this chapter, they selected over one hundred of the more representative ones and used them as the basis for their study, starting by shedding light on auction taxonomy schemes proposed in the past. They propose a new taxonomy scheme, conceptually defined as the Auction Classification Ecosystem (ACE). ACE is intended to be a complete and integrated classification scheme serving both researchers and practitioners in designing auction mechanisms in a structured format and in analyzing auction mechanisms empirically or experimentally. More importantly, the proposed ACE model is also expandable, since it can be enriched with additional auction parameters and findings from the research literature. The structure of the classification scheme can convey information to the modeler of information systems during the database and information flow design phase of an informatics application.
Overall, in their chapter, Emiris and Marentakis provide an overview of auction theory and relevant research. They then comment on and evaluate the different auction taxonomy schemes, examine the published works dealing with the parameters that influence auction mechanisms, and summarize their results. Finally, they present the ACE and highlight guidelines for future research.
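As a rough illustration of how a parametric classification of this kind could be carried into the database and information-flow design of an auction application, the sketch below records each mechanism as a set of named parameters with controlled vocabularies. The parameter names and allowed values are illustrative examples, not the ACE specification.

```python
from dataclasses import dataclass

@dataclass
class AuctionMechanism:
    """One record per mechanism; each field would map to a coded database column."""
    direction: str        # "forward" or "reverse"
    pricing: str          # e.g. "first-price", "second-price"
    items: str            # "single-item", "multi-item", "combinatorial"
    attributes: str       # "price-only" or "multi-attribute"
    information: str      # e.g. "open-cry", "sealed-bid"

english = AuctionMechanism("forward", "first-price", "single-item", "price-only", "open-cry")
vickrey = AuctionMechanism("forward", "second-price", "single-item", "price-only", "sealed-bid")
print(english, vickrey, sep="\n")
```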

SECTION 2: DECISION ANALYSIS

Section 2 consists of three chapters. In Chapter 3, Chang, Ku, and Ho introduce “Fuzzy Multi-Choice Goal Programming for Supplier Selection.” Supplier selection is an important issue for purchasing management in supply chain management. Selecting the right suppliers and determining the appropriate orders from them can bring significant benefits: reduced purchasing cost, decreased supply risk, and improved product quality. It is not easy to use published supplier selection models, because all of their coefficients must be recomputed whenever an organization changes its supply chain strategy to match its marketing needs. Also, some linguistic preferences among goals are not easy for Decision Makers (DMs) to express, yet the relationships among goals can affect the fitness of the selected suppliers for the organization. In order to express DMs’ preferences for different supply chain strategies, DMs need the flexibility to determine not only the imprecise target of each goal but also the fuzzy relations among goals.
This study proposes a novel method called Fuzzy Multi-Choice Goal Programming (FMCGP) to help DMs set the satisfaction level of fuzzy relationships among goals when describing their ambiguous selection preferences. Imprecise importance relations among goals are modeled using fuzzy membership functions. Moreover, the FMCGP methodology allows DMs to set priorities on individual goals in supplier selection decisions under different supply chain strategies. DMs can determine a lower-bound satisfaction level on the priority relations among goals to obtain better suppliers for a given supply chain strategy. Only minimal changes to the coefficients are needed in the proposed method when organizations change their supply chain strategies for different supplier selection problems. Moreover, organizations can simulate the supplier selection process with FMCGP to test the average cost and average delivery efficiency of different supplier combinations before committing to a supply chain strategy.
To demonstrate how to select appropriate suppliers and determine the optimal order quantities from them under fuzzy importance relations, a real-world case of a Liquid Crystal Display (LCD) monitor and acrylic sheet manufacturer is presented. The result of the fuzzy importance relation setting in FMCGP corresponds more closely with DMs’ supplier selection goals. In this study, the authors propose a simple, practical method to help DMs construct their preferences and determine the weights on their decision goals. The proposed model provides a flexible way for DMs to assign suitable fuzzy relationships among goals and increases the probability of making appropriate decisions for their supplier selection problems.
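The crisp goal-programming core that FMCGP builds on can be sketched as follows: order quantities are chosen to minimize weighted deviations above cost and defect-rate goals. The supplier data, goal targets, and weights are hypothetical, the fuzzy multi-choice machinery of the chapter is not reproduced, and the sketch uses the open-source PuLP library rather than the authors’ tooling.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

suppliers = ["S1", "S2", "S3"]
unit_cost = {"S1": 10.0, "S2": 12.0, "S3": 9.0}        # hypothetical data
defect_rate = {"S1": 0.02, "S2": 0.01, "S3": 0.04}
capacity = {"S1": 600, "S2": 500, "S3": 800}
demand = 1000
cost_goal, defect_goal = 10500.0, 18.0                 # aspiration levels for the two goals
w_cost, w_defect = 1.0, 300.0                          # priority weights a DM might supply

m = LpProblem("supplier_selection_gp", LpMinimize)
x = {s: LpVariable(f"order_{s}", lowBound=0, upBound=capacity[s]) for s in suppliers}
d_cost = LpVariable("over_cost", lowBound=0)           # deviation above the cost goal
d_def = LpVariable("over_defects", lowBound=0)         # deviation above the defect goal

m += w_cost * d_cost + w_defect * d_def                # minimize weighted goal violations
m += lpSum(x[s] for s in suppliers) == demand          # meet total demand
m += lpSum(unit_cost[s] * x[s] for s in suppliers) - d_cost <= cost_goal
m += lpSum(defect_rate[s] * x[s] for s in suppliers) - d_def <= defect_goal
m.solve()
print({s: value(x[s]) for s in suppliers}, value(d_cost), value(d_def))
```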
Complex auction mechanisms have been at the center of research interest since the early years of auction theory. Such complex mechanisms arise when many items or units are auctioned in a single auction setting, either sequentially or simultaneously, when complementary values between items exist, and/or when the item has several negotiable attributes beyond price alone. Each of these cases corresponds to one dimension (items, units, attributes), so a multidimensional auction mechanism may contain up to three dimensions. Physical settings show severe shortcomings when one intends to implement a complex auction mechanism. Recent advances in information technology and communications have enabled the implementation of complex auction mechanisms; thus these mechanisms are currently treated as emerging.
In Chapter 4, Saurav Datta, Goutam Nandi, Asish Bandyopadhyay, and Pradip Kumar Pal highlight an integrated approach to solving the correlated multi-response optimization problem through a case study in Submerged Arc Welding (SAW). The proposed approach is presented to overcome various limitations and drawbacks of the optimization techniques available in the literature. The traditional Taguchi optimization technique is based on the assumption that quality responses are independent of each other; however, this assumption may not always be valid. A common approach to solving a multi-objective optimization problem is to convert the multiple objectives into an equivalent single objective function. While deriving this equivalent objective function, different weights are assigned to the responses according to their relative priority, and no specific guideline is available for assigning individual response weights. To avoid this, Principal Component Analysis (PCA) has been adopted to eliminate correlation among individual desirability values and to compute uncorrelated quality indices, which are then aggregated to calculate an overall grey relational grade. This study combines PCA, the Desirability Function (DF) approach, and grey relational theory with the entropy measurement technique. Finally, the Taguchi method is used to derive an optimal process environment capable of producing the desired weld quality related to bead geometry.
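A small numerical sketch of this kind of pipeline appears below: correlated desirability values are converted into uncorrelated indices via PCA and then aggregated into a grey relational grade. The data, explained-variance weighting, and distinguishing coefficient are assumptions made for illustration; the chapter additionally uses entropy-based weighting and the Taguchi design, which are not reproduced here.

```python
import numpy as np

D = np.array([[0.62, 0.55, 0.70],     # rows: experimental runs; cols: desirability of
              [0.80, 0.74, 0.65],     # bead width, reinforcement, penetration (assumed)
              [0.45, 0.60, 0.52],
              [0.90, 0.82, 0.78]])

Z = (D - D.mean(axis=0)) / D.std(axis=0)          # standardize each response
eigval, eigvec = np.linalg.eigh(np.cov(Z.T))      # PCA of the correlation structure
order = np.argsort(eigval)[::-1]
scores = Z @ eigvec[:, order]                     # uncorrelated quality indices
weights = eigval[order] / eigval.sum()            # explained-variance weights
# (a careful study would also align the sign of each component with its loadings)

# Grey relational coefficients against the per-index ideal (larger treated as better)
norm = (scores - scores.min(axis=0)) / (scores.max(axis=0) - scores.min(axis=0))
delta = 1.0 - norm                                # deviation from the ideal sequence
zeta = 0.5                                        # distinguishing coefficient
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc @ weights                             # overall grey relational grade per run
print("best run:", int(np.argmax(grade)), grade.round(3))
```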
In Chapter 5, Marentakis and Emiris acknowledge the complexity of the aforementioned mechanisms (complemented by the even more complex “combinatorial” format) and first examine the case of multi-dimensional auctions from a decision-making point of view, based on a detailed analysis and decoding of previous research efforts. The authors are the first to identify and systematically organize the decision-making problems for this type of auction, leading to the development of a decision-making model. This model is decomposed into three major decisions, each corresponding to a core auction phase: the establishment of a bidding language, the bid formation problem, and the winner determination problem. For each decision, the decision-maker has to select from a palette of decision elements, options, and constraints, and at the same time the proposed model can advise on auction types that correspond to these parameters and have been published in the literature. The authors present three decision models useful for both the auctioneer and the bidder in each of the multiple-item, multiple-unit, and multiple-attribute auction formats, which extensively cover the relevant research efforts of the last decade.
In brief, the authors provide a short description of emerging auctions focusing on multi-attribute, multi-unit, multi-item, and combinatorial auctions, followed by a review of decision problems and methods. The authors propose a graphical depiction of these approaches that serves as a systematic and comprehensive formulation scheme. Finally, they summarize the problems an auction designer faces while planning an auction setting.

SECTION 3: MILITARY AND HOMELAND SECURITY

Section 3 has two chapters. According to William P. Fox, war is a conflict between nations carried out by force of considerable duration and magnitude, by land, sea, or air, for obtaining superiority and dominion of one over the other. War has been a topic of analysis and research for some time. F. W. Lanchester originally published his mathematical models for air-to-air combat in his 1916 book, Aircraft in Warfare. These models, known as the linear and square laws, became the basis for much of the analysis of combat. These differential equation models have been the standard methodology for representing and solving many historical combat engagements. James G. Taylor alluded to the difficulties of solving more “realistic” equations and suggested numerical methods that can easily and conveniently be carried out on a computer. The use of computers to solve combat models analytically or numerically is now the standard approach.
In Chapter 6, William P. Fox introduces a new and novel approach. He converts the standard Lanchester differential equations to discrete form using discrete dynamical systems, or difference equations. Starting with Lanchester’s famous square law of combat, the author develops the generalized closed-form analytical solution for this discrete model, which does not appear to have been done previously. After analyzing the solution form, the author shows how the closed-form solution can be written directly from knowing only the kill rates and the initial force sizes of the combatants. He continues the analysis by examining the parity condition, where both sides fight to the finish, and develops a simple relationship between these same kill rates and initial force sizes in order to quickly determine who would win the engagement under parity conditions. The author supports his work with historical examples. The numerical and graphical results are not lost in the analytical treatment that tells the story of combat engagements, and his multi-faceted approach can be easily understood by readers. He then looks at current combat operations ongoing in Iraq and Afghanistan, illustrating how irregular warfare and counter-insurgency combat models can be realistically modeled, built, and numerically solved using various forms of modified Lanchester equations as difference equations.
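A minimal sketch of the difference-equation form of the square law is given below, together with the classical fighting-strength comparison a·x0² versus b·y0² that decides the winner. The kill rates, force sizes, and time step are hypothetical, and the chapter’s closed-form solution and exact parity analysis are not reproduced.

```python
def discrete_square_law(x0, y0, a, b, dt=0.1, max_steps=100_000):
    """x loses b*y*dt per period, y loses a*x*dt per period (square-law attrition);
    returns the trajectory until one side is annihilated or the step cap is hit."""
    x, y, hist = x0, y0, [(x0, y0)]
    while x > 0 and y > 0 and len(hist) <= max_steps:
        x, y = x - b * y * dt, y - a * x * dt      # simultaneous difference-equation update
        hist.append((max(x, 0.0), max(y, 0.0)))
    return hist

x0, y0, a, b = 1000.0, 800.0, 0.05, 0.08           # hypothetical forces and kill rates
hist = discrete_square_law(x0, y0, a, b)
winner = "X" if a * x0**2 > b * y0**2 else "Y"      # square-law fighting strengths
print(f"periods fought: {len(hist) - 1}, winner predicted by a*x0^2 vs b*y0^2: {winner}")
```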
In Chapter 7, Xugang Ye, Shih-Ping Han, and Anhua Lin propose “A Graph-Search Based Navigation Algorithm for Traversing a Potentially Hazardous Area with Disambiguation.” The authors do excellent work in explaining the A* algorithm from the primal-dual point of view. For a long time, as a fundamental search method in the field of Artificial Intelligence (AI), the A* algorithm has been viewed as a generalization of Dijkstra’s algorithm through consistent heuristic function(s). Hence the efficiency of the A* algorithm can be explained by how many nodes it expands upon termination under a consistent heuristic function. Dijkstra’s algorithm can be interpreted as “uniform” search, while the A* algorithm, equipped with a consistent heuristic function, is interpreted as “informed,” or “biased,” search. Xugang Ye et al. found another way to explain the A* algorithm: this is the first time that the connection between the A* algorithm and the primal-dual algorithm for linear programming has been disclosed. It was found that the A* algorithm is in fact the primal-dual algorithm that starts from a special initial feasible solution to the dual form of the linear programming model of the shortest path problem.
The initial solution is as simple as the negative of the heuristic function used in A* search. When the starting initial solution to the dual is zero, which is also feasible, the primal-dual algorithm becomes Dijkstra’s algorithm. From this perspective, the efficiency of the A* algorithm can be easily explained by how good the initial feasible solution to the dual is. A better initial feasible solution to the dual means fewer steps in the primal-dual iterations; it also corresponds to a search that is more targeted toward the goal. Before this discovery, it was only known that Dijkstra’s algorithm is actually a primal-dual algorithm. This discovery shows that A* is also a primal-dual algorithm and that its performance is thus connected to the duality theory of linear programming. Undoubtedly, the authors of this chapter add another valuable insight into the mathematical programming foundation of the widely used A* search algorithm, which is more often cited in the artificial intelligence community.
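One standard way to state the connection the authors describe is through the linear programming dual of the shortest path problem; the sketch below, written for a consistent heuristic h with h(t) = 0, is a textbook-style illustration rather than the chapter’s full derivation.

```latex
% Shortest s--t path LP and its dual
\[
\min_{x \ge 0} \sum_{(i,j)\in A} c_{ij}\, x_{ij} \ \text{ s.t. flow conservation}
\qquad\Longleftrightarrow\qquad
\max_{\pi}\ \pi_t - \pi_s \ \text{ s.t. } \pi_j - \pi_i \le c_{ij}\ \ \forall (i,j)\in A .
\]
% A consistent heuristic satisfies h(i) <= c_{ij} + h(j) and h(t) = 0, so pi_i := -h(i)
% is dual feasible: pi_j - pi_i = h(i) - h(j) <= c_{ij}. Starting the primal--dual method
% from this pi recovers A*; starting from pi = 0 recovers Dijkstra's algorithm.
```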

SECTION 4: NONLINEAR COMPLEMENTARITY PROBLEM

Section 4 has two chapters. In Chapter 8, Ban, Ferris, and Liu focus on “Numerical Studies on Reformulation Techniques for Continuous Network Design with Asymmetric User Equilibria.” The Continuous Network Design Problem (CNDP) in transportation aims to find the optimal capacity enhancement for a selected set of links in a traffic network, under certain budget constraints, so as to minimize congestion. CNDP is usually formulated as a bi-level model: the upper level optimizes the objective of the system manager, while the lower level accounts for the choice behavior (such as route choice) of individual drivers. Because of its bi-level structure, solving large-scale CNDPs is a long-standing challenge in transportation. In this chapter, the authors study the most effective reformulation techniques for solving the MPCC (Mathematical Program with Complementarity Constraints) model that they recently proposed for CNDP under asymmetric user equilibria. The MPCC model is structured as a link-node based nonlinear complementarity formulation for asymmetric user equilibria. By applying various reformulation techniques to the lower-level nonlinear complementarity conditions, the original bi-level formulation can be converted to a single-level nonlinear programming problem. The authors show that certain reformulations are more effective than others for solving the proposed MPCC model. Recommendations are thus provided on how to choose a reformulation of the continuous traffic network design problem that can be solved effectively and/or efficiently.
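Reformulation techniques of this general kind replace the lower-level equilibrium, written as a nonlinear complementarity problem, with smooth single-level constraints. The sketch below shows two standard devices, a relaxation and an NCP function; these are generic textbook examples and not necessarily the exact set compared in the chapter.

```latex
% Lower-level user equilibrium as an NCP in the flow variables x, given the design y:
\[
0 \le x \ \perp\ F(x,y) \ge 0
\quad\Longleftrightarrow\quad
x \ge 0,\ \ F(x,y) \ge 0,\ \ x^{\top} F(x,y) = 0 .
\]
% (i) Relaxation: keep x >= 0 and F >= 0, replace complementarity by x_i F_i(x,y) <= eps,
%     and drive eps -> 0 over a sequence of nonlinear programs.
% (ii) NCP function: require phi(x_i, F_i(x,y)) = 0 using the Fischer--Burmeister function
\[
\phi(a,b) = \sqrt{a^2 + b^2} - a - b, \qquad
\phi(a,b) = 0 \iff a \ge 0,\ b \ge 0,\ ab = 0 .
\]
```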
Based on the findings in this chapter, the authors applied similar MPCC formulations to study other important transportation applications such as congestion pricing and dynamic congestion pricing. Issues such as the nonuniqueness of the lower-level user equilibrium solution and its impact on congestion pricing design were also studied by the authors. Results of those investigations have been published in refereed journals (such as Networks and Spatial Economics) and edited books (such as Transportation and Traffic Theory). Recently the authors have also worked on transportation network modeling problems, especially dynamic network modeling problems. In particular, they modeled and solved continuous-time dynamic user equilibria using a recently developed mathematical paradigm called the Dynamic Variational Inequality (DVI). These results will appear in Transportation Research, Part B.
In Chapter 9, Ying Zhou and Lizhi Wang propose “A New Hybrid Inexact Logarithmic-Quadratic Proximal Method for Nonlinear Complementarity Problems.” Each iteration of the new method consists of a prediction and a correction step. The predictor is produced using an inexact Logarithmic-Quadratic Proximal method, which is then corrected by the Proximal Point Algorithm. The new iterate is obtained by combining the predictor and correction points at each iteration. The authors prove the convergence of the new method under the mild assumptions that the function involved is continuous and monotone. A comparison to another existing method with numerical experiments on classical NCP instances demonstrates its superiority.
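For readers unfamiliar with prediction-correction schemes for NCPs, the sketch below shows the simplest such method, the projected extragradient iteration, on a small monotone linear complementarity instance. It is a generic illustration of the prediction-correction idea, not the authors’ logarithmic-quadratic proximal method, and the step size and data are assumptions.

```python
import numpy as np

def extragradient_ncp(F, x0, step=0.1, tol=1e-8, max_iter=10_000):
    """Prediction-correction (extragradient) method for the NCP 0 <= x  _|_  F(x) >= 0,
    assuming F is monotone and Lipschitz and the step size is small enough."""
    x = np.maximum(x0, 0.0)
    for _ in range(max_iter):
        y = np.maximum(x - step * F(x), 0.0)       # prediction: projected gradient step
        x_new = np.maximum(x - step * F(y), 0.0)   # correction: reuse F at the predictor
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Monotone linear example F(x) = M x + q with positive semidefinite M (hypothetical data)
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
sol = extragradient_ncp(lambda x: M @ x + q, np.zeros(2))
print(sol, M @ sol + q)    # solution and residual should satisfy complementarity
```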

SECTION 5: OPTIMIZATION

Section 5 deals with optimization. In Chapter 10, Ye, Han, and Lin do excellent work in explaining the A* algorithm from the primal-dual point of view. For a long time, as a fundamental search method in the field of Artificial Intelligence, the A* algorithm has been viewed as a generalization of Dijkstra’s algorithm through consistent heuristic function(s). Hence the efficiency of the A* algorithm can be explained by how many nodes it expands upon termination under a consistent heuristic function. Dijkstra’s algorithm can be interpreted as a “uniform” search, while the A* algorithm, equipped with a consistent heuristic function, is interpreted as an “informed,” or “biased,” search. The authors of this chapter found another way to explain the A* algorithm. It is the first time that the connection between the A* algorithm and the primal-dual algorithm for linear programming has been disclosed.
It was found that the A* algorithm is in fact the primal-dual algorithm that starts from a special initial feasible solution to the dual form of the linear programming model of the shortest path problem. The initial solution is as simple as the negative of the heuristic function used in an A* search. When the starting initial solution to the dual is zero, which is also feasible, the primal-dual algorithm becomes Dijkstra’s algorithm. From this perspective, the efficiency of the A* algorithm depends on how good the initial feasible solution to the dual is. A better initial feasible solution to the dual means fewer steps in the primal-dual iterations; it also corresponds to a search that is more targeted toward the goal. Before this discovery, it was only known that Dijkstra’s algorithm is actually a primal-dual algorithm. This discovery reveals that A* is also a primal-dual algorithm and that its performance is thus connected to the duality theory of linear programming. Undoubtedly, the authors of this chapter add another valuable insight into the mathematical programming foundation of the widely used A* search algorithm, which is more often cited in the artificial intelligence community.
Chapter 11, by Montoya-Torres, Gómez-Vizcaíno, Solano-Charris, and Paternina-Arboleda, considers the task scheduling problem in a job-shop configuration. Two objective functions are considered independently: the minimization of the makespan, that is, the maximum job completion time, and the minimization of the total tardiness. Both problems are known to be NP-hard, which means that optimal solutions cannot, in general, be found in a reasonable amount of time for large instances. Thanks to advances in computing performance, the scientific literature has seen the proposal of several efficient heuristic and meta-heuristic algorithms to solve the problem. The authors propose to solve both problems by implementing a novel meta-heuristic procedure inspired by bacterial phototaxis, called Global Bacteria Optimization (GBO).
As with many other nature-inspired meta-heuristics, this procedure is based on the behavior of a colony of individuals (the bacteria) and their reaction to some stimulus. In the case of the GBO meta-heuristic, the individuals are the bacteria and the stimulus is light; GBO thus emulates the reaction of bacteria to light stimulation. It should be highlighted that this is the first time that this kind of natural behavior has been modeled to solve combinatorial optimization problems. Some preliminary approaches in the literature emulated the reaction of bacteria to chemical stimuli (a process called chemotaxis), but those works were very theoretical, validation was done using mathematical equations, and no practical implementation on realistic industrial problems was carried out. Indeed, GBO is the first work that proposes the emulation of bacterial behavior to solve complex optimization problems. The chapter presents the results of computational experiments carried out using well-known data sets taken from the literature and available online at the OR-Library website. Reported results show that the GBO algorithm equals and even outperforms previous state-of-the-art procedures for both job-shop scheduling problems considered, in terms of both solution quality and execution time. The procedure can be easily adapted to solve other complex optimization problems such as flowshop or vehicle routing problems.
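Any meta-heuristic for these problems, GBO included, needs a decoding step that turns a candidate solution into a schedule and evaluates the objective. The sketch below builds a semi-active schedule from an operation sequence and returns its makespan; the two-job instance is a made-up example rather than an OR-Library benchmark, and the GBO search itself is not reproduced.

```python
def makespan(sequence, proc):
    """sequence: list of job ids, each appearance meaning that job's next operation.
    proc[job] = list of (machine, processing_time) in technological order."""
    job_ready = {j: 0.0 for j in proc}    # earliest start for each job's next operation
    mach_ready = {}                       # earliest free time of each machine
    next_op = {j: 0 for j in proc}
    end = 0.0
    for j in sequence:
        machine, p = proc[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(machine, 0.0))
        finish = start + p
        job_ready[j], mach_ready[machine] = finish, finish
        next_op[j] += 1
        end = max(end, finish)
    return end

proc = {"J1": [("M1", 3), ("M2", 2)], "J2": [("M2", 2), ("M1", 4)]}
print(makespan(["J1", "J2", "J1", "J2"], proc))   # evaluates one candidate sequence
```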

SECTION 6: REVENUE MANAGEMENT

Section 6 handles revenue management, or smart pricing. Dynamic pricing, a tactic of yield management, is a set of pricing strategies aimed at maximizing profits. It is widely used in many industries for pricing perishable products, such as airline seats, hotel rooms, rental cars, and seasonal fashion goods. In Chapter 12, “Dynamic Pricing Model for Substitutable Products,” Ue-Pyng Wen and Yi Chen propose a dynamic pricing model for selling a given stock of identical, substitutable perishable products to customers who arrive according to a continuous-time point process. The seller can switch between price policies if necessary in order to maximize the expected total revenue. The authors also analyze the efficient price strategy according to the marginal expected revenue function. Their proposed model allows reversible changes in price and does not restrict the demand arrival process. The main assumptions in this model reflect common practice in industry: demand is price sensitive and follows a homogeneous Poisson process, the sales horizon is a finite period, and the available price set is predetermined.
According to the proposed model, all optimal time thresholds can be calculated. Furthermore, the optimal price policy can be determined according to the length of the remaining sales horizon, the unsold inventory level, and customer choice behavior. Combining the illustrated examples, the authors summarize the results as follows. First, they construct the time thresholds over the whole sales horizon, and the results demonstrate that some inefficient price policies should be eliminated even though they are available in the predetermined price set. In addition, from the abatement of the marginal expected revenue, the proposed model can determine the optimal inventory level when the marginal service cost is constant. Finally, they show that the revenue impact is significant by comparing the dynamic pricing model with a fixed-price model. Compared with other continuous-time yield management approaches, the authors solve the continuous problem directly instead of using dynamic programming; consequently, they obtain an exact solution that is easy to compute. The proposed model is applicable to flight seats in different cabins. Moreover, it can fairly easily be extended to other perishable products such as hotel rooms with different service packages and ball game tickets.
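The flavor of the setting can be conveyed with a small Monte Carlo sketch: Poisson arrivals, a predetermined two-price ladder, and a single switching time whose effect on expected revenue is estimated by simulation. The prices, rates, and willingness-to-pay model are hypothetical, and the chapter’s exact threshold analysis is not reproduced.

```python
import random

def simulate_revenue(prices, switch_time, horizon, stock, lam, wtp_mean, runs=2000, seed=7):
    """Charge prices[0] until switch_time, then prices[1]; customers arrive as a Poisson
    process with rate lam and buy if their willingness to pay (exponential, mean wtp_mean)
    exceeds the posted price. Returns the average revenue over the Monte Carlo runs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        t, left, revenue = 0.0, stock, 0.0
        while left > 0:
            t += rng.expovariate(lam)              # next arrival
            if t > horizon:
                break                              # sales horizon is over
            price = prices[0] if t < switch_time else prices[1]
            if rng.expovariate(1.0 / wtp_mean) >= price:   # customer accepts the price
                revenue += price
                left -= 1
        total += revenue
    return total / runs

# Compare a few switching times for the same price ladder (all values hypothetical)
for tau in (2.0, 5.0, 8.0):
    print(tau, round(simulate_revenue((120.0, 80.0), tau, 10.0, 20, 3.0, 100.0), 1))
```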
“Optimizing Cash Management for Large Scale Bank Operations” is the title of Chapter 13. Frost, Kennington, and Madhavan describe a very successful application of mathematical programming and advanced analytics to an important logistics problem faced by large national banks. All the national US banks own and manage distribution centers where cash is stored. These distribution centers, commonly referred to as vaults, have equipment for inspecting, counting, sorting, and packaging cash for distribution. Inspection may involve checking for counterfeit bills as well as classifying currency as fit or unfit for distribution. Fit bills can be distributed to branches and ATMs, while unfit currency is returned to the regional Federal Reserve Bank for destruction and replacement. Currency is packaged into bricks, and each brick contains 1000 bills. When banks prepare shipments of currency with a destination of either a sister vault or the regional Federal Reserve Bank, the basic unit of measure is a brick. At the destination, cash generally arrives on pallets containing hundreds of bricks worth millions of dollars. Since the cargo is so valuable, the transportation costs can be very high. With the aid of a very comprehensive mathematical model and state-of-the-art optimization software, the research team has shown how to minimize this expensive component of a bank’s operations.
For this logistics environment, all shipments are placed on armored carriers. The cost of a shipment is partitioned into two components, a fixed cost and a variable cost. The authors use binary (0,1) variables to capture the fixed cost and continuous variables to model the flow of currency in a time-space network. Nodes represent vaults on a given day, and arcs represent shipments between nodes. Some shipments between nearby vaults can occur on the same day, some require that the arrival occur one day after the departure, while others require a two-day transfer. The cash management logistics problem has been modeled with a multi-commodity fixed-charge network structure using IBM’s OPL modeling language. Solutions are obtained using IBM’s CPLEX mixed-integer linear programming solver.
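The modeling idea, binary variables for the fixed armored-carrier cost and continuous variables for brick flows with balance constraints at each vault node, can be sketched on a toy instance as follows. This illustration uses the open-source PuLP library rather than the authors’ OPL/CPLEX implementation, collapses the time-space network to a single day, and all costs, supplies, and capacities are hypothetical.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

vaults = ["V1", "V2", "FED"]
supply = {"V1": 120, "V2": -40, "FED": -80}         # bricks to push out (+) or absorb (-)
arcs = {("V1", "V2"): (500.0, 1.2), ("V1", "FED"): (800.0, 0.9),
        ("V2", "FED"): (300.0, 1.1)}                # (fixed cost, cost per brick)
capacity = 200                                      # bricks per armored shipment (assumed)

m = LpProblem("cash_logistics_sketch", LpMinimize)
x = {a: LpVariable(f"flow_{a[0]}_{a[1]}", lowBound=0) for a in arcs}      # bricks shipped
y = {a: LpVariable(f"use_{a[0]}_{a[1]}", cat=LpBinary) for a in arcs}     # carrier used?

m += lpSum(arcs[a][0] * y[a] + arcs[a][1] * x[a] for a in arcs)           # fixed + variable
for v in vaults:                                                          # flow balance
    m += (lpSum(x[a] for a in arcs if a[0] == v)
          - lpSum(x[a] for a in arcs if a[1] == v)) == supply[v]
for a in arcs:                                                            # open arc to ship
    m += x[a] <= capacity * y[a]

m.solve()
for a in arcs:
    print(a, value(x[a]), value(y[a]))
```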
The main challenges for this successful implementation were twofold. First, it required a fair amount of creativity to model the numerous business rules these national banks follow when interacting with their regional Federal Reserve Bank. Second, extensive testing was needed to tune CPLEX and the model parameters so that solution times were within an acceptable range. The authors demonstrate how to achieve success on both counts.

SECTION 7: SIMULATION

Section 7 concentrates on simulation. In Chapter 14, Celik, Mazhari, Canby, Kazemi, Sarfare, Al-Otaibi, and Son present “Automatic Partitioning of Large Scale Simulation in Grid Computing for Run Time Reduction.” Modern society depends upon many interacting large-scale dynamic systems, such as manufacturing, supply chain, defense, transportation, homeland security, and healthcare. Although some of their constituent systems are amenable to analytical analysis, the ensembles are not, because of scale, randomness, and complex dynamic response. Simulation is a primary modeling tool for analyzing the behavior of such large systems to support their design as well as their operations. However, simulating such large-scale systems demands substantial computational power and lengthy execution times. To reduce the execution time of such large-scale simulations without sacrificing accuracy, the authors propose an innovative approach that automatically partitions a monolithic model into multiple pieces and executes them in a distributed computing environment. While this partitioning allows the required computation to be distributed across multiple computers, it creates the new challenge of synchronizing the partitioned models. To handle both the computational requirements and the synchronization issues in a balanced manner, the authors developed a creative partitioning methodology based on a modified Prim’s algorithm.
The authors strove to minimize the overall simulation execution time, considering 1) the internal computation in each of the partitioned models and 2) the time synchronization between them. In addition, the authors propose a method to find the most advantageous number of partitions of the monolithic model by evaluating the trade-off between reduced computation and increased time-synchronization requirements. In the proposed work, the synchronization of the logical times of the created partitions is enabled using an epoch synchronization method, where appropriate time intervals are determined based on off-line simulation analyses. The authors employ a computational grid framework for executing the partitioned simulations. The partitioning was performed for various scenarios, and the authors compared its performance against that of models built by another partitioning method from the literature. The authors successfully demonstrate that the models partitioned by their proposed methodology outperform the models built by the reference method in all cases. They also demonstrate that, compared with the monolithic simulation approach, the proposed approach reduces simulation execution time significantly while maintaining accuracy as the inter-server computational intensity increases. In a real-world setting, where inter-server computation is more intense than what is considered in this study due to highly congested material handling activities, the authors’ contributions to reducing simulation execution time are expected to be even more significant.
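A simple spanning-tree partitioning heuristic in the same spirit is sketched below: build a maximum spanning tree with Prim’s algorithm over pairwise interaction weights and cut its weakest edges, so that tightly coupled submodels stay on the same partition. This is a generic illustration, not the authors’ modified Prim’s method or its computation/synchronization balancing criteria, and the example graph is hypothetical.

```python
import heapq

def max_spanning_tree(nodes, weight):
    """Maximum spanning tree via Prim's algorithm; weight[(u, v)] is the estimated
    synchronization traffic between two candidate submodels (0 if they never interact)."""
    def w(u, v):
        return weight.get((u, v), weight.get((v, u), 0.0))
    start, visited, tree = nodes[0], {nodes[0]}, []
    heap = [(-w(start, v), start, v) for v in nodes if v != start]
    heapq.heapify(heap)
    while heap and len(visited) < len(nodes):
        neg, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        tree.append((-neg, u, v))
        for nxt in nodes:
            if nxt not in visited:
                heapq.heappush(heap, (-w(v, nxt), v, nxt))
    return tree

def partition(nodes, weight, k):
    """Cut the k-1 lowest-traffic tree edges so tightly coupled submodels stay together."""
    kept = sorted(max_spanning_tree(nodes, weight))[k - 1:]    # drop k-1 weakest edges
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n
    for _, u, v in kept:
        parent[find(u)] = find(v)
    groups = {}
    for n in nodes:
        groups.setdefault(find(n), []).append(n)
    return list(groups.values())

# Hypothetical submodels of a factory simulation and their interaction intensities
traffic = {("etch", "litho"): 9.0, ("litho", "inspect"): 7.0,
           ("inspect", "pack"): 2.0, ("etch", "pack"): 1.0}
print(partition(["etch", "litho", "inspect", "pack"], traffic, 2))
```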
In Chapter 15, Alan W. Johnson, Theodore Heiman, Martha Cooper, and Raymond R. Hill simulate “Assessing Transport Aircraft Inspection Strategies.” According to the authors, complex aircraft require periodic maintenance checks to assess needed repairs for continued vehicle availability. However, such checks are expensive, and the associated aircraft downtime can reduce fleet mission effectiveness. The United States Air Force plans to consolidate the time-based (isochronal) C-5 aircraft major inspection activities from eight C-5 home stations into three locations. Isochronal inspections rely on a calendar method to schedule inspections and disregard the actual flying hours between inspections. By having the same personnel perform these inspections for all flying units and by adopting commercial aircraft condition-based inspection strategies, the Air Force hopes to gain efficiencies in performing these inspections. Conversely, the site phase-out schedule and the reduced number of inspection locations raise questions about whether overall C-5 mission capability will be reduced. These proposed revisions were simulated in a designed experiment to assess the impacts on fleet availability and inspection site workload.

SECTION 8: STOCHASTIC MODELS

Section 8 focuses on stochastic models. In Chapter 16, Yew Sing Lee discusses “Analysis of State Dependent Vacation Queues with Threshold Gated Service Policy.” Queuing models with server vacations have found applications in many areas such as telecommunications, production/inventory systems, and service networks. Server vacations are useful for modeling systems in which the server uses its idle time for other purposes. The server setup time corresponds to the preparatory work of the server before starting service. The traditional approach for analyzing vacation queues is to use transform techniques and busy-period analysis to obtain the waiting time distribution. This turns out to be difficult for non-exhaustive service policies, because traditional busy-period analysis does not apply. Non-exhaustive service policies are useful for modeling situations where the server needs to be retooled after serving each batch of customers. Another example of a non-exhaustive policy can be found in the tourism industry, where guided tours operate only if there are enough subscribers. Customers book the guided tour with the understanding that the tour may be cancelled if there are not enough subscribers. Once it is determined that there are enough subscribers, the dispatcher sends a vehicle (e.g., a bus or van), chosen according to the number of subscribers, to the meeting place at the scheduled time. The time it takes the vehicle to travel to the meeting place may be considered the server setup time. Customers who arrive during this period are informed that this particular guided tour has been filled and are routed to the next available guided tour.
Motivated by these observations, Yew Sing Lee introduces a simple approach for modeling and analyzing a state-dependent vacation queuing model with linear quadratic arrival processes and non-exhaustive, threshold-triggered service policies. The author shows that his model encompasses a variety of examples. The main result of this chapter is a closed-form expression for a key performance measure, the average customer waiting time. For the special case of a Poisson arrival process, general vacation times, zero setup time, and a gated service policy, the analysis provides an alternative way to obtain a previously known result. The results are useful to anyone interested in managing queuing systems with unreliable or removable servers. For example, the result obtained in this chapter allows one to analyze the trade-off between the cost of operating the system and the service level provided. One can also perform sensitivity analyses of the impact of changing certain factors (e.g., the mean setup, vacation, or service time) on the average customer waiting time.
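As a point of reference for readers new to vacation models, the short calculation below evaluates the classical waiting-time decomposition for an M/G/1 queue with multiple vacations and exhaustive service. This is a well-known benchmark formula, not the chapter’s state-dependent, threshold-gated model, and the numbers are hypothetical.

```python
# Classical M/G/1 multiple-vacation decomposition (exhaustive service):
#   E[W] = lam * E[S^2] / (2 * (1 - rho)) + E[V^2] / (2 * E[V])
lam = 0.6                     # arrival rate (customers per minute)
ES, ES2 = 1.0, 2.0            # service time: mean and second moment (exponential, mean 1)
EV, EV2 = 2.0, 8.0            # vacation time: mean and second moment (exponential, mean 2)

rho = lam * ES                                         # server utilization
EW = lam * ES2 / (2 * (1 - rho)) + EV2 / (2 * EV)      # M/G/1 delay + vacation penalty
print(f"utilization = {rho:.2f}, mean wait = {EW:.2f} minutes")
```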
In Chapter 17, Mohammed Hajeeh evaluates “Performance of Two-Component Systems with Imperfect Repair.” The performance of an operational system weakens over time due to age and deterioration; thus the system becomes unable to perform its intended function and eventually fails. System failure is attributed to factors such as poor quality, improper design, ineffective maintenance procedures, inefficient operations, and environmental causes. Failed components are either replaced or repaired. Non-repairable components are usually inexpensive and often more cost-effective to replace, while expensive components should be repaired if they are not very critical to the functionality of the system. There are many types of repair: perfect repair, where the failed system becomes as good as new after each repair; minimal repair, which brings the system back to its status just before failure; and imperfect repair, where the repaired system becomes inferior after each repair, the time between successive failures decreases over time, and the time between successive repairs increases. Hajeeh examines the behavior of repairable systems consisting of two components undergoing imperfect repair. Three models of a two-component system of varying degrees of complexity are examined: the series, parallel, and cold standby configurations.
A closed-form analytical expression for the steady-state operational probability was developed for each configuration. These expressions were derived assuming identical and independent components, exponentially distributed times between failures and repair times, a single repairman, negligible time for putting a standby component into operation, and zero travel time between the working and repair stations. To derive these expressions, flow equations were obtained for each state of the system, and the resulting differential equations were solved simultaneously to obtain the operational probability of each state and of the system as a whole. For illustration, an industrial air-conditioning system that uses two chillers was presented as an example; the objective was to find the best arrangement of the two chillers in order to maximize the system’s availability. It was found that all configurations perform very well when the components are new. However, as the number of repairs increases, the performance weakens for all configurations, but at different rates: the rate of performance decline is least for the standby configuration and steepest for the series configuration. The author suggests that future studies should examine systems with multiple, non-identical components, taking into consideration not only the system’s availability but also the different costs incurred by the various configurations.
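For orientation, the following sketch computes steady-state probabilities for the classical machine-repair Markov chain with two identical, simultaneously operating components, exponential failure and repair, and a single repairman, assuming repairs restore components to as-good-as-new. The chapter’s imperfect-repair models go further by letting repair quality degrade after every repair, and the rates below are hypothetical.

```python
lam, mu = 0.02, 0.5          # hypothetical failure and repair rates (per hour)
rho = lam / mu

# Birth-death chain on n = number of failed components:
#   0 -> 1 at 2*lam, 1 -> 2 at lam, repairs 1 -> 0 and 2 -> 1 at mu (one repairman).
p0 = 1.0 / (1.0 + 2 * rho + 2 * rho**2)   # both components up
p1 = 2 * rho * p0                          # exactly one failed
p2 = 2 * rho**2 * p0                       # both failed

print("parallel availability (at least one up):", p0 + p1)
print("both-up probability (a series-type requirement,",
      "if components keep operating while the system is down):", p0)
```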

SECTION 9: STOCHASTIC OPTIMIZATION

The last section analyzes stochastic optimization. In Chapter 18, Lizhi Wang and Nan Kong present “Security Constrained Economic Dispatch: A Markov Decision Process Approach with Embedded Stochastic Programming.” According to the authors, the main objective of electric power dispatch is to provide electricity to customers at low cost and high reliability. Transmission line failures constitute a great threat to electric power system security. The authors use a Markov Decision Process (MDP) approach to model the sequential dispatch decision-making process, in which the demand level and transmission line availability change from hour to hour. The action space is defined by the electricity network constraints. The main risk to the power system is the loss of transmission lines, which can cause involuntary load shedding or cascading failures. The objective of the model is to minimize the expected long-term discounted cost (including generation, load shedding, and cascading failure costs). Policy iteration can be used to solve this model; at the policy improvement step, a stochastic mixed-integer linear program is solved to obtain the optimal action. The authors use a PJM network example to demonstrate the effectiveness of their approach.
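For readers unfamiliar with the solution machinery, the sketch below runs plain policy iteration on a tiny two-state, two-action cost MDP. The transition matrices and costs are hypothetical stand-ins for (demand, line-availability) states and dispatch actions; the chapter’s embedded stochastic mixed-integer program at the improvement step is not reproduced, and the greedy step here is a simple enumeration.

```python
import numpy as np

P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),    # P[a][s, s']: transitions under action a
     1: np.array([[0.7, 0.3], [0.5, 0.5]])}
C = {0: np.array([3.0, 10.0]),                 # C[a][s]: expected one-step cost
     1: np.array([5.0, 6.0])}
gamma, n_states, actions = 0.95, 2, [0, 1]

policy = np.zeros(n_states, dtype=int)
while True:
    # Policy evaluation: solve (I - gamma * P_pi) V = C_pi for the current policy
    P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
    C_pi = np.array([C[policy[s]][s] for s in range(n_states)])
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, C_pi)
    # Policy improvement: pick the greedy (cost-minimizing) action in each state
    new_policy = np.array([min(actions, key=lambda a: C[a][s] + gamma * P[a][s] @ V)
                           for s in range(n_states)])
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("optimal policy:", policy, "value:", V)
```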
In Chapter 19, Lijian Chen and Dustin J. Banet establish a Bernstein polynomial-based approximation scheme for stochastic programming, including chance-constrained optimization, two-stage stochastic programming with recourse, and stochastic dominance constraints. Various uncertainties play critical roles, and decision-making processes under uncertainty demand timely solution techniques on average computational facilities. Applications of these stochastic programs include financial planning, logistics, healthcare, traffic management control, and portfolio management. In particular, reliability and service quality are key issues in models arising in health care, power planning, supply chains, and many other industries. For example, a hospital needs to ensure that 95% or more of its patients see a doctor within a short amount of time; power planners require power transmission systems to function reliably with almost zero chance of blackout; suppliers need sufficient network capacity to ensure the successful transportation of goods with a probability of over 90%. One way to model such requirements is to impose chance constraints; the resulting models are usually called chance-constrained, or probabilistic, optimization.
For chance-constrained optimization, the authors assume that the chance constraint is imposed on affine constraints with the random vector on the right-hand side. The joint distribution of the random vector needs to be continuous and logarithmically concave to preserve the convexity of the feasible set. The chance constraint can then be equivalently rewritten as a convex function, and the authors propose using Bernstein polynomials to approximate it and to evaluate its gradient, facilitating the application of first-order gradient methods. Since the complexity of evaluating the gradient is polynomial and the problem is convex, the proposed approach is a weakly polynomial algorithm at any given accuracy. The authors also address two other important issues. First, the approximation error can be well controlled with only a reasonably low polynomial degree by employing Chebyshev nodes. Second, the initial point should be selected by a sampling method to pursue faster convergence. Most importantly, epigraph convergence ensures the quality of the solution to the original problem with average computational facilities.
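The basic approximation device is easy to illustrate: the degree-n Bernstein polynomial of a function f on [0, 1], with an error check at Chebyshev-style nodes. The target function below is an arbitrary smooth concave stand-in, not the chapter’s reformulated chance constraint.

```python
import math

def bernstein_approx(f, n):
    """Return the degree-n Bernstein polynomial of f on [0, 1]:
    B_n(f)(x) = sum_k f(k/n) * C(n, k) * x**k * (1 - x)**(n - k)."""
    coeffs = [f(k / n) for k in range(n + 1)]
    def B(x):
        return sum(c * math.comb(n, k) * x**k * (1 - x)**(n - k)
                   for k, c in enumerate(coeffs))
    return B

def f(z):                       # smooth, concave stand-in for the constraint function
    return math.log(1.0 + z)

B10 = bernstein_approx(f, 10)
# Check the approximation error at Chebyshev-style nodes mapped to [0, 1]
nodes = [0.5 * (1 + math.cos((2 * j + 1) * math.pi / 20)) for j in range(10)]
print("max error at nodes:", max(abs(f(x) - B10(x)) for x in nodes))
```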