Reducing Risk through Segmentation, Permutations, Time and Space Exposure, Inverse States, and Separation

Michael Todinov (Department of Mechanical Engineering and Mathematical Sciences, Oxford Brookes University, Oxford, UK)
Copyright: © 2015 |Pages: 21
DOI: 10.4018/IJRCM.2015070101


The paper presents a number of new generic principles for reducing technical risk, with a very wide application area. Permutations of interchangeable components/operations in a system can significantly reduce the risk of system failure at no extra cost. Reducing the time of exposure and the space of exposure can also reduce risk significantly. Technical risk can be reduced effectively by introducing inverse states that counter negative effects during service. Applying this principle in logistic supply networks leads to a significant reduction in the risk of congestion and delays; the associated reduction in transportation costs and environmental pollution has the potential to save the world economy billions of dollars. Separation is a risk-reduction principle that is very efficient for separating functions to be carried out by different components and for blocking out a common cause. Segmentation is a generic risk-reduction principle that is particularly efficient in improving the load distribution and in reducing vulnerability to a single failure, the hazard potential and damage escalation.
Article Preview


Despite the critical importance of distilling generic principles related to reducing technical risk, very little has been published on this topic. For a long time, the principles of technical risk reduction have been exclusively focused within specific industries, technologies or operations, for example: oil and gas industry, nuclear industry, aviation, construction, medicine, banking, welding, heat treatment, machining, casting, forging, transportation, handling poisonous substances, handling heavy loads, etc.

These risk reduction principles tend to be oriented towards avoiding or mitigating particular failure modes in the specific application area and usually have no general validity. A similar feature characterises even principles of technical risk reduction related to commonly occurring failure modes across various engineering disciplines, for example, principles related to fatigue and fast fracture of engineering components (Ewalds & Wanhill, 1984; Hertzberg, 1996; Zahavi & Torbilo, 1996; Anderson, 2005).

Reliability engineering usually focuses on predicting the reliability of components and systems and does not normally discuss principles for reliability improvement. Improving reliability by active and standby 'redundancy', by strengthening the weakest link, by developing physics-of-failure models, by eliminating a common cause and by reducing variability, for example, are generic risk reduction principles that have been covered well in the reliability literature (Barlow & Proschan, 1975; Ebeling, 1997; O'Connor, 2003; Lewis, 1996; Todinov, 2007). There exists a rather simplistic view among some reliability practitioners that improving the reliability of a system involves either improving the reliability of the components or providing redundancy. Equally simplistic is the view that only developing physics-of-failure models can deliver reliability improvement. This view has been fuelled by the failure of some reliability models to correctly predict the life of engineering components. A possible contributing reason is the widespread erroneous view that the quality and utility of reliability models depend exclusively on the availability of failure data. Comparative statistical models based on assumed input data, however, can deliver real reliability improvement in the absence of any failure data. For example, in comparing the performance of competing network topologies and selecting the topology with the best performance, a comparative method could proceed by: (i) assuming common flow capacities, failure frequencies and repair times for the corresponding components/edges of the compared networks; (ii) determining the performance of the competing networks by using an appropriate software tool; and finally (iii) selecting the best-performing topology.
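The comparative procedure above can be sketched as a minimal Monte Carlo simulation. This is an illustrative stand-in for a dedicated network-performance tool: the two topologies, the edge names and the common failure probability are invented for the example, and "performance" is simplified to source-to-sink survival probability.

```python
import random

# Each topology is a list of source-to-sink paths; a path is a tuple of
# edge identifiers. The network delivers flow if at least one path has
# all of its edges working.
TOPOLOGY_A = [("e1", "e2")]                    # single two-edge path
TOPOLOGY_B = [("e1", "e2"), ("e3", "e4")]      # two independent paths

def survival_probability(paths, p_edge_fail, trials=100_000, seed=42):
    """Monte Carlo estimate of the probability that the network keeps a
    working source-to-sink path, assuming the SAME failure probability
    for every edge (the common assumed input data of step (i))."""
    rng = random.Random(seed)
    edges = {e for path in paths for e in path}
    hits = 0
    for _ in range(trials):
        failed = {e for e in edges if rng.random() < p_edge_fail}
        if any(all(e not in failed for e in path) for path in paths):
            hits += 1
    return hits / trials

# Steps (ii) and (iii): evaluate both topologies and pick the better one.
p_a = survival_probability(TOPOLOGY_A, p_edge_fail=0.1)
p_b = survival_probability(TOPOLOGY_B, p_edge_fail=0.1)
best = "B" if p_b > p_a else "A"
```

Because the same failure probability is assumed for corresponding edges, no real failure data is needed to rank the topologies: the redundant topology B survives with probability about 0.96 versus about 0.81 for A, so B is selected.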

These extreme views demonstrate unnecessary self-imposed constraints. Increasing reliability can be achieved by using principles that range widely, from pure statistical modelling to pure physics-of-failure modelling of the mechanisms underpinning reliable operation and failure.

The risk literature (Vose, 2002; Aven, 2003; Bedford & Cooke) is oriented towards risk modelling, risk assessment, risk management and decision making, with very little discussion of generic principles for reducing technical risk.

Taguchi's experimental method for robust design through testing (Phadke, 1989) achieves designs whose performance characteristics are insensitive to variations of control (design) variables. This method can be considered an important step towards formulating robust design, with performance characteristics insensitive to variations of design parameters, as a generic risk-reduction principle.
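The robustness criterion behind Taguchi's method can be illustrated with the standard larger-the-better signal-to-noise ratio. The two candidate designs and their measurements below are hypothetical; the point is only that the design with less scatter under the same noise conditions scores a higher signal-to-noise ratio and is therefore preferred.

```python
import math

# Hypothetical response measurements (e.g. output strength) for two
# candidate designs, each tested under the same set of noise conditions.
measurements = {
    "design_1": [9.8, 10.1, 9.9, 10.2],   # consistent performance
    "design_2": [8.0, 12.5, 9.0, 11.0],   # similar mean, more scatter
}

def sn_larger_is_better(values):
    """Taguchi larger-the-better signal-to-noise ratio (in dB):
    SN = -10 * log10(mean(1 / y_i^2)). A higher SN indicates a design
    that is less sensitive to noise variation."""
    return -10.0 * math.log10(sum(1.0 / y**2 for y in values) / len(values))

sn = {name: sn_larger_is_better(vals) for name, vals in measurements.items()}
robust = max(sn, key=sn.get)   # the more robust design
```

Maximising the signal-to-noise ratio over an orthogonal array of design settings, rather than eyeballing raw responses, is what lets the method trade off mean performance against sensitivity to noise in a single criterion.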
