
30.1 Introduction

While reliability and risk assessment are truly domain-independent areas, the same cannot be said about the equally important areas of reliability improvement and risk reduction. For decades, reliability and risk science failed to appreciate and emphasise that reliability improvement, risk reduction and uncertainty reduction are underpinned by general principles that work in many unrelated domains.

As a consequence, what emerged were methods for measuring and assessing reliability, risk and uncertainty, not domain-independent methods for improving reliability and reducing risk and uncertainty that could provide direct input to the design process. Indeed, in standard textbooks on mechanical engineering and design of machine components [1,2,3,4,5,6,7,8,9,10], for example, there is no mention of generic (domain-independent) methods for reliability improvement and risk and uncertainty reduction.

It needs to be pointed out that even the available methods for measuring and assessing reliability and risk cannot always be fully applied during design, for the obvious reason that reliability data for new products and processes are simply unavailable.

In the rare cases where reliability data for the components and parts building the systems are available, they are relevant for a particular environment and duty cycle and their mechanical application to another environment and duty cycle, as experience has shown, is of highly questionable value. The lack of predictive capability of the existing reliability tools caused many engineers to lose faith in the tools and discard them as not adding real value to their work.

Why is engineering design so slow in exploiting the achievements of reliability and risk science to improve reliability and reduce risk? This is certainly not due to the complexity of the reliability improvement and risk reduction methods. In this respect, the contrast with the complex generic mathematical methods for stress analysis and for kinematic and dynamic analysis of solid bodies and fluids is striking. These mathematical modelling methods are penetrating all aspects of engineering design.

The problem is that the current approach to reliability improvement and risk reduction relies almost solely on knowledge from a specific domain and is conducted exclusively by experts in that domain. This creates the incorrect perception that effective risk reduction can be delivered solely by using methods offered by the specific domain, without resorting to general risk reduction methods and principles.

This incorrect perception resulted in ineffective reliability improvement and risk reduction across the entire industry, the loss of valuable opportunities for reducing risk and ‘repeated reinvention of the wheel’. Current technology changes so fast that the domain-specific knowledge related to reliability improvement and risk reduction is outdated almost as soon as it is generated. In contrast, the domain-independent methods for reliability improvement, risk and uncertainty reduction are higher order methods that permit application in new, constantly changing situations and circumstances.

The development of the domain-specific, physics-of-failure approach for reliability improvement [11] has been prompted by the deficiencies of the data-driven approach. Although the physics-of-failure approach was very successful in addressing the underlying causes of failure and eliminating failure modes, it contributed to the widespread view among many reliability practitioners that only physics-of-failure models can deliver real reliability improvement.

It is necessary to point out that building accurate physics-of-failure models of the time to failure is not always possible because of the complexity of the physical mechanisms underlying the failure modes, the complex nature of the environment and the operational stresses. Physics-of-failure modelling certainly helps, for example, to increase the strength of a component by conducting research on the link between microstructure and mechanical properties of the material. However, this approach requires arduous and time-consuming research, special equipment and human resources. More importantly, physics-of-failure models are not capable of capturing principles and invariants underlying reliability improvement and risk reduction in unrelated domains. Despite their success and popularity, physics-of-failure models cannot transcend the narrow domains they serve and cannot be used for improving reliability and reducing risk in unrelated domains.

A central theme in the new domain-independent approach for reliability improvement and risk reduction introduced in [12] is the concept that risk reduction is underpinned by common domain-independent principles which, combined with knowledge from the specific domain, are capable of generating effective risk-reducing solutions.

The domain-independent methods do not rely on the availability of past failure data or detailed knowledge of the underlying mechanisms of failure. As a result, they are particularly well suited for developing new designs, with unknown failure mechanisms and failure history. In many cases, these methods reduce risk at no extra cost or at a relatively small cost.

Establishing universally accepted theoretical principles for risk assessment requires a common definition of risk, valid in unrelated domains of human activity [13]. Similarly, establishing universally accepted theoretical principles for risk and uncertainty reduction requires formulating domain-independent principles for reducing risk and uncertainty, valid in unrelated domains of human activity. Establishing risk research as a mainstream science requires solid and universally accepted theoretical principles for the two fundamental components of risk management: risk assessment and risk and uncertainty reduction. The domain-independent principles and methods for risk and uncertainty reduction:

  • Add value to decisions related to reliability improvement, risk and uncertainty reduction.

  • Provide key input to the design process by improving the reliability of the designed product rather than measuring its performance only.

  • Provide effective risk and uncertainty reduction across unrelated domains of human activity. Avoid loss of opportunities for reducing risk and ‘reinvention of the wheel’.

  • Deeply impact the current understanding of available methods and techniques for risk and uncertainty reduction.

It is important to point out that the domain-independent methods for reliability improvement and risk and uncertainty reduction are not a substitute for the domain-specific approach for risk reduction. Combined with knowledge from the specific domain, the domain-independent methods and principles help to obtain superior solutions. Accordingly, this chapter demonstrates that combining domain-specific knowledge from different areas of engineering with the domain-independent methods of the algebraic inequalities and segmentation leads to reliability improvement and uncertainty reduction.

30.2 Method of Segmentation to Improve Reliability and Develop Light-Weight Design

The underlying idea of the method of segmentation is to prevent failure modes and reduce the vulnerability to a single failure, by dividing an entity into a number of distinct parts. A large number of applications of the domain-independent method of segmentation have already been presented in [12].

There are numerous cases where design engineers have control over the points of application of external loads. For the simply supported beam with length a in Fig. 30.1a, the concentrated load F is applied in the middle and results in a bending moment M(x). The maximum bending moment \(M_{1,{\max} }\) is attained at \(x = a/2\) and is equal to \(M_{1,{\max} } = Fa/4\) (Fig. 30.1b). Segmenting the concentrated load F into two loads with magnitude F/2 (Fig. 30.1c) reduces the maximum bending moment by a factor of three, from \(M_{1,{\max} } = Fa/4\) to \(M_{2,{\max} } = Fa/12\) (Fig. 30.1d). The reduction of the bending moment reduces the bending stress in the beam and increases its resistance to overstress failure.

Fig. 30.1  Reducing the risk of overstress failure of a beam by segmenting the external concentrated load F

In some design applications (e.g. in motorsport design), the focus is often on obtaining a light-weight design rather than on increasing the resistance to overstress failure. A light-weight design translates directly into enhanced performance, reduced fuel consumption and reduced emissions. With the segmented external load and the reduced tensile stresses from bending in Fig. 30.1, the cross section of the loaded beam can be reduced, which results in a light-weight design.

Indeed, the bending stress \(\sigma_{b}\) in a beam with a circular cross section with diameter d is given by the well-known formula [14]: \(\sigma_{b} = 32M/(\pi d^{3} )\), where M is the bending moment acting in the particular section. Reducing the bending moment by a factor of three while preserving the bending stress \(\sigma_{b}\) results in a significant reduction of the cross-sectional diameter of the beam. From \(\sigma_{b} = 32M/(\pi d^{3} ) = 32(M/3)/(\pi d_{1}^{3} )\), the diameter of the light-weight design is evaluated to be \(d_{1} = 0.693\,d\), which, for a uniform cross section, results in a volume of material per unit length of the beam equal to \(\pi (0.693d)^{2} /4 = 0.48 \times \pi d^{2} /4\). As a result, the light-weight design carries the same bending stress \(\sigma_{b}\) with only 48% of the material of the original beam. The weight saving from segmenting the loading force is impressive.
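A minimal numerical sketch of both results is given below. The exact positions of the two F/2 loads in Fig. 30.1c are not stated in the text; the script assumes they act symmetrically at a/6 from each support, the symmetric placement consistent with the stated maximum moment Fa/12. The values of F and a are arbitrary and only illustrative.

```python
import numpy as np

F, a = 1000.0, 1.0  # illustrative load (N) and span (m); the ratios below do not depend on them

def max_bending_moment(positions, magnitudes, span, n=10001):
    """Maximum bending moment of a simply supported beam under concentrated loads."""
    x = np.linspace(0.0, span, n)
    # Left support reaction from moment equilibrium about the right support
    R_left = sum(P * (span - p) for P, p in zip(magnitudes, positions)) / span
    M = R_left * x - sum(P * np.clip(x - p, 0.0, None)
                         for P, p in zip(magnitudes, positions))
    return M.max()

M1 = max_bending_moment([a / 2], [F], a)                        # single central load F
M2 = max_bending_moment([a / 6, 5 * a / 6], [F / 2, F / 2], a)  # segmented load (assumed placement)
print(M1 / (F * a), M2 / (F * a))   # -> 0.25 (= 1/4) and 0.0833... (= 1/12)

# Light-weight design: keep sigma_b = 32*M/(pi*d^3) constant while M is reduced 3 times
d = 1.0                              # reference diameter
d1 = (1.0 / 3.0) ** (1.0 / 3.0) * d  # = 0.693*d
print(d1, (d1 / d) ** 2)             # -> 0.693 and 0.48 (48% of the material per unit length)
```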

The load segmentation also improves reliability and results in light-weight designs in the case of a concentrated external torque (Fig. 30.2a).

Fig. 30.2  Reducing the risk of overstress failure of a shaft by segmenting the external concentrated torque T

Segmenting the concentrated torque T into two torques of magnitude T/2 reduces the maximum shear stress from \(\tau_{{\max} } = 16T/(\pi d^{3} )\) along the length AB in Fig. 30.2a, to \(\tau_{{\max} ,1} = 8T/(\pi d^{3} )\) along the section CB in Fig. 30.2c. Similarly, preserving the same shear stress \(\tau_{{\max} }\) along the sections AC and CB yields the light-weight design in Fig. 30.2e with reduced cross section along the section CB.
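Following the same reasoning as in the bending case, and assuming the diameter along AC is kept equal to d, preserving the shear stress \(\tau_{{\max} } = 16T/(\pi d^{3} )\) along CB, where the internal torque is halved, gives the reduced diameter

$$16T/(\pi d^{3} ) = 16(T/2)/(\pi d_{1}^{3} ) \Rightarrow d_{1} = (1/2)^{1/3} d \approx 0.794\,d$$

so, along CB, the light-weight design carries the same maximum shear stress with only about 63% (\(0.794^{2} \approx 0.63\)) of the material per unit length of the original shaft.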

These simple solutions for reducing the stresses in loaded structures, based on segmentation of external concentrated loads, have never been suggested in standard textbooks in the mature fields of stress analysis and strength of components [1, 2, 5, 14,15,16].

A primary objective of the topology optimisation of structural design is removing and redistributing material in a specified design space, for specified loads, constraints and boundary conditions, so that a light-weight design is attained while preserving the required functionality. No solutions based on segmentation of external loads have been suggested in the literature related to topology optimisation [17], despite the fact that segmentation of external loads often leads to light-weight designs.

This shows that the lack of knowledge of the domain-independent method of segmentation made it invisible to domain experts that segmentation of external loads can be used to significantly reduce the internal stresses in loaded structures and to develop light-weight designs.

30.2.1 Improvement of Reliability of Computations

The next application of chain-rule segmentation to reduce the risk of computational errors is related to differentiating a very complex function \(f(t)\) with respect to the parameter t.

The complex function \(f(t)\) is first represented as a composition of nested continuous functions

$$f(t) = f(\varphi_{1} (\varphi_{2} ( \ldots \varphi_{n} (t))))$$

where \(f(\varphi_{1} )\), \(\varphi_{1} (\varphi_{2} )\), \(\varphi_{2} (\varphi_{3} ), \ldots ,\varphi_{n} (t)\) are simpler differentiable functions.

Consequently, the derivative \({\text{d}}f(t)/{\text{d}}t\) can be found by applying the chain rule for differentiation:

$$\frac{{{\text{d}}f(t)}}{{{\text{d}}t}} = \frac{{{\text{d}}f}}{{{\text{d}}\varphi_{1} }} \times \frac{{{\text{d}}\varphi_{1} }}{{{\text{d}}\varphi_{2} }} \times \ldots \times \frac{{{\text{d}}\varphi_{n} }}{{{\text{d}}t}}$$

The reduction of the risk of computational errors comes from the circumstance that each of the derivatives, \({\text{d}}f/{\text{d}}\varphi_{1}\), \({\text{d}}\varphi_{1} /{\text{d}}\varphi_{2} , \ldots ,{\text{d}}\varphi_{n} /{\text{d}}t\), is much easier to evaluate than the derivative \({\text{d}}f(t)/{\text{d}}t\).

Consider an example from the kinematic analysis of mechanisms. The mechanism whose kinematics is to be analysed incorporates three sliders B, D and E (Fig. 30.3). Sliders B and D move along the x-axis, while slider E moves along the axis ET, which is perpendicular to the x-axis and at a distance d from the origin O of the coordinate system Oxy.

Fig. 30.3  A mechanism whose kinematics is analysed

The crank OA rotates in the clockwise direction, with a uniform angular velocity of \(\omega = 1.5\) rad/s and subtends an angle \(\varphi\) with the horizontal x-axis which varies within the interval \([0,2\pi ]\). Note that the angle CDE is not fixed and varies as the links CD and ED rotate around the pin joint D. The values of the parameters fully specifying the mechanism are as follows: OA \(= r = 0.35\,{\text{m}}\); AB \(= a = 0.65\,{\text{m}}\); \(AC = b = 0.50\,{\text{m}}\); \(CD = m = 0.80\,{\text{m}}\); \(DE = t = 0.75\,{\text{m}}\) and \(d = 1.3\,{\text{m}}\).

The point of interest is the velocity of slider E.

Denoting \(x_{D} = OD\), \(y_{E} = TE\) and applying trigonometry yields

$$\sin \psi = r\sin \varphi /a$$
(30.1)
$$\cos \psi = \sqrt {1 - \sin^{2} \psi }$$
(30.2)
$$x_{D} = b\cos \psi - r\cos \varphi + \sqrt {m^{2} - (a + b)^{2} \sin^{2} \psi }$$
(30.3)
$$y_{E} = \sqrt {t^{2} - (d - x_{D} )^{2} }$$
(30.4)

Substituting expressions (30.1) and (30.2) in (30.3), followed by substituting expression (30.3) in (30.4), expresses \(y_{E}\) as a function of the crank angle \(\varphi\); by using the relationship \(\varphi = \omega t\), \(y_{E}\) can also be expressed as a function of the time t. Once \(y_{E}\) has been expressed as a function of time, it can be differentiated to obtain the velocity \(v_{E}\) of slider E: \(v_{E} = {\text{d}}y_{E} (t)/{\text{d}}t\). However, this approach requires differentiating a very complex expression, and during this process the likelihood of making an error is very high. The risk of computational error can be reduced greatly if the method of segmentation is applied through the chain rule for differentiation. As a result, the initial problem of determining \(v_{E} = {\text{d}}y_{E} (t)/{\text{d}}t\) is replaced by the simpler problem of determining the three derivatives:

$$v_{E} = [{\text{d}}y_{E} /{\text{d}}x_{D} ] \times [{\text{d}}x_{D} /{\text{d}}\varphi ] \times [{\text{d}}\varphi /{\text{d}}t]$$
(30.5)

Indeed,

$$\frac{{{\text{d}}y_{E} }}{{{\text{d}}x_{D} }} = \frac{{d - x_{D} }}{{\sqrt {t^{2} - (d - x_{D} )^{2} } }}$$
(30.6)
$$\begin{aligned} \frac{{{\text{d}}x_{D} }}{{{\text{d}}\varphi }} & = - \frac{{br^{2} \sin \varphi \cos \varphi }}{{a^{2} \sqrt {1 - (r/a)^{2} \sin^{2} \varphi } }} + r\sin \varphi \\ & \quad - \frac{{(a + b)^{2} r^{2} \sin \varphi \cos \varphi }}{{a^{2} \sqrt {m^{2} - (a + b)^{2} (r/a)^{2} \sin^{2} \varphi } }} \\ \end{aligned}$$
(30.7)
$${\text{d}}\varphi /{\text{d}}t = \omega$$
(30.8)

The velocity and displacement of slider E, as functions of the crank angle \(\varphi\) in radians, are shown in Fig. 30.4 with a continuous and a dashed line, respectively. To test the chain-rule segmentation method, the velocity of slider E has also been calculated by numerical differentiation:

$$v_{E,i} \approx \frac{{y_{E,i} - y_{E,i - 1} }}{h} \times \omega$$
(30.9)

where \(h = 0.001\) rad is a small step of the crank angle, \(y_{E,i}\) and \(y_{E,i - 1}\) are the displacements of point E corresponding to crank angles \(\varphi_{i}\) and \(\varphi_{i - 1}\), \(i = 1, \ldots ,n\).
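The chain-rule computation (30.5)–(30.8) and the numerical check (30.9) can be reproduced with a short script; the minimal sketch below uses the parameter values listed above and the name te for the link length DE to avoid a clash with the time variable t.

```python
import numpy as np

# Parameters of the mechanism in Fig. 30.3 (te stands for the link length DE)
r, a, b, m, te, d, omega = 0.35, 0.65, 0.50, 0.80, 0.75, 1.3, 1.5

h = 0.001                                  # step of the crank angle, rad
phi = np.arange(0.0, 2 * np.pi, h)

# Displacements, Eqs. (30.1)-(30.4)
sin_psi = r * np.sin(phi) / a
x_D = (b * np.sqrt(1 - sin_psi**2) - r * np.cos(phi)
       + np.sqrt(m**2 - (a + b)**2 * sin_psi**2))
y_E = np.sqrt(te**2 - (d - x_D)**2)

# Chain-rule segmentation, Eqs. (30.5)-(30.8)
dyE_dxD = (d - x_D) / np.sqrt(te**2 - (d - x_D)**2)
dxD_dphi = (-b * r**2 * np.sin(phi) * np.cos(phi)
            / (a**2 * np.sqrt(1 - (r / a)**2 * np.sin(phi)**2))
            + r * np.sin(phi)
            - (a + b)**2 * r**2 * np.sin(phi) * np.cos(phi)
            / (a**2 * np.sqrt(m**2 - (a + b)**2 * (r / a)**2 * np.sin(phi)**2)))
v_E = dyE_dxD * dxD_dphi * omega

# Numerical differentiation check, Eq. (30.9)
v_E_num = (y_E[1:] - y_E[:-1]) / h * omega

# The two velocity curves should coincide up to a discretisation error of order h
print(np.max(np.abs(v_E[1:] - v_E_num)))
```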

Fig. 30.4  Velocity \(v_{E}\) (continuous line) and displacement \(y_{E}\) (dashed line) of point E on slider E

The velocity dependence obtained from the numerical differentiation and the velocity dependence obtained from the chain-rule segmentation coincide.

In the literature related to kinematic analysis of mechanisms [18,19,20], no solutions based on segmentation through the chain rule have been suggested, even though such segmentation clearly leads to a significantly reduced likelihood of errors. The lack of knowledge of the domain-independent method of segmentation made it invisible to domain experts in the mature field of kinematic analysis of mechanisms that chain-rule segmentation yields a significantly reduced likelihood of computational errors.

30.3 Reducing Risk and Uncertainty by Using Algebraic Inequalities

In textbooks on reliability engineering [21,22,23,24,25] and in papers related to risk, reliability and uncertainty, there is a lack of discussion related to reducing risk and uncertainty by using algebraic inequalities. This is a surprising omission considering the power of algebraic inequalities in reducing risk and uncertainty and the existence of a significant number of publications covering the theory of algebraic inequalities [26,27,28,29,30,31,32]. It was only recently that some applications of the domain-independent method of algebraic inequalities for reducing risk and uncertainty were presented in [12, 33].

A formidable advantage of the algebraic inequalities is their capacity to reduce aleatory and epistemic uncertainty and produce tight upper and lower bounds related to uncertain reliability-critical design parameters such as material properties, dimensions, loads and component reliabilities. Algebraic inequalities are capable of ranking systems, processes and decisions in terms of reliability in the absence of any knowledge related to the values of the reliability-critical parameters. In addition, algebraic inequalities can be interpreted in a meaningful way and this interpretation can be attached to real systems and processes. This leads not only to uncertainty reduction but also to the discovery of new fundamental properties of systems and processes.

By establishing tight bounds on properties and parameters, algebraic inequalities can be applied to improve the robustness of designs by making them accommodate the worst possible variation of the output parameters. As a result, a number of failure modes can be avoided.

30.3.1 Ranking Systems with Unknown Reliability of Components

Often, the reliabilities of the components building a system are unknown, and this epistemic uncertainty translates into epistemic uncertainty about which of two competing systems is superior.

An important way of using inequalities to improve reliability and reduce risk is to derive and prove an algebraic inequality which ranks the performance of competing systems. For two competing systems (a) and (b), built with components whose reliabilities are unknown, the steps for establishing which system is superior can be summarised as follows.

  • For each of the competing systems, build the reliability network from its functional diagram.

  • By using methods from system reliability analysis, determine the reliabilities \(R_{a}\) and \(R_{b}\) of the systems or the probabilities of system failure \(F_{a}\),\(F_{b}\).

  • Subtract the reliabilities of the competing systems or the probabilities of system failure and prove any of the inequalities: \(R_{a} - R_{b} > 0\), \(R_{a} - R_{b} < 0\), \(F_{a} - F_{b} > 0\), \(F_{a} - F_{b} < 0\).

  • Select the system with superior reliability or the system with the smaller probability of failure.

Consider the two systems with different topologies, built with the same types of valves (denoted by X, Y and Z), shown in Fig. 30.5. The valves work independently of one another and all of them are initially open. The question of interest is which system is more reliable with respect to the function ‘stopping the flow of fluid in the pipeline’. The signal for closing is issued to all valves simultaneously.

Fig. 30.5  Two competing systems with different topology, built with the same type of components

Figures 30.6a and b represent the reliability networks of the systems from Fig. 30.5a and b, respectively. The reliability values x, y and z characterising the separate valves are unknown. The only available information about the reliabilities of the valves consists of the obvious constraints: \(0 < x < 1;\,\,0 < y < 1;\,\,0 < z < 1\).

Fig. 30.6  The reliability networks of the systems from Fig. 30.5

Expressing the probabilities of failure characterising the competing systems as functions of the unknown reliabilities of the valves yields

$$F_{a} (x,y,z) = (1 - x^{2} )(1 - y^{2} )(1 - z^{2} )\,{\text{and}}\,F_{b} (x,y,z) = (1 - xyz)^{2}$$

Ranking the systems’ performance consists of proving \(F_{a} (x,y,z) - F_{b} (x,y,z) < 0\) or \(F_{a} (x,y,z) - F_{b} (x,y,z) > 0\). Proving \(F_{a} (x,y,z) - F_{b} (x,y,z) < 0\), for example, is equivalent to proving the inequality

$$(1 - x^{2} )(1 - y^{2} )(1 - z^{2} ) < (1 - xyz)^{2}$$
(30.10)

To prove inequality (30.10), it suffices to prove the inequality \(\sqrt {(1 - x^{2} )(1 - y^{2} )(1 - z^{2} )} < (1 - xyz)\) or the equivalent inequality

$$\sqrt {(1 - x^{2} )(1 - y^{2} )(1 - z^{2} )} + xyz < 1$$
(30.11)

Indeed, if inequality (30.11) is true, inequality (30.10) follows from it by squaring both sides of the inequality \(\sqrt {(1 - x^{2} )(1 - y^{2} )(1 - z^{2} )} < 1 - xyz\). The squaring operation will not change the direction of the inequality because \(0 < x < 1;\,\,0 < y < 1;\,\,0 < z < 1\), and the following quantities are positive: \(\,(1 - xyz) > 0\), \((1 - x^{2} )(1 - y^{2} )(1 - z^{2} ) > 0\,\).

To prove inequality (30.11), a combination of a substitution technique and a technique based on proving a simpler, intermediate inequality will be used.

Because the reliability \(r_{i}\) of a component is a number between zero and unity, the trigonometric substitutions \(r_{i} = \sin \alpha_{i}\) where \(\alpha_{i} \in (0,\pi /2)\) are appropriate. Making the substitutions: \(x = \sin \alpha ;\,\,y = \sin \beta \,\,\) and \(z = \sin \gamma\) for the reliabilities of the components, transforms the left side of inequality (30.11) into

$$\begin{aligned} & \sqrt {(1 - x^{2} )(1 - y^{2} )(1 - z^{2} )} + xyz = \cos \alpha \times \cos \beta \\ & \quad \times \cos \gamma + \sin \alpha \times \sin \beta \times \sin \gamma \\ \end{aligned}$$
(30.12)

Next, the positive quantity \(\cos \alpha \times \cos \beta \times \cos \gamma + \sin \alpha \times \sin \beta \times \sin \gamma\) is replaced by the larger quantity \(\cos \alpha \times \cos \beta + \sin \alpha \times \sin \beta\). Indeed, because \(0 < \cos \gamma < 1\) and \(0 < \sin \gamma < 1\), the inequality

$$\begin{aligned} & \cos \alpha \times \cos \beta \times \cos \gamma + \sin \alpha \times \sin \beta \\ & \quad \times \sin \gamma < \cos \alpha \times \cos \beta + \sin \alpha \times \sin \beta \\ \end{aligned}$$
(30.13)

holds. If the intermediate inequality \(\cos \alpha \times \cos \beta + \sin \alpha \times \sin \beta \le 1\) can be proved, this will imply the inequality

$$\cos \alpha \times \cos \beta \times \cos \gamma + \sin \alpha \times \sin \beta \times \sin \gamma < 1$$
(30.14)

Since \(\cos \alpha \times \cos \beta + \sin \alpha \times \sin \beta = \cos (\alpha - \beta )\), and \(\cos (\alpha - \beta ) \le 1\), we finally get

$$\begin{aligned} & \cos \alpha \times \cos \beta \times \cos \gamma + \sin \alpha \times \sin \beta \times \sin \gamma < \cos \alpha \times \cos \beta \\ & \quad + \sin \alpha \times \sin \beta = \cos (\alpha - \beta ) \le 1 \\ \end{aligned}$$

from which inequality (30.11) follows.

Inequality (30.11) has been proved and from it, inequality (30.10) follows. The system in Fig. 30.5a is characterised by a smaller probability of failure than the system in Fig. 30.5b; therefore, the system in Fig. 30.5a is the more reliable system.
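As an additional numerical illustration (not part of the formal proof), inequality (30.10) can be spot-checked over randomly sampled valve reliabilities:

```python
import random

random.seed(1)
worst_margin = float("inf")
for _ in range(100_000):
    x, y, z = (random.uniform(1e-6, 1 - 1e-6) for _ in range(3))
    F_a = (1 - x**2) * (1 - y**2) * (1 - z**2)   # probability of failure of system (a)
    F_b = (1 - x * y * z)**2                     # probability of failure of system (b)
    worst_margin = min(worst_margin, F_b - F_a)

# F_b - F_a remains positive for every sampled (x, y, z), consistent with inequality (30.10)
print(worst_margin > 0, worst_margin)
```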

30.3.2 Inequality of Negatively Correlated Random Events

There is another, alternative way of using algebraic inequalities for risk and uncertainty reduction, which consists of moving in the opposite direction: starting from an existing abstract inequality and moving towards a real system or process. An important step in this process is creating relevant meaning for the variables entering the algebraic inequality, followed by a meaningful interpretation of the different parts of the inequality which links it with a real physical system or process.

Consider m random events \(A_{1} ,A_{2} , \ldots ,A_{m}\) that are not mutually exclusive. This means that there are at least two events \(A_{i}\) and \(A_{j}\) for which \(P(A_{i} \cap A_{j} ) \ne 0\). It is known with certainty that if any particular event \(A_{k}\) of the set of events does not occur (\(k = 1, \ldots ,m\)), then at least one of the other events occurs. In other words, the relationship

$$P(A_{1} \cup \ldots \cup A_{k - 1} \cup A_{k + 1} \cup \ldots \cup A_{m} \,|\,\bar{A}_{k} ) = 1$$

holds for the set of m events.

Under these assumptions, it can be shown that the following inequality holds

$$P(A_{1} ) + P(A_{2} ) + \ldots + P(A_{m} ) > 1$$
(30.15)

This inequality will be referred to as the inequality of negatively correlated events.

To prove this inequality, consider the numbers of outcomes \(n_{1} ,n_{2} , \ldots ,n_{m}\) leading to the separate events \(A_{1} ,A_{2} , \ldots ,A_{m}\), respectively, and let n denote the total number of possible (equally likely) outcomes. From the definition of negatively correlated events, it follows that each of the n possible outcomes corresponds to the occurrence of at least one event \(A_{i}\). Since at least two events \(A_{i}\) and \(A_{j}\) can occur simultaneously, the sum of the numbers of outcomes leading to the separate events \(A_{1} ,A_{2} , \ldots ,A_{m}\) is greater than the total number of outcomes n:

$$n_{1} + n_{2} + \ldots + n_{m} > n$$
(30.16)

Indeed, because at least two events \(A_{i}\) and \(A_{j}\) can occur simultaneously, at least one outcome must be counted twice: once for event \(A_{i}\) and once for event \(A_{j}\). Dividing both sides of (30.16) by the positive value n does not alter the direction of inequality (30.16) and the result is the inequality

$$n_{1} /n + n_{2} /n + \ldots + n_{m} /n > 1$$
(30.17)

which is inequality (30.15).
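A toy example (not from the original text) illustrates inequality (30.15). For a fair die, let \(A_{1}\) be the event ‘the score is at most 4’ and \(A_{2}\) the event ‘the score is at least 3’: if either event does not occur, the other one occurs, the two events can occur together (scores 3 and 4), and indeed \(P(A_{1} ) + P(A_{2} ) = 4/6 + 4/6 > 1\). The count-based argument can be checked in a few lines:

```python
from fractions import Fraction

outcomes = set(range(1, 7))                    # equally likely die scores
A1 = {s for s in outcomes if s <= 4}           # 'score is at most 4'
A2 = {s for s in outcomes if s >= 3}           # 'score is at least 3'

# Conditions of inequality (30.15): the events cover all outcomes and overlap
assert A1 | A2 == outcomes and A1 & A2

def P(event):
    return Fraction(len(event), len(outcomes))

print(P(A1) + P(A2))                           # -> 4/3, which is greater than 1
```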

Consider the reliability networks of the two systems in Fig. 30.7. Despite the deep uncertainty related to the components building the systems, the reliabilities of the systems can still be ranked by a meaningful interpretation of the inequality of negatively correlated events.

Fig. 30.7  Ranking the reliabilities of two systems with unknown reliability of components

The power of the simple inequality (30.15) can be demonstrated even if only two events \(A_{1} \equiv A\) and \(A_{2} \equiv \bar{B}\) are considered. Event \(A_{1} \equiv A\) stands for ‘system (a) is working at the end of a specified time interval’ while event \(A_{2} \equiv \bar{B}\) stands for ‘system (b) is not working at the end of the specified time interval’ (\(P(\bar{B}) + P(B) = 1\)) (Fig. 30.7). The conditions of inequality (30.15) are fulfilled for events \(A\) and \(\bar{B}\) related to the systems in Fig. 30.7.

Indeed, if event \(\bar{B}\) does not occur, this means that system (b) is working. This can happen only if all components 4, 5 and 6 in Fig. 30.7b are working, which means that system (a) is working. As a result, if event \(\bar{B}\) does not occur then event A occurs. Conversely, if event A does not occur then at least one of the components 4, 5, 6 in Fig. 30.7a does not work, which means that system (b) does not work (the event \(\bar{B}\) occurs). At the same time, both events can occur simultaneously (\(P(A \cap \bar{B}) \ne 0\)). This is, for example, the case if components 1, 2, 3 are in a working state at the end of the time interval (0, t) and component 5 is in a failed state.

The conditions of inequality (30.15) are fulfilled, therefore

$$P(A) + P(\bar{B}) > 1$$
(30.18)

holds, which is equivalent to

$$P(A) > 1 - P(\bar{B}) = P(B)$$

As a result, it follows that \(P(A) > P(B)\), irrespective of the reliabilities \(r_{1} ,r_{2} ,r_{3} ,r_{4} ,r_{5} ,r_{6}\) of the components (1–6) building the systems. The meaningful interpretation of the inequality of negatively correlated events helped to reveal the intrinsic reliability of the competing design solutions and to rank them in terms of reliability, in the absence of knowledge related to the reliabilities of their building parts.

In other cases, knowledge about the age of the components is available and can be used in proving the inequalities related to the system reliabilities. Suppose, for example, that the functional diagrams of the competing systems are built with three valves (A, B and C) of different ages: valve A is new, valve B is of intermediate age and valve C is old. If the reliabilities of the valves are denoted by \(a,b\) and \(c\), the reliabilities can be ranked \(a > b > c\), and this ranking can be used in proving the inequalities related to the reliabilities of the competing systems [12].

30.3.2.1 Meaningful Interpretation of an Abstract Algebraic Inequality

While the proof of an algebraic inequality does not normally pose problems, the meaningful interpretation of an inequality is not a straightforward process. Such an interpretation usually brings deep insights, some of which stand at the level of a new physical property/law.

Consider the abstract algebraic inequality

$$(x_{1} + x_{2} + \ldots + x_{n} ) \ge n^{2} \left( {\frac{1}{{1/x_{1} + 1/x_{2} + \ldots + 1/x_{n} }}} \right),$$
(30.19)

which is valid for any set of n positive quantities \(x_{i}\).

A proof of Inequality (30.19) can be obtained by transforming the inequality to the classical Cauchy–Schwarz inequality

$$(a_{1} b_{1} + a_{2} b_{2} + \ldots + a_{n} b_{n} )^{2} \le (a_{1}^{2} + a_{2}^{2} + \ldots + a_{n}^{2} )(b_{1}^{2} + b_{2}^{2} + \ldots + b_{n}^{2} )$$
(30.20)

which is valid for any two sequences of real numbers \(a_{1} ,a_{2} , \ldots ,a_{n}\) and \(b_{1} ,b_{2} , \ldots ,b_{n}\).

Note that substituting \(a_{i} = \sqrt {x_{i} }\) and \(b_{i} = 1/\sqrt {x_{i} }\) (\(i = 1, \ldots ,n\)) in the Cauchy–Schwarz inequality (30.20) leads to inequality (30.19).
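Indeed, with these substitutions, \(a_{i} b_{i} = 1\), \(a_{i}^{2} = x_{i}\) and \(b_{i}^{2} = 1/x_{i}\), and inequality (30.20) becomes

$$n^{2} = (1 + 1 + \ldots + 1)^{2} \le (x_{1} + x_{2} + \ldots + x_{n} )\left( {\frac{1}{{x_{1} }} + \frac{1}{{x_{2} }} + \ldots + \frac{1}{{x_{n} }}} \right)$$

Dividing both sides by the positive sum \(1/x_{1} + 1/x_{2} + \ldots + 1/x_{n}\) yields inequality (30.19).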

Appropriate meaning can now be attached to the variables entering inequality (30.19) and the two sides of the inequality can be interpreted in various meaningful ways.

A relevant meaning for the variables in the inequality can be created, for example, if each \(x_{i}\) stands for ‘electrical resistance of element i’. The equivalent resistances \(R_{e,s}\) and \(R_{e,p}\) of n elements arranged in series and parallel are given by [34]

$$R_{e,s} = x_{1} + x_{2} + \ldots + x_{n}$$
(30.21)
$$R_{e,p} = \frac{1}{{1/x_{1} + 1/x_{2} + \ldots + 1/x_{n} }}$$
(30.22)

where \(x_{i}\) is the resistance of the ith element (\(i = 1, \ldots ,n\)). In this case, expression (30.21) on the left side of the inequality (30.19) can be meaningfully interpreted as the equivalent resistance of n elements arranged in series. The expression (30.22), on the right side of inequality (30.19), can be meaningfully interpreted as the equivalent resistance of n elements arranged in parallel. Inequality (30.19) now expresses a new physical property: the equivalent resistance of n elements arranged in parallel is at least \(n^{2}\) times smaller than the equivalent resistance of the same elements arranged in series, irrespective of the individual resistance values of the elements. Equality is attained for \(x_{1} = x_{2} = \ldots = x_{n}\).
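A quick numerical spot-check of this property, with arbitrary positive resistance values (an illustrative sketch, not from the original text):

```python
import random

random.seed(0)
for _ in range(10_000):
    n = random.randint(2, 8)
    x = [random.uniform(0.1, 1000.0) for _ in range(n)]   # arbitrary positive resistances, ohms
    R_series = sum(x)                                      # Eq. (30.21)
    R_parallel = 1.0 / sum(1.0 / xi for xi in x)           # Eq. (30.22)
    # Inequality (30.19): the series equivalent is at least n^2 times the parallel equivalent
    assert R_series >= n**2 * R_parallel * (1 - 1e-12)
print("R_series >= n^2 * R_parallel held for all sampled resistance sets")
```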

It needs to be pointed out that for resistors of equal values, the fact that the equivalent resistance in parallel is exactly \(n^{2}\) times smaller than the equivalent resistance of the resistors in series is a trivial result, easily derived and known for a long period of time [35].

Indeed, for n resistors of equal resistance \(x_{1} = x_{2} = \ldots = x_{n} = r\) arranged in series, Eq. (30.21) gives the value \(nr\), while for the same n resistors arranged in parallel, Eq. (30.22) gives the value \(r/n\). As can be seen, the value \(r/n\) is exactly \(n^{2}\) times smaller than the value \(nr\). However, the bound provided by inequality (30.19) is a much deeper result: it is valid for any possible values of the resistances and does not require equal resistances.

The meaning created for the variables \(x_{i}\) in inequality (30.19) is not unique and can be altered. Suppose that \(x_{i}\) in inequality (30.19) now stands for electrical capacitance. The equivalent capacitances \(C_{e,p}\) and \(C_{e,s}\) of n capacitors arranged in parallel and in series are given by [34]:

$$C_{e,p} = x_{1} + x_{2} + \ldots + x_{n}$$
(30.23)

and

$$C_{e,s} = \frac{1}{{1/x_{1} + 1/x_{2} + \ldots + 1/x_{n} }}$$
(30.24)

respectively, where \(x_{i}\) is the capacitance of the ith capacitor (\(i = 1, \ldots ,n\)). The expression (30.23) on the left side of inequality (30.19) can now be meaningfully interpreted as the equivalent capacitance of n capacitors arranged in parallel. The expression (30.24) on the right side of inequality (30.19) can be meaningfully interpreted as the equivalent capacitance \(C_{e,s}\) of n capacitors arranged in series. Inequality (30.19) now expresses another physical property: the equivalent capacitance of n capacitors arranged in parallel is at least \(n^{2}\) times larger than the equivalent capacitance of the same capacitors arranged in series, irrespective of the individual capacitance values.

Suppose that yet another meaning for the variables \(x_{i}\) in inequality (30.19) is created: each \(x_{i}\) now stands for the stiffness of the elastic element i (\(i = 1, \ldots ,n\)). Consider the equivalent stiffness \(k_{e,s}\) of n elastic elements in series and the equivalent stiffness \(k_{e,p}\) of n elastic elements in parallel. The stiffness values of the separate elastic elements, denoted by \(x_{1} ,x_{2} , \ldots ,x_{n}\), are unknown. The equivalent stiffness of n elastic elements in series is given by the well-known relationship:

$$k_{e,s} = \frac{1}{{1/x_{1} + 1/x_{2} + \ldots + 1/x_{n} }}$$
(30.25)

and for the same elastic elements in parallel, the equivalent stiffness is

$$k_{e,p} = x_{1} + x_{2} + \ldots + x_{n}$$
(30.26)

Now, the two sides of inequality (30.19) can be meaningfully interpreted in the following way. The expression (30.25) on the right-hand side of the inequality (30.19) can be interpreted as the equivalent stiffness of n elastic elements arranged in series. The left side of inequality (30.19) can be interpreted as the equivalent stiffness of n elastic elements arranged in parallel. The inequality now expresses a different physical property: the equivalent stiffness of n elastic elements arranged in parallel is at least \(n^{2}\) times larger than the equivalent stiffness of the same elements arranged in series, irrespective of the individual stiffness values characterising the separate elements. These are examples of different physical properties derived from a meaningful interpretation of a single abstract algebraic inequality.

The considered examples illustrate new physical properties predicted from interpreting a correct algebraic inequality, and they provide the basis for the principle of non-contradiction: if a correct algebraic inequality permits a meaningful interpretation that can be related to a real process, the realization of the process yields results that do not contradict the algebraic inequality.

Further details regarding the principle of non-contradiction will be presented elsewhere.

Inequality (30.19) is domain-independent. It provides tight bounds for electrical and mechanical properties. At the same time, the uncertainty associated with the relationship between the equivalent parameters characterising elements arranged in series and parallel (due to the epistemic uncertainty related to the values of the building elements) is reduced.

These properties have never been suggested in standard textbooks and research literature covering the mature fields of mechanical and electrical engineering, which demonstrates that the lack of knowledge of the domain-independent method of algebraic inequalities made these properties invisible to the domain experts.

30.4 Conclusions

  1. The benefit from combining the domain-independent method of segmentation with domain-specific knowledge in strength of components was demonstrated in reducing the risk of overstress failure by segmenting concentrated external loads. It was demonstrated that the domain-independent method of segmentation also achieves light-weight design.

  2. The capability of the chain-rule segmentation to reduce the risk of computational errors has been demonstrated in the area of kinematic analysis of complex mechanisms.

  3. The domain-independent method of algebraic inequalities has been used to reduce uncertainty, reveal the intrinsic reliability of competing designs and rank these in terms of reliability, in the absence of knowledge related to the reliabilities of their building parts.

  4. The meaningful interpretation of an algebraic inequality led to the discovery of new physical properties.

     Thus, the equivalent resistance of n elements arranged in parallel is at least \(n^{2}\) times smaller than the equivalent resistance of the same elements arranged in series, irrespective of the resistances of the elements.

     Another physical property discovered by a meaningful interpretation of an algebraic inequality is that the equivalent capacitance of n capacitors arranged in series is at least \(n^{2}\) times smaller than the equivalent capacitance of the same capacitors arranged in parallel, irrespective of the actual capacitances of the separate capacitors.

  5. The inequality of negatively correlated random events was introduced and its meaningful interpretation was used to reveal the intrinsic reliability of competing design solutions and to rank them in the absence of knowledge related to the reliabilities of the building parts.

  6. The domain-independent method of segmentation and the domain-independent method based on algebraic inequalities, combined with knowledge from specific domains, achieved effective risk reduction solutions.