1 Introduction

By employing quantifiable and measurable ways of assessing the role and significance of critical uncertainties, and by treating the human-in-the-loop (HITL) as a part, often the most crucial part, of a complex man–instrumentation (both its hard- and software)–object-of-control–environment system, one could dramatically improve the state of the art in ergonomics engineering, including its HSI aspect. In the analysis that follows, several recently employed probabilistic HITL/HSI models are addressed. These models are based on the well-known general principles of applied probability (see, e.g., [1]) and can be, and actually have been, applied when a human fulfils a challenging mission or encounters an extraordinary situation and is expected to possess a sufficiently high human capacity factor (HCF) to successfully cope with an elevated mental workload (MWL) [2]. The “object of control” could be, particularly, an aerospace, automotive, railway or maritime vehicle, or another human, such as, e.g., a medical patient or a business customer, particularly in situations where adequate trust is critical [3]. One cannot improve anything that one does not quantify. And since nobody and nothing is perfect, a physically meaningful and effective quantification should be done on a probabilistic basis. Probabilistic predictive modelling (PPM) is able to predict, quantify and, if necessary and appropriate, even specify an adequate and never-zero probability of failure of an ergonomics undertaking of importance.

Tversky and Kahneman [4] seem to be the first ones who addressed, in application to decision-making tasks in economics (2002 Nobel Prize in economics), various cognitive “heuristics and biases” with consideration of uncertainties in human psychology. Being top-notch, but still traditional, human psychologists, these authors discussed such problems from the qualitative viewpoint, while it is the probabilistic quantitative approach [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28] that is addressed and discussed here. It is noteworthy that the analytical PPM was used in all the referenced publications [8] and that the addressed ergonomics models originated from the models suggested and employed earlier in electronics and photonics reliability engineering (see, e.g., [6, 7]). The first attempt of this kind was undertaken in application to the helicopter-landing-ship (HLS) situation [5]. It was shown that such a landing would be successful and safe, i.e., the helicopter’s undercarriage would not be damaged, if the anticipated random duration of the calm “window” in the sea state appreciably exceeds the sum of two random times: the actual time of landing and the time of the “go/no-go” decision making by the officer on shipboard and the helicopter pilot.

2 Analysis

As has been indicated, the convolution model was introduced and applied first in the HLS situation and then employed in several other HSI problems [10,11,12,13,14,15,16]. Let us show, as a suitable example, how this model can be applied to the assessment of the probability of a head-on railway obstruction. Consider first a situation when the assessed sight distance (ASD) \(\hat{S}\), determined by the system’s radar and/or LIDAR, is viewed as a non-random variable, and assume that the random pre-deceleration constant-speed distance \(S_{0}\) and the subsequent constant-deceleration distance \(S_{1}\) (after the system and/or the machinist detected an obstacle) are random variables distributed in accordance with the Rayleigh law. Indeed, the most likely values (modes) of the braking times and distances cannot be zero, but cannot be very long (large) either. In addition, in emergency situations of the type in question, short times and small distances are much more likely than long times and large distances. Because of that, their probability density functions (PDFs) should be heavily skewed towards short times and small distances. The Rayleigh distribution is the simplest distribution that possesses these features. The probability that the random distance \(S = S_{0} + S_{1}\) exceeds a certain non-random level \(\hat{S}\) can be found as a convolution of the two Rayleigh distributed random variables \(S_{0}\) and \(S_{1}\) as follows:

$$ \begin{gathered} P_{S} = 1 - \int\limits_{0}^{{\hat{S}}} {\frac{{s_{0} }}{{\sigma_{0}^{2} }}\exp \left( { - \frac{{s_{0}^{2} }}{{2\sigma_{0}^{2} }}} \right)} \left[ {1 - \exp \left( { - \frac{{(\hat{S} - s_{0} )^{2} }}{{2\sigma_{1}^{2} }}} \right)} \right]ds_{0} = \exp [ - (1 + \eta^{2} )s^{2} ] + \hfill \\ + \exp ( - s^{2} )\left( {\frac{{\exp \left( { - \frac{{s^{2} }}{{\eta^{2} }}} \right) - \exp \left( { - \eta^{2} s^{2} } \right)}}{{1 + \frac{1}{{\eta^{2} }}}} + \sqrt \pi \frac{s}{{\eta + \frac{1}{\eta }}}\left[ {\Phi (\eta s) + \Phi \left( {\frac{s}{\eta }} \right)} \right]} \right). \hfill \\ \end{gathered} $$
(1)

Here \(f(s_{0,1} ) = \frac{{s_{0,1} }}{{\sigma_{0,1}^{2} }}\exp \left( { - \frac{{s_{0,1}^{2} }}{{2\sigma_{0,1}^{2} }}} \right)\) are the PDFs of the variables \(S_{0}\) and \(S_{1}\); \(\sigma_{0,1}\) are their modes (most likely values); \(\bar{s}_{0,1} = \sqrt {\frac{\pi }{2}} \sigma_{0,1}\) and \(\sqrt {D_{0,1} } = \sqrt {\frac{4 - \pi }{2}} \sigma_{0,1}\) are their means and standard deviations, respectively; \(s = \frac{{\hat{S}}}{{\sqrt {2(\sigma_{0}^{2} + \sigma_{1}^{2} )} }}\) and \(\eta = \frac{{\sigma_{1} }}{{\sigma_{0} }}\) are the dimensionless parameters of the convolution (1); and \(\Phi (\alpha ) = \frac{2}{\sqrt \pi }\int\limits_{0}^{\alpha } {e^{{ - t^{2} }} } dt\) is the probability integral (Laplace function). If the probability (1) is small (how small is “small” should be determined, agreed upon and eventually included in the governing specifications), then there is reason to believe that the train will stop before hitting the obstacle, so that obstruction will be avoided. The calculated \(P_{S}\) values are shown in Table 1. As is evident from the calculated data, the ASD parameter \(s\) plays the major role, while the ratio \(\eta\) of the most likely deceleration and pre-deceleration distances is much less important.
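As a sanity check, the closed form (1) can be compared with a direct numerical evaluation of the underlying convolution integral. Below is a minimal Python sketch; the input distances \(\hat{S}\), \(\sigma_{0}\), \(\sigma_{1}\) are illustrative assumptions, not data from this paper, and the Laplace function \(\Phi\) defined above coincides with the error function:

```python
from math import erf, exp, sqrt, pi
from scipy.integrate import quad

def p_obstruction(s: float, eta: float) -> float:
    """Closed-form probability (1) that the total stopping distance
    S = S0 + S1 exceeds the (non-random) assessed sight distance."""
    term1 = exp(-(1.0 + eta ** 2) * s ** 2)
    term2 = (exp(-(s / eta) ** 2) - exp(-(eta * s) ** 2)) / (1.0 + 1.0 / eta ** 2)
    term3 = sqrt(pi) * s / (eta + 1.0 / eta) * (erf(eta * s) + erf(s / eta))
    return term1 + exp(-s ** 2) * (term2 + term3)

def p_obstruction_numeric(S_hat: float, sigma0: float, sigma1: float) -> float:
    """Direct numerical evaluation of the convolution integral in (1)."""
    def integrand(s0: float) -> float:
        f0 = s0 / sigma0 ** 2 * exp(-s0 ** 2 / (2 * sigma0 ** 2))  # Rayleigh PDF of S0
        F1 = 1.0 - exp(-(S_hat - s0) ** 2 / (2 * sigma1 ** 2))     # CDF of S1 at S_hat - s0
        return f0 * F1
    val, _ = quad(integrand, 0.0, S_hat)
    return 1.0 - val

sigma0, sigma1, S_hat = 100.0, 150.0, 600.0   # hypothetical distances, metres
s = S_hat / sqrt(2 * (sigma0 ** 2 + sigma1 ** 2))
eta = sigma1 / sigma0
print(p_obstruction(s, eta), p_obstruction_numeric(S_hat, sigma0, sigma1))
```

The two evaluations should agree to within the quadrature tolerance, which provides a useful check on the algebra of (1).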

Table 1. Calculated probabilities \(P_{S}\) of obstruction assuming non-random ASD

The role of the ASD variability can be accounted for on the basis of the following reasoning. Assuming, based on intuitively obvious physical considerations, that the ASD is a normally distributed random variable, the probability that this variable is below a certain level \(\hat{S}\) is \(P_{A} = \frac{1}{2}\left[ {1 - \Phi (s)} \right]\). Obstruction will be avoided when the random distance \(S = S_{0} + S_{1}\) is below the level \(\hat{S}\) (the probability of this situation is \(1 - P_{S}\)) and, in addition, the ASD is above this level (this probability is \(1 - P_{A}\)). Then the probability that the obstruction is avoided can be evaluated as \((1 - P_{S} )(1 - P_{A} ),\) and the probability \(P_{SA}\) that obstruction will occur is therefore

$$ P_{SA} = 1 - (1 - P_{S} )(1 - P_{A} ) = P_{A} + P_{S} - P_{A} P_{S} = \frac{1}{2}\left[ {1 - \Phi (s) + P_{S} \left( {1 + \Phi (s)} \right)} \right]. $$
(2)

The calculated \(P_{SA}\) values are shown in Table 2. As one can see by comparing the data in Tables 1 and 2, consideration of the ASD variability results in an insignificant increase in the predicted probabilities of obstruction, and this difference decreases with the decrease in this probability. For very low probabilities of obstruction, consideration of the ASD variability does not make any difference at all (see the italicized data in the last two rows of Table 2). Intuitively, such behaviour could be anticipated from (2). Indeed, when the prediction of the ASD is absolutely accurate and, owing to that, the probability \(P_{A}\) of obstruction caused by inaccurate radar or LIDAR measurements is zero \((P_{A} = 0),\) then \(P_{SA} = P_{S},\) and when \(P_{S}\) is low, \(P_{SA}\) is also low and does not differ from \(P_{S}\).
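Under the same assumptions, Eq. (2) layers the ASD variability on top of the convolution result (1). A short sketch, reusing the hypothetical p_obstruction function from the previous snippet:

```python
from math import erf

def p_obstruction_with_asd(s: float, eta: float) -> float:
    """Eq. (2): obstruction probability when the ASD is itself normally
    distributed; P_A = 0.5 * (1 - Phi(s)), Phi being the probability integral."""
    p_s = p_obstruction(s, eta)          # Eq. (1), from the previous sketch
    p_a = 0.5 * (1.0 - erf(s))
    return p_a + p_s - p_a * p_s         # = 1 - (1 - p_s)(1 - p_a)

print(p_obstruction_with_asd(2.353, 1.5))  # slightly above p_obstruction(2.353, 1.5)
```

For large \(s\), erf(s) approaches one, \(P_A\) vanishes, and the function returns essentially \(P_S\), which is the limiting behaviour discussed above.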

Table 2. Calculated probabilities \(P_{SA}\) of obstruction considering ASD variability

The double-exponential-probability-distribution (DEPD) model uses the DEPD function. This function can be introduced in many ways, depending on the particular problem of importance. In vehicular engineering, such as, say, avionics, if one intends to evaluate the impact of the HCF \(F\), the MWL \(G\), the monitored health symptom and the time \(t\) on the probability \(P^{h} (F,G,S_{*})\) of the pilot’s non-failure, this function can be sought in the form:

$$ P^{h} (F,G,S_{*} ) = P_{0} \exp \left[ { - \left( {\gamma_{S} S_{*} t + \frac{{G^{2} }}{{G_{0}^{2} }}} \right)\exp \left( { - \frac{{F^{2} }}{{F_{0}^{2} }}} \right)} \right]. $$
(3)

Here \(P_{0}\) is the probability of human non-failure at the initial moment of time and/or in the case of a very low MWL level \(G,\) but could also be defined as the level for the situation when the HCF \(F\) is extraordinarily high, while the MWL \(G\) is still finite, and so is the time \(t;\) \(S_{*}\) is the threshold (acceptable level) of the continuously monitored/measured human health characteristic (symptom), such as, e.g., body temperature or arterial blood pressure; \(\gamma_{S}\) is the sensitivity factor for the governing symptom \(S_{*};\) \(G \ge G_{0}\) is the actual (elevated, off-normal) MWL; \(G_{0}\) is the MWL in normal operating conditions; \(F \ge F_{0}\) is the actual (off-normal) HCF exhibited or required in the extraordinary condition of importance; and \(F_{0}\) is the most likely (normal, specified) HCF. While measuring the MWL has been, for many years, a key method of improving safety, the HCF is a relatively new notion (see, e.g., [2, 3, 10, 11]) that plays with respect to the MWL more or less the same role as strength or capacity play with respect to stress or demand in structural analysis and in some economics problems. The function (3) makes physical sense. Indeed, when the time \(t\), and/or the level \(S_{*}\) of the governing symptom, and/or the level of the MWL \(G\) is significant, the probability of non-failure is always low, no matter how high the level of the HCF \(F\) is; when the level of the HCF is high, and the time \(t\), the level \(S_{*}\) of the governing symptom and the level of the MWL \(G\) are finite, the probability \(P^{h} (F,G,S_{*} )\) becomes close to the probability \(P_{0}\); and when the HCF \(F\) is at the ordinary level \(F_{0},\) the formula (3) yields:

$$ P^{h} (F,G,S_{*} ) = P^{h} (G,S_{*} ) = P_{0} \exp \left[ { - \left( {\gamma_{S} S_{*} t + \frac{{G^{2} }}{{G_{0}^{2} }}} \right)} \right], $$
(4)

and for a long time in operation (\(t \to \infty )\) and/or when the level \(S_{*}\) of the governing symptom is significant \((S_{*} \to \infty )\) and/or when the level \(G\) of the MWL is high, the probability (4) of non-failure will always be low; at the initial moment of time (\(t = 0)\) and/or in the case of a very low \(S_{*}\) level (\(S_{*} = 0),\) the Eq. (4) yields: \(P^{h} (F,G,S_{*} ) = P^{h} (G) = P_{0} \exp \left( { - \frac{{G^{2} }}{{G_{0}^{2} }}} \right);\) when the MWL \(G\) is high, this probability is low. In the function (3) there are two unknowns: the probability \(P_{0}\) and the sensitivity factor \(\gamma_{S}.\) The probability \(P_{0}\) could be determined by testing a group of highly qualified individuals. Let us show how the sensitivity factor \(\gamma_{S}\) can be determined. With \(\overline{P} = P^{h} /P_{0},\) the Eq. (3) can be written as \(\frac{{ - \ln \overline{P}}}{{\gamma_{S} S_{*} t + \frac{{G^{2} }}{{G_{0}^{2} }}}} = \exp \left( { - \frac{{F^{2} }}{{F_{0}^{2} }}} \right).\) Let accelerated testing be conducted on a flight simulator for the same group of individuals with the same high HCF \(F/F_{0}\) level (Captain Sullenberger [12] is a good example), but at two different elevated (off-normal) MWL conditions, \(G_{1}\) and \(G_{2}.\) Let the governing symptom reach its critical level \(S_{*}\) at the times \(t_{1}\) and \(t_{2}\) from the beginning of testing, respectively, and let the percentages of the individuals that failed the tests be \(Q_{1}\) and \(Q_{2},\) so that the corresponding probabilities of non-failure were \(\overline{P}_{1}\) and \(\overline{P}_{2},\) respectively. Since the same group of individuals was tested, the right-hand part of the above relationship should remain unchanged, and because of that the condition \(\frac{{ - \ln \overline{P}_{1} }}{{\gamma_{S} S_{*} t_{1} + \frac{{G_{1}^{2} }}{{G_{0}^{2} }}}} = \frac{{ - \ln \overline{P}_{2} }}{{\gamma_{S} S_{*} t_{2} + \frac{{G_{2}^{2} }}{{G_{0}^{2} }}}}\) should be fulfilled. This condition yields: \(\gamma_{S} = \frac{1}{{S_{*} }}\frac{{\frac{{G_{1}^{2} }}{{G_{0}^{2} }} - \frac{{\ln \overline{P}_{1} }}{{\ln \overline{P}_{2} }}\frac{{G_{2}^{2} }}{{G_{0}^{2} }}}}{{\frac{{\ln \overline{P}_{1} }}{{\ln \overline{P}_{2} }}t_{2} - t_{1} }}.\) After the sensitivity factor \(\gamma_{S}\) is determined, the probability \(P^{h} (F,G,S_{*} )\) of human non-failure can be evaluated on the basis of the formula (3).
Let accelerated testing on a flight simulator be conducted twice for a group of individuals with high HCF \(\frac{F}{{F_{0} }}\) levels at the loading conditions \(\frac{{G_{1} }}{{G_{0} }} = 1.5\) and \(\frac{{G_{2} }}{{G_{0} }} = 2.5.\) Let the tests have indicated that the symptom \(S\) of the critical magnitude of, say, \(S_{*} = 180\) was detected in 70% of the individuals (\(\overline{P}_{1} = 0.3)\) after \(t_{1} = 2\,\,\rm{h}\) of testing under the loading condition \(\frac{{G_{1} }}{{G_{0} }} = 1.5,\) and in 90% of the individuals \((\overline{P}_{2} = 0.1)\) after \(t_{2} = 1\,\,\rm{h}\) of testing under the loading condition \(\frac{{G_{2} }}{{G_{0} }} = 2.5.\) Then the sensitivity factor \(\gamma_{S}\) is as follows: \(\gamma_{S} = \frac{1}{{S_{*} }}\frac{{\frac{{G_{1}^{2} }}{{G_{0}^{2} }} - \frac{{\ln \overline{P}_{1} }}{{\ln \overline{P}_{2} }}\frac{{G_{2}^{2} }}{{G_{0}^{2} }}}}{{\frac{{\ln \overline{P}_{1} }}{{\ln \overline{P}_{2} }}t_{2} - t_{1} }} = \frac{1}{180}\frac{{2.25 - \frac{ - 1.2040}{{ - 2.3026}}6.25}}{{\frac{ - 1.2040}{{ - 2.3026}} - 2}} = 3.8288 \times 10^{ - 3} \,{\text{h}}^{ - 1},\) and the Eq. (3) results in the following probability of human non-failure:

$$\overline{P} = \frac{{P^{h} (F,G,S_{*} )}}{{P_{0} }} = \exp \left[ { - \left( {\gamma_{S} S_{*} t + \frac{{G^{2} }}{{G_{0}^{2} }}} \right)\exp \left( { - \frac{{F^{2} }}{{F_{0}^{2} }}} \right)} \right] = \exp \left[ { - \left( {0.68918t + \frac{{G^{2} }}{{G_{0}^{2} }}} \right)\exp \left( { - \frac{{F^{2} }}{{F_{0}^{2} }}} \right)} \right]$$

For a pilot of ordinary skills \(\left( {\frac{F}{{F_{0} }} = 1} \right)\) (normal HCF) and for a normal MWL \(\left( {\frac{G}{{G_{0} }} = 1} \right)\) this formula yields: \(\overline{P} = \exp \left[ { - 0.3679\left( {0.68918t + 1} \right)} \right].\) In 10 h this probability will be only 5.48%. However, for an exceptionally highly qualified individual, like Captain Sullenberger, whose estimated HCF level is as high as \(\frac{F}{{F_{0} }} = 3.14\) [12], the probability of non-failure over the same 10 h is considerably higher: \(\overline{P} = \exp \left[ { - \left( {0.68918t + 1} \right)\exp ( - 9.8596)} \right] = 0.9996.\) For an individual with an HCF of, say, \(\frac{F}{{F_{0} }} = 2.0,\) this probability is appreciably, by about 13.4%, lower: \(\overline{P} = \exp \left[ { - \left( {0.68918t + 1} \right)\exp ( - 4.0)} \right] = 0.8654.\) These results indicate, particularly, the importance of the HCF in the addressed HITL problem.
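The above arithmetic can be reproduced in a few lines of Python; here is a minimal sketch of the two-step procedure (the function names are illustrative, and only the values given in this example are used):

```python
from math import exp, log

def gamma_s(S_star, G1, G2, t1, t2, P1, P2, G0=1.0):
    """Sensitivity factor gamma_S from two accelerated-test outcomes."""
    r = log(P1) / log(P2)                       # ln(P1-bar) / ln(P2-bar)
    return ((G1 / G0) ** 2 - r * (G2 / G0) ** 2) / (S_star * (r * t2 - t1))

def p_nonfailure(t, F_ratio, G_ratio, gs, S_star):
    """Relative probability of human non-failure, Eq. (3) with P0 factored out."""
    return exp(-(gs * S_star * t + G_ratio ** 2) * exp(-F_ratio ** 2))

gs = gamma_s(S_star=180.0, G1=1.5, G2=2.5, t1=2.0, t2=1.0, P1=0.3, P2=0.1)
print(gs)                                        # ~3.8288e-3 1/h
print(p_nonfailure(10.0, 1.00, 1.0, gs, 180.0))  # ~0.0548 (ordinary pilot)
print(p_nonfailure(10.0, 3.14, 1.0, gs, 180.0))  # ~0.9996 (Sullenberger-level HCF)
print(p_nonfailure(10.0, 2.00, 1.0, gs, 180.0))  # ~0.8654
```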

The probabilistic segmentation model [11, 15] was used to quantify an HSI-related situation when a vehicular mission of interest consists of a number of consecutive segments/phases characterized by different probabilities of occurrence of a particular harsh environment and/or other extraordinary conditions during the particular segment of the mission, and/or by different durations of these segments/phases, and/or by different failure rates of the equipment and instrumentation and/or the navigator(s). According to the probabilistic segmentation model, the probability of the mission non-failure can be calculated as the sum of the products of the likelihood \(q_{i}\) of the occurrence of a harsh environment of the given severity at each segment of the route, the probability \(P_{i}^{e} (t_{i} )\) of non-failure of the equipment and the probability \(P_{i}^{h} (t_{i} )\) of non-failure of the navigator(s). The probability of the mission failure can be determined as \(Q = \mathop{\sum}\limits_{i = 1}^{n} {q_{i} } Q_{i} (t_{i} ) = 1 - \mathop{\sum}\limits_{i = 1}^{n} {q_{i} } P_{i}^{e} (t_{i} )P_{i}^{h} (t_{i} ),\) where \(Q_{i} (t_{i} ) = 1 - P_{i}^{e} (t_{i} )P_{i}^{h} (t_{i} )\) is the probability of failure at the \(i\)-th segment and the likelihoods \(q_{i}\) sum to one. If at a certain segment of the fulfilment of the mission the human performance is not critical, then the corresponding probability \(P_{i}^{h} (t_{i} )\) of human non-failure should be put equal to one. On the other hand, if there is confidence that the equipment (instrumentation) failure is not critical, or if there is reason to believe that the probability of the equipment non-failure is considerably higher than the probability of the human non-failure, then it is the probability \(P_{i}^{e} (t_{i} )\) that should be put equal to one. Finally, if one is confident that a certain level of the harsh environment will be encountered during the fulfilment of the mission at the \(i\)-th segment of the route, then the corresponding probability \(q_{i}\) of encountering such an environment should be put equal to one. Let, for instance, the duration of a particular vehicular mission be 24 h, and let the vehicle spend equal times at each of the six segments (so that \(t_{i} = 4\) h at the end of each segment); let the failure rates of the equipment and the human performance be independent of the environmental conditions and equal to \(\lambda = 8 \times 10^{ - 4}\,{\text{h}}^{-1};\) let the shape parameter in the Weibull distribution in both cases be \(\beta = 2\) (Rayleigh distribution), the HCF ratio \(\frac{{F^{2} }}{{F_{0}^{2} }} = 8\) \(\left( {\frac{F}{{F_{0} }} = 2.828} \right),\) and the probability of human non-failure at ordinary flight conditions \(P_{0} = 0.9900;\) and let the MWL ratios \(G_{i} /G_{0}\) be given vs. the probability \(q_{i}\) of occurrence of the environmental conditions in Table 3.

Table 3. Calculated probabilities of mission failure

The computations of the probabilities of interest yield:

$$ \begin{aligned} P_{i}^{e} = & {\text{exp}}\left[ { - \left( {\lambda t_{i} } \right)^{2} } \right] = {\text{exp}}\left[ { - \left( {8 \times 10^{{ - 4}} \times 4} \right)^{2} } \right] = 0.99999, \\ P_{i}^{h} = & P_{0} \bar{P}_{i} \;{\text{exp}}\left[ { - \left( {\lambda t_{i} } \right)^{2} } \right] = 0.9900 \times 0.99999\,\bar{P}_{i} = 0.99\bar{P}_{i} . \\ \end{aligned} $$

The probability of the mission’s non-failure is \(\mathop{\sum}\limits_{i = 1}^{n} {q_{i} } P_{i}^{e} (t_{i} )P_{i}^{h} (t_{i} ) = 0.9900,\) so that the probability of mission failure is \(Q = 1 - \mathop{\sum}\limits_{i = 1}^{n} {q_{i} } P_{i}^{e} (t_{i} )P_{i}^{h} (t_{i} ) = 1 - 0.990 = 0.01 = 1\% .\)
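A compact sketch of this bookkeeping is given below. Since the Table 3 inputs are not reproduced here, the \(q_{i}\) and \(G_{i} /G_{0}\) values are hypothetical placeholders, and \(\bar{P}_{i}\) is assumed to follow the MWL-only form of Eq. (3):

```python
from math import exp

# Given in the text:
lam, beta = 8e-4, 2          # failure rate (1/h) and Weibull shape (Rayleigh)
P0, F2 = 0.9900, 8.0         # base human non-failure probability and (F/F0)^2
t_i = 4.0                    # hours spent on each of the six segments

# Hypothetical stand-ins for the Table 3 inputs (the q_i must sum to one):
q = [0.05, 0.10, 0.15, 0.20, 0.25, 0.25]   # environment-occurrence likelihoods
g = [1.0, 1.2, 1.5, 1.8, 2.0, 2.5]         # MWL ratios G_i / G0

w = exp(-(lam * t_i) ** beta)              # common Weibull non-failure factor, ~0.99999
Q = 1.0 - sum(
    qi * w                                 # environment likelihood x equipment non-failure
    * P0 * w * exp(-gi ** 2 * exp(-F2))    # human non-failure: P0 x Weibull x MWL term
    for qi, gi in zip(q, g)
)
print(f"mission failure probability Q = {Q:.4f}")   # ~0.01, i.e. about 1%
```

With the high HCF assumed in the example, the MWL term is close to one on every segment, so the mission failure probability is dominated by the base probability \(P_{0}\), in agreement with the ~1% figure above.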

3 Conclusion

A successful and safe outcome of an HSI-related effort cannot be assured, or even made more likely, if this outcome is not quantified. Since nobody and nothing is perfect, and the probability of failure is never zero, such quantification should be done on a probabilistic basis, and the established never-zero probability of failure should be made adequate for the particular system, individual(s) and application. Analytical (“mathematical”) predictive modelling should always be considered, in addition to computer simulations, in every critical HSI effort. These two types of models are based, as a rule, on different assumptions and use different calculation techniques, and if the predictions based on them are in agreement, then there is good reason to believe that the obtained data are both accurate and trustworthy.