
1 Introduction

Multi-physics or multi-disciplinary systems analysis and optimization is an extensive area of research, and numerous studies have dealt with the various aspects of coupled multi-disciplinary analysis (MDA) in several engineering disciplines. Researchers have focused both on the development of computational methods (Alexandrov and Lewis 2000; Cramer et al. 1994) and on the application of these methods to several types of multi-physics interaction, for example, fluid–structure (Belytschko 1980), thermal–structural (Thornton 1996), fluid–thermal–structural (Culler et al. 2009), etc. Studies have considered these methods and applications either for MDA or for multi-disciplinary optimization (MDO). The coupling between individual disciplinary analyses may be one-directional (feed-forward) or bi-directional (feedback). Feed-forward coupling is straightforward to deal with, since the output of one model simply becomes an input to another (Sankararaman 2012). On the other hand, the topic of bi-directional (feedback) coupling is challenging to deal with, because the output of the first model becomes an input to the second while the output of the second model becomes an input to the first; therefore, it is necessary to iterate until convergence between these models in order to analyze the whole system. Such analysis needs to account for the different sources of uncertainty in order to accurately estimate the reliability and ensure the safety of the multi-physics system.

Computational methods for MDA can be classified into three different groups of approaches (Felippa et al. 2001). The first approach, known as the field elimination method (Felippa et al. 2001), eliminates one or more coupling variables (referred to as “fields” in the literature pertaining to fluid–structure interaction) using reduction/elimination techniques such as integral transforms and model reduction. This approach is restricted to linear problems that permit such elimination to be carried out efficiently. The second approach, known as the monolithic method (Felippa et al. 2001; Michler et al. 2004), solves the coupled analysis simultaneously using a single solver (e.g. Newton–Raphson). The third approach, known as the partitioned method, solves the individual analyses separately with different solvers. The well-known fixed point iteration (FPI) approach (repeated analysis until convergence of coupling variables) and the staggered solution approach (Felippa et al. 2001; Park et al. 1977) are examples of partitioned methods. While the field elimination and monolithic methods tightly couple the multi-disciplinary analyses together, the partitioned method does not.

Two major types of methods have been pursued for MDO—single level approaches and multi-level approaches. Single level approaches (Cramer et al. 1994) include the multi-disciplinary feasible (MDF) approach (also called fully integrated optimization or the all-in-one approach), the all-at-once (AAO) approach (also called simultaneous analysis and design (SAND)), and the individual disciplinary feasible (IDF) approach. Multi-level approaches for MDO include collaborative optimization (Braun 1996; Braun et al. 1997), concurrent subspace optimization (Sobieszczanski-Sobieski 1988; Wujek et al. 1997), bi-level integrated system synthesis (Sobieszczanski-Sobieski et al. 2003), analytical target cascading (Kokkolaras et al. 2006; Liu et al. 2006), etc.

An important factor in the analysis and design of multi-disciplinary systems is the presence of uncertainty in the system inputs. It is necessary to account for the various sources of uncertainty in both MDA and MDO problems. The MDA problem focuses on uncertainty propagation to calculate the uncertainty in the outputs. In the MDO problem, the objective function and/or constraints may become stochastic if the inputs are random. The focus of the present chapter is only on uncertainty propagation in MDA and not on optimization.

While most of the aforementioned methods for deterministic MDA can easily be extended to non-deterministic MDA using Monte Carlo sampling (MCS), this may be computationally expensive due to repeated evaluations of disciplinary analyses. Hence, researchers have focused on developing more efficient alternatives. Gu et al. (2000) proposed worst case uncertainty propagation using derivative-based sensitivities. Kokkolaras et al. (2006) used the advanced mean value method for uncertainty propagation and reliability analysis, and this was extended by Liu et al. (2006) by using moment-matching and considering the first two moments. Several studies have focused on uncertainty propagation in the context of reliability analysis. Du and Chen (2005) included the disciplinary constraints in the most probable point (MPP) estimation for reliability analysis. Mahadevan and Smith (2006) developed a multi-constraint first-order reliability method (FORM) for MPP estimation. While all the aforementioned techniques are probabilistic, non-probabilistic techniques based on fuzzy methods (Zhang and Huang 2010), evidence theory (Agarwal et al. 2004), interval analysis (Li and Azarm 2008), etc. have also been studied for MDA under uncertainty.

Similar to MDA, methods for MDO under uncertainty have also been investigated by several researchers. Kokkolaras et al. (2006) extended the analytical target cascading approach to include uncertainty. A sequential optimization and reliability analysis (SORA) framework was developed by Du et al. (2008) by decoupling the optimization and reliability analyses. Chiralaksanakul and Mahadevan (2007) integrated solution methods for reliability-based design optimization with solution methods for deterministic MDO problems to address MDO under uncertainty. Smith (2007) combined the techniques in Mahadevan and Smith (2006) and Chiralaksanakul and Mahadevan (2007) for the design of aerospace structures. As mentioned earlier, the focus of this chapter is only on MDA under uncertainty, and therefore, aspects of MDO will not be discussed hereafter.

Review of the above studies reveals that the existing methods for MDA under uncertainty are either computationally expensive or based on several approximations. Computational expense is incurred in the following ways:

  1. Using deterministic MDA methods with MCS (Haldar and Mahadevan 2000) requires several thousands of evaluations of the individual disciplinary analyses.

  2. Non-probabilistic techniques (Agarwal et al. 2004; Li and Azarm 2008; Zhang and Huang 2010) use interval-analysis-based approaches and also require substantial computational effort. Further, they are difficult to interpret in the context of reliability analysis; this is an important consideration for MDO, which may involve reliability constraints.

Approximations are introduced in the following manner:

  1. Probability distributions are approximated with the first two moments (Du and Chen 2005; Kokkolaras et al. 2006; Liu et al. 2006; Mahadevan and Smith 2006).

  2. Individual disciplinary analyses may be approximated using derivative-based sensitivities (Gu et al. 2000) or linearization at the MPP for reliability calculation (Du and Chen 2005; Mahadevan and Smith 2006).

Some of these problems can be overcome by the use of a decoupled approach that has been advocated by Du and Chen (2005) and Mahadevan and Smith (2006). In this decoupled approach, Taylor’s series approximation and the first-order second moment (FOSM) method have been proposed to calculate the probability density function (PDF) of the coupling variables.

Fig. 1 A multi-disciplinary system

For example, consider the multi-disciplinary system shown in Fig. 1. Here \(\boldsymbol{x} =\{ x_{1},x_{2},x_{s}\}\) are the inputs, and \(u(x) =\{ u_{12},u_{21}\}\) are the coupling variables. Note that this is not only a multi-disciplinary system, but also a multi-level system, where the outputs of the coupled analysis (g 1 and g 2) are used to compute a higher level system output (f).

Once the PDFs of the coupling variables u 12 and u 21 are estimated using the decoupled approach, the coupling between “Analysis 1” and “Analysis 2” is removed. In other words, the variable u 21 becomes an input to “Analysis 1” and the variable u 12 becomes an input to “Analysis 2,” and the dependence between the quantities u 12, u 21, and \(\boldsymbol{x}\) is not considered any further. This “fully decoupled” approach reduces the computational effort considerably by avoiding repeated evaluations of the fully coupled system; however, it is still based on approximations and, more importantly, is suitable only when the aim is to estimate the statistics of g 1 or g 2.

In the case of a multi-level system, where the multi-disciplinary outputs (g 1 and g 2 in this case) could be inputs to another model (Analysis 3 in Fig. 1), the fully decoupled approach will not be applicable, for the following reason. In Fig. 1, for a given x, there is a unique g 1 and a unique g 2; in addition, for a given u 12, there is a unique u 21, and hence for a given g 1, there is a unique g 2. This functional dependence between u 12 and u 21, and hence between g 1 and g 2, cannot be ignored when estimating the probability distribution of f. In the fully decoupled approach, the functional dependence between u 12 and u 21 is not preserved in subsequent analysis; once the PDFs of u 12 and u 21 are estimated, independent samples of u 12 and u 21 are used to generate samples of g 1 (using only Analysis 1) and g 2 (using only Analysis 2), which in turn are used to compute the statistics of f. This leads to an erroneous estimate of f, since the g 1 and g 2 values are not related to each other as they are in the original system. This “subsequent analysis” need not necessarily refer to a higher level output; it could even refer to an optimization objective computed based on the values of g 1 and g 2 (or even u 12 and u 21). Thus, if the objective is only to obtain the statistics of g 1 and g 2, as considered in Du and Chen (2005) and Mahadevan and Smith (2006), then the fully decoupled approach is adequate. But if g 1 and g 2 are to be used in further analysis, then the one-to-one correspondence between u 12 and u 21 (and hence between g 1 and g 2) cannot be maintained in the fully decoupled approach, and one would have to revert to expensive MCS outside a deterministic MDA procedure to compute the statistics of the output f. Thus, it becomes essential to look for alternatives to the fully decoupled approach, especially as the complexity of the system increases.

In order to address the above challenges, Sankararaman and Mahadevan (2012) proposed a new likelihood-based approach for uncertainty propagation analysis in multi-level, multi-disciplinary systems. In this method, the probability of satisfying the inter-disciplinary compatibility is calculated using the principle of likelihood, which is then used to quantify the PDF of the coupling variables. This approach for MDA offers several advantages:

  1. This method for the calculation of the PDF of the coupling variable is theoretically exact; the uncertainty in the inputs is accurately propagated through the disciplinary analyses in order to calculate the PDF of the coupling variable. No approximations of the individual disciplinary analyses or of the moments of the coupling variable are necessary.

  2. This approach requires no coupled system analysis, i.e. repeated iteration between individual disciplinary analyses until convergence (FPI), thereby reducing the computational cost.

  3. For multi-level systems, the difficulty in propagating the uncertainty in the feedback variables to the system output is overcome by replacing the feedback coupling with unidirectional coupling, thereby preserving the functional dependence between the individual disciplinary models. The direction of coupling can be chosen either way, without loss of generality. This semi-coupled approach is also useful in an optimization problem where the objective function is a function of the disciplinary outputs.

The goal of this chapter is to explain this likelihood-based methodology for uncertainty quantification in multi-physics systems in detail and illustrate its application through numerical examples. The rest of this chapter is organized as follows. Section 2 discusses a “sampling with optimization-based deterministic MDA” (SOMDA) approach, which is an example of using the partitioned method along with MCS. Certain ideas explained in this section are used to motivate the likelihood-based method. Then, the likelihood-based approach for multi-disciplinary analysis (LAMDA) is explained in Sect. 3 and its numerical implementation is discussed in Sect. 4. Section 5 illustrates the LAMDA methodology using a mathematical example and Sect. 6 uses the LAMDA methodology for a three-discipline analysis of a fire detection satellite (Zaman 2010).

2 Sampling with Optimization-Based Deterministic MDA

Consider the multi-disciplinary system shown earlier in Fig. 1. The overall goal is to estimate the probability distribution of the outputs g 1, g 2, and f, given the probability distributions of the inputs \(\boldsymbol{x}\). As explained in Sect. 1, an intermediate step is to calculate the PDFs of the coupling variables u 12 and u 21 and then use these PDFs for uncertainty propagation.

First consider the deterministic problem of estimating the converged u 12 and u 21 values corresponding to given values of \(\boldsymbol{x}\). The conventional FPI approach starts with an arbitrary value of u 12 as input to “Analysis 2” and the resultant value of u 21 serves as input to “Analysis 1.” If the next output from “Analysis 1” is the same as the original u 12, then the analysis is said to have reached convergence and the inter-disciplinary compatibility is satisfied. However, if it is not, the conventional FPI approach treats the output of “Analysis 1” as input to “Analysis 2” and the procedure is repeated until convergence.
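
To make the FPI procedure concrete, the following sketch iterates between two disciplinary analyses until the coupling variable converges. Here `analysis1` and `analysis2` are hypothetical stand-ins chosen only for illustration, not the models used later in this chapter.

```python
import numpy as np

def analysis1(u21, x):
    # u12 = A1(u21, x); a hypothetical disciplinary model for illustration
    return x[0] + 0.5 * np.sqrt(abs(u21))

def analysis2(u12, x):
    # u21 = A2(u12, x); also hypothetical
    return x[1] + 0.3 * u12

def fpi(x, u12=0.0, tol=1e-10, max_iter=100):
    """Fixed point iteration: alternate A2 and A1 until u12 converges."""
    for _ in range(max_iter):
        u21 = analysis2(u12, x)
        u12_new = analysis1(u21, x)
        if abs(u12_new - u12) < tol:  # inter-disciplinary compatibility satisfied
            return u12_new, u21
        u12 = u12_new
    raise RuntimeError("FPI did not converge")

u12_star, u21_star = fpi(np.array([1.0, 1.0]))
```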

This search for the convergent values of u 12 and u 21 can be performed in an intelligent manner by formulating it as an optimization problem. For this purpose, define a new function G whose input is the coupling variable u 12, in addition to \(\boldsymbol{x}\). The output of “G” is denoted by U 12, which is obtained by propagating the input through “Analysis 2” followed by “Analysis 1,” as shown in Fig. 2.

Fig. 2 Definition of G

The multi-disciplinary constraint is said to be satisfied if and only if \(u_{12} = U_{12}\). For a given \(\boldsymbol{x}\), the convergent value of the coupling variable u 12 can be obtained by minimizing the squared error \(E = (u_{12} - G(u_{12},\boldsymbol{x}))^{2}\) for a given set of inputs \(\boldsymbol{x}\), where G is given by:

$$\displaystyle{ U_{12} = G(u_{12},\boldsymbol{x}) = A_{1}(u_{21},\boldsymbol{x})\ \text{where}\ u_{21} = A_{2}(u_{12},\boldsymbol{x}) }$$
(1)

Note that this is an unconstrained optimization problem. If the multi-disciplinary compatibility is satisfied, then \(u_{12} = U_{12}\), and the optimum value of E will be equal to zero. In the rest of the chapter, it is assumed that it is possible to satisfy inter-disciplinary compatibility for each realization of the input \(\boldsymbol{x}\); in other words, the MDA has a feasible solution for each input realization. Once the converged value of u 12 is estimated, then the bi-directional coupling can be removed and replaced with a uni-directional coupling from “Analysis 2” to “Analysis 1” as shown in Fig. 3.
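
A minimal sketch of this optimization-based deterministic MDA is shown below, reusing the hypothetical `analysis1` and `analysis2` from the FPI sketch above; a standard scalar minimizer drives the squared compatibility error E of Eq. (1) to zero.

```python
from scipy.optimize import minimize_scalar

def G(u12, x):
    # U12 = G(u12, x) = A1(A2(u12, x), x), per Eq. (1)
    u21 = analysis2(u12, x)
    return analysis1(u21, x)

def converged_u12(x):
    # Minimize E = (u12 - G(u12, x))^2; E = 0 iff compatibility is satisfied
    res = minimize_scalar(lambda u12: (u12 - G(u12, x)) ** 2)
    assert res.fun < 1e-12, "MDA has no feasible solution for this x"
    return res.x
```

Sampling \(\boldsymbol{x}\) and repeating this optimization for each realization is exactly the strategy referred to as SOMDA later in this section.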

Fig. 3 A multi-disciplinary system: unidirectional coupling

If there are multiple coupling variables in one direction, i.e. if u 12 is a vector instead of a scalar, then E is also a vector, i.e. \(E = [E_{1},E_{2},E_{3},\ \ldots \ E_{n}]\). If the MDA has a solution, then the optimal value of the vector u 12 will lead to E i  = 0 for all i. Since each E i  ≥ 0 by definition, the optimal value of u 12 can be estimated by minimizing the sum of all the E i (instead of minimizing each E i separately), and the minimum value of this sum will also be equal to zero.

This is a minor modification of the FPI approach; here the convergent value of the coupling variable is calculated by an optimization that may choose its iterates more judiciously than FPI does. In terms of uncertainty propagation, however, the computational cost is still very high: the inputs need to be sampled, the optimization needs to be repeated for each realization, and the entire distribution of the coupling variable needs to be constructed from many such samples.

Hereafter, this approach is referred to as SOMDA. Since this approach is still computationally expensive, a likelihood-based approach for MDA (LAMDA) was developed by Sankararaman and Mahadevan (2012). The LAMDA approach does not require sampling and provides an efficient and theoretically accurate method for uncertainty propagation in MDA.

3 Likelihood Approach for Multi-Disciplinary Analysis

The optimization discussed in the previous section is similar to a least-squares optimization, the difference being that a typical least-squares problem is posed as a summation over multiple observed data, which is not the case here. The quantity to be estimated is the convergent value of u 12 for a given set of inputs \(\boldsymbol{x}\). When the inputs are random, the coupling variable u 12 is also random and its probability distribution needs to be calculated. This can be viewed as analogous to a statistical parameter estimation problem.

Consider a typical parameter estimation problem where a generic parameter θ needs to be estimated based on some available data. According to Fisher (1912), one can “solve the real problem directly” by computing the “probability of observing the given data” conditioned on the parameter θ (Aldrich 1997; Fisher 1912). This quantity is referred to as the likelihood function of θ (Edwards 1984; Pawitan 2001). Singpurwalla (2006, 2007) explains that the likelihood function can be viewed as a collection of weights or masses and is meaningful up to a proportionality constant (Edwards 1984). In other words, if L(θ (1)) = 10 and L(θ (2)) = 100, then it is ten times more likely for θ (2) than for θ (1) to correspond to the observed data. While this likelihood function is commonly maximized to obtain the maximum likelihood estimate (MLE) of the parameter θ, the entire likelihood function can also be used to obtain the entire PDF of θ.

Now, consider the problem of estimating the PDF of the coupling variable u 12 in MDA. This is purely an uncertainty propagation problem and there is no “data” to calculate the likelihood function of u 12 which is defined as the “probability of observing the data.” Hence, the definition of the likelihood function cannot be used directly.

However, the focus of the MDA problem is to satisfy the inter-disciplinary compatibility condition. Consider “the probability of satisfying the inter-disciplinary compatibility” conditioned on u 12, which can be written as \(P(U_{12} = u_{12}\vert u_{12})\). This definition is similar to the original definition of the likelihood function. It is a weight that is associated with a particular value of u 12 to satisfy the multi-disciplinary constraint. In other words, if the ratio of \(P(U_{12} = u_{12}^{(1)}\vert u_{12}^{(1)})\) to \(P(U_{12} = u_{12}^{(2)}\vert u_{12}^{(2)})\) is equal to 0.1, then it is ten times more likely for \(u_{12}^{(2)}\) than for \(u_{12}^{(1)}\) to satisfy the inter-disciplinary compatibility condition. Thus, the properties of this expression are similar to the properties of the original likelihood function. Hence, this expression is defined to be the likelihood of u 12 in this chapter, as shown in Eq. (2). Since the likelihood function is meaningful only up to a proportionality constant, Eq. (2) also uses only a proportionality sign.

$$\displaystyle{ \ L(u_{12}) \propto P(U_{12} = u_{12}\vert u_{12}) }$$
(2)

Note that this definition is in terms of probability and hence the tool of likelihood gives a systematic procedure for including the uncertainty in the inputs during the construction of likelihood and estimating the probability distribution of the coupling variables, as explained below.

Note that there is a convergent value of u 12 for every realization of \(\boldsymbol{x}\). If \(\boldsymbol{x}\) is represented using a probability distribution, then one sample of \(\boldsymbol{x}\) has a relative likelihood of occurrence with respect to another sample of \(\boldsymbol{x}\). Correspondingly, a given sample of u 12 has a relative likelihood of being a convergent solution with respect to another sample of u 12, and hence u 12 can be represented using a probability distribution. It is this likelihood function and the corresponding probability distribution that will be calculated using the LAMDA method.

For a given value of u 12, consider the operation \(U_{12} = G(u_{12},\boldsymbol{x})\) defined earlier in Eq. (1). When \(\boldsymbol{x}\) is random, an uncertainty propagation method can be used to calculate the distribution of U 12. Let the PDF of U 12 be denoted by \(f_{U_{12}}(U_{12}\vert u_{12})\).

The aim is to calculate the likelihood of u 12, i.e. L(u 12) as the probability of satisfying the multi-disciplinary constraint, i.e. \(U_{12} = u_{12}\). Since \(f_{U_{12}}(U_{12}\vert u_{12})\) is a continuous PDF, the probability that U 12 is equal to any particular value, u 12 in this case, is equal to zero. Pawitan (2001) explained that this problem can be overcome by considering an infinitesimally small window [\(u_{12} - \frac{\epsilon } {2}\), \(u_{12} + \frac{\epsilon } {2}\)] around u 12 by acknowledging that there is only limited precision in the real world.

$$\displaystyle{ L(u_{12}) \propto P(U_{12} = u_{12}\vert u_{12}) =\int _{ u_{12}-\frac{\epsilon }{ 2} }^{u_{12}+ \frac{\epsilon }{2} }f_{U_{12}}(U_{12}\vert u_{12})dU_{12} \propto f_{U_{12}}(U_{12} = u_{12}\vert u_{12}) }$$
(3)

Note that this equation is very similar to the common practice of estimating the parameters of a probability distribution given observed data for the random variable. In other words, if X is a random variable whose PDF is given by \(f_{X}(x\vert \boldsymbol{P})\), where \(\boldsymbol{P}\) refers to the parameters to be estimated, and if the available data are denoted by x i (i = 1 to n), then the likelihood of the parameters can be calculated as \(L(\boldsymbol{P}) \propto \prod _{i=1}^{n}f_{X}(x_{i}\vert \boldsymbol{P})\). The maximizer of this expression is referred to as the MLE of \(\boldsymbol{P}\). Details can be found in statistics textbooks (Haldar and Mahadevan 2000; Pawitan 2001).

Note that the likelihood function L(u 12) is conditioned on u 12 and hence the PDF of U 12 is always conditioned on u 12. Once the likelihood function of u 12, i.e. the probability of satisfying the multi-disciplinary compatibility for a given value of u 12, is calculated, the PDF of the converged value of the coupling variable u 12 can be calculated as:

$$\displaystyle{ f(u_{12}) = \frac{L(u_{12})} {\int L(u_{12})du_{12}} }$$
(4)

In the above equation, the domain of integration for the variable u 12 is such that L(u 12) ≠ 0. Note that Eq. (4) is a form of Bayes theorem with a non-informative uniform prior density for u 12. Once the PDF of u 12 is calculated, the MDA with uni-directional coupling in Fig. 3 can be used in lieu of the MDA with bi-directional coupling in Fig. 1. The system output f can then be calculated using well-known methods of uncertainty propagation such as MCS, FORM, and second-order reliability method (SORM).
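
Before turning to the efficient implementation of Sect. 4, the construction in Eqs. (2)–(4) can be prototyped by brute force: evaluate the likelihood on a grid of u 12 values using sampling and a kernel density estimate in place of FORM, and then normalize. This sketch reuses the hypothetical `G` from the sketch in Sect. 2 and an assumed input distribution; it is illustrative only.

```python
import numpy as np
from scipy.stats import gaussian_kde

def lamda_pdf(grid, x_samples):
    """Brute-force LAMDA: L(u12) = f_U12(U12 = u12 | u12), normalized per Eq. (4)."""
    L = np.empty_like(grid)
    for i, u12 in enumerate(grid):
        U12 = np.array([G(u12, x) for x in x_samples])  # propagate x through G
        L[i] = gaussian_kde(U12)(u12)[0]                # density of U12 at u12
    dx = grid[1] - grid[0]
    return L / (L.sum() * dx)                           # normalize so the PDF integrates to 1

rng = np.random.default_rng(0)
x_samples = rng.normal(1.0, 0.1, size=(500, 2))  # assumed input PDFs, for illustration
grid = np.linspace(1.0, 3.0, 25)                 # integration limits, e.g. six-sigma bounds
pdf_u12 = lamda_pdf(grid, x_samples)
```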

During the aforementioned uncertainty propagation, the converged u 12 and \(\boldsymbol{x}\) are considered as independent inputs in order to compute the uncertainty in u 21, g 1, g 2, and f. However, for every given value of \(\boldsymbol{x}\), there is only one value of u 12; this is not a statistical dependence but a functional dependence. The functional dependence between the converged u 12 and \(\boldsymbol{x}\) is not known and not considered in the decoupled approach. If the functional dependence needs to be explicitly considered, one would have to revert to the computationally expensive FPI approach for every sample of \(\boldsymbol{x}\). (An alternative would be to choose a few samples of \(\boldsymbol{x}\), run FPI analysis on each of them and construct a surrogate/approximation of the functional dependence between \(\boldsymbol{x}\) and u 12, and explicitly use this surrogate in uncertainty propagation. Obviously, the surrogate could also be directly constructed for any of the responses—g 1, g 2, or f—instead of considering the coupling variable u 12. However, replacing the entire MDA by a surrogate model is a different approach and does not fall within the scope of the decoupled approach, which is the focus of this chapter.)

The above discussion calculated the PDF of u 12 and cut the coupling from “Analysis 1” to “Analysis 2.” Without loss of generality, the same approach can be used to calculate the PDF of u 21 and cut the coupling from “Analysis 2” to “Analysis 1.” This method has several advantages:

  1. This method is free from first-order or second-order approximations of the coupling variables.

  2. The equations of the individual disciplinary analyses are not approximated during the derivation of Eq. (3), and the calculation of the PDF of the coupling variables in Eq. (4) is exact from a theoretical perspective.

  3. The method does not require any coupled system analysis, i.e. repeated iteration between “Analysis 1” and “Analysis 2” until convergence.

Though the computation of the PDF of u 12 is theoretically exact, two issues need to be addressed in computational implementation. (1) The calculation of L(u 12) requires the estimation of \(f_{U_{12}}(U_{12}\vert u_{12})\) which needs to be calculated by propagating the inputs \(\boldsymbol{x}\) through G for a given value of u 12. (2) This likelihood function needs to be calculated for several values of u 12 to perform the integration in Eq. (4). These two steps, i.e. uncertainty propagation and integration, could make the methodology computationally expensive if a Monte Carlo-type approach is pursued for uncertainty propagation.

Therefore, the following section presents a methodology that makes the numerical implementation inexpensive for the above two steps. From here on, there are approximations made; note that these approximations are only for the purpose of numerical implementation and not a part of the mathematical theory. Here, “theory” refers to the derivation and use of Eqs. (3) and (4) for uncertainty quantification in MDA, and “implementation” refers to the numerical computation of \(f_{U_{12}}(U_{12} = u_{12}\vert u_{12})\) in Eq. (3).

4 Numerical Implementation

This section addresses the two issues mentioned above in the numerical implementation of the LAMDA method.

4.1 Evaluation of the Likelihood Function L(u 12)

The first task is to calculate the likelihood function L(u 12) for a given value of u 12. This requires the calculation of the PDF \(f_{U_{12}}(U_{12}\vert u_{12})\). However, it is not necessary to calculate the entire PDF; based on Eq. (3), the calculation of the likelihood L(u 12) only requires the value of the PDF at u 12, i.e. \(f_{U_{12}}(U_{12} = u_{12}\vert u_{12})\). Hence, instead of evaluating the entire PDF \(f_{U_{12}}(U_{12}\vert u_{12})\), only a local analysis at \(U_{12}\,=\,u_{12}\) needs to be performed. One method is to use FORM to evaluate this PDF value. This is the first approximation.

FORM estimates the probability that a performance function \(H = h(\boldsymbol{x})\) is less than or equal to zero, given uncertain input variables \(\boldsymbol{x}\). This probability is equal to the cumulative distribution function (CDF) of the variable H evaluated at zero (Haldar and Mahadevan 2000). In this approach, the so-called MPP is calculated by transforming the variables \(\boldsymbol{x}\) into uncorrelated standard normal space u and determining the point in the transformed space that is closest to the origin. An optimization problem can be formulated as shown in Fig. 4.

Fig. 4 Use of FORM to estimate the CDF value

The details of the transformation u = T(x) in Fig. 4 can be found in Haldar and Mahadevan (2000). This optimization can be solved using the well-known Rackwitz–Fiessler algorithm (Rackwitz and Fiessler 1978), which is based on a repeated linear approximation of the constraint H = 0. Once the shortest distance to the origin is estimated to be β, the CDF value is calculated in FORM as:

$$\displaystyle{ P(H \leq 0) = \Phi (-\beta ) }$$
(5)

FORM can also be used to calculate the CDF value at any generic value h c , i.e. \(P(h(\boldsymbol{x}) \leq h_{c})\), by executing the FORM analysis for the performance function \(H = h(\boldsymbol{x}) - h_{c}\). For the problem at hand, it is necessary to calculate the PDF value at u 12, not the CDF value. This can be accomplished by finite differencing, i.e. by performing two FORM analyses at \(h_{c} = u_{12}\) and \(h_{c} = u_{12}+\delta\), where δ is a small step that can be chosen, for example, as 0.001 × u 12. The resultant CDF values from the two FORM analyses are differenced and divided by δ to provide an approximate value of the PDF at u 12. This is the second approximation.

Hence, the evaluation of the likelihood function L(u 12) is based on two approximations: (1) the PDF value is calculated based on finite differencing two CDF values; and (2) each CDF value is in turn calculated using FORM which is a first-order approximation (Eq. (5)).
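
These two approximations can be written down compactly. In the sketch below (for the hypothetical `G` used earlier, with independent normal inputs specified by `mu` and `sigma`), each CDF value is obtained from a FORM-style shortest-distance problem solved with a generic constrained optimizer rather than the Rackwitz–Fiessler algorithm, and the PDF value follows by finite differencing; this is a minimal sketch under those assumptions, not the chapter's implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def form_cdf(hc, u12, mu, sigma):
    """P(G(u12, x) <= hc) by FORM; z is standard normal, x = mu + sigma*z."""
    cons = {"type": "eq", "fun": lambda z: G(u12, mu + sigma * z) - hc}
    res = minimize(lambda z: z @ z, np.zeros(len(mu)), constraints=[cons])
    beta = np.sqrt(res.fun)                   # shortest distance to the limit state
    sign = 1.0 if G(u12, mu) <= hc else -1.0  # which side of H = 0 the origin lies on
    return norm.cdf(sign * beta)              # Eq. (5) with the appropriate sign

def likelihood(u12, mu, sigma, delta=1e-3):
    """L(u12) ~ f_U12(U12 = u12 | u12) by differencing two FORM CDF values."""
    d = delta * max(abs(u12), 1.0)
    return (form_cdf(u12 + d, u12, mu, sigma) - form_cdf(u12, u12, mu, sigma)) / d
```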

4.2 Construction of PDF of u 12

Recall that Eq. (4) is used to calculate the PDF of u 12 based on the likelihood function L(u 12). In theory, for any chosen value of u 12, the corresponding likelihood L(u 12) can be evaluated, and hence the integral in Eq. (4) can be computed. For the purpose of numerical implementation, the limits of integration need to be chosen. First-order estimates of the mean and variance of u 12 can be obtained by calculating the converged value of u 12 at the mean of the uncertain inputs using FPI; the derivatives of the coupling variables with respect to the inputs can be calculated using Sobieski's system sensitivity equations (Hajela et al. 1990), as demonstrated later in Sect. 5.1. These first-order estimates can then be used to select the limits (for example, six-sigma limits) for integration.

For the purpose of implementation, the likelihood function is evaluated only at a few points; a recursive adaptive version of Simpson’s quadrature (McKeeman 1962) is used to evaluate this integral and the points at which the likelihood function needs to be evaluated are adaptively chosen until the quadrature algorithm converges.

This quadrature algorithm is usually applicable only to one-dimensional integrals, whereas in a typical multi-disciplinary problem u 12 may be a vector, with several coupling variables in each direction. Hence, the multi-dimensional integral is decomposed into multiple one-dimensional integrals so that the quadrature algorithm may be applied.

$$\displaystyle{ \int L(\alpha,\beta )d\alpha \text{ }d\beta =\int \Big (\int L(\alpha,\beta )d\alpha \Big)d\beta }$$
(6)

Each one-dimensional integral is evaluated using recursive adaptive Simpson’s quadrature algorithm (McKeeman 1962). Consider any general one-dimensional integral and its approximation using Simpson’s rule as:

$$\displaystyle{ \int _{a}^{b}f(x)dx \approx \frac{b - a} {6} \Big(f(a) + 4f\Big(\frac{a + b} {2} \Big) + f(b)\Big) = S(a,b) }$$
(7)

The adaptive recursive quadrature algorithm calls for subdividing the interval of integration (a, b) into two sub-intervals (a, c) and (c, b), where c is the midpoint of (a, b), and then Simpson's rule is applied to each sub-interval. The error in the estimate of the integral is calculated by comparing the integral values before and after splitting. The criterion for deciding when to stop dividing a particular interval depends on the tolerance level ε and may be chosen, for example, as (McKeeman 1962):

$$\displaystyle{ \vert S(a,c) + S(c,b) - S(a,b)\vert \leq 15\epsilon }$$
(8)

Once the integral is evaluated, the entire PDF is approximated by interpolating the points at which the likelihood has already been evaluated.
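
A direct transcription of this recursion, using Eq. (7) for the panel estimates and the stopping rule of Eq. (8), is sketched below; this is the standard textbook form of the algorithm, not necessarily the exact implementation behind the chapter's results.

```python
def simpson(f, a, b):
    """S(a, b) from Eq. (7); also returns the midpoint used for splitting."""
    c = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b)), c

def adaptive_simpson(f, a, b, eps=1e-8):
    """Recursive adaptive Simpson quadrature (McKeeman 1962)."""
    whole, c = simpson(f, a, b)
    left, _ = simpson(f, a, c)
    right, _ = simpson(f, c, b)
    if abs(left + right - whole) <= 15.0 * eps:  # stopping rule, Eq. (8)
        return left + right
    return (adaptive_simpson(f, a, c, eps / 2.0) +
            adaptive_simpson(f, c, b, eps / 2.0))

# e.g. the normalizing constant in Eq. (4):
# const = adaptive_simpson(lambda u12: likelihood(u12, mu, sigma), lo, hi)
```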

This technique ensures that the number of evaluations of the individual disciplinary analyses is minimal. Is it possible to estimate, approximately, the number of disciplinary analyses needed for uncertainty propagation? Suppose that the likelihood function is evaluated at ten points to solve the integration in Eq. (4). Each likelihood evaluation requires a PDF calculation, and hence two FORM analyses. Assume that the optimization for FORM converges in five iterations on average; each iteration would require n + 1 evaluations of the individual disciplinary analysis (where n is the number of input variables: one evaluation for the function value and n for the derivatives). Thus, the number of individual disciplinary analyses required will be approximately equal to 100(n + 1). This is computationally efficient when compared to existing approaches. For example, Mahadevan and Smith (2006) report that for an MDA with 5 input variables, the multi-constraint FORM approach required 69 evaluations for a single CDF value, which on average may lead to 690 evaluations for 10 CDF values. While the LAMDA method directly calculates the entire PDF, it also retains the functional dependence between the disciplinary analyses, thereby enabling uncertainty propagation to the next analysis level.

As the number of coupling variables increases, the integration procedure causes the computational cost to increase exponentially. For example, if there are ten coupling variables, each with five discretization points (for the sake of integration), then the number of individual disciplinary analyses required will be approximately equal to \(5^{10} \times 10 \times (n + 1)\). Alternatively, a sampling technique such as Markov chain Monte Carlo (MCMC) sampling can be used to draw samples of the coupling variables; this method can draw samples of the coupling variables without evaluating the integration constant in Eq. (4). Further, since this is a sampling approach, the computational cost does not increase exponentially with the number of coupling variables. In each iteration of the MCMC chain, two FORM analyses need to be conducted to evaluate the likelihood for a given value of u 12 (which is now a vector), and several thousand evaluations of this likelihood function (say, Q) may be necessary for generating the entire PDFs of the coupling variables. Thus, the number of individual disciplinary analyses will be approximately equal to 10 × (n + 1) × Q. Currently, the LAMDA method is demonstrated only for a small number of coupling variables. Future work needs to extend the methodology to field-type quantities (temperatures, pressures, etc. in finite element analysis) where the number of coupling variables is large.
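
A minimal random-walk Metropolis sketch of this alternative is given below; it draws samples of a vector-valued u 12 from the unnormalized likelihood of Eq. (2), so the normalization integral of Eq. (4) is never evaluated. The function `log_L` is an assumed wrapper around the FORM-based likelihood evaluation of Sect. 4.1 (returning its logarithm), and the step size is a tuning parameter.

```python
import numpy as np

def metropolis(log_L, u0, step, n_samples, seed=0):
    """Random-walk Metropolis sampling of the coupling-variable vector u12."""
    rng = np.random.default_rng(seed)
    u = np.asarray(u0, dtype=float)
    logp = log_L(u)
    samples = []
    for _ in range(n_samples):
        prop = u + step * rng.standard_normal(u.shape)  # propose a move
        logp_prop = log_L(prop)
        if np.log(rng.uniform()) < logp_prop - logp:    # accept with probability L'/L
            u, logp = prop, logp_prop
        samples.append(u.copy())
    return np.array(samples)
```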

5 Numerical Example: Mathematical MDA Problem

5.1 Description of the Problem

This problem consists of three analyses, two of which are coupled with one another. It is an extension of the problem discussed by Du and Chen (2005), and later by Mahadevan and Smith (2006), where only two analyses were considered. The functional relationships are shown in Fig. 5. In addition to the two analyses given in Mahadevan and Smith (2006), the current example considers a third analysis where a system output is calculated based on g 1 and g 2 as \(f = g_{2} - g_{1}\). All five input quantities \(\boldsymbol{x} = (x_{1},x_{2},x_{3},x_{4},x_{5})\) are assumed to be normally distributed (only for the sake of illustration) with unit mean and standard deviation equal to 0.1; there is no correlation between them. The goal in Du and Chen (2005) and Mahadevan and Smith (2006) was to calculate the probability P(g 1 ≤ 0); here, the goal is to calculate the entire probability distributions of the coupling variables u 12 and u 21, the outputs of the individual analyses g 1 and g 2, and the overall system output f.

Fig. 5 Functional relationships

A coarse approximation of the uncertainty in the output variables and coupling variables can be obtained in terms of the first-order mean and variance using Taylor's series expansion (Haldar and Mahadevan 2000). For example, consider the coupling variable u 12; the procedure described below for u 12 can be extended to u 21, g 1, g 2, and f.

The first-order mean of u 12 can be estimated by calculating the converged value of u 12 at the mean of the input values, i.e. \(\boldsymbol{x} = (1,1,1,1,1)\). The first-order mean values of u 12, u 21, g 1, g 2, and f are calculated to be 8.9, 11.9, 0.5, 2.4, and 1.9, respectively. The first-order variance of u 12 can be estimated as:

$$\displaystyle{ \mathrm{Var}(u_{12}) =\sum _{ i=1}^{n}\Big(\frac{du_{12}} {dx_{i}} \Big)^{2}\mathrm{Var}(x_{ i}) }$$
(9)

where the first-order derivatives are calculated using Sobieski’s system (or global) sensitivity equations (Hajela et al. 1990), by satisfying the multi-disciplinary compatibility as:

$$\displaystyle{ \frac{du_{12}} {dx_{i}} = \frac{\partial u_{12}} {\partial x_{i}} + \frac{\partial u_{12}} {\partial u_{21}} \frac{\partial u_{21}} {\partial x_{i}} }$$
(10)

All the derivatives are calculated at the mean of the input values, i.e. \(\boldsymbol{x} = (1,1,1,1,1)\). The values of \(\frac{\partial u_{12}} {\partial x_{i}}\) are 2, 2, − 1, 0, and 0 (i = 1 to 5), respectively. The values of \(\frac{\partial u_{21}} {\partial x_{i}}\) are 1, 0, 0, 3, and 1 (i = 1 to 5), respectively. The value of \(\frac{\partial u_{12}} {\partial u_{21}}\) is \(\frac{1} {\sqrt{u_{21}}}\), evaluated at the mean, and is therefore equal to 0.29. Hence, using Eqs. (9) and (10), the standard deviation of u 12 is calculated to be 0.333.
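
These first-order numbers can be checked in a few lines; the sketch below evaluates Eqs. (9) and (10) with the partial derivatives listed above.

```python
import numpy as np

dpart_u12 = np.array([2.0, 2.0, -1.0, 0.0, 0.0])  # partial u12 / partial xi
dpart_u21 = np.array([1.0, 0.0, 0.0, 3.0, 1.0])   # partial u21 / partial xi
du12_du21 = 1.0 / np.sqrt(11.9)                   # partial u12 / partial u21, ~0.29

total = dpart_u12 + du12_du21 * dpart_u21         # total derivatives, Eq. (10)
var_u12 = np.sum(total ** 2) * 0.1 ** 2           # Eq. (9), with Var(xi) = 0.01
print(np.sqrt(var_u12))                           # ~0.333, as stated above
```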

The system sensitivity equation-based approach only provides approximations of the mean and variance, and it cannot calculate the entire PDF of u 12. The remainder of this section illustrates the LAMDA approach, which can accurately calculate the entire PDF of u 12. Though the system of equations in Fig. 5 may be solved algebraically by eliminating one variable, the forthcoming solution does not take advantage of this closed form solution and assumes each analysis to be a black-box. This is done to simulate the behavior of realistic multi-disciplinary analyses that may not have closed form solutions. For the same reason, finite differencing is used to approximate the gradients even though analytical derivatives can be calculated easily for this problem.

5.2 Calculation of the PDF of the Coupling Variable

In this numerical example, the coupling variable u 12 is estimated for the sake of illustration, and the arrow from “Analysis 1” to “Analysis 2” is severed. The PDF of u 12 is estimated using (1) SOMDA; and (2) LAMDA. In Fig. 6, the PDF using the LAMDA method uses ten integration points for the evaluation of Eq. (4). The resulting PDFs from the SOMDA method and the LAMDA method are compared with the benchmark solution, which is estimated using 10,000 Monte Carlo samples of \(\boldsymbol{x}\) and FPI (until convergence of Analysis 1 and Analysis 2) for each sample of \(\boldsymbol{x}\). The probability bounds on the MCS results for the benchmark solution are also calculated using the formula \(\mathrm{CoV}(F) = \sqrt{\frac{(1-F)} {F\cdot N}}\), where F is the CDF value (Haldar and Mahadevan 2000), and are found to be narrow and almost indistinguishable from the solution reported in Fig. 6. Since the benchmark solution uses FPI for each input sample, it is denoted SOFPI (sampling outside fixed point iteration) in Fig. 6.

Fig. 6 PDF of u 12

In addition to the PDF in Fig. 6, the CDF of u 12 is shown in Fig. 7. The CDF is plotted in linear and log-scale. Further, the tail probabilities are important in the context of reliability analysis; hence, the two tails of the CDF curves are also shown separately.

Fig. 7 Cumulative distribution function of u 12. (a) Linear; (b) log-scale; (c) left tail; (d) right tail

It is seen that the solutions (PDF values and CDF values) from the LAMDA method match very well with the benchmark (SOFPI) solution and the SOMDA approach. Note that the mean and standard deviation of the PDF in Fig. 6 agree well with the first-order approximations previously calculated (8.9 and 0.333). Obviously, the above PDF provides more information than the first-order mean and standard deviation and is more suitable for calculation of tail probabilities in reliability analysis.

The differences (maximum error less than 1%) seen in the PDFs and the CDFs from the three methods, though small, can be accounted for. The PDF obtained using SOMDA differs from the benchmark solution because it uses only 1,000 Latin hypercube samples (realizations of the inputs) whereas the benchmark solution used 10,000 samples. The PDF obtained using LAMDA differs from the benchmark solution because of two approximations: (1) finite differencing two CDF values to calculate the PDF value, and (2) calculating each CDF value using FORM.

The benchmark solution is based on FPI and required about \(10^{5}\) evaluations each of Analysis 1 and Analysis 2. The SOMDA method required 8,000–9,000 executions of each individual disciplinary analysis. (This number depends on the random samples of the input, since the number of optimization iterations required for convergence differs from sample to sample.) Note that, theoretically, the SOMDA method would produce a PDF identical to the benchmark solution if the same set of input samples were used in both cases. This is because the SOMDA approach simply solves the deterministic MDA problem and then considers sampling in an outside loop. The solution approach in SOMDA is different from that in the benchmark solution; however, the treatment of uncertainty is the same. As discussed in Sect. 2, the SOMDA method is still expensive; replacing the brute-force FPI in the benchmark solution by an optimization did not significantly improve the computational efficiency in this problem.

The LAMDA method treats the uncertainty directly in the definition of likelihood, and was found to be the least expensive, as it required only about 450–500 evaluations of each disciplinary analysis for the estimation of the entire PDF of u 12 in Fig. 6. The number of evaluations is given as a range because of three sources of variation: (1) different initial guesses for FORM analyses may require different numbers of function evaluations for convergence to MPP; (2) the number of integration points used for evaluation of Eq. (4); and (3) the actual values of the integration points used for evaluation of Eq. (4). In contrast, the multi-constraint FORM approach developed by Mahadevan and Smith (2006) required about 69 evaluations for the calculation of the CDF at one particular value. If the entire PDF as in Fig. 6 is desired, the multi-constraint FORM would take approximately 69 × 2n function evaluations, where n is the number of points on the PDF and each PDF evaluation would require two CDF evaluations.

5.3 Calculation of PDF of the System Output

Once the PDF of u 12 is calculated, the scheme in Fig. 3 can be used for uncertainty propagation and the PDF of the system output f is calculated. Note that this does not require any MDA (iterative analysis between the two subsystems) and it is now a simple uncertainty propagation problem. Well-known methods for uncertainty propagation such as MCS, FORM, and SORM (Haldar and Mahadevan 2000) can be used for this purpose. For the sake of illustration, MCS is used. The PDF of the system output f is shown in Fig. 8.
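
A sketch of this propagation under the semi-coupled scheme of Fig. 3 is given below. All model functions here are hypothetical stand-ins chosen only so the sketch runs (they are NOT the equations behind Fig. 5), and `sample_u12` stands in for sampling the LAMDA-estimated PDF of Eq. (4), here crudely replaced by a normal with the first-order moments from Sect. 5.1; the point is that the chain u 12 → u 21 → (g 1, g 2) → f preserves the functional dependence.

```python
import numpy as np

# Hypothetical stand-ins (not the chapter's models), just to make the sketch run:
def analysis2(u12, x):    return x[0] + 3.0 * x[3] + x[4] + 0.3 * u12  # u21 = A2(u12, x)
def output1(u21, x):      return u21 - x[2]                            # g1 from Analysis 1
def output2(u12, u21, x): return u12 - x[1]                            # g2 from Analysis 2
def sample_u12(n, rng):   return rng.normal(8.9, 0.333, size=n)        # stand-in for Eq. (4)

rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(1.0, 0.1, size=(n, 5))   # the five inputs of Sect. 5.1

u12 = sample_u12(n, rng)
f_out = np.empty(n)
for k in range(n):
    u21 = analysis2(u12[k], x[k])       # Analysis 2 first (coupling retained)
    g1 = output1(u21, x[k])             # then Analysis 1's output g1
    f_out[k] = output2(u12[k], u21, x[k]) - g1   # f = g2 - g1
```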

Fig. 8 PDF of f

As the coupling variable u 12 has been estimated here, the “arrow” from Analysis 1 to Analysis 2 alone is severed, whereas the arrow from Analysis 2 to Analysis 1 is retained. Hence, to solve for the system output f, the probability distributions of the inputs \(\boldsymbol{x}\) and the probability distribution of the coupling variable u 12 are used first in Analysis 2 (to calculate u 21), and then in Analysis 1 to calculate the individual disciplinary system outputs g 1 and g 2, followed by the overall system output f. As seen from Fig. 8, the solutions from the three different methods—SOMDA, LAMDA, and the benchmark solution (SOFPI)—compare well against each other.

6 Three-Discipline Fire Detection Satellite Model

This section illustrates the LAMDA methodology for system analysis of a satellite that is used to detect forest fires. First, the various components of the satellite system model are described, and then, numerical results are presented.

6.1 Description of the Problem

This problem was originally described by Wertz and Larson (1999). It is a hypothetical but realistic spacecraft consisting of a large number of subsystems with both feedback and feed-forward couplings. The primary objective of this satellite is to detect, identify, and monitor forest fires in near real time. The satellite is intended to carry a large and accurate optical sensor of length 3.2 m and weight 720 kg, with an angular resolution of \(8.8 \times 10^{-7}\) rad. This example considers a modified version of this problem, studied earlier by Ferson et al. (2009) and Zaman (2010).

Zaman (2010) considered a subset of three subsystems of the fire detection satellite, consisting of (1) Orbit Analysis, (2) Attitude Control, and (3) Power, based on Ferson et al. (2009). This three-subsystem problem is shown in Fig. 9. There are nine random variables in this problem, as indicated in Fig. 9.

Fig. 9 A three-subsystem fire detection satellite

Fig. 10 Schematic diagram for the satellite solar array

As seen in Fig. 9, the Orbit subsystem has feed-forward coupling with both Attitude Control and Power subsystems, whereas the Attitude Control and Power subsystems have feedback or bi-directional coupling through three variables P ACS, I min, and I max. A satellite configuration is assumed in which two solar panels extend out from the spacecraft body. Each solar panel has dimensions L by W and the inner edge of the solar panel is at a distance D from the centerline of the satellite’s body as shown in Fig. 10.

The functional relationships between the three subsystems are developed in detail by Wertz and Larson (1999) and summarized by Ferson et al. (2009) and Sankararaman and Mahadevan (2012). These functional relationships are briefly described in this section.

6.1.1 The Orbit Subsystem

The inputs to this subsystem are: radius of the earth (R E ); orbit altitude (H); earth’s standard gravitational parameter (μ); and target diameter (ϕ target).

The outputs of this subsystem are: satellite velocity (v); orbit period \((\Delta t_{\mathrm{orbit}})\); eclipse period \((\Delta t_{\mathrm{eclipse}})\); and maximum slewing angle (θ slew). The relationships between these variables are summarized in the following equations:

$$\displaystyle{ v = \sqrt{ \frac{\mu } {R_{E} + H}} }$$
(11)
$$\displaystyle{ \Delta t_{\mathrm{orbit}} = 2\pi \sqrt{\frac{(R_{E } + H)^{3 } } {\mu }} = \frac{2\pi (R_{E} + H)} {v} }$$
(12)
$$\displaystyle{ \Delta t_{\mathrm{eclipse}} = \frac{\Delta t_{\mathrm{orbit}}} {\pi } \arcsin \Big( \frac{R_{E}} {R_{E} + H}\Big) }$$
(13)
$$\displaystyle{ \theta _{\mathrm{slew}} =\arctan \Bigg ( \frac{\sin \big(\frac{\phi _{\mathrm{target}}} {R_{E}} \big)} {1 -\cos (\frac{\phi _{\mathrm{target}}} {R_{E}} ) + \frac{H} {R_{E}}}\Bigg) }$$
(14)
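
Since Eqs. (11)–(14) are self-contained, the orbit subsystem admits a direct transcription; the sketch below does exactly that. The argument values in the example call are illustrative placeholders only (the means actually used are listed in Tables 1 and 2).

```python
import numpy as np

def orbit_subsystem(R_E, H, mu, phi_target):
    """Eqs. (11)-(14): velocity, orbit/eclipse periods, and max slewing angle."""
    v = np.sqrt(mu / (R_E + H))                                    # Eq. (11)
    dt_orbit = 2.0 * np.pi * (R_E + H) / v                         # Eq. (12)
    dt_eclipse = (dt_orbit / np.pi) * np.arcsin(R_E / (R_E + H))   # Eq. (13)
    theta_slew = np.arctan(np.sin(phi_target / R_E)
                           / (1.0 - np.cos(phi_target / R_E) + H / R_E))  # Eq. (14)
    return v, dt_orbit, dt_eclipse, theta_slew

# Illustrative values only (SI units): R_E ~ 6.378e6 m, mu ~ 3.986e14 m^3/s^2
print(orbit_subsystem(6.378e6, 18.0e6, 3.986e14, 2.35e5))
```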

6.1.2 The Attitude Control Subsystem

The 23 inputs to this subsystem are: earth's standard gravitational parameter (μ); radius of the earth (R E ); altitude (H); maximum and minimum moments of inertia of the spacecraft (I max and I min); deviation of major moment axis from local vertical (θ); moment arm for the solar radiation torque (L sp ); average solar flux (F s ); speed of light (c); reflectance factor (q); surface area off which solar radiation is reflected (A s ); slewing time period (\(\Delta t_{\mathrm{slew}}\)); magnetic moment of the earth (M); residual dipole of the spacecraft (R D ); moment arm for aerodynamic torque (L a ); atmospheric density (ρ); maximum slewing angle (θ slew); sun incidence angle (i); drag coefficient (C d ); cross-sectional surface area in the direction of flight (A); satellite velocity (v); rotation velocity of the reaction wheel (ω max); number of reaction wheels (n); and holding power (P hold), i.e. the power required to maintain the constant velocity ω max.

The overall output of this subsystem is the total torque (τ tot). The value of the total torque is computed based on slewing torque (τ slew), disturbance torque (τ dist), gravity gradient torque (τ g ), solar radiation torque (τ sp ), magnetic field interaction torque (τ m ), and aerodynamic torque (τ a ), as shown in the following equations.

$$\displaystyle{ \tau _{\mathrm{tot}} = \mathrm{max}(\tau _{\mathrm{slew}},\tau _{\mathrm{dist}}) }$$
(15)
$$\displaystyle{ \tau _{\mathrm{slew}} = \frac{4\theta _{\mathrm{slew}}} {(\Delta t_{\mathrm{slew}})^{2}}I_{\mathrm{max}} }$$
(16)
$$\displaystyle{ \tau _{\mathrm{dist}} = \sqrt{\tau _{g }^{2 } +\tau _{ sp }^{2 } +\tau _{ m }^{2 } +\tau _{ a }^{2}} }$$
(17)
$$\displaystyle{ \tau _{g} = \frac{3\mu } {2(R_{E} + H)^{3}}\vert I_{\mathrm{max}} - I_{\mathrm{min}}\vert \sin (2\theta ) }$$
(18)
$$\displaystyle{ \tau _{sp} = L_{sp}\frac{F_{s}} {c} A_{s}(1 + q)\cos (i) }$$
(19)
$$\displaystyle{ \tau _{m} = \frac{2MR_{D}} {(R_{E} + H)^{3}} }$$
(20)
$$\displaystyle{ \tau _{a} = \frac{1} {2}L_{a}\rho C_{d}Av^{2} }$$
(21)

Note that this subsystem takes two coupling variables (I max and I min) as input and produces another coupling variable (Attitude control power: P ACS) as output, as given in the following equation.

$$\displaystyle{ P_{\mathrm{ACS}} =\tau _{\mathrm{tot}}\omega _{\mathrm{max}} + nP_{\mathrm{hold}} }$$
(22)

This coupling variable is an input to the power subsystem, as described in the following subsection.

6.1.3 The Power Subsystem

The 16 inputs to the power subsystem are: attitude control power (P ACS); other sources of power (P other); orbit period (\(\Delta t_{\mathrm{orbit}}\)); eclipse period (\(\Delta t_{\mathrm{eclipse}}\)); sun incidence angle (i); inherent degradation of the array (I d ); average solar flux (F s ); power efficiency (η); lifetime of the spacecraft (L T ); degradation in power production capability in % per year (ε deg); length to width ratio of solar array (r lw ); number of solar arrays (n sa); average mass density of solar arrays (ρ sa); thickness of solar panels (t); distance between the panels (D); and moments of inertia of the main body of the spacecraft (I bodyX , I bodyY , I bodyZ ).

The overall outputs of this subsystem are the total power (P tot), and the total size of the solar array (A sa), as calculated below.

$$\displaystyle{ P_{\mathrm{tot}} = P_{\mathrm{ACS}} + P_{\mathrm{other}} }$$
(23)

Let P e and P d denote the spacecraft’s power requirements during eclipse and daylight, respectively. For the sake of illustration, it is assumed that \(P_{e} = P_{d} = P_{\mathrm{tot}}\). Let T e and T d denote the time per orbit spent in eclipse and in sunlight, respectively. It is assumed that \(T_{e} = \Delta t_{\mathrm{eclipse}}\) and \(T_{d} = \Delta t_{\mathrm{orbit}} - T_{e}\). Then the required power output (P sa) is calculated as:

$$\displaystyle{ P_{\mathrm{sa}} = \frac{\big(\frac{P_{e}T_{e}} {0.6} + \frac{P_{d}T_{d}} {0.8} \big)} {T_{d}} }$$
(24)

The power production capabilities at the beginning of life (P BOL) and at the end of the life (P EOL) are calculated as:

$$\displaystyle{ \begin{array}{rl} P_{\mathrm{BOL}} & =\eta F_{s}I_{d}\cos (i) \\ P_{\mathrm{EOL}} & = P_{\mathrm{BOL}}(1 -\epsilon _{\mathrm{deg}})^{L_{T}} \end{array} }$$
(25)

The total solar array size, i.e. the second output of this subsystem, is calculated as:

$$\displaystyle{ A_{\mathrm{sa}} = \frac{P_{\mathrm{sa}}} {P_{\mathrm{EOL}}} }$$
(26)

Note that this subsystem takes a coupling variable (P ACS) as input and produces the other two coupling variables (I max and I min) as output, to be fed into the attitude control subsystem described earlier.

The length (L), width (W), mass (m sa), moments of inertia (I saX , I saY , I saZ ) of the solar array are calculated as follows:

$$\displaystyle{ \begin{array}{rl} L & = \sqrt{\frac{A_{\mathrm{sa}}r_{lw}} {n_{\mathrm{sa}}}} \\ W & = \sqrt{ \frac{A_{\mathrm{sa}}} {r_{lw}n_{\mathrm{sa}}}} \\ m_{\mathrm{sa}} & = 2\rho _{\mathrm{sa}}LWt \end{array} }$$
(27)
$$\displaystyle{ I_{\mathrm{sa}X} = m_{\mathrm{sa}}\bigg[ \frac{1} {12}(L^{2} + t^{2}) +\Big (D + \frac{L} {2} \Big)^{2}\bigg] }$$
(28)
$$\displaystyle{ I_{\mathrm{sa}Y } = \frac{m_{\mathrm{sa}}} {12} (t^{2} + W^{2}) }$$
(29)
$$\displaystyle{ I_{\mathrm{sa}Z} = m_{\mathrm{sa}}\bigg[ \frac{1} {12}(L^{2} + W^{2}) +\Big (D + \frac{L} {2} \Big)^{2}\bigg] }$$
(30)

The total moment of inertia (I tot) can be computed in all three directions (X, Y, and Z), from which the maximum and the minimum moments of inertia (I max and I min) can be computed.

$$\displaystyle{ I_{\mathrm{tot}} = I_{\mathrm{sa}} + I_{\mathrm{body}} }$$
(31)
$$\displaystyle{ I_{\mathrm{max}} = \mathrm{max}(I_{\mathrm{tot}X},I_{\mathrm{tot}Y },I_{\mathrm{tot}Z}) }$$
(32)
$$\displaystyle{ I_{\mathrm{min}} = \mathrm{min}(I_{\mathrm{tot}X},I_{\mathrm{tot}Y },I_{\mathrm{tot}Z}) }$$
(33)
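
For completeness, the power subsystem also admits a direct transcription of Eqs. (23)–(33). The sketch below assumes the \(n_{\mathrm{sa}}\)-based form of Eq. (27) shown above and the stated assumption \(P_{e} = P_{d} = P_{\mathrm{tot}}\); `I_body` is the vector (I bodyX , I bodyY , I bodyZ ).

```python
import numpy as np

def power_subsystem(P_ACS, P_other, dt_orbit, dt_eclipse, i, I_d, F_s, eta,
                    L_T, eps_deg, r_lw, n_sa, rho_sa, t, D, I_body):
    """Eqs. (23)-(33): total power, solar-array sizing, and moments of inertia."""
    P_tot = P_ACS + P_other                                      # Eq. (23)
    T_e, T_d = dt_eclipse, dt_orbit - dt_eclipse
    P_sa = (P_tot * T_e / 0.6 + P_tot * T_d / 0.8) / T_d         # Eq. (24), Pe = Pd = Ptot
    P_BOL = eta * F_s * I_d * np.cos(i)                          # Eq. (25)
    P_EOL = P_BOL * (1.0 - eps_deg) ** L_T
    A_sa = P_sa / P_EOL                                          # Eq. (26)
    L = np.sqrt(A_sa * r_lw / n_sa)                              # Eq. (27)
    W = np.sqrt(A_sa / (r_lw * n_sa))
    m_sa = 2.0 * rho_sa * L * W * t
    I_saX = m_sa * ((L**2 + t**2) / 12.0 + (D + L / 2.0) ** 2)   # Eq. (28)
    I_saY = m_sa / 12.0 * (t**2 + W**2)                          # Eq. (29)
    I_saZ = m_sa * ((L**2 + W**2) / 12.0 + (D + L / 2.0) ** 2)   # Eq. (30)
    I_tot = np.array([I_saX, I_saY, I_saZ]) + np.asarray(I_body) # Eq. (31)
    return P_tot, A_sa, I_tot.max(), I_tot.min()                 # Eqs. (32)-(33)
```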

6.2 Numerical Details

Some of the input quantities are chosen to be stochastic while others are chosen to be deterministic. Table 1 provides the numerical details for the deterministic quantities and Table 2 provides the numerical details for the stochastic quantities. All the stochastic quantities are treated as normally distributed, for the sake of illustration.

Table 1 List of deterministic quantities
Table 2 List of stochastic quantities

6.3 Uncertainty Propagation Problem

As seen in Fig. 9, this is a three-disciplinary analysis problem, with feedback coupling between two disciplines “power” and “attitude control.” It is required to compute the uncertainty in three system output variables—total power P tot, required solar array area A sa, and total torque τ tot.

Prior to the quantification of the outputs, the first step is the calculation of the probability distribution of the coupling variables. The functional dependency can be severed in either direction, from “power” to “attitude control” or from “attitude control” to “power,” and this choice can be made without loss of generality. The probability distribution of P ACS, the power of the attitude control system, is chosen for calculation; then P ACS becomes an independent input to the “power” subsystem, and the functional dependency between “power” and “attitude control” is retained through the two coupling variables in the opposite direction. The following subsections present these results; Sect. 6.4 calculates the PDF of the feedback variable P ACS and Sect. 6.5 calculates the PDFs of the system outputs.

6.4 Calculation of PDF of the Coupling Variable

Similar to the mathematical example presented in Sect. 5, this section calculates the PDF of the coupling variable P ACS using (1) SOMDA and (2) LAMDA. These results are compared with the benchmark solution in Fig. 11. In Fig. 11, the PDF using the LAMDA method uses ten integration points for the evaluation of Eq. (4).

Fig. 11 PDF of coupling variable P ACS

Similar to the mathematical example in Sect. 5, it is seen from Fig. 11 that the results from SOMDA and LAMDA compare well with the benchmark solution (SOFPI). In addition to the PDFs, the CDFs and the tail probabilities are also in reasonable agreement. The benchmark solution is based on FPI and required about 200,000 evaluations each of the power subsystem and the attitude control subsystem. The SOMDA method required about 20,000 evaluations whereas the LAMDA method required about 900–1,000 evaluations. It is clear that the LAMDA approach provides an efficient and accurate alternative to sampling-based approaches.

6.5 Calculation of PDFs of the System Outputs

Once the probability distribution of the coupling variable P ACS is calculated, the system does not contain any feedback coupling and hence, methods for simple forward uncertainty propagation can be used to estimate the PDFs of the three system outputs total power (P tot), required solar array area (A sa), and total torque (τ tot). MCS is used for uncertainty propagation, and the resulting PDFs are plotted in Figs. 12, 13, and 14.

Fig. 12 PDF of total output power P tot

Fig. 13 PDF of area of solar array

Fig. 14 PDF of total torque τ tot

As seen from Figs. 12, 13, and 14, the PDFs of the system outputs obtained using both SOMDA and LAMDA compare very well with the benchmark solution (SOFPI).

7 Conclusion

Existing methods for uncertainty propagation in multi-disciplinary system models are based on (1) MCS around FPI, which is computationally expensive; and/or (2) approximating the system equations; and/or (3) approximating the probability distributions of the coupling variables and then decoupling the disciplinary analyses. The fully decoupled approach does not preserve the one-to-one correspondence between the individual disciplinary analyses and is not suitable for further downstream analysis using the converged MDA output.

The perspective of likelihood and the ability to include input uncertainty in the construction of the likelihood function provided a computationally efficient methodology for the calculation of the PDFs of the coupling variables. The MDA was reduced to a simple forward uncertainty propagation problem by replacing the feedback coupling with one-way coupling, the direction being chosen without loss of generality.

The LAMDA method has several advantages. (1) It provides a framework for the exact calculation of the distribution of the coupling variables. (2) It retains the functional dependence between the individual disciplinary analyses, thereby allowing the estimated PDFs of the coupling variables to be used for uncertainty propagation, especially in downstream analyses. (3) It does not require any coupled system analysis (iterative analysis between the individual disciplines until convergence) for uncertainty propagation.

The LAMDA methodology has been demonstrated for problems with a small number of coupling variables. The methodology is straightforward to implement when there is a vector of coupling variables as explained earlier in Sect. 4.2. (Recall that the fire satellite example had two coupling variables in one of the directions.) However, if the coupling variable is a field-type quantity (e.g., pressures and displacements exchanged in a fluid–structure interaction problem at the interface of two disciplinary meshes), further research is needed to extend the LAMDA method for uncertainty propagation in such multi-disciplinary problems.

The likelihood-based approach can be extended to address MDO under uncertainty. Further, this chapter considered only aleatory uncertainty (natural variability) in the inputs. Future research may include different types of epistemic uncertainty such as data and model uncertainty in MDA and optimization.