1 Introduction

System reliability analysis plays a crucial role in engineering, since the occurrence of failures may lead to catastrophic consequences. Most researchers have assumed that each component in a system works with a given probability and have studied system reliability from a mathematical perspective. In 1947, Freudenthal first developed structural reliability as an application of probabilistic methods. Cornell (1969) then proposed a structural reliability index. Since then, reliability analysis based on probability theory has achieved many significant results.

Before applying probability theory to a practical problem, a fundamental premise is to estimate a probability distribution that is close enough to the true frequency; otherwise, the law of large numbers is no longer valid. In fact, we sometimes have no observed data because of technological or economic difficulties. In this case, we have to invite experts to evaluate their belief degree that a component works well. However, Liu (2015) pointed out that human beings usually estimate a much wider range of values than the object actually takes, so belief degrees may deviate far from the frequency. If we still treat belief degrees as a probability distribution, we may obtain a counterintuitive result, as shown by Liu (2012).

In order to model belief degrees, uncertainty theory was founded by Liu (2007). It satisfies the normality, duality, subadditivity and product axioms. Nowadays, uncertainty theory has become a branch of pure mathematics and has been widely applied in many fields such as uncertain programming (Liu 2009a, b), uncertain risk analysis (Liu 2010a, b), and uncertain reliability analysis (Liu 2010b).

In a general system, some components may have enough samples to ascertain their functioning probabilities, while others may have no samples at all. To deal with this phenomenon, Liu (2013a) proposed chance theory as a mixture of probability theory and uncertainty theory. Since then, chance theory has developed steadily and has been applied in many fields such as uncertain random programming (Liu 2013b; Ke et al. 2014; Zhou et al. 2014), uncertain risk analysis (Liu and Ralescu 2014, 2016), uncertain random graphs (Liu 2014), and uncertain random networks (Liu 2014; Sheng and Gao 2014).

Probability theory is applicable when we have a large number of samples, and uncertainty theory is applicable when we have no samples but only belief degrees from experts. Chance theory, as a mixture of the two, is applicable to a complex system containing both uncertainty and randomness. In this paper, we employ chance theory to analyze the reliability of such a system. The rest of this paper is organized as follows. Section 2 introduces some basic concepts of uncertain variables and uncertain random variables. Section 3 proposes the concept of the reliability index of an uncertain random system and analyzes the reliability of series and parallel systems. Section 4 proves a reliability index theorem and studies the k-out-of-n system, parallel–series system, series–parallel system and bridge system. Finally, some conclusions are drawn in Sect. 5.

2 Preliminaries

In this section, we introduce some basic concepts and results in uncertainty theory and chance theory.

2.1 Uncertainty theory

Uncertainty theory was founded by Liu (2007) and refined by Liu (2010a). Mathematically, uncertainty theory satisfies normality, duality, subadditivity and product axioms. Practically, uncertainty is anything that is described by belief degrees.

Definition 2.1

(Liu 2007) Let \(\Gamma \) be a nonempty set, and \(\mathcal {L}\) be a \(\sigma \)-algebra over \(\Gamma \). A set function \(\mathcal {M}\) is called an uncertain measure if it satisfies the following three axioms:

Axiom 1:

\(\mathcal {M}\{\Gamma \}=1\) for the universal set \(\Gamma \).

Axiom 2:

\(\mathcal {M}\{\Lambda \}+\mathcal {M}\{\Lambda ^c\}=1\) for any event \(\Lambda \in \mathcal {L}\).

Axiom 3:

For every countable sequence of events \(\Lambda _1,\Lambda _2,\ldots ,\) we have

$$\begin{aligned} \mathcal {M}\left\{ \bigcup _{i=1}^{\infty }\Lambda _i \right\} \le \sum _{i=1}^{\infty } \mathcal {M}\{\Lambda _i\}. \end{aligned}$$

In this case, the triple \((\Gamma ,\mathcal {L},\mathcal {M})\) is called an uncertainty space.

Besides, in order to provide the operational law, another axiom named product axiom was proposed by Liu (2009a).

Axiom 4:

Let \((\Gamma _k, \mathcal {L}_k,\mathcal {M}_k)\) be uncertainty spaces for \(k=1,2,\ldots \). The product uncertain measure \(\mathcal {M}\) is an uncertain measure satisfying

$$\begin{aligned} \mathcal {M}\left\{ \prod _{k=1}^{\infty } \Lambda _k \right\} =\bigwedge _{k=1}^{\infty } \mathcal {M}_k\{\Lambda _k\} \end{aligned}$$

where \(\Lambda _k\) are arbitrarily chosen events from \(\mathcal {L}_k\) for \(k=1,2,\ldots \), respectively.

Definition 2.2

(Liu 2007) An uncertain variable is a measurable function \(\xi \) from an uncertainty space \((\Gamma ,\mathcal {L},\mathcal {M})\) to the set of real numbers, i.e., for any Borel set B of real numbers, we have

$$\begin{aligned} \{\xi \in B\}=\{\gamma \in \Gamma \,|\,\xi (\gamma )\in B\}\in \mathcal {L}. \end{aligned}$$

Definition 2.3

(Liu 2007) The uncertainty distribution \(\Phi \) of an uncertain variable \(\xi \) is defined by

$$\begin{aligned} \Phi (x)=\mathcal {M}\{\xi \le x\} \end{aligned}$$

for any real number x.

If the uncertainty distribution \(\Phi (x)\) of \(\xi \) has an inverse function \(\Phi ^{-1}(\alpha )\) for \(\alpha \in (0,1)\), then \(\xi \) is called a regular uncertain variable, and \(\Phi ^{-1}(\alpha )\) is called the inverse uncertainty distribution of \(\xi \). The inverse uncertainty distribution plays an important role in operations on independent uncertain variables.

Definition 2.4

(Liu 2007) The uncertain variables \(\xi _1,\xi _2,\ldots ,\xi _n\) are said to be independent if

$$\begin{aligned} \mathcal {M}\left\{ \bigcap _{i=1}^{n}(\xi _i\in B_i)\right\} =\bigwedge _{i=1}^n\mathcal {M}\left\{ \xi _i\in B_i\right\} \end{aligned}$$

for any Borel sets \(B_1,B_2,\ldots ,B_n\) of real numbers.

Theorem 2.1

(Liu 2010a) Let \(\xi _1,\xi _2,\ldots ,\xi _n\) be independent uncertain variables with regular uncertainty distributions \(\Phi _1,\Phi _2,\ldots ,\Phi _n,\) respectively. Assume the function \(f(x_1,x_2,\ldots ,x_n)\) is strictly increasing with respect to \(x_1,x_2,\ldots ,x_m\), and strictly decreasing with respect to \(x_{m+1},x_{m+2},\ldots ,x_n.\) Then the uncertain variable \(\xi =f(\xi _1,\xi _2,\ldots ,\xi _n)\) has an inverse uncertainty distribution

$$\begin{aligned} \Phi ^{-1}(\alpha )=f\left( \Phi ^{-1}_1(\alpha ),\ldots ,\Phi ^{-1}_m(\alpha ),\Phi ^{-1}_{m+1}(1-\alpha ),\ldots ,\Phi ^{-1}_{n}(1-\alpha )\right) . \end{aligned}$$

An uncertain variable is called Boolean if it takes only the values 0 and 1. The following is a Boolean uncertain variable:

$$\begin{aligned} \xi =\left\{ \begin{array}{lll} 1 &{}\quad \text {with uncertain measure } a\\ 0&{}\quad \text {with uncertain measure } 1-a \end{array}\right. \end{aligned}$$

where \(a\in [0,1]\). The operational law of Boolean systems was introduced by Liu (2010a) as follows.

Theorem 2.2

(Liu 2010a) Assume that \(\xi _1,\xi _2,\ldots ,\xi _n\) are independent Boolean uncertain variables, i.e.,

$$\begin{aligned} \xi _i=\left\{ \begin{array}{lll}1 &{}\quad \text {with uncertain measure } a_i\\ 0&{}\quad \text {with uncertain measure } 1-a_i \end{array}\right. \end{aligned}$$

for \(i=1,2,\ldots ,n\). If f is a Boolean function, then \(\xi =f(\xi _1,\xi _2,\ldots ,\xi _n)\) is a Boolean uncertain variable such that

$$\begin{aligned} \mathcal {M}\{\xi =1\}=\left\{ \begin{array}{l}\displaystyle \sup _{f(x_1,x_2,\ldots ,x_n)=1}\min _{1\le i\le n}\nu _i(x_i),\quad \text{ if } \displaystyle \sup _{f(x_1,x_2,\ldots ,x_n)=1}\min _{1\le i\le n}\nu _i(x_i)<0.5\\ \displaystyle 1-\sup _{f(x_1,x_2,\ldots ,x_n)=0}\min _{1\le i\le n}\nu _i(x_i),\quad \text{ if } \displaystyle \sup _{f(x_1,x_2,\ldots ,x_n)=1}\min _{1\le i\le n}\nu _i(x_i)\!\ge \!0.5 \end{array}\right. \end{aligned}$$

where \(x_i\) take values either 0 or 1, and \(\nu _i\) are defined by

$$\begin{aligned} \nu _i(x_i)=\left\{ \begin{array}{ll}a_i,&{}\quad \text{ if } x_i=1\\ 1-a_i,&{}\quad \text{ if } x_i=0 \end{array}\right. \end{aligned}$$

for \(i=1,2,\ldots ,n\), respectively.
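For small systems, the sup-min formula of Theorem 2.2 can be checked numerically by enumerating all \(2^n\) component states. The following Python sketch is ours (not from the cited literature) and is feasible only for small n:

```python
from itertools import product

def boolean_uncertain_measure(f, a):
    """M{f(xi_1,...,xi_n) = 1} for independent Boolean uncertain
    variables with M{xi_i = 1} = a[i], via Theorem 2.2."""
    n = len(a)

    def nu(i, x):                  # nu_i(x_i) as defined in the theorem
        return a[i] if x == 1 else 1 - a[i]

    def sup_min(target):           # sup over {f = target} of min_i nu_i(x_i)
        vals = [min(nu(i, x[i]) for i in range(n))
                for x in product((0, 1), repeat=n) if f(*x) == target]
        return max(vals) if vals else 0.0

    s1 = sup_min(1)
    return s1 if s1 < 0.5 else 1 - sup_min(0)

# a two-component series system f = x1 AND x2 recovers min(a1, a2):
series = lambda x1, x2: x1 & x2
# boolean_uncertain_measure(series, [0.9, 0.8]) == 0.8
```

The case split at 0.5 reflects the duality axiom: when the supremum over \(\{f=1\}\) reaches 0.5, the measure is computed through the complementary event \(\{f=0\}\).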

Definition 2.5

(Liu 2007) Let \(\xi \) be an uncertain variable. Then the expected value of \(\xi \) is defined by

$$\begin{aligned} E[\xi ]=\int _{0}^{+\infty }\mathcal {M}\{\xi \ge r\}\text{ d }r-\int _{-\infty }^{0}\mathcal {M}\{\xi \le r\}\text{ d }r. \end{aligned}$$

For an uncertain variable \(\xi \) with uncertainty distribution \(\Phi (x)\), its expected value can be expressed as

$$\begin{aligned} E[\xi ]=\displaystyle \int _0^{+\infty }(1-\Phi (x))\mathrm{d}x-\int _{-\infty }^0\Phi (x)\mathrm{d}x. \end{aligned}$$

If \(\xi \) has an inverse uncertainty distribution \(\Phi ^{-1}(\alpha )\), then

$$\begin{aligned} E[\xi ]=\int _0^1\Phi ^{-1}(\alpha )\mathrm{d}\alpha . \end{aligned}$$

Theorem 2.3

(Liu and Ha 2010) Assume \(\xi _1,\xi _2,\ldots ,\xi _n\) are independent uncertain variables with regular uncertainty distributions \(\Phi _1,\Phi _2,\ldots ,\Phi _n\), respectively. If \(f(x_1,x_2,\ldots ,x_n)\) is strictly increasing with respect to \(x_1,x_2,\ldots ,x_m\) and strictly decreasing with respect to \(x_{m+1},\) \(x_{m+2},\ldots ,x_n\), then the uncertain variable \(\xi =f(\xi _1,\xi _2,\ldots ,\xi _n)\) has an expected value

$$\begin{aligned} E[\xi ]=\int _0^1 f\left( \Phi _1^{-1}(\alpha ),\ldots ,\Phi _m^{-1}(\alpha ), \Phi _{m+1}^{-1}(1-\alpha ),\ldots ,\Phi _n^{-1}(1-\alpha )\right) \mathrm{{d}}\alpha \end{aligned}$$

provided that \(E[\xi ]\) exists.
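The integral in Theorem 2.3 is one-dimensional and can be approximated by a simple quadrature rule. A minimal sketch, using linear uncertain variables \(\mathcal {L}(a,b)\) with \(\Phi ^{-1}(\alpha )=a+(b-a)\alpha \) as test inputs (function names are ours):

```python
def expected_value(f, inc_inv, dec_inv, steps=100_000):
    """E[f(...)] by Theorem 2.3: evaluate f at Phi_i^{-1}(alpha) for the
    strictly increasing arguments and Phi_j^{-1}(1 - alpha) for the
    strictly decreasing ones, then integrate over alpha (midpoint rule)."""
    total = 0.0
    for k in range(steps):
        alpha = (k + 0.5) / steps
        args = [g(alpha) for g in inc_inv] + [g(1 - alpha) for g in dec_inv]
        total += f(*args)
    return total / steps

# linear uncertain variable L(a, b): Phi^{-1}(alpha) = a + (b - a) * alpha
linear = lambda a, b: (lambda alpha: a + (b - a) * alpha)

# E[xi1 - xi2] with xi1 ~ L(1, 3) (increasing argument) and
# xi2 ~ L(0, 2) (decreasing argument) is (1+3)/2 - (0+2)/2 = 1:
# expected_value(lambda x, y: x - y, [linear(1, 3)], [linear(0, 2)])  # ~ 1.0
```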

2.2 Chance theory

Chance theory, as a mixture of uncertainty theory and probability theory, was founded by Liu (2013a, b) to deal with a system exhibiting both randomness and uncertainty. The basic concept is the chance measure of an uncertain random event in a chance space.

Let \((\Gamma ,\mathcal {L},\mathcal {M})\) be an uncertainty space, and \((\Omega ,\mathcal {A},\mathrm{Pr})\) be a probability space. Then

$$\begin{aligned} (\Gamma ,\mathcal {L},\mathcal {M})\times (\Omega ,\mathcal {A},\mathrm{Pr})=(\Gamma \times \Omega ,\mathcal {L}\times \mathcal {A},\mathcal {M}\times \mathrm{Pr}) \end{aligned}$$

is called a chance space.

Definition 2.6

(Liu 2013a) Let \((\Gamma ,\mathcal {L},\mathcal {M})\times (\Omega ,\mathcal {A},\mathrm{Pr})\) be a chance space, and \(\Theta \in \mathcal {L}\times \mathcal {A}\) be an uncertain random event. Then the chance measure \(\mathrm{Ch}\) of \(\Theta \) is defined by

$$\begin{aligned} \mathrm{Ch}\{\Theta \}=\int _0^1\mathrm{Pr}\{\omega \in \Omega \,|\,\mathcal {M}\{\gamma \in \Gamma \,|\,(\gamma ,\omega )\in \Theta \}\ge r\}\mathrm{d}r. \end{aligned}$$

Theorem 2.4

(Liu 2013a) Let \((\Gamma ,\mathcal {L},\mathcal {M})\times (\Omega ,\mathcal {A},\mathrm{Pr})\) be a chance space. Then the chance measure \(\mathrm{Ch}\{\Theta \}\) is a monotone increasing function of \(\Theta \) and

$$\begin{aligned} \mathrm{Ch}\{\Lambda \times A\}=\mathcal {M}\{\Lambda \}\times \mathrm{Pr}\{A\} \end{aligned}$$

for any \(\Lambda \in \mathcal {L}\) and any \(A\in \mathcal {A}\). In particular, we have

$$\begin{aligned} \mathrm{Ch}\{\emptyset \}=0,\quad \mathrm{Ch}\{\Gamma \times \Omega \}=1. \end{aligned}$$

Definition 2.7

(Liu 2013a) An uncertain random variable \(\xi \) is a measurable function from a chance space \((\Gamma ,\mathcal {L},\mathcal {M})\times (\Omega ,\mathcal {A},\mathrm{Pr})\) to the set of real numbers, i.e.,

$$\begin{aligned} \{\xi \in B\}=\{(\gamma ,\omega )\,|\,\xi (\gamma ,\omega )\in B\} \end{aligned}$$

is an uncertain random event for any Borel set B.

When an uncertain random variable \(\xi (\gamma ,\omega )\) does not vary with \(\gamma \), it degenerates to a random variable. When an uncertain random variable \(\xi (\gamma ,\omega )\) does not vary with \(\omega \), it degenerates to an uncertain variable. Therefore, a random variable and an uncertain variable are two special uncertain random variables.

Example 2.1

Let \(\xi _1,\xi _2,\ldots ,\xi _m\) be random variables and \(\eta _1,\eta _2,\ldots ,\eta _n\) be uncertain variables. If f is a measurable function, then

$$\begin{aligned} \tau =f(\xi _1,\xi _2,\ldots ,\xi _m,\eta _1,\eta _2,\ldots ,\eta _n) \end{aligned}$$

is an uncertain random variable determined by

$$\begin{aligned} \tau (\gamma ,\omega )=f(\xi _1(\omega ),\xi _2(\omega ),\ldots ,\xi _m(\omega ),\eta _1(\gamma ),\eta _2(\gamma ),\ldots ,\eta _n(\gamma )) \end{aligned}$$

for all \((\gamma ,\omega )\in \Gamma \times \Omega \).

Definition 2.8

(Liu 2013a) Let \(\xi \) be an uncertain random variable. Then its chance distribution is defined by

$$\begin{aligned} \Phi (x)=\mathrm{Ch}\{\xi \le x\} \end{aligned}$$

for any \(x\in \mathfrak {R}\).

As two special uncertain random variables, the chance distribution of a random variable \(\xi \) is just its probability distribution

$$\begin{aligned} \Phi (x)=\mathrm{Ch}\{\xi \le x\}=\mathrm{Pr}\{\xi \le x\}, \end{aligned}$$

and the chance distribution of an uncertain variable \(\xi \) is just its uncertainty distribution

$$\begin{aligned} \Phi (x)=\mathrm{Ch}\{\xi \le x\}=\mathcal {M}\{\xi \le x\}. \end{aligned}$$

Theorem 2.5

(Liu 2013b) Let \(\eta _1,\eta _2,\ldots \!,\eta _m\) be independent random variables with probability distributions \(\Psi _1,\Psi _2,\ldots ,\Psi _m\), respectively, and \(\tau _1,\tau _2,\ldots ,\tau _n\) be uncertain variables. Then the uncertain random variable

$$\begin{aligned} \xi =f(\eta _1,\eta _2,\ldots ,\eta _m,\tau _1,\tau _2,\ldots ,\tau _n) \end{aligned}$$

has a chance distribution

$$\begin{aligned} \Phi (x)=\int _{\mathfrak {R}^m}\!F(x;y_1,y_2,\ldots \!,y_m)\mathrm{d}\Psi _1(y_1)\mathrm{d}\Psi _2(y_2) \ldots \mathrm{d}\Psi _m(y_m) \end{aligned}$$

where \(F(x;y_1,y_2,\ldots ,y_m)\) is the uncertainty distribution of the uncertain variable

$$\begin{aligned} f(y_1,y_2,\ldots ,y_m,\tau _1,\tau _2,\ldots ,\tau _n) \end{aligned}$$

for any real numbers \(y_1,\ldots ,y_m\).

Definition 2.9

(Liu 2013a) Let \(\xi \) be an uncertain random variable. Then its expected value is

$$\begin{aligned} E[\xi ]=\int _0^{+\infty }\mathrm{Ch}\{\xi \ge r\}\mathrm{d}r-\int _{-\infty }^0 \mathrm{Ch}\{\xi \le r\}\mathrm{d}r \end{aligned}$$

provided that at least one of the two integrals is finite.

For an uncertain random variable \(\xi \) with chance distribution \(\Phi (x)\), its expected value can be expressed as

$$\begin{aligned} E[\xi ]=\displaystyle \int _0^{+\infty }(1-\Phi (x))\mathrm{d}x-\int _{-\infty }^0\Phi (x)\mathrm{d}x. \end{aligned}$$

If \(\Phi (x)\) is regular, then

$$\begin{aligned} E[\xi ]=\displaystyle \int _0^{1}\Phi ^{-1}(\alpha )\mathrm{d}\alpha . \end{aligned}$$

Theorem 2.6

(Liu 2013b) Let \(\eta _1,\eta _2,\ldots ,\eta _m\) be independent random variables with probability distributions \(\Psi _1,\Psi _2,\ldots ,\Psi _m\), respectively, and let \(\tau _1,\tau _2\), \(\ldots ,\tau _n\) be uncertain variables. Then the uncertain random variable

$$\begin{aligned} \xi =f(\eta _1,\ldots ,\eta _m,\tau _1,\ldots ,\tau _n) \end{aligned}$$

has an expected value

$$\begin{aligned} E[\xi ]=\int _{\mathfrak {R}^m}E[f(y_1,\ldots ,y_m,\tau _1,\ldots ,\tau _n)] \mathrm{d}\Psi _1(y_1)\ldots \mathrm{d}\Psi _m(y_m) \end{aligned}$$

where \(E[f(y_1,\ldots \!,y_m,\tau _1,\ldots \!,\tau _n)]\) is the expected value of the uncertain variable \(f(y_1,\ldots \!,y_m,\) \(\tau _1,\ldots ,\tau _n)\) for any given real numbers \(y_1,\ldots ,y_m\).

3 Reliability of uncertain random system

A function f is called a Boolean function if it maps \(\{0,1\}^n\) to \(\{0,1\}.\) It is usually used to model the structure of a Boolean system.

Definition 3.1

Assume that a Boolean system \(\xi \) is comprised of n components \(\xi _1,\xi _2,\ldots \!,\xi _n\). Then a Boolean function f is called its structure function if

$$\begin{aligned} \xi =1\quad \text{ if } \text{ and } \text{ only } \text{ if } f(\xi _1,\xi _2,\ldots ,\xi _n)=1. \end{aligned}$$
(1)

Obviously, when f is the structure function of the system, we also have \(\xi =0\) if and only if \(f(\xi _1,\xi _2,\ldots ,\xi _n)=0.\) For a series system containing n components, the structure function is

$$\begin{aligned} f(\xi _1,\ldots ,\xi _n)=\bigwedge _{i=1}^n\xi _i. \end{aligned}$$

For a parallel system containing n components, the structure function is

$$\begin{aligned} f(\xi _1,\ldots ,\xi _n)=\bigvee _{i=1}^n\xi _i. \end{aligned}$$

For a k-out-of-n system, the structure function is

$$\begin{aligned} f(\xi _1,\ldots ,\xi _n)=\left\{ \begin{array}{ll} 1, &{}\quad \text{ if } \sum \limits _{i=1}^n \xi _i\ge k \\ 0, &{}\quad \text{ if } \sum \limits _{i=1}^n \xi _i<k . \end{array}\right. \end{aligned}$$
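The three structure functions above can be encoded directly as Boolean functions on the component states. The following sketch (names are ours) mirrors the formulas:

```python
def series(*x):
    return min(x)        # 1 iff every component works

def parallel(*x):
    return max(x)        # 1 iff at least one component works

def k_out_of_n(k):
    def f(*x):
        return 1 if sum(x) >= k else 0   # 1 iff at least k components work
    return f
```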

In a complex system, some components may have enough samples to estimate their probability distributions and can be regarded as random variables, while others may have no samples and can only be evaluated by experts and regarded as uncertain variables. In this case, the system cannot be modeled simply as a stochastic system or as an uncertain system. We therefore employ uncertain random variables to model the system and analyze its reliability based on chance theory.

Definition 3.2

The reliability index of an uncertain random system \(\xi \) is defined as the chance measure that the system is working, i.e.,

$$\begin{aligned} Reliability=\mathrm{Ch}\{\xi =1\}. \end{aligned}$$
(2)

If all uncertain random components degenerate to random ones, then the reliability index is the probability measure that the system is working. If all uncertain random components degenerate to uncertain ones, then the reliability index (Liu 2010b) is the uncertain measure that the system is working.

Example 3.1

(Series System) Consider a series system containing independent random components \(\xi _1,\xi _2,\) \(\ldots \!,\xi _m\) with reliabilities \(a_1,a_2,\ldots ,a_m\), and independent uncertain components \(\eta _1,\eta _2,\ldots ,\eta _n\) with reliabilities \(b_1,b_2,\ldots ,b_n\), respectively. Since the structure function is

$$\begin{aligned} f(\xi _1,\ldots ,\xi _m,\eta _1,\ldots ,\eta _n)=\left( \bigwedge _{i=1}^m\xi _i\right) \wedge \left( \bigwedge _{j=1}^n\eta _j\right) , \end{aligned}$$

we have

$$\begin{aligned} Reliability&=\mathrm{Ch}\left\{ \left( \bigwedge _{i=1}^m\xi _i\right) \wedge \left( \bigwedge _{j=1}^n\eta _j\right) =1\right\} \\&=\mathrm{Ch}\left\{ \left( \bigwedge _{i=1}^m\xi _i=1\right) \cap \left( \bigwedge _{j=1}^n\eta _j=1\right) \right\} \\&=\mathrm{Pr}\left\{ \bigcap _{i=1}^m(\xi _i=1)\right\} \times \mathcal {M}\left\{ \bigcap _{j=1}^n(\eta _j=1)\right\} \\&=\left( \prod _{i=1}^m\mathrm{Pr}\{\xi _i=1\}\right) \times \left( \bigwedge _{j=1}^n\mathcal {M}\{\eta _j=1\}\right) \\&=\left( \prod _{i=1}^m a_i \right) \cdot \left( \bigwedge _{j=1}^n b_j\right) . \end{aligned}$$

Remark 3.1

If the series system degenerates to a system containing only random components \(\xi _1,\xi _2,\ldots ,\xi _m\) with reliabilities \(a_1,a_2,\ldots ,a_m\), then

$$\begin{aligned} Reliability= \prod _{i=1}^m a_i. \end{aligned}$$

If the series system degenerates to a system containing only uncertain components \(\eta _1,\eta _2,\ldots \!,\eta _n\) with reliabilities \(b_1,b_2,\ldots ,b_n\), then

$$\begin{aligned} Reliability=\bigwedge _{j=1}^n b_j. \end{aligned}$$
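Example 3.1 and Remark 3.1 yield a closed form that is immediate to compute: the product of the random reliabilities times the minimum of the uncertain ones. A sketch (function name is ours):

```python
from math import prod

def series_reliability(a, b):
    """Series-system reliability of Example 3.1:
    (product of random reliabilities a_i) * (min of uncertain reliabilities b_j)."""
    r = prod(a)          # prod of an empty list is 1, covering the pure-uncertain case
    if b:
        r *= min(b)      # an empty b covers the pure-random case of Remark 3.1
    return r

# series_reliability([0.9, 0.8], [0.7, 0.95]) == 0.9 * 0.8 * 0.7
```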

Example 3.2

(Parallel System) Consider a parallel system containing independent random components \(\xi _1,\xi _2,\ldots ,\xi _m\) with reliabilities \(a_1,a_2,\ldots ,a_m\), and independent uncertain components \(\eta _1,\eta _2,\ldots ,\eta _n\) with reliabilities \(b_1,b_2,\ldots ,b_n\), respectively. Since the structure function is

$$\begin{aligned} f(\xi _1,\ldots ,\xi _m,\eta _1,\ldots ,\eta _n)=\left( \bigvee _{i=1}^m\xi _i\right) \vee \left( \bigvee _{j=1}^n\eta _j\right) , \end{aligned}$$

we have

$$\begin{aligned} Reliability&=\mathrm{Ch}\left\{ \left( \bigvee _{i=1}^m\xi _i\right) \vee \left( \bigvee _{j=1}^n\eta _j\right) =1\right\} \\&=1-\mathrm{Ch}\left\{ \left( \bigvee _{i=1}^m\xi _i\right) \vee \left( \bigvee _{j=1}^n\eta _j\right) =0\right\} \\&=1-\mathrm{Ch}\left\{ \left( \bigcap _{i=1}^m(\xi _i=0)\right) \cap \left( \bigcap _{j=1}^n(\eta _j=0)\right) \right\} \\&=1-\mathrm{Pr}\left\{ \bigcap _{i=1}^m(\xi _i=0)\right\} \times \mathcal {M}\left\{ \bigcap _{j=1}^n(\eta _j=0)\right\} \\&=1-\left( \prod _{i=1}^m\mathrm{Pr}\{\xi _i=0\}\right) \times \left( \bigwedge _{j=1}^n\mathcal {M}\{\eta _j=0\}\right) \\&=1-\left( \prod _{i=1}^m (1-a_i) \right) \cdot \left( \bigwedge _{j=1}^n (1-b_j)\right) . \end{aligned}$$

Remark 3.2

If the parallel system degenerates to a system containing only random components \(\xi _1,\xi _2,\ldots ,\xi _m\) with reliabilities \(a_1,a_2,\ldots ,a_m\), then

$$\begin{aligned} Reliability=1- \prod _{i=1}^m (1-a_i) . \end{aligned}$$

If the parallel system degenerates to a system containing only uncertain components \(\eta _1,\eta _2,\ldots \!,\eta _n\) with reliabilities \(b_1,b_2,\ldots ,b_n\), then

$$\begin{aligned} Reliability=1-\left( \bigwedge _{j=1}^n (1-b_j)\right) =\bigvee _{j=1}^n b_j. \end{aligned}$$
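The parallel-system formula of Example 3.2 can be computed in the same way (function name is ours):

```python
from math import prod

def parallel_reliability(a, b):
    """Parallel-system reliability of Example 3.2:
    1 - (product of (1 - a_i)) * (min of (1 - b_j))."""
    q = prod(1 - x for x in a)       # empty a contributes the factor 1
    if b:
        q *= min(1 - y for y in b)   # min(1 - b_j) = 1 - max(b_j)
    return 1 - q

# parallel_reliability([0.9, 0.8], [0.7, 0.95]) -> 1 - 0.1*0.2*0.05 = 0.999
```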

4 Reliability index formula

This section aims at giving a formula to calculate the reliability of a system involving both random variables and uncertain variables.

Theorem 4.1

Assume that a Boolean system has a structure function f and contains independent random components \(\eta _1,\eta _2,\ldots ,\eta _m\) with reliabilities \(a_1,a_2,\ldots ,a_m\), respectively, and independent uncertain components \(\tau _1,\tau _2,\ldots ,\tau _n\) with reliabilities \(b_1,b_2,\) \(\ldots ,b_n\), respectively. Then the reliability index of the uncertain random system is

$$\begin{aligned} Reliability=\sum _{(y_1,\ldots ,y_m)\in \{0,1\}^m}\left( \prod _{i=1}^m \mu _i(y_i)\right) \cdot Z(y_1,y_2,\ldots ,y_m) \end{aligned}$$
(3)

where

$$\begin{aligned} Z(y_1,\ldots ,y_m)= & {} \left\{ \!\begin{array}{l} \displaystyle \sup _{f(y_1,\ldots ,y_m,z_1,\ldots ,z_n)=1}\min _{1\le j\le n}\nu _j(z_j),\\ \quad \text{ if } \displaystyle \sup _{f(y_1,\ldots ,y_m,z_1,\ldots ,z_n)=1}\min _{1\le j\le n}\nu _j(z_j)<0.5\\ \displaystyle 1-\sup _{f(y_1,\ldots ,y_m,z_1,\ldots ,z_n)=0}\min _{1\le j\le n}\nu _j(z_j),\\ \quad \text{ if } \displaystyle \sup _{f(y_1,\ldots ,y_m,z_1,\ldots ,z_n)=1}\min _{1\le j\le n}\nu _j(z_j)\ge 0.5, \end{array}\right. \end{aligned}$$
(4)
$$\begin{aligned} \mu _i(y_i)= & {} \left\{ \begin{array}{ll} a_i &{}\quad \text{ if } y_i=1 \\ 1-a_i &{}\quad \text{ if } y_i=0, \end{array}\right. \quad (i=1,2,\ldots ,m),\end{aligned}$$
(5)
$$\begin{aligned} \nu _j(z_j)= & {} \left\{ \begin{array}{ll} b_j &{}\quad \text{ if } z_j=1 \\ 1-b_j &{}\quad \text{ if } z_j=0 \end{array}\right. \quad (j=1,2,\ldots ,n). \end{aligned}$$
(6)

Proof

It follows from Definition 3.1 of structure function and Definition 3.2 of reliability index that

$$\begin{aligned} Reliability=\mathrm{Ch}\{f(\eta _1,\ldots ,\eta _m,\tau _1,\ldots ,\tau _n)=1\}. \end{aligned}$$

By the operational law of uncertain random variables (Theorem 2.5), we have

$$\begin{aligned} Reliability=\sum _{(y_1,\ldots ,y_m)\in \{0,1\}^m}\left( \prod _{i=1}^m \mu _i(y_i)\right) \cdot \mathcal {M}\{f(y_1,\ldots ,y_m,\tau _1,\ldots ,\tau _n)=1\}. \end{aligned}$$

When \((y_1,\ldots ,y_m)\) is given,

$$\begin{aligned} f(y_1,\ldots ,y_m,\tau _1,\ldots ,\tau _n) \end{aligned}$$

is a Boolean uncertain variable. It follows from the operational law of Boolean systems (Theorem 2.2) that

$$\begin{aligned} \mathcal {M}\{f(y_1,\ldots ,y_m,\tau _1,\ldots ,\tau _n)=1\}=Z(y_1,\ldots ,y_m) \end{aligned}$$

which is determined by (4). The proof is complete.
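Formula (3) can be implemented by brute force: enumerate the \(2^m\) states of the random components, weight each state by its probability, and compute \(Z\) from Theorem 2.2. A Python sketch (ours; practical only for small m and n):

```python
from itertools import product
from math import prod

def system_reliability(f, a, b):
    """Reliability index of Theorem 4.1. f takes the m random-component
    states followed by the n uncertain-component states; a and b are the
    corresponding reliabilities."""
    m, n = len(a), len(b)
    mu = lambda i, y: a[i] if y == 1 else 1 - a[i]     # formula (5)
    nu = lambda j, z: b[j] if z == 1 else 1 - b[j]     # formula (6)

    def Z(y):   # formula (4), i.e. Theorem 2.2 with y fixed
        def sup_min(target):
            vals = [min((nu(j, z[j]) for j in range(n)), default=1.0)
                    for z in product((0, 1), repeat=n) if f(*y, *z) == target]
            return max(vals) if vals else 0.0
        s1 = sup_min(1)
        return s1 if s1 < 0.5 else 1 - sup_min(0)

    return sum(prod(mu(i, y[i]) for i in range(m)) * Z(y)
               for y in product((0, 1), repeat=m))
```

For instance, for a parallel–series structure \((\xi _1\vee \eta _1)\wedge (\xi _2\vee \eta _2)\) this enumeration reproduces the closed form \(a_1a_2+a_1(1-a_2)b_2+(1-a_1)a_2b_1+(1-a_1)(1-a_2)(b_1\wedge b_2)\).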

Example 4.1

(k-out-of-n System) Consider a k-out-of-n system containing independent random components \(\xi _1,\xi _2,\ldots \!,\) \(\xi _m\) with reliabilities \(a_1,a_2,\ldots ,a_m\), respectively, and independent uncertain components \(\eta _1,\eta _2,\ldots \!,\eta _{n-m}\) with reliabilities \(b_1,b_2,\ldots ,b_{n-m}\), respectively. Note that the structure function is

$$\begin{aligned} f(y_1,y_2,\ldots ,y_m,z_1,z_2,\ldots ,z_{n-m})=\left\{ \begin{array}{ll} 1, &{}\quad \text{ if } \sum \limits _{i=1}^m y_i +\sum \limits _{j=1}^{n-m} z_j\ge k \\ 0, &{}\quad \text{ if } \sum \limits _{i=1}^m y_i +\sum \limits _{j=1}^{n-m} z_j<k . \end{array}\right. \end{aligned}$$

It follows from Theorem 4.1 that the reliability of the uncertain random system is

$$\begin{aligned} Reliability=\sum _{(y_1,\ldots ,y_m)\in \{0,1\}^m}\left( \prod _{i=1}^m \mu _i(y_i)\right) \cdot Z(y_1,y_2,\ldots ,y_m) \end{aligned}$$

in which

$$\begin{aligned} \mu _i(y_i)= & {} \left\{ \begin{array}{ll} a_i &{} \text{ if } y_i=1 \\ 1-a_i &{} \text{ if } y_i=0, \end{array}\right. \\&Z(y_1,y_2,\ldots ,y_m)=\mathcal {M}\left\{ \sum _{i=1}^my_i+\sum _{j=1}^{n-m}\eta _j\ge k\right\} \\&\quad =\,\left\{ \begin{array}{ll} \text{ the } (k-\sum \limits _{i=1}^m y_i) \text{ th } \text{ largest } \text{ value } \text{ of } b_1,b_2,\ldots ,b_{n-m}, &{}\quad \text{ if } \sum \limits _{i=1}^m y_i< k\\ 1, &{}\quad \text{ if } \sum \limits _{i=1}^m y_i\ge k. \end{array}\right. \end{aligned}$$

Remark 4.1

If the k-out-of-n system degenerates to a system containing only random components \(\xi _1,\xi _2,\ldots ,\xi _n\) with reliabilities \(a_1,a_2,\ldots ,a_n\), respectively, then

$$\begin{aligned} Reliability= \sum _{y_1+\cdots +y_n\ge k}\left( \prod _{i=1}^n \mu _i(y_i)\right) . \end{aligned}$$

If the k-out-of-n system degenerates to a system containing only uncertain components \(\eta _1,\eta _2,\ldots \!,\eta _n\) with reliabilities \(b_1,b_2,\ldots ,b_n\), respectively, then

$$\begin{aligned} Reliability=\text{ the } k \text{ th } \text{ largest } \text{ value } \text{ of } b_1,b_2,\ldots ,b_n . \end{aligned}$$
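In Example 4.1, \(Z(y_1,\ldots ,y_m)\) reduces to an order statistic of the uncertain reliabilities. A small helper (ours; the guard for \(k-\sum y_i\) exceeding the number of uncertain components, in which case the system cannot reach k, is our addition, since the example implicitly assumes this does not occur):

```python
def k_out_of_n_Z(k, y_sum, b):
    """Z of Example 4.1: M{ y_sum + (number of working uncertain
    components) >= k } for uncertain component reliabilities b."""
    need = k - y_sum
    if need <= 0:
        return 1.0           # the random components alone reach k
    if need > len(b):
        return 0.0           # even all uncertain components are not enough
    return sorted(b, reverse=True)[need - 1]   # the need-th largest of b

# with b = [0.6, 0.9, 0.7] and k - y_sum = 2, the 2nd largest value is 0.7
```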

Example 4.2

(Parallel–series system) Consider a simple parallel–series system in Fig. 1 containing independent random components \(\xi _1,\xi _2\) with reliabilities \(a_1,a_2\), respectively, and independent uncertain components \(\eta _1,\eta _2\) with reliabilities \(b_1,b_2,\) respectively.

Fig. 1 Parallel–series system

Note that the structure function is

$$\begin{aligned} f(\xi _1,\xi _2,\eta _1,\eta _2)=(\xi _1\vee \eta _1)\wedge (\xi _2\vee \eta _2). \end{aligned}$$

It follows from Theorem 4.1 that the reliability index is

$$\begin{aligned} \textit{Reliability}= & {} \mathrm{Ch}\{(\xi _1\vee \eta _1)\wedge (\xi _2\vee \eta _2)=1\}\\= & {} \mathrm{Pr}\{\xi _1=1,\xi _2=1\}\cdot Z(1,1)+\mathrm{Pr}\{\xi _1=1,\xi _2=0\}\cdot Z(1,0)\\&+\,\mathrm{Pr}\{\xi _1=0,\xi _2=1\}\cdot Z(0,1)+\mathrm{Pr}\{\xi _1=0,\xi _2=0\}\cdot Z(0,0)\\= & {} a_1a_2\cdot Z(1,1)+a_1(1-a_2)\cdot Z(1,0)+(1-a_1)a_2\cdot Z(0,1)\\&+\,(1-a_1)(1-a_2)\cdot Z(0,0) \end{aligned}$$

where

$$\begin{aligned}&Z(1,1)=\mathcal {M}\{(1\vee \eta _1)\wedge (1\vee \eta _2)=1\}=\mathcal {M}\{1\wedge 1=1\}=1,\\&Z(1,0)=\mathcal {M}\{(1\vee \eta _1)\wedge (0\vee \eta _2)=1\}=\mathcal {M}\{1\wedge \eta _2=1\}=\mathcal {M}\{\eta _2=1\}=b_2,\\&Z(0,1)=\mathcal {M}\{(0\vee \eta _1)\wedge (1\vee \eta _2)=1\}=\mathcal {M}\{\eta _1\wedge 1=1\}=\mathcal {M}\{\eta _1=1\}=b_1,\\&Z(0,0)=\mathcal {M}\{(0\vee \eta _1)\wedge (0\vee \eta _2)=1\}=\mathcal {M}\{\eta _1\wedge \eta _2=1\}=b_1\wedge b_2. \end{aligned}$$

Thus, the reliability index of the parallel–series system is

$$\begin{aligned} Reliability=a_1a_2+a_1(1-a_2)b_2+(1-a_1)a_2b_1+(1-a_1)(1-a_2)(b_1\wedge b_2). \end{aligned}$$

Example 4.3

(Series–parallel system) Consider a simple series–parallel system in Fig. 2 containing independent random components \(\xi _1,\xi _2\) with reliabilities \(a_1,a_2\), respectively, and independent uncertain components \(\eta _1,\eta _2\) with reliabilities \(b_1,b_2,\) respectively.

Fig. 2 Series–parallel system

Note that the structure function is

$$\begin{aligned} f(\xi _1,\xi _2,\eta _1,\eta _2)=(\xi _1\wedge \eta _1)\vee (\xi _2\wedge \eta _2). \end{aligned}$$

It follows from Theorem 4.1 that the reliability index is

$$\begin{aligned} \textit{Reliability}= & {} \mathrm{Ch}\{(\xi _1\wedge \eta _1)\vee (\xi _2\wedge \eta _2)=1\}\\= & {} \mathrm{Pr}\{\xi _1=1,\xi _2=1\}\cdot Z(1,1)+\mathrm{Pr}\{\xi _1=1,\xi _2=0\}\cdot Z(1,0)\\&+\, \mathrm{Pr}\{\xi _1=0,\xi _2=1\}\cdot Z(0,1)+\mathrm{Pr}\{\xi _1=0,\xi _2=0\}\cdot Z(0,0)\\= & {} a_1a_2\cdot Z(1,1)+a_1(1-a_2)\cdot Z(1,0)+(1-a_1)a_2\cdot Z(0,1)\\&+\, (1-a_1)(1-a_2)\cdot Z(0,0) \end{aligned}$$

where

$$\begin{aligned}&Z(1,1)=\mathcal {M}\{(1\wedge \eta _1)\vee (1\wedge \eta _2)=1\}=\mathcal {M}\{\eta _1\vee \eta _2=1\}=b_1\vee b_2,\\&Z(1,0)=\mathcal {M}\{(1\wedge \eta _1)\vee (0\wedge \eta _2)=1\}=\mathcal {M}\{\eta _1\vee 0=1\}=\mathcal {M}\{\eta _1=1\}=b_1,\\&Z(0,1)=\mathcal {M}\{(0\wedge \eta _1)\vee (1\wedge \eta _2)=1\}=\mathcal {M}\{0\vee \eta _2=1\}=\mathcal {M}\{\eta _2=1\}=b_2,\\&Z(0,0)=\mathcal {M}\{(0\wedge \eta _1)\vee (0\wedge \eta _2)=1\}=\mathcal {M}\{0\vee 0=1\}=0. \end{aligned}$$

Thus, the reliability index of the series–parallel system is

$$\begin{aligned} Reliability=a_1a_2(b_1\vee b_2)+a_1(1-a_2)b_1+(1-a_1)a_2b_2. \end{aligned}$$

Example 4.4

(Bridge System) Consider a simple bridge system in Fig. 3 containing independent random components \(\xi _1,\xi _2\) with reliabilities \(a_1,a_2\), respectively, and independent uncertain components \(\eta _1,\eta _2,\eta _3\) with reliabilities \(b_1,b_2,b_3\) respectively.

Fig. 3 Bridge system

Note that the structure function is

$$\begin{aligned} f(\xi _1,\xi _2,\eta _1,\eta _2,\eta _3)=(\xi _1\wedge \eta _3)\vee (\eta _1\wedge \xi _2)\vee (\xi _1\wedge \eta _2\wedge \xi _2)\vee (\eta _1\wedge \eta _2\wedge \eta _3). \end{aligned}$$

It follows from Theorem 4.1 that the reliability index is

$$\begin{aligned}&Reliability\\&\quad =\,\mathrm{Ch}\{(\xi _1\wedge \eta _3)\vee (\eta _1\wedge \xi _2)\vee (\xi _1\wedge \eta _2\wedge \xi _2)\vee (\eta _1\wedge \eta _2\wedge \eta _3)=1\}\\&\quad =\,\mathrm{Pr}\{\xi _1=1,\xi _2=1\}\cdot Z(1,1)+\mathrm{Pr}\{\xi _1=1,\xi _2=0\}\cdot Z(1,0)\\&\qquad +\mathrm{Pr}\{\xi _1=0,\xi _2=1\}\cdot Z(0,1)+\mathrm{Pr}\{\xi _1=0,\xi _2=0\}\cdot Z(0,0)\\&\quad =\,a_1a_2\cdot Z(1,1)+a_1(1-a_2)\cdot Z(1,0)+(1-a_1)a_2\cdot Z(0,1)\\&\qquad +(1-a_1)(1-a_2)\cdot Z(0,0) \end{aligned}$$

where

$$\begin{aligned} Z(1,1)&= \mathcal {M}\{(1\wedge \eta _3)\vee (\eta _1\wedge 1)\vee (1\wedge \eta _2\wedge 1)\vee (\eta _1\wedge \eta _2\wedge \eta _3)=1\}\\&=\mathcal {M}\{\eta _3\vee \eta _1\vee \eta _2\vee (\eta _1\wedge \eta _2\wedge \eta _3)=1\}\\&=\mathcal {M}\{\eta _3\vee \eta _1\vee \eta _2=1\}\\&=b_1\vee b_2\vee b_3,\\ Z(1,0)&=\mathcal {M}\{(1\wedge \eta _3)\vee (\eta _1\wedge 0)\vee (1\wedge \eta _2\wedge 0)\vee (\eta _1\wedge \eta _2\wedge \eta _3)=1\}\\&=\mathcal {M}\{\eta _3\vee 0\vee 0\vee (\eta _1\wedge \eta _2\wedge \eta _3)=1\}\\&=\mathcal {M}\{\eta _3=1\}\\&=b_3,\\ Z(0,1)&=\mathcal {M}\{(0\wedge \eta _3)\vee (\eta _1\wedge 1)\vee (0\wedge \eta _2\wedge 1)\vee (\eta _1\wedge \eta _2\wedge \eta _3)=1\}\\&=\mathcal {M}\{0\vee \eta _1\vee 0\vee (\eta _1\wedge \eta _2\wedge \eta _3)=1\}\\&=\mathcal {M}\{\eta _1=1\}\\&=b_1,\\ Z(0,0)&=\mathcal {M}\{(0\wedge \eta _3)\vee (\eta _1\wedge 0)\vee (0\wedge \eta _2\wedge 0)\vee (\eta _1\wedge \eta _2\wedge \eta _3)=1\}\\&=\mathcal {M}\{0\vee 0\vee 0\vee (\eta _1\wedge \eta _2\wedge \eta _3)=1\}\\&=\mathcal {M}\{\eta _1\wedge \eta _2\wedge \eta _3=1\}\\&=b_1\wedge b_2\wedge b_3. \end{aligned}$$

Thus, the reliability index of the bridge system is

$$\begin{aligned} Reliability= & {} a_1a_2(b_1\vee b_2\vee b_3)+a_1(1-a_2)b_3+(1-a_1)a_2b_1\\&+\,(1-a_1)(1-a_2)(b_1\wedge b_2\wedge b_3). \end{aligned}$$
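The closed form just derived is easy to evaluate numerically; a one-function sketch (name is ours):

```python
def bridge_reliability(a1, a2, b1, b2, b3):
    """Closed-form reliability of the bridge system of Example 4.4."""
    return (a1 * a2 * max(b1, b2, b3)
            + a1 * (1 - a2) * b3
            + (1 - a1) * a2 * b1
            + (1 - a1) * (1 - a2) * min(b1, b2, b3))

# bridge_reliability(0.9, 0.8, 0.7, 0.6, 0.5)
#   = 0.504 + 0.09 + 0.056 + 0.01 = 0.66
```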

5 Conclusion

This paper proposed the concept of a reliability index for uncertain random systems and derived a reliability index theorem to calculate it. Moreover, some common systems in uncertain random environments, such as the k-out-of-n system, parallel–series system, series–parallel system and bridge system, were discussed.