1 Introduction

In the framework of classical two-valued logic, a proposition can take only two values, 0 or 1, meaning that it is absolutely false or absolutely true. Thus two-valued logic can deal only with propositions built on precise knowledge. In order to handle vague knowledge that is neither absolutely false nor absolutely true, Lukasiewicz proposed a multi-valued logic in the 1920s as an extension of two-valued logic. As a form of multi-valued logic, fuzzy logic was introduced together with fuzzy set theory in Zadeh (1965), which provided a method to construct a three-valued logic via fuzzy sets. To the best of our knowledge, the term “fuzzy logic” first appeared in Marinos (1969), who pointed out that fuzzy logic can deal with propositions whose truth values range continuously over [0,1]. After that, Lee (1972) summarized some properties that the truth values of propositions are supposed to satisfy in fuzzy logic, including \(T(A\wedge B)=T(A)\wedge T(B),\) \(T(A\vee B)=T(A)\vee T(B)\) and \(T(\lnot A)=1-T(A)\) for propositions A and B. Zadeh (1973) employed fuzzy sets to describe linguistic variables such as “small” and “old”, and proposed fuzzy “If-Then” inference rules which embody the idea of the fuzzy logic controller.

In order to combine probability theory and multi-valued logic, Nilsson (1986) proposed a probabilistic logic which regards the truth value of a proposition as the probability that a Boolean random variable takes the value 1. Meanwhile, he showed the consistency between probabilistic logic and classical logic. In addition, Nilsson (1986) proposed probabilistic entailment as an inverse problem of probabilistic logic, suggesting that the truth value of a proposition be entailed by solving a system of linear equations. The ideas of describing the truth value of a proposition via the possibility measure (Zadeh 1978) or the necessity measure (Zadeh 1979) first appeared in Prade (1982). After that, Dubois and Prade (1987, 1990) investigated resolution principles with the necessity measure and the possibility measure, respectively. In addition, Dubois and Prade (1988) gave a detailed introduction to possibilistic logic. Liu and Liu (2002) proposed a credibility measure, where the credibility of an event is the average of its possibility and its necessity. After that, Li and Liu (2009a) proposed a credibilistic logic which describes the truth value of a proposition via the credibility measure. As a combination of probabilistic logic and credibilistic logic, Li and Liu (2009b) proposed a hybrid logic and showed that it satisfies the laws of tautology, contradiction and truth conservation.

Besides randomness and fuzziness, human uncertainty is another source of indeterminacy. In order to deal with human uncertainty, an uncertainty theory was founded by Liu (2007) and refined by Liu (2009a) based on the normality, duality, subadditivity and product axioms. These axioms are essentially the foundation of the uncertain measure, which indicates the belief degree that an uncertain event will occur. Meanwhile, the concept of uncertain variable was defined to model a quantity with human uncertainty, and the concepts of uncertainty distribution, expected value and variance were proposed to describe an uncertain variable. Peng and Iwamura (2010) gave a necessary and sufficient condition for a real function to be the uncertainty distribution of some uncertain variable. Liu and Ha (2010) gave a formula to calculate the expected value of a function of uncertain variables.

In order to deal with knowledge subject to human uncertainty, Li and Liu (2009b) proposed an uncertain logic in the framework of uncertainty theory, which regards a proposition as an uncertain variable taking Boolean values. They proved that uncertain logic is consistent with classical logic, that is, it satisfies the laws of tautology, contradiction and truth conservation. Then Chen and Ralescu (2011) gave a formula to calculate the truth value of a Boolean function of uncertain propositions. In addition, Zhang and Li (2014) studied a first-order predicate logic with human uncertainty. As an inverse problem of uncertain logic, Liu (2009b) proposed an uncertain entailment to calculate the truth value of an uncertain proposition when the truth values of some related uncertain propositions are given.

In order to deal with a complex system involving both random factors and human uncertainty, Liu (2013a, b) founded a chance theory based on a chance measure. The concept of uncertain random variable was proposed as a combination of the uncertain variable and the random variable. So far, uncertain random programming (Liu 2013b), uncertain random graphs (Liu 2014), uncertain random risk analysis (Liu and Ralescu 2014), uncertain random reliability analysis (Wen and Kang 2016) and uncertain random processes (Gao and Yao 2015) have been studied in depth.

In this paper, we study uncertain random logic and uncertain random entailment in the framework of chance theory. The rest of this paper is organized as follows. Sections 2 and 3 introduce the uncertain variable and the uncertain random variable, respectively. Then probabilistic logic and the probabilistic entailment model are introduced in Sect. 4. Uncertain random logic is proposed in Sect. 5, and its consistency with classical logic is proved. The uncertain random entailment model is presented in Sect. 6, where the modus ponens, the modus tollens and the hypothetical syllogism are studied as applications. Finally, some conclusions are drawn in Sect. 7.

2 Uncertain variable

Let \(\Gamma \) be a nonempty set, and \(\mathcal{L}\) a \(\sigma \)-algebra over \(\Gamma \). Each element \(\Lambda \) of \(\mathcal{L}\) is called an event. A set function \(\mathcal{M}\) from \(\mathcal{L}\) to [0, 1] is called an uncertain measure (Liu 2007) if it satisfies the following axioms:

  • Axiom 1. (Normality Axiom) \(\mathcal{M}\{\Gamma \}=1\) for the universal set \(\Gamma \);

  • Axiom 2. (Duality Axiom) \(\mathcal{M}\{\Lambda \}+\mathcal{M}\{\Lambda ^{c}\}=1\) for any event \(\Lambda \);

  • Axiom 3. (Subadditivity Axiom) For every countable sequence of events \(\Lambda _1,\Lambda _2,\ldots \), we have

    $$\begin{aligned} \displaystyle \mathcal{M}\left\{ \bigcup _{i=1}^{\infty }\Lambda _i\right\} \le \sum _{i=1}^{\infty }\mathcal{M}\{\Lambda _i\}. \end{aligned}$$

    The triplet \((\Gamma ,\mathcal{L},\mathcal{M})\) is called an uncertainty space. In order to obtain the uncertain measure of a compound event, a product uncertain measure was defined by Liu (2009a), producing the fourth axiom of uncertainty theory:

  • Axiom 4. (Product Axiom) Let \((\Gamma _k,\mathcal{L}_k,\mathcal{M}_k)\) be uncertainty spaces for \(k=1,2,\ldots \). The product uncertain measure \(\mathcal{M}\) is an uncertain measure satisfying

    $$\begin{aligned} \mathcal{M}\left\{ \prod _{k=1}^{\infty }\Lambda _k\right\} = \bigwedge _{k=1}^{\infty }\mathcal{M}_k\{\Lambda _k\} \end{aligned}$$

    where \(\Lambda _k\) are arbitrarily chosen events from \(\mathcal{L}_k\) for \(k=1,2,\ldots \), respectively.

Definition 1

(Liu 2007) An uncertain variable is defined as a measurable function \(\xi \) from an uncertainty space \((\Gamma ,\mathcal{L},\mathcal{M})\) to the set of real numbers, i.e., for any Borel set B of real numbers, the set

$$\begin{aligned} \{\xi \in B\}=\{\gamma \in \Gamma \bigm |\xi (\gamma )\in B\} \end{aligned}$$

is an event.

In order to describe an uncertain variable in practice, the concept of uncertainty distribution is defined by

$$\begin{aligned} \Phi (x)=\mathcal{M}\left\{ \xi \le x\right\} ,\quad \forall x\in \mathfrak {R}. \end{aligned}$$

Peng and Iwamura (2010) proved that a function \(\Phi :\mathfrak {R}\rightarrow [0,1]\) is an uncertainty distribution if and only if it is a monotone increasing function other than \(\Phi (x)\equiv 0\) and \(\Phi (x)\equiv 1\). The expected value of an uncertain variable is its average value in the sense of the uncertain measure.

Definition 2

(Liu 2007) Let \(\xi \) be an uncertain variable. Then its expected value is defined by

$$\begin{aligned} E[\xi ]=\int _0^{+\infty }\mathcal{M}\{\xi \ge x\}\mathrm{d}x-\int _{-\infty }^0 \mathcal{M}\{\xi \le x\}\mathrm{d}x \end{aligned}$$

provided that at least one of the two integrals is finite.

For an uncertain variable \(\xi \) with an uncertainty distribution \(\Phi \), Liu (2007) proved that its expected value can be calculated by

$$\begin{aligned} E[\xi ]=\displaystyle \int _0^{+\infty }(1-\Phi (x))\mathrm{d}x-\int _{-\infty }^0\Phi (x)\mathrm{d}x. \end{aligned}$$
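As a numerical illustration (not part of the original text), the expected value formula above can be approximated by a Riemann sum. The sketch below assumes a linear uncertainty distribution with \(\Phi (x)=(x-a)/(b-a)\) on [a, b], whose expected value is known to be \((a+b)/2\); the helper names `linear_dist` and `expected_value` are our own.

```python
# Numerical sketch (not from the text): compute E[xi] from an uncertainty
# distribution Phi via
#   E[xi] = int_0^inf (1 - Phi(x)) dx - int_{-inf}^0 Phi(x) dx,
# tested on a linear uncertainty distribution on [a, b] with E[xi] = (a+b)/2.

def linear_dist(a, b):
    """Linear uncertainty distribution on [a, b] (our test case)."""
    def phi(x):
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return (x - a) / (b - a)
    return phi

def expected_value(phi, lo=-100.0, hi=100.0, n=200000):
    """Midpoint-rule approximation of the two defining integrals."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        if x >= 0:
            total += (1.0 - phi(x)) * h
        else:
            total -= phi(x) * h
    return total

print(expected_value(linear_dist(1.0, 3.0)))  # close to (1 + 3) / 2 = 2
```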

Theorem 1

(Liu 2010) Let \(\xi _1,\xi _2,\ldots ,\xi _n\) be independent uncertain variables with regular uncertainty distributions \(\Phi _1,\Phi _2,\) \(\ldots ,\) \(\Phi _n\), respectively. If the function \(f(x_1,x_2,\ldots ,x_n)\) is strictly increasing with respect to \(x_1,x_2,\ldots \), \(x_m\) and strictly decreasing with respect to \(x_{m+1},x_{m+2},\ldots ,x_n\), then the uncertain variable

$$\begin{aligned} \xi =f(\xi _1,\xi _2,\ldots ,\xi _n) \end{aligned}$$

has an uncertainty distribution

$$\begin{aligned} \Psi (x)\!=\!\!\!\!\sup _{f(x_1,x_2,\ldots ,x_n)\le x}\!\!\!\left( \!\min _{1\le i\le m}\!\Phi _i(x_i)\!\wedge \!\!\!\!\min _{m+1\le i\le n}\!(1-\Phi _i(x_i))\!\right) . \end{aligned}$$

In particular, when \(\xi _1,\xi _2,\ldots ,\xi _n\) are Boolean uncertain variables and f is a Boolean function, the uncertainty distribution of the uncertain variable \(f(\xi _1,\xi _2,\ldots ,\) \(\xi _n)\) can be obtained from the following theorem.

Theorem 2

(Liu 2010) Assume that \(\xi _1,\xi _2,\ldots ,\xi _n\) are independent Boolean uncertain variables, i.e.,

$$\begin{aligned} \xi _i=\left\{ \begin{array}{l} 1 \text{ with } \text{ uncertain } \text{ measure }\,\,\alpha _i,\\ 0 \text{ with } \text{ uncertain } \text{ measure }\,\,1-\alpha _i \end{array}\right. \end{aligned}$$

for \(i=1,2,\ldots ,n\). If f is a Boolean function, then

$$\begin{aligned} \xi =f(\xi _1,\xi _2,\ldots ,\xi _n) \end{aligned}$$

is a Boolean uncertain variable with

$$\begin{aligned} \mathcal{M}\{\xi =1\}=\left\{ \begin{array}{l} \sup \limits _{f(x_1,x_2,\ldots ,x_n)=1} \min \limits _{1\le i\le n}\nu _i(x_i),\\ \quad \text{ if } \sup \limits _{f(x_1,x_2,\ldots ,x_n)=1} \min \limits _{1\le i\le n}\nu _i(x_i)<0.5\\ 1-\sup \limits _{f(x_1,x_2,\ldots ,x_n)=0} \min \limits _{1\le i\le n}\nu _i(x_i),\\ \quad \text{ if } \sup \limits _{f(x_1,x_2,\ldots ,x_n)=1} \min \limits _{1\le i\le n}\nu _i(x_i)\ge 0.5, \end{array} \right. \end{aligned}$$

where \(x_i\) take values either 0 or 1, and \(\nu _i\) are defined by

$$\begin{aligned} \nu _i(x_i)=\left\{ \begin{array}{cl} \alpha _i,&\text{ if } x_i=1\\ 1-\alpha _i,&\text{ if } x_i=0 \end{array}\right. \end{aligned}$$

for \(i=1,2,\ldots ,n,\) respectively.

For example, for the aforementioned Boolean uncertain variables \(\xi _1,\xi _2,\ldots ,\xi _n\), the Boolean uncertain variable

$$\begin{aligned} \eta =\xi _1\wedge \xi _2\wedge \ldots \wedge \xi _n \end{aligned}$$

satisfies

$$\begin{aligned} \mathcal{M}\{\eta =1\}=\alpha _1\wedge \alpha _2\wedge \ldots \wedge \alpha _n, \end{aligned}$$

and the Boolean uncertain variable

$$\begin{aligned} \tau =\xi _1\vee \xi _2\vee \ldots \vee \xi _n \end{aligned}$$

satisfies

$$\begin{aligned} \mathcal{M}\{\tau =1\}=\alpha _1\vee \alpha _2\vee \ldots \vee \alpha _n. \end{aligned}$$
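The sup-min formula of Theorem 2 can be checked by brute-force enumeration over \(\{0,1\}^n\). The following sketch (with the hypothetical helper `uncertain_truth`, not part of the text) reproduces the conjunction and disjunction examples above.

```python
# Sketch (not from the text): Theorem 2's formula for M{f(xi_1,...,xi_n)=1}
# of a Boolean function of independent Boolean uncertain variables with
# truth values alpha_1,...,alpha_n, by enumerating {0,1}^n.

from itertools import product

def uncertain_truth(f, alphas):
    n = len(alphas)

    def nu(i, x):
        return alphas[i] if x == 1 else 1 - alphas[i]

    def sup_min(target):
        # sup over all Boolean arguments with f = target of min_i nu_i
        best = 0.0
        for ys in product((0, 1), repeat=n):
            if f(*ys) == target:
                best = max(best, min(nu(i, y) for i, y in enumerate(ys)))
        return best

    s1 = sup_min(1)
    return s1 if s1 < 0.5 else 1 - sup_min(0)

alphas = [0.9, 0.6, 0.7]
# Conjunction: truth value should be min(alphas) = 0.6
print(uncertain_truth(lambda *xs: int(all(xs)), alphas))
# Disjunction: truth value should be max(alphas) = 0.9
print(uncertain_truth(lambda *xs: int(any(xs)), alphas))
```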

3 Uncertain random variable

Let \((\Gamma ,\mathcal{L},\mathcal{M})\) be an uncertainty space and \((\Omega ,\mathcal{A},\mathrm{Pr})\) be a probability space. Then the product \((\Gamma ,\mathcal{L},\mathcal{M})\times (\Omega ,\mathcal{A},\mathrm{Pr})\) is called a chance space, which may also be written as a triple \((\Gamma \times \Omega ,\mathcal{L}\times \mathcal{A},\mathcal{M}\times \mathrm{Pr}).\)

Definition 3

(Liu 2013a) Let \((\Gamma ,\mathcal{L},\mathcal{M})\times (\Omega ,\mathcal{A},\mathrm{Pr})\) be a chance space, and let \(\Theta \in \mathcal{L}\times \mathcal{A}\) be an event. Then the chance measure of \(\Theta \) is

$$\begin{aligned} \mathrm{Ch}\{\Theta \}=\!\!\int _0^1\!\mathrm{Pr}\{\omega \in \Omega \mid \mathcal{M}\{\gamma \in \Gamma \mid (\gamma ,\omega )\in \Theta \}\ge x\}\mathrm{d}x. \end{aligned}$$

Definition 4

(Liu 2013a) An uncertain random variable \(\xi \) is a measurable function from a chance space \((\Gamma ,\mathcal{L},\mathcal{M})\) \(\times (\Omega ,\mathcal{A},\mathrm{Pr})\) to the set of real numbers \(\mathfrak {R}\) such that

$$\begin{aligned} \{\xi \in B\}=\{(\gamma ,\omega )\mid \xi (\gamma ,\omega )\in B\}\in \mathcal{L}\times \mathcal{A}\end{aligned}$$

for any Borel set B.

The concept of chance distribution for an uncertain random variable \(\xi \) is defined by

$$\begin{aligned} \Phi (x)=\mathrm{Ch}\left\{ \xi \le x\right\} ,\quad \forall x\in \mathfrak {R}. \end{aligned}$$

A function \(\Phi :\mathfrak {R}\rightarrow [0,1]\) is a chance distribution if and only if it is a monotone increasing function except \(\Phi (x)\equiv 0\) and \(\Phi (x)\equiv 1\). The expected value of an uncertain random variable \(\xi \) is

$$\begin{aligned} E[\xi ]=\int _0^{+\infty }\mathrm{Ch}\{\xi \ge x\}\mathrm{d}x-\int _{-\infty }^0\mathrm{Ch}\{\xi \le x\}\mathrm{d}x \end{aligned}$$

provided that at least one of the two integrals is finite. Let \(\Phi \) denote the chance distribution of \(\xi \). Then we have

$$\begin{aligned} E[\xi ]=\int _0^{+\infty }(1-\Phi (x))\mathrm{d}x-\int _{-\infty }^0\Phi (x)\mathrm{d}x. \end{aligned}$$

Theorem 3

(Liu 2013b) Let \(\eta _{1}, \eta _{2},\ldots , \eta _{m}\) be independent random variables with probability distributions \(\Psi _{1},\) \(\Psi _{2},\) \(\ldots ,\) \(\Psi _{m},\) respectively, and let \(\tau _{1},\tau _{2},\ldots ,\tau _{n}\) be uncertain variables. Then the uncertain random variable

$$\begin{aligned} \xi =f(\eta _{1}, \eta _{2},\dots , \eta _{m},\tau _{1},\tau _{2},\ldots ,\tau _{n}) \end{aligned}$$

has a chance distribution

$$\begin{aligned} \Phi (x)=\int _{\mathfrak {R}^{m}}F(x,y_{1},\ldots ,y_{m})\mathrm{d}\Psi _{1}(y_{1})\ldots \mathrm{d}\Psi _{m}(y_{m}) \end{aligned}$$

where \(F(x,y_{1},\ldots ,y_{m})\) is the uncertainty distribution of the uncertain variable \(f(y_{1},\ldots ,y_{m},\tau _1,\ldots ,\tau _n)\), and \(\xi \) has an expected value

$$\begin{aligned} E[\xi ]\!=\!\!\!\int _{\mathfrak {R}^{m}}\!\!\!\!E[f(y_{1},\ldots \!,y_{m}, \tau _1,\ldots \!,\tau _n)]\mathrm{d}\Psi _{1}(y_{1})\ldots \mathrm{d}\Psi _{m}(y_{m}) \end{aligned}$$

where \(E[f(y_{1},\ldots ,y_{m},\tau _1,\ldots ,\tau _n)]\) is the expected value of the uncertain variable \(f(y_{1},\ldots ,y_{m},\tau _1,\ldots ,\tau _n).\)

Theorem 4

(Liu 2013b) Assume \(\eta _1,\eta _2,\ldots ,\eta _m\) are independent Boolean random variables with truth values \(\alpha _1,\) \(\alpha _2,\) \(\ldots ,\) \(\alpha _m\), and \(\tau _1,\) \(\tau _2,\) \(\ldots ,\) \(\tau _n\) are independent Boolean uncertain variables with truth values \(\beta _1\), \(\beta _2,\ldots ,\beta _n\), respectively. If f is a Boolean function, then

$$\begin{aligned} \xi =f(\eta _1,\ldots ,\eta _m,\tau _1,\ldots ,\tau _n) \end{aligned}$$

is a Boolean uncertain random variable such that

$$\begin{aligned} \mathrm{Ch}\{\xi =1\}=\!\!\!\!\sum _{(x_1,\ldots ,x_m)\in \{0,1\}^m}\!\!\! \left( \prod _{i=1}^m\mu _i(x_i)\!\right) f^*(x_1,\ldots ,x_m) \end{aligned}$$

where

$$\begin{aligned} \begin{array}{rl} &f^*(x_1,\ldots ,x_m)\\ =&\left\{ \begin{array}{l} \displaystyle \sup _{f(x_1,\ldots ,x_m,y_1,\ldots ,y_n)=1}\min _{1\le j\le n}\nu _j(y_j),\\ \\ \quad \text{ if } \sup \limits _{f(x_1,\ldots ,x_m,y_1,\ldots ,y_n)=1}\min _{1\le j\le n}\nu _j(y_j)<0.5\\ \\ \displaystyle 1-\sup _{f(x_1,\ldots ,x_m,y_1,\ldots ,y_n)=0}\min _{1\le j\le n}\nu _j(y_j),\\ \\ \quad \text{ if } \sup \limits _{f(x_1,\ldots ,x_m,y_1,\ldots ,y_n)=1}\min _{1\le j\le n}\nu _j(y_j)\ge 0.5, \end{array}\right. \end{array} \end{aligned}$$
$$\begin{aligned} \mu _i(x_i)=\left\{ \begin{array}{cl} \alpha _i,&\text{ if } x_i=1\\ 1-\alpha _i,&\text{ if } x_i=0 \end{array}\right. \quad (i=1,2,\ldots ,m), \end{aligned}$$
$$\begin{aligned} \nu _j(y_j)=\left\{ \begin{array}{cl} \beta _j,&\text{ if } y_j=1\\ 1-\beta _j,&\text{ if } y_j=0 \end{array}\right. \quad (j=1,2,\ldots ,n). \end{aligned}$$

4 Probabilistic logic and probabilistic entailment

A proposition is essentially a statement with a truth value belonging to [0, 1]. For example, “it takes 5 hours by high-speed train from Beijing to Shanghai with a truth value 0.9” is a proposition, where “it takes 5 hours by high-speed train from Beijing to Shanghai” is a statement, and its truth value is 0.9. When the truth value is described via the probability measure, the proposition becomes a random proposition, which can be regarded as a Boolean random variable.

4.1 Probabilistic logic

Let X be a random proposition, that is, a string of Boolean random variables and connective symbols. Its truth value was defined in Nilsson (1986) as the probability that the random proposition is true, i.e.,

$$\begin{aligned} T(X)=\mathrm{Pr}\{X=1\}. \end{aligned}$$

The probabilistic logic is consistent with the law of excluded middle and the law of contradiction. That is, \(X\vee \lnot X\) is a tautology, i.e.,

$$\begin{aligned} T(X\vee \lnot X)=1, \end{aligned}$$

and \(X\wedge \lnot X\) is a contradiction, i.e.,

$$\begin{aligned} T(X\wedge \lnot X)=0. \end{aligned}$$

In addition, the probabilistic logic also satisfies the law of truth conservation, i.e.,

$$\begin{aligned} T(X)+T(\lnot X)=1. \end{aligned}$$

Theorem 5

(Truth Value Theorem) Assume that \(X_1,\) \(X_2,\) \(\ldots ,\) \(X_m\) are independent random propositions with truth values \(\alpha _1,\alpha _2,\ldots ,\alpha _m\), respectively. Then the random proposition

$$\begin{aligned} Z=f(X_1,X_2,\ldots ,X_m) \end{aligned}$$

has a truth value

$$\begin{aligned} T(Z)=\sum _{(x_1,\ldots ,x_m)\in \{0,1\}^m}\left( \prod _{i=1}^m\mu _i(x_i)\right) f(x_1,\ldots ,x_m), \end{aligned}$$

where

$$\begin{aligned} \mu _i(x_i)=\left\{ \begin{array}{cl} \alpha _i,&\text{ if } x_i=1\\ 1-\alpha _i,&\text{ if } x_i=0 \end{array} \right. \quad (i=1,2,\ldots ,m). \end{aligned}$$
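Since the formula in Theorem 5 sums over all of \(\{0,1\}^m\), it translates directly into a short program. The sketch below (the helper name `prob_truth` is ours, not from the text) checks that \(T(X_1\wedge X_2)=\alpha _1\alpha _2\) and \(T(\lnot X_1)=1-\alpha _1\) for independent random propositions.

```python
# Sketch (not from the text): Theorem 5's truth value of a Boolean function
# of independent random propositions,
#   T(Z) = sum over x in {0,1}^m of (prod_i mu_i(x_i)) f(x).

from itertools import product

def prob_truth(f, alphas):
    total = 0.0
    for xs in product((0, 1), repeat=len(alphas)):
        weight = 1.0  # prod_i mu_i(x_i)
        for a, x in zip(alphas, xs):
            weight *= a if x == 1 else 1 - a
        total += weight * f(*xs)
    return total

# Conjunction of independent propositions: T = a1 * a2 = 0.4
print(prob_truth(lambda x, y: x & y, [0.8, 0.5]))
# Negation: T(not X1) = 1 - a1 = 0.7
print(prob_truth(lambda x: 1 - x, [0.3]))
```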

4.2 Probabilistic entailment

Assume that \(X_1,X_2,\ldots ,X_m\) are independent random propositions with unknown truth values \(\alpha _1,\alpha _2,\ldots ,\) \(\alpha _m\), respectively. Also assume that

$$\begin{aligned} Y_k=f_k(X_1,X_2,\ldots ,X_m) \end{aligned}$$

are random propositions with known truth values \(c_k\), \(k=1,2,\ldots \!,p\), respectively. Now let

$$\begin{aligned} Z=f(X_1,X_2,\ldots ,X_m) \end{aligned}$$

be an additional random proposition. What is the truth value of Z? In order to obtain it, let us consider what values \(\alpha _1,\alpha _2,\ldots ,\alpha _m\) may take. The first type of constraints is

$$\begin{aligned} 0\le \alpha _i\le 1,\quad i=1,2,\ldots ,m. \end{aligned}$$

The second type of constraints is represented by

$$\begin{aligned} T(Y_k)=c_k,\quad k=1,2,\ldots ,p. \end{aligned}$$

Note that the truth values \(\alpha _1,\alpha _2,\ldots ,\alpha _m\) may not be unique, because they are just partially determined by

$$\begin{aligned} \sum _{(x_1,x_2\ldots ,x_m)\in \{0,1\}^m}\left( \prod _{i=1}^m\mu _i(x_i)\right) f_k(x_1,x_2,\ldots ,x_m)=c_k \end{aligned}$$

for \(k=1,2,\ldots ,p\), where

$$\begin{aligned} \mu _i(x_i)=\left\{ \begin{array}{cl} \alpha _i,&\text{ if } x_i=1\\ 1-\alpha _i,&\text{ if } x_i=0 \end{array}\right. \quad (i=1,2,\ldots ,m). \end{aligned}$$

As a result, the truth value T(Z) may not be unique, either. In this case, we adopt the maximum uncertainty principle (Liu 2007), which says that a proposition should be assigned a truth value as close to 0.5 as possible when multiple reasonable truth values are available. In other words, the objective is to minimize \(|T(Z)-0.5|\) by choosing appropriate values of \(\alpha _1,\alpha _2,\ldots ,\alpha _m\). The probabilistic entailment model is as follows,

$$\begin{aligned} \left\{ \begin{array}{l} \min |T(Z)-0.5|\\ \text{ subject } \text{ to: }\\ \quad \begin{array}{ll} 0\le \alpha _i\le 1,& i=1,2,\ldots ,m\\ T(Y_k)=c_k,&k=1,2,\ldots ,p \end{array} \end{array}\right. \end{aligned}$$
(1)

where \(T(Z),T(Y_1),\ldots ,T(Y_p)\) are functions of unknown truth values \(\alpha _1,\alpha _2,\ldots ,\alpha _m\).

Theorem 6

(Probabilistic Modus Ponens) Let A and B be independent random propositions. Assume that A and \(A\rightarrow B\) have truth values a and b, respectively. Then the random proposition B has a truth value

$$\begin{aligned} T(B)=\left\{ \begin{array}{cl} \displaystyle (a+b-1)/a,&\text{ if } a+b\ge 1\\ \text{ illness },&\text{ if } a+b<1. \end{array}\right. \end{aligned}$$
(2)

Proof

Denote the truth values of A and B by \(\alpha _1\) and \(\alpha _2\), respectively, and write

$$\begin{aligned} Y_1=A,\quad Y_2=A\rightarrow B,\quad Z=B. \end{aligned}$$

It is clear that

$$\begin{aligned} T(Y_1)=\alpha _1=a, \end{aligned}$$
$$\begin{aligned} T(Y_2)=1-\alpha _1+\alpha _1\alpha _2=b, \end{aligned}$$
$$\begin{aligned} T(Z)=\alpha _2. \end{aligned}$$

In this case, the probabilistic entailment model (1) becomes

$$\begin{aligned} \left\{ \begin{array}{l} \min |\alpha _2-0.5|\\ \text{ subject } \text{ to: }\\ \qquad 0\le \alpha _1\le 1\\ \qquad 0\le \alpha _2\le 1\\ \qquad \alpha _1=a\\ \qquad 1-\alpha _1+\alpha _1\alpha _2=b. \end{array}\right. \end{aligned}$$

When \(a+b\ge 1\), there is a unique feasible solution

$$\begin{aligned} \alpha _1^*=a,\quad \alpha _2^*=(a+b-1)/a. \end{aligned}$$

Thus \(T(B)=\alpha _2^*=(a+b-1)/a.\) When \(a+b<1\), there is no feasible solution and the truth values are ill-assigned. In summary, from

$$\begin{aligned} T(A)=a,\quad T(A\rightarrow B)=b \end{aligned}$$

we entail

$$\begin{aligned} T(B)=\left\{ \begin{array}{cl}\displaystyle (a+b-1)/a,&\text{ if } a+b\ge 1\\ \text{ illness },&\text{ if } a+b<1. \end{array}\right. \end{aligned}$$

\(\square \)
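The closed form of Theorem 6 can be verified numerically: for a feasible pair (a, b), the entailed \(\alpha _2^*\) must reproduce the given truth value of \(A\rightarrow B\). A minimal sketch (function name is ours, not from the text):

```python
# Numerical sketch (not from the text): probabilistic modus ponens,
# T(B) = (a + b - 1) / a when a + b >= 1, "illness" otherwise. The entailed
# alpha2 must satisfy the constraint T(A -> B) = 1 - alpha1 + alpha1*alpha2 = b.

def modus_ponens(a, b):
    if a + b < 1:
        return None  # "illness": no feasible truth values
    return (a + b - 1) / a

a, b = 0.9, 0.8
t_b = modus_ponens(a, b)
print(round(t_b, 4))                     # (0.9 + 0.8 - 1) / 0.9 = 0.7778
print(round(1 - a + a * t_b, 4))         # reproduces b = 0.8
print(modus_ponens(0.3, 0.4))            # infeasible case: None
```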

Remark 1

Please note that the truth value (2) of the probabilistic modus ponens coincides with that of the classical modus ponens: if both A and \(A\rightarrow B\) are true, then B is true.

Theorem 7

(Probabilistic Modus Tollens) Let A and B be independent random propositions. Assume that \(A\rightarrow B\) and B have truth values a and b, respectively. Then the random proposition A has a truth value

$$\begin{aligned} T(A)=\left\{ \begin{array}{cl}\displaystyle (1-a)/(1-b),&\text{ if } a\ge b\\ \text{ illness },&\text{ if } a<b. \end{array}\right. \end{aligned}$$
(3)

Proof

Denote the truth values of A and B by \(\alpha _1\) and \(\alpha _2\), respectively, and write

$$\begin{aligned} Y_1=A\rightarrow B,\quad Y_2=B,\quad Z=A. \end{aligned}$$

It is clear that

$$\begin{aligned} T(Y_1)=1-\alpha _1+\alpha _1\alpha _2=a, \end{aligned}$$
$$\begin{aligned} T(Y_2)=\alpha _2=b, \end{aligned}$$
$$\begin{aligned} T(Z)=\alpha _1. \end{aligned}$$

In this case, the probabilistic entailment model (1) becomes

$$\begin{aligned} \left\{ \begin{array}{l} \min |\alpha _1-0.5|\\ \text{ subject } \text{ to: }\\ \qquad 0\le \alpha _1\le 1\\ \qquad 0\le \alpha _2\le 1\\ \qquad 1-\alpha _1+\alpha _1\alpha _2=a\\ \qquad \alpha _2=b. \end{array}\right. \end{aligned}$$

When \(a\ge b\), there is a unique feasible solution

$$\begin{aligned} \alpha _1^*=(1-a)/(1-b),\quad \alpha _2^*=b. \end{aligned}$$

Thus \(T(A)=\alpha _1^*=(1-a)/(1-b).\) When \(a<b\), there is no feasible solution and the truth values are ill-assigned. In summary, from

$$\begin{aligned} T(A\rightarrow B)=a,\quad T(B)=b \end{aligned}$$

we entail

$$\begin{aligned} T(A)=\left\{ \begin{array}{cl} \displaystyle (1-a)/(1-b),&\text{ if } a\ge b\\ \text{ illness },&\text{ if } a<b. \end{array}\right. \end{aligned}$$

\(\square \)

Remark 2

Please note that the truth value (3) of the probabilistic modus tollens coincides with that of the classical modus tollens: if \(A\rightarrow B\) is true and B is false, then A is false.

Theorem 8

(Probabilistic Hypothetical Syllogism) Let AB and C be independent random propositions. Assume that \(A\rightarrow B\) and \(B\rightarrow C\) have truth values a and b, respectively. Then the random proposition \(A\rightarrow C\) has a truth value

$$\begin{aligned} \begin{array}{rl} &T(A\rightarrow C)\\ =&\left\{ \begin{array}{l}\displaystyle \frac{a+b-1}{a},\quad \text{ if } a+2b\ge 2 \text{ and } a\ge b, \\ \quad \text{ or } a+2b\le 2,a+b\ge 1 \text{ and } a\le 0.5\\ \\ \displaystyle \frac{a+b-1}{b}, \quad \text{ if } 2a+b\ge 2 \text{ and } b\ge a,\\ \quad \text{ or } 2a+b\le 2,a+b\ge 1 \text{ and } b\le 0.5\\ \\ 1-4(1-a)(1-b),\\ \quad \text{ if } a\ge 0.5,b\ge 0.5 \text{ and } 8(1-a)(1-b)\ge 1\\ \\ 0.5,\quad \text{ otherwise. }\\ \end{array}\right. \end{array} \end{aligned}$$
(4)

Proof

Denote the truth values of ABC by \(\alpha _1,\alpha _2,\alpha _3\), respectively, and write

$$\begin{aligned} Y_1=A\rightarrow B,\quad Y_2=B\rightarrow C,\quad Z=A\rightarrow C. \end{aligned}$$

It is clear that

$$\begin{aligned} T(Y_1)=1-\alpha _1+\alpha _1\alpha _2=a, \end{aligned}$$
$$\begin{aligned} T(Y_2)=1-\alpha _2+\alpha _2\alpha _3=b, \end{aligned}$$
$$\begin{aligned} T(Z)=1-\alpha _1+\alpha _1\alpha _3. \end{aligned}$$

In this case, the probabilistic entailment model (1) becomes

$$\begin{aligned} \left\{ \begin{array}{l} \min |1-\alpha _1+\alpha _1\alpha _3-0.5|\\ \text{ subject } \text{ to: }\\ \qquad 0\le \alpha _1\le 1\\ \qquad 0\le \alpha _2\le 1\\ \qquad 0\le \alpha _3\le 1\\ \qquad 1-\alpha _1+\alpha _1\alpha _2=a\\ \qquad 1-\alpha _2+\alpha _2\alpha _3=b. \end{array}\right. \end{aligned}$$

When \(a+b<1\), there is no feasible solution and the truth values are ill-assigned. Otherwise, we have

$$\begin{aligned} \alpha _1=\frac{1-a}{1-\alpha _2},\quad 1-b\le \alpha _2\le a,\quad \alpha _3=1-\frac{1-b}{\alpha _2} \end{aligned}$$

and

$$\begin{aligned} T(A\rightarrow C)(\alpha _2)=1-\frac{(1-a)(1-b)}{\alpha _2(1-\alpha _2)}. \end{aligned}$$

Note that \(T(A\rightarrow C)(\alpha _2)\) is increasing with respect to \(\alpha _2\) when \(\alpha _2\le 0.5\), and decreasing with respect to \(\alpha _2\) when \(\alpha _2\ge 0.5\). It is also easy to verify that

$$\begin{aligned} T(A\rightarrow C)(1-b)=\frac{a+b-1}{b}, \end{aligned}$$
$$\begin{aligned} T(A\rightarrow C)(a)=\frac{a+b-1}{a}, \end{aligned}$$
$$\begin{aligned} T(A\rightarrow C)(0.5)=1-4(1-a)(1-b). \end{aligned}$$

When \(a+2b\ge 2\) and \(a\ge b,\) we have

$$\begin{aligned} \frac{a+b-1}{b}\ge \frac{a+b-1}{a}\ge 0.5 \end{aligned}$$

and the optimal solution makes

$$\begin{aligned} T(A\rightarrow C)=\frac{a+b-1}{a}. \end{aligned}$$

When \(2a+b\ge 2\) and \(b\ge a\), we have

$$\begin{aligned} \frac{a+b-1}{a}\ge \frac{a+b-1}{b}\ge 0.5 \end{aligned}$$

and the optimal solution makes

$$\begin{aligned} T(A\rightarrow C)=\frac{a+b-1}{b}. \end{aligned}$$

When \(a+2b\ge 2\) and \(a\le 0.5\), we have

$$\begin{aligned} \frac{a+b-1}{b}\le 0.5\le \frac{a+b-1}{a} \end{aligned}$$

and the optimal solution makes

$$\begin{aligned} T(A\rightarrow C)=0.5. \end{aligned}$$

When \(2a+b\ge 2\) and \(b\le 0.5\), we have

$$\begin{aligned} \frac{a+b-1}{a}\le 0.5\le \frac{a+b-1}{b} \end{aligned}$$

and the optimal solution makes

$$\begin{aligned} T(A\rightarrow C)=0.5. \end{aligned}$$

When \(a\ge 0.5\), \(b\ge 0.5\), \((a+2b)\wedge (2a+b)\le 2\) and \(8(1-a)(1-b)\le 1\), the optimal solution makes

$$\begin{aligned} T(A\rightarrow C)=0.5. \end{aligned}$$

When \(a\ge 0.5\), \(b\ge 0.5\), and \(8(1-a)(1-b)\ge 1\), we have

$$\begin{aligned} 1-4(1-a)(1-b)\le 0.5 \end{aligned}$$

and the optimal solution makes

$$\begin{aligned} T(A\rightarrow C)=1-4(1-a)(1-b). \end{aligned}$$

When \(a+b\ge 1\), \(a+2b\le 2\) and \(a\le 0.5\), we have \(1-b\le a\le 0.5\) and the optimal solution makes

$$\begin{aligned} T(A\rightarrow C)=\frac{a+b-1}{a}. \end{aligned}$$

When \(a+b\ge 1\), \(2a+b\le 2\) and \(b\le 0.5\), we have \(0.5\le 1-b\le a\) and the optimal solution makes

$$\begin{aligned} T(A\rightarrow C)=\frac{a+b-1}{b}. \end{aligned}$$

The truth value of \(A\rightarrow C\) is thus obtained for all cases. \(\square \)
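Since the proof reduces the entailment model to a one-dimensional problem in \(\alpha _2\), formula (4) can be checked by a grid search over \([1-b,a]\). A sketch (the helper `syllogism_grid` is ours, not from the text):

```python
# Numerical sketch (not from the text): brute-force check of formula (4).
# For feasible a + b >= 1 the proof shows alpha2 ranges over [1 - b, a] and
#   T(A -> C)(alpha2) = 1 - (1 - a)(1 - b) / (alpha2 * (1 - alpha2)),
# and the entailed value is the one closest to 0.5.

def syllogism_grid(a, b, n=200001):
    assert a + b >= 1, "infeasible ('illness') case"
    lo, hi = 1 - b, a
    best = None
    for i in range(n):
        a2 = lo + (hi - lo) * i / (n - 1)
        if a2 <= 0 or a2 >= 1:
            continue
        t = 1 - (1 - a) * (1 - b) / (a2 * (1 - a2))
        if best is None or abs(t - 0.5) < abs(best - 0.5):
            best = t
    return best

# a = b = 0.9: a + 2b >= 2 and a >= b, so (4) gives (a+b-1)/a = 0.8/0.9
print(round(syllogism_grid(0.9, 0.9), 3))
# a = b = 0.6: a, b >= 0.5 and 8(1-a)(1-b) = 1.28 >= 1,
# so (4) gives 1 - 4(1-a)(1-b) = 0.36
print(round(syllogism_grid(0.6, 0.6), 3))
```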

Remark 3

Please note that the truth value (4) of the probabilistic hypothetical syllogism coincides with that of the classical hypothetical syllogism: if both \(A\rightarrow B\) and \(B\rightarrow C\) are true, then \(A\rightarrow C\) is true.

It should be noted that our probabilistic entailment model differs fundamentally from Nilsson’s. Nilsson (1986) entailed the truth value of a proposition by solving a system of linear equations whose coefficient matrix is a 0-1 matrix obtained from all the related consistent propositions. When the system has multiple solutions, Nilsson (1986) suggested two techniques, projection and maximum entropy, for obtaining a so-called optimal solution. However, the truth value entailed in this way is not consistent with the truth values of the other related propositions. Take the probabilistic modus ponens, which entails T(B) from T(A) and \(T(A\rightarrow B)\), as an example. Nilsson (1986) showed that the proposition B is supposed to have a truth value

$$\begin{aligned} T_{Nilsson}(B)=\frac{T(A)}{2}+T(A\rightarrow B)-\frac{1}{2}. \end{aligned}$$

With this formula, if \(T(A)=2/3\) and \(T(A\rightarrow B)=1/2,\) then \(T_{Nilsson}(B)=1/3.\) But with \(T(A)=2/3\) and \(T_{Nilsson}(B)=1/3,\) we have

$$\begin{aligned} T(A\rightarrow B)=1-T(A)\times (1-T_{Nilsson}(B))=\frac{5}{9}\not =\frac{1}{2}. \end{aligned}$$

We showed in Theorem 6 that the proposition B is supposed to have a truth value

$$\begin{aligned} T_{Liu\text{- }Yao}(B)=\frac{T(A)+T(A\rightarrow B)-1}{T(A)}. \end{aligned}$$

With this formula, if \(T(A)=2/3\) and \(T(A\rightarrow B)=1/2,\) then \(T_{Liu\text{- }Yao}(B)=1/4.\) And with \(T(A)=2/3\) and \(T_{Liu\text{- }Yao}(B)=1/4,\) we have

$$\begin{aligned} T(A\rightarrow B)=1-T(A)\times (1-T_{Liu\text{- }Yao}(B))=\frac{1}{2}. \end{aligned}$$
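The numeric comparison above can be reproduced in a few lines; the function names below are ours, not Nilsson's.

```python
# Sketch (not from the text): consistency check of the two entailment
# formulas. Entailing T(B) and then recomputing T(A -> B) should return the
# original value; Nilsson's formula fails this check, Theorem 6's passes.

def t_nilsson(ta, tab):
    return ta / 2 + tab - 0.5

def t_liu_yao(ta, tab):
    return (ta + tab - 1) / ta

def implied_tab(ta, tb):
    # T(A -> B) = 1 - T(A)(1 - T(B)) for independent propositions
    return 1 - ta * (1 - tb)

ta, tab = 2 / 3, 1 / 2
print(implied_tab(ta, t_nilsson(ta, tab)))  # approx 5/9, not 1/2
print(implied_tab(ta, t_liu_yao(ta, tab)))  # approx 1/2, consistent
```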

5 Uncertain random logic

An uncertain random proposition is essentially a proposition whose truth value is described via a chance measure. In fact, an uncertain random proposition can be regarded as a Boolean uncertain random variable.

Let X be an uncertain random proposition, that is, a string of Boolean uncertain random variables and connective symbols. What is the truth value of X? We define it as the chance measure that the uncertain random proposition is true, i.e.,

$$\begin{aligned} T(X)=\mathrm{Ch}\{X=1\}. \end{aligned}$$

It is emphasized that the uncertain random logic is consistent with the law of excluded middle and the law of contradiction. That is, \(X\vee \lnot X\) is a tautology, i.e.,

$$\begin{aligned} T(X\vee \lnot X)=1, \end{aligned}$$

and \(X\wedge \lnot X\) is a contradiction, i.e.,

$$\begin{aligned} T(X\wedge \lnot X)=0. \end{aligned}$$

In addition, the uncertain random logic also satisfies the law of truth conservation, i.e.,

$$\begin{aligned} T(X)+T(\lnot X)=1. \end{aligned}$$

Assume Z is an uncertain random proposition containing uncertain random propositions \(X_1,X_2,\ldots ,X_n\). It is clear that there is a Boolean function f such that

$$\begin{aligned} Z=f(X_1,X_2,\ldots ,X_n). \end{aligned}$$

Then the truth value of Z is

$$\begin{aligned} T(Z)=\mathrm{Ch}\{f(X_1,X_2,\ldots ,X_n)=1\}. \end{aligned}$$

The following theorem provides a formula for calculating the truth value of an uncertain random proposition.

Theorem 9

(Truth Value Theorem) Assume \(A_1,A_2,\) \(\ldots ,A_m\) are independent random propositions with truth values \(\alpha _1,\alpha _2,\) \(\ldots ,\alpha _m\), and \(B_1,B_2,\ldots ,B_n\) are independent uncertain propositions with truth values \(\beta _1\), \(\beta _2,\ldots ,\) \(\beta _n\), respectively. Then the uncertain random proposition

$$\begin{aligned} Z=f(A_1,\ldots ,A_m,B_1,\ldots ,B_n) \end{aligned}$$

has a truth value

$$\begin{aligned} T(Z)=\sum _{(x_1,\ldots ,x_m)\in \{0,1\}^m}\left( \prod _{i=1}^m\mu _i(x_i)\right) f^*(x_1,\ldots ,x_m) \end{aligned}$$
(5)

where

$$\begin{aligned} \begin{array}{cl} &f^*(x_1,\ldots ,x_m)\\ =&\left\{ \begin{array}{l} \displaystyle \sup _{f(x_1,\ldots ,x_m,y_1,\ldots ,y_n)=1}\min _{1\le j\le n}\nu _j(y_j),\\ \quad \text{ if } \displaystyle \sup _{f(x_1,\ldots ,x_m,y_1,\ldots ,y_n)=1}\min _{1\le j\le n}\nu _j(y_j)<0.5\\ \\ \displaystyle 1-\sup _{f(x_1,\ldots ,x_m,y_1,\ldots ,y_n)=0}\min _{1\le j\le n}\nu _j(y_j),\\ \quad \text{ if } \displaystyle \sup _{f(x_1,\ldots ,x_m,y_1,\ldots ,y_n)=1}\min _{1\le j\le n}\nu _j(y_j)\ge 0.5, \end{array}\right. \end{array} \end{aligned}$$
$$\begin{aligned} \mu _i(x_i)=\left\{ \begin{array}{cl}\alpha _i,&\text{ if } x_i=1\\ 1-\alpha _i,&\text{ if } x_i=0 \end{array}\right. \quad (i=1,2,\ldots ,m), \end{aligned}$$
$$\begin{aligned} \nu _j(y_j)=\left\{ \begin{array}{cl}\beta _j,&\text{ if } y_j=1\\ 1-\beta _j,&\text{ if } y_j=0 \end{array}\right. \quad (j=1,2,\ldots ,n). \end{aligned}$$

Proof

It follows from

$$\begin{aligned} T(Z)=\mathrm{Ch}\{f(A_1,\ldots ,A_m,B_1,\ldots ,B_n)=1\} \end{aligned}$$

and Theorem 4 immediately. \(\square \)
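For small m and n, formula (5) can be evaluated by brute-force enumeration over the Boolean arguments. The following Python sketch is illustrative only: the function name `truth_value` and the encoding of f as a 0/1-valued callable are our own choices, not part of the original development.

```python
from itertools import product

def truth_value(f, alphas, betas):
    """Evaluate formula (5): T(Z) for Z = f(A1..Am, B1..Bn).

    f maps m + n arguments in {0, 1} to {0, 1}; alphas/betas are the
    truth values of the independent random/uncertain propositions.
    """
    m, n = len(alphas), len(betas)

    def f_star(xs):
        if n == 0:
            return float(f(*xs))
        def sup_min(target):
            # sup over uncertain arguments with f(...) == target of
            # min_j nu_j(y_j), where nu_j(1) = beta_j, nu_j(0) = 1 - beta_j
            best = 0.0
            for ys in product((0, 1), repeat=n):
                if f(*xs, *ys) == target:
                    v = min(b if y else 1 - b for b, y in zip(betas, ys))
                    best = max(best, v)
            return best
        s1 = sup_min(1)
        # two-branch rule of Theorem 9
        return s1 if s1 < 0.5 else 1 - sup_min(0)

    total = 0.0
    for xs in product((0, 1), repeat=m):
        weight = 1.0
        for a, x in zip(alphas, xs):
            weight *= a if x else 1 - a
        total += weight * f_star(xs)
    return total

# e.g. T(A and B) with alpha = 0.8, beta = 0.3 gives alpha * beta:
alpha, beta = 0.8, 0.3
assert abs(truth_value(lambda x, y: x & y, [alpha], [beta]) - alpha * beta) < 1e-12
```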

Remark 4

When the uncertain propositions disappear, the uncertain random logic becomes the probabilistic logic (Nilsson 1986). That is, the random proposition

$$\begin{aligned} Z=f(A_1,A_2,\ldots ,A_m) \end{aligned}$$

has a truth value

$$\begin{aligned} T(Z)=\!\!\!\sum _{(x_1,x_2,\ldots ,x_m)\in \{0,1\}^m}\!\!\!\left( \prod _{i=1}^m\mu _i(x_i)\right) \, f(x_1,x_2,\ldots ,x_m). \end{aligned}$$

Remark 5

When the random propositions disappear, the uncertain random logic becomes the uncertain logic (Li and Liu 2009b), and the truth value formula (5) becomes the one in Chen and Ralescu (2011). That is, the uncertain proposition \(Z=f(B_1,B_2,\ldots ,B_n)\) has a truth value

$$\begin{aligned} T(Z)=\left\{ \begin{array}{l} \displaystyle \sup _{f(y_1,y_2,\ldots ,y_n)=1}\min _{1\le j\le n}\nu _j(y_j),\\ \quad \qquad \text{ if } \displaystyle \sup _{f(y_1,y_2,\ldots ,y_n)=1}\min _{1\le j\le n}\nu _j(y_j)<0.5\\ \displaystyle 1-\sup _{f(y_1,y_2,\ldots ,y_n)=0}\min _{1\le j\le n}\nu _j(y_j),\\ \quad \qquad \text{ if } \displaystyle \sup _{f(y_1,y_2,\ldots ,y_n)=1}\min _{1\le j\le n}\nu _j(y_j)\ge 0.5. \end{array}\right. \end{aligned}$$

Example 1

Let A be a random proposition with a truth value \(\alpha \), and B be an uncertain proposition with a truth value \(\beta \). Then

$$\begin{aligned} T(A\wedge B)=\alpha \beta , \end{aligned}$$
$$\begin{aligned} T(A\vee B)=\alpha +\beta -\alpha \beta , \end{aligned}$$
$$\begin{aligned} T(A\rightarrow B)=1-\alpha +\alpha \beta , \end{aligned}$$
$$\begin{aligned} T(B\rightarrow A)=1-\beta +\alpha \beta . \end{aligned}$$
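These identities are instances of Theorem 9 with \(m=n=1\). As an illustration (a sketch with our own helper names), the third identity \(T(A\rightarrow B)=1-\alpha +\alpha \beta \) can be checked by conditioning on the random proposition A and applying the two-branch \(f^*\) rule to the uncertain proposition B:

```python
def nu(beta, y):
    # nu(1) = beta, nu(0) = 1 - beta, as in Theorem 9
    return beta if y == 1 else 1 - beta

def f_star(x, beta, f):
    # two-branch sup-min rule of Theorem 9, specialized to n = 1
    s1 = max((nu(beta, y) for y in (0, 1) if f(x, y) == 1), default=0.0)
    if s1 < 0.5:
        return s1
    s0 = max((nu(beta, y) for y in (0, 1) if f(x, y) == 0), default=0.0)
    return 1 - s0

implies = lambda x, y: (1 - x) | (x & y)   # x -> y on {0, 1}

alpha, beta = 0.7, 0.4
# weight the random cases A = 0 (prob 1 - alpha) and A = 1 (prob alpha)
t = (1 - alpha) * f_star(0, beta, implies) + alpha * f_star(1, beta, implies)
assert abs(t - (1 - alpha + alpha * beta)) < 1e-12
```

Here \(f^*(0)=1\) because the implication holds surely when \(A=0\), and \(f^*(1)=\beta \), which yields the mixture \((1-\alpha )+\alpha \beta \).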

Example 2

Let \(A_1\) and \(A_2\) be two random propositions with truth values \(\alpha _1\) and \(\alpha _2\), respectively, and \(B_1\) and \(B_2\) be two uncertain propositions with truth values \(\beta _1\) and \(\beta _2\), respectively. Then

$$\begin{aligned} T(A_1\wedge A_2\rightarrow B_1\wedge B_2)=1-\alpha _1\alpha _2+\alpha _1\alpha _2\cdot (\beta _1\wedge \beta _2), \end{aligned}$$
$$\begin{aligned} T(A_1\wedge A_2\rightarrow B_1\vee B_2)=1-\alpha _1\alpha _2+\alpha _1\alpha _2\cdot (\beta _1\vee \beta _2), \end{aligned}$$
$$\begin{aligned} \begin{array}{rl} &T(A_1\vee A_2\rightarrow B_1\wedge B_2)\\ =&1-(1-\alpha _1)(1-\alpha _2)+(1-\alpha _1)(1-\alpha _2)\cdot (\beta _1\wedge \beta _2), \end{array} \end{aligned}$$
$$\begin{aligned} \begin{array}{rl} &T(A_1\vee A_2\rightarrow B_1\vee B_2)\\ =&1-(1-\alpha _1)(1-\alpha _2)+(1-\alpha _1)( 1-\alpha _2)\cdot (\beta _1\vee \beta _2). \end{array} \end{aligned}$$

6 Uncertain random entailment

Assume that \(A_1,A_2,\ldots ,A_m\) are independent random propositions with unknown truth values \(\alpha _1,\alpha _2,\ldots ,\alpha _m\), and \(B_1,B_2,\ldots ,B_n\) are independent uncertain propositions with unknown truth values \(\beta _1,\beta _2,\ldots ,\beta _n\), respectively. Also assume that

$$\begin{aligned} Y_k=f_k(A_1,A_2,\ldots ,A_m,B_1,B_2,\ldots ,B_n) \end{aligned}$$

are uncertain random propositions with known truth values \(c_k\), \(k=1,2,\ldots ,p\), respectively. Now let

$$\begin{aligned} Z=f(A_1,A_2,\ldots ,A_m,B_1,B_2,\ldots ,B_n) \end{aligned}$$

be an additional uncertain random proposition. What is the truth value of Z? In order to obtain it, let us consider what values \(\alpha _1,\alpha _2,\ldots ,\alpha _m\) and \(\beta _1,\beta _2,\ldots ,\beta _n\) may take. The first type of constraints is

$$\begin{aligned} 0\le \alpha _i\le 1,\quad i=1,2,\ldots ,m, \end{aligned}$$
$$\begin{aligned} 0\le \beta _j\le 1,\quad j=1,2,\ldots ,n. \end{aligned}$$

The second type of constraints is represented by

$$\begin{aligned} T(Y_k)=c_k,\quad k=1,2,\ldots ,p. \end{aligned}$$

We use the maximum uncertainty principle to determine the truth value T(Z). That is, T(Z) should be assigned a value as close to 0.5 as possible if it has more than one feasible value. In other words, we should minimize \(|T(Z)-0.5|\) by choosing appropriate values of \(\alpha _1,\alpha _2,\ldots ,\alpha _m\) and \(\beta _1,\beta _2,\ldots ,\beta _n\). The uncertain random entailment model is as follows,

$$\begin{aligned} \left\{ \begin{array}{l} \min |T(Z)-0.5|\\ \text{ subject } \text{ to: }\\ \quad \begin{array}{ll} 0\le \alpha _i\le 1,& i=1,2,\ldots ,m\\ 0\le \beta _j\le 1,& j=1,2,\ldots ,n\\ T(Y_k)=c_k,&k=1,2,\ldots ,p \end{array} \end{array}\right. \end{aligned}$$
(6)

where \(T(Z),T(Y_1),\ldots ,T(Y_p)\) are functions of unknown truth values \(\alpha _1,\alpha _2,\ldots ,\alpha _m\) and \(\beta _1,\beta _2,\ldots ,\beta _n\).

Remark 6

When the uncertain propositions disappear, the uncertain random entailment model (6) becomes a probabilistic entailment model (1), i.e.,

$$\begin{aligned} \left\{ \begin{array}{l} \min |T(Z)-0.5|\\ \text{ subject } \text{ to: }\\ \quad \begin{array}{ll} 0\le \alpha _i\le 1,& i=1,2,\ldots ,m\\ T(Y_k)=c_k,&k=1,2,\ldots ,p \end{array} \end{array}\right. \end{aligned}$$

where \(T(Z),T(Y_1),\ldots ,T(Y_p)\) are functions of \(\alpha _1,\alpha _2,\ldots ,\alpha _m\).

Remark 7

When the random propositions disappear, the uncertain random entailment model (6) becomes an uncertain entailment model (Liu 2009b), i.e.,

$$\begin{aligned} \left\{ \begin{array}{l} \min |T(Z)-0.5|\\ \text{ subject } \text{ to: }\\ \quad \begin{array}{ll} 0\le \beta _j\le 1,& j=1,2,\ldots ,n\\ T(Y_k)=c_k,&k=1,2,\ldots ,p \end{array} \end{array}\right. \end{aligned}$$

where \(T(Z),T(Y_1),\ldots ,T(Y_p)\) are functions of \(\beta _1,\beta _2,\ldots ,\beta _n\).

Theorem 10

(Uncertain Random Modus Ponens) Let A and B be two propositions, one of which is random and the other uncertain. Assume that A and \(A\rightarrow B\) have truth values a and b, respectively. Then the uncertain random proposition B has a truth value

$$\begin{aligned} T(B)=\left\{ \begin{array}{cl} \displaystyle (a+b-1)/a,&\text{ if } a+b\ge 1\\ \text{ illness },&\text{ if } a+b<1. \end{array}\right. \end{aligned}$$
(7)

Proof

Denote the truth values of A and B by \(\alpha _1\) and \(\alpha _2\), respectively, and write

$$\begin{aligned} Y_1=A,\quad Y_2=A\rightarrow B,\quad Z=B. \end{aligned}$$

It is clear that

$$\begin{aligned} T(Y_1)=\alpha _1=a, \end{aligned}$$
$$\begin{aligned} T(Y_2)=1-\alpha _1+\alpha _1\alpha _2=b, \end{aligned}$$
$$\begin{aligned} T(Z)=\alpha _2. \end{aligned}$$

In this case, the uncertain random entailment model (6) becomes

$$\begin{aligned} \left\{ \begin{array}{l} \min |\alpha _2-0.5|\\ \text{ subject } \text{ to: }\\ \qquad 0\le \alpha _1\le 1\\ \qquad 0\le \alpha _2\le 1\\ \qquad \alpha _1=a\\ \qquad 1-\alpha _1+\alpha _1\alpha _2=b. \end{array}\right. \end{aligned}$$

When \(a+b\ge 1\), there is a unique feasible solution

$$\begin{aligned} \alpha _1^*=a,\quad \alpha _2^*=(a+b-1)/a. \end{aligned}$$

Thus \(T(B)=\alpha _2^*=(a+b-1)/a.\) When \(a+b<1\), there is no feasible solution and the truth values are ill-assigned. In summary, from

$$\begin{aligned} T(A)=a,\quad T(A\rightarrow B)=b, \end{aligned}$$

we entail

$$\begin{aligned} T(B)=\left\{ \begin{array}{cl}\displaystyle (a+b-1)/a,&\text{ if } a+b\ge 1\\ \text{ illness },&\text{ if } a+b<1. \end{array}\right. \end{aligned}$$

\(\square \)
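Formula (7) can be sketched in Python as follows (the function name and the use of an exception for the illness case are our own conventions); the check confirms that the entailed value satisfies the constraint \(1-\alpha _1+\alpha _1\alpha _2=b\) from the proof:

```python
def modus_ponens(a, b):
    """Entail T(B) from T(A) = a and T(A -> B) = b, per formula (7)."""
    if a + b < 1:
        raise ValueError("illness: no feasible truth values")
    return (a + b - 1) / a

a, b = 0.9, 0.8
t_b = modus_ponens(a, b)
# the entailed value satisfies the constraint 1 - a + a * T(B) = b
assert abs(1 - a + a * t_b - b) < 1e-12
# classical modus ponens: both premises true => conclusion true
assert modus_ponens(1.0, 1.0) == 1.0
```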

Remark 8

Please note that the truth value (7) of uncertain random modus ponens coincides with that of the classical modus ponens: if both A and \(A\rightarrow B\) are true, then B is true.

Theorem 11

(Uncertain Random Modus Tollens) Let A and B be two propositions, one of which is random and the other uncertain. Assume that \(A\rightarrow B\) and B have truth values a and b, respectively. Then the uncertain random proposition A has a truth value

$$\begin{aligned} T(A)=\left\{ \begin{array}{cl}\displaystyle (1-a)/(1-b),&\text{ if } a\ge b\\ \text{ illness },&\text{ if } a<b. \end{array}\right. \end{aligned}$$
(8)

Proof

Denote the truth values of A and B by \(\alpha _1\) and \(\alpha _2\), respectively, and write

$$\begin{aligned} Y_1=A\rightarrow B,\quad Y_2=B,\quad Z=A. \end{aligned}$$

It is clear that

$$\begin{aligned} T(Y_1)=1-\alpha _1+\alpha _1\alpha _2=a, \end{aligned}$$
$$\begin{aligned} T(Y_2)=\alpha _2=b, \end{aligned}$$
$$\begin{aligned} T(Z)=\alpha _1. \end{aligned}$$

In this case, the uncertain random entailment model (6) becomes

$$\begin{aligned} \left\{ \begin{array}{l} \min |\alpha _1-0.5|\\ \text{ subject } \text{ to: }\\ \qquad 0\le \alpha _1\le 1\\ \qquad 0\le \alpha _2\le 1\\ \qquad 1-\alpha _1+\alpha _1\alpha _2=a\\ \qquad \alpha _2=b. \end{array}\right. \end{aligned}$$

When \(a\ge b\), there is a unique feasible solution

$$\begin{aligned} \alpha _1^*=(1-a)/(1-b),\quad \alpha _2^*=b. \end{aligned}$$

Thus \(T(A)=\alpha _1^*=(1-a)/(1-b).\) When \(a<b\), there is no feasible solution and the truth values are ill-assigned. In summary, from

$$\begin{aligned} T(A\rightarrow B)=a,\quad T(B)=b, \end{aligned}$$

we entail

$$\begin{aligned} T(A)=\left\{ \begin{array}{cl} \displaystyle (1-a)/(1-b),&\text{ if } a\ge b\\ \text{ illness },&\text{ if } a<b. \end{array}\right. \end{aligned}$$

\(\square \)
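Analogously, formula (8) admits a short sketch (again with our own naming and error convention); the check confirms the constraint \(1-\alpha _1+\alpha _1\alpha _2=a\) from the proof:

```python
def modus_tollens(a, b):
    """Entail T(A) from T(A -> B) = a and T(B) = b, per formula (8)."""
    if a < b:
        raise ValueError("illness: no feasible truth values")
    return (1 - a) / (1 - b)

a, b = 0.9, 0.2
t_a = modus_tollens(a, b)
# the entailed value satisfies 1 - T(A) + T(A) * b = a
assert abs(1 - t_a + t_a * b - a) < 1e-12
# classical modus tollens: A -> B true and B false => A false
assert modus_tollens(1.0, 0.0) == 0.0
```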

Remark 9

Please note that the truth value (8) of uncertain random modus tollens coincides with that of the classical modus tollens: if \(A\rightarrow B\) is true and B is false, then A is false.

Theorem 12

(Uncertain Random Hypothetical Syllogism) Let A and C be independent uncertain propositions, and let B be a random proposition. Assume that \(A\rightarrow B\) and \(B\rightarrow C\) have truth values a and b, respectively. Then the uncertain random proposition \(A\rightarrow C\) has a truth value

$$\begin{aligned} \begin{array}{rl} &T(A\rightarrow C)\\ =&\left\{ \begin{array}{l} a+b-1,\quad \text{ if } a+b\ge 1.5\\ 0.5,\\ \qquad \text{ if } a+b\le 1.5 \text{ and } (2a+b)\vee (a+2b)\ge 2\\ \displaystyle \frac{a+b-1}{a},\\ \qquad \text{ if } a+b\ge 1 \text{ and } a+2b\le 2 \text{ and } a\le b\\ \displaystyle \frac{a+b-1}{b},\\ \qquad \text{ if } a+b\ge 1 \text{ and } 2a+b\le 2 \text{ and } a\ge b\\ \text{ illness },\quad \text{ if } a+b<1. \end{array}\right. \end{array} \end{aligned}$$
(9)

Proof

Denote the truth values of A, B, C by \(\alpha _1,\alpha _2,\alpha _3\), respectively, and write

$$\begin{aligned} Y_1=A\rightarrow B,\quad Y_2=B\rightarrow C,\quad Z=A\rightarrow C. \end{aligned}$$

It is clear that

$$\begin{aligned} T(Y_1)=1-\alpha _1+\alpha _1\alpha _2=a, \end{aligned}$$
$$\begin{aligned} T(Y_2)=1-\alpha _2+\alpha _2\alpha _3=b, \end{aligned}$$
$$\begin{aligned} T(Z)=(1-\alpha _1)\vee \alpha _3. \end{aligned}$$

In this case, the uncertain random entailment model (6) becomes

$$\begin{aligned} \left\{ \begin{array}{l} \min |(1-\alpha _1)\vee \alpha _3-0.5|\\ \text{ subject } \text{ to: }\\ \qquad 0\le \alpha _1\le 1\\ \qquad 0\le \alpha _2\le 1\\ \qquad 0\le \alpha _3\le 1\\ \qquad 1-\alpha _1+\alpha _1\alpha _2=a\\ \qquad 1-\alpha _2+\alpha _2\alpha _3=b. \end{array}\right. \end{aligned}$$

When \(a+b<1\), there is no feasible solution and the truth values are ill-assigned. Otherwise, we have

$$\begin{aligned} \alpha _1=\frac{1-a}{1-\alpha _2},\quad 1-b\le \alpha _2\le a,\quad \alpha _3=1-\frac{1-b}{\alpha _2} \end{aligned}$$

and

$$\begin{aligned} T(A\rightarrow C)(\alpha _2)=\left\{ \begin{array}{l} 1-\displaystyle \frac{1-a}{1-\alpha _2},\\ \displaystyle \text{ if } 1-b\le \alpha _2\le (1-b)/(2-a-b)\\ 1-\displaystyle \frac{1-b}{\alpha _2},\\ \displaystyle \text{ if } (1-b)/(2-a-b)\le \alpha _2\le a. \end{array}\right. \end{aligned}$$

Note that \(T(A\rightarrow C)(\alpha _2)\) is decreasing with respect to \(\alpha _2\) when \(\alpha _2\le (1-b)/(2-a-b)\), and increasing with respect to \(\alpha _2\) when \(\alpha _2\ge (1-b)/(2-a-b)\). It is also easy to verify that

$$\begin{aligned} T(A\rightarrow C)(1-b)=\frac{a+b-1}{b}, \end{aligned}$$
$$\begin{aligned} T(A\rightarrow C)(a)=\frac{a+b-1}{a}, \end{aligned}$$
$$\begin{aligned} T(A\rightarrow C)\left( \frac{1-b}{2-a-b}\right) =a+b-1. \end{aligned}$$

When \(a+b\ge 1.5\), we immediately have \(a+b-1\ge 0.5\), and the optimal solution makes

$$\begin{aligned} T(A\rightarrow C)=a+b-1. \end{aligned}$$

When \(a+b\le 1.5\) and \(2a+b\ge 2\), we have \((a+b-1)/{b}\ge 0.5\ge a+b-1\), and the optimal solution makes

$$\begin{aligned} T(A\rightarrow C)=0.5. \end{aligned}$$

When \(a+b\le 1.5\) and \(a+2b\ge 2\), we have \((a+b-1)/{a}\ge 0.5\ge a+b-1\), and the optimal solution also makes

$$\begin{aligned} T(A\rightarrow C)=0.5. \end{aligned}$$

When \(a+b\ge 1\), \(a+2b\le 2\) and \(a\le b\), we have \((a+b-1)/{b}\le (a+b-1)/{a}\le 0.5\), and the optimal solution makes

$$\begin{aligned} T(A\rightarrow C)=\frac{a+b-1}{a}. \end{aligned}$$

When \(a+b\ge 1\), \(2a+b\le 2\) and \(a\ge b\), we have \((a+b-1)/{a}\le (a+b-1)/{b}\le 0.5\), and the optimal solution makes

$$\begin{aligned} T(A\rightarrow C)=\frac{a+b-1}{b}. \end{aligned}$$

In summary, from

$$\begin{aligned} T(A\rightarrow B)=a,\quad T(B\rightarrow C)=b, \end{aligned}$$

we entail

$$\begin{aligned} \begin{array}{rl} &T(A\rightarrow C)\\ =&\left\{ \begin{array}{l} a+b-1,\quad \text{ if } a+b\ge 1.5\\ \\ 0.5,\\ \qquad \text{ if } a+b\le 1.5 \text{ and } (2a+b)\vee (a+2b)\ge 2\\ \\ \displaystyle \frac{a+b-1}{a},\\ \qquad \text{ if } a+b\ge 1 \text{ and } a+2b\le 2 \text{ and } a\le b\\ \\ \displaystyle \frac{a+b-1}{b},\\ \qquad \text{ if } a+b\ge 1 \text{ and } 2a+b\le 2 \text{ and } a\ge b\\ \\ \text{ illness },\quad \text{ if } a+b<1. \end{array}\right. \end{array} \end{aligned}$$

\(\square \)
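Formula (9) can be cross-checked numerically against the proof's parameterization in \(\alpha _2\). The sketch below (helper names ours) scans the feasible interval \([1-b,a]\) and picks the truth value closest to 0.5, as the maximum uncertainty principle prescribes:

```python
def hyp_syllogism(a, b):
    """Entail T(A -> C) from T(A -> B) = a, T(B -> C) = b, per (9)."""
    if a + b < 1:
        raise ValueError("illness: no feasible truth values")
    if a + b >= 1.5:
        return a + b - 1
    if max(2 * a + b, a + 2 * b) >= 2:
        return 0.5
    # remaining cases: (a + b - 1)/a if a <= b, else (a + b - 1)/b
    return (a + b - 1) / min(a, b)

def scan(a, b, steps=20000):
    """Grid-scan alpha2 in [1 - b, a], as in the proof of Theorem 12."""
    best = None
    for k in range(steps + 1):
        a2 = (1 - b) + k * (a + b - 1) / steps
        # T(A -> C) = (1 - alpha1) v alpha3 with alpha1, alpha3 eliminated
        t = max(1 - (1 - a) / (1 - a2), 1 - (1 - b) / a2)
        if best is None or abs(t - 0.5) < abs(best - 0.5):
            best = t
    return best

for a, b in [(0.9, 0.8), (0.95, 0.5), (0.6, 0.65), (0.65, 0.6)]:
    assert abs(hyp_syllogism(a, b) - scan(a, b)) < 1e-3
```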

Remark 10

Please note that the truth value (9) of uncertain random hypothetical syllogism coincides with that of the classical hypothetical syllogism: if both \(A\rightarrow B\) and \(B\rightarrow C\) are true, then \(A\rightarrow C\) is true.

7 Conclusion

This paper first proposed an uncertain random logic to deal with uncertain random knowledge. When some uncertain random propositions are given with known truth values, a formula was derived to calculate the truth value of a Boolean function of these propositions. As an inverse problem, an uncertain random entailment model was built to infer the truth value of a function of some uncertain random propositions from the known truth values of other functions of these propositions. The cases of modus ponens, modus tollens and hypothetical syllogism were studied as examples in the uncertain random environment. In addition, a probabilistic entailment model was constructed as a byproduct.