1 Introduction

This paper deals with the solution of linear algebraic systems involving dependencies between interval-valued parameters.

Denote by \(\mathbb {R}^{m\times n}\) the set of real m × n matrices. Vectors are considered one-column matrices. A real compact interval is \(\mathbf {a} = [a^{-}, a^{+}] := \{a\in \mathbb {R} \mid a^{-}\leq a\leq a^{+}\}\) and \(\mathbb {IR}^{m\times n}\) denotes the set of interval m × n matrices. We consider systems of linear algebraic equations having affine-linear uncertainty structure:

$$ \begin{array}{@{}rcl@{}} A(p) x &=& a(p), \quad p\in\mathbf{p}\in\mathbb{IR}^{K},\\ A(p) &:=& A_{0} + \sum\limits_{k = 1}^{K} p_{k}A_{k}, \qquad a(p) := a_{0} + \sum\limits_{k = 1}^{K} p_{k}a_{k}, \end{array} $$
(1)

where \(A_{k}\in \mathbb {R}^{n\times n}\), \(a_{k}\in \mathbb {R}^{n}\), k = 0,…,K, and the parameters \(p = (p_{1},\ldots,p_{K})\) are considered uncertain, varying within given non-degenerate intervals \(\mathbf{p} = (\mathbf{p}_{1},\ldots,\mathbf{p}_{K})\). Nonlinear dependencies between interval-valued parameters in linear algebraic systems are often linearized to the form (1), and methods for the latter are applied to bound the corresponding solution set. The united parametric solution set of the system (1), which is considered most often, is defined by:

$$ {\varSigma}^{p}_{\text{uni}}={\varSigma}_{\text{uni}}(A(p),a(p),\mathbf{p}) := \{x\in\mathbb{R}^{n} \mid (\exists p\in\mathbf{p})(A(p)x=a(p))\}. $$
(2)

For a nonempty and bounded set \(\varSigma \subset \mathbb {R}^{n}\), its interval hull \(\square \varSigma \) is defined by:

$$ \square\varSigma := \bigcap \{\mathbf{x}\in\mathbb{IR}^{n} \mid \varSigma\subseteq \mathbf{x}\}. $$
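
For intuition, the solution set (2) and its interval hull can be explored numerically by sampling. The following sketch (a minimal illustration in Python, not an enclosure method; the function name and data layout are our own) draws parameter vectors from p, solves the corresponding point systems, and takes the componentwise minima/maxima, which yields an inner approximation of \(\square\varSigma^{p}_{\text{uni}}\):

```python
import numpy as np

def sample_hull(A_list, a_list, p_lo, p_hi, n_samples=10_000, seed=0):
    """Inner approximation of the hull of the united solution set (2) of
    A(p) x = a(p), A(p) = A_0 + sum_k p_k A_k, a(p) = a_0 + sum_k p_k a_k,
    where A_list = [A_0, ..., A_K] and a_list = [a_0, ..., a_K]."""
    rng = np.random.default_rng(seed)
    K = len(p_lo)
    pts = []
    for _ in range(n_samples):
        p = rng.uniform(p_lo, p_hi)                     # a point p in the box p
        A = A_list[0] + sum(p[k] * A_list[k + 1] for k in range(K))
        a = a_list[0] + sum(p[k] * a_list[k + 1] for k in range(K))
        pts.append(np.linalg.solve(A, a))               # a point of the solution set
    pts = np.array(pts)
    return pts.min(axis=0), pts.max(axis=0)             # componentwise bounds
```

Any verified enclosure \(\mathbf{x}\supseteq\varSigma^{p}_{\text{uni}}\) must contain the box returned by such sampling; the gap between the two boxes indicates how much a method overestimates.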

Since obtaining exact bounds for the solution of an interval linear system is an NP-hard problem [20], a variety of interval methods have been developed to efficiently deliver an interval vector \(\mathbf {x}\supseteq {\varSigma }^{p}_{\text {uni}}\); the quality of such an enclosure is measured with respect to the sharpest enclosure \(\square {\varSigma }^{p}_{\text {uni}}\), which is the ultimate goal.

The focus of the present research is on parametric united solution sets (2) which possess the so-called endpoint property.

Definition 1

A parametric united solution set (2) possesses the endpoint property if the vertices of its interval hull are attained at particular endpoints of the parameter intervals.

Graphical presentations of solution sets with this property can be found in [16, Example 5.2] and Fig. 1. The endpoint property holds partly if the vertices of the interval hull \(\square {\varSigma }^{p}_{\text {uni}}\) are attained at particular endpoints of some (not all) parameter intervals. Graphical presentations of such solution sets can be found in [16, Examples 4.1 and 4.2]. Note that the endpoint property is possessed (fully or partly) by a very wide class of interval parametric linear systems. This property is proven in [11] for parametric linear systems involving rank 1 uncertainty structure. More generally, this property follows whenever the boundary of a parametric united solution set has a linear shape. Moreover, the boundary of a united parametric solution set may involve nonlinear hypersurfaces and the endpoint property may still hold true (see [16, Example 5.1]). If the endpoint property of a solution set holds true with respect to all (or some of the) interval parameters, one can obtain the best interval solution enclosure \(\square {\varSigma }^{p}_{\text {uni}}\) if the endpoint dependence is known (or a solution enclosure sharper than the one obtained by methods not accounting for the endpoint property). In other words, a better solution than usual can be obtained if the endpoints that generate the lower/upper bounds of the interval hull (component by component) are known. The ultimate goal of most scientific and applied problems (an interval solution enclosure as close as possible to \(\square {\varSigma }^{p}_{\text {uni}}\)) highlights the importance of the basic problem considered in this work: to prove that the vertices of the exact interval hull of a parametric united solution set are attained at particular endpoints of the parameter intervals and, in the course of this proof, to find which endpoints of the parameter intervals generate the lower/upper bounds of the interval hull components. Usually, the endpoint property and the endpoint dependence are considered simultaneously by interval numerical methods.

Fig. 1

Projections of the parametric solution sets of system (23) with parameters varying in 0.25[− 1,1] (dashed lines) and with parameters varying in 0.45[− 1,1] (solid red lines). The nonlinear parts of the boundary of the second solution set are better visible on the \(x_{2}x_{3}\) projection

J. Rohn was the first to propose exploiting the monotonic influence of the interval parameters on the parametric solution set for its sharper interval enclosure, cf. [18]. This approach was further developed in [3, 10]. In [1], Ganesan et al. developed analytical methods to find the monotonic dependence of the solution components on the rank one interval parameters involved in the system. Both the numerical approach, initially proposed by Rohn, and the analytic one in [1] are based on determining the sign of the partial derivatives of the solution with respect to the interval parameters. The numerical proof of endpoint dependence is in general an iterative process; it may involve proving both global and local monotonicity and at each iteration require solving interval parametric linear systems, cf. [10]. An essential part of proving the endpoint dependence is the first iteration, where the initial monotonic dependence of the solution components on some interval parameters is proven. The efficiency of the initial monotonicity proof in the first iteration has various aspects discussed in more detail in Section 2. An essential improvement in the computational complexity of the initial monotonicity proof is proposed in [12]. Recently, an attempt to compare three approaches for the initial monotonicity proof was made in [22].

The goal of the present work is two-fold:

  1.

    To propose a new approach for the initial monotonicity proof (finding the endpoint dependence), which is highly efficient, both computationally and in proving monotonicity, for interval parametric linear systems involving true rank 1 interval parameters;

  2.

    To provide enhanced analysis and comparison of the methodologies for proving the initial monotonic dependence.

The paper is organized as follows. In Section 2, we discuss several aspects of the initial monotonicity proof. This motivates different approaches for implementing the latter, which will be compared in Section 4. Section 3.1 recalls three different methods for obtaining a parameterized enclosure of a united parametric solution set. These methods are part of the numerical comparisons done in Section 4 and provide the background for a novel initial monotonicity proving method presented in Section 3.2. The new approach efficiently utilizes a parameterized solution in a single monotonicity proving procedure, which fully accounts for the dependencies on the interval parameters in the partial derivatives. In Section 4, we propose various measures for the efficiency of the initial monotonicity proof of a given approach. In this section, we also discuss five monotonicity proving approaches, some of them in two versions, and compare them on some numerical examples. The article ends with some conclusions.

2 Aspects of the initial monotonicity proof

We consider the system (1). If A(p) is invertible, then the solution of the parametric system, \(x(p)=\left (A(p)\right )^{-1}a(p)\), is a real-valued function of the parameters p. If, in addition, A(p) is nonsingular for each \(p\in\mathbf{p}\), then the solution set \({\varSigma }^{p}_{\text {uni}}\) is bounded and, respectively, the range of \(x_{i}(p)\) is bounded on p. If a solution component i of a bounded solution set is monotonic with respect to all parameters p, then the lower and upper bounds of \(\left (\square {\varSigma }^{p}_{\text {uni}}\right )_{i}\) are attained at the respective endpoints of p corresponding to the type of monotonicity. Proving monotonic dependence of \({\varSigma }^{p}_{\text {uni}}\) with respect to a parameter \(p_{k}\), k = 1,…,K, can be done by enclosing the united solution set of the following interval parametric system of partial derivatives:

$$ A(p)\frac{\partial x(p)}{\partial p_{k}} = \frac{\partial a(p)}{\partial p_{k}} - \frac{\partial A(p)}{\partial p_{k}}x(p), \qquad p\in\mathbf{p}. $$
(3)

Let

$$ \mathbf{z}_{k}:=\left[\frac{\partial x(p)}{\partial p_{k}}\right]\supseteq {\varSigma}^{p}_{\text{uni}}\left( (3)\right). $$
(4)

If \(0\not \in \left [\frac {\partial x(p)}{\partial p_{k}}\right ]_{i}\), then \(x_{i}(p)\) is monotonic with respect to \(p_{k}\) in p, and sign\(\left (\mathbf {z}_{k, i}\right )\) gives the type of monotonicity. If \(0\in \left [\frac {\partial x(p)}{\partial p_{k}}\right ]_{i}\), the monotonic dependence cannot be proven, although it may exist. Due to overestimation in the interval enclosures (4), the monotonic dependence of some solution components with respect to some interval parameters might not be proven. As discussed and illustrated in [10], proving endpoint dependence (called there global and local monotonicity) is an iterative process. Therefore, we call the first solving of the systems (3) the initial monotonicity proof.
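
Once the enclosures (4) are available, the sign test itself is elementary. The sketch below (our own illustration; the enclosures are assumed to be given as arrays of lower/upper bounds produced by some parametric solver) assembles the sign matrix of the proven monotonic dependencies:

```python
import numpy as np

def sign_matrix(z_lo, z_hi):
    """z_lo, z_hi: (K, n) arrays with the bounds of the enclosures z_k in (4).
    Returns the n x K matrix M with M[i, k] = +1 (-1) if x_i(p) is proven
    increasing (decreasing) in p_k, and 0 if the enclosure contains zero."""
    M = np.zeros(z_lo.T.shape, dtype=int)
    M[z_lo.T > 0] = 1       # enclosure strictly positive: increasing
    M[z_hi.T < 0] = -1      # enclosure strictly negative: decreasing
    return M
```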

Since x(p) in (3) is not known, it is usually estimated by an outer interval enclosure \(\mathbf {x}\supseteq {\varSigma }^{p}_{\text {uni}}\left ((1)\right )\) of the initial interval parametric system (1). The quality of the enclosure x determines the quality of the enclosure \(\mathbf{z}_{k}\), thus influencing the proof of monotonic dependence. Also, in x, the dependence on the parameters p is implicit, which may lead to an overestimation in the enclosure provided by \(\mathbf{z}_{k}\), resulting in a failure of the monotonicity proof. Therefore, various modifications and alternative approaches, aiming at a sharp interval enclosure of \(\frac {\partial x(p)}{\partial p_{k}}\) that eventually could prove monotonicity, have been proposed, cf. [4, 22]. These approaches have different computational complexities and different efficiencies in the monotonicity proof.

It was proposed in [4], and studied in [22], that an interval enclosure of the following interval parametric linear system:

$$ \left( \begin{array}{cc}A(p)& 0\\ \frac{\partial A(p)}{\partial p_{k}} & A(p) \end{array}\right) \left( \begin{array}{cc}x\\ \frac{\partial x}{\partial p_{k}} \end{array}\right) = \left( \begin{array}{cc}a(p)\\ \frac{\partial a(p)}{\partial p_{k}} \end{array}\right), p\in\mathbf{p} $$
(5)

for each k = 1,…,K, provides a better assessment of the monotonic dependence than the other considered methods.
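
For a fixed point p ∈ p, the block structure of (5) is straightforward to assemble. The sketch below (our own illustration; the function name and the list-based data layout are assumptions) builds the point instance that an interval parametric solver would process for a given k:

```python
import numpy as np

def block_system(A_list, a_list, p, k):
    """Point instance of the augmented system (5) for parameter index k (1-based),
    with A_list = [A_0, ..., A_K] and a_list = [a_0, ..., a_K]."""
    K = len(p)
    A = A_list[0] + sum(p[j] * A_list[j + 1] for j in range(K))
    a = a_list[0] + sum(p[j] * a_list[j + 1] for j in range(K))
    n = A.shape[0]
    top = np.hstack([A, np.zeros((n, n))])
    bot = np.hstack([A_list[k], A])               # dA(p)/dp_k = A_k
    rhs = np.concatenate([a, a_list[k]])          # da(p)/dp_k = a_k
    return np.vstack([top, bot]), rhs             # 2n x 2n system for (x, dx/dp_k)
```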

Usually, interval methods (providing an interval enclosure of \(\square {\varSigma }^{p}_{\text {uni}}\)) generate numerical interval vectors that contain the solution set and its interval hull. A new type of solution, x(p,r), called a parameterized or p-solution, was proposed in [5]. This solution has the form of an affine-linear function of K + n interval-valued parameters:

$$ x(p,r) = \tilde{x} + V\left( \check{p}-p\right) + r, \quad p\in\mathbf{p}, r\in\mathbf{r}= [-\hat{r},\hat{r}], $$
(6)

where \(\tilde {x}, \hat {r}\in \mathbb {R}^{n}\), \(\check {p}\in \mathbf {p}\), \(V\in \mathbb {R}^{n\times K}\) will be precisely defined below for each method. Some representations move \(\tilde {x}\) into the interval vector r and consider the parameters p,r varying independently within the interval [− 1,1]. The parameterized solutions have the properties:

$$ \begin{array}{@{}rcl@{}} {\varSigma}^{p}_{uni}&\subseteq&\left\{x(p,r)\mid p\in\mathbf{p}, r\in\mathbf{r}\right\}, \end{array} $$
(7)
$$ \begin{array}{@{}rcl@{}} \square{\varSigma}^{p}_{uni} &\subseteq& x(\mathbf{p},\mathbf{r}), \end{array} $$
(8)

where x(p,r) is the interval evaluation of x(p,r). There are various approaches to find a parameterized solution depending on K + n parameters; see, for example, [21] and the references given therein. Theorem 1 in Section 3 recalls the simplest single-step method for obtaining the p,r-solution of a united parametric solution set \({\varSigma }^{p}_{\text {uni}}\). In [22], the authors consider proving the initial monotonic dependence by solving the derivative systems (3), where x(p) is replaced by a parameterized solution (6) depending on K + n interval parameters. In [22], both the initial parameterized solution x(p,r) and the interval enclosures of the derivative systems (3) are obtained by the computationally heaviest methods, based on affine arithmetic. In this work (Section 4), we compare initial monotonicity proving approaches based on three different forms of parameterized solutions, none of them applying affine arithmetic.
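
The interval evaluation in (8) reduces to a midpoint-radius computation. A minimal sketch (assuming the data \(\tilde{x}\), V, \(\hat{r}\) of (6) have already been produced by one of the methods discussed in this paper):

```python
import numpy as np

def eval_p_solution(x_tilde, V, p_lo, p_hi, r_hat):
    """Interval evaluation of x(p, r) = x_tilde + V (p_check - p) + r
    over p in [p_lo, p_hi] and r in [-r_hat, r_hat]; by (8) the returned
    bounds enclose the interval hull of the united solution set."""
    p_hat = (np.asarray(p_hi) - np.asarray(p_lo)) / 2   # parameter radii
    spread = np.abs(V) @ p_hat + r_hat                  # |V| p_hat + r_hat
    return x_tilde - spread, x_tilde + spread
```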

For all k = 1,…,K, the interval parametric systems (3) have the same interval parametric matrix. Therefore, it is proposed in [12] that the initial proof of monotonic dependence be done by solving one interval parametric matrix equation:

$$ A(p)Z(p) = B(p), \qquad p\in\mathbf{p}, $$
(9)

where \(Z(p), B(p)\in \mathbb {R}^{n\times K}\) and \(\frac {\partial x(p)}{\partial p_{k}}\), \(\frac {\partial a(p)}{\partial p_{k}} - \frac {\partial A(p)}{\partial p_{k}}x(p)\) are the k-th columns of Z(p) and B(p), respectively. Thus, solving one matrix equation saves K − 1 inversions of the same matrix compared with solving the K systems (3) separately. A numerical example in Section 4 will demonstrate the large difference. Therefore, in this work, all initial monotonicity proving methods, except the one based on (5), are implemented by solving one interval parametric matrix equation.
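
The floating point analogue of this saving is familiar: factorize the (midpoint) matrix once and reuse the factorization for all K right-hand sides. The following sketch (a point-arithmetic illustration of the idea behind (9), not a verified interval solver; names are ours) makes it concrete:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def derivatives_all(A_list, a_list, p, x):
    """Point version of the matrix equation (9): one LU factorization of A(p)
    serves all K columns b_k = a_k - A_k x of B(p)."""
    K = len(p)
    A = A_list[0] + sum(p[j] * A_list[j + 1] for j in range(K))
    B = np.column_stack([a_list[k + 1] - A_list[k + 1] @ x for k in range(K)])
    lu, piv = lu_factor(A)           # factorize once ...
    return lu_solve((lu, piv), B)    # ... and solve for all K columns at once
```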

Since all the proposed approaches for proving monotonic dependence reduce to solving (other) interval parametric linear systems, the efficiency, in terms of the width of the enclosure, of these approaches will depend on the quality of the numerical interval methods used for solving interval parametric linear systems (or matrix equations), referred here as interval parametric solvers. The different interval parametric solvers have different computational complexities and the quality of the enclosure they provide often depends on the structure of the parameter dependencies. In this work, we illustrate the application of two kinds of parametric solvers: one based on the single-sided strong regularity condition (11), and another based on a more general double-sided strong regularity condition (15). A comparison between the two kinds of parametric solvers is done in [13]. In Section 4, we only discuss the application of these solvers in the initial monotonicity proof.

3 New approach for finding monotonic dependency

3.1 Background

For \(\mathbf{a} = [a^{-},a^{+}]\), define its mid-point \(\check {a}:= (a^{-} + a^{+})/2\), the radius \(\hat {a} := (a^{+} - a^{-})/2\), and the magnitude \(|\mathbf {a}|:= \max \limits \{|a^{-}|, |a^{+}|\}\). These functions are applied to interval vectors and matrices componentwise. The inequalities are understood componentwise. The spectral radius of a matrix \(A\in \mathbb {R}^{n\times n}\) is denoted by ϱ(A). The identity matrix of appropriate dimension is denoted by I. For \(A_{k}\in \mathbb {R}^{n\times m}\), 1 ≤ k ≤ t, \(\left (A_{1},\ldots ,A_{t}\right )\in \mathbb {R}^{n\times tm}\) denotes the matrix obtained by putting side by side the columns of the matrices \(A_{k}\). Denote the i-th column of \(A\in \mathbb {R}^{n\times m}\) by \(A_{\bullet i}\) and its i-th row by \(A_{i\bullet}\). For an interval matrix \(\mathbf {A}=\left [I-{\Delta }, I+{\Delta }\right ]\in \mathbb {IR}^{n\times n}\) with ϱ(Δ) < 1, [19, Theorem 4.4] determines the inverse interval matrix \(\mathbf {H}=[\underline {H},\overline {H}] = \left [\min \limits \{A^{-1}\mid A\in \mathbf {A}\}, \max \limits \{A^{-1}\mid A\in \mathbf {A}\}\right ]\) by:

$$ \begin{array}{@{}rcl@{}} \overline{H}&=&(\overline{h}_{ij})=\left( I-{\Delta}\right)^{-1}, \\ \underline{H} &=& (\underline{h}_{ij}), \quad \underline{h}_{ij}=\begin{cases} -\overline{h}_{ij} & \text{if } i\neq j,\\ \dfrac{\overline{h}_{jj}}{2\overline{h}_{jj}-1} & \text{if } i=j. \end{cases} \end{array} $$
(10)
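
A direct transcription of (10) reads as follows (a minimal sketch assuming ϱ(Δ) < 1 has already been verified; a rigorous implementation would perform the inversion with outward rounding):

```python
import numpy as np

def inverse_interval_matrix(Delta):
    """Bounds [H_lo, H_up] of the inverse interval matrix of
    A = [I - Delta, I + Delta] by formula (10), assuming rho(Delta) < 1."""
    n = Delta.shape[0]
    H_up = np.linalg.inv(np.eye(n) - Delta)                  # (I - Delta)^{-1}
    H_lo = -H_up.copy()                                      # off-diagonal entries
    d = np.diag(H_up)
    H_lo[np.arange(n), np.arange(n)] = d / (2.0 * d - 1.0)   # diagonal entries
    return H_lo, H_up
```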

The following theorem, modified from [6, Theorem 1], recalls the simplest single-step method for obtaining a parameterized solution enclosing \({\varSigma }^{p}_{\text {uni}}\).

Theorem 1 ([6, Theorem 1])

Let \(\check {A}=A(\check {p})\) be nonsingular. Denote \(\check {x}= \left (\check {A}\right )^{-1}a(\check {p})\), \(F = (a_{1},\ldots,a_{K})\), \(G = (A_{1}\check {x},\ldots , A_{K}\check {x})\), \(B^{0} = \left (\check {A}\right )^{-1}(F-G)\). Assume that:

$$ \varrho \left( \sum\limits_{i=1}^{K}\left|\left( \check{A}\right)^{-1}A_{i}\right|\hat{p}_{i}\right)<1. $$
(11)

Then the united p,r-solution x(p,r) of the system (1) exists and is determined by:

$$ x(p,r) = \check{x} + V\left( \check{p}-p\right) + r, \qquad p\in \mathbf{p}, r\in [-\hat{r},\hat{r}]\in\mathbb{IR}^{n}, $$
(12)

where \(V=\check {H}B^{0}\), \(\hat {r}=\hat {H}|B^{0}|\hat {p}\), and \(\check {H}\), \(\hat {H}\) are the midpoint and radius matrices, respectively, of the inverse interval matrix \(\mathbf {H}=[\underline {H},\overline {H}]\) obtained by (10) for \({\Delta } = {\sum }_{i=1}^{K}\left |\left (\check {A}\right )^{-1}A_{i}\right |\hat {p}_{i}\).

In [13], the condition (11) is called the single-sided strong regularity condition.
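
For illustration, Theorem 1 can be transcribed almost literally (a floating point sketch without the outward rounding a verified implementation needs; it reuses inverse_interval_matrix from the sketch after (10)):

```python
import numpy as np

def p_solution_theorem1(A_list, a_list, p_check, p_hat):
    """Single-step p,r-solution (12): returns x_check, V, r_hat such that
    x(p, r) = x_check + V (p_check - p) + r encloses the solution set of (1)."""
    K = len(p_check)
    A_c = A_list[0] + sum(p_check[k] * A_list[k + 1] for k in range(K))
    Ci = np.linalg.inv(A_c)
    x_c = Ci @ (a_list[0] + sum(p_check[k] * a_list[k + 1] for k in range(K)))
    F = np.column_stack(a_list[1:])
    G = np.column_stack([A_list[k + 1] @ x_c for k in range(K)])
    B0 = Ci @ (F - G)
    Delta = sum(np.abs(Ci @ A_list[k + 1]) * p_hat[k] for k in range(K))
    assert np.max(np.abs(np.linalg.eigvals(Delta))) < 1      # condition (11)
    H_lo, H_up = inverse_interval_matrix(Delta)              # formula (10)
    H_mid, H_rad = (H_lo + H_up) / 2, (H_up - H_lo) / 2
    V = H_mid @ B0                                           # V = H_check B^0
    r_hat = H_rad @ (np.abs(B0) @ p_hat)                     # r_hat = H_hat |B^0| p_hat
    return x_c, V, r_hat
```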

We consider another form of the parametric system (1). Let \({\mathcal {K}}=\{1,\ldots ,K\}\) and \(\pi ^{\prime },\pi ^{\prime \prime }\) be two subsets of \({\mathcal {K}}\) such that \(\pi ^{\prime }\cap \pi ^{\prime \prime }=\emptyset \), \(\pi ^{\prime }\cup \pi ^{\prime \prime }={\mathcal {K}}\), Card\((\pi ^{\prime })=K_{1}\). The set \(\pi ^{\prime }\) collects the indices of the parameters that appear in the matrix (and possibly also in the right-hand side) of the system, while \(\pi ^{\prime \prime }\) collects the indices of the parameters that appear only in a(p) of (1). For \(p_{\pi } = (p_{\pi _{1}},\ldots ,p_{\pi _{K}})\), \(D_{p_{\pi }}\) denotes a diagonal matrix with diagonal vector \(p_{\pi }\). For every parameter \(p_{k}\), \(k\in \pi ^{\prime }\), we consider a full-rank factorization of its coefficient matrix \(A_{k}\in \mathbb {R}^{n\times n}\):

$$ A_{k} = L_{k}R_{k}, \qquad L_{k}\in\mathbb{R}^{n\times \gamma_{k}}, R_{k}\in\mathbb{R}^{\gamma_{k}\times n}, \gamma_{k}=\text{rank}(A_{k}). $$
(13)

Also, \(p_{k}A_{k} = L_{k}D_{g_{k}(p_{k})}R_{k}\), where \(g_{k}(p_{k})=(p_{k},\ldots ,p_{k})^{\top }\in \mathbb {R}^{\gamma _{k}}\). We define \(\gamma ={\sum }_{k=1}^{K_{1}} \gamma _{k}\), \(g(p_{\pi ^{\prime }}) = \left (g_{1}^{\top } (p_{\pi ^{\prime }_{1}}),\ldots , g_{K_{1}}^{\top } (p_{\pi ^{\prime }_{K_{1}}})\right )^{\top }\), \(L=\left (L_{1},\ldots , L_{K_{1}}\right )\in \mathbb {R}^{n\times \gamma }\), \(R=\left (R_{1}^{\top },\ldots ,R_{K_{1}}^{\top }\right )^{\top }\in \mathbb {R}^{\gamma \times n}\). A numerical vector \(t\in \mathbb {R}^{\gamma }\) is chosen so that \({\sum }_{k\in \pi ^{\prime }} p_{k}a_{k} = LD_{g(p_{\pi ^{\prime }})}t\) (for example, by solving the latter equation with respect to t), and \(F\in \mathbb {R}^{n\times \text {Card}(\pi ^{\prime \prime })}\) is such that \(a(p) = a_{0}+LD_{g(p_{\pi ^{\prime }})}t + Fp_{\pi ^{\prime \prime }}\). Then, the system (1) has the following equivalent form:

$$ \left( A_{0}+LD_{g(p_{\pi^{\prime}})}R\right)x = a_{0}+LD_{g(p_{\pi^{\prime}})}t + Fp_{\pi^{\prime\prime}}, \quad p\in\mathbf{p}. $$
(14)

For each parameter \(g_{i} = \left (g(p_{\pi ^{\prime }})\right )_{i}\), its coefficient matrix \(L_{\bullet i}R_{i\bullet}\) has rank 1. Although the coefficient matrices \(A_{k}\) in (1) may not have rank 1 in general, some sufficient conditions for regularity of an interval parametric matrix are based on its approximation by another matrix with rank 1 uncertainty structure, cf. [13]. The latter means that each instance of the parameter \(p_{i}\) is cloned as a separate parameter in the diagonal matrix D, and each of these clones is considered independently. We assume that (14) provides an equivalent, optimal, rank 1 representation (cf. [13], [14, Definition 1]) of either \(A(p_{\pi ^{\prime }})-A_{0}\) or \(A^{\top }(p_{\pi ^{\prime }})-A_{0}^{\top }\). Every interval parametric linear system (1) has an equivalent, optimal, rank 1 representation (14), and there are various ways to obtain it, cf. [9, 13]. The representation (14) of the system (1) is closely related to the monotonicity properties of the parametric united solution set and is the background for the newly proposed monotonicity proving approach.
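
A full-rank factorization (13) can be computed, for example, from a truncated SVD. The sketch below (one possible construction, with hypothetical names; whether the resulting representation is optimal in the sense of [14, Definition 1] is a separate question) also assembles L and R and records which clone belongs to which original parameter:

```python
import numpy as np

def full_rank_factor(Ak, tol=1e-12):
    """Full-rank factorization A_k = L_k R_k via the SVD, gamma_k = rank(A_k)."""
    U, s, Vt = np.linalg.svd(Ak)
    g = int(np.sum(s > tol))          # numerical rank gamma_k
    return U[:, :g] * s[:g], Vt[:g, :]

def assemble_LR(A_mats):
    """L = (L_1, ..., L_{K_1}), R = (R_1^T, ..., R_{K_1}^T)^T as in (14);
    'clone' maps every row of R (every cloned parameter g_i) to its source k."""
    Ls, Rs, clone = [], [], []
    for k, Ak in enumerate(A_mats):
        Lk, Rk = full_rank_factor(Ak)
        Ls.append(Lk); Rs.append(Rk); clone += [k] * Rk.shape[0]
    return np.hstack(Ls), np.vstack(Rs), np.array(clone)
```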

Theorem 2 ([13])

Let (14) be an equivalent, optimal, rank 1 representation of the system (1) and let the matrix \(A(\check {p})\) be nonsingular. Denote \(C=\left (A(\check {p})\right )^{-1}\) and \(\check {x}=Ca(\check {p})\). If

$$ \varrho\left( \left|(RCL)D_{g(\check{p}_{\pi^{\prime}}-\mathbf{p}_{\pi^{\prime}})}\right|\right)<1, $$
(15)

then

  (i)

    \({\varSigma }_{uni}\left (A(p),a(p), \mathbf {p}\right )\) and the united solution set Σuni((16)) of the interval parametric linear system:

    $$ \begin{array}{@{}rcl@{}} \left( I-RCLD_{g(\check{p}_{\pi^{\prime}}-p_{\pi^{\prime}})}\right)y = R\check{x} - RCF\left( \check{p}_{\pi^{\prime\prime}} - p_{\pi^{\prime\prime}}\right) - RCLD_{g(\check{p}_{\pi^{\prime}}-p_{\pi^{\prime}})}t,\\ p\in\mathbf{p}\\ \end{array} $$
    (16)

    are bounded,

  (ii)

    \(\mathbf {y}\supseteq {\varSigma }_{\text {uni}}((16))\) is computable by methods that require (11) (cf. [13]),

  (iii)

    Every \(x\in {\varSigma }_{uni}\left (A(p), a(p), \mathbf {p}\right )\) satisfies

    $$ x \in \check{x} - (CF)[-\hat{p}_{\pi^{\prime\prime}}, \hat{p}_{\pi^{\prime\prime}}] + (CL)\left( D_{g([-\hat{p}_{\pi^{\prime}}, \hat{p}_{\pi^{\prime}}])}|\mathbf{y}-t|\right). $$
    (17)

The condition (15) is called in [13] the double-sided strong regularity condition. Its proof is based on the Woodbury formula [23]. Note also that (16) is obtained from (14) by the substitution y = Rx. Theorem 2 is the background of Theorem 3, which presents two different parameterized solutions to the system (14) and summarizes the results of [14, Theorem 4] and [15, Theorem 3].
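
The two strong regularity conditions are cheap to compare numerically. The following sketch (illustrative; it assumes \(\check{p}=0\), so that \(A(\check{p})=A_{0}\), as holds in Example 1 below) computes the spectral radii appearing in (11) and (15); for the data of Example 1 it should return approximately the values 0.976 and 0.582 reported there:

```python
import numpy as np

def rho(M):
    """Spectral radius."""
    return np.max(np.abs(np.linalg.eigvals(M)))

def regularity_radii(A0, A_mats, p_hat, Lm, Rm, g_hat):
    """Spectral radii in the single-sided condition (11) and the
    double-sided condition (15); A_mats = [A_1, ..., A_K], g_hat = g(p_hat)."""
    Ci = np.linalg.inv(A0)                               # C = (A(p_check))^{-1}
    rho11 = rho(sum(np.abs(Ci @ Ak) * ph for Ak, ph in zip(A_mats, p_hat)))
    rho15 = rho(np.abs(Rm @ Ci @ Lm) @ np.diag(g_hat))   # |RCL| D_{g(p_hat)}
    return rho11, rho15
```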

Theorem 3

Let (14) be an equivalent, optimal, rank 1 representation of the system (1) and let the matrix \(A(\check {p})\) be nonsingular. Denote \(C=\left (A(\check {p})\right )^{-1}\) and \(\check {x}=Ca(\check {p})\). If the condition (15) holds true, then:

  (i)

    There exists a united parameterized solution of the system (1), respectively of the system (14),

    $$ x(p) = \check{x} - (CF)\left( \check{p}_{\pi^{\prime\prime}}-p_{\pi^{\prime\prime}}\right) + \left( CLD_{|\mathbf{y} - t|}\right)g(\check{p}_{\pi^{\prime}}-p_{\pi^{\prime}}), \quad p\in \mathbf{p}, $$
    (18)

    where \(\mathbf {y}\supseteq {\varSigma }_{\text {uni}}((16))\) is computable by methods that require (11);

  (ii)

    There exists a united parameterized solution of the system (1), respectively of the system (14),

    $$ \begin{array}{@{}rcl@{}} x(p, r) &=& \check{x} - (CF)(\check{p}_{\pi^{\prime\prime}}-p_{\pi^{\prime\prime}})+ (CLD_{\check{y}-t})g(\check{p}_{\pi^{\prime}}-p_{\pi^{\prime}}) + r,\\ && p\in\mathbf{p}, r\in \mathbf{r}=[-\hat{r},\hat{r}], \end{array} $$
    (19)

    where \(\check {y}=R\check {x}\), \(\mathbf {y}\supseteq {\varSigma }_{uni}((16))\) is computable by methods that require (11), and \(\hat {r}=|CL|D_{|\mathbf {y}-\check {y}|}g(\hat {p}_{\pi ^{\prime }})\).

  (iii)

    With the same y used in (17), (18), and (19), the interval evaluation \(x\left (\mathbf {p}\right )\) of x(p) in (18) is equal to the interval evaluation \(x\left (\mathbf {p},\mathbf {r}\right )\) of x(p,r) in (19), and to the interval vector x obtained by Theorem 2.

Although it is obtained by a better enclosure method, the parameterized solution (19) is similar to that of Theorem 1. For parametric systems involving only rank 1 interval parameters, the parameterized solution (18) depends only on the initial K interval parameters, while (19) depends on K + n interval parameters; thus, the latter involves more parameters than the former.

3.2 Main result

Proposition 1

With the notation of Theorem 3, if (15) holds true, then

$$ \square {\varSigma}_{\text{uni}}((16)) \subseteq y^{(1)}(\mathbf{p},\mathbf{r}) = y^{(2)}(\mathbf{p}), $$
(20)

where y(1)(p,r) is the parameterized solution of (16), obtained by Theorem 1, and

$$ \begin{array}{@{}rcl@{}} y^{(2)}(p) &=& \check{y} + H\left( B^{0}\left( \check{p}_{\pi}-p_{\pi}\right)\right)\\ &=& \check{y} + \left( H|B^{0}|\right)\left( \check{p}_{\pi}-p_{\pi}\right), \qquad p\in\mathbf{p}, \end{array} $$

where \(B^{0}=RC\left (-F, LD_{\check {y}-t}K\right )\), \(\pi =\left (\pi ^{\prime \prime }, \pi ^{\prime }\right )\), \(K\in \mathbb {R}^{\gamma \times \textup {Card}(\pi ^{\prime })}\) is defined by

$$ K_{\bullet i} = \frac{\partial g(p_{\pi^{\prime}})}{\partial p_{i}}, \quad \text{for every } i\in\pi^{\prime}, $$
(21)

and \(H=\left (I-\left |RCL\right |D_{g(\hat {p}_{\pi ^{\prime }})}\right )^{-1}\).

Proof

We apply Theorem 1 to the interval parametric system (16). The matrix \(B^{0}\) for this system is:

$$ B^{0} = I\left( -RC\left( F, LD_{t}K\right) +RC\left( 0, LD_{\check{y}}K\right)\right) = RC\left( -F, LD_{\check{y}-t}K\right). $$

For \({\Delta } = \left |RCL\right |D_{g(\hat {p}_{\pi ^{\prime }})}\), by (10), we obtain the corresponding inverse interval matrix \(\left [\underline {H},\overline {H}\right ]\). Theorem 1 implies the existence of

$$ y^{(1)}(p,r) = \check{y}+ \left( \check{H}B^{0}\right)\left( \check{p}_{\pi} - p_{\pi}\right) + r, \quad p_{\pi}\in\mathbf{p}_{\pi}, r\in [-\hat{r}, \hat{r}], $$
(22)

where \(\hat {r}=\hat {H}\left |B^{0}\right |\hat {p}_{\pi }\), with properties (7) and (8). From the interval evaluation of (22), that is \(y^{(1)}(\mathbf {p},\mathbf {r})=\check {y}+ [-\hat {y},\hat {y}]\), where

$$ \hat{y}= \check{H}|B^{0}|\hat{p}_{\pi} + \hat{H}|B^{0}|\hat{p}_{\pi} = \overline{H}|B^{0}|\hat{p}_{\pi}, $$

we obtain

$$ \begin{array}{@{}rcl@{}} y^{(2)}(p) &=& \check{y}+ \overline{H}\left( B^{0}\left( \check{p}_{\pi} -p_{\pi}\right)\right) = \check{y}+ \left( \overline{H}|B^{0}|\right)\left( \check{p}_{\pi} -p_{\pi}\right), \quad p_{\pi}\in\mathbf{p}_{\pi} \end{array} $$

and the relation (20). □

We use the relation (20) when evaluating partial derivatives of x(p). Consider the function in (18):

$$ \begin{array}{@{}rcl@{}} x(p) &=& \check{x}- (CF)\left( \check{p}_{\pi^{\prime\prime}}-p_{\pi^{\prime\prime}}\right) + (CL)\left( D_{g(\check{p}_{\pi^{\prime}}-p_{\pi^{\prime}})}\left( y(p)-t\right)\right), \quad p\in\mathbf{p}, \end{array} $$

where y is replaced by y(p) = y(2)(p) from Proposition 1,

$$ y(p) = \check{y} + H\left( \left( RC\left( -F, LD_{\check{y}-t}K\right)\right)\left( \check{p}_{\pi}-p_{\pi}\right)\right), \qquad p\in\mathbf{p}. $$

We find the expressions of the partial derivatives of x(p) with respect to each parameter \(p_{i}\), \(i\in\pi\), and evaluate these expressions over \(p_{\pi }\in \mathbf {p}_{\pi }\).

For \(i\in \pi ^{\prime \prime }\),

$$ \begin{array}{@{}rcl@{}} \frac{\partial x(p)}{\partial p_{i}} &=& (CF)e_{i} + (CL)\left( D_{g(\check{p}_{\pi^{\prime}}-p_{\pi^{\prime}})}HRCFe_{i}\right), \end{array} $$

where ei is the i-th coordinate vector in \(\mathbb {R}^{\text {Card}(\pi ^{\prime \prime })}\). In matrix form:

$$ \begin{array}{@{}rcl@{}} \frac{\partial x(p)}{\partial p_{\pi^{\prime\prime}}} &=& CF + (CL)\left( D_{g(\check{p}_{\pi^{\prime}}-p_{\pi^{\prime}})}(HRCF)\right). \end{array} $$

The interval evaluation of the latter is:

$$ \begin{array}{@{}rcl@{}} \left.\frac{\partial x(p)}{\partial p_{\pi^{\prime\prime}}}\right|_{p\in\mathbf{p}} &=& CF + [-\hat{V}^{\prime\prime}, \hat{V}^{\prime\prime}], \end{array} $$

where \( \hat {V}^{\prime \prime } = |CL|\left (g(\hat {p}_{\pi ^{\prime }})\circ |HRCF|\right ) = |CL|D_{g(\hat {p}_{\pi ^{\prime }})}|HRCF|\) and ∘ is the componentwise Hadamard product.

For \(i\in \pi ^{\prime }\),

$$ \begin{array}{@{}rcl@{}} \frac{\partial x(p)}{\partial p_{i}} &=& (CL)\left( -D_{K_{\bullet i}}(y(p)-t) - D_{g(\check{p}_{\pi^{\prime}}-p_{\pi^{\prime}})}HRCLD_{\check{y}-t}Ke_{i}\right), \end{array} $$

where ei is the i-th coordinate vector in \(\mathbb {R}^{\text {Card}(\pi ^{\prime })}\). In matrix form:

$$ \begin{array}{@{}rcl@{}} \frac{\partial x(p)}{\partial p_{\pi^{\prime}}} &=& -(CL)\left( (y(p)-t)\circ K + D_{g(\check{p}_{\pi^{\prime}}-p_{\pi^{\prime}})}HRCLD_{\check{y}-t}K\right). \end{array} $$

In order to minimize the number of interval operations in the interval evaluation of the last expression, we rearrange it so that:

$$ \begin{array}{@{}rcl@{}} \left.\frac{\partial x(p)}{\partial p_{\pi^{\prime}}}\right|_{p\in\mathbf{p}} &=& -CLD_{\check{y}-t}K + [-\hat{V}^{\prime}, \hat{V}^{\prime}], \quad \text{where} \end{array} $$
$$ \begin{array}{@{}rcl@{}} \hat{V}^{\prime} &=& |CL|\left( \left( \left|HRC(-F,LD_{\check{y}-t}K)\right|\hat{p}_{\pi}\right)\circ K + D_{g(\hat{p}_{\pi^{\prime}})}\left|HRCLD_{\check{y}-t}K\right|\right) \\ &=& |CL|\left( \left( \left|HRCF\right|\hat{p}_{\pi^{\prime\prime}} + \left|HRCLD_{\check{y}-t}K\right|\hat{p}_{\pi^{\prime}}\right)\circ K + g(\hat{p}_{\pi^{\prime}})\circ \left|HRCLD_{\check{y}-t}K\right|\right). \end{array} $$

Thus, we have proven the following theorem.

Theorem 4

Let (14) be an equivalent, optimal, rank 1 representation of the system (1) and let the matrix \(A(\check {p})\) be nonsingular. Denote \(C=\left (A(\check {p})\right )^{-1}\). If the condition (15) holds true, define the interval matrix \(\mathbf {M}\in \mathbb {IR}^{n\times K}\) by

$$ \mathbf{M} = C\left( F, -LD_{\check{y}-t}K\right) + [-\hat{V}, \hat{V}],$$

where K, H are defined in Proposition 1 and

$$ \begin{array}{@{}rcl@{}} \hat{V}&=&\left|CL\right|\left( g(\hat{p}_{\pi^{\prime}})\circ\left|HRCF\right|, \left( |HRCF|\hat{p}_{\pi^{\prime\prime}}+ \left|HRCLD_{\check{y}-t}K\right|\hat{p}_{\pi^{\prime}}\right)\circ K \right.\\ &&+ \left. g(\hat{p}_{\pi^{\prime}})\circ\left|HRCLD_{\check{y}-t}K\right|\right). \end{array} $$

Then, for i = 1,…,n, jπ, \( \textup {sign}\left (\mathbf {M}_{ij}\right )\neq 0\) gives the type of global monotonicity of Σuni,i((1)) with respect to \(p_{\pi _{j}}\).

In the special case of rank 1 interval parameters, K = I and

$$ \begin{array}{@{}rcl@{}} \hat{V}^{\prime} &=& |CL|\left( \left( \left|HRC(-F,LD_{\check{y}-t})\right|\hat{p}_{\pi}\right)\circ I + g(\hat{p}_{\pi^{\prime}})\circ\left|HRCLD_{\check{y}-t}\right|\right)\\ &=& |CL|\left( D_{\left|HRC(-F,LD_{\check{y}-t})\right|\hat{p}_{\pi}} + D_{g(\hat{p}_{\pi^{\prime}})}\left|HRCLD_{\check{y}-t}\right|\right). \end{array} $$

The following example presents in detail the numerical computations based on the theoretical results of this section.

Example 1

Consider the following parametric linear system:

$$ \left( \begin{array}{ccc} 1 + p_{1} - p_{2}, & -p_{1} + p_{2}, & 1 + p_{1}\\ 2 + p_{1} + p_{2}, & -1 - p_{1} - p_{2} - p_{3}, & -1 + p_{1} - 2 p_{3}\\ 1 + p_{1}, & -3 -p_{1} - 2p_{3}, & 6 + p_{1} + 4p_{3} \end{array}\right)x= \left( \begin{array}{cc}1\\1\\1 \end{array}\right) $$
(23)

with \(p_{i}\in \frac {1}{4}[-1, 1]\), i = 1,2,3. In this system, the coefficient matrices of \(p_{1}\), \(p_{2}\) have rank 1, while the matrix of \(p_{3}\) has rank 2. An equivalent, optimal, rank 1 representation (14) of the system is obtained for \(\pi ^{\prime }=\{1,2,3\}\), \(\pi ^{\prime \prime }=\emptyset \) (implying F = 0 and omitting the terms involving it), \(g(p_{\pi ^{\prime }})=(p_{1},p_{2},p_{3},p_{3})^{\top }\), \(a_{0} = (1,1,1)^{\top }\), \(t=0\in \mathbb {R}^{4}\) and

$$ A_{0}=\left( \begin{array}{ccc}1&0&1\\ 2&-1&-1\\ 1&-3&6 \end{array}\right),\quad L=\left( \begin{array}{cccc}1& -1& 0& 0\\ 1& 1& -1& -2\\ 1& 0& -2& 4 \end{array}\right), \quad R=\left( \begin{array}{ccc}1& -1& 1\\ 1& -1& 0\\ 0& 1& 0\\ 0& 0& 1 \end{array}\right). $$

The spectral radius of the matrix in condition (11) is \(\varrho_{(11)} \approx 0.976\), while that in condition (15) is \(\varrho_{(15)} \approx 0.582\), showing that (15) holds true for parameters varying in wider intervals than (11) does. We apply (as in the proof of Proposition 1) Theorem 1 to the parametric system (16), which involves only column dependencies in the matrix. With \(\check {y}=R\check {x} = \frac {1}{14}(9, 6, 5, 3)^{\top }\):

$$ H=\overline{H}=\left( I-\left|RCL\right|D_{g(\hat{p}_{\pi^{\prime}})}\right)^{-1} = \frac{1}{6081}\left( \begin{array}{cccc}7784& 9156/5& 2408& 3808/5\\ 1636& 9432& 2956& 1840\\ 1556& 20292/5& 9056& 5836/5\\ 826& 1848& 868& 7798 \end{array}\right), $$
$$ K=\left( \begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&1\\ 0&0&1 \end{array}\right),\quad B^{0} = RC\left( LD_{\check{y}}K\right)=\frac{1}{196}\left( \begin{array}{ccc}81& 24& -49\\ 54& 72& -84\\ 45& -108& 49\\ 27& -48& 35 \end{array}\right), $$

we obtain the parameterized solutions y(1)(p,r) and y(2)(p) of Proposition 1

$$ \begin{array}{@{}rcl@{}} &&y^{(1)}(p,r) \approx \\ &&\left( \begin{array}{c}\frac{9}{14}\\ \frac{3}{7}\\ \frac{5}{14}\\ \frac{3}{14} \end{array}\right) + \left( \begin{array}{ccc}0.434041 & 0.128605& -0.262568\\ 0.31531 & 0.420413 & -0.490482\\ 0.257367& -0.61768 & 0.280244\\ 0.144774& -0.257376& 0.18767 \end{array}\right)\left( \check{p}_{\pi} - p_{\pi}\right) + \left( \begin{array}{c}0.245396\\ 0.329458\\ 0.350839\\ 0.177327 \end{array}\right)\circ r,\\ &&p_{\pi}\in\mathbf{p}_{\pi}, \ \pi=\pi^{\prime}, \ r_{i}\in [-1,1], \ i=1,\ldots,4, \end{array} $$
$$ \begin{array}{@{}rcl@{}} y^{(2)}(p) &=& \check{y}+ \left( H|B^{0}|\right)\left( \check{p}_{\pi}-p_{\pi}\right) \\ &\approx & \frac{1}{14}\left( \begin{array}{c}9\\ 6\\ 5\\ 3 \end{array}\right) + \left( \begin{array}{ccc} 0.720135& 0.516231& 0.570432\\ 0.691804& 0.944675& 0.907557\\ 0.657974& 1.1441& 0.756577\\ 0.349285& 0.520967& 0.428877 \end{array}\right)\left( \check{p}_{\pi} - p_{\pi}\right), \quad \pi = \pi^{\prime}, \ p\in\mathbf{p}, \end{array} $$

and the relation (20). Then, applying Theorem 4, we obtain:

$$ \begin{array}{@{}rcl@{}} \hat{V}&=&\left|CL\right|\left( \left( \left|HRCLD_{\check{y}}K\right|\hat{p}_{\pi^{\prime}}\right)\circ K + g(\hat{p}_{\pi^{\prime}})\circ \left|HRCLD_{\check{y}}K\right|\right) \\ &=& \frac{1}{2979690}\left( \begin{array}{ccc}1487336& 747981& 875411\\ 1560140& 2074047& 1504139\\ 800508& 928444& 795130 \end{array}\right), \\ -CLD_{\check{y}}K &=& -\frac{1}{196}\left( \begin{array}{ccc}-99& 36& 35\\ -45& 108& -49\\ -27& 48& -35 \end{array}\right). \end{array} $$

Thus, the interval matrix \(\mathbf {M}=-CLD_{\check {y}}K+\left [-\hat {V}, \hat {V}\right ]\) of Theorem 4

$$ \begin{array}{@{}rcl@{}} \mathbf{M} &\approx& \left( \begin{array}{ccc} [-1.00426, -0.00594407]& [-0.067353, 0.4347]& [-0.115221, 0.472364]\\ {[-0.753183, 0.294]}& [-0.145041, 1.24708]& [-0.754797, 0.254797]\\ {[-0.40641, 0.1309]}& [-0.0666928, 0.556489]& [-0.445421, 0.0882785] \end{array}\right) \end{array} $$

has intervals containing zero in all its elements except for \(\mathbf{M}_{11}\), which has a negative sign and shows a monotone decreasing dependence of the first solution component with respect to \(p_{1}\). For the same parametric system (23) with narrower intervals for the parameters, Theorem 4 proves monotonic dependence of more solution components with respect to more parameters (see Example 3 and Fig. 1).
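
The computation of M in Example 1 can be retraced with a few lines of floating point code (a sketch without outward rounding, using the data above with \(\pi^{\prime\prime}=\emptyset\), t = 0 and \(\check{p}=0\); up to rounding, it should reproduce \(B^{0}\), \(\hat{V}\), and M):

```python
import numpy as np

A0 = np.array([[1., 0., 1.], [2., -1., -1.], [1., -3., 6.]])
Lm = np.array([[1., -1., 0., 0.], [1., 1., -1., -2.], [1., 0., -2., 4.]])
Rm = np.array([[1., -1., 1.], [1., -1., 0.], [0., 1., 0.], [0., 0., 1.]])
Km = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [0., 0., 1.]])
p_hat = np.full(3, 0.25)                       # radii of p_1, p_2, p_3
g_hat = Km @ p_hat                             # radii of the clones, g(p_hat)

Ci = np.linalg.inv(A0)                         # C = (A(p_check))^{-1}
x_c = Ci @ np.ones(3)                          # x_check
y_c = Rm @ x_c                                 # y_check = R x_check = (9,6,5,3)/14
H = np.linalg.inv(np.eye(4) - np.abs(Rm @ Ci @ Lm) @ np.diag(g_hat))
B0 = Rm @ Ci @ Lm @ np.diag(y_c) @ Km          # B^0 of Proposition 1 (t = 0)
w = np.abs(H @ B0) @ p_hat                     # |H B^0| p_hat
V_hat = np.abs(Ci @ Lm) @ (w[:, None] * Km + np.diag(g_hat) @ np.abs(H @ B0))
M_mid = -Ci @ Lm @ np.diag(y_c) @ Km           # -C L D_{y_check} K
M_lo, M_hi = M_mid - V_hat, M_mid + V_hat      # interval matrix M of Theorem 4
print(np.sign(M_lo) * (np.sign(M_lo) == np.sign(M_hi)))   # only M[0,0] is signed
```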

4 Numerical comparisons

In this section, we compare the following approaches for finding initial monotonic dependencies:

Mdx:

Finding initial monotonicity by solving one interval parametric matrix equation (9), where x(p) = x is an initial interval enclosure of the considered system (1). Two versions of this approach are considered, with parametric solvers based on the single-sided regularity condition (11) and on the more efficient double-sided regularity condition (15).

MK2n:

The approach based on the systems (5). The abbreviation MK2n reflects the requirement of this approach to solve K interval parametric linear systems, each of dimension 2n. The efficiency of this approach in proving monotonic dependencies depends on the quality of the method for solving interval parametric linear systems. Since the parametric matrix A(p) appears twice in the systems that have to be solved, the MK2n method cannot take advantage of the parametric solvers for matrices with rank 1 uncertainty structure. Therefore, the MK2n method seems to have the best proving efficiency for systems with matrices whose radius of single-sided strong regularity is less than the radius of double-sided strong regularity. Another deficiency of the MK2n method is its high computational complexity, since the approach of solving one matrix equation for all the parameter dependencies [12] cannot be applied. Thus, for each parameter dependency, one has to solve a separate interval parametric linear system of twice the dimension.

MpKsol:

Proving initial monotonicity by solving one interval parametric matrix equation (9), where x(p) = x(p,r), pp, rr, is an initial parameterized solution (6) depending on K + n interval parameters for the considered system (1). Obtaining x(p,r) (6) is considered in two forms: one based on Theorem 1 using regularity condition (11) and another based on Theorem 3.(ii) using the more efficient double-sided regularity condition (15).

MpPsol:

Proving initial monotonicity by solving one interval parametric matrix equation (9), where x(p) = x(p,q), pp, qq, is an initial parameterized solution of the considered system (1) obtained by the method of Theorem 3.(i) using the double-sided regularity condition (15).

Mds:

The newly proposed method of Theorem 4. This approach combines the two steps of the methods MpKsol and MpPsol into one procedure, which fully accounts for the dependencies on the interval parameters in the partial derivatives.

The first three approaches above are the same as those considered in [22]. Here, we apply the computationally more efficient approach [12] of solving one matrix equation for all the parameter dependencies instead of solving K separate parametric systems involving the same matrix, as applied in [22]. This difference in the computational efficiency is demonstrated by Example 4. The second difference is in the parametric solvers that are used. Contrary to [22], which applies the computationally heaviest parameterized method based on affine arithmetic, we use only interval methods that do not require affine arithmetic. The latter also implies a better computational efficiency. The third major difference is in the presentation of the numerical results. In [22], the results of the initial monotonicity proof are not presented; only the solution enclosures obtained after applying the proven monotonic dependencies are given. Below, we propose several quantitative measures for the efficiency of the monotonicity proving approaches.

Similarly to the radius of applicability of a solver of an interval parametric linear system, we introduce two radii (r1(Mth) and r2(Mth)) of the efficiency of an initial monotonicity proving method, denoted by Mth. We consider system (1), where \(p_{k}\in \mathbf{p}_{k} = [-1,1]\), k = 1,…,K. Let \(M \in \{0,1,-1\}^{n\times K}\) be the sign matrix resulting from an initial monotonicity proving method. We define the radius r1(Mth) of the initial enclosure (monotonicity proving) efficiency of the approach Mth by

$$ \begin{array}{@{}rcl@{}} r_{1}(\text{Mth}) &:=& \arg\max_{r} \{r \mid M_{ij}\neq 0 \text{ for exactly one pair } (i,j),\\ && \quad i=1,\ldots,n, \ j=1,\ldots,K, \text{ for } p\in r\mathbf{p}\}. \end{array} $$

r1(Mth) corresponds to the maximal radius for the parameters such that exactly one sign in M is known, which means that the monotonicity of one solution component with respect to one parameter is determined. Next, we define the radius r2(Mth) of the initial enclosure (monotonicity proving) efficiency by:

$$ r_{2}(\text{Mth}) := \arg\!\max_{r} \{0\not\in M \text{ for } p\in r\mathbf{p}\}. $$

r2(Mth) corresponds to the maximal radius for the parameters such that all signs in M are known, which means that the monotonicity is determined for each solution component with respect to each parameter. In practical applications, the exact values r1(Mth) and r2(Mth) are usually approximated by corresponding floating point radii r = rprec, where “prec” denotes the number of decimal digits in the mantissa of rprec. Thus, for a given interval parametric linear system (1) and a specified precision, one can find approximations of r1(Mth) and r2(Mth) by subsequently changing the value of rprec. Below, in the numerical examples, we use prec = 4.
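
The approximations rprec can be found by bisection (a sketch; prove_signs stands for any of the monotonicity proving methods, returning the sign matrix M as a numpy array for parameters varying in rp, and the success criteria are assumed monotone in r):

```python
def radius(prove_signs, criterion, lo=0.0, hi=1.0, prec=4):
    """Approximate the largest r in [lo, hi] for which criterion(prove_signs(r))
    holds, to roughly 'prec' decimal digits."""
    for _ in range(60):                       # plain bisection on r
        mid = (lo + hi) / 2
        if criterion(prove_signs(mid)):
            lo = mid
        else:
            hi = mid
    return round(lo, prec)

# criteria corresponding to r1 and r2 on a numpy sign matrix M:
crit_r1 = lambda M: (M != 0).any()            # at least one sign proven
crit_r2 = lambda M: (M != 0).all()            # all signs proven
```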

For problems of high dimension, involving many interval parameters, we propose the following quantitative measure for the efficiency of a given approach. For an interval parametric linear system (1) with a given radius of parameter uncertainties r, such that \(p_{k}\in \mathbf{p}_{k} = [-r,r]\), k = 1,…,K, the quality of an initial monotonicity proving methodology Mth is denoted by ε(r,Mth). Let #0 be the number of zero components in the sign matrix M of the monotonic dependencies proven by Mth, and # the total number of elements in the matrix M. Then,

$$ \varepsilon (r, \text{Mth}) := \frac{\#-\#_{0}}{\#}\in [0,1]. $$

The measure ε(r,Mth) can also be applied to a row of the sign matrix M. This means that the measure can be applied to problems that require finding the monotonic dependence of only one (or some) components of the parametric solution. Obtaining ε(r,Mth) is exact and does not require additional computational effort.

Example 2

Consider the following interval parametric linear system after [22, Example 3]:

$$ \left( \begin{array}{cccc} 2p_{1} & p_{2}-1 &-p_{3} & p_{2}+3p_{5} \\ p_{2}+1 &0 & p_{1} &p_{4} +1 \\ 2-p_{3} &4p_{2}+1 &1&-p_{5} \\ -1 &2p_{5}+2 & \frac{1}{2}& 2p_{1}+p_{4} \end{array}\right)x = \left( \begin{array}{cc}1+2p_{3}\\-p_{4}+2\\3p_{4}+p_{5}\\p_{1}+p_{2}+2p_{5} \end{array}\right)\!\!, \quad \left( \begin{array}{cc}p_{k}\in [0.8-\delta, 1.1+\delta],\\ k=1,\ldots, 5. \end{array}\right) $$

In [22, Example 3], Mdx, MK2n, and MpKsol are applied for δ = 0.01,0.02,0.03,0.04,0.05. Here, we have additionally applied the three proposed measures for the efficiency of the considered approaches. Since the system involves parameters whose dependency structure has rank greater than 1, all interval parametric solvers are based on the single-sided regularity condition (11). Both Mdx and MpKsol are applied by solving one interval parametric matrix equation (9). It should be mentioned that Mdx yields the same initial monotonicity result for δ = 0.01,0.02,0.03, and the same result for δ = 0.04,0.05, the latter reported in Table 1.

Table 1 Example 2: efficiency of the considered approaches for finding monotonic dependencies

Similarly, MK2n yields the same result for δ = 0.02,0.03,0.04, and the same result for δ = 0.05,0.06,0.07. MpKsol yields the same initial monotonicity result for δ = 0.02,0.03,0.04, and the same result for δ = 0.05,0.06,0.07, the latter reported in Table 1. It is clear that, for the particular parametric system and the considered parameter uncertainties, the three compared approaches have similar proving efficiency.

Example 3

Consider the parametric linear system (23) with starting parameter intervals \(p_{i}\in [-1,1]\), i = 1,2,3. Note that none of the conditions (11) and (15) holds true for these parameter intervals, which implies \(M=0\in \mathbb {R}^{n\times K}\). Table 2 clearly presents the initial monotonicity proving efficiency of the three approaches, where the corresponding parametric solvers are based on the single-sided regularity condition (11), which holds true for parameters with smaller interval radii. The differences between the three approaches are more pronounced than those in Example 2.

Table 2 Example 3: efficiency of the considered approaches based on the single-sided regularity condition (11)

Since the considered parametric system involves rank 1 interval parameters, we also apply monotonicity proving approaches based on the more efficient double-sided regularity condition (15). The estimated efficiency of these approaches is presented in Table 3. For the traditional approach Mdx, both the initial interval solution enclosure and the solution enclosure of the derivative matrix equation are based on (15). When applying the approaches MpKsol and MpPsol, the respective two kinds of initial parameterized solution enclosures are based on (15). In the corresponding derivative systems, the initial rank 1 uncertainty structure is changed due to the parameterized solutions in the right-hand sides, which implies that their solutions obtained by methods based on (15) would be overestimated. Therefore, the derivative matrix equation is solved by methods applying (11). Comparing the results in Tables 2 and 3, we see that any of the methods based on (15) has a better enclosure efficiency than any of the methods based on (11). Table 3 clearly shows that the newly proposed method of Theorem 4, which fully utilizes the parameter dependencies, has the best monotonicity proving efficiency.

Table 3 Example 3: efficiency of the considered approaches based on both regularity conditions (15), (11)

In the next examples, we consider interval parametric linear systems based on a finite element model of a one-bay 20-floor truss cantilever presented in Fig. 2, after [7].

Fig. 2

One-bay 20-floor truss cantilever after [7]

The structure consists of 42 nodes and 101 elements. The bay is L = 1 m, every floor is 0.75 L, the element cross-sectional area is \(A = 0.01\) m\(^{2}\), and the crisp value for the element Young modulus is \(E = 2\times 10^{8}\) kN/m\(^{2}\). Twenty horizontal loads with nominal value P = 10 kN are applied at the left nodes. The boundary conditions are determined by the supports: at A the support is a pin, at B the support is a roller. It is assumed that the modulus of elasticity \(E_{k}\) of each element and the external loads are independent interval parameters. This problem is often used as a benchmark for the computational efficiency and scalability of various interval methods applied to mechanical structures with complex configuration and a large number of interval parameters (see, e.g., [8, 14, 17]). Since the interval parametric linear system for the unknown displacements at each node has rank 1 uncertainty structure, by [11, Theorem 2, Proposition 1] the vertices of the interval hull of the unknown displacements are attained at particular endpoints of the interval parameters. Similar properties are verified for interval models of linear DC electric circuits [2, 3, 11]. However, finding \(\square {\varSigma }^{p}_{uni}\) by the combinatorial endpoint approach requires solving prohibitively many \(2^{101+20}\approx 2.66\times 10^{36}\) point linear systems for the model of the truss cantilever. Therefore, it is of particular practical interest to have computationally feasible methods for obtaining \(\square {\varSigma }^{p}_{uni}\).

For any interval parametric linear system where some parameters are involved only in the right-hand side, the monotonic dependence of \(\square {\varSigma }^{p}_{uni}\) on these parameters can be proven first. Thus, these parameters are removed from the subsequent considerations. By the next example, we demonstrate the computational advantage of considering one matrix equation for the partial derivatives instead of solving a number of interval parametric linear systems.

Example 4

Consider the interval parametric linear system for the displacements in the model of the truss cantilever presented in Fig. 2. A 6% uncertainty in the modulus of elasticity \(E_{k}\) of each element is assumed (± 3% around the corresponding mean value), together with a 10% uncertainty in each of the loads. The goal is to find the monotonic dependence of the unknown displacements with respect to each of the 20 interval load parameters, which appear only in the right-hand side of the system.

For proving the monotonic dependence of the solution with respect to the 20 interval parameters in the right-hand side of the system, we solve one interval parametric matrix equation (9) where, due to the lack of dependence, \(B(p)\in \mathbb {R}^{81\times 20}\) is a numerical matrix. This matrix equation is solved by an interval method based on the regularity condition (11). Monotonic dependence of every solution component is proven with respect to each interval parameter, except for five solution components with respect to the first interval parameter. Then, we solve an interval parametric linear system for the solution derivatives with respect to the first parameter by the method of Theorem 2, which proves monotonic dependence for all the solution components. For the sake of comparing the computing times, we also solve 20 separate interval parametric linear systems for the solution derivatives with respect to each of the parameters by the same method used to solve the corresponding matrix equation. Solving the 20 separate systems was about 15.6 times slower than solving the interval parametric matrix equation.

Example 5

In this example, we compare both the monotonicity proving efficiency and the computational efficiency of the newly proposed method Mds (Theorem 4) with those of the method MpPsol. The latter method showed the best monotonicity proving efficiency in Example 3 compared with the methods Mdx and MpKsol. Consider the interval parametric linear system for the displacements in the model of the truss cantilever presented in Fig. 2, which was considered in Example 4 with more interval parameters. Here, a 6% uncertainty in the modulus of elasticity \(E_{k}\) of each element is assumed, while the twenty external loads are considered deterministic at their nominal value. Thus, we have an interval parametric linear system involving 101 rank 1 interval parameters \(E_{k}\) in the matrix and a numerical right-hand side vector of dimension 81. The newly proposed method showed monotonicity proving efficiency ε(0.03,Mds) = 6875/8181 ≈ 0.8404. We denote the computing time of this method by Mdst. The method MpPsol, applied as in Example 3, showed monotonicity proving efficiency ε(0.03,MpPsol) = 5663/8181 ≈ 0.6922 and a computing time MpPsolt ≈ 2.6 Mdst. Thus, the newly proposed initial monotonicity proving method of Theorem 4 demonstrates essentially better computational and monotonicity proving efficiency in comparison with the methods based on explicit parameterized solutions.

5 Conclusion

Proving the dependence of the unknown variables in interval parametric linear systems on particular endpoints of the intervals for the model parameters is a computationally heavy problem of particular importance for many scientific and applied problems. An essential part of this iterative process is the initial proof of monotonic dependence. In this work, we analyzed, both theoretically and by numerical examples, various aspects of different methodological approaches aiming at the initial monotonicity proof. The computational efficiency of some of the considered methods is also analyzed. Some quantitative measures for the efficiency of different monotonicity proving methodologies are proposed. These measures are applied in the numerical comparisons between a variety of initial monotonicity proving approaches and a newly proposed method.

For parametric linear systems involving true rank 1 interval parameters, which appear in a variety of application domains, we proposed a novel method for proving an initial monotonic dependence of the solution components on the interval parameters. This novel method, presented in Theorem 4, combines the best computational efficiency with the best monotonicity proving efficiency in comparison with all considered approaches.

In [14], the parameterized solution (18) is the basis of a new approach for obtaining sharp bounds for derived quantities (e.g., forces or stresses) which are functions of the displacements (primary variables) in interval finite element models of mechanical structures. Further research could extend the present novel method of Theorem 4, which is also based on (18), to proving monotonic dependence of secondary variables defined by classes of functions depending on the initial interval parameters and the primary unknown variables.

Both the theoretical discussions and the numerical comparisons involved in this paper clearly reveal the properties of the initial monotonicity proving approaches. Thus, the end user can properly choose the best approach for an application. The following general advice can be given. For problems of small dimension or involving a few interval parameters with relatively small uncertainties, any approach could be applied. If monotonicity should be checked simultaneously for a large number of interval parameters, solving one matrix equation for all partial derivatives [12] is computationally more efficient than solving separate derivative systems. Applying initial parameterized solutions in the right-hand side of the partial derivative systems improves the efficiency of the proof. If parameters with rank 1 uncertainty structure dominate in the considered problem, then the novel approach proposed in Theorem 4 has the best computational and proving efficiency. In any case, choosing how to prove the monotonic dependence or the endpoint property of a parametric solution should be guided by the properties of the particular problem at hand and by the properties (discussed in this work) of the corresponding methodologies. In order to be informative, any comparison of the monotonicity proving efficiency of present or future methodologies should involve some of the three quantitative measures proposed in this work.