1 Introduction

Let \(\mathbb {R}\) denote the set of real numbers, and let \(\mathbb {R}^m, \mathbb {R}^{m\times n}\) denote the sets of real m-vectors and real \(m\times n\) matrices, respectively. A real compact interval is defined by \(\mathbf {a} = [\underline{a}, \overline{a}] := \{a\in \mathbb {R}\mid \underline{a}\le a\le \overline{a}, \underline{a}, \overline{a}\in \mathbb {R}\}\). Denote the set of all intervals by \(\mathbb {IR}\), and let \(\mathbb {IR}^m\), \(\mathbb {IR}^{m\times n}\) denote the sets of interval m-vectors and interval \(m\times n\) matrices, respectively. We consider systems of linear algebraic equations

$$\begin{aligned} A(p)x=b(p), \end{aligned}$$

where the matrix and the right-hand side vector depend linearly on a number of parameters \(p=(p_1,\ldots ,p_K)^{\top }\), which are uncertain and vary within given intervals \(\mathbf {p}=(\mathbf {p}_1,\ldots ,\mathbf {p}_K)^{\top }\). In the worst-case analysis of an uncertain system represented by interval parameters, one is interested in obtaining componentwise interval bounds for the so-called united parametric solution set of this system

$$\begin{aligned} \varSigma _{uni}\left( A(p),b(p),\mathbf {p}\right) := \left\{ x\in \mathbb {R}^n \mid \exists p\in \mathbf {p} : A(p)x = b(p)\right\} . \end{aligned}$$
(1.1)
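As a minimal illustration (the data and dimensions here are ours, not from the paper), points of the united solution set can be generated by sampling the parameter box; the componentwise extrema of the sampled solutions give an inner approximation of the interval hull of (1.1):

```python
import numpy as np

# Hypothetical 2x2 system A(p)x = b(p) with A(p) = A0 + p1*A1, b(p) = b0 + p1*b1
# and p1 in [0.9, 1.1].  Sampling p1 and solving gives points of the united
# solution set; their componentwise min/max inner-approximate its interval hull.
A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
A1 = np.eye(2)
b0 = np.array([1.0, 2.0])
b1 = np.array([0.5, 0.0])

samples = np.array([np.linalg.solve(A0 + p1 * A1, b0 + p1 * b1)
                    for p1 in np.linspace(0.9, 1.1, 101)])
lower, upper = samples.min(axis=0), samples.max(axis=0)
print(lower, upper)
```

Any verified enclosure of (1.1) must contain the box spanned by `lower` and `upper`.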

Most interval methods for enclosing \(\varSigma _{uni}\left( A(p),b(p),\mathbf {p}\right) \), e.g., [2, 4, 5, 16, 22], either require or verify the so-called strong regularity [15] of the parametric interval matrix. A numerical method proposed by Neumaier and Pownuk in [11] does not require strong regularity of A(p) on \(\mathbf {p}\) and has an expanded scope of applicability, shown in [19]. The only requirement of the method is a particular structure of the parameter dependencies, so that the system can be presented as

$$\begin{aligned} (A_0+LD_pR)x \;=\; b_0+Fq, \quad p\in \mathbf {p}, q\in \mathbf {q}, \end{aligned}$$
(1.2)

where L, R, F are real matrices and the parameters p are isolated in a diagonal matrix \(D_p\). In [18] some constructive sufficient conditions for a parametric matrix A(p) to be representable in the form \(A_0+LD_pR\) are proven, and the method is generalized to parametric systems involving dependencies between the matrix and the right-hand side. In [19] a more general constructive theorem specifies how any parametric linear system in general form can be transformed into a form similar to (1.2). Based on this representation, a new sufficient condition for regularity of a parametric matrix in the interval vector of the parameters is proven therein. This condition corresponds to the background requirement of the method of Neumaier and Pownuk, and its power is demonstrated on some numerical examples.

The present article is based on the ability to represent the linear parameter dependencies in any parametric matrix, or system of equations, by a sum of rank one matrices. Such representations and their properties are discussed in Sect. 2.1. The notion of strong regularity of a parametric interval matrix, which names a number of equivalent sufficient conditions for regularity of a parametric matrix in an interval vector for the parameters, is generalized in Sect. 3 to cover a wider class of parametric matrices. It is proven therein that the sufficient conditions based on an optimal rank one representation (Definition 2.3) of the parameter dependencies are more general than the sufficient conditions for regularity of a parametric interval matrix available so far. In Sect. 4 we present a general methodological framework for solving parametric interval linear systems having an optimal rank one representation of the parameter dependencies. The presented methodology has the same expanded scope of applicability as the method of Neumaier and Pownuk [11] and its generalization [18], and eliminates the implementation features required by the latter method. Some numerical examples, presented in this section, illustrate the advantages of the newly proposed methodology. The article ends with some conclusions.

2 Notation and basic definitions

Denote the identity matrix of appropriate dimension by I and the spectral radius of \(A\in \mathbb {R}^{n\times n}\) by \(\rho (A)\). A diagonal matrix, whose diagonal entries are the elements of a vector \(x\in \mathbb {R}^n\), is denoted by \(D_x= \mathrm{diag}(x_1,\ldots ,x_n)\). For a matrix \(A_k\in \mathbb {R}^{m\times n}\), denote its i-th row by \(A_{k, i\bullet }\) and its j-th column by \(A_{k, \bullet j}\). For a number of matrices \(A_1,\ldots ,A_k\in \mathbb {R}^{m\times n}\), \(A=(A_1|\ldots |A_k)\) denotes the matrix \(A\in \mathbb {R}^{m\times kn}\) obtained by stacking the columns of \(A_1,\ldots ,A_k\).

For an interval \(\mathbf {a} = [\underline{a}, \overline{a}]\), define its midpoint by \(\check{a}= \mathrm{mid}(\mathbf {a}):= (\underline{a} + \overline{a})/2\), radius by \(\hat{a} = \mathrm{rad}(\mathbf {a}) := (\overline{a} - \underline{a})/2\) and absolute value (magnitude) \(|\mathbf {a}|:= \max \{|\underline{a}|, |\overline{a}|\}\). These functions are applied to interval vectors and matrices componentwise. For a nonempty and bounded set \(\varSigma \subset \mathbb {R}^m\), its interval hull is defined by

$$\begin{aligned} \square \varSigma :=\bigcap \{\mathbf {x}\in \mathbb {IR}^m \mid \mathbf {x}\supseteq \varSigma \}. \end{aligned}$$

For an interval matrix \(\mathbf {A}\in \mathbb {IR}^{m\times m}\) and a given scaling vector \(u\in \mathbb {R}^m\), \(u>0\), the corresponding scaled maximum norm is

$$\begin{aligned} ||\mathbf {A}||_u = \max _{i=1,\ldots ,m}\frac{\sum _{j=1}^m|\mathbf {A}_{ij}|u_j}{u_i}. \end{aligned}$$
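In code, this norm is a one-liner once the magnitude matrix \(|\mathbf {A}|\) is available (a minimal sketch; the function name is ours):

```python
import numpy as np

# Scaled maximum norm ||A||_u of an interval matrix, computed from the
# componentwise magnitudes |A_ij| and a positive scaling vector u.
def scaled_max_norm(mag, u):
    return np.max((mag @ u) / u)

mag = np.array([[0.5, 0.2],
                [0.1, 0.4]])
u = np.ones(2)
print(scaled_max_norm(mag, u))  # with u = (1,1) this is the maximum row sum: 0.7
```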

Definition 2.1

For a given interval vector \(\mathbf {p}\in \mathbb {IR}^K\), such that \(\hat{p}_k > 0\) for each \(1\le k\le K\), the set of real matrices

$$\begin{aligned} \left\{ A(p), \mathbf {p}\right\} := \left\{ A(p)=A_0 + \sum _{k=1}^K p_kA_k \in \mathbb {R}^{m\times n} \mid p\in \mathbf {p}\right\} \end{aligned}$$
(2.1)

is called an \(m\times n\)parametric interval matrix involving linear parameter dependencies represented by the matrices \(A_k\in \mathbb {R}^{m\times n}\), \(0\le k\le K\).

A parametric interval matrix is not an interval matrix; \(\left\{ A(p), \mathbf {p}\right\} \) is a shorthand for the family of real matrices specified by A(p), where \(p\in \mathbf {p}\). This notation follows the style of notation (1.1).

For every parametric interval matrix \(\{A(p), \mathbf {p}\}\) of Definition 2.1, there is a corresponding nonparametric interval matrix

$$\begin{aligned} A(\mathbf {p}) := \square \{A(p), \mathbf {p}\} = A_0+\sum _{k=1}^K\mathbf {p}_kA_k. \end{aligned}$$
(2.2)

Nonparametric interval matrices \(\mathbf {A}\in \mathbb {IR}^{m\times n}\) can be considered as a special class of parametric interval matrices. Namely, \(\mathbf {A}\in \mathbb {IR}^{m\times n}\) can be considered as a parametric interval matrix involving \(m\times n\) interval parameters \(a_{ij}\in \mathbf {a}_{ij}\), \(1\le i\le m\), \(1\le j\le n\). Thus, the following Definition 2.2 comprises both the parametric and nonparametric interval matrices. In this article we will consider square parametric matrices.
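In midpoint-radius form, the hull (2.2) is straightforward to compute; a small sketch with hypothetical data:

```python
import numpy as np

# For p_k in [mid_k - rad_k, mid_k + rad_k], the nonparametric hull (2.2) has
#   mid(A(p)) = A0 + sum_k mid_k * A_k,   rad(A(p)) = sum_k rad_k * |A_k|.
A0 = np.array([[2.0, 1.0], [0.0, 3.0]])
Aks = [np.array([[1.0, -1.0], [0.0, 0.0]])]
p_mid, p_rad = [1.0], [0.2]

mid = A0 + sum(m * Ak for m, Ak in zip(p_mid, Aks))
rad = sum(r * np.abs(Ak) for r, Ak in zip(p_rad, Aks))
print(mid)  # [[3, 0], [0, 3]]
print(rad)  # [[0.2, 0.2], [0, 0]]
```

Note that passing from \(\{A(p),\mathbf {p}\}\) to \(A(\mathbf {p})\) in general loses the dependencies between entries.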

Definition 2.2

A square parametric interval matrix \(\{A(p), \mathbf {p}\}\) is called regular if A(p) is regular for every \(p\in \mathbf {p}\).

\(\{A(p), \mathbf {p}\}\) is said to be singular if Definition 2.2 is not satisfied, i.e., if A(p) is singular for some \(p\in \mathbf {p}\).

2.1 Equivalent representations of a parametric matrix

For a square parametric matrix \(A(p)\in \mathbb {R}^{n\times n}\), besides the representation (2.1), in this article we consider another equivalent representation, called the LDR representation,

$$\begin{aligned} A(p) = A_0+LD_{g(p)}R \end{aligned}$$
(2.3)

with suitable matrices \(L,R^{\top }\in \mathbb {R}^{n\times s}\) and a parameter vector \(g(p)\in \mathbb {R}^s\), \(s\ge K\), which is obtained from p by possibly involving some parameters \(p_k\), \(1\le k\le K\), more than once.

An equivalent LDR representation (2.3) can be found for any parametric matrix (2.1). A simple construction is the following: for each \(1\le k\le K\), define \(L_k, R_k^{\top }\in \mathbb {R}^{n\times s_k}\) with \(A_k=L_kR_k\) from the \(s_k\) nonzero rows of \(A_k\). Namely, \(R_{k, i\bullet }=A_{k, j\bullet }\) and \(L_{k, \bullet i}= e(j)\), where j is the index of the i-th nonzero row of \(A_k\) and e(j) is the j-th coordinate unit vector in \(\mathbb {R}^n\), \(1\le i\le s_k\le n\). Thus, with \(g_k(p_k)=(p_k,\ldots ,p_k)^{\top }\in \mathbb {R}^{s_k}\), \(1\le k\le K\), and \(L=\left( L_1|\ldots |L_K\right) \), \(R=\left( R_1^{\top }|\ldots |R_K^{\top }\right) ^{\top }\), \(g(p) = \left( g_1^{\top } (p_1)|\ldots | g_K^{\top } (p_K)\right) ^{\top }\), we have an equivalent LDR representation (2.3). As discussed below, a parametric matrix may have many equivalent LDR representations, which differ in their properties.
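The row-based construction just described can be sketched as follows (a minimal numpy version; the helper name is ours):

```python
import numpy as np

# Build L_k, R_k from the s_k nonzero rows of A_k: R_k collects those rows and
# L_k the matching coordinate unit vectors, so that L_k @ R_k == A_k.
def ldr_from_nonzero_rows(Ak):
    idx = [i for i in range(Ak.shape[0]) if np.any(Ak[i] != 0)]
    Rk = Ak[idx, :]                    # s_k x n: the nonzero rows
    Lk = np.eye(Ak.shape[0])[:, idx]   # n x s_k: columns are unit vectors e(j)
    return Lk, Rk

Ak = np.array([[0.0, 0.0, 0.0],
               [2.0, 1.0, 0.0],
               [0.0, 4.0, 5.0]])
Lk, Rk = ldr_from_nonzero_rows(Ak)
print(Lk.shape, Rk.shape)  # (3, 2) (2, 3)
assert np.allclose(Lk @ Rk, Ak)
```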

For every parameter \(p_k\) in (2.1), \(1\le k\le K\), we consider a full rank factorization of its coefficient matrix \(A_k\in \mathbb {R}^{n\times n}\)

$$\begin{aligned} A_k = L_kR_k, \qquad L_k\in \mathbb {R}^{n\times s_k}, \; R_k\in \mathbb {R}^{s_k\times n}, \; s_k=\mathrm{rank}(A_k). \end{aligned}$$
(2.4)

Also, \(p_kA_k = L_kD_{g_k(p_k)}R_k\), where \(g_k(p_k)=(p_k,\ldots ,p_k)^{\top }\in \mathbb {R}^{s_k}\). For a given matrix \(A_k\in \mathbb {R}^{n\times n}\) with \(\mathrm{rank}(A_k)=s_k\), there are various ways to obtain a factorization (2.4), cf. [6, 13]. A full-rank LDR representation can be obtained by the reduced singular value decomposition (SVD), [6], \(A_k = L_kD_{\sigma _k}R_k^{\top }\), where \(\sigma _k=(\sigma _{k,1},\ldots ,\sigma _{k,s_k})^{\top }\) are the nonzero singular values of \(A_k\) and \(L_k,R_k\in \mathbb {R}^{n\times s_k}\) are the corresponding matrices with orthonormal columns. However, computing the SVD of a matrix has high computational complexity, and the decomposition may not be exact even if the input data are exact. Other approaches for obtaining a decomposition (2.4) are those based on (a) a basis of the column space of \(A_k\) and (b) the row reduced echelon form of \(A_k\). These decompositions are exact in exact arithmetic but not unique, cf. [6, 13]. An approach similar to (a) is applied in [19].
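A full rank factorization (2.4) via the reduced SVD can be sketched as follows (with numerical-rank truncation; the function name and tolerance are ours):

```python
import numpy as np

# A_k = U diag(s) V^T; truncating to the numerical rank s_k and absorbing the
# singular values into the left factor gives A_k = L_k R_k with s_k = rank(A_k).
def full_rank_factorization(Ak, tol=1e-12):
    U, s, Vt = np.linalg.svd(Ak)
    sk = int(np.sum(s > tol * s[0]))   # numerical rank
    return U[:, :sk] * s[:sk], Vt[:sk, :]

Ak = np.outer([1.0, 2.0, 0.0], [3.0, 0.0, 1.0])  # a rank one matrix
Lk, Rk = full_rank_factorization(Ak)
print(Lk.shape, Rk.shape)  # (3, 1) (1, 3)
assert np.allclose(Lk @ Rk, Ak)
```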

Definition 2.3

For a parametric matrix \(A(p)=A_0+\sum _{k=1}^K p_kA_k\), the following LDR-representation

$$\begin{aligned} A_0 + LD_{g(p)}R, \end{aligned}$$
(2.5)

where \(g(p)\in \mathbb {R}^s\), \(s=\sum _{k=1}^K s_k\), \(g(p) = \left( g_1^{\top } (p_1)|\ldots | g_K^{\top } (p_K)\right) ^{\top }\), \(L=\left( L_1|\ldots |L_K\right) \in \mathbb {R}^{n\times s}\), \(R=\left( R_1^{\top }|\ldots |R_K^{\top }\right) ^{\top }\in \mathbb {R}^{s\times n}\) and for \(1\le k\le K\), \(g_k(p_k)=(p_k,\ldots ,p_k)^{\top }\in \mathbb {R}^{s_k}\), \(p_kA_k = L_kD_{g_k(p_k)}R_k\), is an equivalent, optimal, rank one representation of A(p) if

  1. (i)

    (2.5) restores A(p) exactly, that is

    $$\begin{aligned} \nonumber A(p)= & {} A_0 + LD_{g(p)}R \; = \; A_0+ \sum _{k=1}^K L_kD_{g_k(p_k)}R_k; \end{aligned}$$
  2. (ii)

    for each \(1\le k\le K\), matrices \(L_k, R_k\) provide a full rank factorization (2.4) of \(A_k\).

Definition 2.3 implies that if we introduce new parameters \(t_i\), \(1\le i\le s\), by \(t_i:=g_i(p)\), \(t_i\in g_i(\mathbf {p})\), then the coefficient matrix (denoted by \(A_i\)) of each parameter \(t_i\) has rank one, that is, \(A_i = L_{\bullet i}R_{i\bullet }\). Section 3 shows that although the coefficient matrices \(A_k\) of the parameters \(p_k\), \(1\le k\le K\), may not have rank one in general, the sufficient conditions for regularity of a parametric interval matrix (used also by the methods for enclosing the solution set \(\varSigma _{uni}\left( A(p),b(p),\mathbf {p}\right) \)) are based on approximating the parametric matrix by another one with a rank one uncertainty structure.

In what follows, unless explicitly specified otherwise, we will not distinguish among the variety of equivalent optimal rank one LDR representations (2.5) of a parametric matrix A(p) in (2.1).

Definition 2.4

A parametric matrix \(A(p)\in \mathbb {R}^{n\times n}\), \(p\in \mathbb {R}^K\), involves only column dependencies if for every \(1\le k\le K\) there exists \(j\in \{1,\ldots ,n\}\) such that \(A_{k, \bullet j}\ne 0\) and \(A_{k, \bullet i}=0\) for all \(i\ne j\).

A parametric matrix A(p) involves row dependencies if \(A^{\top }(p)\) involves column dependencies. Column dependencies are “true” (most general) if the nonzero column \(A_{k, \bullet j}\) has more than one nonzero component. Column dependencies and row dependencies are special cases of a rank one dependency structure. Therefore, we say that a parameter dependence has a “true” (most general) rank one dependency structure if \(A_k=u_kv^{\top }_k\) is such that both \(u_k,v_k\in \mathbb {R}^n\) have more than one nonzero component.

In some proofs below we use the following statement, known as the Woodbury formula [6, 23]. Let \(A\in \mathbb {R}^{n\times n}\) be nonsingular and \(U\in \mathbb {R}^{n\times s}\), \(S\in \mathbb {R}^{s\times s}\), \(V\in \mathbb {R}^{s\times n}\). If \(I + VA^{-1}US\) is nonsingular, respectively \(S - VA^{-1}U\) is nonsingular, then \(A+USV\) is nonsingular, respectively \(A - US^{-1}V\) is nonsingular, and

$$\begin{aligned} (A+USV)^{-1}= & {} A^{-1} - A^{-1}US(S+SVA^{-1}US)^{-1}SVA^{-1}, \\ (A-US^{-1}V)^{-1}= & {} A^{-1} +A^{-1}U(S-VA^{-1}U)^{-1}VA^{-1}. \end{aligned}$$
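The second identity is easy to check numerically on random data (assuming the draw satisfies the stated nonsingularity conditions, which holds for the well-conditioned matrices below):

```python
import numpy as np

# Numerical check of (A - U S^{-1} V)^{-1} = A^{-1} + A^{-1}U(S - V A^{-1}U)^{-1}V A^{-1}
rng = np.random.default_rng(0)
n, s = 5, 2
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well conditioned
U = 0.2 * rng.standard_normal((n, s))
V = 0.2 * rng.standard_normal((s, n))
S = np.eye(s) + 0.1 * rng.standard_normal((s, s))

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A - U @ np.linalg.inv(S) @ V)
rhs = Ainv + Ainv @ U @ np.linalg.inv(S - V @ Ainv @ U) @ V @ Ainv
assert np.allclose(lhs, rhs)
print("second Woodbury identity verified")
```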

3 Strong regularity of parametric interval matrices

3.1 The evolution of strong regularity

We study regularity of parametric interval matrices in the context of solving parametric interval linear systems. Since proving regularity is NP-hard, cf. [14], the focus is on sufficient conditions for regularity of the parametric interval matrix.

Some sufficient conditions for regularity of a parametric interval matrix \(\{A(p), \mathbf {p}\}\) are based on sufficient conditions for regularity of the corresponding nonparametric interval matrix \(A(\mathbf {p})\). Regularity of \(A(\mathbf {p})\) implies regularity of \(A(p)\in A(\mathbf {p})\) for each \(p\in \mathbf {p}\), which (by Definition 2.2) is regularity of the parametric interval matrix. The term strong regularity of a nonparametric interval matrix \(\mathbf {A}\) was introduced by Neumaier in [8] and further developed in [9] as a name for a number of equivalent sufficient conditions, [9, Proposition 4.1.1], for regularity of \(\mathbf {A}\). Since, for an arbitrary matrix R with the dimension of A(p),

$$\begin{aligned} \rho \left( I - \mathrm{rad}(RA(\mathbf {p}))\right) < 1 \end{aligned}$$
(3.1)

implies strong regularity of \(A(\mathbf {p})\) and therefore regularity of \(\{A(p), \mathbf {p}\}\), the sufficient condition (3.1) is used by the first self-verified method for solving parametric interval linear systems [3] and by its generalization developed by Rump in [21]. In [15], strong regularity of a nonparametric interval matrix \(\mathbf {A}\) is generalized for parametric interval matrices as a general term of a number of equivalent sufficient conditions for regularity of a parametric interval matrix.

Definition 3.1

([15]) A square parametric interval matrix \(\{A(p), \mathbf {p}\}\) is single-sided strongly regular if \(A(\check{p})\) is nonsingular and one of the following interval matrices is regular

$$\begin{aligned} \mathbf {B}:=\square \{A^{-1}(\check{p})A(p)\mid p\in \mathbf {p}\}, \qquad \mathbf {B}':=\square \{A(p)A^{-1}(\check{p})\mid p\in \mathbf {p}\}. \end{aligned}$$
(3.2)

Single-sided strong regularity of a parametric interval matrix has until now been called simply “strong regularity”. We qualify Definition 3.1 as single-sided in order to reflect its property of accounting for true column or row dependencies, but not both, which will be discussed later on. As above, single-sided strong regularity of a parametric interval matrix implies regularity of the latter. The equivalence of several necessary and sufficient conditions for single-sided strong regularity of a parametric interval matrix is proven in [15]. A parametric generalization of the condition (3.1), together with forms of it verifiable in floating-point arithmetic, is also presented in [15]. Since Definition 3.1 defines a more general and more powerful sufficient condition for regularity of a parametric matrix than strong regularity of \(A(\mathbf {p})\), cf. [15, Theorem 2], the single-sided strong regularity condition

$$\begin{aligned} \rho (\mathrm{rad}(\mathbf {B})) < 1, \end{aligned}$$
(3.3)

which implies regularity of \(\{A(p), \mathbf {p}\}\), is required by most methods for solving parametric interval linear systems, and elsewhere in the theory where regularity of \(\{A(p), \mathbf {p}\}\) is needed. An improved self-verified method for solving parametric interval linear systems, [16], verifies by floating-point interval computations the parametric generalization of (3.1), namely \(\rho (I - \mathrm{rad}(\mathbf {C})) < 1\), where \(\mathbf {C}=\square \{RA(p) \mid p\in \mathbf {p}\}\) for an arbitrary matrix R with the dimension of A(p).
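For a concrete feel, the condition (3.3) can be evaluated directly from \(\mathrm{rad}(\mathbf {B}) = \sum _{k=1}^K \hat{p}_k\,|A^{-1}(\check{p})A_k|\) (a toy sketch with our data, not from the paper):

```python
import numpy as np

# Check of condition (3.3): rad(B) = sum_k rad(p_k) * |A(p_mid)^{-1} A_k|;
# rho(rad(B)) < 1 implies regularity of the parametric interval matrix.
A0 = np.array([[4.0, 1.0], [1.0, 4.0]])
Aks = [np.array([[0.0, 1.0], [1.0, 0.0]])]
p_mid, p_rad = [0.0], [0.5]

A_mid = A0 + sum(m * Ak for m, Ak in zip(p_mid, Aks))
Ainv = np.linalg.inv(A_mid)
radB = sum(r * np.abs(Ainv @ Ak) for r, Ak in zip(p_rad, Aks))
rho = max(abs(np.linalg.eigvals(radB)))
print(rho, rho < 1.0)  # ~0.167 True, so {A(p), p} is regular on this box
```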

Definition 3.2

The row dependencies in a parametric matrix dominate over the column ones if \(\rho (\mathrm{rad}(\mathbf {B}')) < \rho (\mathrm{rad}(\mathbf {B}))\).

In [19, Theorem 7] a new sufficient condition for regularity of parametric interval matrices \(\{A(p), \mathbf {p}\}\) is proven and it is demonstrated by numerical examples that this condition is more powerful than the conditions based on Definition 3.1. This motivates us, in what follows, to expand the existing definition of single-sided strong regularity of parametric interval matrices, to prove that the expanded definition is more general than Definition 3.1 and to present some of its equivalent or sufficient conditions.

3.2 Generalized sufficient conditions

First, we mention that, with \(\check{A}^{-1}:=A^{-1}(\check{p})\), it holds \(A(p)\check{A}^{-1} = \left( (\check{A}^{\top })^{-1}A^{\top } (p)\right) ^{\top }\). Since regularity of \(A\in \mathbb {R}^{n\times n}\) is equivalent to regularity of \(A^{\top }\in \mathbb {R}^{n\times n}\), the matrix \(\mathbf {B}'\) in Definition 3.1 can be equivalently replaced by the matrix

$$\begin{aligned} \mathbf {B}':=\square \left\{ \left( A^{-1}(\check{p})\right) ^{\top } A^{\top } (p) \mid p\in \mathbf {p}\right\} . \end{aligned}$$
(3.4)

Definition 3.3

Let \(\{A(p), \mathbf {p}\}\) be a square parametric interval matrix with equivalent optimal rank one representations \(A(p)-A_0 = LD_{g(p)}R\) and \((A(p)-A_0)^{\top } = L'D_{g'(p)}R'\), defined in (2.5) and obtained in the same manner, e.g., via a basis of the column space of \(A_k\), respectively of \(A_k^{\top }\), for each \(1\le k\le K\). For any \(p_0\in \mathbf {p}\), such that \(A(p_0)\) is nonsingular, define two new parametric interval matrices \(\{G(p_0,p), \mathbf {p}\}\), \(\{G'(p_0,p), \mathbf {p}\}\) by

$$\begin{aligned} G(p_0,p):= & {} I_{s\times s} - RA^{-1}(p_0)LD_{g(p_0-p)}, \quad p\in \mathbf {p}, \end{aligned}$$
(3.5)
$$\begin{aligned} G'(p_0,p):= & {} I_{s\times s} - R'(A^{\top } (p_0))^{-1} L'D_{g'(p_0-p)}, \quad p\in \mathbf {p}, \end{aligned}$$
(3.6)

and the corresponding nonparametric interval matrices

$$\begin{aligned} \mathbf {G}(p_0):= & {} \square \{G(p_0,p) \mid p\in \mathbf {p}\} = I-\left( RA^{-1}(p_0)L\right) D_{g(p_0-\mathbf {p})}, \end{aligned}$$
(3.7)
$$\begin{aligned} \mathbf {G}'(p_0):= & {} \square \{G'(p_0,p) \mid p\in \mathbf {p}\} = I-\left( R'(A^{-1}(p_0))^{\top } L'\right) D_{g'(p_0-\mathbf {p})}. \end{aligned}$$
(3.8)

Definition 3.4

A square parametric interval matrix \(\{A(p), \mathbf {p}\}\) is rank one strongly regular if \(\check{A}=A(\check{p})\) is nonsingular and one of the interval matrices \(\mathbf {G}(\check{p})\), \(\mathbf {G}'(\check{p})\) of Definition 3.3 is regular.

Strong regularity by Definition 3.4 is called rank one strong regularity because this definition is based on an equivalent optimal rank one LDR representation of A(p). This definition accounts for both column- and row-dependencies of the parameters whose coefficient matrices have rank one, in contrast to the single-sided strong regularity of Definition 3.1.

For the matrices \(\mathbf {G}(\check{p})\), \(\mathbf {G}'(\check{p})\) we have \(\hbox {mid}(\mathbf {G}(\check{p}))=\mathrm{mid}(\mathbf {G}'(\check{p})) = I\) and

$$\begin{aligned} \mathrm{rad}(\mathbf {G}(\check{p}))= & {} |R\check{A}^{-1}L|D_{g(\hat{p})}= |I-\mathbf {G}(\check{p})|, \\ \mathrm{rad}(\mathbf {G}'(\check{p}))= & {} |R'(\check{A}^{\top })^{-1}L'|D_{g'(\hat{p})}=|I-\mathbf {G}'(\check{p})|. \end{aligned}$$

Then, the following theorem holds true.

Theorem 3.1

Let \(\{A(p)\in \mathbb {R}^{n\times n}, \mathbf {p}\in \mathbb {IR}^K\}\) be a parametric interval matrix involving linear dependencies and A(p), \(A^{\top } (p)\) have equivalent optimal rank one representations specified in Definition 3.3. For nonsingular \(\check{A}=A(\check{p})\) and some \(\mathbf {H}\in \{\mathbf {G}(\check{p}), \mathbf {G}'(\check{p})\}\), the following conditions are equivalent

  1. (i)

    \(\{A(p), \mathbf {p}\}\) is rank one strongly regular,

  2. (ii)

    \(\mathbf {H}\) is regular,

  3. (iii)

    \(\rho (\mathrm{rad}(\mathbf {H})) < 1\),

  4. (iv)

    \(||\mathrm{rad}(\mathbf {H})||_{u} < 1\) for some \(u>0\),

  5. (v)

    \(I- \mathrm{rad}(\mathbf {H})\) is an M-matrix,

  6. (vi)

    \(\mathbf {H}\) is an H-matrix,

  7. (vii)

    any of the forty necessary and sufficient conditions for regularity of \(\mathbf {H}\) listed in [20, Theorem 4.1] holds.

Proof

The proof of Theorem 3.1 proceeds in the same way as the proof of Proposition 4.1.1 in [9]; see also the proof of Theorem 1 in [15]. \(\square \)

Proposition 3.1

If \(\{A(p), \mathbf {p}\}\) is rank one strongly regular, then it is regular.

Proof

If \(\mathbf {G}(\check{p})\) is regular, then for each \(p\in \mathbf {p}\) the matrix \(I_{s\times s} - RA^{-1}(\check{p})(LD_{g(\check{p}-p)})\) is regular. Then, the second Woodbury formula implies regularity of \(A(\check{p}) - LD_{g(\check{p}-p)}R = A(p)\) for each \(p\in \mathbf {p}\). The latter means regularity of \(\{A(p), \mathbf {p}\}\) by Definition 2.2. Similarly, regularity of \(\mathbf {G}'(\check{p})\) implies regularity of \(\{A(p), \mathbf {p}\}\), which finishes the proof. \(\square \)

For single-sided strong regularity we have a theorem similar to Theorem 3.1. However, rank one strong regularity of Definition 3.4 and Theorem 3.1 are more general than single-sided strong regularity of Definition 3.1 (and its respective Theorem 3.1 with \(\mathbf {B}, \mathbf {B}'\)). The following theorem proves this statement.

Theorem 3.2

Let \(\{A(p)\in \mathbb {R}^{n\times n}, \mathbf {p}\in \mathbb {IR}^K\}\) be a parametric interval matrix involving linear dependencies. Let \(\mathbf {B}, \mathbf {B}'\) be those of Definition 3.1 and \(\mathbf {G}(\check{p}), \mathbf {G}'(\check{p})\) be those of Definition 3.4. Then,

$$\begin{aligned} \rho (\mathrm{rad}(\mathbf {G}(\check{p}))) \le \rho (\mathrm{rad}(\mathbf {B})), \end{aligned}$$
(3.9)

similarly, \(\rho (\mathrm{rad}(\mathbf {G}'(\check{p}))) \le \rho (\mathrm{rad}(\mathbf {B}'))\). For some parametric interval matrices we have \(\rho (\mathrm{rad}(\mathbf {G}(\check{p}))) < \rho (\mathrm{rad}(\mathbf {B}))\) or \(\rho (\mathrm{rad}(\mathbf {G}'(\check{p}))) < \rho (\mathrm{rad}(\mathbf {B}'))\).

Proof

We prove the relation (3.9). The other one can be proven in a similar way.

The interval matrix \(\mathbf {B}\) has the following equivalent LDR representation

$$\begin{aligned} \mathbf {B} = I - |\tilde{R}\check{A}^{-1}\tilde{L}|D_{\mathbf {v}}, \end{aligned}$$

which is obtained by \(\tilde{L}_k=A_k\), \(\tilde{R}_k=I\), \(1\le k\le K\). In \(\tilde{L}_k\) the zero columns of \(A_k\) are removed, as well as the corresponding rows of I, so that \(v_k(p_k)\in \mathbb {R}^{\tilde{t}_k}\) and \(\tilde{t}_k\) is the number of nonzero columns of \(A_k\). Then \(v=(v_1,\ldots ,v_{s_B})\) with \(s_B=\sum _{k=1}^K\tilde{t}_k\), \(v_i\in \mathbf {v}_i=v_i([-\hat{p}, \hat{p}])\), \(1\le i\le s_B\).

\(\mathbf {G}(\check{p})\) is that of Definition 3.4 and has an equivalent representation \(\mathbf {G}(\check{p}) = I - |R\check{A}^{-1}L|D_{\mathbf {u}}\), obtained via a basis of the column space of \(A_k\) for every \(1\le k\le K\), where \(u=(u_1,\ldots ,u_{s_G})\), \(u_i\in \mathbf {u}_i=u_i([-\hat{p},\hat{p}])\), \(1\le i\le s_G\) and \(s_G=\sum _{k=1}^Kt_k\), \(t_k=\hbox {rank}(A_k)\).

For every matrix \(G\in \mathbf {G}(\check{p})\) there exists a corresponding matrix \(B\in \mathbf {B}\); since, in general, the LDR representation of \(\mathbf {B}\) may not be optimal, the reverse is not always true. This is easily seen if we substitute the interval parameters \(v_i\), \(1\le i\le s_B\), and \(u_i\), \(1\le i\le s_G\), into the matrix A(p), by which we obtain

$$\begin{aligned} \{A(u),\mathbf {u}\}\subseteq \{A(v),\mathbf {v}\}. \end{aligned}$$

Assume that \(\mathbf {B}\) is regular and there exists a singular matrix \(G\in \mathbf {G}(\check{p})\). Then \(\mathbf {B}\) would contain a singular matrix, which contradicts the assumption. This means that if \(\mathbf {B}\) is regular then \(\mathbf {G}(\check{p})\) is also regular; however, if \(\mathbf {G}(\check{p})\) is regular, \(\mathbf {B}\) may not be. Then the equivalent conditions of Theorem 3.1 imply (3.9). The numerical examples presented in Sect. 4 show that for some classes of parametric matrices the inequality in (3.9) is strict and for some \(\mathbf {p}\)

$$\begin{aligned} \rho (\mathrm{rad}(\mathbf {G}(\check{p}))) < 1 \le \rho (\mathrm{rad}(\mathbf {B})). \end{aligned}$$

If A(p) involves only column dependencies and in some other special cases of parametric interval matrices we may have equality in (3.9). \(\square \)
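The gap in (3.9) is already visible for a single “true” rank one dependency. In the following toy sketch (our data, not an example from the paper) sign cancellation makes \(\rho (\mathrm{rad}(\mathbf {G}(\check{p})))\) vanish while \(\rho (\mathrm{rad}(\mathbf {B}))\) stays at 0.5:

```python
import numpy as np

# A(p) = A0 + p * u v^T, p in [-0.5, 0.5].  With the optimal rank one
# representation L = u, R = v^T we get rad(G) = p_rad * |v^T A0^{-1} u| (1x1),
# whereas rad(B) = p_rad * |A0^{-1} u v^T|.
A0 = np.array([[3.0, 1.0], [1.0, 3.0]])
u = np.array([1.0, -1.0])
v = np.array([1.0, 1.0])
p_rad = 0.5

Ainv = np.linalg.inv(A0)
rho_G = p_rad * abs(v @ Ainv @ u)   # scalar product: signs cancel, giving 0
rho_B = max(abs(np.linalg.eigvals(p_rad * np.abs(np.outer(Ainv @ u, v)))))
print(rho_G, rho_B)  # 0.0 versus 0.5: strict inequality in (3.9)
```

Taking absolute values inside the product \(v^{\top }\check{A}^{-1}u\) only at the very end preserves the cancellation that the entrywise magnitude \(|\check{A}^{-1}uv^{\top }|\) destroys.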

The following theorem expands the scope of applicability of the condition (iii) of Theorem 3.1 to arbitrary \(p_0\in \mathbf {p}\). This is important for computational verification of the regularity of \(\{A(p), \mathbf {p}\}\). The generalization proven in Theorem 3.2 holds true also for the condition in Theorem 3.3. Theorem 3.3 slightly improves Theorem 7 in [19] in the last part of its proof.

Theorem 3.3

Let \(\{A(p), \mathbf {p}\}\) be a square parametric interval matrix with equivalent optimal rank one representations specified in Definition 3.3. If for any nonsingular \(A(p_0)\), \(p_0\in \mathbf {p}\), and for some \(\mathbf {M}\in \{I-\mathbf {G}(p_0), I-\mathbf {G}'(p_0)\}\),

$$\begin{aligned} \rho \left( |\mathbf {M}|\right) < 1, \end{aligned}$$
(3.10)

then \(\{A(p), \mathbf {p}\}\) is regular.

Proof

By [9, Corollary 3.2.3], \(\varrho \left( \left| \mathbf {M}\right| \right) <1\) is equivalent to \(|\mathbf {M}|u < u\) for some vector \(u>0\). The latter is equivalent to \(||\mathbf {M}||_u < 1\) for some vector \(u>0\). Since the last relation is equivalent to \(|| I - \left( I -\mathbf {M}\right) ||_u < 1\) for some \(u>0\), then [9, Proposition 3.7.2] implies that \(\mathbf {H}=I-\mathbf {M}\) is an H-matrix and therefore a regular interval matrix. From now on the proof continues separately and similarly for the matrices \(\mathbf {G}(p_0)\) and \(\mathbf {G}'(p_0)\). We present it for the matrix \(\mathbf {G}'(p_0)\). Since \(\mathbf {G}'(p_0)\) is regular, for each \(p\in \mathbf {p}\)

$$\begin{aligned} I - R'(A^{\top }(p_0))^{-1}L'D_{g'(p_0-p)} \quad \text { is regular.} \end{aligned}$$

Then, the second Woodbury formula, where \(S_{s\times s}=I_{s\times s}\), implies that for each \(p\in \mathbf {p}\)

$$\begin{aligned} A^{\top }(p_0) - L'D_{g'(p_0-p)}R' = A^{\top }(p) \quad \text { is regular.} \end{aligned}$$

Since regularity of \(A^{\top }(p)\) is equivalent to regularity of A(p), the proof is completed. \(\square \)

Any other sufficient (and necessary) conditions applicable to parametric interval matrices can be applied to the parametric interval matrices \(\{G(p_0,p), \mathbf {p}\}\), \(\{G'(p_0,p), \mathbf {p}\}\), specified in Definition 3.3, in order to prove regularity of \(\{A(p), \mathbf {p}\}\).

In view of its wide applicability, the sufficient condition (3.10) of Theorem 3.3,

$$\begin{aligned} \rho \left( |(RA^{-1}(p_0)L)D_{g(p_0-\mathbf {p})}|\right)< 1 \text { or} \ \rho \left( |(R'(A^{\top }(p_0))^{-1}L')D_{g'(p_0-\mathbf {p})}|\right) <1, \end{aligned}$$

with \(p_0=\check{p}\) or arbitrary \(p_0\in \mathbf {p}\), will be called sufficient condition for regularity of a parametric interval matrix based on its equivalent optimal rank one LDR representation.

4 Solving parametric linear systems

Consider a parametric interval linear algebraic system in the general form

$$\begin{aligned} A(p) x \; = \; b(p,q), \quad p\in \mathbf {p}, \; q\in \mathbf {q}, \end{aligned}$$
(4.1)

where the parameter dependencies are linear and described by

$$\begin{aligned} A(p) := A_0 + \sum _{k = 1}^K p_kA_k, \qquad b(p,q) := b_0 + \sum _{k = 1}^K p_kb_k + \sum _{k=1}^Q q_kb_{K+k}, \end{aligned}$$
(4.2)

with \(A_k\in \mathbb {R}^{n\times n}\), \(0\le k\le K\), \(b_l\in \mathbb {R}^n\), \(0\le l\le K+Q\). The parametric united solution set of the system (4.1) is defined by

$$\begin{aligned} \varSigma _{uni}\left( A(p),b(p,q), \mathbf {p},\mathbf {q}\right) := \left\{ x\in \mathbb {R}^n \mid (\exists p\in \mathbf {p}, \exists q\in \mathbf {q})(A(p)x = b(p,q))\right\} . \end{aligned}$$
(4.3)

In what follows we present a methodology for enclosing \(\varSigma _{uni}\left( A(p),b(p,q), \mathbf {p},\mathbf {q}\right) \), which is based on the equivalent optimal rank one representations specified in Definition 3.3. Assume that the system (4.1) has an equivalent representation

$$\begin{aligned} \left( A_0+LD_{g(p)}R\right) x = b_0+LD_{g(p)}t + Fq, \quad p\in \mathbf {p}, q\in \mathbf {q} \end{aligned}$$
(4.4)

with suitable matrices L, R and a parameter vector g(p), specified in Definition 3.3, and a vector t such that \(\sum _{k = 1}^K p_kb_k = LD_{g(p)}t\). In the representation (4.4) and in the numerical methods below we will not distinguish between an LDR originating from A(p) and one from \(A^{\top } (p)\); this difference is essential for the applications, see Example 4.3.

Theorem 4.1

Let the system (4.1) have an equivalent optimal representation (4.4), based on Definition 3.3, and for any \(p_0\in \mathbf {p}\) the matrix \(C_0:=A(p_0)\) be nonsingular. Denote \(x_0=C_0^{-1}b(p_0,q_0)\), \(q_0\in \mathbf {q}\). If

$$\begin{aligned} \varrho \left( \left| (RC_0^{-1}L)D_{g(p_0-\mathbf {p})}\right| \right) <1, \end{aligned}$$
(4.5)

then \(\varSigma _{uni}\left( A(p),b(p,q), \mathbf {p},\mathbf {q}\right) \) and the united solution set \(\varSigma _\mathrm{uni}\)((4.6)) of the system

$$\begin{aligned} G(p_0,p)y = RC_0^{-1}b(p,q), \quad p\in \mathbf {p}, \; q\in \mathbf {q} \end{aligned}$$
(4.6)

are bounded, \(\mathbf {y}\supseteq \varSigma _\mathrm{uni}\)((4.6)) is computable, and every \(x\in \varSigma _{uni}\left( A(p), b(p,q), \mathbf {p},\mathbf {q}\right) \) satisfies

$$\begin{aligned} x \in x_0 - (C_0^{-1}F)(q_0-\mathbf {q}) + (C_0^{-1}L)\left( D_{g(p_0-\mathbf {p})}(\mathbf {y} - t)\right) . \end{aligned}$$
(4.7)

Proof

If (4.5) holds, by Theorem 3.3, the parametric interval matrices \(\{G(p_0,p), \mathbf {p}\}\) and \(\{A(p), \mathbf {p}\}\) are regular. Hence, the united solution set of (4.6) and respectively \(\varSigma _{uni}\left( A(p),b(p,q), \mathbf {p},\mathbf {q}\right) \) are bounded. The parametric matrix \(G(p_0,p)\) involves only column dependencies with respect to each parameter \(g_i\in g_i(\mathbf {p})\) and, by Theorem 3.3 (see the proof), the nonparametric matrix \(\mathbf {G}(p_0)\) is an H-matrix. This implies single-sided strong regularity of \(G(p_0,p)\) by Definition 3.1. Therefore, the parametric linear system (4.6) is solvable by any of the available parametric methods based on single-sided strong regularity (Definition 3.1, its equivalent condition of the corresponding Theorem 3.1 or the sufficient condition of the corresponding Theorem 3.3). Hence \(\mathbf {y}\supseteq \varSigma _\mathrm{uni}\)((4.6)) is computable. The solution x of the parametric interval linear system

$$\begin{aligned} \left( I - C_0^{-1}LD_{g(p_0-p)}R\right) x= & {} C_0^{-1}b(p,q), \quad p\in \mathbf {p}, \; q\in \mathbf {q}, \end{aligned}$$
(4.8)

which is equivalent to (4.1) (respectively to (4.4)), and the solution y of the system (4.6) are related via \(y=Rx\). Hence, each solution \(\tilde{x}\in \varSigma _{uni}\left( A(p),b(p,q), \mathbf {p},\mathbf {q}\right) \) satisfies

$$\begin{aligned} \tilde{x} = x_0 - (C_0^{-1}F)(q_0-\tilde{q}) + (C_0^{-1}L)D_{g(p_0-\tilde{p})}(y - t) \end{aligned}$$

for some \(\tilde{p}\in \mathbf {p}\), \(\tilde{q}\in \mathbf {q}\) and \(y\in \varSigma _\mathrm{uni}\)((4.6)). Then, the inclusion isotonicity of interval operations gives (4.7). \(\square \)

Theorem 4.1 specifies a general framework for enclosing \(\varSigma _{uni}\left( A(p),b(p,q), \mathbf {p},\mathbf {q}\right) \) based on an equivalent optimal rank one representation (4.4) of the initial system (4.1). The particular implementation of this general framework (and its properties) depends on the parametric method chosen for computing the enclosure \(\mathbf {y}\supseteq \varSigma _{uni}\)((4.6)). Since \(p_0=\check{p}\) minimizes the magnitude \(|\mathbf {y}|\), cf. [9, Theorem 4.1.10], Theorem 4.1 is applied with \(p_0=\check{p}, q_0=\check{q}\). Below we discuss some implementation schemes and their properties.

The parametric methods applicable to system (4.6) can be classified into two general classes:

  1. (s1)

    Methods that are not self-verified. They require nonsingularity of \(A(\check{p})\) and enclose the solution set when the error of floating point computations is small compared to the data uncertainties and the overestimation of the method. Examples of such methods are Skalna’s direct method [22], called the parametric Bauer–Skeel method in [2], and the methods of Kolev [4, 5]. An enclosure \(\mathbf {y}\), equivalent to that obtained by the direct method, is \(\mathbf {y}= y(\check{p},\check{q}) + \square \varSigma (\mathbf {B},\mathbf {b})\), where \(\mathbf {B}=I-(R\check{A}^{-1}L)D_{g(\check{p}-\mathbf {p})}\), \(\mathbf {b}=R\check{A}^{-1}b_0 + (R\check{A}^{-1}F)\mathbf {q} + (R\check{A}^{-1}LD_t)D_{g(\mathbf {p})}\) and \(\square \varSigma (\mathbf {B},\mathbf {b})\) is obtained by the method of Hansen–Bliek–Rohn–Ning–Kearfott [12].

  2. (s2)

    Self-verified parametric methods, which verify the sufficient condition (4.5) and provide a guaranteed solution enclosure in floating point arithmetic. Such parametric methods, applicable with Theorem 4.1, are based on [16, Theorem 2.3] or on [21, Theorem 4.8]. A rigorous enclosure \(\mathbf {y}\) in floating point arithmetic can also be obtained by the algorithm of Neumaier [10] applied to the nonparametric interval system \(\left( I-(R\mathbf {C}L)D_{g(\check{p}-\mathbf {p})}\right) y = R\mathbf {C}b_0 + (R\mathbf {C}F)\mathbf {q} + (R\mathbf {C}LD_t)D_{g(\mathbf {p})}\), where \(\mathbf {C}\) is a guaranteed enclosure of \(\check{A}^{-1}\).

In floating point arithmetic, although the enclosure \(\mathbf {y}\supseteq \varSigma _{uni}\)((4.6)) can be guaranteed, for ill-conditioned problems the relation (4.7) may not hold for every \(x\in \varSigma _{uni}\left( A(p), b(p,q), \mathbf {p},\mathbf {q}\right) \). A guaranteed enclosure of \(\varSigma _{uni}\left( A(p),b(p,q), \mathbf {p},\mathbf {q}\right) \) can be obtained if \(x_0\) and/or \(C_0^{-1}\) in (4.7) are replaced by guaranteed enclosures \(\mathbf {x}_0\ni x_0\) and \(\mathbf {C}_0\ni C_0^{-1}\), respectively.
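Once \(\mathbf{y}\) is available, the final step (4.7) is plain interval arithmetic. The following is a minimal midpoint-radius sketch in Python/NumPy, assuming \(p_0=\check p\), \(q_0=\check q\) and \(g(p)=p\), so that the intervals \(q_0-\mathbf q\) and \(p_0-\mathbf p\) are symmetric around zero; all names are illustrative, not taken from any library, and the floating point operations are not outward-rounded:

```python
import numpy as np

def mat_times_iv(M, lo, hi):
    """Product of a point matrix M with an interval vector [lo, hi],
    in midpoint-radius form: mid' = M @ mid, rad' = |M| @ rad."""
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    m = M @ mid
    r = np.abs(M) @ rad
    return m - r, m + r

def enclosure_4_7(x0, C0inv, F, L, rad_p, rad_q, y_lo, y_hi, t):
    """Evaluate x0 - (C0^{-1}F)(q0-q) + (C0^{-1}L) D_{p0-p} (y - t)
    for symmetric intervals q0-q = [-rad_q, rad_q], p0-p = [-rad_p, rad_p]."""
    # (C0^{-1}F)(q0 - q): point matrix times a symmetric interval vector
    rF = np.abs(C0inv @ F) @ rad_q                  # symmetric radius
    # D_{p0-p}(y - t): [-a, a] * [u, v] = [-a*m, a*m], m = max(|u|, |v|)
    m = np.maximum(np.abs(y_lo - t), np.abs(y_hi - t))
    lo2, hi2 = mat_times_iv(C0inv @ L, -rad_p * m, rad_p * m)
    return x0 - rF + lo2, x0 + rF + hi2
```

For a rigorous result the operations above would have to be performed with directed rounding, and \(x_0\), \(C_0^{-1}\) replaced by verified enclosures.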

With the equivalent optimal LDR representation specified in Definition 3.3, the generalized method of Neumaier and Pownuk [18, Theorem 4] and the new enclosure methodology, defined in Theorem 4.1, have the same expanded scope of applicability compared to most existing numerical interval methods for solving parametric interval linear systems. The advantage of the two methodologies is pronounced for some parametric systems involving “true” rank one parameter dependencies, see Theorem 3.2 and the numerical examples below.

Example 4.1

Consider the parametric interval linear system

$$\begin{aligned} \begin{pmatrix}1 + p_1 - p_2, &{}\quad -p_1 + p_2, &{}\quad 1 + p_1\\ 2 + p_1 + p_2, &{}\quad -1 - p_1 - p_2 + p_3, &{}\quad -1 + p_1 - 2 p_3\\ 1 + p_1, &{}\quad -3 -p_1 - 2p_3, &{}\quad 6 + p_1 + 4p_3\end{pmatrix}x= \begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix},\quad \begin{matrix} p_1\in [-\delta , \delta ],\\ p_2\in [-\delta , \delta ],\\ p_3\in [-\delta , \delta ]. \end{matrix} \end{aligned}$$

An equivalent optimal rank one representation of the parametric matrix is defined by \(D_p\) and

$$\begin{aligned} L=\begin{pmatrix} 1&{}1&{}0\\ 1&{}-1&{}1\\ 1&{}0&{}-2 \end{pmatrix}, \qquad R=\begin{pmatrix} 1 &{} -1 &{} 1\\ -1 &{} 1 &{} 0\\ 0 &{} 1 &{} -2 \end{pmatrix}. \end{aligned}$$

For \(\delta = 1/3\) and \(\mathbf {B}\), \(\mathbf {B}'\) specified in Definition 3.1, we have the relations

$$\begin{aligned} \rho (\mathrm{rad}(\mathbf {G}(\check{p})))\approx 0.41< 0.667 \approx \rho (\mathrm{rad}(\mathbf {B}')) < 1.25\approx \rho (\mathrm{rad}(\mathbf {B})). \end{aligned}$$

The available methods for solving parametric interval linear systems which are based on \(\rho (\mathrm{rad}(\mathbf {B}))<1\) are not able to provide a solution enclosure for the system with parameters defined by \(\delta = 1/3\). Note that for this \(\delta \) we have \(\rho (\mathrm{rad}(\mathbf {B}))/\rho (\mathrm{rad}(\mathbf {G}(\check{p}))) > 3\). The application of Theorem 4.1 gives the solution enclosure

$$\begin{aligned} \left( [0.4076, 1.638], \; [-0.1032, 0.8175],\; [-0.0268, 0.4554]\right) ^{\top }. \end{aligned}$$
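The regularity quantities for Example 4.1 are easy to reproduce numerically. A short check, assuming \(p_0=\check p=0\) and \(g(p)=p\), so that \(\mathrm{rad}(\mathbf G(\check p)) = |R A_0^{-1} L|\,\delta\):

```python
import numpy as np

# Midpoint data of Example 4.1: A0 = A(0) and the optimal rank one factors L, R
A0 = np.array([[1., 0., 1.], [2., -1., -1.], [1., -3., 6.]])
L = np.array([[1., 1., 0.], [1., -1., 1.], [1., 0., -2.]])
R = np.array([[1., -1., 1.], [-1., 1., 0.], [0., 1., -2.]])

def rho(M):
    """Spectral radius of M."""
    return np.max(np.abs(np.linalg.eigvals(M)))

# Sanity check: A0 + L diag(p) R reproduces the parametric matrix A(p)
p = np.array([0.2, -0.1, 0.3])
Aexp = np.array([[1 + p[0] - p[1], -p[0] + p[1], 1 + p[0]],
                 [2 + p[0] + p[1], -1 - p[0] - p[1] + p[2], -1 + p[0] - 2*p[2]],
                 [1 + p[0], -3 - p[0] - 2*p[2], 6 + p[0] + 4*p[2]]])
assert np.allclose(A0 + L @ np.diag(p) @ R, Aexp)

# Condition (4.5) with p0 = check(p) = 0: rho(|R A0^{-1} L| * delta) < 1
M = np.abs(R @ np.linalg.solve(A0, L))
for delta in (1/3, 1/4):
    print(delta, rho(M) * delta)   # approx. 0.41 and 0.30, both < 1
```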

For \(\delta = 1/4\) the following inequalities hold

$$\begin{aligned} \rho (\mathrm{rad}(\mathbf {B}))\approx 0.94> \rho (\mathrm{rad}(\mathbf {B}'))\approx 0.5 > \rho (\mathrm{rad}(\mathbf {G}(\check{p})))\approx 0.304. \end{aligned}$$

By some of the parametric methods listed in (s1), (s2) above, all of them based on \(\rho (\mathrm{rad}(\mathbf {B}))<1\), and with \(p_0=\check{p}\), we obtain

$$\begin{aligned} \mathbf {x}' = \left( [-2.1481, 3.7196], \; [-2.5230, 3.2373],\; {[}-1.7408, 2.1694]\right) ^{\top }. \end{aligned}$$

The same parametric methods yield an enclosure \(\mathbf {y}\supseteq \varSigma _\mathrm{uni}\)((4.6))

$$\begin{aligned} \mathbf {y} = \left( [0.4597, 0.82595], \; [-0.6706, -0.1867],\; {[}-0.1371, -0.00582]\right) ^{\top }. \end{aligned}$$

Then, (4.7) of Theorem 4.1 gives

$$\begin{aligned} \mathbf {x}'' = \left( [0.5393, 1.0321], \; [0.0654, 0.6489],\; [0.062, 0.3666]\right) ^{\top }. \end{aligned}$$

The percentage by which \(\mathbf {x}'\) overestimates \(\mathbf {x}''\) is

$$\begin{aligned} 100\left( 1-\frac{\hat{\mathbf {x}}''}{\hat{\mathbf {x}}'}\right) \approx \left( 91.6, 89.9, 92.2\right) ^{\top } \%. \end{aligned}$$
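Interpreting \(\hat{\mathbf x}\) as the componentwise interval width, these percentages can be checked directly from the two enclosures above:

```python
import numpy as np

# Components of x' and x'' from Example 4.1 (delta = 1/4) as [lo, hi] rows
xp = np.array([[-2.1481, 3.7196], [-2.5230, 3.2373], [-1.7408, 2.1694]])   # x'
xpp = np.array([[0.5393, 1.0321], [0.0654, 0.6489], [0.0620, 0.3666]])     # x''

width = lambda iv: iv[:, 1] - iv[:, 0]
overest = 100 * (1 - width(xpp) / width(xp))   # percentage overestimation
print(np.round(overest, 1))                    # [91.6 89.9 92.2]
```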

The method of Neumaier and Pownuk [11], its generalization [18], and Theorem 4.1 are the only interval methods providing solution enclosure for parametric interval linear systems where the row parameter dependencies dominate over the column ones, cf. Examples 4.1 and 4.3.

The two methodologies (generalized method of Neumaier and Pownuk [18, Theorem 4] and Theorem 4.1) differ in the following aspects:

  1. d1.

    With w the vector with all entries equal to one and \(C=(A_0+LD_{p_0}R)^{-1}\),

    $$\begin{aligned} w':=&w-|\text { Diag}(p_0-\mathbf {p})||RCL|w, \\ w'':=&|\text { Diag}(p_0-\mathbf {p})||RCb_0+(RCF) \mathbf {q}+RCLD_0t-t|, \end{aligned}$$

    and based on an initial enclosure of

    $$\begin{aligned} h\in \mathbf {h}:=[-\alpha w, \alpha w], \quad \alpha = \max _{i}\frac{w''_i}{w_i'}, \end{aligned}$$
    (4.9)

    the generalized method of Neumaier and Pownuk [18, Theorem 4] iterates

    $$\begin{aligned} \mathbf {y}= & {} \left( RCb_0+(RCF)\mathbf {q}+(RCL)(D_0t+\mathbf {h})\right) \cap \mathbf {y}, \\ \mathbf {h}= & {} \left( \text { diag}(p_0-\mathbf {p})(\mathbf {y}-t)\right) \cap \mathbf {h}. \end{aligned}$$

    In contrast, Theorem 4.1 prescribes that any parametric method based on the single-sided strong regularity of Definition 3.1 provides a (sharper) enclosure \(\mathbf {y}\). The numerical examples below show that the methodology of Theorem 4.1 accounts better for the parameter dependencies between the matrix and the right-hand side vector than the generalized method of Neumaier and Pownuk.

  2. d2.

    The generalized method of Neumaier and Pownuk possesses some specific features that require special treatment in its implementation. These are (cf. [19]):

    1. d2.1

      In case of dependencies between the matrix and the right-hand side vector, some modified values of L, R and t may provide a better solution enclosure.

    2. d2.2

      In [11, Theorem 4.1], if \(w'\not >0\), the method requires computing the largest eigenvalue \(\varrho \) of \(D_{g(\hat{p})}|RCL|\) and, if \(\varrho <1\), taking w as the eigenvector associated with \(\varrho \).

    3. d2.3

      If the matrix R involves zero rows, these should be excluded from the computation of \(\mathbf {h}\) in (4.9).

The methodology of Theorem 4.1 has the flexibility to use any of the parametric methods based on single-sided strong regularity (e.g., those in (s1), (s2)), and does not depend on the above features. The following examples illustrate this.
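The iteration of d1 can be sketched on a toy problem. The fragment below is a non-rigorous floating point illustration with hypothetical names: it solves \((1+p)x_1=1,\ x_2=1\) for \(p\in[-0.3,0.3]\), written as \(A_0+LD_pR\) with \(A_0=I\), \(F=0\), \(t=0\) and \(p_0=0\), so that all intervals involved are symmetric:

```python
import numpy as np

# Toy data: A(p) = I + p * e1 e1^T = A0 + L D_p R, right-hand side b0
A0 = np.eye(2)
L = np.array([[1.0], [0.0]])
R = np.array([[1.0, 0.0]])
b0 = np.array([1.0, 1.0])
rad_p = np.array([0.3])            # p0 - p in [-0.3, 0.3]
t = np.zeros(1)

C = np.linalg.inv(A0)              # C = (A0 + L D_{p0} R)^{-1}
RCL = R @ C @ L
RCb0 = R @ C @ b0                  # constant part of the right-hand side

# Initial enclosure (4.9): w = ones, w' = w - |Diag(p0-p)||RCL|w, alpha = max w''/w'
w = np.ones(1)
w1 = w - rad_p * (np.abs(RCL) @ w)
w2 = rad_p * np.abs(RCb0 - t)      # F = 0, t = 0 in this toy case
alpha = np.max(w2 / w1)
h_lo, h_hi = -alpha * w, alpha * w

# Iterate y = (RCb0 + RCL*h) ∩ y, h = ((p0-p)(y-t)) ∩ h; h stays symmetric here
y_lo, y_hi = RCb0 + np.abs(RCL) @ h_lo, RCb0 + np.abs(RCL) @ h_hi
for _ in range(20):
    m = np.maximum(np.abs(y_lo - t), np.abs(y_hi - t))
    h_lo = np.maximum(h_lo, -rad_p * m)
    h_hi = np.minimum(h_hi, rad_p * m)
    y_lo = np.maximum(y_lo, RCb0 + np.abs(RCL) @ h_lo)
    y_hi = np.minimum(y_hi, RCb0 + np.abs(RCL) @ h_hi)

# y = Rx encloses x1 = 1/(1+p), p in [-0.3, 0.3], i.e. [1/1.3, 1/0.7]
print(y_lo, y_hi)
```

This mirrors only the structure of the iteration; the actual method [18, Theorem 4] operates with directed rounding and the full LDR data, including F, \(\mathbf q\) and a nonzero t.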

Example 4.2

([19, Example 6]) Consider the parametric interval linear system

$$\begin{aligned} \begin{pmatrix} \frac{1}{2}-p_2 &{}\quad p_2 &{}\quad p_1\\ p_1 &{}\quad -p_2 &{}\quad p_3\\ p_1 &{}\quad p_3 &{}\quad 1 \end{pmatrix} x=\begin{pmatrix} p_2\\ 2p_2\\ 3p_2 \end{pmatrix}, \quad \begin{matrix} p_1\in [\frac{3}{4}, \frac{5}{4}],\\ p_2\in [\frac{1}{2}, \frac{3}{2}],\\ p_3\in [\frac{1}{2}, \frac{3}{2}]. \end{matrix} \end{aligned}$$

For the parametric matrix of this system and \(\mathbf {B}\), \(\mathbf {B}'\) specified in Definition 3.1 we have the relation

$$\begin{aligned} \rho (\mathrm{rad}(\mathbf {B}))\approx 0.969 < \rho (\mathrm{rad}(\mathbf {B}')) \approx 1.12, \end{aligned}$$
(4.10)

which means dominating column dependencies in the parametric matrix. We consider an equivalent optimal LDR representation conforming to Definition 3.3 and to the system, which is specified by \(\text { Diag}((p_1,p_1,p_2,p_2,p_2,p_3,p_3))\), \(t=(0,0,-3,-2,3,0,0)^{\top }\) and

$$\begin{aligned} L=\begin{pmatrix} 0&{}\quad 1&{}\quad -1&{}\quad 1&{}\quad 0&{}\quad 0&{}\quad 0\\ 1&{}\quad 0&{}\quad 0&{}\quad -1&{}\quad 0&{}\quad 0&{}\quad 1\\ 1&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 1&{}\quad 1&{}\quad 0 \end{pmatrix}, \quad R=\begin{pmatrix} 1&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ 1&{}\quad 0&{}\quad 0\\ 0&{}\quad 1&{}\quad 0\\ 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 1&{}\quad 0\\ 0&{}\quad 0&{}\quad 1 \end{pmatrix}. \end{aligned}$$

With this representation we have \(\rho (\mathrm{rad}(\mathbf {G}(\check{p})))=\rho (\mathrm{rad}(\mathbf {B}))\). The generalized method of Neumaier and Pownuk, applied to this LDR representation, possesses all the specific features d2.1–d2.3, and the obtained solution enclosure is

$$\begin{aligned} \left( [-81.334, 84.001], [-38.667, 39.667], [-73.001, 75.334]\right) ^{\top }. \end{aligned}$$

An application of Theorem 4.1, with the same LDR representation, \(p_0=\check{p}\), \(C_0\approx A^{-1}(\check{p})\), and an enclosure \(\mathbf {y}\supseteq \varSigma _\mathrm{uni}\)((4.6))

$$\begin{aligned}&\mathbf {y} = \big ( [-30, 32.68], \; [-28.23, 30.56],\; [-14.78, 15.78], \; [-32.67, 30], \\&\qquad [-10^{-68}, 10^{-68}], \; [-28.23, 30.56], \; [-14.78, 15.78]\big )^{\top }, \end{aligned}$$

yields the solution enclosure

$$\begin{aligned} \left( [-33.278, 35.945], [-16.278, 17.278], [-29.223, 31.556]\right) ^{\top }. \end{aligned}$$

Thus, the application of Theorem 4.1 provides a better solution enclosure without any specific implementation requirements. The sharper solution enclosure results because the parametric methods (s1), (s2) account better for the dependencies between the matrix and the right-hand side vector.

Example 4.3

([19, Example 8]) Denote by A(p), b(p), \(p\in \mathbf {p}\) the parametric matrix, right-hand side vector and the parameter intervals, respectively, for the system in Example 4.2. Consider the parametric interval linear system

$$\begin{aligned} A^{\top }(p) x = b(p), \quad p\in \mathbf {p}. \end{aligned}$$

Due to the relation (4.10), the parametric matrix \(A^{\top }(p)\) involves dominating row dependencies. Let \(A(p)-A_0=LD_{g(p)}R\) as in Example 4.2 and \(\left( A(p)-A_0\right) ^{\top } =L'D_{g'(p)}R'\) be an equivalent optimal rank one representation, which determine \(\mathbf {G}(p_0)\) and \(\mathbf {G}'(p_0)\), respectively. Since \(\rho (\mathrm{rad}(\mathbf {G}(\check{p})))\approx 0.969 < \rho (\mathrm{rad}(\mathbf {G}'(\check{p})))\approx 1.006\), we have to solve the parametric system via the equivalent representation of \(A(p)-A_0\). Due to \(\left( A(p)-A_0\right) ^{\top } =R^{\top } D_{g(p)}L^{\top }\), we consider the parametric interval system of this example in the following equivalent representation

$$\begin{aligned} \left( A^{\top }_0+L''D_{g(p)}R''\right) x = L''D_{g(p)}\tilde{t}, \qquad p\in \mathbf {p}, \end{aligned}$$

where \(L'' = R^{\top }\), \(R'' = L^{\top }\) and \(L''_{\bullet 5}= L_{\bullet 5}\), \(R''_{5\bullet }= R_{5\bullet }\). The last two substitutions aim at retaining the dependency of b(p) on \(p_2\). The choice of \(L'', R''\) implies a new \(\tilde{t}=(0, 0, 1, 2, 3, 0, 0)^{\top }\).

As mentioned in [19], the generalized method of Neumaier and Pownuk, applied to an equivalent optimal LDR representation of \(A^{\top } (p)-A_0^{\top }\), was the only numerical interval method providing a solution enclosure for this example, where row dependencies dominate over the column parameter dependencies. With a special treatment of the zero rows in the matrix \(R''\), the generalized method of Neumaier and Pownuk yields the following solution enclosure

$$\begin{aligned} \left( [-41.12, 43.79], \; [-43.12, 44.12], \; [-51.89, 54.23]\right) ^{\top }. \end{aligned}$$
(4.11)

An application of Theorem 4.1, with the same LDR representation, \(p_0=\check{p}\), \(C_0\approx A^{-1}(\check{p})\), and an enclosure \(\mathbf {y}\supseteq \varSigma _\mathrm{uni}\)((4.6))

$$\begin{aligned}&\mathbf {y} = \big ( [-18.97, 22.30], \; [-24.75, 27.41],\; [-27.41, 24.75], \; [-32.01, 33.68], \\&\qquad [-10^{-68}, 10^{-68}], \; [-32.18, 34.51], \; [-26.97, 27.97]\big )^{\top }, \end{aligned}$$

yields a better solution enclosure

$$\begin{aligned} \left( [-26.741, 29.408], \; [-27.797, 28.797], \; [-32.871, 35.204]\right) ^{\top } \end{aligned}$$

without any specific requirements for the implementation and for the choice of the method for obtaining \(\mathbf {y}\supseteq \varSigma _\mathrm{uni}\)((4.6)).

Example 4.4

Consider a finite element model of a one-bay 20-floor truss cantilever presented in Fig. 1 and proposed in [7].

Fig. 1 One-bay 20-floor truss cantilever after [7]

The structure consists of 42 nodes and 101 elements. The bay is \(L = 1\hbox {m}\), every floor is 0.75L, the element cross-sectional area is \(A = 0.01\ \hbox {m}^2\), and the crisp value of the element Young modulus is \(E = 2\times 10^{11}\hbox {N}/\hbox {m}^2\). Twenty horizontal loads with nominal value \(P=1000\) N are applied at the left nodes. The boundary conditions are determined by the supports: at A the support is a pin, at B it is a roller. We assume \(\delta _{E}\)% uncertainty in the modulus of elasticity \(E_k\) of each element, that is, \(E_k\in E [1-\delta _{E}/2, 1+\delta _{E}/2]\), \(k=1,\ldots ,101\), and \(\delta _{P}\)% uncertainty in the loads, that is, \(P_m\in P [1-\delta _{P}/2, 1+\delta _{P}/2]\), \(m=1,\ldots ,20\). The goal is to find an interval estimate for the normalized (nondimensional) displacements \(U\frac{EA}{PL}\).

The parametric linear systems resulting from finite element models of truss structures have the representation (4.4), since the coefficient matrix for each parameter \(E_k\), \(k=1,\ldots ,101\), has rank one, see, e.g., [11, Section 6]. For \(\delta _{E}=6\)% and \(\delta _{P}=10\)%, we solve a parametric interval linear system of 81 equations involving 101 interval parameters in the matrix and 20 interval parameters in the right-hand side. The normalized horizontal and vertical displacements at the right upper corner (node D) of the truss, obtained by Theorem 4.1, where \(\mathbf {y}\) is computed by the single step method, and a solution estimate obtained by the single step method [2, 22], are presented in Table 1. Note that for this level of uncertainty the methods based on the single-sided strong regularity condition (3.3) yield an enclosure of \(U_{D,y}\) which is so wide that it does not guarantee the sign of the displacement, see Table 1. The method of Neumaier and Pownuk took 6 iterations to provide a solution enclosure whose maximal overestimation of the enclosure obtained by Theorem 4.1 is less than \(5.3\times 10^{-6}\).

Table 1 Example 4.4: bounds for the normalized displacement at corner D by two approaches
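The rank one structure invoked above is easy to see for a single truss element: in global coordinates its stiffness matrix is \(K_e=\frac{E_kA}{\ell}\,vv^{\top}\) with \(v=(c,s,-c,-s)^{\top}\), so the coefficient matrix of each parameter \(E_k\) has rank one. A small illustration with a hypothetical helper name:

```python
import numpy as np

def truss_element_stiffness(E, A, x1, y1, x2, y2):
    """Global stiffness matrix of a 2D truss element: K = (E*A/l) * v v^T."""
    dx, dy = x2 - x1, y2 - y1
    length = np.hypot(dx, dy)
    c, s = dx / length, dy / length
    v = np.array([c, s, -c, -s])        # direction cosines of the element
    return (E * A / length) * np.outer(v, v)

# A diagonal element of the cantilever: bay L = 1 m, floor height 0.75 m
K = truss_element_stiffness(E=2e11, A=0.01, x1=0.0, y1=0.0, x2=1.0, y2=0.75)
print(np.linalg.matrix_rank(K))         # 1: the dependency on E is rank one
```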

A reviewer requested a comparison of the computing times. In the interpretative environment of \({ Mathematica}^{\textregistered }\), the implementation of Theorem 4.1, where \(\mathbf {y}\) is obtained by the single step method, was at least four times faster on Example 4.4 than the implementation of the generalized method of Neumaier and Pownuk; the \({ Mathematica}^{\textregistered }\) function Timing gives approximately 0.21 and 0.85 seconds, respectively. This is probably because matrix inversion is a built-in function, while the latter method is iterative, with more complicated code due to features d2.2, d2.3. The time difference might change when running executable code on examples that do not possess the specific features d2.2, d2.3, since the implementation of Theorem 4.1 requires the inversion of two point matrices while the method of Neumaier and Pownuk avoids the second inversion. Although Theorem 4.1 gives freedom in its implementation, estimating \(\mathbf {y}\) via a parameterized p-solution, e.g., [4, 5], is not recommended, since p-solutions require more computation and memory. In any case, the choice of a method for an outer estimate of a parametric united solution set should be guided by the properties of the particular problem and the structure of the parameter dependencies.

5 Conclusion

Based on an equivalent optimal rank one representation of the linear parameter dependencies in a parametric interval matrix, respectively a parametric interval linear system, we presented a set of sufficient conditions for regularity of a parametric interval matrix and a methodology for enclosing the united solution set of a parametric interval linear system. It is proven that both the sufficient conditions and the solution methodology are more general than the existing ones, and more powerful for many parametric problems.

Section 4 showed that the enclosure methodology of Theorem 4.1 allows a more flexible implementation than the method of Neumaier and Pownuk [11] and its generalization [18]. Furthermore, the methodology of Theorem 4.1 is not affected by the implementation features of the generalized method of Neumaier and Pownuk and accounts better for the dependencies between the matrix and the right hand side vector, as illustrated by the considered numerical examples.

The mathematical models of many domain specific problems, e.g., models of electrical circuits [1, 6, 18] and static analysis of mechanical structures modeled by linear finite elements [11], involve rank one matrices describing the parameter dependencies. On the other hand, rank one matrices are widely used in classical numerical analysis for approximation. These two general directions determine the application of the presented methodology for an improved interval uncertainty quantification. Although the presented theory concerns matrices/linear systems involving linear parameter dependencies, it is applicable after a linearization to matrices/linear systems involving nonlinear parameter dependencies. Any other mathematical theory or application which checks regularity of a parametric interval matrix or solves parametric interval linear systems could have an expanded scope of applicability if it uses the results presented in this article.