1. INTRODUCTION

In the modern literature, the term sparse filtering is mainly associated with areas such as machine learning, pattern recognition, and signal and image processing; see, e.g., [1,2,3]. At the same time, recall that the classical statement of the filtering problem (i.e., estimating the state of a dynamical system from measurements) under random disturbances admits an almost exhaustive solution via the Kalman filter. However, in many situations the assumption that the noises are random is not justified; often it is only known that all disturbances are bounded and otherwise arbitrary. In this case, one can construct guaranteed (rather than probabilistic) state estimates. This approach was proposed in the late 1960s and early 1970s by the American researchers Witsenhausen, Bertsekas and Rhodes, and Schweppe [4]. At about the same time, such problems were developed at N.N. Krasovskii’s seminar by A.B. Kurzhanskii [5] and others. A significant contribution to this area of research was made by F.I. Chernous’ko [6]. In particular, the ellipsoidal filtering technique was developed in [4,5,6]; a survey of results in this area can be found in [7, 8].

The filtering problem with bounded nonrandom disturbances was considered in [9,10,11], but only for time-invariant problems, in which all model parameters are time-independent. Moreover, a state estimate was sought whose error is guaranteed to be enclosed in a single (invariant) ellipsoid at all times; i.e., the estimate is uniform in time. The filter itself was also sought in the class of linear time-invariant filters. In this narrowed class of problems and estimates, the problem turned out to be completely solvable, so an optimal filter and a state estimate could be constructed. This statement of the problem differs from those mentioned above: more general models were considered there, but the resulting solution was only suboptimal, and the estimates were not uniform. From a technical point of view, the papers [9,10,11] use the machinery of linear matrix inequalities [12], which has proven itself well in the analysis and synthesis of control systems but has not been widely used in filtering problems. A systematic exposition of this technique can be found in the monograph [13].

On the other hand, sparsity ideas are widely used in signal and image processing, pattern recognition, and many other areas. One of the first applications of the sparsity concept was \( \ell _1 \)-optimization, which was subsequently developed in various directions such as compressed sensing, \( \ell _1 \)-filtering, etc. (see, e.g., [14, 15]). At the same time, the ideas of sparsity have not found wide application in control; among the few publications on the construction of sparse feedback, one can mention [16, 17], where the sparse structure is stipulated in advance and the focus is on optimization algorithms.

A new approach to constructing sparse feedback was proposed in [18]. It is based on minimizing the number of nonzero rows or columns of a matrix rather than the number of nonzero vector components. Such matrices are called row- and column-sparse, respectively. The concept of sparsity is used to synthesize linear state or output feedback in control systems under nonstandard integer-valued performance criteria such as the number of nonzero components in the control vector. Such problems are difficult, and their straightforward solution leads to combinatorial enumeration. Instead, it was proposed to convexify the problem using special matrix norms and to obtain a suboptimal solution explicitly.

This approach is distinguished by simplicity (the original problems are reduced to low-dimensional convex programming problems, and standard tools such as the Matlab environment can be used for their numerical solution), versatility (problems in continuous and discrete time are considered in a consistent manner, and the approach can be extended to various robust statements of the problem and to constructing both state and output linear feedback), and extendibility to various optimal control problems such as linear-quadratic control, \( H_\infty \)-optimization, etc.

The present paper is a natural continuation of both [9,10,11] and [18]. It considers an approach to solving the sparse filtering problem, i.e., filtering with a reduced number of outputs, for systems subjected to arbitrary bounded exogenous disturbances. This approach leads to column-sparse filter matrices with relatively low performance loss and avoids combinatorial enumeration of all possible arrangements of zero columns in the filter matrix. The continuous- and discrete-time versions of the problem are treated in a unified manner.

Throughout the following, \( \|\cdot \| \) is the Euclidean norm of a vector and the spectral norm of a matrix, \( I \) is the identity matrix of appropriate size, and all matrix inequalities are understood in the sense of the sign definiteness of the matrices.

2. SPARSE CONTROL

Let us recall the main ideas of the above-mentioned approach to constructing a sparse control. Let \( X\in \mathbb R^{n\times p} \); let us introduce the matrix norms

$$ \|X\|_{r_1}=\sum \limits _{i=1}^n\max _{1\leqslant j\leqslant p}|x_{ij}|, \qquad \|X\|_{c_1}=\sum \limits _{j=1}^p\max _{1\leqslant i\leqslant n}|x_{ij}|. $$

These norms are well known: the first of them is sometimes called the \( \ell _{1,\infty } \)-norm; its main application is to reconstruct row-sparse solutions of matrix equations [19]; likewise, the \( c_1 \)-norm reconstructs column-sparse solutions.
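As a simple illustration (our own sketch, not part of [18]; the function names are ours), both norms can be computed in a few lines of NumPy:

```python
import numpy as np

def r1_norm(X: np.ndarray) -> float:
    """||X||_{r1}: sum over rows of the largest absolute entry in each row."""
    return float(np.abs(X).max(axis=1).sum())

def c1_norm(X: np.ndarray) -> float:
    """||X||_{c1}: sum over columns of the largest absolute entry in each column."""
    return float(np.abs(X).max(axis=0).sum())

# Example: a matrix with one zero row and one zero column.
X = np.array([[1.0, 0.0, -2.0],
              [0.0, 0.0,  0.0],
              [3.0, 0.0,  0.5]])
print(r1_norm(X), c1_norm(X))   # 5.0 and 5.0 for this X
```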

The following result was established in [18].

Theorem 1.

If the problem

$$ \min \|X\|_{r_1}\quad \text {under constraint}\quad AX=B, $$

where \( A\in \mathbb R^{m\times n} \), \( m<n \), \( B\in \mathbb R^{m\times p} \), and \( X\in \mathbb R^{n\times p} \), is solvable, then there exists a solution with at most \( m \) nonzero rows.

A similar result can be stated for the \( c_1 \)-norm and zero columns.
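The convex program in Theorem 1 is straightforward to pose numerically. The following CVXPY sketch is our illustration (the randomly generated \( A \) and \( B \) and the problem sizes are our assumptions); since \( A \) has full row rank here, the constraint is solvable:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 3, 6, 2                        # m < n, as in Theorem 1
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))

X = cp.Variable((n, p))
r1 = cp.sum(cp.max(cp.abs(X), axis=1))   # ||X||_{r1} = sum_i max_j |x_ij|
prob = cp.Problem(cp.Minimize(r1), [A @ X == B])
prob.solve()

# Theorem 1 guarantees a minimizer with at most m nonzero rows exists;
# in practice the solver typically returns one with several (numerically) zero rows.
print(np.round(X.value, 4))
```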

The approach developed in [18] allows constructing sparse controllers in various situations in a regular manner. In particular, consider the continuous-time linear system

$$ \dot x=Ax+Bu $$
(1)

with the state \( x\in \mathbb R^n \) and the control \( u\in \mathbb R^m \); thus, \( A\in \mathbb R^{n\times n} \) and \( B\in \mathbb R^{n\times m} \); we assume that the pair \( (A,B) \) is controllable. The problem is to synthesize a sparse stabilizing control \( u=Kx \); sparseness means the presence of zero components in the control vector. This problem is equivalent to finding a row-sparse stabilizing controller matrix \( K\in \mathbb R^{m\times n} \), i.e., a matrix containing some zero rows.

In the subsequent exposition, we will need some standard technical devices used to derive results of this kind. It is well known that the matrix \( A+BK \) of the closed-loop system is stable if and only if there exists a positive definite matrix \( Q\succ 0 \) such that

$$ {(A+BK)}^{\top }Q+Q(A+BK)\prec 0. $$

By pre- and post-multiplying this matrix inequality by \( P=Q^{-1} \) and by introducing the new variable \( Y=KP \), we arrive at the linear matrix inequality

$$ AP+P{A}^{\top }+BY+{Y}^{\top }{B}^{\top }\prec 0,\quad P\succ 0 $$
(2)

for the matrix variables \( P={P}^{\top } \) and \( Y \). Then any stabilizing controller for system (1) is given by the expression \( \widehat K=\widehat Y\widehat P^{-1} \), where the matrices \( \widehat P \) and \( \widehat Y \) satisfy (2).

It is clear that if a matrix containing zero rows is postmultiplied by a matrix of appropriate size, then the same zero rows will appear in the resulting matrix; in other words, postmultiplication preserves the row-sparse structure of the matrix. Therefore, if the solution \( \widehat Y \) of the linear matrix inequality (2) is row-sparse, then so is the corresponding controller \( \widehat K \). In turn, the row sparsity of the matrix \( Y \) can be achieved by minimizing its \( r_1 \)-norm. Thus, the following assertion holds true.

Assertion [18].

The solution \( (\widehat P,\widehat Y) \) of the convex programming problem

$$ \min \|Y\|_{r_1}\quad \text {under constraints}\quad AP+P{A}^{\top }+BY+{Y}^{\top }{B}^{\top }\prec 0,\quad P\succ 0 $$

for the matrix variables \( P={P}^{\top }\in \mathbb R^{n\times n} \) and \( Y\in \mathbb R^{m\times n} \) defines a row-sparse stabilizing controller \( K_{\rm sp}=\widehat Y\widehat P^{-1} \) for system (1).

This result allows one to determine a control that stabilizes the system; the components of the control actually used correspond to the nonzero rows of the matrix \( K_{\rm sp} \). Generally speaking, one cannot guarantee that the resulting solution is necessarily sparse; however, sparsity can be expected in view of Theorem 1.
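The Assertion translates directly into a small semidefinite program. The sketch below is our illustration in CVXPY; the strict inequalities are approximated by a margin \( \varepsilon I \), a standard numerical device that is our assumption rather than part of the original statement:

```python
import cvxpy as cp
import numpy as np

def sparse_stabilizing_gain(A: np.ndarray, B: np.ndarray, eps: float = 1e-6):
    """min ||Y||_{r1} subject to LMI (2); returns K_sp = Y P^{-1}."""
    n, m = B.shape
    P = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((m, n))
    lyap = A @ P + P @ A.T + B @ Y + Y.T @ B.T
    constraints = [lyap << -eps * np.eye(n),    # strict "< 0" with a margin
                   P >> eps * np.eye(n)]        # strict "> 0" with a margin
    r1 = cp.sum(cp.max(cp.abs(Y), axis=1))      # ||Y||_{r1}
    cp.Problem(cp.Minimize(r1), constraints).solve()
    return Y.value @ np.linalg.inv(P.value)     # rows of K_sp inherit zero rows of Y
```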

We will apply these ideas to solving the sparse filtering problem stated in the next section.

3. CONTINUOUS-TIME CASE

3.1. Filtering Problem

Consider a continuous-time system

$$ \begin {aligned} \dot x &= Ax+D_1w,\quad x(0)=x_0,\\ y &= Cx+D_2w, \end {aligned} $$
(3)

where \( A\in \mathbb R^{n\times n} \), \( D_1\in \mathbb R^{n\times m} \), \( D_2\in \mathbb R^{l\times m} \), and \( C\in \mathbb R^{l\times n} \), with a phase state \( x(t)\in \mathbb R^n \), an observable output \( y(t)\in \mathbb R^l \), and an exogenous disturbance (noise) \( w(t)\in \mathbb R^m \) satisfying the constraint

$$ \big \|w(t)\big \|\leqslant 1\quad \text {for all}\quad t\geqslant 0; $$
(4)

the pair \( (A,D_1) \) is controllable, and the pair \( (A,C) \) is observable.

Let the state \( x \) of the system be inaccessible for measurement, and let information about the system be given by its output \( y \). Let us construct a filter described by a linear differential equation for the state estimate \( \widehat x \) that incorporates the mismatch between the output \( y \) and its prediction \( C\widehat x \),

$$ \dot {\widehat x}=A\widehat x+L(y-C\widehat x),\quad \widehat x(0)=0, $$
(5)

where \( L\in \mathbb R^{n\times l} \). Note that the filter structure is preset (the filter is linear time-invariant) and only the constant matrix \( L \) is to be selected. This structure is the same as in the well-known Luenberger observer.

We introduce the mismatch

$$ e(t)=x(t)-\widehat x(t), $$

which characterizes the filtering accuracy.

The problem is to find the minimum (in some sense) invariant ellipsoid containing the mismatch \( e \). The invariant ellipsoid methodology for the analysis and synthesis of control systems is described in detail in [12, 13]. Recall that the ellipsoid

$$ \mathcal E_x=\bigl \{x\in \mathbb R^n\colon \quad {x}^{\top }P^{-1}x\leqslant 1\bigr \},\quad P\succ 0, $$

is said to be invariant for a dynamical system if the condition \( x(0)\in \mathcal E_x \) implies that \( x(t)\in \mathcal E_x \) for all \( t\geqslant 0 \). In other words, any system trajectory issuing from a point in the ellipsoid \( \mathcal E_x \) remains in this ellipsoid at all times under all admissible exogenous disturbances affecting the system. Note that, owing to the attraction property of an invariant ellipsoid, for large deviations we estimate the asymptotic filtering accuracy, while for small deviations we also estimate the filtering accuracy uniformly in \( t \).

Note that the observability condition implies the existence of at least one invariant ellipsoid (and controllability guarantees its full dimension). There are many invariant ellipsoids; the goal is to find the minimal one for a fixed stabilizing \( L \) and then to minimize this ellipsoid with respect to \( L \). It is convenient to regard the ellipsoid whose matrix has the minimum trace as minimal.
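For reference, checking whether a point belongs to the ellipsoid \( \mathcal E_x \) amounts to a single linear solve; a minimal sketch (ours, not from the paper):

```python
import numpy as np

def in_ellipsoid(e: np.ndarray, P: np.ndarray) -> bool:
    """Check e^T P^{-1} e <= 1 without forming P^{-1} explicitly."""
    return float(e @ np.linalg.solve(P, e)) <= 1.0
```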

The following theorem was established in [9].

Theorem 2.

Let \( \widehat Q \), \( \widehat Y \) be a solution of the problem

$$ \min \mathrm{tr}\, H $$

under the constraints

$$ \begin {pmatrix} {A}^{\top }Q+QA-YC-{C}^{\top }{Y}^{\top }+\alpha Q & QD_1-YD_2 \\ {D}^{\top }_1Q-{D}^{\top }_2{Y}^{\top } & -\alpha I \end {pmatrix}\preccurlyeq 0, $$
$$ \begin {pmatrix} H & I\\ I & Q \end {pmatrix}\succcurlyeq 0, \qquad Q\succ 0, $$

for the matrix variables \( Q={Q}^{\top }\in \mathbb R^{n\times n} \), \( Y\in \mathbb R^{n\times l} \), and \( H={H}^{\top }\in \mathbb R^{n\times n} \) and a scalar parameter \( \alpha >0 \).

Then the optimal filter matrix is given by the expression

$$ \widehat L=\widehat Q^{-1}\widehat Y, $$

and the minimum invariant ellipsoid containing the mismatch of systems (3) and (5) with \( x_0=0 \) is determined by the matrix

$$ \widehat P=\widehat Q^{-1}. $$

Note that for a fixed \( \alpha \) the problem stated in Theorem 2 reduces to minimizing a linear function under constraints given by linear matrix inequalities, i.e., to a semidefinite programming (SDP) problem, which belongs to the class of convex optimization problems.
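For a fixed \( \alpha \), the SDP of Theorem 2 can be posed, for example, in CVXPY. The following sketch is our illustration (the margin \( \varepsilon I \) replacing the strict inequality \( Q\succ 0 \) and the helper names are our assumptions); since \( \alpha \) enters the first LMI bilinearly, it is fixed in advance and can be tuned by a one-dimensional search:

```python
import cvxpy as cp
import numpy as np

def build_constraints(A, C, D1, D2, alpha, eps=1e-8):
    """CVXPY variables and the LMIs of Theorem 2 for a fixed alpha."""
    n, l, m = A.shape[0], C.shape[0], D1.shape[1]
    Q = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((n, l))
    H = cp.Variable((n, n), symmetric=True)
    W = A.T @ Q + Q @ A - Y @ C - C.T @ Y.T + alpha * Q
    R = Q @ D1 - Y @ D2
    lmi1 = cp.bmat([[W, R], [R.T, -alpha * np.eye(m)]]) << 0
    lmi2 = cp.bmat([[H, np.eye(n)], [np.eye(n), Q]]) >> 0
    return Q, Y, H, [lmi1, lmi2, Q >> eps * np.eye(n)]

def optimal_filter(A, C, D1, D2, alpha):
    """The optimal filter of Theorem 2 for this alpha."""
    Q, Y, H, cons = build_constraints(A, C, D1, D2, alpha)
    prob = cp.Problem(cp.Minimize(cp.trace(H)), cons)
    prob.solve()
    L_star = np.linalg.inv(Q.value) @ Y.value    # L* = Q^{-1} Y
    return L_star, np.linalg.inv(Q.value), prob.value
```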

3.2. Sparse Filtering

Next, let us look for a sparse solution of the filtering problem for system (3), (4). Note that the filter matrix \( L \) is found as \( L=Q^{-1}Y \), and so if \( Y \) turns out to be column-sparse, then the corresponding filter matrix \( L \) will be column-sparse as well. In turn, the column sparsity of the matrix \( Y \) can be achieved by minimizing its \( c_1 \)-norm.

Thus, we obtain the following algorithm, which consists of three successive steps.

Algorithm 1.

Step 1.

By solving the convex programming problem

$$ \min \mathrm{tr}\, H $$
(6)

under the constraints

$$ \begin {pmatrix} {A}^{\top }Q+QA-YC-{C}^{\top }{Y}^{\top }+\alpha Q & QD_1-YD_2 \\ {D}^{\top }_1Q-{D}^{\top }_2{Y}^{\top } & -\alpha I \end {pmatrix}\preccurlyeq 0, $$
(7)
$$ \begin {pmatrix} H & I\\ I & Q \end {pmatrix}\succcurlyeq 0, \qquad Q\succ 0, $$
(8)

for the matrix variables \( Q={Q}^{\top }\in \mathbb R^{n\times n} \), \( Y\in \mathbb R^{n\times l} \), and \( H={H}^{\top }\in \mathbb R^{n\times n} \) and a scalar parameter \( \alpha >0 \), obtain the values of \( Q^* \), \( Y^* \), and \( H^* \) determining the optimal filter matrix

$$ L^*=(Q^*)^{-1}Y^*, $$

the matrix

$$ P^*=(Q^*)^{-1} $$

of the minimum invariant ellipsoid for the mismatch, and the corresponding optimal value of the functional

$$ J^*=\mathrm{tr}\, H^*. $$
Step 2.

With the optimal value of the functional \( J^* \), introduce a scalar relaxation coefficient \( \gamma >1 \) and solve the convex \( c_1 \)-optimization problem

$$ \min \|Y\|_{c_1} \;\;\text {under constraints (7), (8), and}\;\;\mathrm{tr}\, H\leqslant \gamma J^* $$
(9)

for the matrix variables \( Q={Q}^{\top }\in \mathbb R^{n\times n} \), \( Y\in \mathbb R^{n\times l} \), and \( H={H}^{\top }\in \mathbb R^{n\times n} \) and a scalar parameter \( \alpha \). By virtue of the properties of the \( c_1 \)-norm, one can expect the occurrence of zero columns in the solution \( \widehat Y_0 \) of problem (9).

Step 3.

Solve the original problem (6)–(8), where the same arrangement of zero columns is fixed in the matrix variable \( Y \) as in the column-sparse matrix \( \widehat Y_0 \). Its solution \( \widehat Q \), \( \widehat Y \) delivers the column-sparse filter matrix

$$ \widehat L=\widehat Q^{-1}\widehat Y $$

and the matrix

$$ \widehat P=\widehat Q^{-1} $$

of the corresponding invariant ellipsoid for the mismatch.

The choice of the relaxation coefficient \( \gamma \) is a subtle question; it is difficult to give any quantitative estimates in advance. The dependence of solutions of problems of the form (9) on the value of \( \gamma \), together with some heuristic approaches to choosing the relaxation coefficient, is discussed in [20].

Note that to obtain a sparse solution of this problem by “brute force,” one would have to solve the problem for all possible column-sparse structures of the matrix \( Y \) and choose the best one according to the performance criterion; in other words, combinatorial enumeration could not be avoided. It will be shown in Sec. 5 that the proposed procedure leads to highly sparse filter matrices with small losses in terms of the performance criterion.
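For concreteness, here is a hedged sketch of Steps 1–3 of Algorithm 1, reusing build_constraints from the sketch in Sec. 3.1; the tolerance for declaring a column of \( Y \) zero is our assumption:

```python
import cvxpy as cp
import numpy as np

def sparse_filter(A, C, D1, D2, alpha, gamma=10.0, tol=1e-6):
    # Step 1: reference cost J* of the optimal (non-sparse) filter.
    Q, Y, H, cons = build_constraints(A, C, D1, D2, alpha)
    cp.Problem(cp.Minimize(cp.trace(H)), cons).solve()
    J_star = cp.trace(H).value

    # Step 2: minimize ||Y||_{c1} at the relaxed performance level gamma * J*.
    c1 = cp.sum(cp.max(cp.abs(Y), axis=0))
    cp.Problem(cp.Minimize(c1),
               cons + [cp.trace(H) <= gamma * J_star]).solve()
    zero_cols = np.where(np.abs(Y.value).max(axis=0) < tol)[0]

    # Step 3: fix the detected zero columns of Y and re-minimize tr H.
    fixed = [Y[:, j] == 0 for j in zero_cols]
    cp.Problem(cp.Minimize(cp.trace(H)), cons + fixed).solve()
    L_sparse = np.linalg.inv(Q.value) @ Y.value
    return L_sparse, np.linalg.inv(Q.value), zero_cols
```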

Remark 1.

In some cases there is a priori information on the initial state of the system \( x(0)\in \mathcal E_0 \), where

$$ \mathcal E_0=\{x\colon \quad {x}^{\top }P_0^{-1}x\leqslant 1\}. $$

Then, choosing \( \widehat x(0)=0 \), we can guarantee that \( e(0)\in \mathcal E_0 \). If we additionally require that \( \mathcal E_0\subset \mathcal E \), then \( e(t)\in \mathcal E \) for all \( t \). Thus, if the system of constraints (7), (8) in Algorithm 1 is supplemented with the condition

$$ Q\preccurlyeq P_0^{-1}, $$

then we obtain not only an asymptotic estimate but also a guaranteed estimate of the sparse filtering accuracy at all times.

Remark 2.

It is often necessary to evaluate the filtering performance not for all coordinates of the state \( x \) but only for some of them. Assume that we have an output \( y_1=C_1x \) (for example, one of the state coordinates) and we wish to make the estimation error

$$ e_1=y_1-\widehat y_1=C_1(x-\widehat x) $$

as small as possible. This problem can be solved by replacing the first inequality in (8) (with \( H \) of the corresponding dimensions) with

$$ \begin {pmatrix} H & C_1 \\ {C}^{\top }_1 & Q \end {pmatrix}\succcurlyeq 0. $$
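Both remarks amount to one-line changes in the constraint set; the sketch below (ours) is written in the notation of the earlier CVXPY fragments, and for Remark 2 the variable \( H \) must be redeclared with the dimensions of \( y_1 \):

```python
import cvxpy as cp
import numpy as np

def remark_constraints(Q, H, P0=None, C1=None):
    """Extra LMIs from Remarks 1 and 2 (Q, H: CVXPY variables as above).

    For Remark 2, H must have shape (k, k) with k = C1.shape[0], and the
    returned block inequality replaces the first inequality in (8)."""
    extra = []
    if P0 is not None:                                    # Remark 1: Q <= P0^{-1}
        extra.append(Q << np.linalg.inv(P0))
    if C1 is not None:                                    # Remark 2: weighted error
        extra.append(cp.bmat([[H, C1], [C1.T, Q]]) >> 0)
    return extra
```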

4. DISCRETE-TIME CASE

Similar results can be obtained for the linear discrete-time system

$$ \begin {aligned} x_{k+1} &=Ax_k+D_1w_k, \\ y_k &=Cx_k+D_2w_k \end {aligned} $$
(10)

with some initial condition \( x_0 \), where \( A\in \mathbb R^{n\times n} \), \( D_1\in \mathbb R^{n\times m} \), \( D_2\in \mathbb R^{l\times m} \), and \( C\in \mathbb R^{l\times n} \), with a state \( x_k\in \mathbb R^n \), an observable output \( y_k\in \mathbb R^l \), and an exogenous disturbance \( w_k\in \mathbb R^m \) satisfying the constraint

$$ \|w_k\|\leqslant 1\quad \text {for all } k=0,1,2,\ldots ; $$
(11)

here the pair \( (A,D_1) \) is controllable and the pair \( (A,C) \) is observable.

Namely, let us construct a filter described by a linear difference equation with a constant matrix \( L \) for the state estimate \( \widehat x_k \),

$$ \widehat x_{k+1}=A\widehat x_k+L(y_k-C\widehat x_k), \quad \widehat x_0=0, $$
(12)

where \( L\in \mathbb R^{n\times l} \).

We introduce the mismatch

$$ e_k=x_k-\widehat x_k. $$

By analogy with the continuous-time case, the problem is to find a matrix \( L \) minimizing the invariant ellipsoid \( \mathcal E \) containing the mismatch \( e_k \).

The following theorem is a discrete-time analog of Theorem 2.

Theorem 3 [13].

Let \( \widehat Q \) and \( \widehat Y \) be a solution of the problem

$$ \min \mathrm{tr}\, H $$

under the constraints

$$ \begin {pmatrix} -\alpha Q & {(QA-YC)}^{\top } & 0 \\ QA-YC & -Q & QD_1-YD_2 \\ 0 & {(QD_1-YD_2)}^{\top } & -(1-\alpha )I \end {pmatrix}\preccurlyeq 0, $$
$$ \begin {pmatrix} H & I\\ I & Q \end {pmatrix}\succcurlyeq 0, \qquad Q\succ 0, $$

for the matrix variables \( Q={Q}^{\top }\in \mathbb R^{n\times n} \), \( Y\in \mathbb R^{n\times l} \), and \( H={H}^{\top }\in \mathbb R^{n\times n} \) and a scalar parameter \( 0<\alpha <1 \).

Then the optimal filter matrix is given by the expression

$$ \widehat L=\widehat Q^{-1}\widehat Y, $$

and the minimum invariant ellipsoid containing the mismatch \( e_k \) of system (10), (12) with \( x_0=0 \) is determined by the matrix

$$ \widehat P=\widehat Q^{-1}. $$
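Only the first LMI changes relative to the continuous-time case; a CVXPY sketch of its construction (ours, in the notation of the fragments of Sec. 3, with \( 0<\alpha <1 \) fixed):

```python
import cvxpy as cp
import numpy as np

def discrete_lmi(A, C, D1, D2, Q, Y, alpha):
    """The first LMI of Theorem 3 (Q, Y: CVXPY variables; 0 < alpha < 1)."""
    n, m = A.shape[0], D1.shape[1]
    S = Q @ A - Y @ C
    R = Q @ D1 - Y @ D2
    Z = np.zeros((n, m))                 # zero blocks of the theorem's matrix
    return cp.bmat([[-alpha * Q, S.T, Z],
                    [S, -Q, R],
                    [Z.T, R.T, -(1 - alpha) * np.eye(m)]]) << 0
```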

The procedure of searching for a sparse solution of the filtering problem for system (10), (11) involves three successive steps as well.

Algorithm 2.

Step 1.

By solving the convex programming problem

$$ \min \mathrm{tr}\, H $$
(13)

under the constraints

$$ \begin {pmatrix} -\alpha Q & {(QA-YC)}^{\top } & 0 \\ QA-YC & -Q & QD_1-YD_2 \\ 0 & {(QD_1-YD_2)}^{\top } & -(1-\alpha )I \end {pmatrix}\preccurlyeq 0, $$
(14)
$$ \begin {pmatrix} H & I\\ I & Q \end {pmatrix}\succcurlyeq 0, \qquad Q\succ 0, $$
(15)

for the matrix variables \( Q={Q}^{\top }\in \mathbb R^{n\times n} \), \( Y\in \mathbb R^{n\times l} \), and \( H={H}^{\top }\in \mathbb R^{n\times n} \) and a scalar parameter \( 0<\alpha <1 \), obtain the values of \( Q^* \), \( Y^* \), and \( H^* \) that determine the optimal filter matrix

$$ L^*=(Q^*)^{-1}Y^*, $$

the matrix

$$ P^*=(Q^*)^{-1} $$

of the minimum invariant ellipsoid for the mismatch, and the corresponding optimal value of the functional

$$ J^*=\mathrm{tr}\, H^*. $$
Step 2.

With the optimal value of the functional \( J^* \), introduce a scalar relaxation coefficient \( \gamma >1 \) and solve the convex programming problem

$$ \min \|Y\|_{c_1} \;\;\text {under constraints (14), (15), and}\;\;\mathrm{tr}\, H\leqslant \gamma J^* $$

for the matrix variables \( Q={Q}^{\top }\in \mathbb R^{n\times n} \), \( Y\in \mathbb R^{n\times l} \), and \( H={H}^{\top }\in \mathbb R^{n\times n} \) and a scalar parameter \( \alpha \). By virtue of the properties of the \( c_1 \)-norm, one can expect the occurrence of zero columns in the solution \( \widehat Y_0 \) of this problem.

Step 3.

Solve the original problem (13)–(15), where the same arrangement of zero columns is fixed in the matrix variable \( Y \) as in the column-sparse matrix \( \widehat Y_0 \). Its solution \( \widehat Q \), \( \widehat Y \) gives the column-sparse filter matrix

$$ \widehat L=\widehat Q^{-1}\widehat Y $$

and the matrix

$$ \widehat P=\widehat Q^{-1} $$

of the corresponding invariant ellipsoid for the mismatch.

Remarks 1 and 2 hold true in the discrete-time case.

As the results of numerical modeling show, the “price” for using a small number of controls/outputs (a loss in the performance criterion) is, as a rule, relatively low.

5. EXAMPLE

Let us demonstrate the proposed approach to solving the sparse filtering problem using the HE3 problem from the COMPleib library [21] as an example. This library contains test problems of transparent engineering origin that are often used to test the efficiency of control algorithms. The system considered below describes the linearized dynamics of the Bell 201A-1 helicopter with eight states; the corresponding matrices of system (3) have the form

$$ A=\begin {pmatrix} -0{.}0046 & 0{.}038 & 0{.}3259 & -0{.}0045 & -0{.}402 & -0{.}073 & -9{.}81 & 0 \\[.25em] -0{.}1978 & -0{.}5667 & 0{.}357 & -0{.}0378 & -0{.}2149 & 0{.}5683 & 0 & 0 \\[.25em] 0{.}0039 & -0{.}0029 & -0{.}2947 & 0{.}007 & 0{.}2266 & 0{.}0148 & 0 & 0 \\[.25em] 0{.}0133 & -0{.}0014 & -0{.}4076 & -0{.}0654 & -0{.}4093 & 0{.}2674 & 0 & 9{.}81 \\[.25em] 0{.}0127 & -0{.}01 & -0{.}8152 & -0{.}0397 & -0{.}821 & 0{.}1442 & 0 & 0 \\[.25em] -0{.}0285 & -0{.}0232 & 0{.}1064 & 0{.}0709 & -0{.}2786 & -0{.}7396 & 0 & 0 \\[.25em] 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\[.25em] 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \end {pmatrix}, $$
$$ D_1=\begin {pmatrix} 0{.}0676 \\[.25em] -1{.}1151 \\[.25em] 0{.}0062 \\[.25em] -0{.}017 \\[.25em] -0{.}0129 \\[.25em] 0{.}139 \\[.25em] 0 \\[.25em] 0 \end {pmatrix}, \qquad C=\begin {pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\[.25em] 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\[.25em] 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\[.25em] 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\[.25em] 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\[.25em] 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \end {pmatrix}, \qquad D_2=\begin {pmatrix} 0 \\[.25em] 0{.}1 \\[.25em] 0 \\[.25em] 0 \\[.25em] 0{.}05 \\[.25em] 0 \end {pmatrix}. $$

Setting \( P_0=0{.}1I \) and using Theorem 2, at the first step of Algorithm 1 we obtain the optimal filter matrix

$$ L^*=\begin {pmatrix} -3{.}3888 & -0{.}4284 & 0{.}0451 & 0{.}7626 & 1{.}3802 & -0{.}6771 \\[.25em] 1{.}2477 & -10{.}8108 & -0{.}0285 & -0{.}2999 & -0{.}4385 & 0{.}2334 \\[.25em] 0{.}5366 & -0{.}1163 & 0{.}0020 & 0{.}0763 & -0{.}0001 & -0{.}1173 \\[.25em] -0{.}0430 & 0{.}0292 & 9{.}8101 & 0{.}3401 & -0{.}3954 & -0{.}4499 \\[.25em] -0{.}3659 & 0{.}0901 & 1{.}0006 & -0{.}1135 & -0{.}4449 & -0{.}5456 \\[.25em] 0{.}2883 & 1{.}3576 & 0{.}0020 & -0{.}4210 & 0{.}0414 & -0{.}0226 \\[.25em] 6{.}9893 & -0{.}7343 & -0{.}0084 & -1{.}0649 & 0{.}7739 & 0{.}0687 \\[.25em] 0{.}0073 & 0{.}0021 & 0{.}2791 & 0{.}0000 & -0{.}0043 & -0{.}0000 \end {pmatrix} $$

and the corresponding invariant ellipsoid for the mismatch with the trace \( \mathrm{tr}\, P^*=1{.}1381 \).

At the second step, solving the \( c_1 \)-optimization problem (9) with \( \gamma =10 \), we find a matrix \( \widehat Y_0 \) whose last two columns are zero to within entries of order \( 10^{-10} \),

$$ \widehat Y_0=\begin {pmatrix} -0{.}4093 & -2{.}0441 & 0{.}1151 & -0{.}1159 & 0{.}0000 & -0{.}0000 \\ 0{.}4093 & -2{.}0441 & -0{.}2968 & -0{.}2514 & -0{.}0000 & -0{.}0000 \\ 0{.}4093 & 2{.}0441 & -1{.}0477 & -0{.}0129 & 0{.}0000 & -0{.}0000 \\ 0{.}0724 & -0{.}3055 & 1{.}9967 & 0{.}2514 & 0{.}0000 & -0{.}0000 \\ -0{.}4093 & -0{.}3780 & 1{.}9967 & -0{.}2514 & -0{.}0000 & 0{.}0000 \\ -0{.}4093 & 2{.}0441 & 1{.}1985 & 0{.}2514 & 0{.}0000 & -0{.}0000 \\ -0{.}2441 & 2{.}0441 & 1{.}9967 & 0{.}2514 & 0{.}0000 & -0{.}0000 \\ -0{.}0143 & -0{.}4028 & 1{.}9967 & 0{.}0245 & -0{.}0000 & 0{.}0000 \end {pmatrix} . $$

Fixing these columns at zero and solving the original problem again, at the third step we obtain the column-sparse filter matrix

$$ \widehat L=\begin {pmatrix} -1{.}4878 & 0{.}6754 & 0{.}0519 & 0{.}5895 & 0 & 0 \\ 0{.}7782 & -11{.}1508 & -0{.}3270 & -0{.}2482 & 0 & 0 \\ 0{.}7283 & 0{.}0624 & -0{.}0159 & 0{.}0733 & 0 & 0 \\ -0{.}8973 & -0{.}1698 & 9{.}9423 & 0{.}4216 & 0 & 0 \\ -0{.}8263 & -0{.}1289 & 1{.}0840 & -0{.}0998 & 0 & 0 \\ 0{.}3308 & 1{.}3900 & 0{.}0164 & -0{.}4074 & 0 & 0 \\ 8{.}0516 & -0{.}0006 & -0{.}5781 & -0{.}9786 & 0 & 0 \\ 0{.}1116 & 0{.}0000 & 0{.}3123 & -0{.}0170 & 0 & 0 \end {pmatrix} $$

and the matrix

$$ \widehat P=\begin {pmatrix} 0{.}4183 & -0{.}1314 & 0{.}0026 & -0{.}0146 & -0{.}0251 & 0{.}0179 & -0{.}0083 & -0{.}0147 \\ -0{.}1314 & 0{.}1571 & 0{.}0013 & 0{.}0007 & 0{.}0069 & -0{.}0076 & 0{.}0104 & 0{.}0050 \\ 0{.}0026 & 0{.}0013 & 0{.}1019 & -0{.}0044 & -0{.}0030 & -0{.}0000 & 0{.}0055 & -0{.}0010 \\ -0{.}0146 & 0{.}0007 & -0{.}0044 & 0{.}1105 & 0{.}0076 & -0{.}0004 & -0{.}0125 & 0{.}0026 \\ -0{.}0251 & 0{.}0069 & -0{.}0030 & 0{.}0076 & 0{.}1062 & -0{.}0011 & -0{.}0077 & 0{.}0024 \\ 0{.}0179 & -0{.}0076 & -0{.}0000 & -0{.}0004 & -0{.}0011 & 0{.}1010 & -0{.}0010 & -0{.}0007 \\ -0{.}0083 & 0{.}0104 & 0{.}0055 & -0{.}0125 & -0{.}0077 & -0{.}0010 & 0{.}1170 & -0{.}0022 \\ -0{.}0147 & 0{.}0050 & -0{.}0010 & 0{.}0026 & 0{.}0024 & -0{.}0007 & -0{.}0022 & 0{.}1011 \end {pmatrix} $$

of the invariant ellipsoid for the mismatch with the trace

$$ \mathrm{tr}\, \widehat P=1{.}2131. $$

Thus, a sparse filter not using the outputs \( y_5 \) and \( y_6 \) has been constructed; here the loss in the performance criterion is only 6.5%.

Fig. 1. Filtering of the coordinate \( x_1 \).

Fig. 2. Filtering of the coordinate \( x_4 \).

Fig. 3. Projection of a system trajectory and its estimates onto the plane \( (x_1,x_4) \).

The solid line in Fig. 1 shows the trajectory \( x_1(t) \) under the initial condition \( x_1(0)=\cdots =x_8(0)=0{.}01 \) and an admissible (in this case, step-like) exogenous disturbance; the dashed line shows the optimal estimate \( \widehat x_1(t) \) of the trajectory, and the dotted line shows the result \( \widetilde x_1(t) \) of sparse filtering.

The accuracy of sparse filtering is even higher for the coordinate \( x_4 \); see Fig. 2.

The solid line in Fig. 3 shows the projection of the system trajectory onto the plane \( (x_1,x_4) \), while the dashed and dotted lines show the projections of the optimal estimate of the trajectory and of the result of sparse filtering, respectively. Since the original system is unstable, the domain containing its trajectories is unbounded.
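Simulations of this kind can be reproduced qualitatively by integrating system (3) together with filter (5). The sketch below (ours) uses SciPy with a step disturbance of unit norm, which is admissible under (4); the matrices and the filter gain are assumed to be available from the computations above:

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate(A, C, D1, D2, L, x0, T=20.0):
    """Integrate system (3) jointly with filter (5) under a step disturbance."""
    n = A.shape[0]
    m = D1.shape[1]
    w = lambda t: np.ones(m) / np.sqrt(m)        # step disturbance, ||w(t)|| = 1
    def rhs(t, z):
        x, xh = z[:n], z[n:]
        y = C @ x + D2 @ w(t)
        return np.concatenate([A @ x + D1 @ w(t),
                               A @ xh + L @ (y - C @ xh)])
    z0 = np.concatenate([x0, np.zeros(n)])       # \hat x(0) = 0
    sol = solve_ivp(rhs, (0.0, T), z0, max_step=0.01)
    return sol.t, sol.y[:n], sol.y[n:]           # t, x(t), \hat x(t)

# e.g., t, x, xh = simulate(A, C, D1, D2, L_hat, 0.01 * np.ones(8));
# plotting x[0] against xh[0] gives a comparison of the kind shown in Fig. 1.
```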

6. CONCLUSIONS

A simple, universal approach to solving the sparse filtering problem (i.e., filtering with a reduced number of outputs) under arbitrary bounded exogenous disturbances using an observer has been proposed. The approach is based on the invariant ellipsoid method and the technique of linear matrix inequalities. This framework has made it possible to reduce the original problem to a semidefinite programming problem that is easy to solve numerically.

The approach is characterized by simplicity and ease of implementation and covers the continuous- and discrete-time statements of the problem in a unified way. The efficiency of the proposed procedure is demonstrated by a test example.

In the future, the author plans to extend the results to robust statements of the problem, in particular, to systems of the form

$$ \dot x=(A+F\Delta H)x+Dw $$

with a matrix uncertainty \( \Delta \in \mathbb R^{p\times q} \) bounded in the spectral norm \( \|\Delta \|\leqslant 1 \) and given matrices \( F \) and \( H \) of appropriate sizes.