1 Introduction

H∞ filtering, first presented in [13], aims to minimize the H∞ norm of the error of a filtering system, in order to ensure that the ℓ2-induced gain from the noise signals to the estimation error is less than a prescribed level. In contrast with Kalman filtering, H∞ filtering does not require exact knowledge of the noise signals, which renders this approach appropriate in some practical applications. A great number of results on H∞ filtering have been proposed in the literature, in both the deterministic and stochastic contexts: see, for example, [27, 29, 33] and the references therein. When uncertainties appear in a system model, robust H∞ filtering has also been investigated: see, for example, [6, 23, 36].

Currently, there is increased interest in the design of reduced-order H∞ filters, as presented in [4, 16, 30, 35], since reduced-order filters are easier to implement than full-order ones; this is an important issue when fast data processing is needed.

Note that the results discussed so far were obtained for one-dimensional (1-D) systems. However, many practical systems are better modeled as two-dimensional (2-D) systems, such as those in image data processing and transmission, thermal processes, gas absorption, and water stream heating [26]. The study of 2-D systems is of both practical and theoretical importance [20, 25, 38]. Therefore, in recent years, much attention has been devoted to the analysis and synthesis problems for 2-D systems: controllability [21, 22]; stability [17, 18]; stability and stabilization in the presence of delays [2, 15, 19, 28]; 2-D dynamic output feedback control [37]; model approximation [8]; etc. For the specific problem of 2-D H∞ filtering, several results have already been obtained: for example, for Roesser models [7]; for the Fornasini–Marchesini second model [31, 34]; and for 2-D systems with delays [7, 10–12, 34].

Motivated by the design of reduced-order H∞ filters and the goal of obtaining less conservative results, we present a new approach, the structured polynomially parameter-dependent method, for designing robust H∞ filters for uncertain 2-D continuous systems described by the Roesser model. Given a stable system with parameter uncertainties residing in a polytope, the focus is on designing a robust filter such that the filtering error system is robustly asymptotically stable and its H∞ norm is minimized over the entire uncertainty domain. It should be pointed out that not only full-order filters but also reduced-order filters are designed. Furthermore, when the reduced-order filter is restricted to be of zeroth order, the dimension constraint is removed and a simpler condition expressed by LMIs is obtained.

In this paper, the reduced-order H∞ filtering problem for uncertain 2-D continuous systems is treated using a new structure of the key slack variable matrix. The class of 2-D systems under consideration corresponds to continuous 2-D systems described by a Roesser state-space model subject to polytopic uncertainties in both the state and output matrices. A sufficient condition for the solvability of the robust H∞ filtering problem is derived in terms of a set of LMIs, based on homogeneous polynomial dependence of arbitrary degree on the uncertain parameters. As the degree increases, less conservative filter designs are obtained. It is shown that the H∞ filter result includes the quadratic framework and the linearly parameter-dependent framework as special cases, for degree zero and degree one, respectively. Two examples illustrate the feasibility of the proposed methodology.

Notation

Throughout this paper, for real symmetric matrices X and Y, the notation X≥Y (respectively, X>Y) means that the matrix X−Y is positive semi-definite (respectively, positive definite). I is the identity matrix of appropriate dimension. The superscript T represents the transpose of a matrix; \(\operatorname{diag}\{\ldots\}\) denotes a block-diagonal matrix; the Euclidean vector norm is denoted by ∥⋅∥; and the symmetric terms in a symmetric matrix are denoted by ∗. Finally, the ℓ2 norm of a 2-D signal w(t 1,t 2) is given by \(\|w(t_{1},t_{2})\|=\sqrt{\int_{0}^{\infty}\int_{0}^{\infty}w(t_{1},t_{2})^{T}w(t_{1},t_{2})\,dt_{1}\,dt_{2}}\), where w(t 1,t 2) is said to be in the space ℓ2{[0,∞),[0,∞)}, or simply ℓ2, if ∥w(t 1,t 2)∥<∞.
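As a quick numerical sanity check of this definition, the double integral can be approximated on a grid. A minimal sketch, assuming the separable test signal w(t 1,t 2)=e^{−t 1−t 2}, whose exact ℓ2 norm is 1/2:

```python
import numpy as np

# Midpoint-rule approximation of the 2-D l2 norm defined above for the
# assumed test signal w(t1, t2) = exp(-t1 - t2); the exact norm is 0.5.
dt = 0.01
t = np.arange(0.0, 10.0, dt) + dt / 2       # midpoint grid; tail beyond t = 10 is negligible
T1, T2 = np.meshgrid(t, t, indexing="ij")
w = np.exp(-T1 - T2)
norm = np.sqrt(np.sum(w**2) * dt * dt)      # midpoint rule for the double integral
print(norm)                                 # close to 0.5
```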

2 Problem Formulation

Consider an uncertain 2-D continuous system described by the following Roesser state-space model:

$$\begin{aligned} & \left [\begin{array}{c} \frac{\partial x^{h}(t_{1},t_{2})}{\partial t_{1}} \\ \frac{\partial x^{v}(t_{1},t_{2})}{\partial t_{2}} \\ \end{array} \right ]= A_{\alpha} \left [\begin{array}{c} x^{h}(t_{1},t_{2}) \\ x^{v}(t_{1},t_{2}) \\ \end{array} \right ]+B_{\alpha} \omega(t_{1},t_{2}), \end{aligned}$$
(1)
$$\begin{aligned} & y(t_{1},t_{2})=C_{1_{\alpha}} \left [\begin{array}{c} x^{h}(t_{1},t_{2}) \\ x^{v}(t_{1},t_{2}) \\ \end{array} \right ]+D_{1_{\alpha}}\omega(t_{1},t_{2}), \end{aligned}$$
(2)
$$\begin{aligned} & z(t_{1},t_{2})= C_{\alpha} \left [\begin{array}{c} x^{h}(t_{1},t_{2}) \\ x^{v}(t_{1},t_{2}) \\ \end{array} \right ]+D_{\alpha}\omega(t_{1},t_{2}), \end{aligned}$$
(3)

where \(x^{h}(t_{1},t_{2})\in \Re^{n_{h}}\) and \(x^{v}(t_{1},t_{2})\in \Re^{n_{v}}\) are the horizontal and vertical states, respectively; y(t 1,t 2)∈ℜp is the measured output; z(t 1,t 2)∈ℜr is the signal to be estimated; and w(t 1,t 2)∈ℜm is the exogenous input with bounded energy (i.e., w(t 1,t 2)∈ℓ2). The system matrices are assumed to belong to a known polyhedral domain Γ described by N vertices, that is,

$$\begin{aligned} \mathcal{P}_{\alpha}\triangleq [ A_{\alpha}, B_{\alpha}, C_{1_{\alpha}}, D_{1_{\alpha}}, C_{\alpha}, D_{\alpha} ]\in\varGamma, \end{aligned}$$
(4)

where

$$\begin{aligned} \varGamma\triangleq \Biggl\{ \mathcal{P}(\alpha)\mid\mathcal{P}(\alpha)= \sum _{m = 1}^N\alpha_{m} \mathcal{P}_{m}:\sum_{m = 1}^N \alpha_{m}=1, \alpha_{m}\geq0 \Biggr\} , \end{aligned}$$

with \(\mathcal{P}_{m}\triangleq\{A_{m}, B_{m}, C_{1_{m}}, D_{1_{m}}, C_{m}, D_{m}\}\) denoting the mth vertex of the polyhedral domain Γ. It is assumed that the parameter α is unknown (not measured online) and does not depend explicitly on the time variable (t 1,t 2).
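Evaluating the polytopic description above is a plain convex combination of the vertex matrices. A minimal sketch with N=2 hypothetical vertices (all numerical values are assumptions for illustration):

```python
import numpy as np

# A(alpha) = sum_m alpha_m A_m with alpha_m >= 0 summing to one, as in the
# definition of the polytope Gamma; the vertex matrices below are assumed.
A1 = np.array([[-1.0, 0.2], [0.1, -2.0]])
A2 = np.array([[-1.5, 0.0], [0.3, -1.0]])
alpha = np.array([0.3, 0.7])                 # a point in the unit simplex
A_alpha = alpha[0] * A1 + alpha[1] * A2
print(A_alpha)
```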

The boundary conditions are defined by

$$\begin{aligned} & x^{h}(0,t_{2}) =g(t_{2}),\quad \quad x^{v}(t_{1},0) =f(t_{1})\quad \forall (t_{1}, t_{2})\geq 0. \end{aligned}$$

Inspired by [5], we make the following assumption:

Assumption 1

The boundary conditions satisfy

$$\begin{aligned} & \big\| x^{h}(0,t_{2})\big\| <\infty,\quad\quad \lim _{t_{2}\to \infty}\big\| x^{h}(0,t_{2})\big\| = 0, \\ & \big\| x^{v}(t_{1},0) \big\| <\infty, \quad\quad \lim _{t_{1}\to \infty}\big\| x^{v}(t_{1},0)\big\| = 0. \end{aligned}$$

Similar to [5], we give the following definition:

Definition 1

The 2-D continuous system (1)–(3) with Assumption 1 is said to be asymptotically stable if

$$ \lim_{(t_{1}+t_{2})\to \infty}\big\| x^{h}(t_{1},t_{2}) \big\| = 0;\quad\quad \lim_{(t_{1}+t_{2})\to \infty}\big\| x^{v}(t_{1},t_{2}) \big\| = 0. $$

Now, we want to find a 2-D continuous linear time-invariant filter, with input y(t 1,t 2) and output z f (t 1,t 2), which is an estimate of z(t 1,t 2). Here, we consider the following state-space description for this filter:

$$\begin{aligned} & \left [\begin{array}{c} \frac{\partial x_{f}^{h}(t_{1},t_{2})}{\partial t_{1}} \\ \frac{\partial x_{f}^{v}(t_{1},t_{2})}{\partial t_{2}} \\ \end{array} \right ]=A_{f} \left [\begin{array}{c} x_{f}^{h}(t_{1},t_{2}) \\ x_{f}^{v}(t_{1},t_{2}) \\ \end{array} \right ]+B_{f}y(t_{1},t_{2}), \end{aligned}$$
(5)
$$\begin{aligned} & \quad \begin{aligned} &z_{f}(t_{1},t_{2})=C_{f} \left [\begin{array}{c} x_{f}^{h}(t_{1},t_{2}) \\ x_{f}^{v}(t_{1},t_{2}) \\ \end{array} \right ]+D_{f}y(t_{1},t_{2}), \\ & x_{f}^{h}(0,t_{2})=0,\quad\quad x_{f}^{v}(t_{1},0)=0,\quad \forall t_{1},t_{2}, \end{aligned} \end{aligned}$$
(6)

where \(x^{h}_{f}(t_{1},t_{2})\in\Re^{n_{h_{f}}}\) is the vector of the reduced-order filter horizontal states with \(1\leq n_{h_{f}}<n_{h}\), and \(x^{v}_{f}(t_{1},t_{2})\in\Re^{n_{v_{f}}}\) is the vector of vertical states with \(1\leq n_{v_{f}}<n_{v}\) (for the full-order filter, we have \(n_{h_{f}}=n_{h}\) and \(n_{v_{f}}=n_{v}\)); A f , B f , C f , and D f are constant matrices to be determined, with the first three partitioned as follows:

$$\begin{aligned} A_{f}\triangleq \left [\begin{array}{c@{\quad}c} A_{f}^{11} & A_{f}^{12} \\ A_{f}^{21} & A_{f}^{22} \end{array} \right ],\quad\quad B_{f}\triangleq \left [\begin{array}{c} B_{f}^{1} \\ B_{f}^{2} \end{array} \right ],\quad \quad C_{f} \triangleq \left [ \begin{array}{c@{\quad}c} C_{f}^{1} & C_{f}^{2} \end{array} \right ]. \end{aligned}$$
(7)

Denote

$$\begin{aligned} \begin{aligned} & \tilde{x}^{h}(t_{1},t_{2})= \bigl[x^{h}(t_{1},t_{2})^{T}\quad x^{h}_{f}(t_{1},t_{2})^{T} \bigr]^{T}, \\ & \tilde{x}^{v}(t_{1},t_{2})= \bigl[x^{v}(t_{1},t_{2})^{T}\quad x^{v}_{f}(t_{1},t_{2})^{T} \bigr]^{T}, \\ & \tilde{z}(t_{1},t_{2})=z(t_{1},t_{2})-z_{f}(t_{1},t_{2}). \end{aligned} \end{aligned}$$
(8)

Augmenting system (1)–(3) to include the states of filter (5)–(6), we obtain the following filtering error system:

$$\begin{aligned} \left [\begin{array}{c} \frac{\partial \tilde{x}^{h}(t_{1},t_{2})}{\partial t_{1}} \\ \frac{\partial \tilde{x}^{v}(t_{1},t_{2})}{\partial t_{2}} \\ \end{array} \right ] =&\tilde{A}_{\alpha} \left [\begin{array}{c} \tilde{x}^{h}(t_{1},t_{2}) \\ \tilde{x}^{v}(t_{1},t_{2}) \\ \end{array} \right ]+\tilde{B}_{\alpha}w(t_{1},t_{2}), \end{aligned}$$
(9)
$$\begin{aligned} \tilde{z}(t_{1},t_{2}) =& \tilde{C}_{\alpha} \left [\begin{array}{c} \tilde{x}^{h}(t_{1},t_{2}) \\ \tilde{x}^{v}(t_{1},t_{2}) \\ \end{array} \right ]+ \tilde{D}_{\alpha}w(t_{1},t_{2}), \end{aligned}$$
(10)

where

$$\begin{aligned} &\tilde{A}_{\alpha}=\varUpsilon \hat{A}_{\alpha} \varUpsilon^{T},\quad\quad \tilde{B}_{\alpha}=\varUpsilon \hat{B}_{\alpha},\quad\quad \tilde{C}_{\alpha}=\hat{C}_{\alpha} \varUpsilon^{T}, \quad\quad \tilde{D}_{\alpha}= \hat{D}_{\alpha}, \\ & \varUpsilon_{1}= \left [\begin{array}{c@{\quad}c} I_{n_{h}} & 0_{n_{h}\times n_{h}} \\ 0_{n_{h_{f}}\times n_{h}} & 0_{n_{h_{f}}\times n_{v}} \\ 0_{n_{v}\times n_{h}} & I_{n_{v}} \\ 0_{n_{v_{f}}\times n_{h}} & 0_{n_{v_{f}}\times n_{v}} \end{array} \right ], \\ & \varUpsilon_{2}= \left [\begin{array}{c@{\quad}c} 0_{n_{h}\times n_{h_{f}}} & 0_{n_{h}\times n_{v_{f}}} \\ I_{n_{h_{f}}} & 0_{n_{h_{f}}\times n_{v_{f}}} \\ 0_{n_{v}\times n_{h_{f}}} & 0_{n_{v}\times n_{v_{f}}} \\ 0_{n_{v_{f}}\times n_{h_{f}}} & I_{n_{v_{f}}} \\ \end{array} \right ],\quad\quad \varUpsilon=\left [\begin{array}{c@{\quad}c} \varUpsilon_{1} & \varUpsilon_{2} \end{array} \right ], \\ & \hat{A}_{\alpha}=\left [ \begin{array}{c@{\quad}c} A_{\alpha} & 0 \\ B_{f}C_{1_{\alpha}} & A_{f} \ \end{array} \right ], \quad\quad \hat{B}_{\alpha}=\left [ \begin{array}{c} B_{\alpha} \\ B_{f}D_{1_{\alpha}} \\ \end{array} \right ], \\ & \hat{C}_{\alpha}= \left [ \begin{array}{c@{\quad}c} C_{\alpha}-D_{f}C_{1_{\alpha}} & -C_{f} \end{array} \right ],\quad\quad \hat{D}_{\alpha}=D_{\alpha}-D_{f}D_{1_{\alpha}}. \end{aligned}$$
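The selection matrices ϒ1 and ϒ2 above merely reorder the augmented state, so ϒ=[ϒ1 ϒ2] is a permutation matrix. A small sketch, with assumed dimensions, that builds them and verifies ϒϒ^T=I:

```python
import numpy as np

def selection_matrices(n_h, n_hf, n_v, n_vf):
    """Build Upsilon1, Upsilon2 following the definitions above: Upsilon1
    places the plant states, Upsilon2 the filter states, in the augmented
    vector [x_h; x_hf; x_v; x_vf]."""
    Z = np.zeros
    U1 = np.block([
        [np.eye(n_h),     Z((n_h, n_v))],
        [Z((n_hf, n_h)),  Z((n_hf, n_v))],
        [Z((n_v, n_h)),   np.eye(n_v)],
        [Z((n_vf, n_h)),  Z((n_vf, n_v))],
    ])
    U2 = np.block([
        [Z((n_h, n_hf)),  Z((n_h, n_vf))],
        [np.eye(n_hf),    Z((n_hf, n_vf))],
        [Z((n_v, n_hf)),  Z((n_v, n_vf))],
        [Z((n_vf, n_hf)), np.eye(n_vf)],
    ])
    return U1, U2

# Assumed dimensions: n_h = n_v = 2, reduced-order filter with n_hf = n_vf = 1.
U1, U2 = selection_matrices(2, 1, 2, 1)
U = np.hstack([U1, U2])
# Upsilon is a permutation, so A_tilde = Upsilon A_hat Upsilon^T only reorders states.
print(np.allclose(U @ U.T, np.eye(6)))
```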

The matrix transfer function of the error system (9)–(10) is then given by

$$\begin{aligned} & \tilde{G}(s_{1},s_{2})= \tilde{C}_{\alpha} \bigl[I(s_{1},s_{2})- \tilde{A}_{\alpha} \bigr]^{-1}\tilde{B}_{\alpha}+ \tilde{D}_{\alpha}, \end{aligned}$$
(11)

where \(I(s_{1},s_{2})\triangleq \operatorname{diag}\{s_{1}I_{n_{h}+n_{h_{f}}},\ s_{2}I_{n_{v}+n_{v_{f}}}\}\), and the H∞ norm of the system is, by definition,

$$\begin{aligned} & \|\tilde{G}\|_{\infty}=\sup_{w_{1},w_{2}\in R} \sigma_{\max} \bigl[\tilde{G}(jw_{1},jw_{2}) \bigr], \end{aligned}$$
(12)

where σ max(⋅) denotes the maximum singular value.
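Definition (12) can be approximated by evaluating σ max on a frequency grid; the grid maximum is a lower bound on the true norm. A sketch for a hypothetical error system with one horizontal and one vertical state (all matrices below are assumptions):

```python
import numpy as np

# Grid approximation of (12): G(jw1, jw2) = C [diag(jw1, jw2) - A]^{-1} B + D
# for an assumed stable example with n_h = n_v = 1.
A = np.array([[-1.0, 0.2], [0.2, -1.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def sigma_max(w1, w2):
    S = np.diag([1j * w1, 1j * w2])          # I(s1, s2) evaluated on the imaginary axes
    G = C @ np.linalg.solve(S - A, B) + D
    return np.linalg.svd(G, compute_uv=False)[0]

freqs = np.linspace(-10.0, 10.0, 81)         # includes w1 = w2 = 0
hinf = max(sigma_max(w1, w2) for w1 in freqs for w2 in freqs)
print(hinf)                                  # grid lower bound on the H-infinity norm
```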

Remark 1

By using the 2-D Parseval’s theorem [25], it is not difficult to show that, under zero boundary conditions and with asymptotic stability of (9)–(10), the condition \(\|\tilde{G}\|_{\infty}<\gamma\) is equivalent to

$$ \sup_{0\neq w(t_{1},t_{2})\in \ell_{2}}\frac{\|\tilde{z}(t_{1},t_{2})\|}{\|w(t_{1},t_{2})\|}<\gamma. $$
(13)

Our aim is to design reduced-order H∞ filters of the form (5)–(6) such that:

  1.

    The filter error system (9)–(10) is asymptotically stable when w(t 1,t 2)=0.

  2.

    The filter error system (9)–(10) guarantees a prescribed H∞ performance level γ; i.e., under the zero boundary condition, \(\|\tilde{z}(t_{1},t_{2})\|<\gamma \|w(t_{1},t_{2})\|\) is satisfied for any nonzero w(t 1,t 2)∈ℓ2.

Remark 2

In the reduced-order case, we consider three particular scenarios: first, (\(n_{h_{f}}\neq0\), \(n_{v_{f}}=0\)); then, (\(n_{h_{f}}=0\), \(n_{v_{f}}\neq0\)); and finally the zeroth-order filter, (\(n_{h_{f}}=0\), \(n_{v_{f}}=0\)).

Case 1: \(n_{h_{f}}\neq 0\), \(n_{v_{f}}=0\).

In this case, the reduced-order H∞ filter in (5)–(6) is given by

$$\begin{aligned} \frac{\partial x_{f}^{h}(t_{1},t_{2})}{\partial t_{1}} =&A_{f}^{11} x_{f}^{h}(t_{1},t_{2})+B_{f}^{1}y(t_{1},t_{2}), \end{aligned}$$
(14)
$$\begin{aligned} z_{f}(t_{1},t_{2}) =&C_{f}^{1}x_{f}^{h}(t_{1},t_{2})+D_{f}y(t_{1},t_{2}). \end{aligned}$$
(15)

Augmenting system (1)–(3) to include the states of the filter (14)–(15) and using (8), we obtain the following filtering error system:

$$\begin{aligned} \left [\begin{array}{c} \frac{\partial x^{h}(t_{1},t_{2})}{\partial t_{1}} \\ \frac{\partial \tilde{x}^{v}(t_{1},t_{2})}{\partial t_{2}} \\ \end{array} \right ] =&\tilde{A}_{\alpha} \left [\begin{array}{c} x^{h}(t_{1},t_{2}) \\ \tilde{x}^{v}(t_{1},t_{2}) \\ \end{array} \right ]+\tilde{B}_{\alpha}w(t_{1},t_{2}), \end{aligned}$$
(16)
$$\begin{aligned} \tilde{z}(t_{1},t_{2}) =& \tilde{C}_{\alpha} \left [\begin{array}{c} x^{h}(t_{1},t_{2}) \\ \tilde{x}^{v}(t_{1},t_{2}) \\ \end{array} \right ]+ \tilde{D}_{\alpha}w(t_{1},t_{2}), \end{aligned}$$
(17)

where

$$\begin{aligned} & \tilde{A}_{\alpha}=\varUpsilon \hat{A}_{\alpha} \varUpsilon^{T}, \quad\quad \tilde{B}_{\alpha}=\varUpsilon \hat{B}_{\alpha},\quad\quad \tilde{C}_{\alpha}=\hat{C}_{\alpha} \varUpsilon^{T}, \quad\quad \tilde{D}_{\alpha}= \hat{D}_{\alpha}, \\ & \varUpsilon_{1}= \left [\begin{array}{c@{\quad}c} I_{n_{h}} & 0_{n_{h}\times n_{h}} \\ 0_{n_{h_{f}}\times n_{h}} & 0_{n_{h_{f}}\times n_{v}} \\ 0_{n_{v}\times n_{h}} & I_{n_{v}} \end{array} \right ],\quad\quad \varUpsilon_{2}= \left [\begin{array}{c} 0_{n_{h}\times n_{h_{f}}} \\ I_{n_{h_{f}}} \\ 0_{n_{v}\times n_{h_{f}}} \end{array} \right ],\quad\quad \varUpsilon=\left [\begin{array}{c@{\quad}c} \varUpsilon_{1} & \varUpsilon_{2} \end{array} \right ], \\ & \hat{A}_{\alpha}=\left [ \begin{array}{c@{\quad}c} A_{\alpha} & 0 \\ B_{f}^{1}C_{1_{\alpha}} & A_{f}^{11} \ \end{array} \right ], \quad\quad \hat{B}_{\alpha}=\left [ \begin{array}{c} B_{\alpha} \\ B_{f}^{1}D_{1_{\alpha}} \\ \end{array} \right ], \\ & \hat{C}_{\alpha}= \left [ \begin{array}{c@{\quad}c} C_{\alpha}-D_{f}C_{1_{\alpha}} & -C_{f}^{1} \end{array} \right ], \quad\quad \hat{D}_{\alpha}=D_{\alpha}-D_{f}D_{1_{\alpha}}. \end{aligned}$$

Case 2: \(n_{h_{f}}=0\), \(n_{v_{f}}\neq 0\).

The reduced-order H∞ filter in (5)–(6) is now

$$\begin{aligned} \frac{\partial x_{f}^{v}(t_{1},t_{2})}{\partial t_{2}} =&A_{f}^{22} x_{f}^{v}(t_{1},t_{2})+B_{f}^{2}y(t_{1},t_{2}), \end{aligned}$$
(18)
$$\begin{aligned} z_{f}(t_{1},t_{2}) =&C_{f}^{2}x_{f}^{v}(t_{1},t_{2})+D_{f}y(t_{1},t_{2}). \end{aligned}$$
(19)

Augmenting system (1)–(3) to include the states of the filter (18)–(19) and using (8), we obtain the following filtering error system:

$$\begin{aligned} \left [\begin{array}{c} \frac{\partial \tilde{x}^{h}(t_{1},t_{2})}{\partial t_{1}} \\ \frac{\partial x^{v}(t_{1},t_{2})}{\partial t_{2}} \\ \end{array} \right ] =&\tilde{A}_{\alpha} \left [\begin{array}{c} \tilde{x}^{h}(t_{1},t_{2}) \\ x^{v}(t_{1},t_{2}) \\ \end{array} \right ]+\tilde{B}_{\alpha}w(t_{1},t_{2}), \end{aligned}$$
(20)
$$\begin{aligned} \tilde{z}(t_{1},t_{2}) =& \tilde{C}_{\alpha} \left [\begin{array}{c} \tilde{x}^{h}(t_{1},t_{2}) \\ x^{v}(t_{1},t_{2}) \\ \end{array} \right ]+ \tilde{D}_{\alpha}w(t_{1},t_{2}), \end{aligned}$$
(21)

where

$$\begin{aligned} &\tilde{A}_{\alpha}=\varUpsilon \hat{A}_{\alpha} \varUpsilon^{T}, \quad\quad \tilde{B}_{\alpha}=\varUpsilon \hat{B}_{\alpha},\quad\quad \tilde{C}_{\alpha}=\hat{C}_{\alpha} \varUpsilon^{T}, \quad\quad \tilde{D}_{\alpha}= \hat{D}_{\alpha}, \\ & \varUpsilon_{1}= \left [\begin{array}{c@{\quad}c} I_{n_{h}} & 0_{n_{h}\times n_{h}} \\ 0_{n_{v}\times n_{h}} & I_{n_{v}} \\ 0_{n_{v_{f}}\times n_{h}} & 0_{n_{v_{f}}\times n_{v}} \end{array} \right ],\quad\quad \varUpsilon_{2}= \left [\begin{array}{c} 0_{n_{h}\times n_{v_{f}}} \\ 0_{n_{v}\times n_{v_{f}}} \\ I_{n_{v_{f}}} \end{array} \right ],\quad\quad \varUpsilon=\left [\begin{array}{c@{\quad}c} \varUpsilon_{1} & \varUpsilon_{2} \end{array} \right ], \\ & \hat{A}_{\alpha}=\left [ \begin{array}{c@{\quad}c} A_{\alpha} & 0 \\ B_{f}^{2}C_{1_{\alpha}} & A_{f}^{22} \ \end{array} \right ], \quad\quad \hat{B}_{\alpha}=\left [ \begin{array}{c} B_{\alpha} \\ B_{f}^{2}D_{1_{\alpha}} \\ \end{array} \right ], \\ & \hat{C}_{\alpha}= \left [ \begin{array}{c@{\quad}c} C_{\alpha}-D_{f}C_{1_{\alpha}} & -C_{f}^{2} \end{array} \right ], \quad\quad \hat{D}_{\alpha}=D_{\alpha}-D_{f}D_{1_{\alpha}}. \end{aligned}$$

Case 3: \(n_{h_{f}}= 0\), \(n_{v_{f}}=0\).

The reduced-order H∞ filter in (5)–(6) is now the following static filter:

$$\begin{aligned} & z_{f}(t_{1},t_{2})=D_{f}y(t_{1},t_{2}). \end{aligned}$$
(22)

Connecting this filter (22) to system (1)–(3), we obtain the following filtering error system:

$$\begin{aligned} \left [\begin{array}{c} \frac{\partial x^{h}(t_{1},t_{2})}{\partial t_{1}} \\ \frac{\partial x^{v}(t_{1},t_{2})}{\partial t_{2}} \\ \end{array} \right ] =&A_{\alpha} \left [\begin{array}{c} x^{h}(t_{1},t_{2}) \\ x^{v}(t_{1},t_{2}) \\ \end{array} \right ] +B_{\alpha}w(t_{1},t_{2}), \end{aligned}$$
(23)
$$\begin{aligned} \tilde{z}(t_{1},t_{2}) =& \tilde{C}_{\alpha} \left [\begin{array}{c} x^{h}(t_{1},t_{2}) \\ x^{v}(t_{1},t_{2}) \\ \end{array} \right ]+ \tilde{D}_{\alpha}w(t_{1},t_{2}), \end{aligned}$$
(24)

with

$$\begin{aligned} & \tilde{C}_{\alpha}=C_{\alpha}-D_{f}C_{1_{\alpha}}, \quad\quad \tilde{D}_{\alpha}=D_{\alpha}-D_{f}D_{1_{\alpha}}. \end{aligned}$$

3 Preliminaries

This section is devoted to some preliminary results used later.

Consider now the following 2-D continuous system:

$$\begin{aligned} \left [\begin{array}{c} \frac{\partial x^{h}(t_{1},t_{2})}{\partial t_{1}} \\ \frac{\partial x^{v}(t_{1},t_{2})}{\partial t_{2}} \\ \end{array} \right ]= A_{\alpha} \left [\begin{array}{c} x^{h}(t_{1},t_{2}) \\ x^{v}(t_{1},t_{2}) \\ \end{array} \right ] = \left [\begin{array}{c@{\quad}c} A_{11_{\alpha}} & A_{12_{\alpha}} \\ A_{21_{\alpha}} & A_{22_{\alpha}} \end{array} \right ] \left [\begin{array}{c} x^{h}(t_{1},t_{2}) \\ x^{v}(t_{1},t_{2}) \\ \end{array} \right ]. \end{aligned}$$
(25)

To test the asymptotic stability of (25), the following condition, based on properties of the characteristic polynomial, could be used:

$$ \mathcal{C}(s_{1},s_{2})\neq 0, \quad \forall (s_{1},s_{2}),\quad\quad \operatorname{Re}(s_{1}) \geq 0,\quad\quad \operatorname{Re}(s_{2})\geq 0, $$
(26)

where

$$ \mathcal{C}(s_{1},s_{2})=\operatorname{det} \left [\begin{array}{c@{\quad}c} s_{1}I_{n_{h}}-A_{11_{\alpha}} & -A_{12_{\alpha}} \\ -A_{21_{\alpha}} & s_{2}I_{n_{v}}-A_{22_{\alpha}} \end{array} \right ]. $$

However, this condition is difficult to use for filter design, so an alternative based on Lyapunov matrices is adopted here. This methodology makes it possible to derive conditions in terms of linear matrix inequalities (LMIs).

Theorem 1

[14]

The 2-D system (25) is asymptotically stable if there exists a block-diagonal positive-definite matrix \(P=\operatorname{diag}\{P_{h},P_{v}\}>0\) such that

$$ A_{\alpha}^{T}P+PA_{\alpha}<0. $$
(27)

In this case, a Lyapunov function of system (25) is defined as

$$ V(t_{1},t_{2})\triangleq V_{1}(t_{1},t_{2})+V_{2}(t_{1},t_{2}), $$
(28)

where

$$\begin{aligned} & V_{1}(t_{1},t_{2})\triangleq x^{hT}(t_{1},t_{2})P_{h}x^{h}(t_{1},t_{2}), \\ & V_{2}(t_{1},t_{2})\triangleq x^{vT}(t_{1},t_{2})P_{v}x^{v}(t_{1},t_{2}). \end{aligned}$$
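For a fixed vertex, condition (27) is a plain negative-definiteness test. A minimal numerical sketch, assuming the matrix A and the candidate P = I below:

```python
import numpy as np

# Eigenvalue check of the Lyapunov condition (27) for a hypothetical Roesser
# matrix with n_h = n_v = 1 and the assumed candidate P = diag(P_h, P_v) = I.
A = np.array([[-2.0, 0.5], [0.5, -1.0]])
P = np.eye(2)                        # block-diagonal with P_h = 1, P_v = 1
L = A.T @ P + P @ A                  # left-hand side of (27); symmetric
stable = float(np.max(np.linalg.eigvalsh(L))) < 0.0
print(stable)
```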

Definition 2

[19]

The unidirectional derivative of V(t 1,t 2) in (28) is defined to be

$$ \dot{V}_{u}(t_{1},t_{2}) \triangleq \frac{\partial V_{1}(t_{1},t_{2})}{\partial t_{1}}+\frac{\partial V_{2}(t_{1},t_{2})}{\partial t_{2}}. $$
(29)

Note that this unidirectional derivative can be seen as a particular case of the derivative of the function V(t 1,t 2) in one direction, independently of the other direction.

Lemma 1

[19]

The 2-D system (25) is asymptotically stable if its unidirectional derivative (29) is negative definite.

Proof

We now give an alternative proof based on Definition 2. Assuming that the unidirectional derivative (29) is negative definite, we have

$$ \frac{\partial V_{1}(t_{1},t_{2})}{\partial t_{1}}+\frac{\partial V_{2}(t_{1},t_{2})}{\partial t_{2}}<0, $$

which implies

$$\begin{aligned} &V_{1}(t_{1}+\Delta t_{1},t_{2}) < V_{1}(t_{1},t_{2}) \quad \mbox{ with } \big\| x^{h}(t_{1},t_{2})\big\| >0 \quad \mbox{or} \\ & V_{2}(t_{1},t_{2}+\Delta t_{2}) < V_{2}(t_{1},t_{2})\quad \mbox{ with } \big\| x^{v}(t_{1},t_{2})\big\| >0. \end{aligned}$$
(30)

Let t 1→∞ with t 2 finite and suppose, for contradiction, that ∥x h(∞,t 2)∥>0. Substituting into (30), we get V 1(∞,t 2)<V 1(∞,t 2) or, equivalently,

$$ V_{2}(\infty,t_{2}+\Delta t_{2})<V_{2}(\infty,t_{2})<V_{2}( \infty,0). $$
(31)

Since V 1(∞,t 2)<V 1(∞,t 2) is clearly impossible, and V 2 is bounded below by zero so that (31) cannot hold, the assumption ∥x h(∞,t 2)∥>0 leads to a contradiction; thus, ∥x h(∞,t 2)∥=0. Similarly, we can get ∥x v(t 1,∞)∥=0, which completes the proof. □

By using a parameter-dependent Lyapunov matrix P(α), we can obtain the following result.

Lemma 2

[38]

Given γ>0, the estimation error system (9)–(10) is asymptotically stable with \(\|\tilde{G}\|_{\infty}< \gamma \) if there exists a block-diagonal positive-definite matrix \(P_{\alpha}=\operatorname{diag}(P_{h_{\alpha}},P_{v_{\alpha}})>0\) satisfying

$$\begin{aligned} &\left [ \begin{array}{c@{\quad}c@{\quad}c} \tilde{A}_{\alpha}^{T}P_{\alpha}^{T}+P_{\alpha}\tilde{A}_{\alpha} & \star & \star \\ \tilde{B}_{\alpha}^{T}P_{\alpha} & -\gamma^2 I & \star \\ \tilde{C}_{\alpha} & \tilde{D}_{\alpha} & -I \\ \end{array} \right ]<0. \end{aligned}$$
(32)
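For fixed vertex matrices and a candidate Lyapunov matrix, checking (32) amounts to testing negative definiteness of one symmetric block matrix. A sketch with assumed data (Ã=−2I, B̃=[1 1]^T, C̃=[1 0], D̃=0, P=I, γ=2; all values are illustrative assumptions, not a filter design):

```python
import numpy as np

# Assemble the bounded-real matrix of (32) and test M < 0 by its eigenvalues.
At = -2.0 * np.eye(2)
Bt = np.array([[1.0], [1.0]])
Ct = np.array([[1.0, 0.0]])
Dt = np.array([[0.0]])
P = np.eye(2)                                    # candidate diag(P_h, P_v)
gamma = 2.0

M = np.block([
    [At.T @ P + P @ At, P @ Bt,                Ct.T],
    [Bt.T @ P,          -gamma**2 * np.eye(1), Dt.T],
    [Ct,                Dt,                    -np.eye(1)],
])
feasible = float(np.max(np.linalg.eigvalsh(M))) < 0.0
print(feasible)
```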

Lemma 3

Let ξ∈ℜn, \(Q=Q^{T}\in\Re^{n\times n}\), and \(\mathcal{B}\in\Re^{m\times n}\) with \(\operatorname{rank}\mathcal{B}<n\), and let \(\mathcal{B}^{\perp}\) be such that \(\mathcal{B}\mathcal{B}^{\perp}=0\). Then, the following conditions are equivalent:

  1.

    \(\xi^{T}Q\xi<0 \ \forall \xi\neq0 : \mathcal{B}\xi=0\).

  2.

    \(\mathcal{B}^{\perp T}Q\mathcal{B}^{\perp}<0\).

  3.

    \(\exists \mu\in\Re: Q-\mu \mathcal{B}^{T}\mathcal{B}<0\).

  4.

    \(\exists \chi \in\Re^{n\times m} : Q+\chi\mathcal{B}+\mathcal{B}^{T}\chi^{T}<0\).
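The equivalences in this lemma (a form of Finsler's lemma) can be illustrated numerically: conditions (2) and (3) hold or fail together. A sketch on an assumed rank-one example:

```python
import numpy as np

# Assumed example: Q indefinite, B of rank 1, B_perp spanning the null space of B.
Q = np.array([[-1.0, 2.0], [2.0, -1.0]])
B = np.array([[1.0, 1.0]])
B_perp = np.array([[1.0], [-1.0]])                 # B @ B_perp = 0

cond2 = (B_perp.T @ Q @ B_perp)[0, 0] < 0          # condition (2)
mu = 1.0                                           # condition (3) with the choice mu = 1
cond3 = float(np.max(np.linalg.eigvalsh(Q - mu * B.T @ B))) < 0
print(cond2, cond3)
```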

4 Main Results

In this section, an LMI approach will be developed to solve the robust H∞ filtering problem formulated in the previous section. First, we propose the following results, derived from those in [32] and [38].

Theorem 2

Given γ>0, the filter error system (9)–(10) is asymptotically stable with \(\|\tilde{G}\|_{\infty}< \gamma \) if there exist \(P_{\alpha}=\operatorname{diag}(P_{h_{\alpha}},P_{v_{\alpha}})>0\) with \(P_{h_{\alpha}}\in R^{(n_{h}+n_{h_{f}})\times(n_{h}+n_{h_{f}})}\) and \(P_{v_{\alpha}}\in R^{(n_{v}+n_{v_{f}})\times(n_{v}+n_{v_{f}})}\) and matrices \(E_{\alpha}\in R^{(n+n_{f})\times (n+n_{f})}\), \(F_{\alpha}\in R^{r\times (n+n_{f})}\), \(K_{\alpha}\in R^{(n+n_{f})\times (n+n_{f})}\), and \(Q_{\alpha}\in R^{m\times (n+n_{f})}\) satisfying

$$\begin{aligned} & \left [\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} K_{\alpha}\tilde{A}_{\alpha}+\tilde{A}_{\alpha}^{T}K_{\alpha}^{T} & \star & \star & \star \\ P_{\alpha}+ E_{\alpha}\tilde{A}_{\alpha}-K_{\alpha}^{T} & -E_{\alpha}-E_{\alpha}^{T} & \star & \star \\ \tilde{B}_{\alpha}^{T}K_{\alpha}^{T} + Q_{\alpha}\tilde{A}_{\alpha} & \tilde{B}_{\alpha}^{T}E_{\alpha}^{T}-Q_{\alpha} & Q_{\alpha}\tilde{B}_{\alpha}+\tilde{B}_{\alpha}^{T}Q_{\alpha}^{T}-\gamma^2 I & \star \\ F_{\alpha}\tilde{A}_{\alpha}+\tilde{C}_{\alpha} & -F_{\alpha} & F_{\alpha}\tilde{B}_{\alpha}+\tilde{D}_{\alpha} & -I \end{array} \right ]<0. \end{aligned}$$
(33)

Proof

The equivalence is obtained by considering

$$\begin{aligned} \begin{array}{l} \chi=\left [\begin{array}{c} K_{\alpha} \\ E_{\alpha} \\ Q_{\alpha} \\ F_{\alpha} \end{array} \right ],\quad\quad \mathcal{B}^{T}= \left [\begin{array}{c} \tilde{A}_{\alpha}^{T} \\ -I_{n+n_{f}} \\ \tilde{B}_{\alpha}^{T} \\ 0_{r\times (n+n_{f})} \end{array} \right ], \\ \mathcal{Q}=\left [\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 0_{(n+n_{f})\times (n+n_{f})} & \star & \star & \star \\ P_{\alpha} & 0_{(n+n_{f})\times (n+n_{f})} & \star & \star \\ 0_{m\times (n+n_{f})} & 0_{m\times (n+n_{f})} & -\gamma^2 I_{m} & \star \\ \tilde{C}_{\alpha} & 0_{r\times (n+n_{f})} & \tilde{D}_{\alpha} & -I_{r} \end{array} \right ], \end{array} \end{aligned}$$

under condition (4) of Lemma 3, with

$$\begin{aligned} & \mathcal{B}^{\bot ^{T}}=\left [\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} I_{n+n_{f}} & \tilde{A}_{\alpha}^{T} & 0_{(n+n_{f})\times m} & 0_{(n+n_{f})\times r} \\ 0_{m\times (n+n_{f})} & \tilde{B}_{\alpha}^{T} & I_{m} & 0_{m\times r} \\ 0_{r\times (n+n_{f})} & 0_{r\times (n+n_{f})} & 0_{r\times m} & I_{r} \end{array} \right ], \end{aligned}$$

which, using condition (2) of Lemma 3, gives (32). □

The extra variable matrices F α and Q α provide additional degrees of freedom for the solution of the robust H∞ filtering problems presented below. Note that when F α =0 and Q α =0, LMI (33) reduces to LMI (34). From Theorem 2 we have the following corollary.

Corollary 1

Given γ>0, the filter error system (9)–(10) is asymptotically stable with \(\|\tilde{G}\|_{\infty}< \gamma \) if there exist \(P_{\alpha}=\operatorname{diag}(P_{h_{\alpha}},P_{v_{\alpha}})>0\) with \(P_{h_{\alpha}}\in R^{(n_{h}+n_{h_{f}})\times(n_{h}+n_{h_{f}})}\) and \(P_{v_{\alpha}}\in R^{(n_{v}+n_{v_{f}})\times(n_{v}+n_{v_{f}})}\) and matrices \(E_{\alpha}\in R^{(n+n_{f})\times (n+n_{f})}\) and \(K_{\alpha}\in R^{(n+n_{f})\times (n+n_{f})}\) satisfying

$$ \left [\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} K_{\alpha}\tilde{A}_{\alpha}+\tilde{A}_{\alpha}^{T}K_{\alpha}^{T} & \star &\star & \star \\ P_{\alpha}+ E_{\alpha}\tilde{A}_{\alpha}-K_{\alpha}^{T} & -E_{\alpha}-E_{\alpha}^{T} &\star & \star \\ \tilde{B}_{\alpha}^{T}K_{\alpha}^{T} & \tilde{B}_{\alpha}^{T}E_{\alpha}^{T} &-\gamma^2 I & \star \\ \tilde{C}_{\alpha} & 0 &\tilde{D}_{\alpha} & -I \end{array} \right ]<0. $$
(34)

Proof

The proof can be easily extended from that for 1-D systems in [9]. □

Remark 3

E α , F α , K α , and Q α act as slack variables that provide extra degrees of freedom in the solution space of the robust H∞ filtering problem. Thanks to these matrices, we obtain an LMI in which the Lyapunov matrix P α is not involved in any product with the system matrices. This enables us to derive a robust H∞ filtering condition that is less conservative than previous results (see the numerical examples at the end of the paper).

In the sequel, based on Theorem 2, we first design full-order parameter-independent H∞ filters of the form (5)–(6). The results are then extended to reduced-order filters, providing the main results of the paper.

4.1 Full-Order H∞ Filter Design

The following result provides sufficient conditions for the existence of a full-order H∞ filter (\(n_{h_{f}}=n_{h}\), \(n_{v_{f}}=n_{v}\)) of the form (5)–(6) such that the filtering error system (9)–(10) satisfies (13).

Theorem 3

Consider system (1)–(3) and let γ>0 be a given constant and \(\lambda_{1}\), \(\lambda_{2}\) be given scalars. Then the estimation error system (9)–(10) is asymptotically stable with \(\|\tilde{G}\|_{\infty} < \gamma\) if there exist \(\bar{P}_{\alpha}\triangleq \operatorname{diag}\{\bar{P}_{h\alpha},\bar{P}_{v\alpha}\} >0\) and matrices \(N_{\alpha}\triangleq \operatorname{diag}\{N_{h\alpha},N_{v\alpha}\}\), \(T_{\alpha}\triangleq \operatorname{diag}\{T_{h\alpha},T_{v\alpha}\}\), \(E_{1\alpha}\triangleq \operatorname{diag}\{E_{1_{h\alpha}},E_{1_{v\alpha}}\}\), \(K_{1\alpha}\triangleq \operatorname{diag}\{K_{1_{h\alpha}},K_{1_{v\alpha}}\}\), \(F_{1_{\alpha}}\), \(G_{1_{\alpha}}\), \(Q_{1_{\alpha}}\), \(X\triangleq \operatorname{diag}\{X_{h},X_{v}\}\), S a , S b , S c , and S d such that

$$ \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} M_{11_{\alpha}}+M_{11_{\alpha}}^{T} & \star & \star & \star \\ M_{21_{\alpha}} & M_{22_{\alpha}}+M_{22_{\alpha}}^{T} &\star & \star \\ M_{31_{\alpha}} & M_{32_{\alpha}} & M_{33_{\alpha}} & \star \\ (F_{1_{\alpha}}A_{\alpha}+C_{\alpha}-S_{d}C_{1_{\alpha}})\varUpsilon_{1}^{T}-S_{c}\varUpsilon_{2}^{T} & -F_{1_{\alpha}}\varUpsilon_{1}^{T} & M_{43_{\alpha}} & -I \\ \end{array} \right ]<0, $$
(35)

where

$$\begin{aligned} M_{11_{\alpha}} =& \varUpsilon_{1}(K_{1_{\alpha}}A_{\alpha}+S_{b}C_{1_{\alpha}}) \varUpsilon_{1}^{T}+(\varUpsilon_{1}+ \varUpsilon_{2})S_{a}\varUpsilon_{2}^{T} +\varUpsilon_{2}(N_{\alpha}A_{\alpha}+S_{b}C_{1_{\alpha}}) \varUpsilon_{1}^{T}, \\ M_{21_{\alpha}} =& \bar{P}_{\alpha}+\varUpsilon_{1} \bigl(E_{1_{\alpha}}A_{\alpha}+\lambda_{1}S_{b}C_{1_{\alpha}}-K_{1_{\alpha}}^{T} \bigr)\varUpsilon_{1}^{T} +\varUpsilon_{1}\bigl( \lambda_{1}S_{a}-N_{\alpha}^{T}\bigr) \varUpsilon_{2}^{T} \\ &{}+ \varUpsilon_{2}\bigl(T_{\alpha}A_{\alpha}+ \lambda_{1}S_{b}C_{1_{\alpha}}-X^{T}\bigr) \varUpsilon_{1}^{T} +\varUpsilon_{2}\bigl( \lambda_{1}S_{a}-X^{T}\bigr)\varUpsilon_{2}^{T}, \\ M_{22_{\alpha}} = & -\varUpsilon_{1}E_{1_{\alpha}} \varUpsilon_{1}^{T}-\varUpsilon_{2}T_{\alpha} \varUpsilon_{1}^{T}-\lambda_{1} \varUpsilon_{1}X\varUpsilon_{2}^{T} - \lambda_{2}\varUpsilon_{2}X\varUpsilon_{1}^{T}, \\ M_{31_{\alpha}} = & \bigl(B_{\alpha}^{T}K_{1_{\alpha}}^{T}+D_{1_{\alpha}}^{T}S_{b}^{T}+Q_{1_{\alpha}}A_{\alpha} \bigr)\varUpsilon_{1}^{T} +\bigl(B_{\alpha}^{T}N_{\alpha}^{T}+D_{1_{\alpha}}^{T}S_{b}^{T} \bigr)\varUpsilon_{2}^{T}, \\ M_{32_{\alpha}} = & \bigl(B_{\alpha}^{T}E_{1_{\alpha}}^{T}+ \lambda_{1}D_{1_{\alpha}}^{T}S_{b}^{T}- Q_{1_{\alpha}}\bigr)\varUpsilon_{1}^{T} + \bigl(B_{\alpha}^{T}T_{\alpha}^{T}+ \lambda_{2}D_{1_{\alpha}}^{T}S_{b}^{T} \bigr)\varUpsilon_{2}^{T}, \\ M_{33_{\alpha}} = & Q_{1_{\alpha}}B_{\alpha}+B_{\alpha}^{T}Q_{1_{\alpha}}^{T}- \gamma^2I, \quad\quad M_{43_{\alpha}}= F_{1_{\alpha}}B_{\alpha}+D_{\alpha}-S_{d}D_{1_{\alpha}}. \end{aligned}$$

In this case, the desired 2-D continuous filter in the form of (5)(6) can be selected with the following parameters:

$$ \left [\begin{array}{c@{\quad}c} A_{f} & B_{f} \\ C_{f} & D_{f} \end{array} \right ]=\left [\begin{array}{c@{\quad}c} X^{-1} & 0 \\ 0 & I \end{array} \right ] \left [\begin{array}{c@{\quad}c} S_{a} & S_{b} \\ S_{c} & S_{d} \end{array} \right ]. $$
(36)
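Once the LMI variables are found, (36) recovers the filter by one block multiplication. A minimal sketch, where all numerical values for X and S a, S b, S c, S d are assumptions for illustration:

```python
import numpy as np

# Recover [[A_f, B_f], [C_f, D_f]] = diag(X^{-1}, I) [[S_a, S_b], [S_c, S_d]]
# as in (36); the variable values below are assumed, not from a solved LMI.
n_f, r = 2, 1                                   # filter order, output dimension
X  = np.array([[2.0, 0.0], [0.0, 4.0]])         # diag{X_h, X_v}, nonsingular
Sa = np.array([[-2.0, 1.0], [0.0, -4.0]])
Sb = np.array([[2.0], [4.0]])
Sc = np.array([[1.0, 1.0]])
Sd = np.array([[0.5]])

T = np.block([[np.linalg.inv(X), np.zeros((n_f, r))],
              [np.zeros((r, n_f)), np.eye(r)]])
S = np.block([[Sa, Sb], [Sc, Sd]])
F = T @ S                                       # = [[A_f, B_f], [C_f, D_f]]
A_f, B_f = F[:n_f, :n_f], F[:n_f, n_f:]
C_f, D_f = F[n_f:, :n_f], F[n_f:, n_f:]
print(A_f)
```

Note that C_f and D_f are left untouched by the transformation, consistent with the identity block in (36).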

Proof

Let P α , E α , F α , K α , and Q α have the following structures:

$$\begin{aligned} & P_{\alpha}= \operatorname{diag} \left \{\left [\begin{array}{c@{\quad}c} P_{1h_{\alpha}} & P_{2h_{\alpha}} \\ P_{2h_{\alpha}}^{T} & P_{3h_{\alpha}} \end{array} \right ],\left [\begin{array}{c@{\quad}c} P_{1v_{\alpha}} & P_{2v_{\alpha}} \\ P_{2v_{\alpha}}^{T} & P_{3v_{\alpha}} \end{array} \right ] \right \}, \quad\quad Q_{\alpha}= \left [\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} Q_{1h} & 0 & Q_{1v}& 0 \end{array} \right ], \\ & E_{\alpha}= \operatorname{diag} \left \{\left [\begin{array}{c@{\quad}c} E_{1h_{\alpha}} & \lambda_{1}K_{4h} \\ E_{2h_{\alpha}} & \lambda_{2}K_{3h} \end{array} \right ],\left [\begin{array}{c@{\quad}c} E_{1v_{\alpha}} & \lambda_{1}K_{4v} \\ E_{2v_{\alpha}} & \lambda_{2}K_{3v} \end{array} \right ] \right \}, \quad\quad F_{\alpha}= \left [\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} F_{1h} & 0 & F_{1v}& 0 \end{array} \right ], \\ & K_{\alpha}= \operatorname{diag} \left \{\left [\begin{array}{c@{\quad}c} K_{1h_{\alpha}} & K_{4h} \\ K_{2h_{\alpha}} & K_{3h} \end{array} \right ],\left [\begin{array}{c@{\quad}c} K_{1v_{\alpha}} & K_{4v} \\ K_{2v_{\alpha}} & K_{3v} \end{array} \right ] \right \}. \end{aligned}$$

Without loss of generality, we suppose that K 3h , K 4h , K 3v , and K 4v are nonsingular. Introducing the transformation matrix

$$\begin{aligned} & \varPhi=\operatorname{diag} \bigl\{ I_{h},K_{4h}K_{3h}^{-1},I_{v},K_{4v}K_{3v}^{-1} \bigr\} \end{aligned}$$

and pre- and post-multiplying (33) by \(\operatorname{diag} \{\varPhi,\varPhi,I,I \}\) and its transpose, respectively, we get

$$\begin{aligned} &\left [\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} \varPhi(K\tilde{A}_{\alpha}+\tilde{A}_{\alpha}^{T}K^{T})\varPhi^{T} & \star & \star & \star \\ \varPhi(P_{\alpha}+ E_{\alpha}\tilde{A}_{\alpha}-K_{\alpha}^{T})\varPhi^{T} & -\varPhi(E_{\alpha}+E_{\alpha}^{T})\varPhi^{T} & \star & \star \\ \tilde{B}_{\alpha}^{T}K_{\alpha}^{T}\varPhi^{T} + Q_{\alpha}\tilde{A}_{\alpha}\varPhi^{T} & \tilde{B}_{\alpha}^{T}E_{\alpha}^{T}\varPhi^{T}-Q_{\alpha}\varPhi^{T} & Q_{\alpha}\tilde{B}_{\alpha}+\tilde{B}_{\alpha}^{T}Q_{\alpha}^{T}-\gamma^2 I & \star \\ F_{\alpha}\tilde{A}_{\alpha}\varPhi^{T}+\tilde{C}_{\alpha}\varPhi^{T} & -F_{\alpha}\varPhi^{T}& F_{\alpha}\tilde{B}_{\alpha}+\tilde{D}_{\alpha} & -I \end{array} \right ] \\ &\quad<0. \end{aligned}$$
(37)

Defining

$$\begin{aligned} & \bar{P}_{\alpha}= \varPhi P_{\alpha}\varPhi^{T} = \operatorname{diag} \left \{\left [\begin{array}{c@{\quad}c} \bar{P}_{1h_{\alpha}} & \bar{P}_{2h_{\alpha}} \\ \bar{P}_{2h_{\alpha}}^{T} & \bar{P}_{3h_{\alpha}} \end{array} \right ],\left [\begin{array}{c@{\quad}c} \bar{P}_{1v_{\alpha}} & \bar{P}_{2v_{\alpha}} \\ \bar{P}_{2v_{\alpha}}^{T} & \bar{P}_{3v_{\alpha}} \end{array} \right ] \right \}, \\ & X = \operatorname{diag} \bigl\{ K_{4h}K_{3h}^{-1}K_{4h}^{T},K_{4v}K_{3v}^{-1}K_{4v}^{T} \bigr\} , \\ & N_{\alpha} = \operatorname{diag} \bigl\{ K_{4h}K_{3h}^{-1}K_{2h_{\alpha}},K_{4v}K_{3v}^{-1}K_{2v_{\alpha}} \bigr\} , \\ & T_{\alpha} = \operatorname{diag} \bigl\{ K_{4h}K_{3h}^{-1}E_{2h_{\alpha}},K_{4v}K_{3v}^{-1}E_{2v_{\alpha}} \bigr\} , \\ & K_{1\alpha} = \operatorname{diag} \{K_{1h_{\alpha}},K_{1v_{\alpha}} \}, \quad\quad K_{4\alpha} = \operatorname{diag} \{K_{4h_{\alpha}},K_{4v_{\alpha}} \}, \\ & S_{a} = \left [\begin{array}{c@{\quad}c} S_{a_{1h}} & S_{a_{1v}} \\ S_{a_{2h}} & S_{a_{2v}} \end{array} \right ] =\left [\begin{array}{lr} K_{4h}A_{f_{1h}}K_{3h}^{-1}K_{4h}^{T} & K_{4h}A_{f_{1v}}K_{3v}^{-1}K_{4v}^{T} \\ K_{4v}A_{f_{2h}}K_{3h}^{-1}K_{4h}^{T} & K_{4v}A_{f_{2v}}K_{3v}^{-1}K_{4v}^{T} \end{array} \right ], \\ & S_{b} = \left [\begin{array}{c} S_{b_{h}} \\ S_{b_{v}} \end{array} \right ]=\left [\begin{array}{lr} K_{4h} & 0 \\ 0 & K_{4v} \end{array} \right ] \left [\begin{array}{c} B_{f_{h}} \\ B_{f_{v}} \end{array} \right ],\quad\quad S_{d} = D_{f}, \\ & S_{c} = \left [\begin{array}{c@{\quad}c} S_{c_{h}} & S_{c_{v}} \end{array} \right ] =\left [\begin{array}{c@{\quad}c} C_{f_{h}} & C_{f_{v}} \end{array} \right ] \left [\begin{array}{lr} K_{3h}^{-1}K_{4h} & 0 \\ 0 & K_{3v}^{-1}K_{4v} \end{array} \right ], \\ & \left [\begin{array}{c@{\quad}c} S_{a} & S_{b} \\ S_{c} & S_{d} \end{array} \right ] = \left [\begin{array}{c@{\quad}c@{\quad}c} K_{4h} & 0 & 0\\ 0 & K_{4v} & 0\\ 0 & 0 & I \end{array} \right ] \left [\begin{array}{c@{\quad}c@{\quad}c} A_{f_{1h}} & 
A_{f_{1v}} & B_{f_{h}}\\ A_{f_{2h}} & A_{f_{2v}} & B_{f_{v}}\\ C_{f_{h}} & C_{f_{v}} & D_{f} \end{array} \right ]\left [\begin{array}{c@{\quad}c@{\quad}c} K_{3h}^{-1}K_{4h}^{T} & 0 & 0\\ 0 & K_{3v}^{-1}K_{4v}^{T} & 0\\ 0 & 0 & I \end{array} \right ] \end{aligned}$$
(38)

(see (38)), we know that the transfer function of the filter (5)–(6) from \(y(t_{1},t_{2})\) to \(z_{f}(t_{1},t_{2})\) is \(G_{z_{f}y}(s_{1},s_{2})=C_{f}[\operatorname{diag} \{s_{1}I_{nh},s_{2}I_{nv} \}-A_{f}]^{-1}B_{f}+D_{f}\). Substituting (38) into this transfer function and considering \(X_{h}=K_{4h}K_{3h}^{-1}K_{4h}^{T}\) and \(X_{v}=K_{4v}K_{3v}^{-1}K_{4v}^{T}\), we get

$$ G_{z_{f}y}(s_{1},s_{2})= S_{c} \bigl[\operatorname{diag} \{s_{1}I_{nh},s_{2}I_{nv} \}-X^{-1}S_{a} \bigr]^{-1} X^{-1}S_{b}+S_{d}. $$

Therefore, the filter can be given by (36), and the proof is completed. □
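As a numeric sanity check of the transfer-function formula used in the proof, the sketch below evaluates \(G(s_{1},s_{2})=C_{f}[\operatorname{diag}\{s_{1}I_{nh},s_{2}I_{nv}\}-A_{f}]^{-1}B_{f}+D_{f}\) at one frequency point. The filter matrices here are small hypothetical placeholders, not values produced by the theorem:

```python
import numpy as np

def tf_2d(Af, Bf, Cf, Df, s1, s2, nh, nv):
    # G(s1, s2) = Cf [diag(s1*I_nh, s2*I_nv) - Af]^{-1} Bf + Df
    Lam = np.diag(np.concatenate([s1 * np.ones(nh), s2 * np.ones(nv)]))
    return Cf @ np.linalg.solve(Lam - Af, Bf) + Df

# hypothetical filter data: nh = 2 horizontal and nv = 1 vertical filter states
nh, nv = 2, 1
Af = np.array([[-1.0, 0.2, 0.0],
               [0.1, -2.0, 0.0],
               [0.0,  0.0, -3.0]])
Bf = np.array([[1.0], [0.0], [1.0]])
Cf = np.array([[1.0, 1.0, 0.5]])
Df = np.array([[0.1]])

G = tf_2d(Af, Bf, Cf, Df, 1j, 2j, nh, nv)
```

Using `np.linalg.solve` avoids forming the resolvent inverse explicitly; the same evaluation applies verbatim to the error-system transfer function.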

Remark 4

Observe that, for given \(\lambda_{1}\) and \(\lambda_{2}\), (35) is convex and can be solved using standard LMI tools. Optimal values of \(\lambda_{1}\) and \(\lambda_{2}\) can then be found, for example, by using the Matlab command fminsearch.
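The two-level scheme suggested in this remark (an outer derivative-free search over \((\lambda_{1},\lambda_{2})\) wrapped around inner LMI solves) can be sketched as follows. The objective `gamma_opt` below is a smooth surrogate substituted for the actual LMI problem so that the script stays self-contained; in practice it would call an SDP solver on (35) and return the minimal feasible γ:

```python
# Hypothetical stand-in for the inner problem: for fixed (lambda1, lambda2)
# it would return the minimal gamma for which the LMI (35) is feasible.
# A smooth surrogate with minimum gamma = 1 at (0.3, 0.7) keeps this runnable.
def gamma_opt(lam1, lam2):
    return (lam1 - 0.3) ** 2 + (lam2 - 0.7) ** 2 + 1.0

def fminsearch_like(f, x0, step=0.5, tol=1e-6, iters=500):
    """Crude derivative-free coordinate search playing the role of fminsearch."""
    x, fx = list(x0), f(*x0)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(*y)
                if fy < fx:          # accept only strict improvements
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5              # refine the search scale
            if step < tol:
                break
    return x, fx

(lam1, lam2), gamma = fminsearch_like(gamma_opt, [0.0, 0.0])
```

Matlab's fminsearch implements a Nelder–Mead simplex rather than this coordinate search, but any derivative-free local method fits the same role in the outer loop.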

Similarly to Theorem 3, from Corollary 1 we have the following:

Corollary 2

Consider system (1)–(3) and let γ>0 be a given constant. Then the estimation error system (9)–(10) is asymptotically stable with \(\|\tilde{G}\|_{\infty} < \gamma\) if there exist \(\bar{P}_{\alpha}\triangleq \operatorname{diag}\{\bar{P}_{h\alpha},\bar{P}_{v\alpha}\} >0\) and matrices \(N_{\alpha}\triangleq \operatorname{diag}\{N_{h\alpha},N_{v\alpha}\}\), \(T_{\alpha}\triangleq \operatorname{diag}\{T_{h\alpha},T_{v\alpha}\}\), \(E_{1\alpha}\triangleq \operatorname{diag}\{E_{1_{h\alpha}},E_{1_{v\alpha}}\}\), \(K_{1\alpha}\triangleq \operatorname{diag}\{K_{1_{h\alpha}},K_{1_{v\alpha}}\}\), \(F_{1_{\alpha}}\), \(G_{1_{\alpha}}\), \(X\triangleq \operatorname{diag}\{X_{h},X_{v}\}\), \(S_{a}\), \(S_{b}\), \(S_{c}\), and \(S_{d}\) such that

$$ \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} M_{11_{\alpha}}+M_{11_{\alpha}}^{T} & \star & \star & \star \\ M_{21_{\alpha}} & M_{22_{\alpha}}+M_{22_{\alpha}}^{T} &\star & \star \\ M_{31_{\alpha}} & M_{32_{\alpha}} & -\gamma^2I & \star \\ (C_{\alpha}-S_{d}C_{1_{\alpha}})\varUpsilon_{1}^{T}-S_{c}\varUpsilon_{2}^{T} & -F_{1_{\alpha}}\varUpsilon_{1}^{T} & D_{\alpha}-S_{d}D_{1_{\alpha}} & -I \\ \end{array} \right ]<0, $$
(39)

where

$$\begin{aligned} M_{11_{\alpha}} = & \varUpsilon_{1}(K_{1_{\alpha}}A_{\alpha}+S_{b}C_{1_{\alpha}}) \varUpsilon_{1}^{T}+(\varUpsilon_{1}+ \varUpsilon_{2})S_{a}\varUpsilon_{2}^{T} +\varUpsilon_{2}(N_{\alpha}A_{\alpha}+S_{b}C_{1_{\alpha}}) \varUpsilon_{1}^{T}, \\ M_{21_{\alpha}} = & \bar{P}_{\alpha}+\varUpsilon_{1} \bigl(E_{1_{\alpha}}A_{\alpha}+\lambda_{1}S_{b}C_{1_{\alpha}}-K_{1_{\alpha}}^{T} \bigr)\varUpsilon_{1}^{T} +\varUpsilon_{1}\bigl( \lambda_{1}S_{a}-N_{\alpha}^{T}\bigr) \varUpsilon_{2}^{T} \\ &{} + \varUpsilon_{2}\bigl(T_{\alpha}A_{\alpha}+ \lambda_{1}S_{b}C_{1_{\alpha}}-X^{T}\bigr) \varUpsilon_{1}^{T} +\varUpsilon_{2}\bigl( \lambda_{1}S_{a}-X^{T}\bigr)\varUpsilon_{2}^{T}, \\ M_{22_{\alpha}} = & -\varUpsilon_{1}E_{1_{\alpha}} \varUpsilon_{1}^{T}-\varUpsilon_{2}T_{\alpha} \varUpsilon_{1}^{T}-\lambda_{1} \varUpsilon_{1}X\varUpsilon_{2}^{T} - \lambda_{2}\varUpsilon_{2}X\varUpsilon_{1}^{T}, \\ M_{31_{\alpha}} = & \bigl(B_{\alpha}^{T}K_{1_{\alpha}}^{T}+D_{1_{\alpha}}^{T}S_{b}^{T} \bigr)\varUpsilon_{1}^{T} +\bigl(B_{\alpha}^{T}N_{\alpha}^{T}+D_{1_{\alpha}}^{T}S_{b}^{T} \bigr)\varUpsilon_{2}^{T}, \\ M_{32_{\alpha}} = & \bigl(B_{\alpha}^{T}E_{1_{\alpha}}^{T}+ \lambda_{1}D_{1_{\alpha}}^{T}S_{b}^{T} \bigr)\varUpsilon_{1}^{T} +\bigl(B_{\alpha}^{T}T_{\alpha}^{T}+ \lambda_{2}D_{1_{\alpha}}^{T}S_{b}^{T} \bigr)\varUpsilon_{2}^{T}. \end{aligned}$$

4.2 Reduced-Order H Filter Design

In this subsection, we provide a solution of the H reduced-order filtering problem in terms of LMIs.

First, it must be pointed out that for the reduced-order case \(1\leq n_{h_{f}}< n_{h}\), \(1\leq n_{v_{f}}<n_{v}\), the LMI (35) is no longer applicable because the matrices \(K_{4h}\) and \(K_{4v}\) are rectangular, of dimensions \(n_{h_{f}}\times n_{h}\) and \(n_{v_{f}}\times n_{v}\), respectively. We overcome this difficulty by proposing a special structure for the matrices:

$$ V_{h}=\left [\begin{array}{c} I_{n_{h_{f}}\times n_{h_{f}}} \\ 0_{n_{h}-n_{h_{f}}\times n_{h_{f}}} \end{array} \right ],\quad\quad V_{v}= \left [\begin{array}{c} I_{n_{v_{f}}\times n_{v_{f}}} \\ 0_{n_{v}-n_{v_{f}}\times n_{v_{f}}} \end{array} \right ]. $$

Then, replacing the matrices \(K_{4h}\) and \(K_{4v}\) by \(V_{h}K_{4h}\) and \(V_{v}K_{4v}\), respectively, makes it possible to derive the corresponding result, presented next:
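The special structure of \(V_{h}\) and \(V_{v}\) and the shape bookkeeping behind the replacement can be sketched numerically; the dimensions and the entries of \(K_{4h}\) below are hypothetical:

```python
import numpy as np

def truncation(n, nf):
    # V = [I_{nf}; 0_{(n-nf) x nf}], the special structure proposed above
    return np.vstack([np.eye(nf), np.zeros((n - nf, nf))])

# hypothetical dimensions: nh = 4 plant states, nhf = 2 filter states
nh, nhf = 4, 2
Vh = truncation(nh, nhf)                                # nh x nhf
K4h = np.arange(1.0, 1.0 + nhf * nhf).reshape(nhf, nhf) # placeholder square block
VK = Vh @ K4h                                           # nh x nhf: [K4h; 0]
```

The product \(V_{h}K_{4h}\) simply pads \(K_{4h}\) with zero rows, so it fits the slot that a full-order \(n_{h}\times n_{h}\) block structure would otherwise reserve.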

Theorem 4

Define , and \(V\triangleq \operatorname{diag}\{V_{h},V_{v}\}\). Consider system (1)–(3), and let γ>0 be a given constant. Then, there exists a reduced-order H filter in the form of (5)–(6) such that the estimation error system (9)–(10) is asymptotically stable with \(\|\tilde{G}\|_{\infty} < \gamma\) if there exist \(\bar{P}_{\alpha}\triangleq \operatorname{diag}\{\bar{P}_{h\alpha},\bar{P}_{v\alpha}\} >0\) and matrices \(N_{\alpha}\triangleq \operatorname{diag}\{N_{h\alpha},N_{v\alpha}\}\), \(T_{\alpha}\triangleq \operatorname{diag}\{T_{h\alpha},T_{v\alpha}\}\), \(E_{1\alpha}\triangleq \operatorname{diag}\{E_{1_{h\alpha}},E_{1_{v\alpha}}\}\), \(K_{1\alpha}\triangleq \operatorname{diag}\{K_{1_{h\alpha}},K_{1_{v\alpha}}\}\), \(F_{1_{\alpha}}\), \(G_{1_{\alpha}}\), \(Q_{1_{\alpha}}\), \(X\triangleq \operatorname{diag}\{X_{h},X_{v}\}\), \(S_{a}\), \(S_{b}\), \(S_{c}\), and \(S_{d}\) such that

$$ \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} M_{11_{\alpha}}+M_{11_{\alpha}}^{T} & \star & \star & \star \\ M_{21_{\alpha}} & M_{22_{\alpha}}+M_{22_{\alpha}}^{T} &\star & \star \\ M_{31_{\alpha}} & M_{32_{\alpha}} & M_{33_{\alpha}} & \star \\ (F_{1_{\alpha}}A_{\alpha}+C_{\alpha}-S_{d}C_{1_{\alpha}})\varUpsilon_{1}^{T}-S_{c}\varUpsilon_{2}^{T} & -F_{1_{\alpha}}\varUpsilon_{1}^{T} & M_{43_{\alpha}} & -I \\ \end{array} \right ]<0, $$
(40)

where

$$\begin{aligned} M_{11_{\alpha}} = & \varUpsilon_{1}(K_{1_{\alpha}}A_{\alpha}+VS_{b}C_{1_{\alpha}}) \varUpsilon_{1}^{T}+(\varUpsilon_{1}V+ \varUpsilon_{2})S_{a}\varUpsilon_{2}^{T} +\varUpsilon_{2}(N_{\alpha}A_{\alpha}+S_{b}C_{1_{\alpha}}) \varUpsilon_{1}^{T}, \\ M_{21_{\alpha}} = & \bar{P}_{\alpha}+\varUpsilon_{1} \bigl(E_{1_{\alpha}}A_{\alpha}+\lambda_{1}S_{b}C_{1_{\alpha}}-K_{1_{\alpha}}^{T} \bigr)\varUpsilon_{1}^{T} +\varUpsilon_{1}\bigl( \lambda_{1}VS_{a}-N_{\alpha}^{T}\bigr) \varUpsilon_{2}^{T} \\ &{} + \varUpsilon_{2}\bigl(T_{\alpha}A_{\alpha}+ \lambda_{1}S_{b}C_{1_{\alpha}}-X^{T}V^{T} \bigr)\varUpsilon_{1}^{T} +\varUpsilon_{2}\bigl( \lambda_{1}S_{a}-X^{T}\bigr)\varUpsilon_{2}^{T}, \\ M_{22_{\alpha}} = & -\varUpsilon_{1}E_{1_{\alpha}} \varUpsilon_{1}^{T}-\varUpsilon_{2}T_{\alpha} \varUpsilon_{1}^{T}-\lambda_{1} \varUpsilon_{1}VX\varUpsilon_{2}^{T} - \lambda_{2}\varUpsilon_{2}X\varUpsilon_{1}^{T}, \\ M_{31_{\alpha}} = & \bigl(B_{\alpha}^{T}K_{1_{\alpha}}^{T}+D_{1_{\alpha}}^{T}S_{b}^{T}V^{T}+Q_{1_{\alpha}}A_{\alpha} \bigr)\varUpsilon_{1}^{T} +\bigl(B_{\alpha}^{T}N_{\alpha}^{T}+D_{1_{\alpha}}^{T}S_{b}^{T} \bigr)\varUpsilon_{2}^{T}, \\ M_{32_{\alpha}} = & \bigl(B_{\alpha}^{T}E_{1_{\alpha}}^{T}+ \lambda_{1}D_{1_{\alpha}}^{T}S_{b}^{T}V^{T}- Q_{1_{\alpha}}\bigr)\varUpsilon_{1}^{T} + \bigl(B_{\alpha}^{T}T_{\alpha}^{T}+ \lambda_{2}D_{1_{\alpha}}^{T}S_{b}^{T} \bigr)\varUpsilon_{2}^{T}, \\ M_{33_{\alpha}} = & Q_{1_{\alpha}}B_{\alpha}+B_{\alpha}^{T}Q_{1_{\alpha}}^{T}- \gamma^2I, \quad \quad M_{43_{\alpha}}= F_{1_{\alpha}}B_{\alpha}+D_{\alpha}-S_{d}D_{1_{\alpha}}. \end{aligned}$$

In this case, the 2-D filter in the form of (5)(6) is given by

$$ \left [\begin{array}{c@{\quad}c} A_{f} & B_{f} \\ C_{f} & D_{f} \end{array} \right ]=\left [\begin{array}{c@{\quad}c} X^{-1} & 0 \\ 0 & I \end{array} \right ] \left [\begin{array}{c@{\quad}c} S_{a} & S_{b} \\ S_{c} & S_{d} \end{array} \right ]. $$
(41)
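Recovering the filter matrices from (41) only requires one linear solve with X, since \(\operatorname{diag}(X^{-1},I)\) acts block-wise. The sketch below uses random placeholder values for X and the S-blocks; nothing here comes from an actual LMI solution:

```python
import numpy as np

rng = np.random.default_rng(0)
nf, p = 3, 1                      # hypothetical filter order and output size
X = rng.normal(size=(nf, nf)) + 4.0 * np.eye(nf)   # shifted to keep it well conditioned
Sa = rng.normal(size=(nf, nf))
Sb = rng.normal(size=(nf, p))
Sc = rng.normal(size=(p, nf))
Sd = rng.normal(size=(p, p))

# (41): [Af Bf; Cf Df] = diag(X^{-1}, I) [Sa Sb; Sc Sd]
Af = np.linalg.solve(X, Sa)       # X^{-1} Sa without forming the inverse
Bf = np.linalg.solve(X, Sb)
Cf, Df = Sc, Sd
```

In a real design X, S_a, S_b, S_c, S_d would be the decision variables returned by the LMI solver for (40).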

Proof

The proof is parallel to that of Theorem 3. We obtain (40) when the matrices P α , E α , F α , K α , and Q α have the following structures:

$$\begin{aligned} & P_{\alpha}= \operatorname{diag} \left \{\left [\begin{array}{c@{\quad}c} P_{1h_{\alpha}} & P_{2h_{\alpha}} \\ P_{2h_{\alpha}}^{T} & P_{3h_{\alpha}} \end{array} \right ],\left [\begin{array}{c@{\quad}c} P_{1v_{\alpha}} & P_{2v_{\alpha}} \\ P_{2v_{\alpha}}^{T} & P_{3v_{\alpha}} \end{array} \right ] \right \}, \quad\quad Q_{\alpha}= \left [\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} Q_{1h} & 0 & Q_{1v}& 0 \end{array} \right ], \\ & K_{\alpha}= \operatorname{diag} \left \{\left [\begin{array}{c@{\quad}c} K_{1h_{\alpha}} & V_{h}K_{4h} \\ K_{2h_{\alpha}} & K_{3h} \end{array} \right ],\left [\begin{array}{c@{\quad}c} K_{1v_{\alpha}} & V_{v}K_{4v} \\ K_{2v_{\alpha}} & K_{3v} \end{array} \right ] \right \}, \quad\quad F_{\alpha}= \left [\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} F_{1h} & 0 & F_{1v}& 0 \end{array} \right ], \\ & E_{\alpha}= \operatorname{diag} \left \{\left [\begin{array}{c@{\quad}c} E_{1h_{\alpha}} & \lambda_{1}V_{h}K_{4h} \\ E_{2h_{\alpha}} & \lambda_{2}K_{3h} \end{array} \right ],\left [\begin{array}{c@{\quad}c} E_{1v_{\alpha}} & \lambda_{1}V_{v}K_{4v} \\ E_{2v_{\alpha}} & \lambda_{2}K_{3v} \end{array} \right ] \right \}. \end{aligned}$$

 □

Remark 5

In the filter model (5)–(6), when \(n_{h_{f}}=n_{h}\) and \(n_{v_{f}}=n_{v}\), the matrix V reduces to the identity, and the filter is of full order; hence, Theorems 3 and 4 are equivalent in this case. The genuinely reduced-order filter corresponds to \(1\leq n_{h_{f}}<n_{h}\), \(1\leq n_{v_{f}}<n_{v}\), whereas when \(n_{h_{f}}=0\) or \(n_{v_{f}}=0\), we directly get the following corollaries from Theorem 4.

Case 1: \(n_{h_{f}}\neq0\), \(n_{v_{f}}=0\).

Corollary 3

Define . Consider system (1)–(3) and let γ>0 be a given constant. Then, there exists a reduced-order H filter in the form of (18)–(19) such that the estimation error system (20)–(21) is asymptotically stable with \(\|\tilde{G}\|_{\infty} < \gamma\) if there exist \(\bar{P}_{\alpha}\triangleq \operatorname{diag}\{\bar{P}_{h\alpha},\bar{P}_{v\alpha}\} >0\) and matrices \(N_{\alpha}\triangleq \operatorname{diag}\{N_{h\alpha},N_{v\alpha}\}\), \(T_{\alpha}\triangleq \operatorname{diag}\{T_{h\alpha},T_{v\alpha}\}\), \(E_{1\alpha}\triangleq \operatorname{diag}\{E_{1_{h\alpha}},E_{1_{v\alpha}}\}\), \(K_{1\alpha}\), \(F_{1_{\alpha}}\), \(G_{1_{\alpha}}\), \(Q_{1_{\alpha}}\), \(X\triangleq \operatorname{diag}\{X_{h},X_{v}\}\), \(S_{a}\), \(S_{b}\), \(S_{c}\), and \(S_{d}\) such that

$$ \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} M_{11_{\alpha}}+M_{11_{\alpha}}^{T} & \star & \star & \star \\ M_{21_{\alpha}} & M_{22_{\alpha}}+M_{22_{\alpha}}^{T} &\star & \star \\ M_{31_{\alpha}} & M_{32_{\alpha}} & M_{33_{\alpha}} & \star \\ (F_{1_{\alpha}}A_{\alpha}+C_{\alpha}-S_{d}C_{1_{\alpha}})\varUpsilon_{1}^{T}-S_{c}\varUpsilon_{2}^{T} & -F_{1_{\alpha}}\varUpsilon_{1}^{T} & M_{43_{\alpha}} & -I \\ \end{array} \right ]<0, $$
(42)

where

$$\begin{aligned} M_{11_{\alpha}} = & \varUpsilon_{1}(K_{1_{\alpha}}A_{\alpha}+V_{h} S_{b}C_{1_{\alpha}})\varUpsilon_{1}^{T}+( \varUpsilon_{1} V_{h}+\varUpsilon_{2}) S_{a}\varUpsilon_{2}^{T}+\varUpsilon_{2}(N_{\alpha}A_{\alpha}+S_{b}C_{1_{\alpha}}) \varUpsilon_{1}^{T}, \\ M_{21_{\alpha}} = & \bar{P}_{\alpha}+\varUpsilon_{1} \bigl(E_{1_{\alpha}}A_{\alpha}+\lambda_{1}S_{b}C_{1_{\alpha}}-K_{1_{\alpha}}^{T} \bigr)\varUpsilon_{1}^{T} +\varUpsilon_{1}\bigl( \lambda_{1} V_{h} S_{a}-N_{\alpha}^{T} \bigr)\varUpsilon_{2}^{T} \\ &{} + \varUpsilon_{2}\bigl(T_{\alpha}A_{\alpha}+ \lambda_{1}S_{b}C_{1_{\alpha}}-X^{T}V_{h}^{T} \bigr)\varUpsilon_{1}^{T} +\varUpsilon_{2}\bigl( \lambda_{1}S_{a}-X^{T}\bigr)\varUpsilon_{2}^{T}, \\ M_{22_{\alpha}} = & -\varUpsilon_{1}E_{1_{\alpha}} \varUpsilon_{1}^{T}-\varUpsilon_{2}T_{\alpha} \varUpsilon_{1}^{T}-\lambda_{1} \varUpsilon_{1} V_{h} X\varUpsilon_{2}^{T}- \lambda_{2}\varUpsilon_{2}X\varUpsilon_{1}^{T}, \\ M_{31_{\alpha}} = & \bigl(B_{\alpha}^{T}K_{1_{\alpha}}^{T}+D_{1_{\alpha}}^{T}S_{b}^{T}V_{h}^{T}+Q_{1_{\alpha}}A_{\alpha} \bigr)\varUpsilon_{1}^{T} +\bigl(B_{\alpha}^{T}N_{\alpha}^{T}+D_{1_{\alpha}}^{T}S_{b}^{T} \bigr)\varUpsilon_{2}^{T}, \\ M_{32_{\alpha}} = & \bigl(B_{\alpha}^{T}E_{1_{\alpha}}^{T}+ \lambda_{1}D_{1_{\alpha}}^{T}S_{b}^{T}V_{h}^{T}- Q_{1_{\alpha}}\bigr)\varUpsilon_{1}^{T} + \bigl(B_{\alpha}^{T}T_{\alpha}^{T}+ \lambda_{2}D_{1_{\alpha}}^{T}S_{b}^{T} \bigr)\varUpsilon_{2}^{T}, \\ M_{33_{\alpha}} = & Q_{1_{\alpha}}B_{\alpha}+B_{\alpha}^{T}Q_{1_{\alpha}}^{T}- \gamma^2I, \quad\quad M_{43_{\alpha}}= F_{1_{\alpha}}B_{\alpha}+D_{\alpha}-S_{d}D_{1_{\alpha}}. \end{aligned}$$

Proof

Let the matrices \(P_{\alpha}\), \(E_{\alpha}\), \(F_{\alpha}\), \(K_{\alpha}\), and \(Q_{\alpha}\) take the following structures:

$$\begin{aligned} & P_{\alpha}= \left [\begin{array}{c@{\quad}c@{\quad}c} P_{1h_{\alpha}} & P_{2h_{\alpha}} & 0 \\ P_{2h_{\alpha}}^{T} & P_{3h_{\alpha}} & 0 \\ 0 & 0 & P_{v_{\alpha}} \end{array} \right ], \quad\quad K_{\alpha}= \left [\begin{array}{c@{\quad}c@{\quad}c} K_{1h_{\alpha}} & V_{h}K_{4h} & 0 \\ K_{2h_{\alpha}} & K_{3h} & 0 \\ 0 & 0 & K_{1v_{\alpha}} \end{array} \right ], \\ & E_{\alpha}= \left [\begin{array}{c@{\quad}c@{\quad}c} E_{1h_{\alpha}} & \lambda_{1}V_{h}K_{4h} & 0 \\ E_{2h_{\alpha}} & \lambda_{2}K_{3h} & 0 \\ 0 & 0 & E_{1v_{\alpha}} \end{array} \right ], \quad\quad F_{\alpha}= \left [\begin{array}{c@{\quad}c@{\quad}c} F_{1h} & 0 & F_{1v} \end{array} \right ], \\ & Q_{\alpha}= \left [\begin{array}{c@{\quad}c@{\quad}c} Q_{1h} & 0 & Q_{1v} \end{array} \right ]. \end{aligned}$$

Without loss of generality, we assume that \(K_{3h}\) and \(K_{4h}\) are nonsingular. Introduce now the transformation matrix

$$\begin{aligned} & \varPhi=\operatorname{diag} \bigl\{ I_{h},K_{4h}K_{3h}^{-1},I_{v} \bigr\} \end{aligned}$$

and define

$$\begin{aligned} &\bar{P}_{\alpha}=\varPhi P(\alpha) \varPhi^{T}= \left [\begin{array}{c@{\quad}c@{\quad}c} \bar{P}_{1h_{\alpha}} & \bar{P}_{2h_{\alpha}} & 0 \\ \bar{P}_{2h_{\alpha}}^{T} & \bar{P}_{3h_{\alpha}} & 0 \\ 0 & 0 & \bar{P}_{v_{\alpha}} \end{array} \right ],\quad\quad X=K_{4h}^{T}K_{3h}^{-1}K_{4h}, \\ &N(\alpha)=K_{4h}^{T}K_{3h}^{-1}K_{2h}( \alpha), \quad\quad T(\alpha)=K_{4h}^{T}K_{3h}^{-1}E_{2h}( \alpha),\quad\quad K_{1}(\alpha)=K_{1h}(\alpha), \\ &S_{a}=K_{4h}^{T}A_{f}^{11}K_{3h}^{-1}K_{4h}^{T}, \quad\quad S_{b}=K_{4h}^{T}B_{f}^{1}, \quad\quad S_{c}=C_{f}^{1}K_{3h}^{-1}K_{4h}, \quad\quad S_{d}=D_{f}, \\ & \left [\begin{array}{c@{\quad}c} S_{a} & S_{b} \\ S_{c} & S_{d} \end{array} \right ]=\left [\begin{array}{c@{\quad}c} K_{4h} & 0 \\ 0 & I \end{array} \right ] \left [\begin{array}{c@{\quad}c} A_{f}^{11} & B_{f}^{1} \\ C_{f}^{1} & D_{f} \end{array} \right ]\left [\begin{array}{c@{\quad}c} K_{3h}^{-1}K_{4h}^{T} & 0 \\ 0 & I \end{array} \right ]. \end{aligned}$$

Following the proof of Theorem 3, it is possible to obtain the LMI (42) and

$$ \left [\begin{array}{c@{\quad}c} A_{f}^{11} & B_{f}^{1} \\ C_{f}^{1} & D_{f} \end{array} \right ]=\left [\begin{array}{c@{\quad}c} X^{-1} & 0 \\ 0 & I \end{array} \right ] \left [\begin{array}{c@{\quad}c} S_{a} & S_{b} \\ S_{c} & S_{d} \end{array} \right ], $$
(43)

which completes the proof. □

Similarly to Corollary 3, we can get the following result to design the reduced-order common filter in the form of (14)–(15).

Case 2: \(n_{h_{f}}=0\), \(n_{v_{f}}\neq0\).

Corollary 4

Define . Consider system (1)–(3) and let γ>0 be a given constant. Then, there exists a reduced-order H filter in the form of (14)–(15) such that the estimation error system (16)–(17) is asymptotically stable with \(\|\tilde{G}\|_{\infty} < \gamma\) if there exist \(\bar{P}_{\alpha}\triangleq \operatorname{diag}\{\bar{P}_{h\alpha},\bar{P}_{v\alpha}\} >0\) and matrices \(N_{\alpha}\triangleq \operatorname{diag}\{N_{h\alpha},N_{v\alpha}\}\), \(T_{\alpha}\triangleq \operatorname{diag}\{T_{h\alpha},T_{v\alpha}\}\), \(E_{1\alpha}\triangleq \operatorname{diag}\{E_{1_{h\alpha}},E_{1_{v\alpha}}\}\), \(K_{1\alpha}\), \(F_{1_{\alpha}}\), \(G_{1_{\alpha}}\), \(Q_{1_{\alpha}}\), \(X\triangleq \operatorname{diag}\{X_{h},X_{v}\}\), \(S_{a}\), \(S_{b}\), \(S_{c}\), and \(S_{d}\) such that

$$ \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} M_{11_{\alpha}}+M_{11_{\alpha}}^{T} & \star & \star & \star \\ M_{21_{\alpha}} & M_{22_{\alpha}}+M_{22_{\alpha}}^{T} &\star & \star \\ M_{31_{\alpha}} & M_{32_{\alpha}} & M_{33_{\alpha}} & \star \\ (F_{1_{\alpha}}A_{\alpha}+C_{\alpha}-S_{d}C_{1_{\alpha}})\varUpsilon_{1}^{T}-S_{c}\varUpsilon_{2}^{T} & -F_{1_{\alpha}}\varUpsilon_{1}^{T} & M_{43_{\alpha}} & -I \\ \end{array} \right ]<0, $$
(44)

where

$$\begin{aligned} M_{11_{\alpha}} = & \varUpsilon_{1}(K_{1_{\alpha}}A_{\alpha}+V_{v} S_{b}C_{1_{\alpha}})\varUpsilon_{1}^{T}+(\varUpsilon_{1}V_{v}+\varUpsilon_{2}) S_{a}\varUpsilon_{2}^{T} +\varUpsilon_{2}(N_{\alpha}A_{\alpha}+S_{b}C_{1_{\alpha}}) \varUpsilon_{1}^{T}, \\ M_{21_{\alpha}} = & \bar{P}_{\alpha}+\varUpsilon_{1} \bigl(E_{1_{\alpha}}A_{\alpha}+\lambda_{1}S_{b}C_{1_{\alpha}}-K_{1_{\alpha}}^{T} \bigr)\varUpsilon_{1}^{T} +\varUpsilon_{1}\bigl( \lambda_{1} V_{v} S_{a}-N_{\alpha}^{T} \bigr)\varUpsilon_{2}^{T} \\ &{} + \varUpsilon_{2}\bigl(T_{\alpha}A_{\alpha}+ \lambda_{1}S_{b}C_{1_{\alpha}}-X^{T}V_{v}^{T} \bigr)\varUpsilon_{1}^{T} +\varUpsilon_{2}\bigl( \lambda_{1}S_{a}-X^{T}\bigr)\varUpsilon_{2}^{T}, \\ M_{22_{\alpha}} = & -\varUpsilon_{1}E_{1_{\alpha}} \varUpsilon_{1}^{T}-\varUpsilon_{2}T_{\alpha} \varUpsilon_{1}^{T}-\lambda_{1} \varUpsilon_{1} V_{v} X\varUpsilon_{2}^{T} -\lambda_{2}\varUpsilon_{2}X\varUpsilon_{1}^{T}, \\ M_{31_{\alpha}} = & \bigl(B_{\alpha}^{T}K_{1_{\alpha}}^{T}+D_{1_{\alpha}}^{T}S_{b}^{T}V_{v}^{T}+Q_{1_{\alpha}}A_{\alpha} \bigr)\varUpsilon_{1}^{T} +\bigl(B_{\alpha}^{T}N_{\alpha}^{T}+D_{1_{\alpha}}^{T}S_{b}^{T} \bigr)\varUpsilon_{2}^{T}, \\ M_{32_{\alpha}} = & \bigl(B_{\alpha}^{T}E_{1_{\alpha}}^{T}+ \lambda_{1}D_{1_{\alpha}}^{T}S_{b}^{T}V_{v}^{T}- Q_{1_{\alpha}}\bigr)\varUpsilon_{1}^{T} + \bigl(B_{\alpha}^{T}T_{\alpha}^{T}+ \lambda_{2}D_{1_{\alpha}}^{T}S_{b}^{T} \bigr)\varUpsilon_{2}^{T}, \\ M_{33_{\alpha}} = & Q_{1_{\alpha}}B_{\alpha}+B_{\alpha}^{T}Q_{1_{\alpha}}^{T}- \gamma^2I, \quad\quad M_{43_{\alpha}}= F_{1_{\alpha}}B_{\alpha}+D_{\alpha}-S_{d}D_{1_{\alpha}}. \end{aligned}$$

Proof

Let matrices P α ,E α ,F α ,K α , and Q α have the following structures:

$$\begin{aligned} P_{\alpha} =& \left [\begin{array}{c@{\quad}c@{\quad}c} P_{h_{\alpha}} & 0 & 0 \\ 0 & P_{1v_{\alpha}} & P_{2v_{\alpha}} \\ 0 & P_{2v_{\alpha}}^{T} & P_{3v_{\alpha}} \end{array} \right ], \quad\quad K_{\alpha}= \left [\begin{array}{c@{\quad}c@{\quad}c} K_{1h_{\alpha}} & 0 & 0 \\ 0 & K_{1v_{\alpha}} & V_{v}K_{4v} \\ 0 & K_{2v_{\alpha}} & K_{3v} \end{array} \right ], \\ E_{\alpha} =& \left [\begin{array}{c@{\quad}c@{\quad}c} E_{1h_{\alpha}} & 0 & 0 \\ 0 & E_{1v_{\alpha}} & \lambda_{1}V_{v}K_{4v} \\ 0 & E_{2v_{\alpha}} & \lambda_{2}K_{3v} \end{array} \right ], \quad\quad F_{\alpha}= \left [\begin{array}{c@{\quad}c@{\quad}c} F_{1h} & F_{1v} & 0 \end{array} \right ], \\ Q_{\alpha} =& \left [\begin{array}{c@{\quad}c@{\quad}c} Q_{1h} & Q_{1v} & 0 \end{array} \right ]. \end{aligned}$$

Without loss of generality, we again assume that K 3v and K 4v are nonsingular. We define the transformation matrix

$$\begin{aligned} & \varPhi=\operatorname{diag} \bigl\{ I_{h},I_{v},K_{4v}K_{3v}^{-1} \bigr\} \end{aligned}$$

and

$$\begin{aligned} &\bar{P}_{\alpha}=\varPhi P(\alpha) \varPhi^{T}= \left [\begin{array}{c@{\quad}c@{\quad}c} \bar{P}_{h_{\alpha}} & 0 & 0 \\ 0 & \bar{P}_{1v_{\alpha}} & \bar{P}_{2v_{\alpha}} \\ 0 & \bar{P}_{2v_{\alpha}}^{T} & \bar{P}_{3v_{\alpha}} \end{array} \right ], \quad\quad X=K_{4v}^{T}K_{3v}^{-1}K_{4v}, \\ &N(\alpha)=K_{4v}^{T}K_{3v}^{-1}K_{2v}( \alpha), \quad\quad T(\alpha)=K_{4v}^{T}K_{3v}^{-1}E_{2v}( \alpha),\quad\quad K_{1}(\alpha)=K_{1v}(\alpha), \\ & S_{a}=K_{4v}^{T}A_{f}^{22}K_{3v}^{-1}K_{4v}^{T}, \quad\quad S_{b}=K_{4v}^{T}B_{f}^{2}, \quad\quad S_{c}=C_{f}^{2}K_{3v}^{-1}K_{4v}, \quad\quad S_{d}=D_{f}, \\ &\left [\begin{array}{c@{\quad}c} S_{a} & S_{b} \\ S_{c} & S_{d} \end{array} \right ]=\left [\begin{array}{c@{\quad}c} K_{4v} & 0 \\ 0 & I \end{array} \right ] \left [\begin{array}{c@{\quad}c} A_{f}^{22} & B_{f}^{2} \\ C_{f}^{2} & D_{f} \end{array} \right ]\left [\begin{array}{c@{\quad}c} K_{3v}^{-1}K_{4v}^{T} & 0 \\ 0 & I \end{array} \right ]. \end{aligned}$$

Similarly to the proof of Theorem 3, the LMI (44) is obtained with

$$ \left [\begin{array}{c@{\quad}c} A_{f}^{22} & B_{f}^{2} \\ C_{f}^{2} & D_{f} \end{array} \right ]=\left [\begin{array}{c@{\quad}c} X^{-1} & 0 \\ 0 & I \end{array} \right ]\left [\begin{array}{c@{\quad}c} S_{a} & S_{b} \\ S_{c} & S_{d} \end{array} \right ], $$
(45)

completing the proof. □

Case 3: \(n_{h_{f}}=0\), \(n_{v_{f}}=0\).

Corollary 5

Given γ>0, there exists a zero-order H filter in the form of (22) such that the estimation error system (23)–(24) is asymptotically stable with \(\|\tilde{G}\|_{\infty}< \gamma \) if there exist a positive definite matrix \(P=\operatorname{diag}(P_{h},P_{v})>0\) with \(P_{h}\in R^{n_{h}\times n_{h}}\) and \(P_{v}\in R^{n_{v}\times n_{v}}\) and matrices \(E_{\alpha}\in R^{n\times n}\), \(F_{\alpha}\in R^{p\times n}\), \(K_{\alpha}\in R^{n\times n}\), and \(Q_{\alpha}\in R^{r\times n}\) satisfying

$$ \left [\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} K_{\alpha} A_{\alpha}+A_{\alpha}^{T}K_{\alpha}^{T} & \star & \star & \star \\ P_{\alpha}+ E_{\alpha} A_{\alpha}-K_{\alpha}^{T} & -E_{\alpha}-E_{\alpha}^{T} & \star & \star \\ B_{\alpha}^{T}K_{\alpha}^{T} + Q_{\alpha} A_{\alpha} & B_{\alpha}^{T}E_{\alpha}^{T}-Q_{\alpha} & Q_{\alpha} B_{\alpha}+ B_{\alpha}^{T}Q_{\alpha}^{T}-\gamma^2 I & \star \\ F_{\alpha} A_{\alpha}+C_{\alpha}-D_{f}C_{1\alpha} & -F_{\alpha} & F_{\alpha} B_{\alpha}+D_{\alpha}-D_{f}D_{1\alpha} & -I \end{array} \right ]<0. $$
(46)

4.3 Homogeneous Polynomial Solutions

Before presenting the formulation of Theorem 4 using homogeneous polynomially parameter-dependent matrices, some definitions and preliminaries are needed to represent and handle products and sums of homogeneous polynomials. First, we define the homogeneous polynomially parameter-dependent matrices of degree \(\mathfrak{g}\) by

$$\begin{aligned} &\bar{P}_{\alpha(\mathfrak{g})} = \sum _{j=1}^{\mathfrak{J}(\mathfrak{g})} \alpha_{1}^{k_{1}} \alpha_{2}^{k_{2}}\ldots \alpha_{N}^{k_{N}} \bar{P}_{\mathfrak{K}_{j}(\mathfrak{g})}, \quad k_{1}k_{2}\ldots k_{N}= \mathfrak{K}_{j}(\mathfrak{g}). \end{aligned}$$
(47)

Similarly, matrices N α , T α , E 1α , K 1α , F 1α , G 1α , and Q 1α take the same form.

The notation is as follows: \(\alpha_{1}^{k_{1}}\alpha_{2}^{k_{2}}\ldots\alpha_{N}^{k_{N}}\), \(\alpha\in\varOmega\), \(k_{i}\in \mathbb{N}\), \(i=1,\ldots,N\), are the monomials; \(\mathfrak{K}_{j}(\mathfrak{g})\) is the jth N-tuple of \(\mathfrak{K}(\mathfrak{g})\), lexicographically ordered, \(j=1,\ldots,\mathfrak{J}(\mathfrak{g})\); and \(\mathfrak{K}(\mathfrak{g})\) is the set of N-tuples obtained as all possible combinations \(k_{1}k_{2}\ldots k_{N}\) that fulfill \(k_{1}+k_{2}+\cdots+k_{N}=\mathfrak{g}\). Since the number of vertices in the polytope \(\mathcal{P}\) is N, the number of elements in \(\mathfrak{K}(\mathfrak{g})\) is given by \(\mathfrak{J}(\mathfrak{g})=(N+\mathfrak{g}-1)!/(\mathfrak{g}!(N-1)!)\).
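The combinatorics above are easy to check mechanically: enumerating all N-tuples summing to \(\mathfrak{g}\) in lexicographic order and comparing the count with the closed-form \(\mathfrak{J}(\mathfrak{g})\):

```python
from itertools import product
from math import comb

def ktuples(N, g):
    # K(g): all N-tuples (k1, ..., kN) with k1 + ... + kN = g,
    # in lexicographic order (itertools.product already emits it)
    return [k for k in product(range(g + 1), repeat=N) if sum(k) == g]

N, g = 3, 2
K = ktuples(N, g)
J = comb(N + g - 1, g)   # (N+g-1)! / (g!(N-1)!)
```

For N=3 vertices and degree g=2 this yields the six tuples 002, 011, 020, 101, 110, 200, matching \(\mathfrak{J}(2)=4!/(2!\,2!)=6\).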

For each i=1,…,N, we define the N-tuples \(\mathfrak{K}_{j}^{i}(\mathfrak{g})\) that are equal to \(\mathfrak{K}_{j}(\mathfrak{g})\) but with k i >0 replaced by k i −1. Note that the N-tuples \(\mathfrak{K}_{j}^{i}(\mathfrak{g})\) are defined only in the cases where the corresponding k i are positive. Note also that, when applied to the elements of \(\mathfrak{K}(\mathfrak{g}+1)\), the N-tuples \(\mathfrak{K}_{j}^{i}(\mathfrak{g}+1)\) define subscripts k 1 k 2k N of matrices \(\bar{P}_{k_{1}k_{2}\ldots k_{N}}\), \(T_{k_{1}k_{2}\ldots k_{N}}\), \(N_{k_{1}k_{2}\ldots k_{N}}\), \(E_{1_{k_{1}k_{2}\ldots k_{N}}}\), \(F_{1_{k_{1}k_{2}\ldots k_{N}}}\), \(G_{1_{k_{1}k_{2}\ldots k_{N}}}\), \(K_{1_{k_{1}k_{2}\ldots k_{N}}}\), and \(Q_{1_{k_{1}k_{2}\ldots k_{N}}}\), associated to homogeneous polynomial parameter-dependent matrices of degree \(\mathfrak{g}\).

Finally, we define the scalar constant coefficients \(\beta^{i}_{j}(\mathfrak{g}+1)=\mathfrak{g}!/(k_{1}!k_{2}!\ldots k_{N}!)\) with \(k_{1}k_{2}\ldots k_{N}\in \mathfrak{K}_{j}^{i}(\mathfrak{g}+1)\).
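The decrement operation \(\mathfrak{K}_{j}^{i}\) and the coefficients \(\beta^{i}_{j}\) can be sketched directly on tuples; the helper names below are illustrative, not from the paper:

```python
from math import factorial

def decrement(k, i):
    # K_j^i: the tuple k with k_i > 0 replaced by k_i - 1 (defined only then)
    if k[i] == 0:
        raise ValueError("k_i must be positive")
    return k[:i] + (k[i] - 1,) + k[i + 1:]

def beta(k_dec):
    # beta = g!/(k1! k2! ... kN!) for a decremented tuple of degree g
    g = sum(k_dec)
    den = 1
    for ki in k_dec:
        den *= factorial(ki)
    return factorial(g) // den

b = beta(decrement((2, 0, 1), 0))   # tuple (1, 0, 1), degree 2: 2!/(1!0!1!)
```

These multinomial coefficients are exactly the weights that appear as \(\mathfrak{h}\) in the LMIs of Theorem 5.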

To facilitate the presentation of the main results, we denote \(\beta^{i}_{j}(\mathfrak{g}+1)\) by \(\mathfrak{h}\); using this notation, we now present the main result of this section.

Theorem 5

Define , , and \(V\triangleq \operatorname{diag}\{V_{h},V_{v}\}\). Suppose that there exist symmetric positive definite matrices \(\bar{P}_{\mathfrak{K_{j}(g)}}>0 \) and matrices \(T_{\mathfrak{K_{j}(g)}}\), \(N_{\mathfrak{K_{j}(g)}}\), \(E_{1_{\mathfrak{K_{j}(g)}}}\), \(F_{1_{\mathfrak{K_{j}(g)}}}\), \(G_{1_{\mathfrak{K_{j}(g)}}}\), \(K_{1_{\mathfrak{K_{j}(g)}}}\), and \(Q_{1_{\mathfrak{K_{j}(g)}}}\), \(\mathfrak{K_{j}(g)}\in\mathfrak{K(g)}\), \(j=1,\ldots,\mathfrak{J}(\mathfrak{g})\), such that the following LMIs hold for all \(\mathfrak{K_{l}(g+1)}\in\mathfrak{K}(\mathfrak{g}+1)\), \(l=1,\ldots,\mathfrak{J}(\mathfrak{g}+1)\):

$$\begin{aligned} &\varPsi_{\alpha}=\sum_{i\in I_{l}(\mathfrak{g}+1)} \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} M_{11}+M_{11}^{T} & \star & \star & \star \\ M_{21} & M_{22} &\star & \star \\ M_{31} & M_{32} & M_{33} & \star \\ M_{41} & -F_{1_{\mathfrak{K_{l}^{i}(g+1)}}}\varUpsilon_{1}^{T} & M_{43}& -\mathfrak{h}I \\ \end{array} \right ]<0, \end{aligned}$$
(48)

where

$$\begin{aligned} M_{11} = & \varUpsilon_{1}(K_{1_{\mathfrak{K_{l}^{i}(g+1)}}}A_{i}+ \mathfrak{h}VS_{b}C_{1_{i}})\varUpsilon_{1}^{T}+ \varUpsilon_{1}\mathfrak{h}VS_{a}\varUpsilon_{2}^{T} \\ &{}+\varUpsilon_{2}(N_{\mathfrak{K_{l}^{i}(g+1)}}A_{i}+ \mathfrak{h}S_{b}C_{1_{i}})\varUpsilon_{1}^{T} +\varUpsilon_{2}\mathfrak{h}S_{a}\varUpsilon_{2}^{T}, \\ M_{21} = & \bar{P}_{\mathfrak{K_{l}^{i}(g+1)}}+\varUpsilon_{1} \bigl(E_{1_{\mathfrak{K_{l}^{i}(g+1)}}}A_{i}+\lambda_{1} \mathfrak{h}S_{b}C_{1_{i}} -K_{1_{\mathfrak{K_{l}^{i}(g+1)}}}^{T} \bigr)\varUpsilon_{1}^{T} \\ &{}+\varUpsilon_{2}\bigl(\lambda_{1}\mathfrak{h}S_{a}- \mathfrak{h}X^{T}\bigr)\varUpsilon_{2}^{T} + \varUpsilon_{2}\bigl(T_{\mathfrak{K_{l}^{i}(g+1)}}A_{i}+ \lambda_{1}\mathfrak{h}S_{b}C_{1_{i}} - \mathfrak{h}X^{T}V^{T}\bigr)\varUpsilon_{1}^{T} \\ &{}+\varUpsilon_{1}\bigl(\lambda_{1}\mathfrak{h}VS_{a}-N_{\mathfrak{K_{l}^{i}(g+1)}}^{T} \bigr)\varUpsilon_{2}^{T}, \\ M_{22} = & -\varUpsilon_{1}E_{1_{\mathfrak{K_{l}^{i}(g+1)}}} \varUpsilon_{1}^{T}-\varUpsilon_{2}T_{\mathfrak{K_{l}^{i}(g+1)}} \varUpsilon_{1}^{T} -\lambda_{1} \varUpsilon_{1}\mathfrak{h}VX\varUpsilon_{2}^{T}- \lambda_{2}\varUpsilon_{2}\mathfrak{h}X\varUpsilon_{1}^{T}, \\ M_{31} = & \bigl(B_{i}^{T}K_{1_{\mathfrak{K_{l}^{i}(g+1)}}}^{T}+ \mathfrak{h}D_{1_{i}}^{T}S_{b}^{T}V^{T}+Q_{1_{\mathfrak{K_{l}^{i}(g+1)}}}A_{i} \bigr)\varUpsilon_{1}^{T} \\ &{} +\bigl(B_{i}^{T}N_{\mathfrak{K_{l}^{i}(g+1)}}^{T}+ \mathfrak{h}D_{1_{i}}^{T}S_{b}^{T}\bigr) \varUpsilon_{2}^{T}, \\ M_{32} = & \bigl(B_{i}^{T}E_{1_{\mathfrak{K_{l}^{i}(g+1)}}}^{T}+ \lambda_{1}\mathfrak{h}D_{1_{i}}^{T}S_{b}^{T}V^{T}- Q_{1_{\mathfrak{K_{l}^{i}(g+1)}}}\bigr)\varUpsilon_{1}^{T} \\ &{} +\bigl(B_{i}^{T}T_{\mathfrak{K_{l}^{i}(g+1)}}^{T}+ \lambda_{2}\mathfrak{h}D_{1_{i}}^{T}S_{b}^{T} \bigr)\varUpsilon_{2}^{T}, \\ M_{33} = & Q_{1_{\mathfrak{K_{l}^{i}(g+1)}}}B_{i}+B_{i}^{T}Q_{1_{\mathfrak{K_{l}^{i}(g+1)}}}^{T}- \mathfrak{h}\gamma^2I, \\ M_{43} =& F_{1_{\mathfrak{K_{l}^{i}(g+1)}}}B_{i}+\mathfrak{h}D_{i}- 
\mathfrak{h}S_{d}D_{1_{i}}, \\ M_{41} = & (F_{1_{\mathfrak{K_{l}^{i}(g+1)}}}A_{i}+\mathfrak{h}C_{i}- \mathfrak{h}S_{d}C_{1_{i}})\varUpsilon_{1}^{T}- \mathfrak{h}S_{c}\varUpsilon_{2}^{T}. \end{aligned}$$

Then the homogeneous polynomially parameter-dependent matrices given by (47) ensure (40) for all αΩ; moreover, if the LMIs (48) are fulfilled for a given degree \(\mathfrak{\hat{g}}\), then the LMIs corresponding to any degree \(\mathfrak{g} > \mathfrak{\hat{g}}\) are also satisfied.

Proof

Note that (40) with (A(α),B(α),C 1(α), D 1(α), C(α),D(α)) \(\in \mathcal{P}\) and P α , T α , N α , K 1α , E 1α , F 1α , G 1α , Q 1α given by (47) is a homogeneous polynomial matrix inequality of degree \(\mathfrak{g}+1\) that can be written as

$$\begin{aligned} & \sum_{l=1}^{\mathfrak{J}(\mathfrak{g}+1)} \alpha_{1}^{k_{1}}\alpha_{2}^{k_{2}}\ldots \alpha_{N}^{k_{N}} \{\varPsi_{\alpha} \}<0, \quad k_{1}k_{2}\ldots k_{N}= \mathfrak{K}_{l}( \mathfrak{g}+1). \end{aligned}$$
(49)

Condition (48), imposed for all \(l=1,\ldots,\mathfrak{J}(\mathfrak{g}+1)\), ensures condition (40) for all αΩ, and thus the first part is proved.

Suppose that the LMIs of (48) are fulfilled for a certain degree \(\mathfrak{\hat{g}}\), that is, there exist \(\mathfrak{J}(\mathfrak{\hat{g}})\) matrices \(\bar{P}_{\mathfrak{K}_{j}(\hat{g})}\), \(T_{\mathfrak{K}_{j}(\hat{g})}\), \(N_{\mathfrak{K}_{j}(\hat{g})}\), \(K_{1_{\mathfrak{K}_{j}(\hat{g})}}\), \(E_{1_{\mathfrak{K}_{j}(\hat{g})}}\), \(F_{1_{\mathfrak{K}_{j}(\hat{g})}}\), and \(Q_{1_{\mathfrak{K}_{j}(\hat{g})}}\), \(j=1,\ldots,\mathfrak{J}(\mathfrak{\hat{g}})\), such that \(\bar{P}_{\mathfrak{\hat{g}}_{\alpha}}\), \(T_{\mathfrak{\hat{g}}_{\alpha}}\), \(N_{\mathfrak{\hat{g}}_{\alpha}}\), \(K_{1_{\mathfrak{\hat{g}}_{\alpha}}}\), \(E_{1_{\mathfrak{\hat{g}}_{\alpha}}}\), \(F_{1_{\mathfrak{\hat{g}}_{\alpha}}}\), and \(Q_{1_{\mathfrak{\hat{g}}_{\alpha}}}\), are homogeneous polynomially parameter-dependent matrices ensuring condition (40). Then, the terms of the polynomial matrices \(\bar{P}_{\alpha(\mathfrak{\hat{g}+1})}=(\alpha_{1}+\dots+\alpha_{N})\bar{P}_{\alpha(\mathfrak{\hat{g}})}\), \(T_{\alpha(\mathfrak{\hat{g}+1})}=(\alpha_{1}+\dots+\alpha_{N})T_{\alpha(\mathfrak{\hat{g}})}\), \(N_{\alpha(\mathfrak{\hat{g}+1})}=(\alpha_{1}+\dots+\alpha_{N})N_{\alpha(\mathfrak{\hat{g}})}\), \(E_{1\alpha(\mathfrak{\hat{g}+1})}=(\alpha_{1}+\dots+\alpha_{N})E_{1\alpha(\mathfrak{\hat{g}})}\), \(K_{1\alpha(\mathfrak{\hat{g}+1})}=(\alpha_{1}+\dots+\alpha_{N})K_{1\alpha(\mathfrak{\hat{g}})}\), \(F_{1\alpha(\mathfrak{\hat{g}+1})}=(\alpha_{1}+\dots+\alpha_{N})F_{1\alpha(\mathfrak{\hat{g}})}\), and \(Q_{1\alpha(\mathfrak{\hat{g}+1})}=(\alpha_{1}+\dots+\alpha_{N})Q_{1\alpha(\mathfrak{\hat{g}})}\) satisfy the LMIs of Theorem 4 corresponding to the degree \(\mathfrak{\hat{g}}+1\), which can be obtained in this case by a linear combination of the LMIs of Theorem 4 for \(\mathfrak{\hat{g}}\). □
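The degree-lifting step used in the second part of the proof (multiplying each polynomial matrix by \(\alpha_{1}+\cdots+\alpha_{N}\)) can be sketched with scalar coefficients; since α lies on the unit simplex, the lifted polynomial takes exactly the same values:

```python
def lift(poly, N):
    # multiply a homogeneous polynomial {N-tuple: coefficient} by (a1 + ... + aN)
    out = {}
    for k, c in poly.items():
        for i in range(N):
            k2 = k[:i] + (k[i] + 1,) + k[i + 1:]
            out[k2] = out.get(k2, 0) + c
    return out

def evaluate(poly, a):
    total = 0
    for k, c in poly.items():
        term = c
        for ai, ki in zip(a, k):
            term *= ai ** ki
        total += term
    return total

# degree-1 example in N = 2 parameters: p(a) = 2*a1 + 3*a2
p = {(1, 0): 2, (0, 1): 3}
q = lift(p, 2)   # (a1 + a2) * p(a), homogeneous of degree 2
```

With matrix coefficients the same bookkeeping applies entrywise, which is why a degree-\(\mathfrak{\hat{g}}\) solution always embeds into degree \(\mathfrak{\hat{g}}+1\).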

It must be pointed out that when \(n_{h_{f}}=0\) or \(n_{v_{f}}=0\), the following corollaries are obtained from Theorem 5.

Case 1: \(n_{h_{f}}\neq 0, n_{v_{f}}=0\).

Corollary 6

Define . Suppose that there exist symmetric parameter-dependent positive definite matrices \(\bar{P}_{\mathfrak{K_{j}(g)}}>0 \) and matrices \(T_{\mathfrak{K_{j}(g)}}\), \(N_{\mathfrak{K_{j}(g)}}\), \(E_{1_{\mathfrak{K_{j}(g)}}}\), \(F_{1_{\mathfrak{K_{j}(g)}}}\), \(G_{1_{\mathfrak{K_{j}(g)}}}\), \(K_{1_{\mathfrak{K_{j}(g)}}}\), and \(Q_{1_{\mathfrak{K_{j}(g)}}}\), \(\mathfrak{K_{j}(g)}\in\mathfrak{K(g)}\), \(j=1,\ldots,\mathfrak{J}(\mathfrak{g})\), such that the following LMIs hold for all \(\mathfrak{K_{l}(g+1)}\in\mathfrak{K}(\mathfrak{g}+1)\), \(l=1,\ldots,\mathfrak{J}(\mathfrak{g}+1)\):

$$ \varPsi_{\alpha}=\sum_{i\in I_{l}(\mathfrak{g}+1)} \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} M_{11}+M_{11}^{T} & \star & \star & \star \\ M_{21} & M_{22} &\star & \star \\ M_{31} & M_{32} & M_{33} & \star \\ M_{41} & -F_{1_{\mathfrak{K_{l}^{i}(g+1)}}}\varUpsilon_{1}^{T} & M_{43}& -\mathfrak{h}I \\ \end{array} \right ]<0, $$
(50)

where

$$\begin{aligned} M_{11} = & \varUpsilon_{1}(K_{1_{\mathfrak{K_{l}^{i}(g+1)}}}A_{i}+ \mathfrak{h} V_{h} S_{b}C_{1_{i}}) \varUpsilon_{1}^{T}+\varUpsilon_{1}\mathfrak{h} V_{h} S_{a}\varUpsilon_{2}^{T} \\ &{}+\varUpsilon_{2}(N_{\mathfrak{K_{l}^{i}(g+1)}}A_{i}+ \mathfrak{h}S_{b}C_{1_{i}})\varUpsilon_{1}^{T} +\varUpsilon_{2}\mathfrak{h}S_{a}\varUpsilon_{2}^{T}, \\ M_{21} = & \bar{P}_{\mathfrak{K_{l}^{i}(g+1)}}+\varUpsilon_{1} \bigl(E_{1_{\mathfrak{K_{l}^{i}(g+1)}}}A_{i}+\lambda_{1} \mathfrak{h}S_{b}C_{1_{i}} -K_{1_{\mathfrak{K_{l}^{i}(g+1)}}}^{T} \bigr)\varUpsilon_{1}^{T} \\ &{}+\varUpsilon_{2}\bigl(\lambda_{1}\mathfrak{h}S_{a}- \mathfrak{h}X^{T}\bigr)\varUpsilon_{2}^{T} + \varUpsilon_{2}\bigl(T_{\mathfrak{K_{l}^{i}(g+1)}}A_{i}+ \lambda_{1}\mathfrak{h}S_{b}C_{1_{i}} - \mathfrak{h}X^{T}V_{h}^{T}\bigr) \varUpsilon_{1}^{T} \\ &{} +\varUpsilon_{1}\bigl(\lambda_{1} \mathfrak{h}VS_{a}-N_{\mathfrak{K_{l}^{i}(g+1)}}^{T}\bigr) \varUpsilon_{2}^{T}, \\ M_{22} = & -\varUpsilon_{1}E_{1_{\mathfrak{K_{l}^{i}(g+1)}}} \varUpsilon_{1}^{T}-\varUpsilon_{2}T_{\mathfrak{K_{l}^{i}(g+1)}} \varUpsilon_{1}^{T} -\lambda_{1} \varUpsilon_{1}\mathfrak{h} V_{h} X\varUpsilon_{2}^{T}- \lambda_{2}\varUpsilon_{2}\mathfrak{h}X\varUpsilon_{1}^{T}, \\ M_{31} = & \bigl(B_{i}^{T}K_{1_{\mathfrak{K_{l}^{i}(g+1)}}}^{T}+ \mathfrak{h}D_{1_{i}}^{T}S_{b}^{T}V_{h}^{T}+Q_{1_{\mathfrak{K_{l}^{i}(g+1)}}}A_{i} \bigr)\varUpsilon_{1}^{T} \\ &{}+\bigl(B_{i}^{T}N_{\mathfrak{K_{l}^{i}(g+1)}}^{T}+ \mathfrak{h}D_{1_{i}}^{T}S_{b}^{T}\bigr) \varUpsilon_{2}^{T}, \\ M_{32} = & \bigl(B_{i}^{T}E_{1_{\mathfrak{K_{l}^{i}(g+1)}}}^{T}+ \lambda_{1}\mathfrak{h}D_{1_{i}}^{T}S_{b}^{T}V^{T}- Q_{1_{\mathfrak{K_{l}^{i}(g+1)}}}\bigr)\varUpsilon_{1}^{T} \\ &{}+\bigl(B_{i}^{T}T_{\mathfrak{K_{l}^{i}(g+1)}}^{T}+ \lambda_{2}\mathfrak{h}D_{1_{i}}^{T}S_{b}^{T} \bigr)\varUpsilon_{2}^{T}, \\ M_{33} = & Q_{1_{\mathfrak{K_{l}^{i}(g+1)}}}B_{i}+B_{i}^{T}Q_{1_{\mathfrak{K_{l}^{i}(g+1)}}}^{T}- \mathfrak{h}\gamma^2I, \\ M_{43} =& 
F_{1_{\mathfrak{K_{l}^{i}(g+1)}}}B_{i}+\mathfrak{h}D_{i}- \mathfrak{h}S_{d}D_{1_{i}}, \\ M_{41} = & (F_{1_{\mathfrak{K_{l}^{i}(g+1)}}}A_{i}+\mathfrak{h}C_{i}- \mathfrak{h}S_{d}C_{1_{i}})\varUpsilon_{1}^{T}- \mathfrak{h}S_{c}\varUpsilon_{2}^{T}. \end{aligned}$$

Then the homogeneous polynomially parameter-dependent matrices given by (47) ensure (42) for all \(\alpha\in\varOmega\). Moreover, if the LMIs of (50) are fulfilled for a given degree \(\hat{\mathfrak{g}}\), then the LMIs corresponding to any degree \(\mathfrak{g} > \hat{\mathfrak{g}}\) are also satisfied.

Similarly to Corollary 6, for the case \(n_{h_{f}}= 0\), \(n_{v_{f}}\neq0\) we have the following corollary.

Case 2: \(n_{h_{f}}= 0, n_{v_{f}}\neq0\).

Corollary 7

Define . Suppose that there exist symmetric parameter-dependent positive definite matrices \(\bar{P}_{\mathfrak{K_{j}(g)}}>0 \) and matrices \(T_{\mathfrak{K_{j}(g)}}\), \(N_{\mathfrak{K_{j}(g)}}\), \(E_{1_{\mathfrak{K_{j}(g)}}}\), \(F_{1_{\mathfrak{K_{j}(g)}}}\), \(G_{1_{\mathfrak{K_{j}(g)}}}\), \(K_{1_{\mathfrak{K_{j}(g)}}}\), and \(Q_{1_{\mathfrak{K_{j}(g)}}}\), \(\mathfrak{K_{j}(g)}\in\mathfrak{K(g)}\), \(j=1,\ldots,\mathfrak{J}(\mathfrak{g})\), such that the following LMIs hold for all \(\mathfrak{K_{l}(g+1)}\in\mathfrak{K}(\mathfrak{g}+1)\), \(l=1,\ldots,\mathfrak{J}(\mathfrak{g}+1)\):

$$ \varPsi_{\alpha}=\sum_{i\in I_{l}(\mathfrak{g}+1)} \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} M_{11}+M_{11}^{T} & \star & \star & \star \\ M_{21} & M_{22} &\star & \star \\ M_{31} & M_{32} & M_{33} & \star \\ M_{41} & -F_{1_{\mathfrak{K_{l}^{i}(g+1)}}}\varUpsilon_{1}^{T} & M_{43}& -\mathfrak{h}I \\ \end{array} \right ]<0, $$
(51)

where

$$\begin{aligned} M_{11} = & \varUpsilon_{1}(K_{1_{\mathfrak{K_{l}^{i}(g+1)}}}A_{i}+ \mathfrak{h} V_{v} S_{b}C_{1_{i}}) \varUpsilon_{1}^{T}+\varUpsilon_{1}\mathfrak{h} V_{v} S_{a}\varUpsilon_{2}^{T} \\ &{}+\varUpsilon_{2}(N_{\mathfrak{K_{l}^{i}(g+1)}}A_{i}+ \mathfrak{h}S_{b}C_{1_{i}})\varUpsilon_{1}^{T} +\varUpsilon_{2}\mathfrak{h}S_{a}\varUpsilon_{2}^{T}, \\ M_{21} = & \bar{P}_{\mathfrak{K_{l}^{i}(g+1)}}+\varUpsilon_{1} \bigl(E_{1_{\mathfrak{K_{l}^{i}(g+1)}}}A_{i}+\lambda_{1} \mathfrak{h}S_{b}C_{1_{i}} -K_{1_{\mathfrak{K_{l}^{i}(g+1)}}}^{T} \bigr)\varUpsilon_{1}^{T} \\ &{}+\varUpsilon_{2}\bigl(\lambda_{1}\mathfrak{h}S_{a}- \mathfrak{h}X^{T}\bigr)\varUpsilon_{2}^{T} + \varUpsilon_{2}\bigl(T_{\mathfrak{K_{l}^{i}(g+1)}}A_{i}+ \lambda_{1}\mathfrak{h}S_{b}C_{1_{i}} - \mathfrak{h}X^{T}V_{v}^{T}\bigr) \varUpsilon_{1}^{T} \\ &{}+\varUpsilon_{1}\bigl(\lambda_{1}\mathfrak{h}VS_{a}-N_{\mathfrak{K_{l}^{i}(g+1)}}^{T} \bigr)\varUpsilon_{2}^{T}, \\ M_{22} = & -\varUpsilon_{1}E_{1_{\mathfrak{K_{l}^{i}(g+1)}}} \varUpsilon_{1}^{T}-\varUpsilon_{2}T_{\mathfrak{K_{l}^{i}(g+1)}} \varUpsilon_{1}^{T} -\lambda_{1} \varUpsilon_{1}\mathfrak{h} V_{v} X\varUpsilon_{2}^{T}- \lambda_{2}\varUpsilon_{2}\mathfrak{h}X\varUpsilon_{1}^{T}, \\ M_{31} = & \bigl(B_{i}^{T}K_{1_{\mathfrak{K_{l}^{i}(g+1)}}}^{T}+ \mathfrak{h}D_{1_{i}}^{T}S_{b}^{T}V_{v}^{T}+Q_{1_{\mathfrak{K_{l}^{i}(g+1)}}}A_{i} \bigr)\varUpsilon_{1}^{T} \\ &{}+\bigl(B_{i}^{T}N_{\mathfrak{K_{l}^{i}(g+1)}}^{T}+ \mathfrak{h}D_{1_{i}}^{T}S_{b}^{T}\bigr) \varUpsilon_{2}^{T}, \\ M_{32} = & \bigl(B_{i}^{T}E_{1_{\mathfrak{K_{l}^{i}(g+1)}}}^{T}+ \lambda_{1}\mathfrak{h}D_{1_{i}}^{T}S_{b}^{T}V^{T}- Q_{1_{\mathfrak{K_{l}^{i}(g+1)}}}\bigr)\varUpsilon_{1}^{T} \\ &{}+\bigl(B_{i}^{T}T_{\mathfrak{K_{l}^{i}(g+1)}}^{T}+ \lambda_{2}\mathfrak{h}D_{1_{i}}^{T}S_{b}^{T} \bigr)\varUpsilon_{2}^{T}, \\ M_{33} = & Q_{1_{\mathfrak{K_{l}^{i}(g+1)}}}B_{i}+B_{i}^{T}Q_{1_{\mathfrak{K_{l}^{i}(g+1)}}}^{T}- \mathfrak{h}\gamma^2I, \\ M_{43} =& 
F_{1_{\mathfrak{K_{l}^{i}(g+1)}}}B_{i}+\mathfrak{h}D_{i}- \mathfrak{h}S_{d}D_{1_{i}}, \\ M_{41} = & (F_{1_{\mathfrak{K_{l}^{i}(g+1)}}}A_{i}+\mathfrak{h}C_{i}- \mathfrak{h}S_{d}C_{1_{i}})\varUpsilon_{1}^{T}- \mathfrak{h}S_{c}\varUpsilon_{2}^{T}. \end{aligned}$$

Then the homogeneous polynomially parameter-dependent matrices given by (47) ensure (44) for all \(\alpha\in\varOmega\). Moreover, if the LMIs of (51) are fulfilled for a given degree \(\hat{\mathfrak{g}}\), then the LMIs corresponding to any degree \(\mathfrak{g} > \hat{\mathfrak{g}}\) are also satisfied.

Case 3: \(n_{h_{f}}=0\), \(n_{v_{f}}=0\).

Corollary 8

Suppose that there exist symmetric parameter-dependent positive definite matrices \(P_{\mathfrak{K_{j}(g)}}>0 \) and matrices \(E_{\mathfrak{K_{j}(g)}}\), \(F_{\mathfrak{K_{j}(g)}}\), \(K_{\mathfrak{K_{j}(g)}}\), and \(Q_{\mathfrak{K_{j}(g)}}\), \(\mathfrak{K_{j}(g)}\in\mathfrak{K(g)}\), \(j=1,\ldots,\mathfrak{J}(\mathfrak{g})\), such that the following LMIs hold for all \(\mathfrak{K_{l}(g+1)}\in\mathfrak{K}(\mathfrak{g}+1)\), \(l=1,\ldots,\mathfrak{J}(\mathfrak{g}+1)\):

$$ \left [\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} K_{\mathfrak{K_{l}^{i}(g+1)}} A_{i}+A_{i}^{T}K^{T}_{\mathfrak{K_{l}^{i}(g+1)}} & \star & \star & \star \\ P_{\mathfrak{K_{l}^{i}(g+1)}}+ E_{\mathfrak{K_{l}^{i}(g+1)}} A_{i}-K_{\mathfrak{K_{l}^{i}(g+1)}}^{T} & M_{22} & \star & \star \\ B_{i}^{T}K_{\mathfrak{K_{l}^{i}(g+1)}}^{T} + Q_{\mathfrak{K_{l}^{i}(g+1)}} A _{i} & M_{32} & M_{33} & \star \\ F_{\mathfrak{K_{l}^{i}(g+1)}} A_{i}+\mathfrak{h}(C_{\mathfrak{K_{l}^{i}(g+1)}}-D_{f}C_{1i}) & -F_{\mathfrak{K_{l}^{i}(g+1)}} & M_{43} & -\mathfrak{h}I \end{array} \right ]<0, $$
(52)

where

$$\begin{aligned} M_{22} =&-E_{\mathfrak{K_{l}^{i}(g+1)}}-E_{\mathfrak{K_{l}^{i}(g+1)}}^{T}, \quad \quad M_{33}=Q_{\mathfrak{K_{l}^{i}(g+1)}} B_{i}+ B_{i}^{T}Q_{\mathfrak{K_{l}^{i}(g+1)}}^{T}- \mathfrak{h}\gamma^2 I, \\ M_{32} =&B_{i}^{T}E_{\mathfrak{K_{l}^{i}(g+1)}}^{T}-Q_{\mathfrak{K_{l}^{i}(g+1)}}, \quad\quad M_{43}=F_{\mathfrak{K_{l}^{i}(g+1)}} B_{i}+ \mathfrak{h}(D_{i}-D_{f}D_{1i}). \end{aligned}$$

Then the homogeneous polynomially parameter-dependent matrices given by (47) ensure (46) for all \(\alpha\in\varOmega\). Moreover, if (52) is fulfilled for a given degree \(\hat{\mathfrak{g}}\), then the LMIs corresponding to any degree \(\mathfrak{g} > \hat{\mathfrak{g}}\) are also satisfied.

Remark 6

Theorem 5 presents a sufficient condition for the solvability of the reduced-order H filtering problem. A reduced-order H filter can be selected by solving the following convex optimization problem:

$$\begin{aligned} \min \delta \quad \mbox{subject to (48)} \quad \mbox{with } \delta=\gamma^{2}. \end{aligned}$$
(53)
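Problem (53) is linear in δ and in the filter variables, so it can be handled by any off-the-shelf SDP solver. As a minimal, self-contained illustration of why vertex LMIs certify the whole uncertainty polytope (a 1-D scalar bounded-real-lemma analogue with hypothetical data, not the 2-D conditions (48)):

```python
import numpy as np

def brl_lmi(a, b, c, d, p, delta):
    """Bounded-real-lemma LMI for the scalar system x' = a x + b w,
    z = c x + d w: negativity of this matrix certifies that the squared
    H-infinity norm is below delta. For fixed p and delta, the matrix is
    affine in (a, b, c, d), so vertex negativity extends to the polytope."""
    return np.array([[2 * a * p, p * b, c],
                     [p * b, -delta, d],
                     [c, d, -1.0]])

def is_negative_definite(m):
    return np.max(np.linalg.eigvalsh(m)) < 0

# polytope of vertices a in {-1, -2}; candidate certificate p = 1, delta = 1.2
p, delta = 1.0, 1.2
vertices = [-1.0, -2.0]
assert all(is_negative_definite(brl_lmi(a, 1.0, 1.0, 0.0, p, delta))
           for a in vertices)

# affinity in a => the LMI holds for every convex combination as well,
# and the true norm of the vertex transfer function 1/(s + |a|) is 1/|a|
for t in np.linspace(0, 1, 11):
    a = t * vertices[0] + (1 - t) * vertices[1]
    assert is_negative_definite(brl_lmi(a, 1.0, 1.0, 0.0, p, delta))
    assert 1.0 / abs(a) < np.sqrt(delta)
```

With \(\delta=1.2\) both vertex LMIs are feasible with a common \(p\), so \(\gamma=\sqrt{1.2}\) is a guaranteed cost over the whole interval; minimizing δ as in (53) tightens this bound.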

5 Numerical Examples

Example 1

The system under consideration corresponds to the uncertain 2-D continuous system (1)–(3) with matrices given by

$$\begin{aligned} & A_{1}=\left [\begin{array}{c@{\quad}c} -0.468 & 0.845 \\ 0.20 & -0.423 \end{array} \right ],\quad\quad A_{2}=\left [\begin{array}{c@{\quad}c} -0.825 & 0.427 \\ 0.299 & -0.346 \end{array} \right ], \\ & A_{3}=\left [\begin{array}{c@{\quad}c} -0.744 & 0 \\ 0.52 & -0.545 \end{array} \right ], \quad\quad A_{4}= \left [\begin{array}{c@{\quad}c} -1.33 & -1.14 \\ 0.322 & -0.309 \end{array} \right ], \\ &B=\left [\begin{array}{c} -0.4545 \\ 0.9090 \end{array} \right ],\quad\quad C=\left [\begin{array}{c@{\quad}c} 0 & 100 \end{array} \right ],\quad \quad C_{1}=\left [\begin{array}{c@{\quad}c} 0 & 100 \end{array} \right ], \\ & D_{1}=1,\quad\quad D=0. \end{aligned}$$

By solving the convex optimization problem in (53) with the searched parameters \(\lambda_{1}=0.8851\) and \(\lambda_{2}=1.0568\), according to Theorem 5 and Corollaries 6, 7, and 8, the following performance levels and filter matrices were obtained:

Case 1: \(n_{h}=1, n_{v}=1, n_{h_{f}}=1, n_{v_{f}}=1\), γ=0.8272.

Case 2: \(n_{h}=1, n_{v}=1, n_{h_{f}}=0, n_{v_{f}}=1\), γ=0.9984.

Case 3: \(n_{h}=1, n_{v}=1, n_{h_{f}}=1, n_{v_{f}}=0\), γ=1.0030.

Case 4: \(n_{h}=1, n_{v}=1, n_{h_{f}}=0, n_{v_{f}}=0\), γ=1.0041.

$$\begin{aligned} D_{f}=0.9987. \end{aligned}$$

For comparison, Theorem 3 with \(\lambda_{1}=-0.0031\) and \(\lambda_{2}=0.0057\) provides a guaranteed H cost of 0.8272, while the method of [38] yields 0.8936.

Example 2

Consider an uncertain 2-D continuous system (1)–(3) with the following system matrices [3]:

H upper bounds for the error dynamics have been computed by means of the conditions of Theorem 5 for \(\hat{\mathfrak{g}}=0,\ldots, 3\), with the searched parameters \(\lambda_{1}=0.2972\) and \(\lambda_{2}=0.2940\); the results, together with the numbers K of scalar variables and L of LMI rows, are shown in Table 1.

Table 1 Guaranteed H filtering performance for different orders
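The growth of K and L with the degree is governed by the number of homogeneous monomials. Assuming \(\mathfrak{J}(\mathfrak{g})\) denotes the number of monomials \(\alpha_{1}^{k_{1}}\cdots\alpha_{N}^{k_{N}}\) with \(k_{1}+\cdots+k_{N}=\mathfrak{g}\) (the usual convention in homogeneous polynomially parameter-dependent LMI methods), it is given by a standard stars-and-bars count:

```python
from math import comb

def num_monomials(N, g):
    """Number of monomials a1^k1 * ... * aN^kN with k1 + ... + kN = g,
    i.e. the cardinality J(g) of the index set K(g) (stars-and-bars;
    assumed to match the paper's J(g))."""
    return comb(g + N - 1, N - 1)

# For a polytope with N = 4 vertices, the number of coefficient matrices
# per decision variable grows with the degree g:
print([num_monomials(4, g) for g in range(4)])  # [1, 4, 10, 20]
```

This explains the trade-off visible in Table 1: raising \(\mathfrak{g}\) reduces conservatism but multiplies the number of scalar variables and LMI rows.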

In the full-order case, with \(\mathfrak{g}=1\) (the linearly parameter-dependent approach) and \(\lambda_{1}=0.2972\), \(\lambda_{2}=0.2940\), Theorem 5 provides a guaranteed H cost of 0.6157, while the method of Corollary 1 in [38] is infeasible and Corollary 1 yields 0.6202. It is clear that the conditions of Theorem 5 provide the best results. The H performance values achieved with parameter searching and the corresponding filters for different orders are the following:

Case 1: \(n_{h}=2, n_{v}=2, n_{h_{f}}=2, n_{v_{f}}=2\), γ=0.6157.

Case 2: \(n_{h}=2, n_{v}=2, n_{h_{f}}=2, n_{v_{f}}=1\), γ=0.6353.

Case 3: \(n_{h}=2, n_{v}=2, n_{h_{f}}=1, n_{v_{f}}=2\), γ=0.6604.

Case 4: \(n_{h}=2, n_{v}=2, n_{h_{f}}=1, n_{v_{f}}=1\), γ=0.6653.

Case 5: \(n_{h}=2, n_{v}=2,n_{h_{f}}=0, n_{v_{f}}=0\), γ=0.8866.

$$\begin{aligned} D_{f}=\left [\begin{array}{c@{\quad}c} -1.0378 & 1.7119 \end{array} \right ]. \end{aligned}$$

From this comparison it can be seen that the proposed result is less conservative than those given by Corollary 1 and by [38].

6 Conclusion

A solution to the reduced-order H filtering problem has been provided for uncertain 2-D continuous systems described by the Roesser state-space model, with uncertain matrices belonging to a given polytope. The proposed methodology, based on polynomially parameter-dependent matrices and slack variables, exploits extra degrees of freedom in the solution space and thus provides less conservative results than those in the literature. Numerical examples have demonstrated the feasibility and effectiveness of the proposed methodology.

It must be pointed out that the proposed approach could be extended to other related problems, such as Fornasini–Marchesini models, or even multidimensional systems of more than two dimensions (see [1] and [24]).