1 Introduction

Two-dimensional (2-D) system theory has attracted considerable attention due to its extensive applications in many physical systems, such as state-space digital filtering, image data processing and transmission, thermal processes, biomedical imaging, gas absorption and water stream heating. Accordingly, 2-D systems have been extensively studied. To mention a few of the results obtained so far, modeling has been studied in Fornasini and Marchesini (1976, 1978), Roesser (1975) and Takagi (1985); stability has been investigated in Xia and Jia (2002), Hmamed et al. (2008), Dey et al. (2012) and Kokil et al. (2012); \(H_{\infty }\) stabilization and control were addressed in Du et al. (2001, 2002), Benhayoun et al. (2013), Hmamed et al. (2010), Xu et al. (2008), Wang et al. (2015) and Qiu et al. (2015a, 2015b, 2015c); \(H_{\infty }\) filtering for 2-D linear, delayed and Takagi–Sugeno systems has been studied, respectively, in Gao and Li (2014), Ying and Rui (2011), Gao et al. (2008), El-Kasri et al. (2012, 2013a, 2013b), Du et al. (2000), Xu et al. (2005), Wu et al. (2008), Boukili et al. (2014b), Qiu et al. (2013), Hmamed et al. (2013), Gao and Wang (2004), Chen and Fong (2006), Boukili et al. (2013, 2014a) and Meng and Chen (2014); finally, Li and Gao (2012), Gao and Li (2011) and Li et al. (2012) have addressed finite frequency \(H_{\infty }\) filtering for 2-D systems.

This paper concentrates on \(H_{\infty }\) filtering, an important problem in signal processing. \(H_{\infty }\) filtering for 2-D systems with parameter uncertainties has been studied in Xu et al. (2005), Hmamed et al. (2013), El-Kasri et al. (2013a, 2013b), Boukili et al. (2013), Chen and Fong (2006) and Wu et al. (2008). These previous results on robust \(H_{\infty }\) filtering are mostly based on quadratic stability conditions and are hence inevitably conservative, since a single Lyapunov function is used over the entire uncertainty domain.

To overcome this conservatism, this paper considers parameter-dependent Lyapunov functions, which reduce the overdesign inherent to the quadratic framework (Gao and Li 2014; Ying and Rui 2011; Gao et al. 2008; El-Kasri et al. 2012). In addition, slack matrices are introduced to decouple the product terms between the Lyapunov matrix and the system matrices and to provide extra degrees of freedom. The key of our approach is the use of four independent slack matrices together with homogeneous polynomially parameter-dependent matrices of arbitrary degree: as the degree grows, increasing precision is obtained, providing less conservative filter designs. The filters are designed for systems whose parameter uncertainties belong to a polytope of which only the vertices are known. The proposed conditions include as special cases the previous quadratic formulations, as well as the linearly parameter-dependent approaches (which use linear convex combinations of vertex matrices).

It must be emphasized that the theoretical results are given in the form of linear matrix inequalities (LMIs), which can be solved by standard numerical software, thus providing a simple methodology. An example shows the effectiveness of the proposed approach.

The organization of this paper is as follows: Sect. 2 states the problem to be solved and presents some preliminary results. The analysis of robust asymptotic stability with \(H_{\infty }\) performance is then given in Sect. 3. The \(H_{\infty }\) filter design scheme is developed in Sect. 4, followed by an example illustrating the effectiveness of the proposed approach. Finally, some conclusions are given.

Notations: The notation used throughout the paper is standard. The superscript T stands for matrix transposition. \(P>0\) means that the matrix P is real symmetric and positive definite. I is the identity matrix with appropriate dimension. In symmetric block matrices or long matrix expressions, we use an asterisk \(*\) to represent terms induced by symmetry. \(\hbox {diag}\{\ldots \}\) stands for a block-diagonal matrix. The \(l_{2}\) norm for a 2-D signal w(i,j) is given by

$$\begin{aligned}\parallel w\parallel _{2}=\sqrt{\sum _{i=0}^{\infty }\sum _{j=0}^{\infty }w^\mathrm{T}(i,j)w(i,j)}\end{aligned}$$

where w(i,j) is said to be in the space \(l_{2}\{[0,\infty ),[0,\infty )\}\), or \(l_{2}\) for simplicity, if \(\parallel w\parallel _{2}<\infty \). A 2-D signal w(i,j) in the \(l_{2}\) space is thus an energy-bounded signal.
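
As a quick illustration, the sketch below (our own, not part of the original paper) evaluates this norm for a disturbance truncated to a finite support; the grid size and the number of channels are arbitrary placeholders.

```python
import numpy as np

def l2_norm_2d(w):
    """2-D l2 norm: sqrt of the sum over (i, j) of w(i,j)^T w(i,j).

    `w` is an array of shape (N1, N2, q) holding the signal on a finite grid.
    """
    return np.sqrt(np.sum(w ** 2))

# Example: a random disturbance with q = 2 channels on a 50 x 50 support.
rng = np.random.default_rng(0)
w = rng.standard_normal((50, 50, 2))
print(l2_norm_2d(w))
```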

2 Problem Description

Consider a 2-D discrete system described by the following Roesser model:

$$\begin{aligned} \left[ \begin{array}{c} x^{h}(i+1,j) \\ x^{v}(i,j+1) \end{array} \right]= & {} A_{\tau } \left[ \begin{array}{c} x^{h}(i,j) \\ x^{v}(i,j) \end{array} \right] + B_{\tau }w(i,j)\nonumber \\ y(i,j)= & {} C_{\tau }\left[ \begin{array}{c} x^{h}(i,j) \\ x^{v}(i,j) \end{array} \right] + D_{\tau }w(i,j)\\ z(i,j)= & {} H_{\tau } \left[ \begin{array}{c} x^{h}(i,j) \\ x^{v}(i,j) \end{array} \right] \nonumber \\ x^{h}(0,k)= & {} \varphi (k),x^{v}(0,k)=\phi (k),\;\; \forall k, \nonumber \end{aligned}$$
(1)

where \(x^{h}(i,j)\in R^{n_{1}}\) is the state vector in the horizontal direction, \(x^{v}(i,j)\in R^{n_{2}}\) the state vector in the vertical direction, \(y(i,j)\in R^{m}\) is the measured signal vector, \(z(i,j)\in R^{p}\) the signal to be estimated, and \(w(i,j)\in R^{q}\) is the disturbance signal vector. It is assumed that w(i,j) belongs to \(l_{2}\{[0,\infty ),[0,\infty )\}\). The system matrices are decomposed in blocks as follows:

$$\begin{aligned} A_{\tau }= & {} \left[ \begin{array}{cc} A_{11\tau } &{} A_{12\tau } \\ A_{21\tau } &{} A_{22\tau } \\ \end{array} \right] ,\quad B_{\tau }=\left[ \begin{array}{c} B_{1\tau } \\ B_{2\tau } \\ \end{array} \right] ,\nonumber \\ C_{\tau }= & {} \left[ \begin{array}{cc} C_{1\tau } &{} C_{2\tau } \\ \end{array} \right] , \quad H_{\tau }=\left[ \begin{array}{cc} H_{1\tau } &{} H_{2\tau } \\ \end{array} \right] \end{aligned}$$
(2)

where the dimensions of each block are compatible with the vectors.
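
For readers who want to experiment with model (1), the following sketch (our own illustration, not from the paper) propagates the Roesser recursion over a finite grid with zero boundary states; all matrices and the block sizes \(n_{1}, n_{2}\) are placeholders to be supplied by the user.

```python
import numpy as np

def simulate_roesser(A, B, C, D, H, w, n1, n2):
    """Propagate the Roesser model (1) over the finite support of w.

    w has shape (N1, N2, q); boundary states are taken to be zero.
    Returns the measured signal y(i, j) and the signal z(i, j).
    """
    N1, N2, _ = w.shape
    xh = np.zeros((N1 + 1, N2, n1))   # horizontal state x^h(i, j)
    xv = np.zeros((N1, N2 + 1, n2))   # vertical state   x^v(i, j)
    y = np.zeros((N1, N2, C.shape[0]))
    z = np.zeros((N1, N2, H.shape[0]))
    for i in range(N1):
        for j in range(N2):
            x = np.concatenate([xh[i, j], xv[i, j]])
            x_next = A @ x + B @ w[i, j]
            xh[i + 1, j] = x_next[:n1]   # x^h(i+1, j)
            xv[i, j + 1] = x_next[n1:]   # x^v(i, j+1)
            y[i, j] = C @ x + D @ w[i, j]
            z[i, j] = H @ x
    return y, z
```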

The system matrices are assumed to be uncertain and bounded in a polyhedral domain

$$\begin{aligned} \varOmega _{\tau }\triangleq (A_{\tau }, B_{\tau }, C_{\tau }, D_{\tau }, H_{\tau })\in \mathcal {R} \end{aligned}$$
(3)

where \(\mathcal {R}\) denotes a polytope defined as

$$\begin{aligned} \mathcal {R}\triangleq \left\{ \varOmega _{\tau }|\varOmega _{\tau }=\sum _{i=1}^{s}\tau _{i}\varOmega _{i}; \tau \in \varGamma \right\} \end{aligned}$$
(4)

with \(\varOmega _{i}\triangleq (A_{i}, B_{i}, C_{i}, D_{i}, H_{i})\) denoting the vertices of \(\mathcal {R}\) and

$$\begin{aligned} \varGamma \triangleq \left\{ (\tau _{1},\tau _{2},\ldots ,\tau _{s}):\sum _{i=1}^{s}\tau _{i}=1,\;\tau _{i}\ge 0\right\} \end{aligned}$$
(5)

is the unit simplex.
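
To make the polytopic description concrete, the short sketch below (an illustration with hypothetical two-vertex data, not taken from the paper) draws a point of the unit simplex \(\varGamma \) and forms the corresponding convex combination of vertex matrices.

```python
import numpy as np

def sample_simplex(s, rng):
    """Draw (tau_1, ..., tau_s) with tau_i > 0 and sum tau_i = 1."""
    tau = rng.exponential(size=s)
    return tau / tau.sum()

def convex_combination(vertices, tau):
    """Form, e.g., A_tau = sum_i tau_i * A_i from the vertex matrices A_i."""
    return sum(t * V for t, V in zip(tau, vertices))

rng = np.random.default_rng(1)
A_vertices = [np.array([[0.3, 0.0], [0.1, 0.2]]),   # hypothetical A_1
              np.array([[0.4, 0.1], [0.0, 0.5]])]   # hypothetical A_2
tau = sample_simplex(len(A_vertices), rng)
A_tau = convex_combination(A_vertices, tau)
```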

The boundary condition of the system fulfills

$$\begin{aligned} \lim _{n\rightarrow \infty } \sum _{k=1}^{n}(|x^{h}(0,k)|^{2}+|x^{v}(0,k)|^{2})<\infty \end{aligned}$$
(6)

In this paper, we consider a 2-D filter represented by the following Roesser model:

$$\begin{aligned} \left[ \begin{array}{c} \hat{x}^{h}(i+1,j) \\ \hat{x}^{v}(i,j+1) \end{array} \right]= & {} A_{f}\left[ \begin{array}{c} \hat{x}^{h}(i,j) \\ \hat{x}^{v}(i,j) \end{array} \right] + B_{f} y(i,j)\nonumber \\ \hat{z}(i,j)= & {} C_{f} \left[ \begin{array}{c} \hat{x}^{h}(i,j) \\ \hat{x}^{v}(i,j) \end{array} \right] \\ \hat{x}^{h}(0,k)=0,&\hat{x}^{v}(0,k)=0, \;\forall k, \nonumber \end{aligned}$$
(7)

where \(\hat{x}^{h}(i,j)\in R^{n_{1}}\) is the filter state vector in the horizontal direction, \(\hat{x}^{v}(i,j)\in R^{n_{2}}\) is the filter state vector in the vertical direction and \(\hat{z}(i,j)\in R^{p}\) is the estimate of z(i,j). The matrices are real valued and are decomposed in the following block form

$$\begin{aligned} A_{f}= & {} \left[ \begin{array}{cc} A_{f11} &{} A_{f12} \\ A_{f21} &{} A_{f22} \\ \end{array} \right] ,\;\;\; B_{f}=\left[ \begin{array}{c} B_{f1} \\ B_{f2} \\ \end{array} \right] ,\nonumber \\ C_{f}= & {} \left[ \begin{array}{cc} C_{f1}&{} C_{f2} \\ \end{array} \right] . \end{aligned}$$
(8)

Defining the augmented state vectors

$$\begin{aligned} \zeta ^{h}(i,j)= \left[ \begin{array}{cc} x^{h}(i,j)^\mathrm{T} &{} \hat{x}^{h}(i,j)^\mathrm{T} \\ \end{array} \right] ^\mathrm{T},\nonumber \\ \zeta ^{v}(i,j)= \left[ \begin{array}{cc} x^{v}(i,j)^\mathrm{T} &{}\hat{x}^{v}(i,j)^\mathrm{T} \\ \end{array} \right] ^\mathrm{T} \end{aligned}$$
(9)

and the estimation error

$$\begin{aligned} e(i,j)=z(i,j)-\hat{z}(i,j) \end{aligned}$$
(10)

gives the following filtering error system:

$$\begin{aligned} \left[ \begin{array}{c} \zeta ^{h}(i+1,j) \\ \zeta ^{v}(i,j+1) \end{array} \right]= & {} \bar{A}_{\tau }\left[ \begin{array}{c} \zeta ^{h}(i,j) \\ \zeta ^{v}(i,j) \end{array} \right] + \bar{B}_{\tau } w(i,j)\nonumber \\ e(i,j)= & {} \bar{C}_{\tau } \left[ \begin{array}{c} \zeta ^{h}(i,j) \\ \zeta ^{v}(i,j) \end{array} \right] \end{aligned}$$
(11)

where

$$\begin{aligned} \bar{A}_{\tau }= & {} \varUpsilon ^\mathrm{T}\left[ \begin{array}{cc} A_{\tau } &{} 0 \\ B_{f}C_{\tau } &{} A_{f} \\ \end{array} \right] \varUpsilon ,\quad \bar{B}_{\tau }= \varUpsilon ^\mathrm{T}\left[ \begin{array}{c} B_{\tau }\\ B_{f} D_{\tau } \\ \end{array} \right] ,\nonumber \\ \bar{C}_{\tau }= & {} \left[ \begin{array}{cc} H_{\tau } &{} -C_{f} \\ \end{array} \right] \varUpsilon ,\nonumber \\ \varUpsilon= & {} \left[ \begin{array}{c} \varUpsilon _{1} \\ \varUpsilon _{2} \\ \end{array} \right] =\left[ \begin{array}{cc} \varUpsilon _{11} &{} \varUpsilon _{12} \\ \varUpsilon _{21} &{} \varUpsilon _{22} \\ \end{array} \right] =\left[ \begin{array}{cccc} I_{n_{1}} &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} I_{n_{2}}&{} 0 \\ 0 &{} I_{n_{1}} &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} I_{n_{2}} \\ \end{array} \right] \end{aligned}$$
(12)

The transfer function of the filtering error system is then

$$\begin{aligned} T_{ew}(z_{1},z_{2},\tau )=\bar{C}_{\tau }[\hbox {diag}\{ z_{1}I_{2\times n_{1}},z_{2}I_{2\times n_{2}}\}-\bar{A}_{\tau }]^{-1}\bar{B}_{\tau } \end{aligned}$$
(13)

Thus, the robust \(H_{\infty }\) filtering error problem can be stated as follows:

\(\mathbf {Problem\,description}{:}\) Given the Roesser system (1) with polytopic parameter uncertainty (3), find a filter (7) such that the filtering error system (11) is robustly asymptotically stable for all \(\tau \in \varGamma \) and satisfies the following robust \(H_{\infty }\) performance:

$$\begin{aligned} \Vert T_{ew}(z_{1},z_{2},\tau )\Vert _{\infty }<\gamma , \;\; \forall \tau \in \varGamma \end{aligned}$$
(14)

where \(\gamma \) is a given positive scalar.

Remark 2.1

The parameter uncertainties considered in this paper are assumed to be of polytopic type, entering all the matrices of the system model. This description has been widely used in robust control and filtering (see, e.g., Gao and Wang 2004; Xia and Jia 2002), as many practical systems present parameter uncertainties which can be exactly described by, or at least bounded by, a polytope.

To derive our main results, we use Finsler’s Lemma:

Lemma 2.2

(Lacerda et al. 2011) Let \(\zeta \in \mathbb {R}^{n}\), \(\mathcal {Q}\in \mathbb {R}^{n\times n}\) and \(\mathcal {B}\in \mathbb {R}^{m\times n}\) with rank\((\mathcal {B})=r<n\), and let \(\mathcal {B}^{\perp }\in \mathbb {R}^{n\times (n-r)}\) be a full-column-rank matrix satisfying \(\mathcal {B}\mathcal {B}^{\perp }=0\). Then, the following conditions are equivalent:

1. \(\zeta ^\mathrm{T}\mathcal {Q}\zeta <0, \quad \forall \zeta \ne 0:\mathcal {B}\zeta =0\)

2. \(\mathcal {B}^{\perp T}\mathcal {Q}\mathcal {B}^{\perp }<0\)

3. \(\exists \mu \in \mathbb {R}: \mathcal {Q}-\mu \mathcal {B}^\mathrm{T}\mathcal {B}<0\)

4. \(\exists \mathcal {X}\in \mathbb {R}^{n\times m}: \mathcal {Q}+\mathcal {X}\mathcal {B}+\mathcal {B}^\mathrm{T}\mathcal {X}^\mathrm{T}<0\)

3 \(H_{\infty }\) Filtering Analysis

In this section, the filtering analysis problem is considered. More specifically, we assume that the filter matrices in (8) are known, and we study conditions under which the filtering error system (11) is asymptotically stable with \(H_{\infty }\) norm bounded by \(\gamma \). To solve the robust \(H_{\infty }\) filtering problem, we first recall the following result (Gao et al. 2008; Du et al. 2002).

Lemma 3.1

Given a positive scalar \(\gamma \), if \((A_{\tau }, B_{\tau }, C_{\tau }, D_{\tau }, H_{\tau }) \in \mathcal {R}\) are arbitrary but fixed, then the filtering error system (11) is asymptotically stable and satisfies the \(H_{\infty }\) performance \(\gamma \) if there exists a block-diagonal matrix \(P_{\tau }=\hbox {diag}\{P^{h}_{\tau }\;,\; P^{v}_{\tau }\}>0\), where \(P^{h}_{\tau }\in \mathbb {R}^{2n_{1}\times 2n_{1}}\) and \(P^{v}_{\tau }\in \mathbb {R}^{2n_{2}\times 2n_{2}}\), such that

$$\begin{aligned} \left[ \begin{array}{cccc} -P_{\tau } \;\;\;&{}\;\;\; P_{\tau }\bar{A}_{\tau } \;\;\;&{}\;\;\; P_{\tau }\bar{B}_{\tau } \;\;\;&{}\;\;\; 0 \\ \bar{A}^\mathrm{T}_{\tau }P_{\tau } &{} -P_{\tau }&{} 0 &{} \bar{C}^\mathrm{T}_{\tau } \\ \bar{B}^\mathrm{T}_{\tau }P_{\tau } &{} 0 &{} -\gamma ^{2} I &{} 0 \\ 0 &{} \bar{C}_{\tau } &{} 0 &{} -I\\ \end{array} \right] <0 \end{aligned}$$
(15)
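
Lemma 3.1 can be checked numerically at a fixed vertex. The sketch below is our own illustration (the paper itself uses Yalmip/SeDuMi in MATLAB): it minimizes \(\gamma ^{2}\) subject to (15) with cvxpy and the SCS solver, assuming both packages are installed and that \(\bar{A}_{\tau }, \bar{B}_{\tau }, \bar{C}_{\tau }\) are available as numerical arrays.

```python
import cvxpy as cp
import numpy as np

def hinf_bound_fixed(Abar, Bbar, Cbar, nh, nv, eps=1e-7):
    """Smallest gamma satisfying LMI (15) for a fixed filtering error system.

    nh = 2*n1 and nv = 2*n2 are the horizontal/vertical dimensions of (11).
    """
    n, q, p = nh + nv, Bbar.shape[1], Cbar.shape[0]
    Ph = cp.Variable((nh, nh), symmetric=True)
    Pv = cp.Variable((nv, nv), symmetric=True)
    P = cp.bmat([[Ph, np.zeros((nh, nv))],
                 [np.zeros((nv, nh)), Pv]])
    g2 = cp.Variable(nonneg=True)                     # gamma^2
    lmi = cp.bmat([
        [-P, P @ Abar, P @ Bbar, np.zeros((n, p))],
        [Abar.T @ P, -P, np.zeros((n, q)), Cbar.T],
        [Bbar.T @ P, np.zeros((q, n)), -g2 * np.eye(q), np.zeros((q, p))],
        [np.zeros((p, n)), Cbar, np.zeros((p, q)), -np.eye(p)],
    ])
    m = n + n + q + p
    constraints = [Ph >> eps * np.eye(nh), Pv >> eps * np.eye(nv),
                   lmi << -eps * np.eye(m)]
    cp.Problem(cp.Minimize(g2), constraints).solve(solver=cp.SCS)
    return np.sqrt(g2.value)
```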

Proposition 3.2

Given a positive scalar \(\gamma \), if \((A_{\tau }, B_{\tau }, C_{\tau }, D_{\tau }, H_{\tau }) \in \mathcal {R}\) are arbitrary but fixed, the filtering error system (11) is asymptotically stable with \(H_{\infty }\) norm bounded by \(\gamma \) if there exist parameter-dependent symmetric positive definite matrices \(P_{\tau }=\hbox {diag}\{P^{h}_{\tau }\;,\; P^{v}_{\tau }\}\), and parameter-dependent matrices \(M_{\tau }, S_{\tau }, R_{\tau }\) and \(F_{\tau }\) such that:

$$\begin{aligned} \varTheta =\left[ \begin{array}{cccc} \varGamma _{1} \;\;&{}\;\; M^\mathrm{T}_{\tau }\bar{A}_{\tau }-S_{\tau }&{} M^\mathrm{T}_{\tau }\bar{B}_{\tau }-R_{\tau } &{} -F_{\tau } \\ * \;\;&{}\;\;\varGamma _{2} \;\;&{}\;\;\varGamma _{3}\;\;&{}\;\; \varGamma _{4} \\ * &{} * &{}\varGamma _{5}&{}\bar{B}^\mathrm{T}_{\tau }F_{\tau } \\ * &{} * &{} *&{} -I \\ \end{array} \right] <0 \end{aligned}$$
(16)

where

$$\begin{aligned} \varGamma _{1}= & {} P_{\tau }-M_{\tau }-M^\mathrm{T}_{\tau },\;\; \varGamma _{2}=S^\mathrm{T}_{\tau }\bar{A}_{\tau }+\bar{A}^\mathrm{T}_{\tau }S_{\tau }-P_{\tau }\\ \varGamma _{3}= & {} S^\mathrm{T}_{\tau }\bar{B}_{\tau }+\bar{A}^\mathrm{T}_{\tau }R_{\tau },\;\;\; \varGamma _{4}=\bar{A}^\mathrm{T}_{\tau }F_{\tau }+\bar{C}^\mathrm{T}_{\tau }\\ \varGamma _{5}= & {} \bar{B}^\mathrm{T}_{\tau }R_{\tau }+R^\mathrm{T}_{\tau }\bar{B}_{\tau }-\gamma ^{2}I. \end{aligned}$$

Proof

To prove the proposition above, we consider the following matrices

$$\begin{aligned} \mathcal {Q}= & {} \left[ \begin{array}{cccc} P_{\tau } &{} 0 &{} 0 &{} 0 \\ 0 &{} -P_{\tau } &{} 0 &{}\bar{C}^\mathrm{T}_{\tau } \\ 0 &{} 0 &{} -\gamma ^{2}I &{} 0 \\ 0 &{} \bar{C}_{\tau } &{} 0 &{} -I \\ \end{array} \right] ,\; \mathcal {B}= \left[ \begin{array}{c} -I \\ \bar{A}^\mathrm{T}_{\tau } \\ \bar{B}^\mathrm{T}_{\tau } \\ 0 \\ \end{array} \right] ^\mathrm{T}, \end{aligned}$$
$$\begin{aligned} \mathcal {B}^{\perp T}= & {} \left[ \begin{array}{cccc} \bar{A}_{\tau }^\mathrm{T} &{} I &{} 0 &{} 0 \\ \bar{B}_{\tau }^\mathrm{T} &{} 0 &{} I &{} 0 \\ 0 &{} 0 &{} 0 &{} I \\ \end{array} \right] ,\quad \mathcal {X}=\left[ \begin{array}{c} M_{\tau },\;\; S_{\tau },\;\; R_{\tau },\;\; F_{\tau } \end{array} \right] ^\mathrm{T} \end{aligned}$$

With these choices, condition (4) of Lemma 2.2 is exactly inequality (16), whereas condition (2) of Lemma 2.2 reads

$$\begin{aligned} \left[ \begin{array}{ccc} -P_{\tau }+\bar{A}^\mathrm{T}_{\tau }P_{\tau }\bar{A}_{\tau } &{} \bar{A}^\mathrm{T}_{\tau }P_{\tau }\bar{B}_{\tau } &{} \bar{C}^\mathrm{T}_{\tau } \\ * &{} \bar{B}^\mathrm{T}_{\tau }P_{\tau }\bar{B}_{\tau }-\gamma ^{2}I &{} 0 \\ * &{} 0 &{} -I \\ \end{array} \right] <0 \end{aligned}$$
(17)

By a Schur complement argument, inequality (17) is equivalent to condition (15); since Lemma 2.2 states that conditions (2) and (4) are equivalent, inequality (16) guarantees (15), which completes the proof.
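
As a quick sanity check of the annihilator used above, the following sketch (an illustration with random data, not from the paper) verifies numerically that \(\mathcal {B}\mathcal {B}^{\perp }=0\) for the block matrices defined in the proof.

```python
import numpy as np

rng = np.random.default_rng(2)
n, q, p = 4, 2, 1                      # error-system sizes (placeholders)
Abar = rng.standard_normal((n, n))
Bbar = rng.standard_normal((n, q))

# B = [-I, Abar, Bbar, 0] and its annihilator B_perp, as in the proof.
B = np.hstack([-np.eye(n), Abar, Bbar, np.zeros((n, p))])
B_perp = np.vstack([
    np.hstack([Abar, Bbar, np.zeros((n, p))]),
    np.hstack([np.eye(n), np.zeros((n, q + p))]),
    np.hstack([np.zeros((q, n)), np.eye(q), np.zeros((q, p))]),
    np.hstack([np.zeros((p, n + q)), np.eye(p)]),
])
print(np.allclose(B @ B_perp, 0))      # True: B annihilates B_perp
```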

Remark 3.3

\(M_{\tau }, S_{\tau }, R_{\tau }\) and \(F_{\tau }\) act as slack variables that provide extra degrees of freedom in the solution space of the robust \(H_{\infty }\) filtering problem. By setting \(R_{\tau }=0\) and \(F_{\tau }=0\), Proposition 3.2 coincides with Theorem 1 in Ying and Rui (2011). Thanks to the slack variable matrices, we obtain an LMI in which the Lyapunov matrix \(P_{\tau }\) is not involved in any product with the system matrices. This enables us to derive a robust \(H_{\infty }\) filtering condition that is less conservative than previous results, due to the extra degrees of freedom (see the numerical example at the end of the paper).

4 \(H_{\infty }\) Filter Design

In this section, a methodology is established for designing the \(H_{\infty }\) filter (7), that is, to determine the filter matrices (8) such that the filtering error system (11) is asymptotically stable with an \(H_{\infty }\)-norm bounded by \(\gamma \).

Based on Proposition 3.2, we select for variables \(P_{\tau }\) and \(M_{\tau }\) the following structures (Gao et al. 2008):

$$\begin{aligned} P_{\tau }= & {} \left[ \begin{array}{cccc} P^{h}_{1\tau }\;\;&{}\;\;P^{h}_{2\tau }\;\;&{}\;\; 0 \;\;&{}\;\; 0 \\ (P^{h}_{2\tau })^\mathrm{T}&{}P^{h}_{3\tau }&{} 0 &{} 0 \\ 0 &{} 0 &{} P^{v}_{1\tau } &{} P^{v}_{2\tau } \\ 0 &{} 0 &{} (P^{v}_{2\tau })^\mathrm{T} &{} P^{v}_{3\tau } \\ \end{array} \right] ,\nonumber \\ M_{\tau }= & {} \left[ \begin{array}{cccc} M^{h}_{1\tau }\;\;\;&{}\;\;\;M^{h}_{4}\;\;\;&{}\;\;\; 0 \;\;\;&{}\;\;\; 0 \\ M^{h}_{2\tau }&{}M^{h}_{3}&{} 0 &{} 0 \\ 0 &{} 0 &{} M^{v}_{1\tau } &{} M^{v}_{4} \\ 0 &{} 0 &{} M^{v}_{2\tau } &{} M^{v}_{3} \\ \end{array} \right] \end{aligned}$$
(18)

Then, let the slack variables \(S_{\tau }\), \(F_{\tau }\) and \(R_{\tau }\) take the following structure (Lacerda et al. 2011)

$$\begin{aligned} S_{\tau }= & {} \left[ \begin{array}{cccc} S^{h}_{1\tau }\;\;&{}\;\;\lambda _{1}M^{h}_{4}\;\;&{}\;\; 0 \;\;&{}\;\; 0 \\ S^{h}_{2\tau }&{}\lambda _{2}M^{h}_{3}&{} 0 &{} 0 \\ 0 &{} 0 &{} S^{v}_{1\tau } &{} \lambda _{3}M^{v}_{4} \\ 0 &{} 0 &{} S^{v}_{2\tau } &{} \lambda _{4}M^{v}_{3} \\ \end{array} \right] ,\nonumber \\ R_{\tau }= & {} \left[ \begin{array}{c} R^{h}_{\tau }\;\;,\;\;0 \;\;,\;\;R^{v}_{\tau }\;\;,\;\;0 \end{array} \right] , \nonumber \\ F_{\tau }= & {} \left[ \begin{array}{c} F^{h}_{\tau }\;\;,\;\;0\;\;,\;\;F^{v}_{\tau }\;\;,\;\; 0 \end{array} \right] \end{aligned}$$
(19)

where \(P_{\tau }, M_{1\tau }^{h}, M_{2\tau }^{h}, M_{1\tau }^{v}, M_{2\tau }^{v}, S_{1\tau }^{h}, S_{2\tau }^{h}, S_{1\tau }^{v}, S_{2\tau }^{v}, R^{h}_{\tau }, R^{v}_{\tau }, F^{h}_{\tau }, F^{v}_{\tau }\) depend on the parameter \(\tau \), while \(M_{3}^{h}, M_{4}^{h}, M_{3}^{v}\), and \(M_{4}^{v}\) are fixed for the entire uncertainty domain and, without loss of generality, invertible; the scalar parameters \(\lambda _{1}, \lambda _{2}, \lambda _{3}\) and \(\lambda _{4}\) will be used as optimization parameters.

Remark 4.1

The structure of \(S_{\tau }\) in (19) is different from the one proposed in Ying and Rui (2011), in which \(S = \delta M\), so that S depended on M. It is important to note that \(S_{1\tau }^{h}, S_{2\tau }^{h}, S_{1\tau }^{v}\) and \(S_{2\tau }^{v}\) in the new structure (19) are free slack variables, completely independent of M. This provides extra degrees of freedom in the solution space of the LMI optimization problems derived from Theorem 4.3.

Define matrices

$$\begin{aligned} \varPi= & {} \hbox {diag} \left\{ I,\; (M_{3}^{h})^{-T}(M^{h}_{4})^\mathrm{T},\;I,\;(M_{3}^{v})^{-T}(M^{v}_{4})^\mathrm{T} \right\} ,\\ \bar{P}_{\tau }= & {} \left[ \begin{array}{cccc} \bar{P}^{h}_{1\tau }&{}\bar{P}^{h}_{2\tau }&{} 0 &{} 0 \\ (\bar{P}^{h}_{2\tau })^\mathrm{T}&{}\bar{P}^{h}_{3\tau }&{} 0 &{} 0 \\ 0 &{} 0 &{} \bar{P}^{v}_{1\tau }&{} \bar{P}^{v}_{2\tau } \\ 0 &{} 0 &{} (\bar{P}^{v}_{2\tau })^\mathrm{T} &{} \bar{P}^{v}_{3\tau } \\ \end{array} \right] =\varPi ^\mathrm{T} P_{\tau }\varPi \end{aligned}$$

Applying the congruence transformation \(\hbox {diag}\{\varPi , \varPi , I, I\}\) to (16), we get

$$\begin{aligned} \varTheta =\left[ \begin{array}{cccc} \varPi ^\mathrm{T}\varGamma _{1}\varPi \;&{}\; \varPi ^\mathrm{T}\varGamma _{2}\varPi \;&{}\; \varPi ^\mathrm{T}\varGamma _{3} \;&{}\; -\varPi ^\mathrm{T}F_{\tau } \\ \varPi ^\mathrm{T}\varGamma _{2}^\mathrm{T}\varPi &{} \varPi ^\mathrm{T}\varGamma _{4}\varPi &{}\varPi ^\mathrm{T}\varGamma _{5} &{} \varPi ^\mathrm{T}\varGamma _{6} \\ \varGamma _{3}^\mathrm{T}\varPi &{} \varGamma _{5}^\mathrm{T}\varPi &{}\varGamma _{7}&{}\bar{B}^\mathrm{T}_{\tau }F_{\tau } \\ -F^\mathrm{T}_{\tau }\varPi &{} \varGamma _{6}^\mathrm{T}\varPi &{} F^\mathrm{T}_{\tau }\bar{B}_{\tau }&{} -I \\ \end{array} \right] <0 \end{aligned}$$
(20)

where

$$\begin{aligned} \varGamma _{1}= & {} P_{\tau }-M_{\tau }-M^\mathrm{T}_{\tau },\;\; \varGamma _{2}=M^\mathrm{T}_{\tau }\bar{A}_{\tau }-S_{\tau },\\ \varGamma _{3}= & {} M^\mathrm{T}_{\tau }\bar{B}_{\tau }-R_{\tau },\;\; \varGamma _{4}=S^\mathrm{T}_{\tau }\bar{A}_{\tau }+\bar{A}^\mathrm{T}_{\tau }S_{\tau }-P_{\tau }\\ \varGamma _{5}= & {} S^\mathrm{T}_{\tau }\bar{B}_{\tau }+\bar{A}^\mathrm{T}_{\tau }R_{\tau },\;\; \varGamma _{6}=\bar{A}^\mathrm{T}_{\tau }F_{\tau }+\bar{C}^\mathrm{T}_{\tau },\\ \varGamma _{7}= & {} \bar{B}^\mathrm{T}_{\tau }R_{\tau }+R^\mathrm{T}_{\tau }\bar{B}_{\tau }-\gamma ^{2}I. \end{aligned}$$

We define

(21)

Substituting the change of variables (21) into inequality (20), we obtain the following result.

Proposition 4.2

Given the 2-D system in (1), the filter in (7) and any fixed \(\tau \in \varGamma \), there exist a matrix \(P_{\tau }=\hbox {diag}\{P^{h}_{\tau }, P^{v}_{\tau }\}\) and filter matrices \(A_{f}, B_{f}, C_{f}\) satisfying (15) if there exist matrices

\(\bar{P}_{\tau }=\hbox {diag}\left\{ \begin{array}{lr} \bar{P}^{h}_{\tau } \;&{}\; \bar{P}^{v}_{\tau } \\ \end{array} \right\} >0, M_{\tau }=\hbox {diag}\left\{ \begin{array}{lr} M^{h}_{\tau } \;&{}\; M^{v}_{\tau } \\ \end{array} \right\} \),

\(S_{\tau }=\hbox {diag}\left\{ \begin{array}{lr} S^{h}_{\tau } \;&{}\; S^{v}_{\tau } \\ \end{array} \right\} , N_{\tau }=\hbox {diag}\left\{ \begin{array}{lr} N^{h}_{\tau } \;&{}\; N^{v}_{\tau } \\ \end{array} \right\} \),

\(U=\hbox {diag}\left\{ \begin{array}{lr} U^{h} \;&{}\; U^{v} \\ \end{array} \right\} , Q_{\tau }=\hbox {diag}\left\{ \begin{array}{lr} Q^{h}_{\tau } \;&{}\; Q^{v}_{\tau } \\ \end{array} \right\} \),

\(R_{\tau }=\left[ \begin{array}{cc} (R^{h}_{\tau })^\mathrm{T}\;&\; (R^{v}_{\tau })^\mathrm{T} \end{array} \right] ^\mathrm{T}, F_{\tau }=\left[ \begin{array}{cc} (F^{h}_{\tau })^\mathrm{T}\;&\; (F^{v}_{\tau })^\mathrm{T} \end{array} \right] ^\mathrm{T}\),

\(\bar{A}_{f}\), \(\bar{B}_{f}\), \(\bar{C}_{f}, \varLambda _{1}=\hbox {diag}\{\lambda _{1},\; \lambda _{3}\}, \varLambda _{2}=\hbox {diag}\{\lambda _{2},\; \lambda _{4}\}\) with \(\lambda _{1}, \lambda _{2}, \lambda _{3}\) and \(\lambda _{4}\) real scalars satisfying:

$$\begin{aligned} \Xi _{\tau }=\left[ \begin{array}{cccc} \bar{P}_{\tau }-\varPsi _{1\tau } \;\;&{}\;\;\varPsi _{2\tau } \;\;&{}\;\; \varPsi _{3\tau }\;\;&{}\;\; -\varUpsilon _{1}^\mathrm{T}F^\mathrm{T}_{\tau } \\ * &{} \varPsi _{4\tau } &{} \varPsi _{5\tau }&{} \varPsi _{6\tau } \\ * &{} * &{} \varPsi _{7\tau } &{}B^\mathrm{T}_{\tau }F^\mathrm{T}_{\tau } \\ * &{} * &{} * &{} -I \\ \end{array} \right] <0 \end{aligned}$$
(22)

where

$$\begin{aligned} \varPsi _{1\tau }= & {} \varUpsilon _{1}^\mathrm{T}[M^\mathrm{T}_{\tau }+M_{\tau }]\varUpsilon _{1}+\varUpsilon _{1}^\mathrm{T}N^\mathrm{T}_{\tau }\varUpsilon _{2}+\varUpsilon _{2}^\mathrm{T}N_{\tau }\varUpsilon _{1}\\&+\,\varUpsilon _{2}^\mathrm{T}U^\mathrm{T}[\varUpsilon _{1}+\varUpsilon _{2}]+[\varUpsilon ^\mathrm{T}_{1}+\varUpsilon ^\mathrm{T}_{2}]U\varUpsilon _{2}.\\ \varPsi _{2\tau }= & {} \varUpsilon _{1}^\mathrm{T}[M_{\tau }A_{\tau }+\bar{B}_{f}C_{\tau }]\varUpsilon _{1}+\varUpsilon _{1}^\mathrm{T}\bar{A}_{f}\varUpsilon _{2}+\varUpsilon _{2}^\mathrm{T}\bar{A}_{f}\varUpsilon _{2}\\&+\,\varUpsilon ^\mathrm{T}_{2}[N_{\tau }A_{\tau }+\bar{B}_{f}C_{\tau }]\varUpsilon _{1}-\varUpsilon _{1}^\mathrm{T}S_{\tau }^\mathrm{T}\varUpsilon _{1} - \varUpsilon _{1}^\mathrm{T}Q^\mathrm{T}_{\tau }\varUpsilon _{2} \\&-\,\varUpsilon _{2}^\mathrm{T}[\varLambda _{1}U^\mathrm{T}\varUpsilon _{1}+ \varLambda _{2}U^\mathrm{T}\varUpsilon _{2}].\\ \varPsi _{3\tau }= & {} \varUpsilon _{1}^\mathrm{T}[M_{\tau }B_{\tau }+\bar{B}_{f}D_{\tau }]+\varUpsilon _{2}^\mathrm{T}[N_{\tau }B_{\tau }+\bar{B}_{f}D_{\tau }]\\&-\,\varUpsilon _{1}^\mathrm{T}R^\mathrm{T}_{\tau }.\\ \varPsi _{4\tau }= & {} -\bar{P}_{\tau }+\varUpsilon _{1}^\mathrm{T}[S_{\tau }A_{\tau }+A^\mathrm{T}_{\tau }S^\mathrm{T}_{\tau }]\varUpsilon _{1}+\varUpsilon _{1}^\mathrm{T}[\varLambda _{1}\bar{B}_{f}C_{\tau }\\&+\, C^\mathrm{T}_{\tau }\bar{B}_{f}^\mathrm{T}\varLambda _{1}]\varUpsilon _{1}+\varUpsilon _{2}^\mathrm{T}[Q_{\tau }A_{\tau } + \varLambda _{2}\bar{B}_{f}C_{\tau }]\varUpsilon _{1}\\&+[\varUpsilon _{2}^\mathrm{T}\varLambda _{2}+ \varUpsilon _{1}^\mathrm{T}\varLambda _{1}]\bar{A}_{f}\varUpsilon _{2} + \varUpsilon _{1}^\mathrm{T}[A^\mathrm{T}_{\tau }Q^\mathrm{T}_{\tau } \\&+\, C^\mathrm{T}_{\tau }\bar{B}_{f}^\mathrm{T}\varLambda _{2}]\varUpsilon _{2}+\varUpsilon ^\mathrm{T}_{2}\bar{A}_{f}^\mathrm{T}[\varLambda _{2}\varUpsilon _{2}+\varLambda _{1}\varUpsilon _{1}].\\ \varPsi _{5\tau }= & {} \varUpsilon _{1}^\mathrm{T}S_{\tau }B_{\tau }+[\varUpsilon _{1}^\mathrm{T}\varLambda _{1}+\varUpsilon _{2}^\mathrm{T}\varLambda _{2}]\bar{B}_{f} D_{\tau }+\varUpsilon _{2}^\mathrm{T}Q_{\tau }B_{\tau }\\&+\,\varUpsilon _{1}^\mathrm{T}A^\mathrm{T}_{\tau }R^\mathrm{T}_{\tau }.\\ \varPsi _{6\tau }= & {} \varUpsilon _{1}^\mathrm{T}H^\mathrm{T}_{\tau }-\varUpsilon _{2}^\mathrm{T}\bar{C}_{f}^\mathrm{T}+\varUpsilon _{1}^\mathrm{T}A^\mathrm{T}_{\tau }F^\mathrm{T}_{\tau }.\\ \varPsi _{7\tau }= & {} R_{\tau }B_{\tau }+B^\mathrm{T}_{\tau }R^\mathrm{T}_{\tau }-\gamma ^{2}I. \end{aligned}$$

Moreover, under the above condition, the matrices for an admissible robust \(H_{\infty }\) filter are given by

$$\begin{aligned} \left[ \begin{array}{cc} A_{f} &{} B_{f} \\ C_{f} &{} 0 \\ \end{array} \right] =\left[ \begin{array}{cc} U^{-1} &{} 0 \\ 0 &{} I \\ \end{array} \right] \left[ \begin{array}{cc} \bar{A}_{f} &{} \bar{B}_{f} \\ \bar{C}_{f} &{} 0 \\ \end{array} \right] \end{aligned}$$
(23)

Proof

The transfer function of the filter (7) from y(i,j) to \(\hat{z}(i,j)\) is given by

$$\begin{aligned} T_{\hat{z}y}(z_{1}, z_{2})=C_{f}[\hbox {diag}\{z_{1}I_{n_{1}}, z_{2}I_{n_{2}}\}-A_{f}]^{-1}B_{f} \end{aligned}$$
(24)

Substituting (21) into this transfer function and considering

$$\begin{aligned} U^{h} = M_{4}^{h}(M_{3}^{h})^{-T}(M_{4}^{h})^\mathrm{T}, \quad U^{v} = M_{4}^{v}(M_{3}^{v})^{-T} (M_{4}^{v})^\mathrm{T}, \end{aligned}$$
(25)

we get

$$\begin{aligned} T_{\hat{z}y}(z_{1}, z_{2})=\bar{C}_{f}[\hbox {diag}\{z_{1}I_{n_{1}},z_{2}I_{n_{2}}\}-U^{-1}\bar{A}_{f}]^{-1}U^{-1}\bar{B}_{f}. \end{aligned}$$
(26)

Therefore, the desired filter is given by (23) and the proof is completed.
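
In practice, once the LMI variables \(\bar{A}_{f}, \bar{B}_{f}, \bar{C}_{f}\) and U have been computed, the filter matrices follow directly from (23). A minimal sketch (our illustration, assuming these variables are available as numpy arrays):

```python
import numpy as np

def recover_filter(U, Abar_f, Bbar_f, Cbar_f):
    """Recover (A_f, B_f, C_f) from the transformed variables via (23)."""
    U_inv = np.linalg.inv(U)          # U is assumed invertible (see Sect. 4)
    A_f = U_inv @ Abar_f
    B_f = U_inv @ Bbar_f
    C_f = Cbar_f                      # C_f is not transformed in (23)
    return A_f, B_f, C_f
```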

Before presenting the formulation of Proposition 4.2 using homogeneous polynomially parameter-dependent matrices, some definitions and preliminaries are needed to represent and handle products and sums of homogeneous polynomials. First, we define the homogeneous polynomially parameter-dependent matrices of degree g by

$$\begin{aligned} \bar{P}_{\tau }=\displaystyle \sum _{j=1}^{J(g)}\tau _{1}^{k_{1}}\tau _{2}^{k_{2}}\ldots \tau _{s}^{k_{s}}\bar{P}_{k_{j}(g)},\quad [k_{1},k_{2},\ldots ,k_{s}]=K_{j}(g) \nonumber \\ \end{aligned}$$
(27)

Similarly, matrices \(M_{\tau }, N_{\tau }, R_{\tau }, Q_{\tau }, S_{\tau }\) and \(F_{\tau }\) take the same form.

The notation used above is explained as follows. Define K(g) as the set of s-tuples obtained as all possible combinations of \([k_{1},k_{2},\ldots ,k_{s}]\), with \(k_{i}\) nonnegative integers, such that \(k_{1}+k_{2}+\cdots +k_{s}=g\). \(K_{j}(g)\) is the j-th s-tuple of K(g), which is ordered lexically, \(j=1,\ldots ,J(g)\). Since the number of vertices of the polytope is equal to s, the number of elements in K(g) is given by \(J(g)=\frac{(s +g-1)!}{g!(s -1)!}\). These elements define the subscripts \(k_{1},k_{2},\ldots ,k_{s}\) of the constant matrices

\(\bar{P}_{k_{1},k_{2},\ldots k_{s}}\triangleq \bar{P}_{k_{j}(g)}, \quad M_{k_{1},k_{2},\ldots k_{s}}\triangleq M_{k_{j}(g)}\),

\(N_{k_{1},k_{2},\ldots k_{s}}\triangleq N_{k_{j}(g)}, \quad R_{k_{1},k_{2},\ldots k_{s}}\triangleq R_{k_{j}(g)}\),

\(Q_{k_{1},k_{2},\ldots k_{s}}\triangleq Q_{k_{j}(g)}, \quad S_{k_{1},k_{2},\ldots k_{s}}\triangleq S_{k_{j}(g)}\),

\(F_{k_{1},k_{2},\ldots k_{s}}\triangleq F_{k_{j}(g)}\), which are used to construct the homogeneous polynomially parameter-dependent matrices \(\bar{P}_{\tau }, M_{\tau }, N_{\tau }, R_{\tau }, Q_{\tau }, S_{\tau }, F_{\tau }\) in (27).

For each set K(g), define also the set I(g) with elements \(I_{j}(g)\) given by the subsets of \(i,\; i\in \{1,2,\ldots ,s\}\), associated with the s-tuples \(K_{j}(g)\) whose \(k_{i}\)’s are nonzero. For each \(i, i=1,\ldots ,s\), define the s-tuple \(K_{j}^{i}(g)\) as being equal to \(K_{j}(g)\) but with \(k_{i}>0\) replaced by \(k_{i}-1\). Note that the s-tuples \(K_{j}^{i}(g)\) are defined only in the cases where the corresponding \(k_{i}\) is positive. Note also that, when applied to the elements of \(K(g+1)\), the s-tuples \(K_{j}^{i}(g+1)\) define subscripts \(k_{1},k_{2},\ldots ,k_{s}\) of matrices \(\bar{P}_{k_{1},k_{2},\ldots ,k_{s}}, M_{k_{1},k_{2},\ldots ,k_{s}}, N_{k_{1},k_{2},\ldots ,k_{s}},\) \(Q_{k_{1},k_{2},\ldots ,k_{s}}, S_{k_{1},k_{2},\ldots ,k_{s}},\) \(F_{k_{1},k_{2},\ldots ,k_{s}}, R_{k_{1},k_{2},\ldots ,k_{s}},\) associated with homogeneous polynomially parameter-dependent matrices of degree g. Finally, define the constant scalar coefficients \(\beta _{j}^{i}(g+1)=\frac{g!}{(k_{1}!k_{2}!\ldots k_{s}!)}\), with \([k_{1},k_{2},\ldots ,k_{s}]= K_{j}^{i}(g+1)\).
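
These combinatorial objects are easy to generate programmatically. The sketch below (our illustration, not from the paper) enumerates K(g) in lexical order, computes J(g), and evaluates the coefficients \(\beta _{j}^{i}(g+1)\) for a given s-tuple.

```python
from itertools import product
from math import factorial

def K(g, s):
    """All s-tuples of nonnegative integers summing to g, lexically ordered."""
    return sorted(k for k in product(range(g + 1), repeat=s) if sum(k) == g)

def J(g, s):
    """Number of elements of K(g): (s + g - 1)! / (g! (s - 1)!)."""
    return factorial(s + g - 1) // (factorial(g) * factorial(s - 1))

def beta(k_tuple, g):
    """beta_j^i(g+1) = g! / (k_1! k_2! ... k_s!) for a tuple in K_j^i(g+1)."""
    denom = 1
    for k in k_tuple:
        denom *= factorial(k)
    return factorial(g) // denom

s, g = 4, 2                       # four vertices, degree-2 relaxation
assert len(K(g, s)) == J(g, s)    # J(2) = 10 for s = 4
print(K(g, s))
```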

The main result in this section is given in the following Theorem 4.3.

Theorem 4.3

Given a stable 2-D system (1) and a scalar \(\gamma > 0\), a filter (7) exists such that the filtering error system (11) is robustly asymptotically stable and satisfies (14) if there exist matrices

$$\begin{aligned} \bar{P}_{K_{j}(g)}= & {} \mathrm{diag}\{\bar{P}^{h}_{K_{j}(g)},\;\; \bar{P}^{v}_{K_{j}(g)}\}>0, \\ M_{K_{j}(g)}= & {} \mathrm{diag} \{M^{h}_{K_{j}(g)},\;\; M^{v}_{K_{j}(g)}\}, \\ R_{K_{j}(g)}= & {} \mathrm{diag}\{R^{h}_{K_{j}(g)},\;\; R^{v}_{K_{j}(g)}\}, \\ N_{K_{j}(g)}= & {} \mathrm{diag}\{N^{h}_{K_{j}(g)},\;\; N^{v}_{K_{j}(g)}\}, \\ Q_{K_{j}(g)}= & {} \mathrm{diag}\{Q^{h}_{K_{j}(g)},\;\; Q^{v}_{K_{j}(g)}\},\\ S_{K_{j}(g)}= & {} \left[ \begin{array}{cc} (S^{h}_{K_{j}(g)})^\mathrm{T}&(S^{v}_{K_{j}(g)})^\mathrm{T} \end{array} \right] ^\mathrm{T},\\ F_{K_{j}(g)}= & {} \left[ \begin{array}{cc} (F^{h}_{K_{j}(g)})^\mathrm{T}&(F^{v}_{K_{j}(g)})^\mathrm{T} \end{array} \right] ^\mathrm{T},\\ K_{j}(g)\in & {} K(g),\; j=1,2,\ldots ,J(g), \end{aligned}$$

\(\bar{A}_{f},\) \(\bar{B}_{f},\) \(\bar{C}_{f}, \varLambda _{1}=\mathrm{diag}\{\lambda _{1},\;\; \lambda _{3}\}, \varLambda _{2}=\mathrm{diag}\{\lambda _{2},\;\; \lambda _{4}\}\) with \(\lambda _{1}, \lambda _{2}, \lambda _{3}\) and \(\lambda _{4}\) real scalars such that the following LMIs hold for all \(K_{l}(g+1)\in K(g+1)\), \(l=1,\ldots ,J(g+1)\):

$$\begin{aligned} \Xi _{k}=\displaystyle \sum _{i\in I_{l}(g+1)} \left[ \begin{array}{cccc} \varPsi _{1} \;\;&{}\;\;\varPsi _{2} \;\;&{}\;\; \varPsi _{3}\;\;&{}\;\; -\varUpsilon _{1}^\mathrm{T}F^\mathrm{T}_{K_{j}(g)} \\ * &{} \varPsi _{4} &{} \varPsi _{5}&{} \varPsi _{6} \\ * &{} * &{} \varPsi _{7} &{}B^\mathrm{T}_{i}F^\mathrm{T}_{K_{j}(g)} \\ * &{} * &{} * &{} -\beta _{j}^{i}(g+1)I \\ \end{array} \right] <0 \end{aligned}$$
(28)

where

$$\begin{aligned} \varPsi _{1}= & {} \bar{P}_{K_{j}(g)}-\varUpsilon _{1}^\mathrm{T}[M^\mathrm{T}_{K_{j}(g)}+M_{K_{j}(g)}]\varUpsilon _{1}-\varUpsilon _{1}^\mathrm{T}N^\mathrm{T}_{K_{j}(g)}\varUpsilon _{2}\\&-\,\varUpsilon _{2}^\mathrm{T}N_{K_{j}(g)}\varUpsilon _{1}-\beta _{j}^{i}(g+1)\varUpsilon _{2}^\mathrm{T}U^\mathrm{T}[\varUpsilon _{1}+\varUpsilon _{2}]\\&-\,\beta _{j}^{i}(g+1)[\varUpsilon ^\mathrm{T}_{1}+\varUpsilon ^\mathrm{T}_{2}]U\varUpsilon _{2}.\\ \varPsi _{2}= & {} \varUpsilon _{1}^\mathrm{T}[M_{K_{j}(g)}A_{i}+\beta _{j}^{i}(g+1)\bar{B}_{f}C_{i}]\varUpsilon _{1}-\varUpsilon _{1}^\mathrm{T}S_{K_{j}(g)}^\mathrm{T}\varUpsilon _{1}\\&-\,\varUpsilon _{1}^\mathrm{T}Q^\mathrm{T}_{K_{j}(g)}\varUpsilon _{2}+\beta _{j}^{i}(g+1)[\varUpsilon _{1}^\mathrm{T}\bar{A}_{f}\varUpsilon _{2}+\varUpsilon _{2}^\mathrm{T}\bar{A}_{f}\varUpsilon _{2}]\\&+\,\varUpsilon ^\mathrm{T}_{2}[N_{K_{j}(g)}A_{i}+\beta _{j}^{i}(g+1)\bar{B}_{f}C_{i}]\varUpsilon _{1}\\&-\,\beta _{j}^{i}(g+1)\varUpsilon _{2}^\mathrm{T}[\varLambda _{1}U^\mathrm{T}\varUpsilon _{1}+ \varLambda _{2}U^\mathrm{T}\varUpsilon _{2}].\\ \varPsi _{3}= & {} \varUpsilon _{1}^\mathrm{T}[M_{K_{j}(g)}B_{i}+\beta _{j}^{i}(g+1)\bar{B}_{f}D_{i}]+\varUpsilon _{2}^\mathrm{T}[N_{K_{j}(g)}B_{i}\\&+\,\beta _{j}^{i}(g+1)\bar{B}_{f}D_{i}]-\varUpsilon _{1}^\mathrm{T}R^\mathrm{T}_{K_{j}(g)}.\\ \varPsi _{4}= & {} -\bar{P}_{K_{j}(g)}+\varUpsilon _{1}^\mathrm{T}[S_{K_{j}(g)}A_{i}+A^\mathrm{T}_{i}S^\mathrm{T}_{K_{j}(g)}]\varUpsilon _{1}\\&+\,\beta _{j}^{i}(g+1)\varUpsilon _{1}^\mathrm{T}[\varLambda _{1}\bar{B}_{f}C_{i}+C^\mathrm{T}_{i}\bar{B}_{f}^\mathrm{T}\varLambda _{1}]\varUpsilon _{1}\\&+\,\varUpsilon _{2}^\mathrm{T}[Q_{K_{j}(g)}A_{i}+\beta _{j}^{i}(g+1)\varLambda _{2}\bar{B}_{f}C_{i}]\varUpsilon _{1}\\&+\,\beta _{j}^{i}(g+1)[\varUpsilon _{2}^\mathrm{T}\varLambda _{2}+\varUpsilon _{1}^\mathrm{T}\varLambda _{1}]\bar{A}_{f}\varUpsilon _{2}+\varUpsilon _{1}^\mathrm{T}[A^\mathrm{T}_{i}Q^\mathrm{T}_{K_{j}(g)}\\&+\,\beta _{j}^{i}(g+1)C^\mathrm{T}_{i}\bar{B}_{f}^\mathrm{T}\varLambda _{2}]\varUpsilon _{2}+\beta _{j}^{i}(g+1)\varUpsilon ^\mathrm{T}_{2}\bar{A}_{f}^\mathrm{T}[\varLambda _{2}\varUpsilon _{2}\\&+\,\varLambda _{1}\varUpsilon _{1}].\\ \varPsi _{5}= & {} \varUpsilon _{1}^\mathrm{T}S_{K_{j}(g)}B_{i}+\beta _{j}^{i}(g+1)[\varUpsilon _{1}^\mathrm{T}\varLambda _{1}+\varUpsilon _{2}^\mathrm{T}\varLambda _{2}]\bar{B}_{f} D_{i}\\&+\,\varUpsilon _{2}^\mathrm{T}Q_{K_{j}(g)}B_{i}+\varUpsilon _{1}^\mathrm{T}A^\mathrm{T}_{i}R^\mathrm{T}_{K_{j}(g)}.\\ \varPsi _{6}= & {} \beta _{j}^{i}(g+1)[\varUpsilon _{1}^\mathrm{T}H^\mathrm{T}_{i}-\varUpsilon _{2}^\mathrm{T}\bar{C}_{f}^\mathrm{T}]+\varUpsilon _{1}^\mathrm{T}A^\mathrm{T}_{i}F^\mathrm{T}_{K_{j}(g)}.\\ \varPsi _{7}= & {} R_{K_{j}(g)}B_{i}+B^\mathrm{T}_{i}R^\mathrm{T}_{K_{j}(g)}-\beta _{j}^{i}(g+1)\gamma ^{2}I. \end{aligned}$$

Then, the homogeneous polynomial matrices \(\bar{P}_{\tau }, M_{\tau }, N_{\tau }, R_{\tau }, Q_{\tau }, S_{\tau }\) and \(F_{\tau }\) ensure that (22) holds for all \(\tau \in \varGamma \).

Moreover, if the LMIs of (28) are fulfilled for a given degree \(\bar{g}\), then the LMIs corresponding to any degree \(g>\bar{g}\) are also satisfied.

Proof

\(\mathbf {First\,part}\): Since \(\bar{P}_{K_{j}(g)}>0\) for all \(K_{j}(g)\in K(g),\; j=1,\ldots ,J(g)\), we know that \(\bar{P}_{\tau }\) defined in (27) is positive definite for all \(\tau \in \varGamma \). Now, note that \(\Xi _{\tau }\) in (22), with \((A_{\tau }, B_{\tau }, C_{\tau }, D_{\tau }, H_{\tau })\in \mathcal {R}\) and \(\bar{P}_{\tau }, M_{\tau }, N_{\tau }, Q_{\tau }, S_{\tau }, R_{\tau }\) and \(F_{\tau }\) given by (27), is a homogeneous polynomial matrix function of degree \(g+1\) in \(\tau \) (using \(\sum _{i=1}^{s}\tau _{i}=1\) to homogenize the parameter-independent terms) that can be written as

$$\begin{aligned} \Xi (\tau )=\displaystyle \sum _{l=1}^{J(g+1)}\tau _{1}^{k_{1}}\tau _{2}^{k_{2}}\ldots \tau _{s}^{k_{s}}\;\Xi _{k} \end{aligned}$$
(29)

Condition (28), imposed for all \(l=1,\ldots ,J(g+1)\), ensures that \(\Xi _{\tau }<0\) for all \(\tau \in \varGamma \), and thus the first part is proved.

\(\mathbf {Second\,part}\): Suppose that (28) is fulfilled for a certain degree \(\hat{g}\), that is, there exist \(J(\hat{g})\) symmetric positive definite matrices \(\bar{P}_{K_{j}(\hat{g})}\) and matrices \(M_{K_{j}(\hat{g})}, N_{K_{j}(\hat{g})}, Q_{K_{j}(\hat{g})}, S_{K_{j}(\hat{g})}, R_{K_{j}(\hat{g})}, F_{K_{j}(\hat{g})}, j=1,\ldots ,J(\hat{g})\), such that \(\bar{P}_{\tau }\), \(M_{\tau }, N_{\tau }\), \(Q_{\tau }, S_{\tau }, F_{\tau }\) and \(R_{\tau }\) defined in (27) are homogeneous polynomially parameter-dependent matrices assuring \(\Xi _{\tau }<0\). Then, the polynomial matrices \(\tilde{\bar{P}}_{\tau }=(\tau _{1}+\tau _{2}+\cdots +\tau _{s})\bar{P}_{\tau }, \tilde{M}_{\tau }=(\tau _{1}+\tau _{2}+\cdots +\tau _{s})M_{\tau }, \tilde{N}_{\tau }=(\tau _{1}+\tau _{2}+\cdots +\tau _{s})N_{\tau }, \tilde{Q}_{\tau }=(\tau _{1}+\tau _{2}+\cdots +\tau _{s})Q_{\tau }, \tilde{S}_{\tau }=(\tau _{1}+\tau _{2}+\cdots +\tau _{s})S_{\tau }, \tilde{F}_{\tau }=(\tau _{1}+\tau _{2}+\cdots +\tau _{s})F_{\tau }\) and \(\tilde{R}_{\tau }=(\tau _{1}+\tau _{2}+\cdots +\tau _{s})R_{\tau }\), of degree \(\hat{g}+1\), also satisfy the inequalities of Theorem 4.3 corresponding to the degree \(\hat{g}+1\), which can be obtained in this case as linear combinations of the inequalities of Theorem 4.3 for \(\hat{g}\). This completes the proof.

Remark 4.4

When the scalars \(\lambda _{1}, \lambda _{2}, \lambda _{3}\) and \(\lambda _{4}\) of Theorem 4.3 are fixed to constant values, (28) becomes an LMI in the remaining variables. To select values for these scalars, a derivative-free optimization routine (for example, fminsearch in MATLAB) can be used to minimize a performance measure such as the disturbance attenuation level \(\gamma \).
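
A possible way to implement this outer search is sketched below as our own illustration (not the paper's code): it wraps an LMI solve for fixed \((\lambda _{1},\ldots ,\lambda _{4})\) inside a Nelder–Mead search with scipy.optimize.minimize, the Python counterpart of fminsearch. The callable `gamma_of_lambdas` is an assumption of this sketch: a user-supplied function that fixes the scalars, solves (28) and returns the minimal guaranteed \(\gamma \) (infinity if infeasible).

```python
import numpy as np
from scipy.optimize import minimize

def tune_lambdas(gamma_of_lambdas, lambdas0=(0.0, 0.0, 0.0, 0.0)):
    """Outer derivative-free search over (lambda_1, ..., lambda_4).

    `gamma_of_lambdas` solves the LMIs (28) with the scalars fixed and
    returns the minimal guaranteed gamma (np.inf when infeasible), e.g.,
    built with cvxpy as in the earlier analysis sketch.
    """
    result = minimize(gamma_of_lambdas, x0=np.asarray(lambdas0),
                      method="Nelder-Mead")   # analogous to MATLAB's fminsearch
    return result.x, result.fun               # best scalars, achieved gamma

# Toy usage with a stand-in objective (a real run would call the LMI solver):
best_lam, best_gamma = tune_lambdas(lambda lam: float(np.sum(lam ** 2) + 1.8))
```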

Remark 4.5

As the degree g of the polynomial increases, the conditions become less conservative, since new free variables are added to the LMIs. Although the number of LMIs also increases, each LMI becomes easier to fulfill thanks to the extra degrees of freedom provided by the new variables; as a consequence, better \(H_{\infty }\) guaranteed costs can be obtained.

5 Numerical Example

Consider the following 2-D static field model described by the difference equation (El-Kasri et al. 2012):

$$\begin{aligned} \eta (i+1,j+1)= & {} \tau _{1}\eta (i,j+1)+\tau _{2}\eta (i+1,j)\nonumber \\&-\,\tau _{1}\tau _{2}\eta (i,j)+\omega _{1}(i,j) \end{aligned}$$
(30)

where \(\eta (i,j)\) is the state of the random field at spatial coordinate \((i,j)\), \(\omega _{1}(i,j)\) is a noise input, and \(\tau _{1}\), \(\tau _{2}\) are the vertical and horizontal correlation coefficients of the random field, respectively, satisfying \(\tau _{1}^{2}<1\) and \(\tau _{2}^{2}<1\). The measured output is

$$\begin{aligned} y(i,j)= & {} \tau _{1}\eta (i,j+1)+(1-\tau _{1}\tau _{2})\eta (i,j)\nonumber \\&+\,\omega _{2}(i,j) \end{aligned}$$
(31)

where \(\omega _{2}(i,j)\) is the measurement noise. The signal to be estimated is

$$\begin{aligned} z(i,j)=\eta (i,j) \end{aligned}$$
(32)

As in Du et al. (2002), define \(x^{h}(i,j)=\eta (i,j+1)-\tau _{2}\eta (i,j)\), \(x^{v}(i,j)=\eta (i,j)\) and \(\omega (i,j)=[\omega _{1}(i,j)\;\; \omega _{2}(i,j)]^\mathrm{T}\). It is easy to see that (30)–(32) can be converted into the following 2-D Roesser model:

$$\begin{aligned} \left[ \begin{array}{c} x^{h}(i+1,j) \\ x^{v}(i,j+1) \end{array} \right]= & {} \left[ \begin{array}{cc} \tau _{1} &{} 0 \\ 1 &{} \tau _{2} \\ \end{array} \right] \left[ \begin{array}{c} x^{h}(i,j) \\ x^{v}(i,j) \end{array} \right] + \left[ \begin{array}{cc} 1 &{} 0 \\ 0 &{} 0 \\ \end{array} \right] w(i,j)\nonumber \\ y(i,j)= & {} \left[ \begin{array}{cc} \tau _{1}&{} 1 \\ \end{array} \right] \left[ \begin{array}{c} x^{h}(i,j) \\ x^{v}(i,j) \end{array} \right] + \left[ \begin{array}{cc} 0&{} 1 \\ \end{array} \right] w(i,j)\nonumber \\ z(i,j)= & {} \left[ \begin{array}{cc} 0&{} 1 \\ \end{array} \right] \left[ \begin{array}{c} x^{h}(i,j) \\ x^{v}(i,j) \end{array} \right] \end{aligned}$$
(33)

where \(0.15\le \tau _{1}\le 0.45\) and \(0.35\le \tau _{2}\le 0.85\). The uncertain 2-D system therefore corresponds to a four-vertex polytopic system.
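
For completeness, the four vertices \((A_{i}, B_{i}, C_{i}, D_{i}, H_{i})\) can be generated as below (our own illustration of the polytope construction, not the paper's code), by evaluating (33) at the extreme values of \(\tau _{1}\) and \(\tau _{2}\).

```python
from itertools import product
import numpy as np

def roesser_matrices(t1, t2):
    """System matrices of (33) for given correlation coefficients (t1, t2)."""
    A = np.array([[t1, 0.0], [1.0, t2]])
    B = np.array([[1.0, 0.0], [0.0, 0.0]])
    C = np.array([[t1, 1.0]])
    D = np.array([[0.0, 1.0]])
    H = np.array([[0.0, 1.0]])
    return A, B, C, D, H

# The four vertices of the polytope: all combinations of the extreme values.
vertices = [roesser_matrices(t1, t2)
            for t1, t2 in product([0.15, 0.45], [0.35, 0.85])]
print(len(vertices))   # 4
```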

The LMIs (28) were solved using Yalmip and SeDuMi in MATLAB 7.6, for increasing values of the degree g:

For degree \(g=0\), the proposed optimization gives \(\lambda _{1}=-0.1064, \lambda _{2}=0.0228, \lambda _{3}=0.0027\) and \(\lambda _{4}=-0.0002\). For these scalars, the guaranteed attenuation level is \(\gamma =2.4342\) and the corresponding filter matrices are:

$$\begin{aligned} A_{f}= & {} \left[ \begin{array}{cc} 0.3614 &{} 0.0000\\ 0.1538 &{} -0.0000\\ \end{array} \right] ,\;\;\; B_{f}=\left[ \begin{array}{c} -0.0920\\ -0.8213\\ \end{array} \right] ,\\ C_{f}= & {} \left[ \begin{array}{cc} -0.0605 &{} -0.9868\\ \end{array} \right] \end{aligned}$$

The \(H_{\infty }\) norms obtained with this filter at the vertices of the uncertainties are given in Table 1.

Table 1 \(H_{\infty }\) norms at the vertices \((g=0)\)

On the other hand, when \(g=1\), the optimization gives \(\lambda _{1}=-20.3646, \lambda _{2}=0.0662,\) \(\lambda _{3}=0.6369\) and \(\lambda _{4}=-0.1645\); the disturbance attenuation obtained is \(\gamma =1.8043\) and the corresponding filter matrices are:

$$\begin{aligned} A_{f}= & {} \left[ \begin{array}{cc} 0.6476 &{} 1.9416\\ -0.0151 &{} 0.2548\\ \end{array} \right] ,\;\;\; B_{f}=\left[ \begin{array}{c} 4.0789\\ -1.2683\\ \end{array} \right] ,\\ C_{f}= & {} \left[ \begin{array}{cc} 0.0045 &{} -0.4699\\ \end{array} \right] . \end{aligned}$$

The \(H_{\infty }\) norms at the vertices are now given in Table 2.

Table 2 \(H_{\infty }\) norms at the vertices \((g=1)\)

For degree \(g=2\), \(\lambda _{1}=-3.3940, \lambda _{2}=0.0696, \lambda _{3}=0.6291\) and \(\lambda _{4}=-0.1640\); the attenuation level is \(\gamma =1.8042\) and the corresponding filter matrices are:

$$\begin{aligned} A_{f}= & {} \left[ \begin{array}{cc} 0.6360 &{} 0.2615\\ -0.1152 &{} 0.2544\\ \end{array} \right] ,\;\;\; B_{f}=\left[ \begin{array}{c} 0.5458\\ -1.2617\\ \end{array} \right] ,\\ C_{f}= & {} \left[ \begin{array}{cc} 0.0338 &{} -0.4725\\ \end{array} \right] \end{aligned}$$

Table 3 \(H_{\infty }\) norms at the vertices \((g=2)\)

For the filter designed with \(g=2\), the actual \(H_{\infty }\) norms calculated at the four extreme plants are presented in Table 3: it can be seen that all of them are below the guaranteed bound 1.8042.
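
These vertex norms can be reproduced (approximately) by gridding the 2-D frequency response (13) of the error system; the sketch below is our own illustration and assumes that \(\bar{A}, \bar{B}, \bar{C}\) of (11) have already been built from a vertex \((A_{i}, B_{i}, C_{i}, D_{i}, H_{i})\) and the designed filter.

```python
import numpy as np

def hinf_norm_2d(Abar, Bbar, Cbar, n1e, n2e, n_grid=200):
    """Approximate 2-D H-infinity norm of (13) by gridding the unit bicircle.

    n1e, n2e are the horizontal/vertical dimensions of the error system (11).
    """
    worst = 0.0
    thetas = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    for th1 in thetas:
        for th2 in thetas:
            # diag{z1 I, z2 I} with z1 = exp(j*th1), z2 = exp(j*th2)
            Z = np.diag(np.concatenate([np.full(n1e, np.exp(1j * th1)),
                                        np.full(n2e, np.exp(1j * th2))]))
            T = Cbar @ np.linalg.solve(Z - Abar, Bbar)
            worst = max(worst, np.linalg.norm(T, 2))   # largest singular value
    return worst
```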

In summary, it has been shown that less conservative filter designs are achieved as g grows by applying the polynomially parameter-dependent method proposed here.

A comparison with the results obtained with the techniques proposed in Gao and Li (2014), Ying and Rui (2011) and Gao et al. (2008) is presented in Table 4, showing the improvement obtained with the methodology proposed in this paper.

Table 4 Comparison with previous results

The number of LMIs, the number of scalar variables and the CPU time to solve them are compared in Table 5. It must be pointed out that, for this example, increasing the polynomial degree to \(g > 2\) does not further improve the noise attenuation.

Table 5 Numerical complexity of Theorem 4.3, where \(\mathbf {L}\) is the number of LMI rows, \(\mathbf {V}\) the number of scalar variables, \(\mathbf {N}\) the number of LMIs and \(\mathbf {Time (s)}\) the CPU time to solve the LMIs

6 Conclusion

This article has investigated the \(H_{\infty }\) filtering problem for 2-D discrete systems described by uncertain Roesser models. A new condition for \(H_{\infty }\) performance analysis has been proposed in the LMI framework, using a Lyapunov function approach and adding slack matrix variables with specific structures that make it possible to reduce the conservatism of previous works. A numerical example illustrates the effectiveness of the proposed method. As future research, we will extend this technique to nonlinear systems described by fuzzy dynamic models, using the sum-of-squares (SOS) technique.