1 Introduction

Preventing disasters and unexpected halts during system operation is a critical issue in engineering. This motivates the design of stabilizing controllers. However, the robustness property of the controller, which is clearly desirable for safety, tends to mask faults. This makes the task of fault detection (FD) difficult, particularly when the goal is to detect faults that degrade performance. The traditional approach to FD is known as passive fault detection (PFD) (Chen and Patton 1999; Ding 2013), in which robustness is a challenging issue. Recently, various techniques have been developed for robust PFD of uncertain linear time-delay systems and Takagi–Sugeno models using unknown input observers (UIOs) (see, for example, Ahmadizadeh et al. 2013, 2014a, b). It is well known that a major drawback of the passive approach is that masked faults cannot be detected. An alternative to PFD is active fault detection (AFD), which was introduced and has been extended by Campbell and Nikoukhah (2004), Nikoukhah et al. (2010), Esna-Ashari et al. (2012a, b), Forouzanfar and Khosrowjerdi (2014), Niemann (2003, 2006, 2012), Niemann and Poulsen (2015), Simandl and Puncochar (2009) and Puncochar et al. (2014). In the model-based AFD approach, an exogenous signal called the test signal is designed and injected into the system in such a way that the separation of the models corresponding to healthy and faulty behavior is guaranteed during the injection time-period, as shown in Fig. 1. As mentioned above, the AFD approach can detect masked faults efficiently, but in comparison with PFD, designing the test signal and injecting it into the system requires efficient numerical algorithms with more computation, as reported in Campbell and Nikoukhah (2004). Recently, the effect of feedback in closed-loop systems on the optimal generation of the test signal for AFD has been considered.
It is worth mentioning that, given proper feedback, a test signal with optimal energy can also be achieved (Esna-Ashari et al. 2012a, b), but the design of such feedback was not discussed.

Fig. 1 Integrated test signal and controller design

Indeed, because of the robustness property of the controller, the effect of additive signals such as the test signal can be considerably attenuated. Therefore, to detect faults reliably, a test signal with considerable energy is needed. However, a large test signal has many undesirable effects on the system performance. It is therefore necessary to formulate a unified synthesis problem that considers the trade-off between the controller performance and the optimal test signal for AFD.

In comparison with the recent literature, this paper generalizes the proposed approach in Esna-Ashari et al. (2012b) for the integrated AFD and control, and its contributions can be summarized as follows: (1) the candidate controller is a fixed-order dynamic output feedback controller instead of Luenberger observer-based controller, (2) the controller parameters are unknown and should be designed in an efficient way, (3) the controller is synthesized such that a well-defined quadratic performance index is optimized, (4) a well-defined constrained finite-dimensional optimization problem is formulated for unified synthesis of test signal for AFD and optimal fixed-order dynamic output feedback controller, and (5) two iterative constructive algorithms are proposed to find a sub-optimal solution to the proposed optimization problem.

The rest of the paper is organized as follows. Section 2 gives a brief review of the optimal AFD approach and the optimal p-system stabilization problem. Section 3 provides an exact mathematical description of the optimal unified synthesis of the AFD and control problem. Section 4 presents two iterative algorithms for finding a sub-optimal solution to the proposed optimization problem, and in Sect. 5, a numerical example is presented to show the effectiveness of the proposed algorithms. Concluding remarks are given in Sect. 6.

2 Notations and Preliminaries

In this section, the optimal p-system stabilization problem and the optimal AFD problem are briefly stated. Some standard notations and the mathematical preliminaries used in this paper are also summarized.

Let the multi-model discrete-time LTI system be given by

$$\varSigma_{\text{s}} :\left\{ {\begin{array}{*{20}c} {x_{i} (t + 1) = A_{i} x_{i} (t) + B_{i} v(t) + M_{i} \mu_{i} (t)} \\ {y(t) = C_{i} x_{i} (t) + D_{i} v(t) + N_{i} \mu_{i} (t)} \\ \end{array} ,\quad i = 1,2, \ldots ,p} \right.,$$
(1)

where "\(p\)" represents the number of models, \(x_{i} \in R^{n}\) is the state vector, \(v \in R^{m}\) is the control signal, \(\mu_{i} \in R^{l}\) is the noise and system uncertainty vector with bounded energy, and \(y \in R^{q}\) is the output vector. The distribution matrices \(M_{i}\) and \(N_{i}\) define the input and output noise and system uncertainties, respectively. Also, to simplify the presentation and without loss of generality, the matrix \(D_{i}\) is assumed to have the same value, \(D\), for all models. The pair \((A_{i} ,C_{i} )\) is detectable and the pair \((A_{i} ,B_{i} )\) is stabilizable. The upper bound on the energy of \(\mu_{i} (t)\) is given by

$$\mathop \sum \limits_{t = 0}^{\tau } \mu_{i} (t)^{\text{T}} \mu_{i} (t) < \mu_{0i}^{\text{T}} \mu_{0i} = \gamma_{0i},$$
(2)

where \(\tau\) is the time upper bound, described later in the paper, and \(\gamma_{0i} > 0\) is the worst-case energy of \(\mu_{i} (t)\). By normalizing the problem, \(\gamma_{0i}\) can be assumed to be 1.
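As a minimal illustration of the multi-model setup (1)–(2), the following sketch simulates one model over \(t = 0,\ldots,\tau\) and checks that a chosen disturbance sequence respects the normalized unit energy bound. All matrices, dimensions, and signal values are invented placeholders, not taken from the paper.

```python
import numpy as np

def simulate_model(A, B, M, C, D, N, v, mu, x0):
    """Roll out x(t+1) = A x + B v + M mu,  y(t) = C x + D v + N mu."""
    x, ys = x0, []
    for t in range(len(v)):
        ys.append(C @ x + D @ v[t] + N @ mu[t])
        x = A @ x + B @ v[t] + M @ mu[t]
    return np.array(ys)

tau = 10
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # placeholder Schur matrix
B = np.array([[1.0], [0.5]])
M = np.array([[0.1], [0.1]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
N = np.array([[0.05]])

v = [np.zeros(1)] * (tau + 1)            # no control input in this sketch
mu = [0.25 * np.ones(1)] * (tau + 1)     # constant disturbance sequence

# Energy bound (2), normalized so that gamma_0i = 1.
energy = sum(float(m.T @ m) for m in mu)
assert energy < 1.0
y = simulate_model(A, B, M, C, D, N, v, mu, np.zeros(2))
```

The simulation returns the stacked outputs \(y(0),\ldots,y(\tau)\); the assertion verifies membership in the normalized uncertainty set.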

Remark 1

Without loss of generality, the initial state \(x_{i} (0)\) is assumed to be zero. If \(x_{i} (0)\) is non-zero or unknown, it can be included in the system (1) as part of the uncertainty term \(\mu_{i}\).

Remark 2

For the sake of simplicity, the simultaneous synthesis problem is considered for the healthy and faulty models only; therefore, \(p = 2\).

Consider a fixed-order dynamic output feedback (DOF) controller as follows:

$$\varSigma_{\text{c}} :\left\{ {\begin{array}{*{20}l} {\varepsilon (t + 1) = P\varepsilon (t) + Ky(t)} \hfill \\ {v(t) = G\varepsilon (t) + w(t)} \hfill \\ \end{array} ,} \right.\quad \varepsilon (0) = 0,$$
(3)

where \(\varepsilon \in R^{g}\) is the controller state vector with zero initial value and \(w\) is the test signal. In this paper, the synthesis of a full-order stabilizing controller (\(g = n\)) is discussed, but the proposed algorithm can also be used for the case \(g < n\). Combining (1)–(3), the closed-loop system equations in Fig. 2 are given by

$$\varSigma_{{{\text{s}} . {\text{cl}}}} \left\{ {\begin{array}{*{20}l} {\bar{x}_{i} (t + 1) = \bar{A}_{i} \bar{x}_{i} (t) + \bar{B}_{i} w(t) + \bar{M}_{i} \mu_{i} (t)} \hfill \\ {y(t) = \bar{C}_{i} \bar{x}_{i} (t) + \bar{D}_{i} w(t) + \bar{N}_{i} \mu_{i} (t)} \hfill \\ {v(t) = \bar{G}\bar{x}_{i} (t) + w(t)} \hfill \\ \end{array} } \right.$$
(4)

where \(\bar{x}_{i} \in R^{(n + g)}\) is the state vector of the closed-loop system,

$$\begin{aligned} \bar{x}_{i} (t) & = \left[ {\begin{array}{*{20}c} {x_{i} (t)} \\ {\varepsilon (t)} \\ \end{array} } \right],\quad \bar{A}_{i} = \left( {\begin{array}{*{20}c} {A_{i} } & {B_{i} G} \\ {KC_{i} } & {P + KD_{i} G} \\ \end{array} } \right),\quad \bar{B}_{i} = \left( {\begin{array}{*{20}c} {B_{i} } \\ {KD_{i} } \\ \end{array} } \right),\quad \bar{M}_{i} = \left( {\begin{array}{*{20}c} {M_{i} } \\ {KN_{i} } \\ \end{array} } \right), \\ \bar{C}_{i} & = (C_{i} \quad D_{i} G),\quad \bar{D}_{i} = D_{i} ,\quad \bar{N}_{i} = N_{i} ,\quad \bar{G} = (0\quad G) \\ \end{aligned}$$
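The closed-loop block matrices above can be assembled mechanically; the following sketch forms \(\bar{A}_i\), \(\bar{B}_i\), \(\bar{M}_i\), \(\bar{C}_i\) and \(\bar{G}\) with numpy. All numeric values are made-up placeholders chosen only to make the dimensions consistent.

```python
import numpy as np

def closed_loop(A, B, M, C, D, N, P, K, G):
    """Build the augmented matrices of the closed-loop system (4)."""
    Abar = np.block([[A,     B @ G],
                     [K @ C, P + K @ D @ G]])
    Bbar = np.vstack([B, K @ D])
    Mbar = np.vstack([M, K @ N])
    Cbar = np.hstack([C, D @ G])
    Gbar = np.hstack([np.zeros((G.shape[0], A.shape[0])), G])
    return Abar, Bbar, Mbar, Cbar, Gbar

n, m, l, q, g = 2, 1, 1, 1, 2                      # illustrative dimensions
A = 0.5 * np.eye(n); B = np.ones((n, m)); M = 0.1 * np.ones((n, l))
C = np.ones((q, n)); D = np.zeros((q, m)); N = 0.05 * np.ones((q, l))
P = 0.3 * np.eye(g); K = np.ones((g, q)); G = -0.2 * np.ones((m, g))

Abar, Bbar, Mbar, Cbar, Gbar = closed_loop(A, B, M, C, D, N, P, K, G)
print(Abar.shape, Cbar.shape)  # (4, 4) (1, 4)
```

The augmented state dimension is \(n + g\), matching \(\bar{x}_i \in R^{(n+g)}\).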
Fig. 2 Closed-loop multi-model system

2.1 Active Fault Detection

In model-based AFD, the main objective is to synthesize a proper test signal, that is, a test signal that guarantees the separation of the healthy and faulty models. Therefore, the focus of this paper is to synthesize a minimum-energy test signal which guarantees this separation.

The AFD problem consists of separating the system models (4) by generating a proper test signal \(w(t)\) that minimizes a quadratic cost function. \(w(t)\) should be chosen from the set \(\theta = \{ w(t)|{\mathcal{A}}_{1} (w(t)) \cap {\mathcal{A}}_{2} (w(t)) = \emptyset \}\), where \({\mathcal{A}}_{i} (w(t))\) is the set of outputs of model “\(i\)” defined by (4) for the input \(w(t)\). This motivates the following min–max problem for generating an optimal test signal for the closed-loop system (4).

Problem 2.1

Given the injection time-period \(\tau_{1} > 0\) and the closed-loop system (4), find \(w(t)\) in \(\theta\) which minimizes the quadratic performance index

$$J_{\text{D.CL}} = \mathop {\hbox{min} }\limits_{ w(t)} \mathop { \hbox{max} }\limits_{{\mu_{1} (t), x_{1} (0)}} \mathop \sum \limits_{t = 0}^{{\tau_{1} }} v(t)^{\text{T}} Sv(t) + x_{1} (t)^{\text{T}} Tx_{1} (t)$$
(5)

Subject to

$$x_{1} (0)^{\text{T}} x_{1} (0) + \mathop \sum \limits_{t = 0}^{{\tau_{1} }} \mu_{1} (t)^{\text{T}} \mu_{1} (t) < 1.$$

where \(S\) and \(T\) are constant matrices with appropriate dimensions.

2.2 Simultaneous Stabilization Controller

The simultaneous stabilization problem is to design a single controller \({{\varSigma }}_{\text{c}}\) that stabilizes a set of \(p\) systems simultaneously and optimizes a performance index (Ackermann 1980; Zhang et al. 2011; Ghosh 2013; Das and Dey 2011; Saadatjoo et al. 2013).

The main objective is to formulate a finite-dimensional constrained optimization problem to synthesize a fixed-order DOF controller that stabilizes all models and optimizes the performance index of the controlled system in the presence of finite-energy disturbances. In other words, given \(w(t) = 0\), \(\varSigma_{\text{c}}\) in (3) is an internally stabilizing controller for the closed-loop system (4) that minimizes a quadratic cost as the desired performance index. Internal stability of (4) is equivalent to the condition that the set \(\mho = \{ (P,K,G)|\bar{A}_{i} \;{\text{is}}\;{\text{Schur}}\;{\text{for}}\;i = 1, \ldots ,p\}\) is nonempty, where a Schur matrix is a square matrix with real entries whose eigenvalues all have absolute value less than one. This motivates the following optimal p-system stabilization problem:
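A minimal numerical membership test for \(\mho\) checks the spectral radius of each closed-loop matrix \(\bar{A}_{i}\); a sketch follows, with matrices invented purely for illustration.

```python
import numpy as np

def is_schur(A):
    """True iff all eigenvalues of A lie strictly inside the unit circle."""
    return float(np.max(np.abs(np.linalg.eigvals(A)))) < 1.0

stable   = np.array([[0.5, 0.2], [0.0, 0.7]])   # eigenvalues 0.5, 0.7
unstable = np.array([[1.1, 0.0], [0.0, 0.3]])   # eigenvalue 1.1 outside

print(is_schur(stable), is_schur(unstable))  # True False
```

In practice this test would be applied to every \(\bar{A}_{i}\), \(i = 1, \ldots, p\), for a candidate triple \((P, K, G)\).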

Problem 2.2

Given the closed-loop system (4) and \(w(t) = 0\), find \((P,K,G)\) in \(\mho\) which minimizes the following performance index:

$$J_{{{\text{C}} . {\text{CL}}}} = \mathop { \hbox{min} }\limits_{P,K,G} \mathop \sum \limits_{t = 0}^{{\tau_{2} }} (x_{1} (t)^{\text{T}} Q_{1} x_{1} (t) + x_{2} (t)^{\text{T}} Q_{2} x_{2} (t) + v^{\text{T}} (t)Rv(t))$$
(6)

where \(Q_{1} = Q_{2} = Q = Q^{\text{T}} \ge 0\) and \(R = R^{\text{T}} > 0\) are the constant weighting matrices with appropriate dimensions.

Integrating Problems 2.1 and 2.2 leads to a trade-off between the performance of the stabilizing controller and the energy of the proper test signal for AFD.

3 Integrated Formulation of AFD and DOF Controller

In this section, it is shown that integration of an optimal proper test signal for AFD and control problem can be transformed into a finite-dimensional equality constrained optimization problem. The main objectives are

  1. simultaneous stabilization of the multi-model system (healthy and faulty models) using a single controller,

  2. optimizing the control performance for the healthy model,

  3. designing an optimal proper test signal for AFD with minimum energy.

Regarding Problems 2.1 and 2.2, the integrated problem of AFD and DOF controller design is formulated as follows. Comparing (5) and (6), it can be seen that the performance index in (6) involves both models, whereas that in (5) involves only the healthy model. Therefore, for the unified synthesis, the control objective can be limited to optimizing the performance of the healthy model. After selecting the performance index, the following remarks are summarized for formulating a well-defined problem of optimal integrated synthesis of AFD and control.

Remark 3

A fixed-order DOF controller with a proper test signal can be considered as in (3). The controller is assumed to be full-order, but it can be redesigned with a lower order, if possible. The unknown matrices \(P\), \(K\) and \(G\) are the controller parameters, and they are designed such that simultaneous stabilization is achieved for the healthy and faulty models. With the internal stability requirement on the closed-loop system in Fig. 1 guaranteed, it can be concluded that the test signal \(w(t)\) cannot destabilize the closed-loop system; it can, however, degrade the control performance.

Remark 4

It is worth noting that \(\tau_{1}\) and \(\tau_{2}\) in (5) and (6) are the time upper bounds for optimal AFD and for optimal simultaneous synthesis of the stabilizing controller, respectively. For the optimal test signal, it is better to minimize the value of \(\tau_{1}\), whereas for the optimal stabilizing controller design, it is better to maximize the value of \(\tau_{2}\). However, for the integrated synthesis, both must take the same value, denoted \(\tau\). Indeed, this is another trade-off between AFD and control.

Remark 5

Another point to consider in the integrated formulation is the assumption on the uncertain parameters \(\mu_{1} (t)\) and \(x_{1} (0)\). For AFD, this is given by (10), but for controller synthesis, it is assumed that \(\mu_{1} (t) = \mu_{0} \delta (t)\) and \(x_{1} (0) = 0\), where \(\delta (t)\) is the Kronecker delta function. To overcome this difficulty, namely the mismatch in the assumptions on the uncertain parameters, the unified synthesis can be performed on the worst case recursively. In other words, in the first step, regarding Eq. (10), set \(x_{1} (0) = 0\) and \(\mu_{1} (t) = \mu_{0} \delta (t)\); then the controller parameters can be determined. In the next step, using these controller parameters, the test signal \(w(t)\) is designed and the new values of \(x_{1} (0)\) and \(\mu_{1} (t)\) are computed. To achieve the optimal test signal and optimal performance, this step is repeated recursively.

Regarding the three remarks above, the optimal unified synthesis problem can be stated based on Problems 2.1 and 2.2. Considering the system (1) and the controller (3), the closed-loop system is described by (4). To achieve internal stability of the closed-loop system (4), the discrete Lyapunov inequality should be satisfied; the set of stabilizing controllers is therefore given by (8). Moreover, the proper set of \(\mu_{1} (t)\) and \(x_{1} (0)\) is given by (10).

Based on above mentioned remarks, the integrated formulation for AFD and DOF controller is defined as the following problem.

Problem 3.1

Given the closed-loop system (4), find \((P,G,K)\) in \(\varphi\), \(w(t)\) in \(\theta\), and the worst case of \((\mu_{1} (t),x_{1} (0))\) in \(\rho\), to minimize the following performance index:

$$J_{\text{D.C.sim.dyn}} = \mathop { \hbox{min} }\limits_{w(t)} \mathop { \hbox{max} }\limits_{{\mu_{1} (t), x_{1} (0)}} \mathop { {\text{min}}}\limits_{P, G, K} \mathop \sum \limits_{t = 0}^{\tau } (v(t)^{\text{T}} Sv(t) + x_{1} (t)^{\text{T}} T x_{1} (t))$$
(7)

where \(S\) and \(T\) are constant weighting matrices. \((P,G,K) \in \varphi\), \(w(t) \in \theta\), and \((\mu_{1} (t),x_{1} (0)) \in \rho\) are the proper controller parameters, the proper test signal, and the worst case of the uncertainties, respectively. These sets are defined as follows:

$$\varphi = \{ (P,G,K) |\bar{A}_{i}^{\text{T}} {\mathcal{P}}_{i} \bar{A}_{i} - {\mathcal{P}}_{i} < 0, {\mathcal{P}}_{i} \succ 0\}$$
(8)

where \({\mathcal{P}}_{i}\) is the symmetric positive definite matrix.

$$\theta = \{ w(t)|{\mathcal{A}}_{1} (w(t)) \cap {\mathcal{A}}_{2} (w(t)) = \emptyset \}$$
(9)
$$\rho = \left\{ {( \mu_{1} (t),x_{1} (0))\left| {x_{1} (0)^{\text{T}} x_{1} (0) + \mathop \sum \limits_{t = 0}^{\tau } (\mu_{1} (t)^{\text{T}} \mu_{1} (t)) < 1} \right.} \right\}$$
(10)

Problem 3.1 is a min–max–min problem whose cost function depends indirectly on the test signal, the uncertainty terms, and the controller parameters. It is a multi-objective optimization problem with three nonlinear coupled constraints: the Lyapunov inequality, the bound on the uncertainties, and the properness of the test signal. Therefore, the solution of Problem 3.1 is not straightforward and cannot be obtained completely analytically. Two solution algorithms are discussed in the next section.

4 Solution Algorithms

Two sub-optimal solution methods for Problem 3.1 are presented: the static-based algorithm and the SME-based algorithm. In both algorithms, at the first level, the DOF controller parameters are computed and, at the second level, the test signal is designed based on the computed controller parameters. These two levels are repeated recursively until the results are acceptable.

4.1 Static-Based Algorithm

A common technique for solving a dynamic optimization problem with nonlinear coupled constraints is to convert it into a static one (Esna-Ashari et al. 2012b; Skaf and Boyd 2010). To convert Problem 3.1 into a static form, new stacked variables over the time period \([0,\tau ]\) are defined as follows:

$$\hat{x}_{i} = \hat{B}_{i} \hat{w} + \hat{M}_{i} \hat{\mu }_{i}$$
(11)
$$\hat{y} = \hat{D}_{i} \hat{w} + \hat{N}_{i} \hat{\mu }_{i}$$
(12)
$$\hat{v} = \hat{G}_{i} \hat{w} + \hat{H}_{i} \hat{\mu }_{i}$$
(13)

where

$$\begin{aligned} \hat{x}_{i} & = \left( {\begin{array}{*{20}c} {x_{i} (0)} \\ {x_{i} (1)} \\ \vdots \\ {x_{i} (\tau )} \\ \end{array} } \right),\quad \hat{y} = \left( {\begin{array}{*{20}c} {y(0)} \\ {y(1)} \\ \vdots \\ {y(\tau )} \\ \end{array} } \right),\quad \hat{v} = \left( {\begin{array}{*{20}c} {v(0)} \\ {v(1)} \\ \vdots \\ {v(\tau )} \\ \end{array} } \right),\quad \hat{w} = \left( {\begin{array}{*{20}c} {w(0)} \\ {w(1)} \\ \vdots \\ {w(\tau )} \\ \end{array} } \right),\quad \hat{\mu }_{i} = \left( {\begin{array}{*{20}c} {x_{i} (0)} \\ {\mu_{i} (0)} \\ {\mu_{i} (1)} \\ \vdots \\ {\mu_{i} (\tau )} \\ \end{array} } \right),\quad \hat{I} = \left( {\begin{array}{*{20}c} I \\ 0 \\ \end{array} } \right),\quad I = \left( {\begin{array}{*{20}c} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ \end{array} } \right), \\ \hat{B}_{i} & = \left( {\begin{array}{*{20}c} 0 & 0 & \cdots & 0 \\ {\bar{B}_{i} } & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ {\bar{A}_{i}^{\tau - 1} \bar{B}_{i} } & {\bar{A}_{i}^{\tau - 2} \bar{B}_{i} } & \cdots & 0 \\ \end{array} } \right),\quad \hat{M}_{i} = \left( {\begin{array}{*{20}c} {\hat{I}} & 0 & 0 & \cdots & 0 \\ {\bar{A}_{i} \hat{I}} & {\bar{M}_{i} } & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ {\bar{A}_{i}^{\tau } \hat{I}} & {\bar{A}_{i}^{\tau - 1} \bar{M}_{i} } & {\bar{A}_{i}^{\tau - 2} \bar{M}_{i} } & \cdots & {\bar{M}_{i} } \\ \end{array} } \right), \\ \hat{D}_{i} & = \left( {\begin{array}{*{20}c} {D_{i} } & 0 & \cdots & 0 \\ {\bar{C}_{i} \bar{B}_{i} } & {D_{i} } & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ {\bar{C}_{i} \bar{A}_{i}^{\tau - 1} \bar{B}_{i} } & {\bar{C}_{i} \bar{A}_{i}^{\tau - 2} \bar{B}_{i} } & \cdots & {D_{i} } \\ \end{array} } \right),\quad \hat{N}_{i} = \left( {\begin{array}{*{20}c} {\bar{C}_{i} \hat{I}} & {N_{i} } & 0 & \cdots & 0 \\ {\bar{C}_{i} \bar{A}_{i} \hat{I}} & {\bar{C}_{i} \bar{M}_{i} } & {N_{i} } & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ {\bar{C}_{i} \bar{A}_{i}^{\tau } \hat{I}} & {\bar{C}_{i} \bar{A}_{i}^{\tau - 1} \bar{M}_{i} } & {\bar{C}_{i} \bar{A}_{i}^{\tau - 2} \bar{M}_{i} } & \cdots & {N_{i} } \\ \end{array} } \right), \\ \hat{G}_{i} & = \left( {\begin{array}{*{20}c} I & 0 & \cdots & 0 \\ {\bar{G}\bar{B}_{i} } & I & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ {\bar{G}\bar{A}_{i}^{\tau - 1} \bar{B}_{i} } & {\bar{G}\bar{A}_{i}^{\tau - 2} \bar{B}_{i} } & \cdots & I \\ \end{array} } \right),\quad \hat{H}_{i} = \left( {\begin{array}{*{20}c} {\bar{G}\hat{I}} & 0 & 0 & \cdots & 0 \\ {\bar{G}\bar{A}_{i} \hat{I}} & {\bar{G}\bar{M}_{i} } & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ {\bar{G}\bar{A}_{i}^{\tau } \hat{I}} & {\bar{G}\bar{A}_{i}^{\tau - 1} \bar{M}_{i} } & {\bar{G}\bar{A}_{i}^{\tau - 2} \bar{M}_{i} } & \cdots & {\bar{G}\bar{M}_{i} } \\ \end{array} } \right) \\ \end{aligned}$$
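The strictly lower block-triangular matrix \(\hat{B}_{i}\) is block-Toeplitz, with block \((t,k)\) equal to \(\bar{A}_{i}^{t-1-k}\bar{B}_{i}\) for \(k < t\) and zero otherwise. A construction sketch (dimensions and matrices are illustrative only):

```python
import numpy as np

def lift_B(Abar, Bbar, tau):
    """Build the lifted input matrix Bhat mapping (w(0)..w(tau)) to (x(0)..x(tau))."""
    nbar, mdim = Bbar.shape
    Bhat = np.zeros(((tau + 1) * nbar, (tau + 1) * mdim))
    for t in range(1, tau + 1):
        blk = Bbar.copy()                 # coefficient of w(t-1) in x(t)
        for k in range(t - 1, -1, -1):    # walk left: multiply by Abar each step
            Bhat[t * nbar:(t + 1) * nbar, k * mdim:(k + 1) * mdim] = blk
            blk = Abar @ blk
    return Bhat

Abar = np.array([[0.8, 0.1], [0.0, 0.9]])
Bbar = np.array([[1.0], [0.5]])
Bhat = lift_B(Abar, Bbar, tau=3)
print(Bhat.shape)  # (8, 4)
```

The other lifted matrices \(\hat{M}_{i}\), \(\hat{D}_{i}\), \(\hat{N}_{i}\), \(\hat{G}_{i}\), \(\hat{H}_{i}\) follow the same pattern with their respective impulse-response blocks.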

Using (7) and (11)–(13), the performance index in Problem 3.1 is rewritten as the following static form:

$$J_{\text{D.C.sim.St}} = \mathop { \hbox{min} }\limits_{{{\text{proper}}\;\hat{w}}} \mathop { \hbox{max} }\limits_{{{\text{proper}}\;\hat{\mu }_{ 1} }} \mathop { \hbox{min} }\limits_{{ {\text{proper (}}P ,G ,K )}} (\hat{v}^{\text{T}} \hat{S}\hat{v} + \hat{x}_{1}^{\text{T}} \hat{T}\hat{x}_{1} )$$
(14)

where

$$\hat{S} = \left( {\begin{array}{*{20}l} S \hfill & 0 \hfill & \cdots \hfill & 0 \hfill \\ 0 \hfill & S \hfill & \ddots \hfill & \vdots \hfill \\ \vdots \hfill & \ddots \hfill & \ddots \hfill & 0 \hfill \\ 0 \hfill & \cdots \hfill & 0 \hfill & S \hfill \\ \end{array} } \right),\quad \hat{T} = \left( {\begin{array}{*{20}l} T \hfill & 0 \hfill & \cdots \hfill & 0 \hfill \\ 0 \hfill & T \hfill & \ddots \hfill & \vdots \hfill \\ \vdots \hfill & \ddots \hfill & \ddots \hfill & 0 \hfill \\ 0 \hfill & \cdots \hfill & 0 \hfill & T \hfill \\ \end{array} } \right).$$

Now, defining the properness of \((P,G,K)\) and \(\hat{\mu }_{1}\) in mathematical form and using the new variable definitions, Problem 3.1 can be transformed into the following static form with two inequality constraints: the Lyapunov inequality and the uncertainty bound.

Problem 4.1

Given the closed-loop system (4), find \(\hat{w}, \hat{\mu }_{1}\) and \((P,G,K)\) which minimizes the performance index.

$$J_{\text{D.C.sim}} = \mathop { \hbox{min} }\limits_{{{\hat{w}\,}{\text{proper}}\;}} \mathop { \hbox{max} }\limits_{{\hat{\mu }_{1} }} \mathop { {\text{min}}}\limits_{P,G,K} \left\| {{\tilde{\mathcal{A}}}\hat{w} + {\tilde{\mathcal{B}}}\hat{\mu }_{1} } \right\| ^{2}$$
(15)

Subject to

  1. $$\bar{A}_{i}^{\text{T}} {\mathcal{P}}_{i} \bar{A}_{i} - {\mathcal{P}}_{i} < 0,\; {\mathcal{P}}_{i} \succ 0, \quad {\text{for}}\;i = 1,2$$
  2. $$\left\| {\hat{\mu }_{1} } \right\|^{2} < 1$$

where \({\tilde{\mathcal{A}}} = \left[ {\begin{array}{*{20}c} {\tilde{T}} & 0 \\ 0 & {\tilde{S}} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\hat{B}} \\ {\hat{G}} \\ \end{array} } \right],\quad {\tilde{\mathcal{B}}} = \left[ {\begin{array}{*{20}c} {\tilde{T}} & 0 \\ 0 & {\tilde{S}} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\hat{M}} \\ {\hat{H}} \\ \end{array} } \right],\quad \left[ {\begin{array}{*{20}c} {\hat{T}} & 0 \\ 0 & {\hat{S}} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {\tilde{T}} & 0 \\ 0 & {\tilde{S}} \\ \end{array} } \right]^{\text{T}} \left[ {\begin{array}{*{20}c} {\tilde{T}} & 0 \\ 0 & {\tilde{S}} \\ \end{array} } \right]\)

The constraint of properness of \(\hat{w}\) can be converted to a mathematical condition as an inequality constraint, and finally the Problem 4.1 can be rewritten as the following problem.

Problem 4.2

Given the closed-loop system (4), find \(\hat{w}, \hat{\mu }_{1}\) and (P, G, K), which optimizes the performance index:

$$J_{\text{D.C.sim}} = \mathop { \hbox{min} }\limits_{0 \le \beta \le 1} \mathop { \hbox{min} }\limits_{{\hat{w}}} \mathop { \hbox{max} }\limits_{{\hat{\mu }_{1} }} \mathop { {\text{min}}}\limits_{P,G,K} \left\| {{\tilde{\mathcal{A}}}\hat{w} + {\tilde{\mathcal{B}}}\hat{\mu }_{1} } \right\|^{2}$$
(16)

Subject to

  1. $$\bar{A}_{i}^{\text{T}} {\mathcal{P}}_{i} \bar{A}_{i} - {\mathcal{P}}_{i} < 0,\; {\mathcal{P}}_{i} \succ 0,\quad {\text{for}}\; i = 1,2$$
  2. $$\left\| {\hat{\mu }_{1} } \right\|^{2} < 1$$
  3. $$\hat{w}^{\text{T}} Q_{\beta } \hat{w} \ge 1$$

where \(Q_{\beta } = {\mathcal{G}}^{\text{T}} ({\mathcal{H}}J_{\beta }^{ - 1} {\mathcal{H}}^{\text{T}} )^{ - 1} {\mathcal{G}}\), \(\rho = \left( {\begin{array}{*{20}c} {\hat{\mu }_{1} } \\ {\hat{\mu }_{2} } \\ \end{array} } \right)\), \(J_{\beta } = \left( {\begin{array}{*{20}c} {\beta I} & 0 \\ 0 & {( {1 - \beta } )I} \\ \end{array} } \right)\), \({\mathcal{G}}\hat{w} = {\mathcal{H}}\rho\), \({\mathcal{G}} = {\mathcal{Y}}_{1} - {\mathcal{Y}}_{2} ,{\mathcal{H}} = \left( {\begin{array}{*{20}c} { - {\mathcal{H}}_{1} } & {{\mathcal{H}}_{2} } \\ \end{array} } \right)\) as illustrated in Nikoukhah et al. (2010).

Problem 4.2 is a finite-dimensional inequality constrained optimization problem with a nonlinear cost and three nonlinear constraints which can be solved using a recursive algorithm as a sub-optimal method. For this purpose, the appropriate initial conditions should be considered. Here, the sub-optimal solution can be obtained using the following algorithm.

Algorithm 1

Step 0 :

Choose the value of the parameter \(\tau\) in the interval \([\tau_{1} ,\tau_{2}]\) based on practical considerations.

Step 1 :

Choose initial values for \(\hat{\mu }_{1}\) that satisfy the constraint on the uncertainty energy bound. For example, it can be chosen as

$$\hat{\mu }_{1} = [x_{1} (0) = 0\quad \mu_{1} (0) = 0.99\quad \mu_{1} (1) = 0\quad \cdots \quad \mu_{1} (\tau ) = 0]^{\text{T}}$$
Step 2 :

Let the initial values for \(\hat{w}\) be as follows:

$$\hat{w} = [w(0) = 0\quad w(1) = 0\quad \cdots \quad w(\tau ) = 0]^{\text{T}}$$
Step 3 :

Compute the controller parameters by solving the following optimization problem:

$$J_{{ 1. {\text{sim}}}} = \mathop {\hbox{min} }\limits_{P,G,K} \left\| {{\tilde{\mathcal{A}}}\hat{w} + {\tilde{\mathcal{B}}}\hat{\mu }_{1} } \right\|^{2}$$
(17)

Subject to

$$\bar{A}_{i}^{T} {\mathcal{P}}_{i} \bar{A}_{i} - {\mathcal{P}}_{i} < 0, \,{\mathcal{P}}_{i} \succ 0, \quad {\text{for}}\quad i = 1,2$$
Step 4 :

Solve the following optimization problem to compute the worst case of \(\hat{\mu }_{1}\):

$$J_{{ 2. {\text{sim}}}} = \mathop {\hbox{max} }\limits_{{\hat{\mu }_{1} }} \left\| {{\tilde{\mathcal{A}}}\hat{w} + {\tilde{\mathcal{B}}}\hat{\mu }_{1} } \right\|^{2}$$
(18)

Subject to

$$\left\| {\hat{\mu }_{1} } \right\|^{2} < 1$$
Step 5 :

Compute the optimal test signal by solving the following modified optimization problem:

$$J_{{ 3. {\text{sim}}}} = \mathop { \hbox{min} }\limits_{0 \le \beta \le 1} \mathop {\hbox{min} }\limits_{{\hat{w}}} \left\| {{\tilde{\mathcal{A}}}\hat{w} + {\tilde{\mathcal{B}}}\hat{\mu }_{1} } \right\|^{2}$$
(19)

Subject to

$$\hat{w}^{\text{T}} Q_{\beta } \hat{w} \ge 1$$

where

$$\begin{aligned} Q_{\beta } & = {\mathcal{G}}^{\text{T}} ({\mathcal{H}}J_{\beta }^{ - 1} {\mathcal{H}}^{\text{T}} )^{ - 1} {\mathcal{G}},\quad \rho = \left( {\begin{array}{*{20}c} {\hat{\mu }_{1} } \\ {\hat{\mu }_{2} } \\ \end{array} } \right),\quad J_{\beta } = \left( {\begin{array}{*{20}c} {\beta I} & 0 \\ 0 & {(1 - \beta )I} \\ \end{array} } \right),\quad {\mathcal{G}}\hat{w} = {\mathcal{H}}\rho , \\ {\mathcal{G}} & = {\mathcal{Y}}_{1} - {\mathcal{Y}}_{2} ,\quad {\mathcal{H}} = ( - {\mathcal{H}}_{1} \quad {\mathcal{H}}_{2} ) \\ \end{aligned}$$

(Nikoukhah et al. 2010).

Step 6 :

Compare the new and previous values of \(J_{{3.{\text{sim}}}}\):

  1. If the new value is higher than the previous one, the algorithm should be stopped and new initial values should be chosen.

  2. If the new value is equal to the previous one, the algorithm should be stopped and the optimal cost \(J\) is equal to \(J_{{3.{\text{sim}}}}\).

  3. If the new value is less than the previous one, go to Step 3 and update the values of \(\hat{w}\) and \(\hat{\mu }_{1}\).

Remark 6

Choose the initial value for the time-period of the test signal using the methods proposed in Campbell and Nikoukhah (2004) and Esna-Ashari et al. (2012a, b). A method for finding the optimal value of the injection time-period is presented in Campbell and Nikoukhah (2004).

Remark 7

The static-based algorithm depends on the initial values assumed in Steps 1 and 2. Therefore, to achieve a better result, the algorithm can be repeated with different initial values. Choosing different initial values results in one of the three following cases:

  1. If the initial values are inappropriate, the algorithm falls into an infinite loop between Step 6 and Step 3.

  2. If they are fully appropriate, the optimal value of \(J\) is obtained in a finite number of iterations.

  3. If they are almost appropriate, the optimal value of \(J\) is obtained in a large number of iterations.

It is worth mentioning that the controller parameters in Step 3 cannot be computed analytically; therefore, numerical algorithms must be applied. Moreover, these numerical solutions are usually time-consuming and may not be optimal. To overcome these difficulties, a constructive algorithm is suggested in the next section.

4.2 SME-Based Algorithm

To find an analytical solution for computing the controller parameters, the following theorem is given.

Theorem 4.3

Given \(w(t) = 0\) , constant matrices \(\gamma_{0i}\) , \(Q \ge 0\) , \(R > 0\) and the discrete-time system (1), there exists an optimal fixed-order DOF controller of the form (3) which solves Problem 3.1 if there exist matrices \(\varGamma_{i} > 0\) and \(W_{i} > 0\) and \((P,K,G)\) in \(\mho\) satisfying the following nonlinear coupled equations:

$$\left\{ {\begin{array}{*{20}l} {1. \quad Z_{i} + \bar{A}_{i} \varGamma_{i} \bar{A}_{i}^{\text{T}} - \varGamma_{i} = 0, \quad i = 1, 2} \hfill \\ {2. \quad S + \bar{A}_{i}^{\text{T}} W_{i} \bar{A}_{i} - W_{i} = 0,\quad i = 1, 2} \hfill \\ {3. \quad \mathop \sum \limits_{i = 1}^{2} (\tilde{B}_{{\bar{A}_{i} }}^{\text{T}} W_{i} \bar{M}_{i} \gamma_{0} \tilde{C}_{{\bar{M}_{i} }}^{\text{T}} + \tilde{B}_{{\bar{M}_{i} }}^{\text{T}} W_{i} \bar{A}_{i} \varGamma_{i} \tilde{C}_{{\bar{A}_{i} }}^{\text{T}} + \hat{R}E^{\text{T}} T \varGamma_{i} T^{\text{T}} ) = 0} \hfill \\ \end{array} } \right.$$
(20)

where

$$Z_{i} = \bar{M}_{i} \gamma_{0i} \bar{M}_{i}^{\text{T}} , \varGamma_{i} = \left[ {\begin{array}{*{20}c} {\varGamma_{11} } & {\varGamma_{12} } \\ {\varGamma_{21} } & {\varGamma_{22} } \\ \end{array} } \right]_{i} , \quad W_{i} = \left[ {\begin{array}{*{20}c} {W_{11} } & {W_{12} } \\ {W_{21} } & {W_{22} } \\ \end{array} } \right]_{i} ,\quad E = \left[ {\begin{array}{*{20}c} 0 & G \\ K & {P + KDG} \\ \end{array} } \right],$$
$$\hat{Q} = \left[ {\begin{array}{*{20}c} Q & 0 \\ 0 & 0 \\ \end{array} } \right],\quad \hat{R} = \left[ {\begin{array}{*{20}c} R & 0 \\ 0 & 0 \\ \end{array} } \right],\quad T = \left[ {\begin{array}{*{20}c} {0_{q \times n} } & 0 \\ 0 & {I_{g \times g} } \\ \end{array} } \right],\quad S = \hat{Q} + T^{\text{T}} E^{\text{T}} \hat{R} E T$$
$$\tilde{A}_{{\bar{A}_{i} }} = \left[ {\begin{array}{*{20}c} {A_{i} } & 0 \\ 0 & 0 \\ \end{array} } \right],\quad \tilde{B}_{{\bar{A}_{i} }} = \left[ {\begin{array}{*{20}c} {B_{i} } & 0 \\ 0 & I \\ \end{array} } \right],\quad \tilde{C}_{{\bar{A}_{i} }} = \left[ {\begin{array}{*{20}c} {C_{i} } & 0 \\ 0 & I \\ \end{array} } \right], \tilde{A}_{{\bar{M}_{i} }} = \left[ {\begin{array}{*{20}c} {M_{i} } \\ 0 \\ \end{array} } \right],\quad \tilde{B}_{{\bar{M}_{i} }} = \left[ {\begin{array}{*{20}c} {B_{i} } & 0 \\ 0 & I \\ \end{array} } \right],\quad \tilde{C}_{{\bar{M}_{i} }} = \left[ {\begin{array}{*{20}c} {\bar{N}_{i} } \\ 0 \\ \end{array} } \right].$$

Proof of Theorem 4.3 is illustrated in Appendix 1.
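The first two relations in (20) are discrete Lyapunov (Stein) equations in \(\varGamma_i\) and \(W_i\); for a stable \(\bar{A}_i\) they have unique positive definite solutions. As a minimal numerical sketch (using SciPy, with illustrative placeholder matrices rather than the paper's actual \(\bar{A}_i\), \(Z_i\), and \(S\)):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative stable matrix Abar and positive definite Z, S
# (placeholders, not the paper's closed-loop matrices).
Abar = np.array([[0.5, 0.1],
                 [0.0, 0.4]])
Z = np.eye(2)   # stands in for Z_i = Mbar_i gamma_0i Mbar_i^T
S = np.eye(2)   # stands in for S = Qhat + T^T E^T Rhat E T

# Relation 1 of (20):  Z + Abar Gamma Abar^T - Gamma = 0
Gamma = solve_discrete_lyapunov(Abar, Z)

# Relation 2 of (20):  S + Abar^T W Abar - W = 0
W = solve_discrete_lyapunov(Abar.T, S)

# Residuals should vanish up to numerical precision.
res1 = Z + Abar @ Gamma @ Abar.T - Gamma
res2 = S + Abar.T @ W @ Abar - W
```

Note that `solve_discrete_lyapunov(a, q)` solves \(a x a^{\text{H}} - x + q = 0\), which matches the sign convention of (20) directly.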

Theorem 4.4

Given the known positive definite matrices \(W_{i} \;{\text{and}}\;\varGamma_{i}\), the controller parameters, represented by the unknown matrix \(E\) in (20), can be computed by simultaneously solving two coupled Sylvester matrix equations (SMEs) of the following forms:

$$\left\{ \begin{aligned} {\mathbf{\check{A} \check{X} \check{B} }} + {\mathbf{\check{C} \check{Y} \check{F} }} = {\mathbf{\check{S} }}_{1} \hfill \\ {\mathbf{\check{E} \check{X} \check{B} }} + {\mathbf{\check{F} \check{Y} \check{I} }} = {\mathbf{\check{S} }}_{2} \hfill \\ \end{aligned} \right.$$
(21)
$$\left\{ \begin{aligned} {\mathbf{\check{G} }}{\mathbf{\check{X} \check{M} }} + {\mathbf{\check{C} \check{Z} \check{I} }} = {\mathbf{\check{S} }}_{3} \hfill \\ {\mathbf{\check{E} \check{X} \check{M} }} + {\mathbf{\check{F} \check{Z} \check{I} }} = {\mathbf{\check{S} }}_{4} \hfill \\ \end{aligned} \right.$$
(22)

where

$$\check{S}_{1} = \mathop \sum \limits_{i = 1}^{2} [B_{i}^{\text{T}} W_{{11}_i} M_{i} \gamma_{0} N_{i}^{\text{T}} + B_{i}^{\text{T}} W_{{11}_i} A_{i} \varGamma_{{11}_i} C_{i}^{\text{T}} ], \quad \check{S}_{2} = \sum\limits_{i = 1}^{2} {[W_{{21}_i} M_{i} \gamma_{0} N_{i}^{\text{T}} + W_{{21}_i} A_{i} \varGamma_{{11}_i} C_{i}^{\text{T}} ]} ,$$
$$\check{S}_{3} = \sum\limits_{i = 1}^{2} {[B_{i}^{\text{T}} W_{{11}_i} A_{i} \varGamma_{{12}_i} ]} ,\quad \check{S}_{4} = \sum\limits_{i = 1}^{2} {[W_{{21}_i} A_{i} \varGamma_{{12}_i} ]} .$$
$$\check{A} = [\check{a}_{1} \check{a}_{2} ],[\check{a}_{i} ] = [B_{i}^{\text{T}} W_{{11}_i} B_{i} ], \quad \check{B} = [\check{b}_{1}^{\text{T}} \check{b}_{2}^{\text{T}} ]^{\text{T}} , \quad [\check{b}_{i} ] = [\varGamma_{{21}_i} C_{i}^{\text{T}} ],\quad \check{C} = [\check{c}_{1} \check{c}_{2} ],$$
$$[\check{c}_{i} ] = [B_{i}^{\text{T}} W_{{12}_i} ],\quad \check{D} = [\check{d}_{1} \check{d}_{2} ],\quad [\check{d}_{i} ] = [N_{i} \gamma_{0} N_{i}^{\text{T}} + C_{i} \varGamma_{{11}_i} C_{i}^{\text{T}} ],\quad \check{E} = [\check{e}_{1} \check{e}_{2} ],$$
$$[\check{e}_{i} ] = [W_{{21}_i} B_{i} ],\quad \check{F} = [\check{f}_{1} \check{f}_{2} ],\quad [\check{f}_{i} ] = [W_{{22}_i} ],\quad \check{G} = [\check{g}_{1} \check{g}_{2} ],\quad [\check{g}_{i} ]= [R + B_{i}^{\text{T}} W_{{11}_i} B_{i} ],$$
$$\check{M} = [\check{m}_{1}^{\text{T}} \check{m}_{2}^{\text{T}} ]^{\text{T}} , \quad [\check{m}_{i} ] = [\varGamma_{{22}_i} ],\quad \check{H} = [\check{h}_{1} \check{h}_{2} ],\quad [\check{h}_{i} ] = [C_{i} \varGamma_{{12}_i} ],$$
$$\begin{aligned} \check{I} & = [\check{I}_{1}^{\text{T}} \check{I}_{2}^{\text{T}} ]^{\text{T}} , \quad [\check{I}_{i} ] = [I_{i} ],\quad \check{X} = \left[ {\begin{array}{*{20}c} G & 0 \\ 0 & G \\ \end{array} } \right],\;\check{Y} = \left[ {\begin{array}{*{20}c} {K\check{d}_{i} + (P + KDG)\check{b}_{i} } & 0 \\ 0 & {K\check{d}_{i} + (P + KDG)\check{b}_{i} } \\ \end{array} } \right], \\ \check{Z} & = \left[ {\begin{array}{*{20}c} {K\check{h}_{i} + (P + KDG)\check{m}_{i} } & 0 \\ 0 & {K\check{h}_{i} + (P + KDG)\check{m}_{i} } \\ \end{array} } \right] \\ \end{aligned}$$

Proof of Theorem 4.4 is illustrated in Appendix 2.

It is worth mentioning that many numerical algorithms have been proposed in the literature for solving coupled SMEs; in this paper, however, the coupled SMEs are decoupled and a constructive algorithm is suggested to calculate the unknown controller parameters.
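A coupled SME pair such as (21) or (22) can always be reduced to one linear system via the identity \(\text{vec}(AXB) = (B^{\text{T}} \otimes A)\,\text{vec}(X)\). The following sketch uses generic matrix names (not the paper's check-accented matrices) to illustrate this standard vectorization approach:

```python
import numpy as np

def solve_coupled_sme(A1, B1, C1, D1, S1, A2, B2, C2, D2, S2, shape_X, shape_Y):
    """Solve the coupled Sylvester-type pair
         A1 X B1 + C1 Y D1 = S1
         A2 X B2 + C2 Y D2 = S2
    for X and Y, using vec(A X B) = (B^T kron A) vec(X)
    with column-major (Fortran-order) vectorization."""
    nX, mX = shape_X
    nY, mY = shape_Y
    blk = np.block([
        [np.kron(B1.T, A1), np.kron(D1.T, C1)],
        [np.kron(B2.T, A2), np.kron(D2.T, C2)],
    ])
    rhs = np.concatenate([S1.flatten(order="F"), S2.flatten(order="F")])
    sol, *_ = np.linalg.lstsq(blk, rhs, rcond=None)
    X = sol[:nX * mX].reshape(nX, mX, order="F")
    Y = sol[nX * mX:].reshape(nY, mY, order="F")
    return X, Y
```

Kronecker vectorization scales poorly with dimension, which is one motivation for the decoupled constructive algorithm of Theorem 4.5.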

Theorem 4.5

Given the symmetric positive definite matrices \(W_{{11}_{i}} ,W_{{22}_{i}} , \varGamma_{{11}_{i}}\) and \(\varGamma_{{22}_i}\), the controller parameters are calculated as follows:

$$\left\{ {\begin{array}{*{20}c} {\overline{\overline{A}} \check{Z} \overline{\overline{B}} - \check{Z} = \overline{\overline{W}} } \\ {\check{X} = (\check{G}^{\text{T}} \check{G} )^{ - 1} \check{G}^{\text{T}} (\check{S}_{3} - \check{C} \check{Z} \check{I} )\check{M}^{\text{T}} (\check{M} \check{M}^{\text{T}} )^{ - 1} } \\ {\check{Y} = (\check{F}^{\text{T}} \check{F} )^{ - 1} \check{F}^{\text{T}} \check{S}_{2} \check{I}^{\text{T}} (\check{I} \check{I}^{\text{T}} )^{ - 1} - (\check{F}^{T} \check{F} )^{ - 1} \check{F}^{\text{T}} \check{E} \check{X} \check{B} \check{I}^{\text{T}} (\check{I} \check{I}^{\text{T}} )^{ - 1} } \\ \end{array} } \right.$$
(23)

where

$$\overline{\overline{A}} = (\check{F}^{\text{T}} \check{F} )^{ - 1} \check{F}^{\text{T}} \check{E} (\check{G}^{\text{T}} \check{G} )^{ - 1} \check{G}^{\text{T}} \check{C} , \quad \overline{\overline{B}} = \check{I}_{1} , \quad \overline{\overline{W}} = (\check{F}^{\text{T}} \check{F} )^{ - 1} \check{F}^{\text{T}} [\check{E} (\check{G}^{\text{T}} \check{G} )^{ - 1} \check{G}^{\text{T}} \check{S}_{3} - \check{S}_{4} ]\check{I}^{\text{T}} (\check{I} \check{I}^{\text{T}} )^{ - 1} .$$

Proof of Theorem 4.5 is illustrated in Appendix 3.
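The first row of (23) is a Stein-type (discrete Sylvester) equation in \(\check{Z}\). A minimal sketch (generic matrices, assuming the standard solvability condition that no eigenvalue product \(\lambda_i(A)\lambda_j(B)\) equals 1) solves it by vectorization:

```python
import numpy as np

def solve_stein(A, B, W):
    """Solve A Z B - Z = W for Z via
    (B^T kron A - I) vec(Z) = vec(W), column-major vec."""
    n, m = A.shape[0], B.shape[1]
    M = np.kron(B.T, A) - np.eye(n * m)
    z = np.linalg.solve(M, W.flatten(order="F"))
    return z.reshape(n, m, order="F")

# Illustrative data (placeholders, not the paper's check-matrices).
A = np.array([[0.3, 0.1],
              [0.0, 0.2]])
B = np.array([[0.4, 0.0],
              [0.1, 0.5]])
W = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Z = solve_stein(A, B, W)
```

The remaining two rows of (23) are then explicit: \((\check{G}^{\text{T}}\check{G})^{-1}\check{G}^{\text{T}}\) and \(\check{M}^{\text{T}}(\check{M}\check{M}^{\text{T}})^{-1}\) are left and right pseudo-inverses, computable with `np.linalg.pinv`.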

Now, using Theorems 4.3–4.5, the controller parameters can be calculated analytically.

Algorithm 2

Step 0 :

Choose the value of the parameter \(\tau\) in the interval [\(\tau_{1} ,\tau_{2}\)] based on practical considerations.

Step 1 :

Set the initial value of \(\hat{\mu }_{1}\) so that it satisfies the energy-bound constraint on the uncertainty

$$\left\| {\hat{\mu }_{1} } \right\|^{2} < 1$$

For example, it can be chosen as

$$\hat{\mu }_{1}^{\text{T}} = [x_{1} (0) = 0\quad \mu_{1} (0) = 0.99\quad \mu_{1} (1) = 0 \cdots \mu_{1} (\tau ) = 0]^{\text{T}}$$
Step 2 :

Set the initial value of \(\hat{w}\) as follows

$$\hat{w}^{\text{T}} = [w(0) = 0\quad w(1) = 0\quad \cdots \quad w(\tau ) = 0]^{\text{T}}$$
Step 3 :

Compute the controller parameters based on (23)

$$\left\{ {\begin{array}{*{20}c} {\overline{\overline{A}} \check{Z} \overline{\overline{B}} - \check{Z} = ( {\check{F}^{T} \check{F} } )^{ - 1} \check{F}^{T} [ {\check{E} ( {\check{G}^{T} \check{G} } )^{ - 1} \check{G}^{T} \check{S}_{3} - \check{S}_{4} } ]\check{I}^{T} ( {\check{I} \check{I}^{T} } )^{ - 1} } \\ {\check{X} = ( {\check{G}^{T} \check{G} } )^{ - 1} \check{G}^{T} ( {\check{S}_{3} - \check{C} \check{Z} \check{I} } )\check{M}^{T} ( {\check{M} \check{M}^{T} } )^{ - 1} } \\ {\check{Y} = ( {\check{F}^{T} \check{F} } )^{ - 1} \check{F}^{T} \check{S}_{2} \check{I}^{T} ( {\check{I} \check{I}^{T} } )^{ - 1} - ( {\check{F}^{T} \check{F} } )^{ - 1} \check{F}^{T} \check{E} \check{X} \check{B} \check{I}^{T} ( {\check{I} \check{I}^{T} } )^{ - 1} } \\ \end{array} } \right.$$
Step 4 :

Solve the min–max Problem 4.4 using the results of Step 3, and compute new values for the uncertainties and the exogenous test signal [2],

$$J_{\text{D.CL}} = \mathop {\hbox{min} }\limits_{ w(t)} \mathop {\hbox{max} }\limits_{{\mu_{1} (t), x_{1} (0)}} \mathop \sum \limits_{t = 0}^{\tau } v(t)^{\text{T}} Sv(t) + x_{1} (t)^{\text{T}} Tx_{1} (t)$$

Subject to

$$x_{1} (0)^{\text{T}} x_{1} (0) + \mathop \sum \limits_{t = 0}^{\tau } \mu_{1} (t)^{\text{T}} \mu_{1} (t) < 1$$
Step 5 :

Update the values of \(W_{i}\) and \(\varGamma_{i}\)

$$W_{i} = \mathop \sum \limits_{k = 0}^{\tau } (\bar{A}_{i}^{\text{T}} )^{k} S(\bar{A}_{i}^{k} )\quad {\text{and}}\;\varGamma_{i} = \mathop \sum \limits_{k = 0}^{\tau } (\bar{A}_{i}^{k} )Z_{i} (\bar{A}_{i}^{\text{T}})^{k}$$
Step 6 :

Check the stopping criterion: if the results have converged to a constant value, stop the algorithm; otherwise, return to Step 3 and update the computed values.
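Steps 5 and 6 of Algorithm 2 can be sketched numerically. The following Python sketch is illustrative only: the matrices are placeholders (not the paper's closed-loop matrices), and the Step 3–4 updates, which involve the min–max solver, are not shown:

```python
import numpy as np

def finite_horizon_sums(Abar, S, Z, tau):
    """Step 5 (sketch): accumulate
         W     = sum_{k=0}^{tau} (Abar^T)^k S Abar^k
         Gamma = sum_{k=0}^{tau} Abar^k Z (Abar^T)^k
    by updating Abar^k incrementally instead of forming matrix powers."""
    n = Abar.shape[0]
    W = np.zeros((n, n))
    Gamma = np.zeros((n, n))
    Pk = np.eye(n)                      # holds Abar^k
    for _ in range(tau + 1):
        W += Pk.T @ S @ Pk
        Gamma += Pk @ Z @ Pk.T
        Pk = Abar @ Pk
    return W, Gamma

def converged(old, new, tol=1e-9):
    """Step 6 (sketch): stop when successive iterates agree within tol."""
    return np.linalg.norm(new - old) < tol * max(1.0, np.linalg.norm(old))

# Illustrative data (placeholders, not the paper's matrices).
Abar = np.array([[0.5, 0.1],
                 [0.0, 0.4]])
S = np.eye(2)
Z = np.eye(2)
tau = 18
W, Gamma = finite_horizon_sums(Abar, S, Z, tau)
```

In the full algorithm, `W` and `Gamma` from Step 5 feed back into the Step 3 computation of the controller parameters, and `converged` is tested on the iterates of the cost or parameters.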

Compared with the static algorithm, the SME algorithm is analytical and converges to the optimal result in fewer iterations for systems with a small number of models. However, the analytical solution of the equations in the SME algorithm is complicated and time-consuming.

5 Numerical Example

To evaluate the synthesis approach, Problem 3.1 is solved using both solution algorithms in three different scenarios. In the first scenario, the test signal and controller are designed for a MIMO system using Algorithm 1. In the second, Algorithm 2 is used to solve the synthesis problem. In the third scenario, to show the effect of the weighting matrices in (7) on the synthesis results, the second scenario is re-solved with different weighting matrices.

Consider the following MIMO system with healthy and faulty models denoted as Model 1 and Model 2, respectively. The system is of second order with two inputs and two outputs, and both models are unstable. The proposed models are as follows:

$$A_{1} = \left( {\begin{array}{*{20}c} { - 0.28} & { - 0.8} \\ {2.4} & { - 0.4} \\ \end{array} } \right),\;B_{1} = \left( {\begin{array}{*{20}c} 3 & {0.5} \\ 1 & { - 2} \\ \end{array} } \right),\;M_{1} = \left( {\begin{array}{*{20}c} 1 & 2 \\ 0 & 1 \\ \end{array} } \right),\;C_{1} = \left( {\begin{array}{*{20}c} 0 & { - 2} \\ 1 & 3 \\ \end{array} } \right),\;D_{1} = \left( {\begin{array}{*{20}c} {0.5} & 3 \\ 1 & 1 \\ \end{array} } \right),\;N_{1} = \left( {\begin{array}{*{20}c} 1 & {0.5} \\ 3 & 1 \\ \end{array} } \right),$$
$$A_{2} = \left( {\begin{array}{*{20}c} { - 1.1} & {0.55} \\ {0.57} & {1.14} \\ \end{array} } \right),\; B_{2} = \left( {\begin{array}{*{20}c} {1.8} & {0.5} \\ { - 1} & {1.8} \\ \end{array} } \right),M_{2} = \left( {\begin{array}{*{20}c} 0 & { - 1} \\ 2 & 3 \\ \end{array} } \right),\;C_{2} = \left( {\begin{array}{*{20}c} { - 2.1} & 0 \\ 1 & { - 1} \\ \end{array} } \right),\;D_{2} = \left( {\begin{array}{*{20}c} {0.5} & 3 \\ 1 & 1 \\ \end{array} } \right),\;N_{2} = \left( {\begin{array}{*{20}c} 3 & 1 \\ { - 1} & 1 \\ \end{array} } \right).$$
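The claimed instability of both models can be verified directly: a discrete-time model is unstable when the spectral radius of its state matrix exceeds 1. A quick check on the \(A_1\) and \(A_2\) given above:

```python
import numpy as np

# Healthy (Model 1) and faulty (Model 2) state matrices from the example.
A1 = np.array([[-0.28, -0.8],
               [ 2.4,  -0.4]])
A2 = np.array([[-1.1,  0.55],
               [ 0.57, 1.14]])

def spectral_radius(A):
    """Largest eigenvalue magnitude; > 1 means the discrete-time
    model is unstable."""
    return np.max(np.abs(np.linalg.eigvals(A)))

rho1, rho2 = spectral_radius(A1), spectral_radius(A2)
```

For \(A_1\) the eigenvalues form a complex pair with modulus \(\sqrt{\det A_1} = \sqrt{2.032} \approx 1.43\), and \(A_2\) has a real eigenvalue near 1.27, so both models are indeed unstable.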

For Scenarios I and II, all constant weighting matrices are chosen as identity matrices of appropriate dimensions. The injection time-period of the test signal is 18 s, and the scaled optimal test signal \(w(t)\) for the first scenario is obtained after 23 iterations, as shown in Fig. 3. The minimum value of the cost function for the first scenario is 30.1249. The optimal DOF controller is calculated as:

$$P = \left( {\begin{array}{*{20}c} { - 2.5090} & {0.6181} \\ { - 3.3922} & {0.5411} \\ \end{array} } \right),\; G = \left( {\begin{array}{*{20}c} {0.0190} & {0.3262} \\ {0.6781} & { - 0.0872} \\ \end{array} } \right),\; K = \left( {\begin{array}{*{20}c} {1.1789} & {0.4678} \\ {1.4612} & {0.8281} \\ \end{array} } \right).$$
Fig. 3 Scaled optimal test signal for Scenario 1

With the designed controller, the closed-loop poles for the healthy and faulty models are placed at \(p_{1,2} = 0.7180 \pm 0.4533i,\; p_{3,4} = - 0.6681 \pm 0.5443i\) and \(p_{1} = - 0.4313,\; p_{2,3} = 0.6324 \pm 0.8773i,\; p_{4} = - 0.0136\), respectively.

In Scenario II, the controller and the test signal are designed using Algorithm 2, converging at the 14th iteration. The test signal for this case is depicted in Fig. 4, and the minimum value of the cost function is 27.9311, which is 7% lower than in the first scenario. Note that this does not mean that the energy of the test signal in the second case is necessarily lower than in the first case. In fact, the optimal test signal minimizing the quadratic cost in (7) enables the optimal controller performance to be achieved; there is a trade-off between controller performance and test-signal energy. The optimal controller parameters are calculated as:

$$P = \left( {\begin{array}{*{20}c} { - 2.7090} & {0.6183} \\ { - 3.2922} & {0.6402} \\ \end{array} } \right),\; G = \left( {\begin{array}{*{20}c} {0.0090} & {0.1762} \\ {0.6676} & { - 0.0972} \\ \end{array} } \right),\; K = \left( {\begin{array}{*{20}c} {1.0795} & {0.4678} \\ {1.4610} & {0.8281} \\ \end{array} } \right).$$
Fig. 4 Scaled optimal test signal for Scenario 2

With this controller, the closed-loop poles for the healthy and faulty models are placed at \(p_{1,2} = - 0.3530 \pm 0.4809i,\; p_{3,4} = 0.1043 \pm 0.3203i\) and \(p_{1} = - 0.8942,\; p_{2,3} = 0.4694 \pm 0.6082i,\; p_{4} = 0.1783\), respectively.
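Stability of both closed-loop models in Scenario II can be confirmed by checking that every reported pole lies strictly inside the unit circle. A quick numerical check on the listed values:

```python
import numpy as np

# Closed-loop poles reported for Scenario II.
healthy = [-0.3530 + 0.4809j, -0.3530 - 0.4809j,
            0.1043 + 0.3203j,  0.1043 - 0.3203j]
faulty  = [-0.8942 + 0j,
            0.4694 + 0.6082j,  0.4694 - 0.6082j,
            0.1783 + 0j]

# Discrete-time stability requires all pole magnitudes below 1.
max_mag = max(abs(p) for p in healthy + faulty)
```

The largest magnitude is that of the real faulty-model pole at \(-0.8942\), so both closed loops are stable in the discrete-time sense.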

It is expected that a proper test signal with lower energy can be found by varying the weighting matrices \(S\) and \(T\) in (7). As noted before, if the weighting matrices are chosen properly, the cost function in (7) can be reduced. To verify this claim, Problem 3.1 is re-solved with \(S = {\text{diag(}}0.6 )\) and \(T = {\text{diag}}(1.4)\) as Scenario III. The injection time-period is again assumed to be 18 s, and the scaled optimal exogenous test signal \(w(t)\) is depicted in Fig. 5. The minimum value of the cost function is reduced to 27.0318 after the 17th iteration, and the optimal controller parameters are calculated as:

$$P = \left( {\begin{array}{*{20}c} { - 2.6211} & {0.5183} \\ { - 3.7922} & {0.8402} \\ \end{array} } \right),\;G = \left( {\begin{array}{*{20}c} {0.0295} & {0.2962} \\ {0.6576} & { - 0.0872} \\ \end{array} } \right),\;K = \left( {\begin{array}{*{20}c} {1.2795} & {0.4678} \\ {1.3110} & {0.7281} \\ \end{array} } \right).$$
Fig. 5 Scaled optimal test signal for Scenario 3

The closed-loop poles of the healthy and faulty models for this controller are placed at \(p_{1,2} = 0.1907 \pm 0.7395i,\; p_{3} = - 0.9157,\; p_{4} = 0.9414\) and \(p_{1,2} = - 0.5882 \pm 0.7760i,\; p_{3,4} = - 0.0247 \pm 0.4033i\), respectively. The results of the three scenarios are summarized in Table 1.

Table 1 Simulation results of scenarios 1, 2 and 3

6 Conclusion

In this paper, the optimal integrated synthesis of AFD and control has been formulated in a discrete-time setting. An optimal fixed-order dynamic output feedback controller is designed that guarantees the stability of the healthy and faulty models and optimizes the control performance for the healthy model. From the AFD point of view, an optimal test signal is generated such that separation of the system models is guaranteed with minimum energy. In this way, the optimal integrated AFD and control problem is formulated as a constrained finite-dimensional optimization problem with a general quadratic performance index. Two recursive constructive algorithms have been suggested for finding sub-optimal solutions of the proposed optimization problem. Finally, to illustrate the effectiveness of the theoretical results, the algorithms were applied to a MIMO system with two unstable models, and the test signal was generated in three scenarios. By varying the constant weighting matrices, the trade-off between control and detection has been shown. Future research includes two directions: extending the proposed AFD approach to a larger class of uncertain nonlinear systems, and implementation on an experimental set-up such as electrical machines.