
15.1 Introduction

Bilinear matrix inequality (BMI) problems are not convex optimization problems, owing to the bilinear terms in the constraints [1], and can therefore have multiple local solutions. BMI problems have been proved to be NP-hard [2]. In recent years, considerable research effort has been devoted to the development of algorithms for solving BMI problems. The branch-and-bound method [3, 4] is a global optimization algorithm for solving BMIs based on implicit enumeration: using branching rules and bounding approaches, local minima are examined in order to obtain the global minimum. However, combinatorial explosion is common when solving high-order BMI problems. Another simple global optimization algorithm is the random search method [5], for which both the probability of finding a solution and the number of random trials can be evaluated. Its drawbacks are that the probability of failure is not zero for a finite number of iterations and that the computational complexity is still high. The rank minimization approach [1], also a global method, is based on the semidefinite programming relaxation of indefinite quadratic programming, but its convergence rate is slow and it sometimes cannot find the global optimum. Because global methods face these difficulties, most of the algorithms in the literature that claim applicability to control-related problems of practical size are local search algorithms. Most existing local approaches are computationally fast but, depending on the initial condition, may not converge to the global optimum. The simplest local approach exploits the fact that by fixing some of the variables the BMI problem becomes convex in the remaining variables, and vice versa, and iterates between the two convex subproblems [6–9]. This method is not guaranteed to converge even to a local solution. Another local approach is the so-called over-bounding method [10], which splits the two variables of each bilinear term into separate LMI terms and successively replaces the nonpositive quadratic terms by their upper bounds. The over-bounding method can reduce the conservatism that arises from seeking a common LMI solution in earlier results, but it sometimes suffers from a low convergence rate. The path-following method (PFM) was proposed in [11]. As a step-by-step method that applies a linearization at its key step, it has shown significant advantages [12–14]. However, its linearization fails to converge when the perturbation range is too wide, and converges very slowly when the perturbation range is too small.

The main purpose of this paper is to improve the path-following method. The result is based on a new linearization method. The detailed steps of the method are presented through the problem of static output feedback control, and a numerical example is given to show the effectiveness of our method.

This paper is organized as follows. Section 15.2 formulates the problem and describes the steps of the existing path-following method. Section 15.3 presents the new linearization and the improved path-following method in detail. Section 15.4 applies the algorithm to a numerical example and compares the results with those of the existing path-following method. Finally, Sect. 15.5 concludes the work with some comments.

15.2 Problem Formulation and Path-Following Method

15.2.1 Problem Formulation

Consider the problem of static output feedback (SOF) design for the following linear time-invariant dynamical system:

$$ \left\{ {\begin{array}{*{20}l} {\dot{x} = Ax + Bu} \hfill \\ {y = Cx,} \hfill \\ \end{array} } \right. $$
(15.1)

The SOF stabilization problem is to find a SOF controller \( u = Fy \), such that the closed-loop system given by

$$ \dot{x} = (A + BFC)x $$
(15.2)

is stable. As is known, the closed-loop system (15.2) is stable if and only if there exists a \( P = P^{T} > 0 \) such that

$$ P(A + BFC) + (A + BFC)^{T} P < 0 $$
(15.3)
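As a side remark, condition (15.3) can be checked numerically for a given closed-loop matrix by solving the corresponding Lyapunov equation. The following minimal Python sketch assumes NumPy and SciPy are available (they are not used in the paper); the helper name and the small 2 × 2 test matrix are our own illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def satisfies_lyapunov_condition(A_cl, tol=1e-9):
    """Check (15.3): A_cl is Hurwitz iff A_cl^T P + P A_cl = -I
    has a symmetric solution P > 0."""
    n = A_cl.shape[0]
    # solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
    # so pass a = A_cl^T and q = -I.
    P = solve_continuous_lyapunov(A_cl.T, -np.eye(n))
    P = 0.5 * (P + P.T)  # symmetrize against round-off
    residual = A_cl.T @ P + P @ A_cl + np.eye(n)
    equation_ok = np.linalg.norm(residual) < 1e-6 * max(1.0, np.linalg.norm(P))
    return equation_ok and bool(np.all(np.linalg.eigvalsh(P) > tol))

# Example usage with a (hypothetical) stable closed-loop matrix:
A_cl = np.array([[-1.0, 2.0], [0.0, -3.0]])
print(satisfies_lyapunov_condition(A_cl))  # True
```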

As mentioned in [8], if

$$ P(A + BFC) + (A + BFC)^{T} P - \alpha P < 0 $$
(15.4)

holds for some scalar \( \alpha \), then all eigenvalues of the closed-loop system matrix \( A + BFC \) lie strictly to the left of the vertical line \( {\text{Re}}(s) = \alpha /2 \) in the complex s-plane. Hence, if an \( \alpha \le 0 \) satisfying (15.4) can be found, the SOF stabilization problem is solved. The SOF optimization problem in control is therefore

$$ \begin{aligned} OP1 & \quad \hbox{min} \,\alpha \\ {\text{subject to}} & \quad P = P^{T} > 0, \\ & \quad P(A + BFC) + (A + BFC)^{T} P - \alpha P < 0. \\ \end{aligned} $$
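To interpret the optimal value of OP1: by the α-stability discussion above, for a fixed gain \( F \) the infimum of the feasible \( \alpha \) in (15.4) over all \( P = P^{T} > 0 \) is \( 2\max_{i} {\text{Re}}\,\lambda_{i} (A + BFC) \). The short Python sketch below (the helper name is ours, not part of the method) evaluates this quantity for a candidate gain.

```python
import numpy as np

def alpha_for_gain(A, B, C, F):
    """For a fixed SOF gain F, the infimum of alpha feasible in (15.4)
    over P > 0 is twice the spectral abscissa of A + B F C."""
    eigs = np.linalg.eigvals(A + B @ F @ C)
    return 2.0 * float(np.max(eigs.real))

# A candidate gain F is stabilizing exactly when alpha_for_gain(A, B, C, F) < 0.
```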

15.2.2 PFM

In order to solve optimization problem OP1, the PFM developed in [11] can be adopted. The method consists of three steps.

  • Step 1: Initialization step. This step searches for suboptimal initial values \( F_{0} \), \( P_{0} \), and \( \alpha_{0} \) by solving OP1 as follows.

    1. Select an initial value \( M \) such that \( A + BM \) is Hurwitz;

    2. Solve the optimization problem OP2 with respect to \( P \) and \( \alpha \):

      $$ \begin{aligned} OP2 & \quad \hbox{min} \,\alpha \\ {\text{subject to}} & \quad P = P^{T} > 0, \\ & \quad P(A + BM) + (A + BM)^{T} P - \alpha P < 0. \\ \end{aligned} $$
    3. For fixed \( P \), solve OP1 with respect to \( \alpha \) and \( F \) (a code sketch of this fixed-\( P \) subproblem is given after the list).

  • If \( \alpha \le 0 \), \( F \) is a stabilizing SOF gain; stop. Else, set \( k = 1 \), \( i = 1 \), and let \( \alpha_{0} = \alpha \), \( F_{0} = F \), \( A_{c} = A + BF_{0} C \).

  • Step 2: Perturbation step. The BMI (15.4) is linearized around \( (F_{k} ,P_{k} ,\alpha_{k} ) \) by means of the perturbations \( \Delta F \), \( \Delta P \), and \( \Delta \alpha \). Set \( \beta = \beta_{0} \), where \( \beta \) bounds the allowed size of \( \Delta P \). The LMI problem to be solved in this step is

    $$ \begin{aligned} OP3 & \quad \hbox{min} \,\alpha \\ {\text{subject to}} & \quad P_{k} +\Delta P > 0, \\ & \quad \left( {\begin{array}{*{20}c} {\beta P_{k} } & {\Delta P} \\ {\Delta P} & {\beta P_{k} } \\ \end{array} } \right) > 0, \\ & \quad (P_{k} +\Delta P)A_{c} + A_{c}^{T} (P_{k} +\Delta P) + P_{k} B\Delta FC + (B\Delta FC)^{T} P_{k} < \alpha P_{k} . \\ \end{aligned} $$
  • Step 3: Update step. Let \( F_{k} = F_{k} +\Delta F \), \( P_{k} = P_{k} +\Delta P \), \( \alpha_{k} = \alpha_{k} +\Delta \alpha \). For fixed \( F_{k} \), compute a new \( P_{k} \) by solving OP1 with respect to \( P \) and \( \alpha \); then, for fixed \( P_{k} \), compute new \( F_{k} \) and \( \alpha_{k} \) by solving OP1 with respect to \( F \) and \( \alpha \). Let \( A_{c} = A + BF_{k} C \).

    If \( \alpha \le 0 \), stop. Otherwise, if the relative improvement in \( \alpha \) is larger than a preset value, let \( k = k + 1 \) and go to Step 2.
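As referenced in the initialization step above, solving OP1 for fixed \( P \) is a genuine LMI problem. The following is a minimal Python sketch of one such solve; it assumes the cvxpy package and an installed SDP solver (neither is used in the paper), the helper name and the margin eps are ours, and the strict inequality is handled with a small margin.

```python
import cvxpy as cp
import numpy as np

def solve_op1_fixed_P(A, B, C, P, eps=1e-6):
    """Solve OP1 with P fixed: minimize alpha over (F, alpha) subject to
    P(A + B F C) + (A + B F C)^T P - alpha P < 0 (an LMI once P is fixed)."""
    n = A.shape[0]
    m, p = B.shape[1], C.shape[0]
    F = cp.Variable((m, p))
    alpha = cp.Variable()
    Acl = A + B @ F @ C                     # affine in F
    lyap = P @ Acl + Acl.T @ P - alpha * P
    lyap = 0.5 * (lyap + lyap.T)            # enforce symmetry explicitly
    prob = cp.Problem(cp.Minimize(alpha),
                      [lyap << -eps * np.eye(n)])
    prob.solve()                            # any installed SDP solver
    return F.value, alpha.value
```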

15.3 Main Result

In this section, the new linearization method and the improved path-following method (IPFM) are presented.

Write \( \Delta F = F - F_{k} \), \( \Delta P = P - P_{k} \) and \( A_{c} = A + BF_{k} C \), where \( F_{k} \) and \( P_{k} \) are fixed matrices. The left side of inequality (15.3) is expanded around \( (F_{k} ,P_{k} ) \) as follows:

$$ \begin{aligned} & P(A + BFC) + (A + BFC)^{T} P \\ & \quad = P_{k} (A_{c} + B\Delta FC) + (A_{c} + B\Delta FC)^{T} P_{k} +\Delta PA_{c} + A_{c}^{T}\Delta P +\Delta PB\Delta FC + (B\Delta FC)^{T}\Delta P \\ & \quad \le P_{k} (A_{c} + B\Delta FC) + (A_{c} + B\Delta FC)^{T} P_{k} +\Delta PA_{c} + A_{c}^{T}\Delta P + \frac{1}{2}(\Delta P + B\Delta FC)^{T} (\Delta P + B\Delta FC) \\ & \quad = b(\Delta F,\Delta P). \\ \end{aligned} $$
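The inequality in the expansion above is the elementary completion-of-squares bound, applied with \( X = \Delta P \) and \( Y = B\Delta FC \):

$$ X^{T} Y + Y^{T} X \le \frac{1}{2}(X + Y)^{T} (X + Y), \quad {\text{since}}\quad \frac{1}{2}(X + Y)^{T} (X + Y) - X^{T} Y - Y^{T} X = \frac{1}{2}(X - Y)^{T} (X - Y) \ge 0. $$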

Then, by applying the Schur complement, \( b(\Delta F,\Delta P) < 0 \) is equivalent to the following LMI condition:

$$ \left( {\begin{array}{*{20}c} {(P_{k} +\Delta P)A_{c} + A_{c}^{T} (P_{k} +\Delta P) + P_{k} B\Delta FC + (B\Delta FC)^{T} P_{k} } & * \\ {B\Delta FC +\Delta P} & { - 2I} \\ \end{array} } \right) < 0 $$
(15.5)
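Indeed, taking the Schur complement of the \( (2,2) \) block \( -2I \) in (15.5) recovers exactly the bound derived above:

$$ (P_{k} +\Delta P)A_{c} + A_{c}^{T} (P_{k} +\Delta P) + P_{k} B\Delta FC + (B\Delta FC)^{T} P_{k} + \frac{1}{2}(B\Delta FC +\Delta P)^{T} (B\Delta FC +\Delta P) = b(\Delta F,\Delta P) < 0. $$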

The algorithm of the improved path-following method consists of five steps.

  • Step 1: Initialization step. This step is the same as the initialization step in PFM.

  • Step 2: Small-range perturbation step. The BMI (15.4) is linearized around \( (F_{k} ,P_{k} ,\alpha_{k} ) \) by means of the perturbations \( \Delta F \), \( \Delta P \), and \( \Delta \alpha \), using the new linearization (15.5). Set \( \beta = \beta_{0} \). The LMI problem to be solved in this step is OP4 below (a code sketch of a single OP4 solve is given after the list):

    $$ \begin{aligned} OP4 & \quad \hbox{min} \,\alpha \\ {\text{subject to}} & \quad P_{k} +\Delta P > 0, \\ & \quad \left( {\begin{array}{*{20}c} {\beta P_{k} } & {\Delta P} \\ {\Delta P} & {\beta P_{k} } \\ \end{array} } \right) > 0, \\ & \quad \left( {\begin{array}{*{20}c} {(P_{k} +\Delta P)A_{c} + A_{c}^{T} (P_{k} +\Delta P) + P_{k} B\Delta FC + (B\Delta FC)^{T} P_{k} - \alpha P_{k} } & * \\ {B\Delta FC +\Delta P} & { - 2I} \\ \end{array} } \right) < 0. \\ \end{aligned} $$
  • Step 3: Update step. Let \( F_{k} = F_{k} +\Delta F \), \( P_{k} = P_{k} +\Delta P \), \( \alpha_{k} = \alpha_{k} +\Delta \alpha \). For fixed \( F_{k} \), compute a new \( P_{k} \) by solving OP1 with respect to \( P \) and \( \alpha \); then, for fixed \( P_{k} \), compute new \( F_{k} \) and \( \alpha_{k} \) by solving OP1 with respect to \( F \) and \( \alpha \). Let \( A_{c} = A + BF_{k} C \).

  • If \( \alpha \le 0 \), stop. Otherwise, if \( |\alpha_{k} - \alpha_{k - 1} | < \varepsilon_{1} \), a prescribed tolerance, set \( j = 1 \), let \( \beta = \beta \times 2 \), \( F_{0} = F_{k} \), \( P_{0} = P_{k} \), \( \alpha_{0} = \alpha_{k} \), and go to Step 4; otherwise, let \( k = k + 1 \) and go to Step 2.

  • Step 4: Wide-range perturbation step. Solve OP4 with \( k \) replaced by \( j \).

  • Step 5: Update step. Let \( F_{j} = F_{j} +\Delta F \), \( P_{j} = P_{j} +\Delta P \), \( \alpha_{j} = \alpha_{j} +\Delta \alpha \). For fixed \( F_{j} \), compute a new \( P_{j} \) by solving OP1 with respect to \( P \) and \( \alpha \); then, for fixed \( P_{j} \), compute new \( F_{j} \) and \( \alpha_{j} \) by solving OP1 with respect to \( F \) and \( \alpha \).

  • Let \( A_{c} = A + BF_{j} C. \)

  • If \( \alpha \le 0 \), stop. Otherwise, if \( \alpha_{j} - \alpha_{j - 1} < - \varepsilon_{2} \), let \( F_{0} = F_{j} \), \( P_{0} = P_{j} \), \( \alpha_{0} = \alpha_{j} \), \( i = i + 1 \), set \( k = 1 \), and go to Step 2; otherwise, if \( \alpha_{j} - \alpha_{j - 1} \ge - \varepsilon_{2} \) and \( j < 3 \), let \( j = j + 1 \) and \( \beta = \beta \times 2 \), and go to Step 4; otherwise (\( \alpha_{j} - \alpha_{j - 1} \ge - \varepsilon_{2} \) and \( j \ge 3 \)), stop: the system may not be stabilizable via static output feedback.
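To make the perturbation steps concrete, the following is a minimal sketch of a single OP4 solve using cvxpy (assumed available together with an SDP solver; neither is used in the paper). Strict inequalities are handled with a small margin eps, and all helper and variable names are ours.

```python
import cvxpy as cp
import numpy as np

def solve_op4(A, B, C, Fk, Pk, beta, eps=1e-6):
    """One perturbation step of IPFM: solve OP4 around (Fk, Pk)
    and return the increments (dF, dP) and the achieved alpha."""
    n = A.shape[0]
    m, p = B.shape[1], C.shape[0]
    Ac = A + B @ Fk @ C
    dF = cp.Variable((m, p))
    dP = cp.Variable((n, n), symmetric=True)
    alpha = cp.Variable()
    BdFC = B @ dF @ C
    # (1,1) block of (15.5), shifted by -alpha*Pk as in OP4
    blk11 = ((Pk + dP) @ Ac + Ac.T @ (Pk + dP)
             + Pk @ BdFC + BdFC.T @ Pk - alpha * Pk)
    blk21 = BdFC + dP
    lmi = cp.bmat([[blk11, blk21.T],
                   [blk21, -2 * np.eye(n)]])
    lmi = 0.5 * (lmi + lmi.T)                    # enforce symmetry explicitly
    constraints = [
        Pk + dP >> eps * np.eye(n),              # updated P stays positive definite
        cp.bmat([[beta * Pk, dP],
                 [dP, beta * Pk]]) >> 0,         # trust region: bounds the size of dP
        lmi << -eps * np.eye(2 * n),             # linearized BMI, cf. (15.5)
    ]
    prob = cp.Problem(cp.Minimize(alpha), constraints)
    prob.solve()                                 # any installed SDP solver
    return dF.value, dP.value, alpha.value
```

In this sketch, Step 2 of IPFM corresponds to calling such a routine with a small \( \beta = \beta_{0} \), and Step 4 to calling it again with \( \beta \) doubled, which is what widens the allowed perturbation when the small-range iterations stall.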

15.4 Numerical Example

An example concerning the SOF control problem is given in this section.

Example 15.1

Consider the SOF stabilization problem of system (15.1) with the following parameter matrices [6]:

$$ \begin{aligned} A = & \,\left( {\begin{array}{*{20}c} { - 4} & { - 2} & { - 8} & 5 & { - 1} & { - 8} & 4 \\ { - 9} & { - 7} & { - 6} & { - 3} & { - 2} & 2 & 6 \\ { - 7} & { - 3} & 7 & 5 & 2 & {10} & { - 1} \\ { - 6} & { - 3} & 8 & 1 & 2 & 3 & { - 7} \\ 0 & { - 5} & 6 & { - 3} & { - 4} & 6 & 1 \\ 2 & 8 & { - 4} & 6 & { - 9} & { - 2} & { - 4} \\ 5 & 8 & 3 & 1 & 9 & { - 6} & 3 \\ \end{array} } \right), \\ B = & \,\left( {\begin{array}{*{20}c} { - 3.9} & 2 & {0.1} & { - 2.5} & { - 1} & {2.5} & { - 1} \\ {0.5} & {0.5} & { - 1} & { - 0.5} & 1 & 2 & { - 0.05} \\ \end{array} } \right)^{T} , \\ C = & \,\left( {\begin{array}{*{20}c} 3 & 6 & { - 5} & { - 2} & { - 1} & { - 7} & 5 \\ { - 1} & { - 4} & { - 7} & { - 1} & { - 6} & { - 5} & { - 3} \\ \end{array} } \right). \\ \end{aligned} $$

In this example, two different initialization methods are used in the first step, in order to test whether IPFM is sensitive to the initial value. The first initialization method is the one given in our algorithm; it is very simple, but the resulting initial value is not particularly good. The second, from [6], optimizes the initial value by iteration; it is more complex, but a better initial value can be obtained.

Using the initialization method in our algorithms, we obtain the initial value

$$ F_{01} = \left( {\begin{array}{*{20}c} {0.4890} & {0.1910} \\ {0.4440} & {1.4200} \\ \end{array} } \right) $$

and \( \alpha_{01} = 49.8330 \). After four small-range perturbation iterations and three wide-range perturbation iterations of IPFM, a SOF gain is found as

$$ F = \left( {\begin{array}{*{20}c} {0.5663} & {3.5617} \\ { - 0.0314} & {1.3763} \\ \end{array} } \right), $$

\( \alpha = - 1.7580 \), and the eigenvalues of the closed-loop system are −20.9826 ± j25.8332, −0.9704 ± j12.6477, −0.9618 ± j0.0784, and −5.0692. In contrast, the existing PFM cannot find a stabilizing solution.
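For reference, the closed-loop spectrum reported above can be re-checked directly from the data of Example 15.1 with the short script below (NumPy assumed); because the gain is printed to four decimals, the recomputed eigenvalues will agree with the quoted values only up to rounding.

```python
import numpy as np

# Data of Example 15.1 (A, B, C from the problem statement; F is the IPFM gain
# reported above, rounded to four decimals).
A = np.array([[-4, -2, -8,  5, -1, -8,  4],
              [-9, -7, -6, -3, -2,  2,  6],
              [-7, -3,  7,  5,  2, 10, -1],
              [-6, -3,  8,  1,  2,  3, -7],
              [ 0, -5,  6, -3, -4,  6,  1],
              [ 2,  8, -4,  6, -9, -2, -4],
              [ 5,  8,  3,  1,  9, -6,  3]], dtype=float)
B = np.array([[-3.9, 2.0, 0.1, -2.5, -1.0, 2.5, -1.0],
              [ 0.5, 0.5, -1.0, -0.5, 1.0, 2.0, -0.05]]).T
C = np.array([[ 3,  6, -5, -2, -1, -7,  5],
              [-1, -4, -7, -1, -6, -5, -3]], dtype=float)
F = np.array([[ 0.5663, 3.5617],
              [-0.0314, 1.3763]])

eigs = np.linalg.eigvals(A + B @ F @ C)
print(np.sort_complex(eigs))
print("stabilizing:", bool(np.max(eigs.real) < 0))
```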

Using the initialization step in [6], we obtain the initial value

$$ F_{02} = \left( {\begin{array}{*{20}c} { - 0.8871} & {4.9310} \\ { - 0.6576} & {0.9867} \\ \end{array} } \right) $$

and \( \alpha_{02} = 0.00079717 \). After two small-range perturbation iterations and no wide-range perturbation iterations, a SOF gain \( F \) is found as

$$ F = \left( {\begin{array}{*{20}c} { - 0.8664} & {4.7886} \\ { - 0.6107} & {0.8197} \\ \end{array} } \right), $$

\( \alpha = - 1.0250 \), and the eigenvalues of the closed-loop system are −6.6636 ± j35.4503, −0.8788 ± j12.6648, −0.5967 and −4.8248 ± j6.2698. In this case, the PFM can also find a solution as

$$ F = \left( {\begin{array}{*{20}c} { - 0.7647} & {4.5271} \\ { - 0.5021} & {0.6122} \\ \end{array} } \right), $$

and \( \alpha = - 2.7612 \).

Table 15.1 shows the comparison of convergence between IPFM and PFM. With \( \beta = 0.2 \), both methods converge; when the value increases to 0.5, IPFM still converges, but PFM does not.

Table 15.1 The comparison of convergence between IPFM and PFM (Example 15.1)

The results indicate that IPFM is able to quickly obtain a stabilizing SOF gain under both initial values, whereas PFM can solve the SOF problem under the initial value \( F_{02} \) but not under the initial value \( F_{01} \). This shows that IPFM is not sensitive to the initial values: owing to the newly introduced wide-range perturbation step, IPFM has the ability to escape from local optima.

15.5 Conclusion

In this paper, we have presented an improved path-following method. Compared with the existing path-following method, our method linearizes the BMIs by means of a new linearization approach, which improves convergence to a great extent. Moreover, by adding a wide-range perturbation step, IPFM is able to escape from local optima. A numerical example shows that the convergence and optimization ability of IPFM are better than those of PFM.