1. INTRODUCTION

Real-world control problems are always multicriteria problems. Finding the Pareto set and hence the Pareto optimal solutions, i.e., solutions that cannot be improved simultaneously in all criteria, is a difficult problem. It is well known that, in the general case, replacing a multicriteria problem with a single-criterion problem whose criterion is a linear combination of the original criteria ensures finding a certain subset of the Pareto set (the Pareto optimal front). However, in multicriteria minimax problems, where the individual criteria are the maxima of some functionals with respect to uncertain variables or functions, even such a replacement leads to the difficult problem of optimizing a linear combination of the maxima of functionals. There are only a few multicriteria control problems in which it has been possible to describe the Pareto optimal solutions: linear-quadratic Gaussian control [1] and \(H_2 \)-optimal control [2] based on the \(Q \)-parametrization of the stabilizing controllers of linear time-invariant systems on the infinite horizon, as well as generalized \(H_2 \)-optimal control [3, 4] for linear time-varying systems on a finite horizon and time-invariant systems on the infinite horizon.

Pareto suboptimal control laws whose losses relative to the Pareto optimal laws do not exceed \(1-\sqrt {N}/N\), where \(N \) is the number of criteria, were found in [5] for multicriteria optimization problems with criteria in the form of \(H_{\infty }\)- and \(\gamma _0 \)-norms in deterministic and stochastic settings. As a rule, suboptimal control design in multicriteria control problems with criteria including the \(H_{\infty } \)-norm has succeeded only under additional constraints imposed on the matrix equations or the linear matrix inequalities characterizing each of the criteria, such as the equality of the Lyapunov function matrices [6–11], a condition that is implicitly present in the concept of mixed \(H_2/H_{\infty } \)-control, or the equality of auxiliary matrix variables [12, 13]. To synthesize a two-criterion control, Hindi et al. [14] used an approach whereby one obtains finite-dimensional \(Q\)-approximations to Pareto optimal controllers. At the same time, the question of how much the values of the individual criteria for the multicriteria control laws synthesized under additional constraints or with the use of approximations exceed the corresponding values for the Pareto optimal control laws remains unanswered.

The results obtained in the present paper, in a sense, shed light on this problem. Namely, we show how to estimate the boundaries of a criterion space domain that contains the points corresponding to the minima of the linear convolution of the criteria and hence belonging to the Pareto set in multicriteria minimax problems, which, in particular, include problems with criteria like the generalized \(H_{\infty }\)-norm. In the process, we find Pareto suboptimal solutions corresponding to one of the resulting boundaries and estimate how suboptimal they are. In two-criteria problems, these boundaries are an upper and a lower curve between which the desired set lies. The presence of these boundaries permits one to estimate how much the individual criteria values produced by multicriteria control laws with additional assumptions and constraints differ from the criteria values for the Pareto optimal controls. Further in the article, we obtain characterizations of the generalized \(H_{\infty } \)-norm in terms of linear matrix inequalities for linear continuous and discrete time-varying systems on a finite horizon and time-invariant systems on an infinite horizon and design Pareto suboptimal controls in multicriteria problems with generalized \( H_{\infty }\)-norms as criteria. Illustrative examples are given in which suboptimal solutions are found for various two-criteria control problems, and the boundaries of a domain containing points of the Pareto set are constructed.

2. TWO-SIDED BOUNDARIES OF A DOMAIN CONTAINING A SUBSET OF THE PARETO FRONT

The problem is to find Pareto optimal solutions of a multicriteria minimization problem with criteria \( J_i(\Theta )\), \(i=1, \ldots , N \), each of which is the maximum of some nonnegative function \( F_i(\Theta , \omega ) \geqslant 0\) with respect to some variables \( \omega \) in the set \(\Omega \); i.e.,

$$ J_i(\Theta )=\sup _{\omega \in \Omega } F_i(\Theta , \omega ), \quad i=1, \ldots , N.$$
(2.1)

Recall that a solution \(\Theta _P\) is said to be Pareto optimal if there does not exist a solution \(\Theta \) such that the inequalities \(J_i(\Theta ) \leqslant J_i(\Theta _P)\), \(i=1,\ldots , N \), hold with at least one of them being strict. The set of points corresponding to all such solutions in the \(N \)-dimensional criteria space is called the Pareto set,

$$ {\cal P}=\Big \{J(\Theta _P)=\big (J_1(\Theta _P), \ldots , J_N(\Theta _P)\big ) \Big \}.$$

Pareto optimal solutions are unimprovable in the sense that for none of them does there exist a solution for which all the criteria are not greater and at least one criterion is strictly smaller.

The most common method for finding Pareto optimal solutions of a multicriteria problem is the scalarization method, whereby one selects a single criterion, say, in the form of a linear convolution of the original criteria,

$$ J_\alpha (\Theta )=\sum \limits _{i=1}^N \alpha _i J_i(\Theta ), \quad \alpha \in {\cal S}=\left \{\alpha =(\alpha _1, \ldots , \alpha _N): \thinspace \alpha _i > 0, \thinspace \sum \limits _{i=1}^N \alpha _i =1\right \}.$$

Let us refer to the linear convolution \(J_\alpha (\Theta )\) as the optimal objective function. It is well known [15] that the parameter values \(\Theta _\alpha \) such that

$$ \min \limits _{\Theta } J_\alpha (\Theta )=J_\alpha (\Theta _\alpha )=\mu (\alpha ) $$

are Pareto optimal solutions of the multicriteria problem. Denote the corresponding set of points in the criteria space by

$$ {\cal P_L}=\Big \{J(\Theta _\alpha )=\big (J_1(\Theta _\alpha ), \ldots , J_N(\Theta _\alpha )\big ) \quad \forall \thinspace \alpha \in {\cal S} \Big \}.$$

In the general case, these points may not exhaust the entire Pareto set; i.e., \({\cal P_L} \subseteq {\cal P} \).

Finding the parameters \(\Theta _\alpha \) directly for multicriteria minimax problems encounters difficulties, because the optimal objective function for such problems turns out to be a linear combination of the maxima of various functions. In this connection, let us estimate it from below by replacing the sum of maxima with the maximum of the sum,

$$ J_\alpha (\Theta ) = \sum \limits _{i=1}^N \alpha _i \sup \limits _{\omega \in \Omega } F_i(\Theta , \omega ) \geqslant \sup \limits _{\omega \in \Omega } \sum \limits _{i=1}^N \alpha _i F_i(\Theta , \omega ) = \sup \limits _{\omega \in \Omega } F_\alpha (\Theta , \omega ) = \widehat J_\alpha (\Theta ).$$
(2.2)

We refer to \(\widehat J_\alpha (\Theta )=\sup _{\omega \in \Omega }\sum _{i=1}^N \alpha _iF_i(\Theta , \omega ) \) as the suboptimal objective function and to the parameters \( \widehat \Theta _\alpha \) optimal with respect to \(\widehat J_\alpha (\Theta )\) for which

$$ \min \limits _{\Theta } \widehat J_\alpha (\Theta )=\widehat J_\alpha (\widehat {\Theta }_\alpha )=\mu _-(\alpha ) $$
(2.3)

as the Pareto suboptimal solutions of the multicriteria problem. Let us show that for the problems in question we can indicate the boundaries of a criteria space domain that contains the subset \({\cal P}_L\) and thereby estimate how suboptimal the solutions \(\widehat {\Theta }_\alpha \) are.
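As a quick numerical illustration of the bound (2.2), the following sketch compares the optimal and suboptimal objective functions on a discretized uncertainty set; the functions \(F_1 \) and \(F_2 \) are hypothetical stand-ins chosen only to make the gap visible.

```python
import numpy as np

# Hypothetical criteria F_i(theta, omega); any nonnegative functions will do.
def F1(theta, omega):
    return (theta - omega) ** 2

def F2(theta, omega):
    return (theta + omega) ** 2

omega = np.linspace(-1.0, 1.0, 2001)  # discretized uncertainty set Omega
alpha, theta = 0.6, 0.3

# Optimal objective: linear convolution of the suprema.
J_alpha = alpha * F1(theta, omega).max() + (1 - alpha) * F2(theta, omega).max()
# Suboptimal objective (2.2): supremum of the convolution.
J_hat = (alpha * F1(theta, omega) + (1 - alpha) * F2(theta, omega)).max()

print(J_alpha, J_hat)  # J_alpha >= J_hat always; here 1.69 > 1.21
```

Here the two suprema are attained at different points of \(\Omega \), which is exactly what makes the inequality in (2.2) strict.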

Fig. 1. Construction of the domain \( \Sigma _0\) containing points of the Pareto set.

The equality

$$ J_\alpha (\Theta _\alpha )=\sum \limits _{i=1}^N \alpha _i J_i(\Theta _\alpha )=\mu (\alpha )$$

means that the point \(J(\Theta _\alpha ) \) in the criteria space belongs to the hyperplane \(\Pi _\alpha \) with normal vector \(n_\alpha ^{\rm T}=(\alpha _1, \ldots , \alpha _N)\) and with the equation \(n_\alpha ^{\rm T} J=\mu (\alpha )\) (see Fig. 1a). This plane lies at the distance \(d_\alpha =|n_\alpha |^{-1} \mu (\alpha )\) from the origin. Since

$$ \mu _+(\alpha )=\sum \limits _{i=1}^N \alpha _i J_i(\widehat \Theta _\alpha ) \geqslant \sum \limits _{i=1}^N \alpha _i J_i(\Theta _\alpha ) = \mu (\alpha ),$$
(2.4)

it follows that the point \(J(\widehat \Theta _\alpha )\) corresponding to the Pareto suboptimal solutions \(\widehat \Theta _\alpha \) belongs to the hyperplane \(\Pi _\alpha ^+ \) whose equation is \(n_\alpha ^{\rm T} J\!=\!\mu _+(\alpha )\). This plane lies at the distance \(d_\alpha ^+\!=\!|n_\alpha |^{-1} \mu _+(\alpha )\!\geqslant \!d_\alpha \) from the origin.

The inequalities

$$ \mu (\alpha )=\sum \limits _{i=1}^N \alpha _i J_i(\Theta _\alpha ) \geqslant \widehat J_\alpha (\Theta _\alpha ) \geqslant \widehat J_\alpha (\widehat {\Theta }_\alpha )= \mu _-(\alpha ) $$
(2.5)

imply that the distance from the point \(J(\Theta _\alpha ) \) to the origin is not less than the distance \(d_\alpha ^{-}=|n_\alpha |^{-1} \mu _-(\alpha )\) from the plane \(\Pi _\alpha ^{-}\) with the equation \(n_\alpha ^{\rm T} J=\mu _-(\alpha )\) to the origin, \(d_\alpha \geqslant d_\alpha ^{-}\). Thus, the point \(J(\Theta _\alpha ) \in \Pi _\alpha \) lies between the two parallel hyperplanes \(\Pi _\alpha ^{-}\) and \(\Pi _\alpha ^+ \).

Let us define the following sets in the criteria space for each tuple \(\alpha \in {\cal S}\):

$$ \begin {gathered} \begin {aligned} \Sigma _\alpha ^{-} &= \bigg \{ (J_1, \ldots , J_N) : \sum \limits _{i=1}^N \alpha _iJ_i < \widehat J_\alpha (\widehat \Theta _\alpha ), \quad J_i \geqslant 0, \quad i = 1, \ldots , N \bigg \} , \\ \Sigma _\alpha ^+ &= \bigg \{ (J_1, \ldots , J_N) : \sum \limits _{i=1}^N \alpha _iJ_i \leqslant \sum \limits _{i=1}^N \alpha _i J_i(\widehat \Theta _\alpha ), \quad J_i \geqslant 0, \quad i = 1, \ldots , N \bigg \} , \end {aligned} \\ \Sigma _-=\bigcup \limits _{\alpha \in {\cal S}} \Sigma _\alpha ^{-}, \quad \Sigma _+=\bigcup \limits _{\alpha \in {\cal S}} \Sigma _\alpha ^+, \quad \Sigma _0=\Sigma _+ \backslash \Sigma _-. \end {gathered} $$
(2.6)

Let us prove that the points \(J(\Theta _\alpha ) \in {\cal P_L} \subseteq {\cal P}\) belong to the set \(\Sigma _0 \) (see Fig. 1b).

Indeed, since

$$ \sum \limits _{i=1}^N \alpha _i J_i(\Theta _\alpha ) \leqslant \sum \limits _{i=1}^N \alpha _i J_i(\widehat \Theta _\alpha ) = \mu _+(\alpha ),$$

we have \(J(\Theta _\alpha ) \in \Sigma _+\). Let us show that \(J(\Theta _{\widehat \alpha }) \not \in \Sigma _-\) for an arbitrary tuple \(\widehat \alpha \in {\cal S}\); i.e., \(J(\Theta _{\widehat \alpha }) \not \in \Sigma _\alpha ^{-}\) for any \(\alpha \in {\cal S}\). It follows from (2.5) that \(J(\Theta _{\widehat \alpha }) \not \in \Sigma _{\widehat \alpha }^{-}\). Assume that \(J(\Theta _{\widehat \alpha }) \in \Sigma _{\alpha }^{-}\) for some \(\alpha \not =\widehat \alpha \); i.e., \(\sum _{i=1}^N \alpha _i J_i(\Theta _{\widehat \alpha }) <\mu _-(\alpha ) \). Since \(\widehat J_{\alpha } (\widehat \Theta _{\alpha }) \leqslant \widehat J_{\alpha }(\Theta _{\alpha }) \leqslant J_{\alpha } (\Theta _{\alpha })=\mu (\alpha ) \), we have

$$ \sum \limits _{i=1}^N \alpha _i J_i(\Theta _{\widehat \alpha })< \sum \limits _{i=1}^N \alpha _i J_i(\Theta _{\alpha });$$

i.e., \(J_{\alpha } (\Theta _{\widehat \alpha }) <J_{\alpha } (\Theta _{\alpha })\). This contradicts the fact that \(\Theta _\alpha \) minimizes the linear convolution \(J_\alpha (\Theta )\). Thus, \(J(\Theta _\alpha ) \in \Sigma _0\) for every \(\alpha \in {\cal S}\), and we can state the following assertion.

Theorem 2.1.

The set \({\cal P}_L\subseteq {\cal P} \) of points in the criteria space that correspond to Pareto optimal parameters \(\Theta _\alpha \) minimizing the optimal objective function \(J_\alpha (\Theta )=\sum _{i=1}^N \alpha _i \sup _{\omega \in \Omega } F_i(\Theta , \omega ) \) is contained in the set \( \Sigma _0\) defined in (2.2), (2.3), and (2.6).

In two-criteria problems, the lower and upper boundaries of a domain containing the subset \({\cal P}_L \) of the space of criteria \((J_1, J_2) \) are the envelopes of the family of straight lines

$$ \begin {gathered} \alpha J_1+(1-\alpha )J_2=\widehat J_\alpha (\widehat {\Theta }_\alpha ), \\ \alpha J_1+(1-\alpha )J_2= \alpha J_1(\widehat \Theta _\alpha )+(1-\alpha )J_2(\widehat \Theta _\alpha ) \quad \forall \thinspace \alpha \in (0, 1). \end {gathered}$$

Note that the straight lines corresponding to the limiting values \(\alpha =1\) and \(\alpha =0 \) are \(J_1=\min _\Theta J_1(\Theta ) \) and \(J_2=\min _\Theta J_2(\Theta ) \), respectively.
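The two boundary curves can be traced numerically by the classical envelope construction: eliminate \(\alpha \) between the line equation and its derivative with respect to \(\alpha \). A minimal sketch, assuming a generic boundary value function \(c(\alpha ) \); the choice \(c(\alpha )=\alpha (1-\alpha ) \) below is the boundary arising in the first-order example of Sec. 5, for which the envelope is known in closed form:

```python
import numpy as np

# Trace the envelope of the family of lines
#   alpha*J1 + (1 - alpha)*J2 = c(alpha),  alpha in (0, 1),
# by eliminating alpha between the line equation and its alpha-derivative:
#   J1 - J2 = c'(alpha)  =>  J1 = c + (1 - alpha)*c',  J2 = c - alpha*c'.
def envelope(c, alphas, h=1e-6):
    dc = (c(alphas + h) - c(alphas - h)) / (2.0 * h)  # numerical c'(alpha)
    J1 = c(alphas) + (1.0 - alphas) * dc
    J2 = c(alphas) - alphas * dc
    return J1, J2

# Illustrative boundary c(alpha) = alpha*(1 - alpha); the envelope point for
# each alpha is then J1 = (1 - alpha)^2, J2 = alpha^2.
alphas = np.linspace(0.01, 0.99, 99)
J1, J2 = envelope(lambda a: a * (1.0 - a), alphas)
```

Applying this routine separately to \(\mu _-(\alpha ) \) and to \(\mu _+(\alpha ) \) yields the lower and upper boundaries of the domain containing \({\cal P}_L \).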

To give a quantitative estimate of how close the values of the functionals on the suboptimal solutions \(\widehat {\Theta }_\alpha \) and on the unknown optimal solutions \(\Theta _\alpha \) are, we introduce the suboptimality exponent

$$ \eta =\max \limits _{\alpha \in {\cal S}} \frac {d_\alpha ^+ - d_\alpha ^{-}}{d_\alpha ^+}= \max \limits _{\alpha \in {\cal S}} \frac {\mu _+(\alpha ) - \mu _-(\alpha )}{\mu _+(\alpha )}$$

determined by the maximum relative “distance” between the boundaries of the set \(\Sigma _0\). The closer \(\eta \) is to zero, the more accurate the estimate of the Pareto set and the closer the values of the respective criteria on the suboptimal and optimal solutions.
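In practice, \(\eta \) is evaluated on a grid of weights. A minimal sketch, with hypothetical boundary values \(\mu _{\pm }(\alpha ) \) standing in for the outcomes of the optimizations in (2.3) and (2.4):

```python
import numpy as np

# Hypothetical boundary values; in practice mu_minus comes from minimizing the
# suboptimal objective (2.3) and mu_plus from substituting the suboptimal
# parameters into the linear convolution, as in (2.4).
def mu_minus(a):
    return a * (1.0 - a)

def mu_plus(a):
    return 1.2 * a * (1.0 - a)

alphas = np.linspace(0.01, 0.99, 99)
eta = np.max((mu_plus(alphas) - mu_minus(alphas)) / mu_plus(alphas))
print(eta)  # here the relative gap is 1/6 for every alpha
```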

Note that the analysis presented in this section and the result obtained can be carried over without any changes to the case where the criteria are functionals and the “variables” \(\omega \) and solutions \(\Theta \) are functions.

3. GENERALIZED \(H_{\infty }\)-NORM

This section discusses system characteristics that will be further selected as individual criteria in control synthesis. Let a linear time-varying system be described by the equations

$$ \begin {aligned} \partial x&=A(t)x(t)+B(t)v(t), \quad x(t_0)=x_0,\\ z(t)&=C(t)x(t)+D(t)v(t), \quad t\in [t_0, \thinspace t_f], \end {aligned}$$
(3.1)

where \(\partial \) denotes the operator of differentiation for a continuous-time system or the shift operator \(\partial x(t)=x(t+1) \) for a discrete-time system, \(x \in {\rm R}^{n_x} \) is the state, \(v \in {\rm R}^{n_v} \) is the disturbance, and \(z \in {\rm R}^{n_z} \) is the target output. The generalized \(H_{\infty } \)-norm of system (3.1) from input \(v\) to output \(z \) on a finite interval \([t_0, t_f] \) with an uncertain initial state for given weight matrices \(R=R^{\rm T} > 0\) of the initial state and \(S=S^{\rm T} \geqslant 0 \) of the terminal state is defined as the square root of the maximum ratio of the integral output index, augmented by a quadratic form of the terminal state, to the sum of a quadratic form of the initial state and the squared \(L_2 \)-norm (for a continuous-time system) or \(l_2 \)-norm (for a discrete-time system) of the disturbance; i.e.,

$$ \gamma _{\infty ,\thinspace 0} = \displaystyle \sup \limits _{x_0, \thinspace v}\left (\frac {\|z\|^2_{[t_0, \thinspace t_f]}+x^{\rm T}(t_f) S x(t_f)}{x^{\rm T}_0 R^{-1} x_0+\|v\|_{[t_0, \thinspace t_f]}^2}\right )^{1/2},$$
(3.2)

where the supremum is taken over all initial states \( x(t_0)=x_0\) and all \(v \in L_2 \) or \(v \in l_2 \) not vanishing simultaneously. Here, for continuous- and discrete-time systems, respectively, we use the notation

$$ \displaystyle \|\xi \|^2_{[t_0, \thinspace t_f]}=\int \limits _{t_0}^{t_f} \big |\xi (t)\big |^2 dt, \quad \|\xi \|^2_{[t_0, \thinspace t_f]}=\sum _{t=t_0}^{t_f-1} \big |\xi (t)\big |^2. $$

For \(S=0\), the terminal state is not taken into account in the generalized \(H_{\infty }\)-norm.

This characteristic, introduced in [16] under the name “\(H_{\infty } \)-norm with transients” for continuous-time systems and in [17] for discrete-time systems, includes many criteria applied in control synthesis as special cases. For example, in the absence of exogenous disturbances, i.e., for \(v(t) \equiv 0\), this indicator becomes the \(\gamma _0\)-norm

$$ \gamma _0 = \sup \limits _{x_0 \neq 0} \displaystyle \frac {\|z\|_{[t_0, \thinspace t_f]}}{\left (x_0^{\rm T} R^{-1} x_0\right )^{1/2}}$$

and characterizes the “worst-case” value of a quadratic functional on system trajectories with initial state belonging to the ellipsoid \(x^{\rm T} R^{-1}x \leqslant 1 \). When the initial state is zero, the generalized \(H_{\infty } \)-norm becomes the standard \(H_{\infty } \)-norm

$$ \gamma _{\infty } = \sup \limits _{v\neq 0} \displaystyle \frac {\left (\|z\|_{[t_0, \thinspace t_f]}^2+x^{\rm T}(t_f) S x(t_f)\right )^{1/2}}{\|v\|_{[t_0, \thinspace t_f]}}.$$

In the special case of \( C(t) \equiv 0\) and \(D(t) \equiv 0 \) in (3.1), the generalized \(H_{\infty } \)-norm characterizes the maximum deviation of the output \(S^{1/2}x(t_f) \) at the final time, defined as

$$ \gamma _{v,\thinspace 0}= \displaystyle \sup \limits _{x_0, \thinspace v}\frac {\big |S^{1/2}x(t_f)\big |}{\left (x^{\rm T}_0 R^{-1} x_0+\|v\|_{[t_0, \thinspace t_f]}^2\right )^{1/2}}. $$
(3.3)

All these indicators are the induced norms of the corresponding linear operators generated by the system and taking the initial state and/or disturbance to the output and/or the terminal state with appropriate inner products on the linear spaces.

It was established in [16] that to find the generalized \(H_{\infty }\)-norm of a linear continuous time-varying system on a finite horizon, it is required to find a solution of a matrix Riccati differential equation with certain initial and final conditions. The following theorem, the proof of which is given in the Appendix, shows that the generalized \(H_{\infty } \)-norm on a finite horizon in the continuous- and discrete-time cases can be calculated as a solution of the optimization problem for a linear function under constraints given by differential or difference linear matrix inequalities.

Theorem 3.1.

Let the inequality

$$ \gamma ^2 I-D^{\rm T}(t)D(t)>0 \quad \forall t\in [t_0,t_f]$$
(3.4)

be satisfied. The generalized \(H_{\infty } \)-norm of system (3.1) satisfies the inequality \(\gamma _{\infty , 0} < \gamma \) if and only if the differential linear matrix inequalities

$$ \left ( \begin {array}{ccc} -\dot Y(t)+Y(t)A^{\rm T}(t)+A(t)Y(t) & * & * \\[.6em] B^{\rm T}(t) & - I & * \\[.6em] C(t)Y(t) & D(t) & -\gamma ^2 I \end {array} \right ) \leqslant 0, \quad t \in [t_0, \thinspace t_f], $$
(3.5)

for a continuous-time system or the linear matrix inequalities

$$ \left ( \begin {array}{cccc} -Y(t+1) & * & *& * \\ Y(t)A^{\rm T}(t) & -Y(t) & * & * \\ B^{\rm T}(t) & 0 & - I & * \\ 0 & C(t)Y(t) & D(t) & -\gamma ^2 I \end {array} \right ) \leqslant 0, \quad t=t_0, \ldots , t_f-1,$$
(3.6)

for a discrete-time system, as well as the relation

$$ Y(t_0)=R$$
(3.7)

and the linear matrix inequality

$$ \left ( \begin {array}{cc} Y(t_f) & * \\[.6em] S^{1/2}Y(t_f) & \gamma ^2 I \end {array} \right ) > 0, $$
(3.8)

have solutions for the unknowns \(Y(t)>0\) and \(\gamma ^2\).

To calculate the generalized \(H_{\infty }\)-norm, we first discretize the corresponding differential linear matrix inequality and then solve a standard semidefinite programming problem.

For a stable time-invariant plant of the form (3.1), where \(A(t) = A\), \(B(t)= B \), \(C(t) = C \), and \(D(t) = D \) are given constant matrices and all eigenvalues of \(A \) lie strictly to the left of the imaginary axis for a continuous-time system and strictly inside the unit disk of the complex plane for a discrete-time system, the generalized \(H_{\infty }\)-norm, the standard \( H_{\infty }\)-norm (i.e., under the zero initial conditions), and the \(\gamma _0\)-norm are defined on the infinite horizon as

$$ \gamma _{\infty ,\thinspace 0}^s = \sup \limits _{x_0, \thinspace v} \displaystyle \frac {\|z\|_{[0, \infty )}}{\left (x_0^{\rm T} R^{-1} x_0+\|v\|^2_{[0, \infty )}\right )^{1/2}}, \quad \gamma _{\infty }^s = \sup \limits _{v\neq 0} \displaystyle \frac {\|z\|_{[0, \infty )}}{\|v\|_{[0, \infty )}},\quad \gamma _0^s = \sup \limits _{x_0 \neq 0} \displaystyle \frac {\|z\|_{[0, \infty )}}{\left (x_0^{\rm T} R^{-1} x_0\right )^{1/2}}, $$

where the superscript \(s \) indicates that the system is time-invariant.

Theorem 3.2.

The generalized \( H_{\infty }\)-norm of a stable time-invariant system (3.1) on an infinite horizon satisfies the inequality \(\gamma _{\infty , 0}^s<\gamma \) if and only if the linear matrix inequalities

$$ \left ( \begin {array}{ccc} YA^{\rm T}+AY & * & * \\ B^{\rm T} & -I & * \\ CY & D & -\gamma ^2 I \end {array} \right ) < 0, \quad Y>R$$
(3.9)

for the continuous-time system [18] or

$$ \left ( \begin {array}{cccc} -Y & * & *& * \\[.3em] YA^{\rm T} & -Y & * & * \\ B^{\rm T} & 0 & - I & * \\ 0 & CY & D & - \gamma ^2 I \end {array} \right ) < 0, \quad Y>R$$
(3.10)

for the discrete-time system [17] have solutions for the unknowns \(Y \) and \( \gamma ^2\).

In contrast to Theorem 3.1, the linear matrix inequalities in Theorem 3.2 are strict, since on the infinite horizon it is additionally required to ensure the asymptotic stability of the system under the worst-case disturbance. It follows from Theorem 3.2 that the level of rejection of the initial disturbance satisfies the inequality \( \gamma _0^s<\gamma \) if and only if, for the case of a continuous-time system, inequalities (3.9) with the second block row and column deleted from the first inequality are solvable or, for the case of a discrete-time system, inequalities (3.10) with the third block row and column deleted from the first inequality are solvable. In turn, the \(H_{\infty } \)-norm of the transfer matrix from \(v \) to \(z \) is less than \(\gamma \) if and only if the first inequality in (3.9) is solvable for the case of a continuous-time system or the first inequality in (3.10) is solvable for the case of a discrete-time system. Each of the indicators is found by minimizing \(\gamma ^2 \) on the set defined by the corresponding linear matrix inequalities.

4. PARETO SUBOPTIMAL CONTROL DESIGN IN PROBLEMS WITH GENERALIZED \(H_{\infty }\)-NORMS

Consider the multicriteria control problem for a system

$$ \begin {aligned} \partial x&=A(t)x(t)+B_v(t)v(t)+ B_u(t) u(t), \\ z_i(t)&=C_i(t)x(t)+D_{v \thinspace i} (t) v(t)+D_{u \thinspace i}(t) u(t), \quad i=1, \ldots , N, \end {aligned}$$
(4.1)

with \(N \) target outputs and with the feedback \(u=\Theta (t)x \), where the criteria are the squared generalized \(H_{\infty } \)-norms of the outputs \(z_i \),

$$ J_i[\Theta (t)]=\displaystyle \sup \limits _{x_0, v}\frac {\|z_i\|^2_{[t_0, t_f]}+x^{\rm T}(t_f) S_i x(t_f)}{x^{\rm T}_0 R^{-1} x_0+\|v\|_{[t_0, t_f]}^2}, \quad i=1, \ldots , N,$$

viewed as functionals of the matrix-valued functions \(\Theta (t) \), \(t \in [t_0, t_f] \). For this problem, the suboptimal objective functional has the form

$$ \widehat J_\alpha [\Theta (t)]=\displaystyle \sup \limits _{x_0, \thinspace v}\frac {\| z_\alpha \|^2_{[t_0, \thinspace t_f]}+x^{\rm T}(t_f) S_\alpha x(t_f)}{x^{\rm T}_0 R^{-1} x_0+\|v\|_{[t_0, \thinspace t_f]}^2}, $$

where

$$ \begin {gathered} z_\alpha (t)=\big [C_\alpha (t)+D_{u \thinspace \alpha }(t) \Theta (t)\big ] x(t)+D_{v \thinspace \alpha }(t) v(t), \quad S_\alpha =\sum \limits _{i=1}^N \alpha _i S_i,\\ C_\alpha (t)=\left ( \begin {array}{c} \alpha _1^{1/2}C_1(t)\\ \vdots \\ \alpha _N^{1/2}C_N(t) \end {array} \right ), \quad D_{u \thinspace \alpha }(t)=\left ( \begin {array}{c} \alpha _1^{1/2}D_{u \thinspace 1}(t)\\ \vdots \\ \alpha _N^{1/2}D_{u \thinspace N}(t) \end {array} \right ), \\ D_{v \thinspace \alpha }(t)=\left ( \begin {array}{c} \alpha _1^{1/2}D_{v \thinspace 1}(t)\\ \vdots \\ \alpha _N^{1/2}D_{v \thinspace N}(t) \end {array} \right ). \end {gathered}$$
(4.2)

This means that \( \widehat J_\alpha [\Theta (t)]\) is a generalized \(H_{\infty } \)-norm of the combined output \(z_\alpha \) of system (4.1) with terminal state matrix \(S_\alpha \), and the Pareto suboptimal solutions are the generalized \( H_{\infty }\)-optimal controls with respect to this norm for all \( \alpha \in {\cal S}\). The matrices of parameters of these control laws are calculated as \(\widehat \Theta _\alpha (t)=Z(t) Y^{-1}(t) \) when solving the linear matrix inequalities obtained from (3.5) and (3.8) for a continuous-time system (or from (3.6) and (3.8) for a discrete-time system) with the initial condition \(Y(t_0)=R \) and with the terminal state matrix \(S_\alpha \) after the appropriate matrices have been replaced with the matrices \(A(t)+B_u (t)\Theta (t) \), \(B_v(t) \), \(C_\alpha (t)+D_{u \thinspace \alpha } (t) \Theta (t)\), and \(D_{v \thinspace \alpha }(t) \) and the auxiliary variables \(Z(t)=\Theta (t)Y(t) \) have been introduced. Note that to calculate the matrices of parameters of the feedback \(\Theta (t)\) approximately in continuous-time systems, the differential linear matrix inequality is preliminarily discretized.

In the special case of a system with zero initial conditions under criteria in the form of standard \( H_{\infty }\)-norms and maximum deviations, the Pareto suboptimal controls are found by solving the linear matrix inequalities (3.5) and (3.8) (or (3.6) and (3.8)) with the appropriate system matrices and with \(Y(t_0)=0 \). In another special case with no disturbances, i.e., with \(v(t)\equiv 0\), the Pareto suboptimal controls in a multicriteria problem with criteria of the form of \(\gamma _0\)-norms and maximum deviations of outputs at the final time under uncertain initial conditions are found by solving the linear matrix inequalities obtained from (3.5) and (3.8) (or (3.6) and (3.8)) with the appropriate system matrices for \(B_v(t)\equiv 0 \) and \(Y(t_0)=R \).

When all the criteria are the maximum deviations of the \(N \) outputs \(z_i(t_f) \) of system (4.1) at the final time for \(D_{v \thinspace i} (t) \equiv 0\), \(i=1, \ldots , N \), it is possible to synthesize Pareto optimal controls. Indeed, in this case we select the Germeier convolution [15] as the unified criterion,

$$ J^{G}_\alpha [\Theta (t)]=\max \limits _{j=1, \ldots , N} \Big \{\alpha _j^{-1} J_j\big [\Theta (t)\big ]\Big \} \quad \forall \thinspace \alpha _j >0.$$

We represent it in the form

$$ \begin {aligned} J^{G}_\alpha \big [\Theta (t)\big ]&=\displaystyle \max \limits _{j=1, \ldots , N}\alpha _j^{-1}\sup \limits _{x_0, \thinspace v}\thinspace \frac {\big |z_j(t_f)\big |^2}{x^{\rm T}_0 R^{-1} x_0+\|v\|_{[t_0, \thinspace t_f]}^2} \\ &=\displaystyle \sup \limits _{x_0, \thinspace v}\thinspace \frac {\max \limits _{j=1, \ldots , N}\big |\alpha _j^{-1/2}z_j(t_f)\big |^2}{x^{\rm T}_0 R^{-1} x_0+\|v\|_{[t_0, \thinspace t_f]}^2}=\sup \limits _{x_0, \thinspace v}\thinspace \frac {\big |z_\alpha (t_f)\big |^2_{g \thinspace \infty }}{x^{\rm T}_0 R^{-1} x_0+\|v\|_{[t_0, \thinspace t_f]}^2}, \end {aligned} $$

where

$$ z_\alpha =\mathrm {col} \thinspace (\alpha _1^{-1/2}z_1, \ldots , \alpha _N^{-1/2}z_N), \quad |z_\alpha |_{g \thinspace \infty }=\max \limits _{j=1, \ldots , N} |\alpha _j^{-1/2} z_j|. $$

Thus, the Germeier convolution for this problem is the squared maximum deviation of the combined output \(z_\alpha \) consisting of the weighted outputs of system (4.1) at the final time, where the maximum deviation of the combined vector is defined as the maximum of Euclidean norms of its constituent vectors. Consequently, the matrices \(\Theta _\alpha (t)\) optimal with respect to \(J^{G}_\alpha [\Theta (t)] \), which are Pareto optimal solutions of the multicriteria problem in question, can be found as \({\Theta _\alpha (t)=Z(t)Y^{-1}(t)} \) by solving the problem \(\inf \gamma ^2 \) under constraints in the form of the differential matrix inequalities

$$ -\dot Y(t) + Y(t)A^{\rm T}(t) + A(t)Y(t) + Z^{\rm T}(t)B_u^{\rm T}(t) + B_u(t)Z(t) + B_v(t)B_v^{\rm T}(t) \leqslant 0 $$
(4.3)

with \(t \in [t_0, \thinspace t_f]\) for a continuous-time system or

$$ \left ( \begin {array}{cc} -Y(t+1) +B_v(t)B_v^{\rm T}(t) & * \\[.3em] Y(t)A^{\rm T}(t)+Z^{\rm T}(t) B_u^{\rm T} (t) & -Y(t) \end {array} \right ) \leqslant 0$$
(4.4)

with \(t=t_0, \ldots , t_f-1 \) for a discrete-time system, and also

$$ Y(t_0) = R, \quad \left ( \begin {array}{cc} Y(t_f) & * \\[.3em] C_i Y(t_f)+D_{u \thinspace i} Z(t_f) & \alpha _i\gamma ^2 I \end {array} \right ) > 0, \quad i=1, \ldots , N.$$
(4.5)

Now let us show how to synthesize Pareto suboptimal time-invariant feedbacks \(u=\Theta x \) in multicriteria optimization problems for time-invariant systems of the form (4.1) in which the criteria are generalized \(H_{\infty }\)-norms on the infinite horizon,

$$ J_i(\Theta )=\displaystyle \sup \limits _{x_0, \thinspace v}\thinspace \frac {\|z_i\|^2_{[0, \thinspace \infty )}}{x^{\rm T}_0 R^{-1} x_0+\|v\|_{[0, \thinspace \infty )}^2}, \quad i=1, \ldots , N.$$

In this case, the suboptimal objective function has the form

$$ \widehat J_\alpha (\Theta )=\displaystyle \sup \limits _{x_0, \thinspace v}\thinspace \frac {\| z_\alpha \|^2_{[0, \thinspace \infty )}}{x^{\rm T}_0 R^{-1} x_0+\|v\|_{[0, \thinspace \infty )}^2}, $$

where the combined output \(z_\alpha (t) \) is defined as in (4.2) for all time-invariant matrices. Pareto suboptimal solutions of this problem are optimal controls with respect to the generalized \(H_{\infty } \)-norm of the output \(z_\alpha \) of the time-invariant system (4.1) for all \(\alpha \in {\cal S}\). The parameters \( \widehat \Theta _\alpha \) of these control laws are determined when solving the linear matrix inequalities (3.9) or (3.10) in which the appropriate matrices should be replaced by \(A+B_u \Theta \), \(B_v \), \(C_\alpha +D_{u \thinspace \alpha } \Theta \), and \(D_{v \thinspace \alpha } \).

5. EXAMPLES

5.1. First-Order System

We start from a simple example for the first-order system

$$ \begin {gathered} \dot x=-x+v+u, \quad x(0)=0,\\[.02em] z_1=x, \quad z_2=u, \end {gathered}$$
(5.1)

which is considered analytically. Let us prescribe a control in the form \(u=-\theta x \) and choose the two criteria

$$ J_i(\theta ) = \sup \limits _{v\neq 0}\thinspace \displaystyle \frac {\|z_i\|_{[0, \thinspace \infty )}^2}{\|v\|_{[0, \thinspace \infty )}^2}, \quad i=1,2.$$
(5.2)

In this case, the suboptimal objective function \(\widehat J_\alpha (\theta ) \) is the squared \(H_{\infty } \)-norm of the transfer function \(H(s) \) of the closed-loop system from \(v \) to the combined output,

$$ z_{\alpha }(t)=\big (\alpha ^{1/2}, \thinspace -(1-\alpha )^{1/2}\theta \big )^{\rm T}x(t), \quad \alpha \in (0, \thinspace 1).$$

Since

$$ \displaystyle \big |H(j\omega )\big |^2=\frac {\alpha +(1-\alpha )\theta ^2}{\omega ^2+(1+\theta )^2},$$

we have

$$ \displaystyle \widehat J_\alpha (\theta )=\max \limits _{\omega \in [0, \infty )}\big |H(j\omega )\big |^2= \frac {\alpha +(1-\alpha )\theta ^2}{(1+\theta )^2}. $$

By minimizing \(\widehat J_\alpha (\theta ) \), we obtain

$$ \displaystyle \widehat \theta _{\alpha }= \frac {\alpha }{1-\alpha }, \quad \widehat J_\alpha (\widehat \theta _{\alpha })=\alpha (1-\alpha ).$$

According to (2.3) and (2.4), we have

$$ \mu _-(\alpha )= \mu _+(\alpha )=\alpha (1-\alpha ). $$

Thus, the lower and upper boundaries of the domain in which the desired set of solutions of the two-criterion problem is located coincide, and this set itself is defined as the envelope of the family of straight lines

$$ \alpha J_1+(1-\alpha )J_2=\alpha (1-\alpha )$$

on the plane \((J_1,J_2) \). Solving this elementary problem of finding the envelope of a family of straight lines for \(J_1\in [0,\thinspace 1] \) and \(J_2\in [0,\thinspace 1] \) yields a curve in the implicit form

$$ 2J_1+2J_2-(J_1-J_2)^2=1, $$

or, in explicit form,

$$ J_2=\left (\sqrt {J_1}-1\right )^2, \quad J_1 \in (0,\thinspace 1). $$

Finally, note that

$$ \displaystyle J_1(\theta )=\frac {1}{(1+\theta )^2}, \quad J_2(\theta )=\frac {\theta ^2}{(1+\theta )^2} $$

in this example, and hence \(\widehat J_\alpha (\theta )=J_\alpha (\theta )\); i.e., the bound (2.2) holds with equality, which is why the two boundaries coincide.
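The closed-form answers above are easy to confirm numerically; the sketch below evaluates \(\widehat J_\alpha (\theta ) \) by a frequency sweep of \(|H(j\omega )|^2 \) and minimizes it on a grid of gains:

```python
import numpy as np

alpha = 0.3
omega = np.linspace(0.0, 50.0, 5001)   # frequency grid (the peak is at omega = 0)
thetas = np.linspace(0.0, 3.0, 3001)   # grid of feedback gains

def J_hat(theta):
    # squared H-infinity norm of the closed loop via a frequency sweep
    return np.max((alpha + (1.0 - alpha) * theta**2)
                  / (omega**2 + (1.0 + theta)**2))

vals = np.array([J_hat(t) for t in thetas])
k = int(vals.argmin())
print(thetas[k], vals[k])  # ~ alpha/(1-alpha) = 3/7 and alpha*(1-alpha) = 0.21
```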

5.2. Vibration Insulation of an Elastic Object

Consider the mechanical system with two degrees of freedom shown in Fig. 2. It represents an elastic object modeled by two material points 2 and 3 connected by linear elastic and dissipative elements; this elastic object is connected, via the same linear elastic and dissipative elements and via the plant (hereinafter referred to as the vibration insulator), to another body 1, which simulates a movable base. The dynamics of this mechanical system (in dimensionless variables and parameters) are described by the differential equations

$$ \begin {aligned} \ddot {x}_1 &= -2\beta \dot {x}_1 + \beta \dot {x}_2 - 2x_1 + x_2 + v + u, &\quad x_1(0) &= x_{10},&\quad \dot {x}_1(0) &= x_{30},\\ \ddot {x}_2 &= -\beta (\dot {x}_2 - \dot {x}_1) - x_2 + x_1 + v, &\quad x_2(0) &= x_{20},&\quad \dot {x}_2(0) &= x_{40}, \end {aligned} $$

where \(x_1\) and \(x_2 \) are the coordinates of material points 2 and 3 relative to the movable base, \(u \) is the force the vibration insulator creates when deformed (i.e., when point 2 is displaced with respect to point 1), \(v \) is, up to sign, the acceleration of the base (material point 1), and \(\beta = 0{.}1 \) is a given positive damping parameter. The vibration insulation problem is to find a control \(u = \theta _1x_1 + \theta _2x_2 +\theta _3\dot {x}_1 + \theta _4\dot {x}_2 \) that simultaneously ensures the least possible deformation of the mechanical system and the minimum force counteracting the displacement of the elastic object relative to the base. To this end, consider the target outputs

$$ z_1 = (x_1,\thinspace x_2 - x_1)^{\rm T}, \quad z_2 = -x_1 - \beta \dot {x}_1 + u. $$

The generalized \({H}_\infty \)-norms with respect to the indicated outputs can be treated as the desired system characteristics. Using the inequalities in Theorem 3.2, we find the set \(\Sigma _0 \) (see Fig. 3). In this case, the suboptimality indicator is equal to \(\eta = 0{.}2768\). The point \(A(4 {.}256; 5 {.}582) \) corresponding to the value \(\alpha =0{.}64 \) is indicated at the upper boundary of the set \(\Sigma _0 \).
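As a quick numerical sanity check (an added sketch; the state ordering \((x_1,\thinspace x_2,\thinspace \dot x_1,\thinspace \dot x_2) \) and the assembly of the matrices are our reconstruction from the equations above), one can build the state-space model and confirm that the comparison gains reported below for point \(B \) stabilize the closed loop.

```python
import numpy as np

# State vector x = (x1, x2, x1', x2'); matrices assembled from the
# differential equations of the two-mass model with beta = 0.1.
beta = 0.1
A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-2.0, 1.0, -2 * beta, beta],
              [1.0, -1.0, beta, -beta]])
B_u = np.array([[0.0], [0.0], [1.0], [0.0]])  # insulator force enters the x1 equation
B_v = np.array([[0.0], [0.0], [1.0], [1.0]])  # base acceleration acts on both masses

# Target outputs z1 = (x1, x2 - x1)^T and z2 = -x1 - beta*x1' + u.
C1 = np.array([[1.0, 0.0, 0.0, 0.0],
               [-1.0, 1.0, 0.0, 0.0]])
C2 = np.array([[-1.0, 0.0, -beta, 0.0]])  # the "+u" term enters through D_u2 = 1

# Gains of the comparison controller (point B in the text).
Theta = np.array([[-0.472, 0.252, -1.745, -1.385]])
A_cl = A + B_u @ Theta

# All closed-loop eigenvalues should lie in the open left half-plane.
assert np.real(np.linalg.eigvals(A_cl)).max() < 0
print("closed-loop spectrum:", np.round(np.linalg.eigvals(A_cl), 3))
```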

Fig. 2. Diagram of an elastic object with a vibration insulator.

Fig. 3. Estimate of the Pareto set in the vibration insulation problem.

For comparison, we calculated controllers based on linear matrix inequalities characterizing each of the indicated generalized \(H_{\infty }\)-norms under the auxiliary assumption that the Lyapunov functions for these norms coincide. Namely, the feedback matrices were determined by solving the problem \(\inf J_2(\Theta )\) subject to the condition \(J_1(\Theta ) < \gamma ^2\) with parameter \(\gamma \). They were found as \(\widetilde \Theta _\gamma =Z_\gamma Y_\gamma ^{-1}\), where \(Y_\gamma \) and \(Z_\gamma \) solve the problem \(\inf \gamma ^2 \) under constraints given by a pair of linear matrix inequalities of the form (3.9) in which the matrices \(A \), \(B \), \(C \), and \(D \) are replaced by \(A+B_u \Theta \), \(B_v \), \(C_1+D_{u \thinspace 1}\Theta \), and \(D_{v \thinspace 1} \) in one inequality and by \(A+B_u \Theta \), \(B_v \), \(C_2+D_{u \thinspace 2} \Theta \), and \(D_{v \thinspace 2} \) in the other, with the substitution \(Z_\gamma =\Theta Y_\gamma \). The point \(B(4 {.}959; 5 {.}913) \) in Fig. 3 corresponds to one of the controllers thus found, with parameters \( \Theta =(-0 {.}472; 0 {.}252; -1 {.}745; -1 {.}385)^{\rm T} \).

5.3. Damping Parametric Vibrations of a Linear Oscillator

Consider the Mathieu equation

$$ \begin {aligned} \dot {x}_1 &= x_2,\\ \dot {x}_2 &= -\delta ^2(1 + \varepsilon \sin t)x_1 + u + v \end {aligned} $$
(5.3)

with parameters \(\delta = 0 {.}5 \) and \(\varepsilon = 0 {.}3 \). This equation describes parametric vibrations of a linear oscillator. On a time interval of duration \(T = 10 \), we define a uniform mesh with step \(h = 0 {.}05 \) and discretize system (5.3) by replacing the derivatives with finite-difference quotients. For the first criterion we take the generalized \({H}_\infty \)-norm of this system with the target output \(z=x_1+u\), i.e., with the matrices \(C_1 = (1 \thinspace \thinspace 0)\), \(D_1 = 1 \), and the terminal state weight matrix \(S_1 = 0 \). For the second criterion we take the maximum deviation of the vector \((1/2) \mathrm {col} \thinspace (x_1, x_2) \) at the final time, i.e., \(C_2 = (0 \thinspace \thinspace 0)\), \(D_2 = 0 \), and \(S_2 = 0 {.}25I \). Thus, the functionals have the form

$$ J_1\big [\Theta (t)\big ] = \sup \limits _{x_0,v}\thinspace \dfrac {\|z\|_{[0,T]}^2}{x_0^{\rm T} R^{-1} x_0 + \|v\|_{[0,T]}^2}, \quad J_2\big [\Theta (t)\big ] = \sup \limits _{x_0,v}\thinspace \dfrac {x_f^{\rm T} S_2 x_f}{x_0^{\rm T} R^{-1} x_0 + \|v\|_{[0,T]}^2}.$$
(5.4)
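The finite-difference discretization mentioned above can be sketched as follows (a minimal illustration; the forward-difference scheme is our assumption, and the scheme actually used may differ in details).

```python
import numpy as np

# Forward-difference discretization of the Mathieu system (5.3):
# x1[k+1] = x1[k] + h*x2[k],
# x2[k+1] = x2[k] + h*(-delta^2*(1 + eps*sin(t_k))*x1[k] + u + v).
delta, eps = 0.5, 0.3
T, h = 10.0, 0.05
N = int(round(T / h))          # number of mesh steps
t = h * np.arange(N)

def A_k(k):
    """Discrete-time state matrix at step k (time-varying)."""
    return np.array([[1.0, h],
                     [-h * delta ** 2 * (1 + eps * np.sin(t[k])), 1.0]])

B = np.array([[0.0], [h]])     # identical input matrices for u and v

# Free motion from x(0) = (1, 0) with u = v = 0: the parametric term
# makes the transition matrix change from step to step.
x = np.array([[1.0], [0.0]])
for k in range(N):
    x = A_k(k) @ x
print("state at T =", x.ravel())
```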

Fig. 4. Estimating the Pareto set in the parametric vibration damping problem.

In further numerical experiments, \( R=0{.}5I\). Let us apply the approach described in Sec. 2 to construct the boundaries of the domain \(\Sigma _0\) containing the Pareto set. In Fig. 4, the set \(\Sigma _0\) is shown in gray. It can be seen that this set turned out to be rather “narrow,” and the value of the suboptimality indicator \({\eta =0{.}125} \) confirms this conclusion. Note that the points of the upper boundary of the set \( \Sigma _0\) correspond to Pareto suboptimal solutions \( \widehat {\Theta }_\alpha \) found using the approach described in Sec. 4. In particular, for \( {\alpha =0{.}18} \) the feedback coefficients \( \widehat {\Theta }_\alpha \) were found and the values of the functionals were calculated for them. The point corresponding to these values in Fig. 4 is denoted by \(A\); its coordinates are \((0{.}898;\thinspace 0{.}249)\). The values of the functionals in the case where there is no control in system (5.3) are \({J_1=185{.}259} \) and \( {J_2=0{.}966} \). Thus, the use of control permits one to reduce the value of the first criterion by a factor of about 200. It is also of interest to compare these results with the simplest controller of the form \(u=-x_1\). One can readily see that \(J_1= 0\) in this case, because \(z=x_1+u=0\). Then \({J_2=0{.}750} \), which is approximately three times the value of \(J_2\) at point \(A\). In Fig. 4, the point with coordinates \((0;\thinspace 0{.}750)\) is denoted by \(B\).
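The improvement factors mentioned above can be recovered directly from the reported criterion values (a trivial arithmetic check; all numbers are copied from the text):

```python
# Criterion values quoted in the text.
J1_no_control, J1_point_A = 185.259, 0.898
J2_simple, J2_point_A = 0.750, 0.249

# Control reduces the first criterion by roughly two orders of magnitude,
print(round(J1_no_control / J1_point_A, 1))  # → 206.3
# while the simple controller u = -x1 roughly triples J2 relative to point A.
print(round(J2_simple / J2_point_A, 2))      # → 3.01
```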

Fig. 5. Graphs of the time dependences of the Pareto suboptimal feedback coefficients.

Figure 5 shows the graphs of the time dependences of the Pareto suboptimal feedback coefficients \(\widehat {\Theta }_\alpha ^{\rm T}(t)=(\widehat {\theta }_1(t)\thinspace \thinspace \widehat {\theta }_2(t)) \) corresponding to point \(A \). Note that the feedback coefficients remain practically constant for almost the entire operation time of the system. For the time-invariant controller with these “constant” values \({\theta _1(t)\equiv -0{.}13} \) and \(\theta _2(t)\equiv -1{.}0 \), the functionals equal \(J_1=0{.}900 \) and \(J_2=0{.}249 \); i.e., the losses in the criteria when using such a time-invariant controller rather than the time-varying suboptimal controller \(\widehat {\Theta }_\alpha (t)\) are insignificant.

6. CONCLUSIONS

Multicriteria minimax optimization problems are considered in which the criteria are the maxima of certain functionals. It is shown that minimizing a unified criterion in the form of the maximum of a linear convolution of functionals (instead of a linear convolution of maxima) yields Pareto suboptimal solutions, and their losses relative to the minimum values are estimated. In the criteria space, we construct the boundaries of a domain containing the Pareto points at which the linear convolution of the criteria attains its minimum values. Thus, it becomes possible to compare the values of the individual criteria for arbitrarily chosen solutions of multicriteria problems with those for Pareto optimal solutions. Combined with the apparatus of linear matrix inequalities, this approach is used to solve new multicriteria linear-quadratic control problems under uncertain initial conditions and disturbances with criteria in the form of the generalized \(H_{\infty }\)-norm or the \(\gamma _0 \)-norm for continuous- and discrete-time time-varying systems on a finite horizon and time-invariant systems on an infinite horizon. Examples of two-criterion control problems are given in which Pareto suboptimal solutions are found and domains containing points of the Pareto set are constructed.