
Introduction

Various linear control problems can be formulated in terms of the interconnection shown in Fig. 1; e.g., see Francis and Doyle (1987), Boyd and Barratt (1991), and Zhou et al. (1996). The linear system K is a controller (with input y and output u) to be designed for the generalized plant model G. The latter is constructed so that controller performance (i.e., the quality of K relative to specifications) can be quantified as a nonnegative functional of

$$\displaystyle\begin{array}{rcl} H(G,K) = G_{11} + G_{12}K(I - G_{22}K)^{-1}G_{ 21},& &{}\end{array}$$
(1)
Optimal Control via Factorization and Model Matching, Fig. 1

Standard interconnection for control system design

which relates the input w and the output z when v1 = 0 and v2 = 0. The objective is to select K to minimize this measure of performance. Alternatively, controllers that achieve a specified upper bound are sought. It is also usual to require internal stability, which pertains to the fictitious signals v1 and v2, as discussed subsequently. The best known examples are \(\mathcal{H}_{2}\) and \(\mathcal{H}_{\infty }\) control problems. In the former, performance is quantified as the energy (resp. power) of z when w is impulsive (resp. unit white noise), and in the latter, as the worst-case energy gain from w to z, which can be used to reflect robustness to model uncertainty; see Zhou et al. (1996).

The special case of G22 = 0 gives rise to a (weighted) model-matching problem, in that the corresponding performance map \(H(G,K) = G_{11} + G_{12}KG_{21}\) exhibits affine dependence on the design variable K, which is chosen to match G12KG21 to − G11 with respect to the scalar quantification of performance. Any internally stabilizable problem with G22 ≠ 0 can be converted into a model-matching problem. The key ingredients in this transformation are coprime factorizations of the plant model. The role of these and other factorizations in a model-matching approach to \(\mathcal{H}_{2}\) and \(\mathcal{H}_{\infty }\) control problems is the focus of this article.
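By way of numerical illustration (not part of the standard treatment), the affine dependence of the performance map on K when G22 = 0 can be checked directly by evaluating H(G, K) at a single frequency point. The scalar data below are hypothetical, chosen only to exhibit the structure:

```python
import numpy as np

# Toy SISO blocks of G evaluated at a fixed frequency s = jw
# (hypothetical values, chosen only to illustrate the structure).
s = 1j * 2.0
G11, G12, G21, G22 = 0.5 / (s + 1), 1.0 / (s + 2), 1.0 / (s + 3), 0.0

def H(K):
    # Performance map H(G, K) = G11 + G12 K (1 - G22 K)^{-1} G21 at s = jw.
    return G11 + G12 * K * G21 / (1 - G22 * K)

# With G22 = 0, the map is affine in K: increments superpose.
K1, K2 = 2.0 + 1j, -0.5
lhs = H(K1 + K2) - H(0)
rhs = (H(K1) - H(0)) + (H(K2) - H(0))
```

With G22 ≠ 0 the same check fails in general, since K then enters through the inverse \((I - G_{22}K)^{-1}\).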

For the sake of argument, finite-dimensional linear time-invariant systems are considered via real-rational transfer functions in the frequency domain, as the existence of all factorizations employed is well understood in this setting. Indeed, constructions via state-space realizations and Riccati equations are well known. The merits of the model-matching approach pursued here are at least twofold: (i) the underlying algebraic input-output perspective extends to more abstract settings, including classes of distributed-parameter and time-varying systems (Curtain and Zwart 1995; Desoer et al. 1980; Feintuch 1998; Quadrat 2006; Vidyasagar 1985); and (ii) model matching is a convex problem for various measures of performance (including mixed indexes) and controller constraints. The latter can be exploited to devise numerical algorithms for controller optimization (Boyd and Barratt 1991; Dahleh and Diaz-Bobillo 1995; Qi et al. 2004).

First, some notation regarding transfer functions and two measures of performance for control system design is defined. Coprime factorizations are then described within the context of a well-known parametrization of stabilizing controllers, originally discovered by Youla et al. (1976) and Kucera (1975). This yields an affine parametrization of performance maps for problems in standard form, and thus, a transformation to a model-matching problem. Finally, the role of spectral factorizations in solving model-matching problems with respect to impulse-response energy (\(\mathcal{H}_{2}\)) and worst-case energy-gain (\(\mathcal{H}_{\infty }\)) measures of performance is discussed.

Notation and Nomenclature

\(\mathcal{R}\) generically denotes a linear space of matrices having fixed row and column dimensions, which are not reflected in the notation for convenience, and entries that are proper real-rational functions of the complex variable s; i.e., \(\left (\sum _{k=0}^{m}b_{k}s^{k}\right )/\left (\sum _{k=0}^{n}a_{k}s^{k}\right )\) for sets of real coefficients \(\{a_{k}\}_{k=0}^{n}\) and \(\{b_{k}\}_{k=0}^{m}\) with \(m \leq n < \infty \). The compatibility of matrix dimensions is implicitly assumed henceforth. All matrices in \(\mathcal{R}\) have (nonunique) “state-space” realizations of the form \(C(sI - A)^{-1}B + D\), where A, B, C and D are real valued matrices. This form naturally arises in frequency-domain analysis of the input-output map associated with the time-domain model \(\dot{x}(t) = Ax(t) + Bu(t)\), with initial condition x(0) = 0 and output equation \(y(t) = Cx(t) + Du(t)\), where \(\dot{x}\) denotes the time derivative of x and u is the input. The study of such linear time-invariant differential equation models via the Laplace transform and multiplication by real-rational transfer function matrices is fundamental in linear systems theory (Francis 1987; Kailath 1980; Zhou et al. 1996). \(P \in \mathcal{R}\) has an inverse \(P^{-1} \in \mathcal{R}\) if and only if \(\lim _{\vert s\vert \rightarrow \infty }P(s)\) is a nonsingular matrix. The superscripts T and ∗ denote the transpose and complex conjugate transpose. For a matrix \(Z = Z^{{\ast}}\) with complex entries, Z > 0 means \(z^{{\ast}}Zz \geq \epsilon z^{{\ast}}z\) for some ε > 0 and all complex vectors z of compatible dimension. \(P^{\sim }(s) := P(-s)^{T}\), whereby \((P(j\omega ))^{{\ast}} = P^{\sim }(j\omega )\) for all real ω with \(j := \sqrt{-1}\). Zeros of transfer function denominators are called poles.
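As a minimal numerical sketch of the realization formula above, the following evaluates \(C(sI - A)^{-1}B + D\) at a complex point and compares it with the corresponding rational function; the particular realization (controllable canonical form of \(1/(s^{2} + 3s + 2)\)) is chosen purely for illustration:

```python
import numpy as np

def tf_eval(A, B, C, D, s):
    """Evaluate the transfer function C (sI - A)^{-1} B + D at a complex s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# A realization of 1/(s^2 + 3s + 2) in controllable canonical form.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

s = 1j
val = tf_eval(A, B, C, D, s)   # should match 1/(s^2 + 3s + 2) at s = j
```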

In subsequent sections, several subspaces of \(\mathcal{R}\) are used to define and solve two standard linear control problems. The subspace \(\mathcal{B}\subset \mathcal{R}\) comprises transfer functions that have no poles on the imaginary axis in the complex plane. For \(P \in \mathcal{B}\), the scalar performance index

$$\displaystyle\begin{array}{rcl} \|P\|_{\infty } :=\max _{-\infty \leq \omega \leq \infty }\bar{\sigma }(P(j\omega )) \geq 0& & {}\\ \end{array}$$

is finite; the real number \(\bar{\sigma }(Z)\) is the maximum singular value of the matrix argument Z. This index measures the worst-case energy-gain from an input signal u to the output signal y = Pu. Note that \(\|P\|_{\infty } <\gamma\) if and only if \(\gamma ^{2}I - P^{\sim }(j\omega )P(j\omega )> 0\) for all \(-\infty \leq \omega \leq \infty \).
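A crude numerical illustration of this index, not part of the standard treatment, is to grid the imaginary axis and take the largest singular value of \(P(j\omega )\); reliable computation would instead use Hamiltonian/Riccati-based methods:

```python
import numpy as np

def hinf_norm_grid(P, wmax=100.0, n=2001):
    """Coarse estimate of ||P||_inf: max singular value of P(jw) on a grid.
    (A sketch only; accurate computation uses Hamiltonian/Riccati methods.)"""
    w = np.linspace(-wmax, wmax, n)
    return max(np.linalg.svd(np.atleast_2d(P(1j * wk)), compute_uv=False)[0]
               for wk in w)

P = lambda s: 1.0 / (s + 1.0)   # ||P||_inf = 1, attained at w = 0
est = hinf_norm_grid(P)
```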

The subspace \(\mathcal{S}\subset \mathcal{B}\subset \mathcal{R}\) consists of transfer functions that have no poles with positive real part. A transfer function in \(\mathcal{S}\) is called stable because the corresponding input-output map is causal in the time domain, as well as bounded-input bounded-output (in various senses). If \(P \in \mathcal{S}\) is such that \(P^{\sim }P = I\), then it is called inner. If \(P,P^{-1} \in \mathcal{S}\), then both are called outer.

Let \(\mathcal{L}\) denote the subspace of strictly-proper transfer functions in \(\mathcal{B}\); i.e., for all entries of the matrix, the degree n of the denominator exceeds the degree m of the numerator. Observe that \(P \in \mathcal{L}\) if and only if \(P^{\sim }\in \mathcal{L}\). Moreover, \(P_{1}P_{2} \in \mathcal{L}\) and \(P_{3}P_{1} \in \mathcal{L}\) for all \(P_{1} \in \mathcal{L}\) and \(P_{i} \in \mathcal{B}\), i = 2, 3. Now, for \(P_{1},P_{2} \in \mathcal{L}\), define the inner-product

$$\displaystyle\begin{array}{rcl} \langle P_{1},P_{2}\rangle := \frac{1} {2\pi }\int _{-\infty }^{\infty }\mathrm{trace}(P_{ 1}^{\sim }(j\omega )P_{ 2}(j\omega ))d\omega < \infty & & {}\\ \end{array}$$

and the scalar performance index \(\|P\|_{2} := \sqrt{\langle P, P\rangle } \geq 0\) for \(P \in \mathcal{L}\). This index equates to the root-mean-square (energy) measure of the impulse response and the covariance (power) of the output signal y = Pu, when the input signal u is unit white noise. By the properties \(\mathrm{trace}(Z_{1} + Z_{2}) =\mathrm{ trace}(Z_{1}) +\mathrm{ trace}(Z_{2})\) and \(\mathrm{trace}(Z_{1}Z_{2}) =\mathrm{ trace}(Z_{2}Z_{1})\) of the matrix trace, it follows that \(\langle P_{1} + P_{2},P_{3}\rangle =\langle P_{1},P_{3}\rangle +\langle P_{2},P_{3}\rangle\) and
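The inner product above can be illustrated numerically (again outside the standard treatment) by trapezoidal integration over a truncated frequency range; for \(P(s) = 1/(s+1)\) the exact value of \(\|P\|_{2}^{2}\) is 1∕2:

```python
import numpy as np

def h2_norm_sq(P, wmax=1e4, n=2_000_001):
    """Grid approximation of <P, P> = (1/2pi) * integral of |P(jw)|^2 dw
    for scalar P in L (a numerical sketch of the inner product above)."""
    w = np.linspace(-wmax, wmax, n)
    v = np.abs(P(1j * w)) ** 2
    # Trapezoidal rule, then the 1/(2 pi) normalization.
    return np.sum((v[1:] + v[:-1]) * np.diff(w)) / 2.0 / (2.0 * np.pi)

P = lambda s: 1.0 / (s + 1.0)   # analytically, ||P||_2^2 = 1/2
est = h2_norm_sq(P)
```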

$$\displaystyle\begin{array}{rcl} \langle P_{1},P_{2}P_{3}\rangle & =& \langle P_{2}^{\sim }P_{ 1},P_{3}\rangle =\langle P_{1}P_{3}^{\sim },P_{ 2}\rangle \\ & =& \langle P_{3}^{\sim },P_{ 1}^{\sim }P_{ 2}\rangle \;\text{for }P_{i} \in \mathcal{L},\,i = 1,2,3.{}\end{array}$$
(2)

The (not closed) subspace \(\mathcal{L}\subset \mathcal{B}\subset \mathcal{R}\) can be expressed as the direct sum \(\mathcal{L} = \mathcal{H} + \mathcal{H}_{\perp }\), where \(\mathcal{H} = \mathcal{L}\cap \mathcal{S}\) and \(\mathcal{H}_{\perp }\) is the subspace of transfer functions in \(\mathcal{L}\) that have no poles with negative real part. That is, given \(P \in \mathcal{L}\), there is a unique decomposition \(P =\boldsymbol{\varPi } _{+}(P) +\boldsymbol{\varPi } _{-}(P)\), with \(\boldsymbol{\varPi }_{+}(P) \in \mathcal{H}\) and \(\boldsymbol{\varPi }_{-}(P) \in \mathcal{H}_{\perp }\). Observe that \(P \in \mathcal{H}\) if and only if \(P^{\sim }\in \mathcal{H}_{\perp }\). It can be shown via Plancherel’s theorem that \(\langle P_{1},P_{2}\rangle = 0\) for \(P_{1} \in \mathcal{H}_{\perp }\) and \(P_{2} \in \mathcal{H}\). Finally, note that \(P_{1}P_{2} \in \mathcal{H}\) and \(P_{3}P_{1} \in \mathcal{H}\) for \(P_{1} \in \mathcal{H}\) and \(P_{i} \in \mathcal{S}\), i = 2, 3.
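For rational functions, the decomposition \(P =\boldsymbol{\varPi }_{+}(P) +\boldsymbol{\varPi }_{-}(P)\) amounts to a partial-fraction expansion split by the sign of the pole real parts. A small sketch (the example \(P(s) = 1/((s+1)(s-2))\) is chosen for illustration):

```python
import numpy as np
from scipy.signal import residue

# P(s) = 1/((s+1)(s-2)) in L: split into the stable part Pi_+(P) and the
# antistable part Pi_-(P) via partial fractions.
b = [1.0]
a = np.polymul([1.0, 1.0], [1.0, -2.0])   # (s+1)(s-2)
r, p, k = residue(b, a)                    # residues r_i at poles p_i

def part(s, keep):
    return sum(ri / (s - pi) for ri, pi in zip(r, p) if keep(pi))

s = 0.7j
Pp = part(s, lambda pole: pole.real < 0)   # Pi_+(P): poles in the open LHP
Pm = part(s, lambda pole: pole.real > 0)   # Pi_-(P): poles in the open RHP
total = Pp + Pm                            # equals P(s)
```

Here \(\boldsymbol{\varPi }_{+}(P)(s) = (-1/3)/(s+1)\) and \(\boldsymbol{\varPi }_{-}(P)(s) = (1/3)/(s-2)\).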

Coprime and Spectral Factorizations

Given \(P \in \mathcal{R}\), the factorizations \(P = NM^{-1} =\tilde{ M}^{-1}\tilde{N}\) are said to be (doubly) coprime over \(\mathcal{S}\), if N, M, \(\tilde{N}\), \(\tilde{M}\) are all elements of \(\mathcal{S}\) and there exist U0, V0, \(\tilde{U}_{0}\), \(\tilde{V }_{0}\) all in \(\mathcal{S}\) such that

$$\displaystyle\begin{array}{rcl} \left [\begin{array}{*{10}c} \tilde{V }_{0} & -\tilde{U}_{0} \end{array} \right ]\left [\begin{array}{*{10}c} M\\ N\end{array} \right ] = I\text{ and }\left [\begin{array}{*{10}c} -\tilde{N}&\tilde{M} \end{array} \right ]\left [\begin{array}{*{10}c} U_{0} \\ V _{0} \end{array} \right ] = I& &{}\end{array}$$
(3)

hold; i.e., \(\left [\begin{array}{*{10}c} M^{T}&N^{T} \end{array} \right ]\) and \(\left [\begin{array}{*{10}c} -\tilde{N}&\tilde{M} \end{array} \right ]\) are right invertible in \(\mathcal{S}\). Importantly, if the factorizations are coprime and \(P \in \mathcal{S}\), then \(M^{-1} =\tilde{ V }_{0} -\tilde{ U}_{0}P\) and \(\tilde{M}^{-1} = V _{0} - PU_{0}\) are in \(\mathcal{S}\), as sums of products of transfer functions in \(\mathcal{S}\); i.e., M and \(\tilde{M}\) are outer. Doubly coprime factorizations over \(\mathcal{S}\) always exist, but these are not unique. Constructions from state-space realizations can be found in Zhou et al. (1996, Chapter 6) and Francis (1987), for example. As mentioned above, coprime factorizations play a role in transforming a standard problem into the special case of a model matching problem, via the Youla-Kučera parametrization of internally stabilizing controllers presented in the next section.

Subsequently, a special coprime factorization proves to be useful. If \(P^{\sim }(s)P(s) = M^{-\sim }(s)N^{\sim }(s)N(s)M^{-1}(s)> 0\) for s on the extended imaginary axis (i.e., for s = j ω with −ω), then it is possible to choose the factor N to be inner. In this case, if P is also an element of \(\mathcal{S}\), then \(P = NM^{-1}\) is called an inner-outer factorization, and \(P^{\sim }P = (M^{-1})^{\sim }M^{-1}\) is called a spectral factorization, since \(M,M^{-1} \in \mathcal{S}\). More generally, if \(\varXi =\varXi ^{\sim }\in \mathcal{B}\) satisfies Ξ(s) > 0 for s on the extended imaginary axis, then there exists a (non-unique) spectral factor \(\varSigma,\varSigma ^{-1} \in \mathcal{S}\) such that Ξ = ΣΣ. Similarly, there exists a co-spectral factor \(\tilde{\varSigma },\tilde{\varSigma }^{-1} \in \mathcal{S}\) such that \(\varXi =\tilde{\varSigma }\tilde{\varSigma } ^{\sim }\). State-space constructions via Riccati equations can be found in Zhou et al. (1996, Chapter 13), for example.
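A scalar example of spectral factorization (again with hand-computed factors, for illustration only): \(\varXi (s) = (4 - s^{2})/(1 - s^{2})\) satisfies \(\varXi =\varXi ^{\sim }\) and \(\varXi (j\omega ) = (4 +\omega ^{2})/(1 +\omega ^{2})> 0\), and the factor \(\varSigma (s) = (s+2)/(s+1)\) has \(\varSigma,\varSigma ^{-1} \in \mathcal{S}\) with \(\varXi =\varSigma ^{\sim }\varSigma\):

```python
import numpy as np

# Scalar spectral factorization sketch: Xi(s) = (4 - s^2)/(1 - s^2) with
# spectral factor Sigma(s) = (s + 2)/(s + 1); Sigma and 1/Sigma are stable.
Xi    = lambda s: (4.0 - s ** 2) / (1.0 - s ** 2)
Sigma = lambda s: (s + 2.0) / (s + 1.0)

w = np.linspace(-50.0, 50.0, 1001)
lhs = Xi(1j * w)                      # Xi on the imaginary axis (real there)
rhs = np.abs(Sigma(1j * w)) ** 2      # Sigma~(jw) Sigma(jw) = |Sigma(jw)|^2
```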

Affine Controller/Performance-Map Parametrization

With reference to Fig. 1, a generalized plant model \(G = \left [\begin{matrix}\scriptstyle G_{11}&\scriptstyle G_{12} \\ \scriptstyle G_{21}&\scriptstyle G_{22}\end{matrix}\right ] \in \mathcal{R}\) is said to be internally stabilizable if there exists a \(K \in \mathcal{R}\) such that the nine transfer functions associated with the map from the vector of signals (w, v1, v2) to the vector of signals (z, u, y), which includes the performance map \(H(G,K) = G_{11} + G_{12}K(I - G_{22}K)^{-1}G_{21}\), are all elements of \(\mathcal{S}\). Accounting in this way for the influence of the fictitious signals v1 and v2, and the behavior of the internal signals u and y, amounts to the following requirement: given minimal state-space realizations, any nonzero initial condition response decays exponentially in the time domain when G and K are interconnected according to Fig. 1 with w = 0, v1 = 0 and v2 = 0. Not every \(G \in \mathcal{R}\) is internally stabilizable in the sense just defined; for example, take G11 to have a pole with positive real part and \(G_{21} = G_{12} = G_{22} = 0\). A necessary condition for stabilizability is \((I - G_{22}K)^{-1} \in \mathcal{R}\); i.e., the inverse must be proper. The latter always holds if G22 is strictly proper, as assumed henceforth to simplify the presentation. It is also assumed that G is internally stabilizable.

It can be shown that G is internally stabilized by K if and only if the standard feedback interconnection of G22 and K, corresponding to w = 0 in Fig. 1, is internally stable. That is, if and only if the transfer function

$$\displaystyle\begin{array}{rcl} \left [\begin{array}{*{10}c} I &-K\\ -G_{ 22} & I \end{array} \right ] \in \mathcal{R},& &{}\end{array}$$
(4)

which relates u and y to v1 and v2 by virtue of the summing junctions at the interconnection points, has an inverse in \(\mathcal{S}\); see Francis (1987, Theorem 4.2). Substituting the coprime factorizations \(K = UV ^{-1} =\tilde{ V }^{-1}\tilde{U}\) and \(G_{22} = NM^{-1} =\tilde{ M}^{-1}\tilde{N}\), it follows that the inverse of (4) is an element of \(\mathcal{S}\) if and only if

$$\displaystyle\begin{array}{rcl} \left [\begin{array}{*{10}c} M &U\\ N &V \end{array} \right ]^{-1} \in \mathcal{S}\quad \quad \Leftrightarrow \quad \quad \left [\begin{array}{*{10}c} \tilde{V } &-\tilde{U} \\ -\tilde{N}& \tilde{M} \end{array} \right ]^{-1} \in \mathcal{S}.& &{}\end{array}$$
(5)

The equivalent characterizations of internal stability in (5) lead directly to affine parametrizations of controllers and performance maps. Specifically, following the approach of Desoer et al. (1980), Vidyasagar (1985), and Francis (1987), suppose that the factorizations \(G_{22} = NM^{-1} =\tilde{ M}^{-1}\tilde{N}\) are doubly coprime in the sense that (3) holds for some \(U_{0},V _{0},\tilde{U}_{0},\tilde{V }_{0} \in \mathcal{S}\). Indeed, since \(0 = G_{22} - G_{22} =\tilde{ M}^{-1}(\tilde{M}N -\tilde{ N}M)M^{-1}\), it follows that

$$\displaystyle\begin{array}{rcl} \left [\begin{array}{*{10}c} \tilde{V }_{0} & -\tilde{U}_{0} \\ -\tilde{N}& \tilde{M} \end{array} \right ]\left [\begin{array}{*{10}c} M &U_{0} \\ N &V _{0} \end{array} \right ]& =& \left [\begin{array}{*{10}c} I &0\\ 0 &I \end{array} \right ] \\ & =& \left [\begin{array}{*{10}c} M &U_{0} \\ N &V _{0} \end{array} \right ]\left [\begin{array}{*{10}c} \tilde{V }_{0} & -\tilde{U}_{0} \\ -\tilde{N}& \tilde{M} \end{array} \right ].{}\end{array}$$
(6)

Exploiting this and the condition (5), it holds that \(K = UV ^{-1}\) stabilizes G22 if and only if

$$\displaystyle\begin{array}{rcl} U = (U_{0} - MQ)\,\text{ and }V = (V _{0} - NQ)\,\text{ with }Q \in \mathcal{S}.& & {}\\ \end{array}$$

Similarly, K stabilizes G22 if and only if \(K = (\tilde{V }_{0} - Q\tilde{N})^{-1}(\tilde{U}_{0} - Q\tilde{M})\) with \(Q \in \mathcal{S}\). Together, these constitute the Youla-Kučera parametrizations of internally stabilizing controllers. Importantly, the coprime factors that appear in these are affine functions of the stable parameter Q. Moreover, using (6), an affine parametrization of the standard performance map (1) holds by direct substitution of either controller parametrization. Specifically,

$$\displaystyle\begin{array}{rcl} H(G,K)& =& G_{11} + G_{12}K(I - G_{22}K)^{-1}G_{ 21} \\ & =& T_{1} + T_{2}QT_{3}\quad \text{ with }Q \in \mathcal{S}, {}\end{array}$$
(7)

where \(T_{1} = G_{11} + G_{12}U_{0}\tilde{M}G_{21}\), \(T_{2} = -G_{12}M\) and \(T_{3} =\tilde{ M}G_{21}\). Clearly, \(T_{1} \in \mathcal{S}\) since this is the performance map when \(Q = 0 \in \mathcal{S}\). By the assumption that G is stabilizable, it follows that T2 and T3 are also elements of \(\mathcal{S}\); see Francis (1987, Chapter 4). The so-called Q-parametrization in (7) motivates the subsequent consideration of model-matching problems with respect to the standard measures of control system performance \(\|\cdot \|_{2}\) and \(\|\cdot \|_{\infty }\).
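Continuing the scalar example \(G_{22}(s) = 1/(s-1)\) with the hand-chosen factors N = 1∕(s+1), M = (s−1)∕(s+1), U0 = −2, V0 = 1, one can verify symbolically that for every constant parameter Q(s) = q the controller \(K = (U_{0} - MQ)(V_{0} - NQ)^{-1}\) yields the closed-loop characteristic polynomial \((s+1)^{2}\); the Q-dependence cancels exactly. A sketch of that polynomial computation:

```python
import numpy as np

# Youla parametrization sketch for G22(s) = 1/(s-1) with hand-chosen factors
#   N = 1/(s+1), M = (s-1)/(s+1), U0 = -2, V0 = 1 (the Bezout identities hold).
# For constant Q(s) = q, K = (U0 - M q)/(V0 - N q) gives
#   1 - G22 K = [(s-1)(s+1-q) + 2(s+1) + q(s-1)] / [(s-1)(s+1-q)],
# whose numerator should be (s+1)^2 for every q.
def closed_loop_charpoly(q):
    t1 = np.polymul([1.0, -1.0], [1.0, 1.0 - q])   # (s-1)(s+1-q)
    t2 = np.array([0.0, 2.0, 2.0])                 # 2(s+1)
    t3 = np.polymul([q], [1.0, -1.0])              # q(s-1)
    t3 = np.concatenate([np.zeros(3 - len(t3)), t3])
    return t1 + t2 + t3                            # expect [1, 2, 1]
```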

Model-Matching via Spectral Factorization

Bearing in mind the Q-parametrization (7), consider the following \(\mathcal{H}_{2}\) model-matching problem, where inf denotes greatest lower bound (infimum) and \(T_{i} \in \mathcal{S}\), i = 1, 2, 3:

$$\displaystyle\begin{array}{rcl} \inf _{Q\in \mathcal{S}}\|T_{1} + T_{2}QT_{3}\|_{2}.& & {}\\ \end{array}$$

Assume that T2(s) and T3(s) have full column and row rank, respectively, for s on the extended imaginary axis. Also assume that T1 is strictly proper, whereby Q must be strictly proper, and thus an element of \(\mathcal{H}\subset \mathcal{S}\), for the performance index to be finite. Under this standard collection of assumptions, the infimum is achieved as shown below.

A minimizer of the convex functional \(f := Q \in \mathcal{H}\mapsto \langle (T_{1} + T_{2}QT_{3}),(T_{1} + T_{2}QT_{3})\rangle\) is a solution of the model matching problem. Given spectral factorizations \(\varPhi ^{\sim }\varPhi = T_{2}^{\sim }T_{2}\) and \(\varLambda \varLambda ^{\sim } = T_{3}T_{3}^{\sim }\) (i.e., \(\varPhi,\varPhi ^{-1},\varLambda,\varLambda ^{-1} \in \mathcal{S}\)), which exist by the assumptions on the problem data, let R : = Φ Q Λ and \(W :=\varPhi ^{-\sim }T_{2}^{\sim }T_{1}T_{3}^{\sim }\varLambda ^{-\sim }\). Then for \(Q \in \mathcal{H}\), which is equivalent to \(R \in \mathcal{H}\) by the properties of spectral factors, it follows that

$$\displaystyle\begin{array}{rcl} f(Q)& =& \langle T_{1},T_{1}\rangle +\langle \varPhi ^{-\sim }T_{ 2}^{\sim }T_{ 1}T_{3}^{\sim }\varLambda ^{-\sim },R\rangle +\langle R,\varPhi ^{-\sim }T_{ 2}^{\sim }T_{ 1}T_{3}^{\sim }\varLambda ^{-\sim }\rangle +\langle R,R\rangle {}\end{array}$$
(8)
$$\displaystyle\begin{array}{rcl} & =& \langle T_{1},T_{1}\rangle +\langle (\boldsymbol{\varPi }_{-}(W) +\boldsymbol{\varPi } _{+}(W) + R),(\boldsymbol{\varPi }_{-}(W) +\boldsymbol{\varPi } _{+}(W) + R)\rangle -\langle W,W\rangle \\ & =& \langle T_{1},T_{1}\rangle -\langle \boldsymbol{\varPi }_{+}(W),\boldsymbol{\varPi }_{+}(W)\rangle +\langle (\boldsymbol{\varPi }_{+}(W) + R),(\boldsymbol{\varPi }_{+}(W) + R)\rangle, {}\end{array}$$
(9)

where the second-to-last equality holds by “completion-of-squares” and the last equality holds since \(\langle \boldsymbol{\varPi }_{+}(W),\boldsymbol{\varPi }_{-}(W)\rangle = 0 =\langle R,\boldsymbol{\varPi }_{-}(W)\rangle\). From (9) it is apparent that

$$\displaystyle\begin{array}{rcl} Q = -\varPhi ^{-1}\boldsymbol{\varPi }_{ +}(\varPhi ^{-\sim }T_{ 2}^{\sim }T_{ 1}T_{3}^{\sim }\varLambda ^{-\sim })\varLambda ^{-1}& & {}\\ \end{array}$$

is a minimizer of f. As illustrated above, spectral factorization is a key component of the so-called Wiener-Hopf approach of Youla et al. (1976) and DeSantis et al. (1978).
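A worked scalar instance of the minimizer formula (with hand-computed data, by way of illustration): take \(T_{2}(s) = (s-1)/(s+1)\) (inner), \(T_{3} = 1\) and \(T_{1}(s) = 1/((s+1)(s+2))\). Then Φ = Λ = 1, \(W = T_{2}^{\sim }T_{1} = 1/((s-1)(s+2))\), partial fractions give \(\boldsymbol{\varPi }_{+}(W) = -(1/3)/(s+2)\), and the minimizer is \(Q^{{\ast}}(s) = (1/3)/(s+2)\) with optimal cost \(f(Q^{{\ast}}) = \langle T_{1},T_{1}\rangle -\langle \boldsymbol{\varPi }_{+}(W),\boldsymbol{\varPi }_{+}(W)\rangle = 1/12 - 1/36 = 1/18\). A numerical check:

```python
import numpy as np

# H2 model-matching sketch (hand-computed example, scalar case):
#   T2 = (s-1)/(s+1) inner, T3 = 1, T1 = 1/((s+1)(s+2)),
#   minimizer Q*(s) = (1/3)/(s+2), optimal cost 1/18.
T1 = lambda s: 1.0 / ((s + 1.0) * (s + 2.0))
T2 = lambda s: (s - 1.0) / (s + 1.0)
Qstar = lambda s: (1.0 / 3.0) / (s + 2.0)

def cost(Q, wmax=2e3, n=400_001):
    # Trapezoidal approximation of ||T1 + T2 Q||_2^2 over a frequency grid.
    w = np.linspace(-wmax, wmax, n)
    v = np.abs(T1(1j * w) + T2(1j * w) * Q(1j * w)) ** 2
    return np.sum((v[1:] + v[:-1]) * np.diff(w)) / 2.0 / (2.0 * np.pi)

f_opt = cost(Qstar)                 # ~ 1/18
f_zero = cost(lambda s: 0.0 * s)    # Q = 0 gives <T1, T1> = 1/12
```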

Now consider the \(\mathcal{H}_{\infty }\) model-matching problem

$$\displaystyle\begin{array}{rcl} \inf _{Q\in \mathcal{S}}\|T_{1} + T_{2}QT_{3}\|_{\infty },& & {}\\ \end{array}$$

given \(T_{i} \in \mathcal{S}\), i = 1, 2, 3. This is more challenging than the problem discussed above, where \(\|\cdot \|_{2}\) is the performance index. While sufficient conditions are again available for the infimum to be achieved, computing a minimizer is generally difficult; see Francis and Doyle (1987) and Glover et al. (1991). As such, nearly optimal solutions are often sought by considering the relaxed problem of finding the set of \(Q \in \mathcal{S}\) that satisfy \(\|T_{1} + T_{2}QT_{3}\|_{\infty } <\gamma\) for a value of γ > 0 greater than, but close to, the infimum.

With a view to highlighting the role of factorization methods and simplifying the presentation, suppose that T2 is inner, which is possible without loss of generality via inner-outer factorization if T2(s) has full column rank for s on the extended imaginary axis. Furthermore, assume that T3 = I. Following the approach of Francis (1987) and Green et al. (1990), let \(X^{\sim } = \left [\begin{array}{*{10}c} X_{1}^{\sim }\quad &X_{2}^{\sim } \end{array} \right ] := \left [\begin{array}{*{10}c} T_{2}\quad & I - T_{2}T_{2}^{\sim } \end{array} \right ] \in \mathcal{B}\), so that \(X^{\sim }X = I\) on the imaginary axis and \(XT_{2} = [\begin{matrix}\scriptstyle I \\ \scriptstyle 0\end{matrix}]\). Observe that

$$\displaystyle\begin{array}{rcl} \|T_{1} + T_{2}Q\|_{\infty }& =& \|X(T_{1} + T_{2}Q)\|_{\infty } \\ & =& \left \|\left [\begin{array}{*{10}c} T_{2}^{\sim }T_{1} + Q \\ (I - T_{2}T_{2}^{\sim })T_{1} \end{array} \right ]\right \|_{\infty } <\gamma {}\end{array}$$
(10)

if and only if

$$\displaystyle\begin{array}{rcl} & & 0 <\gamma ^{2}I - T_{ 1}^{\sim }(I - T_{ 2}T_{2}^{\sim })T_{ 1} \\ & & \quad - (T_{2}^{\sim }T_{ 1} + Q)^{\sim }(T_{ 2}^{\sim }T_{ 1} + Q){}\end{array}$$
(11)

on the extended imaginary axis. Note that (11) implies \(0 <\gamma ^{2}I - T_{1}^{\sim }(I - T_{2}T_{2}^{\sim })^{2}T_{1}\). Thus, it follows that there exists a \(Q \in \mathcal{S}\) for which (10) holds if and only if the following are both satisfied: (a) there exists a spectral factorization \(\gamma ^{2}\varPsi ^{\sim }\varPsi =\gamma ^{2}I - T_{1}^{\sim }(I - T_{2}T_{2}^{\sim })^{2}T_{1}\); and (b) there exists an \(\bar{R}(= Q\varPsi ^{-1}) \in \mathcal{S}\) such that \(\|\bar{W} +\bar{ R}\|_{\infty } <\gamma\), where \(\bar{W} := T_{2}^{\sim }T_{1}\varPsi ^{-1} \in \mathcal{B}\). The condition (b) is a well-known extension problem and a solution exists if and only if the induced norm of the Hankel operator with symbol \(\bar{W}\) is less than γ, which is part of a result known as Nehari’s theorem. In fact, (b) is equivalent to the existence of a spectral factor \(\varUpsilon,\varUpsilon ^{-1} \in \mathcal{S}\) with \(\varUpsilon _{11}^{-1} \in \mathcal{S}\) such that

$$\displaystyle\begin{array}{rcl} \varUpsilon ^{\sim }\left [\begin{array}{*{10}c} I & 0 \\ 0&-\gamma ^{2}I \end{array} \right ]\varUpsilon = \left [\begin{array}{*{10}c} I &\bar{W}\\ 0 & I \end{array} \right ]^{\sim }\left [\begin{array}{*{10}c} I & 0\\ 0 &-\gamma ^{2 }I \end{array} \right ]\left [\begin{array}{*{10}c} I &\bar{W}\\ 0 & I \end{array} \right ],& &{}\end{array}$$
(12)

in which case \(\|\bar{W} +\bar{ R}\|_{\infty }\leq \gamma\) if and only if \(\bar{R} =\bar{ R}_{1}\bar{R}_{2}^{-1}\) with \(\left [\begin{array}{*{10}c} \bar{R}_{1}^{T}&\bar{R}_{2}^{T} \end{array} \right ] := \left [\begin{array}{*{10}c} \bar{S}^{T}&I \end{array} \right ]\varUpsilon ^{-T}\), \(\bar{S} \in \mathcal{S}\) and \(\|\bar{S}\|_{\infty }\leq \gamma\); see Ball and Ran (1987), Francis (1987), and Green et al. (1990) for details, including state-space constructions of the factors via Riccati equations. Noting that

$$\displaystyle\begin{array}{rcl} \left [\begin{array}{*{10}c} T_{2} & T_{1}\\ 0 & I \end{array} \right ]^{\sim }\left [\begin{array}{*{10}c} I & 0 \\ 0&-\gamma ^{2}I \end{array} \right ]\left [\begin{array}{*{10}c} T_{2} & T_{1} \\ 0 & I \end{array} \right ] = \left [\begin{array}{*{10}c} I &0\\ 0 & \varPsi \end{array} \right ]^{\sim }\left [\begin{array}{*{10}c} I &\bar{W}\\ 0 & I \end{array} \right ]^{\sim }& & {}\\ \left [\begin{array}{*{10}c} I & 0\\ 0 &-\gamma ^{2}I \end{array} \right ]\left [\begin{array}{*{10}c} I &\bar{W}\\ 0 & I \end{array} \right ]\left [\begin{array}{*{10}c} I &0\\ 0 & \varPsi \end{array} \right ],& & {}\\ \end{array}$$

it follows using (12) that there exists a \(Q \in \mathcal{S}\) such that (10) holds if and only if there exists a spectral factor \(\varOmega,\varOmega ^{-1} \in \mathcal{S}\) with \(\varOmega _{11}^{-1} \in \mathcal{S}\)\(\left (\varOmega =\varUpsilon \left [\begin{matrix}\scriptstyle I&\scriptstyle 0 \\ \scriptstyle 0&\scriptstyle \varPsi \end{matrix}\right ]\right )\) that satisfies

$$\displaystyle\begin{array}{rcl} \left [\begin{array}{*{10}c} T_{2} & T_{1}\\ 0 & I \end{array} \right ]^{\sim }\left [\begin{array}{*{10}c} I & 0 \\ 0&-\gamma ^{2}I \end{array} \right ]\left [\begin{array}{*{10}c} T_{2} & T_{1} \\ 0 & I \end{array} \right ]& =\varOmega ^{\sim }\left [\begin{array}{*{10}c} I & 0 \\ 0&-\gamma ^{2}I \end{array} \right ]\varOmega,&{}\end{array}$$
(13)

in which case \(\|T_{1} + T_{2}Q\|_{\infty }\leq \gamma\) if and only if \(Q = Q_{1}Q_{2}^{-1}\), where \(\left [\begin{array}{*{10}c} Q_{1}^{T}&Q_{2}^{T} \end{array} \right ] := \left [\begin{array}{*{10}c} S^{T}&I \end{array} \right ]\varOmega ^{-T}\), \(S \in \mathcal{S}\) and \(\|S\|_{\infty }\leq \gamma\); see Green et al. (1990). So-called J-spectral factorizations of the kind in (12) and (13) also appear in the chain-scattering/conjugation approach of Kimura (19891997) and the factorization approach of Ball et al. (1991), for example.
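The Nehari-type extension step invoked above can be illustrated with a first-order scalar example (hand-computed, for illustration only): for the antistable symbol \(\bar{W}(s) = 1/(s-1)\), the distance from \(\bar{W}\) to \(\mathcal{S}\) in the infinity norm equals the Hankel norm of its stable reflection \(1/(s+1)\), namely 1∕2, and the stable constant \(\bar{R} = 1/2\) attains it, since \(\bar{W} +\bar{ R} = \tfrac{1} {2}(s + 1)/(s - 1)\) is all-pass with magnitude 1∕2:

```python
import numpy as np

# Nehari-type sketch: Wbar(s) = 1/(s-1) is antistable; the stable constant
# Rbar = 1/2 gives the all-pass error (1/2)(s+1)/(s-1) of magnitude 1/2,
# whereas Rbar = 0 leaves sup |Wbar(jw)| = 1 (at w = 0).
Wbar = lambda s: 1.0 / (s - 1.0)

w = np.linspace(-200.0, 200.0, 40001)
err_opt  = np.max(np.abs(Wbar(1j * w) + 0.5))   # ~ 1/2, flat over frequency
err_zero = np.max(np.abs(Wbar(1j * w)))         # ~ 1, attained at w = 0
```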

Summary

The preceding sections highlight the role of coprime and spectral factorizations in formulating and solving model-matching problems that arise from standard \(\mathcal{H}_{2}\) and \(\mathcal{H}_{\infty }\) control problems. The transformation of standard control problems to model-matching problems hinges on an affine parametrization of internally stabilized performance maps. Beyond the problems considered here, this parametrization can be exploited to devise numerical algorithms for various other control problems in terms of convex mathematical programs.

Cross-References