Abstract
By means of iterative learning parametrization of state/output feedback gains in linear time-invariant systems, pointwise pole assignment (PPA) is re-formulated and addressed in a complex-domain fashion, and implementation issues are also examined. Technical features include: (i) no assumptions other than controllability and observability are needed; (ii) the iterative learning parametrization algorithms are numerically tractable and robust against initial values and matrix uncertainties; (iii) the suggested algorithms are significant for achieving PPA-related control strategies in which data-based modeling and data-driven techniques are employed. Numerical examples are included to illustrate the main results.
1 Introduction
As is well known, poles and zeros are substantially related to system dynamics, structural properties, and control performance [1]. Almost all structural and spectral features of linear time-invariant systems are related to poles and zeros defined on their state-space realizations and transfer functions [2,3,4,5,6]. Therefore, how to locate or specify their positions and distribution by means of state and/or output feedback gain parametrization is one of the vital techniques for control design and applications; this is simply called pole assignment. It is by no means an exaggeration to say that pole assignment is a key tool for achieving almost all control objectives and performances. Indeed, it is not surprising to see that numerous pole assignment problems have been formulated and resolved in various ways in the literature; as a matter of fact, it is impossible to exhaust the literature even if we focus only on the classical versions of pole assignment.
Roughly speaking, three types of pole assignment are coined and addressed in a diversity of terminologies and notations in continuous- and discrete-time systems. In what follows, we classify these types into pointwise, regional and pinning pole assignments, with or without constraints. Pointwise pole assignment means that the closed-loop poles or eigenvalues are placed at some isolated points for obtaining the expected dynamics and modes [7,8,9,10,11,12,13]; pole assignment in this sense is typical as well as classical. Regional pole assignment means that the closed-loop poles are placed into some prescribed region so that certain dynamics and features can be obtained while some controller parametrization freedom remains available [14,15,16,17,18]; pole assignment in this sense is relatively new and better reflects engineering applications. Pinning pole assignment means that some closed-loop poles are pinned to isolated points or to one region, while the others are located in another, distinct region [19,20,21,22]; pole assignment in this sense can be exploited when complicated control plants are involved or subject to uncertainties and constraints. Besides pole assignment, specification of spectra and structures is reported in [21,22,23,24,25,26,27,28], where time-domain as well as frequency-domain concerns are considered.
Motivated by the existing studies, this paper develops an iterative learning state/output feedback gain parametrization approach, which not only solves pole assignment but also addresses numerical tractability and robustness against initial values and matrix uncertainties. Moreover, pointwise pole assignment can be formulated in continuous and discrete time alike, though only the former case is discussed here.
The major contribution is a class of iterative learning algorithms for feedback gain parametrization, which possess the following technical advantages: (i) no assumptions other than controllability and observability are needed; (ii) the iterative learning parametrization algorithms are numerically tractable and robust against initial values and model uncertainties; (iii) when PPA-related control strategies are based on data-based modeling and data-driven techniques, the suggested algorithms can be employed easily.
Outline. Section 2 collects preliminaries on LTI dynamical systems, including the state-space models and their closed-loop expressions under state/output feedback. Pointwise pole assignment problems are formulated in Sect. 3. Technical issues involved in the iterative learning algorithms are summarized in Sect. 4. Numerical examples are sketched in Sect. 5, and conclusions are given in Sect. 6.
Notations. \(\mathcal {R}\) and \(\mathcal {C}\) denote the sets of all real and complex numbers, respectively. \(I_k\) denotes the \(k\times k\) identity matrix. \((\cdot )^T\) means the transpose of a matrix \((\cdot )\) , while \((\cdot )^*\) means the conjugate transpose of \((\cdot )\). Also, \((\cdot )^{-T}=((\cdot )^{-1})^T\).
2 Preliminaries to LTI multivariable systems
Consider the continuous-time linear time-invariant dynamical system
where \(x\in \mathcal {R}^n\), \(u\in \mathcal {R}^m\), and \(y\in \mathcal {R}^l\) are, respectively, the state, input and output vectors; \(A\in \mathcal {R}^{n\times n}\), \(B\in \mathcal {R}^{n\times m}\) and \(C\in \mathcal {R}^{l\times n}\) are constant matrices.
Now let us introduce to the system (1) the following static state or output feedback, respectively.
where \(v\in \mathcal {R}^m\) is a new input, and \(K\in \mathcal {R}^{m\times n}\) and \(L\in \mathcal {R}^{m\times l}\) are static gains.
Then, the closed-loop state-space equations are
and
Correspondingly, we write the characteristic polynomial for (3) and (4), respectively, by
where \(s\in \mathcal {C}\) is the Laplace variable.
Controllability/observability, stabilization and stability of the system (1) are standard and can be found in textbooks on systems and control theory, say [1]. The PBH criteria are important for understanding the discussions; the reader is referred to Corollaries 12.6.19 and 12.8.4 of [2], respectively.
To facilitate our discussion, let \( {\Omega _0}\) be a simple closed contour on the complex plane \(\mathcal {C}\) (for example, the unit circle for discrete-time systems, or the Nyquist contour for continuous-time cases). \(\{s_0,s_1, \cdots , s_\kappa \}\) is a set of isolated points on \( {\Omega _0}\), namely \(s_i\in {\Omega _0}\) with \(i=0,1,\cdots , \kappa \), where \(\kappa >0\) is an integer so that the total number of points is \(\kappa +1\). The interior with \( {\Omega _0}\) being the boundary is denoted by \(\mathrm {Int}( {\Omega _0})\).
Let us denote the real polynomial
whose roots belong to \(\mathrm {Int}( {\Omega _0})\) and are specified according to some expected performances; that is, \(\alpha (s)\) is the desired characteristic polynomial.
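As a concrete sketch, the desired polynomial \(\alpha(s)\) can be built numerically from a list of specified roots; the roots below are the ones used later in Sect. 5, and `numpy.poly` is the assumed tool.

```python
import numpy as np

# Desired closed-loop poles (example set from Sect. 5; must be real or in
# complex-conjugate pairs so that alpha(s) has real coefficients).
desired_poles = [-1.0, -1.5, -2.0,
                 -2.5 + 1j * np.sqrt(3), -2.5 - 1j * np.sqrt(3)]

# alpha(s): monic real polynomial whose roots are the desired poles.
alpha_coeffs = np.real(np.poly(desired_poles))  # highest-degree coefficient first

def alpha(s):
    """Evaluate the desired characteristic polynomial at s."""
    return np.polyval(alpha_coeffs, s)
```

Because the complex roots occur in conjugate pairs, the coefficients come out (numerically) real, in line with the realness rules of Sect. 4.2.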
3 Pointwise pole assignment via iterative learning algorithm
3.1 Problem 1: pole assignment
The problem in the state feedback is: determine the state feedback gain \(K\in \mathcal {R}^{m\times n}\) such that the error function is minimized over \(s\in \{s_0,s_1,\cdots ,s_\kappa \}\subset {\Omega _0}\).
Or, the problem in the output feedback is: determine the output feedback gain \(L\in \mathcal {R}^{m\times l}\) such that the error function is minimized over \(s\in \{s_0,s_1,\cdots ,s_\kappa \}\subset {\Omega _0}\).
Remark 1
Clearly \(J(s,K)\ge 0\) for any \(s\in {\Omega _0}\) and K. By the error function (7), when the minimum is reached, it holds that \(p(s,K)=\alpha (s)\). This says that by installing the state feedback K in the system (1), the closed-loop eigenvalues are assigned to the expected positions. To avoid redundancy, the discussion of the output feedback case is omitted. This is also the case throughout the subsequent arguments.
3.2 Problem 2: pole assignment with gain trace
With the gain trace taken into account, the problem in the state feedback is: determine the state feedback gain \(K\in \mathcal {R}^{m\times n}\) such that the following error function is minimized over \(s\in \{s_0,s_1,\cdots ,s_\kappa \}\subset {\Omega _0}\).
Or, the problem in the output feedback when taking the gain trace into consideration is: determine the output feedback gain \(L\in \mathcal {R}^{m\times l}\) such that the following error function is minimized over \(s\in \{s_0,s_1,\cdots ,s_\kappa \}\subset {\Omega _0}\).
Remark 2
By (9), when the minimum is reached, it holds that \(p(s,K)\rightarrow \alpha (s)\) and \(\mathrm {trace}(K^TK)\rightarrow \mathrm {min}\). By the continuity of roots with respect to polynomial coefficients, this says that installing the state feedback K in the system (1) assigns the closed-loop eigenvalues to some neighborhood of the expected positions, while the state feedback gain is kept as small as possible.
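To make the trace-regularized objective concrete, here is a minimal sketch of a Problem 2-style evaluation. The weight `rho` is an assumption introduced for illustration, since formula (9) is not reproduced in this text and may combine the two terms differently.

```python
import numpy as np

def regularized_error(A, B, K, s, alpha, rho=0.1):
    """Problem 2 objective, sketched: the Problem 1 error |p(s,K) - alpha(s)|^2
    plus a gain-size penalty rho * trace(K^T K).  rho is a hypothetical
    weighting, not taken from the paper's formula (9)."""
    n = A.shape[0]
    p = np.linalg.det(s * np.eye(n) - A + B @ K)  # closed-loop char. poly at s
    return abs(p - alpha(s)) ** 2 + rho * np.trace(K.T @ K)
```

The penalty term is zero only at K = 0, which is how minimizing the sum trades off assignment accuracy against gain size.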
3.3 Problems solving via iterative learning parametrization
In this section, we discuss algorithms for the pole assignment by means of an iterative learning approach. The algorithms present us with numerical approximations of the feedback gains. This is a typical feature of the suggested approach.
The iterative learning algorithms in (11) are given in terms of \(J(s,\mathbf {K})\) and \(J(s,\mathbf {L})\) (instead of J(s, K) and J(s, L)), which are defined below; this surmounts the mathematical difficulties in calculating determinant and trace derivatives with respect to K and L.
where \(\beta >0\) and \(\gamma >0\) are the learning ratios, and
That is, the partial derivatives are the gradients of \(J(s,\mathbf {K})\) and \(J(s,\mathbf {L})\) with respect to \(\mathbf {K}\) and \(\mathbf {L}\), evaluated at \(s=s^{(k)}\), \(\mathbf {K}=\mathbf {K}^{(k)}\) and \(\mathbf {L}=\mathbf {L}^{(k)}\) (the k-\(\mathrm {th}\) iterates of \(\mathbf {K}\) and \(\mathbf {L}\), respectively).
Since Problems 1 and 2 are substantially the same, we work out a general iterative learning algorithm for solving both of them. More precisely, we focus only on addressing Problem 2; the iterative learning algorithm for Problem 1 follows as a special case.
To this end, let us write J(s, K) in (9) as follows.
where \(Q(s)=[sI_n-A,\,B]\in \mathcal {C}^{n\times (n+m)}\) and
To understand the iterative learning algorithm for fixing \(\mathbf {K}\), we derive the partial derivative of \(J(s,\mathbf {K})\) with respect to \(\mathbf {K}\). We observe that
Similarly, we have the following algebraic relations.
where \(E_l\in \mathcal {R}^{(n+l)\times (n+l)}\) is similar to \(E_m\) but with \(I_m\) being replaced by \(I_l\); and
Now we turn to derive the derivative of \(J(s,\mathbf {L})\) with respect to \(\mathbf {L}\). We observe that
In summary, we claim the following pole assignment procedures according to (11) and (13). The procedures for (11) and (15) can be given similarly.
1. Set \(k=0\), \(s^{(0)}=s_0\), and initialize \(\mathbf {K}^{(0)}\in \mathcal {R}^{(n+m)\times n}\) with the top \(n\times n\) sub-matrix being \(I_n\) (resp., \(\mathbf {L}^{(0)}\in \mathcal {R}^{(n+m)\times (n+l)}\) with the (1,1)-sub-matrix being \(I_n\), and the (1,2)- and (2,1)-sub-matrices being zeros);
2. Calculate \(\left. \frac{\partial J(s,\mathbf {K})}{\partial \mathbf {K}}\right| _{s=s^{(k)},\mathbf {K}=\mathbf {K}^{(k)}}\) by (13) (resp., \(\left. \frac{\partial J(s,\mathbf {L})}{\partial \mathbf {L}}\right| _{s=s^{(k)},\mathbf {L}=\mathbf {L}^{(k)}}\) by (15));
3. If \(\det (Q(s^{(k)})\mathbf {K}^{(k)})=0\), so that \(Q(s^{(k)})\mathbf {K}^{(k)}\) is not invertible (resp., if \(\det (Q(s^{(k)})\mathbf {L}^{(k)}\mathbf {C})=0\), so that \(Q(s^{(k)})\mathbf {L}^{(k)}\mathbf {C}\) is not invertible), drop this \(s^{(k)}\) by letting \(s_k=s_{k+1}\) over \(k=k+1,\cdots ,\kappa -1\), setting \(\kappa =\kappa -1\), and returning to the previous step; otherwise, go forward to the next step;
4. Determine \(\mathbf {K}^{(k+1)}\) (resp., \(\mathbf {L}^{(k+1)}\)) according to (11), and let the top \(n\times n\) sub-matrix of \(\mathbf {K}^{(k+1)}\) be \(I_n\) (resp., let the (1,1)-sub-matrix of \(\mathbf {L}^{(k+1)}\) be \(I_n\), and the (1,2)- and (2,1)-sub-matrices of \(\mathbf {L}^{(k+1)}\) be zeros);
5. Test whether \(\Vert \mathbf {K}^{(k+1)}-\mathbf {K}^{(k)}\Vert \le \epsilon \) (resp., \(\Vert \mathbf {L}^{(k+1)}-\mathbf {L}^{(k)}\Vert \le \epsilon \)) is satisfied, where \(\epsilon >0\) is a sufficiently small error tolerance;
6. If yes, go to the next step; otherwise, set \(k=k+1\) and go back to Step 2;
7. Let \(\mathbf {K}=\mathbf {K}^{(k+1)}\) (resp., \(\mathbf {L}=\mathbf {L}^{(k+1)}\)) and end.
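The steps above can be sketched numerically. The following simplified Problem 1 version descends \(J(s,K)=|p(s,K)-\alpha (s)|^2\) over sweeps of the test points, using a central finite-difference gradient in place of the closed-form derivative (13) and the plain gain K in place of the augmented matrix \(\mathbf {K}\); the learning ratio, step tolerances and sweep-based stopping rule are illustrative assumptions.

```python
import numpy as np

def charpoly_at(A, B, K, s):
    """p(s, K) = det(sI - A + BK), the closed-loop characteristic polynomial."""
    n = A.shape[0]
    return np.linalg.det(s * np.eye(n) - A + B @ K)

def pointwise_assign(A, B, alpha, points, beta=0.02, tol=1e-9, max_sweeps=3000):
    """Sketch of the procedure for Problem 1: sweep through the test points,
    taking one gradient-descent step on J(s, K) = |p(s,K) - alpha(s)|^2 at
    each point.  A central finite-difference gradient stands in for the
    closed-form derivative of the paper."""
    n, m = B.shape
    K, h = np.zeros((m, n)), 1e-6
    for _ in range(max_sweeps):
        K_prev = K.copy()
        for s in points:
            grad = np.zeros_like(K)
            for i in range(m):
                for j in range(n):
                    Kp, Km = K.copy(), K.copy()
                    Kp[i, j] += h
                    Km[i, j] -= h
                    grad[i, j] = (abs(charpoly_at(A, B, Kp, s) - alpha(s)) ** 2
                                  - abs(charpoly_at(A, B, Km, s) - alpha(s)) ** 2) / (2 * h)
            K = K - beta * grad          # gradient-descent update (cf. Step 4)
        if np.linalg.norm(K - K_prev) <= tol:  # whole-sweep change test (cf. Step 5)
            break
    return K
```

On the double integrator \(\dot{x}=\begin{bmatrix}0&1\\0&0\end{bmatrix}x+\begin{bmatrix}0\\1\end{bmatrix}u\) with desired poles \(-1,-2\), this sketch recovers a gain close to \(K=[2,\;3]\).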
Remarks about the iterative learning pole assignment algorithm:
-
The obtained K (resp., L) is at most quasi-optimal, since the iterative learning algorithms are constructed in terms of \(\mathbf {K}\) or \(\mathbf {L}\), of which K and L are sub-matrices.
-
When determining the contour \( {\Omega _0}\), trial and error is inevitable. In principle, \( {\Omega _0}\) should include all the expected closed-loop eigenvalues in its interior; to improve the iterative learning efficiency, \( {\Omega _0}\) should be close to the expected eigenvalues.
-
In each iteration, the invertibility of \(Q(s)\mathbf {K}\) is assumed. Now we see that the invertibility is satisfied if the pair (A, B) is controllable. Although the iterative learning algorithm does not explicitly entail controllability of the concerned system, it is an underlying assumption for the algorithm to be numerically meaningful.
To see this, let us suppose that the pair (A, B) is uncontrollable. This implies that some eigenvalues of \(A-BK\) cannot be assigned as expected by choosing the feedback gain K. In other words, \(p(s,K)=\alpha (s)\) cannot be true no matter how K is fixed. This in turn says that J(s, K) defined in (9) cannot achieve its minimum.
-
The output feedback cannot assign all the closed-loop eigenvalues as expected in general, even if the system (1) is controllable and observable [1]. Bearing this in mind, the suggested algorithm for the output feedback case at most provides us with some partial pole assignment.
Now we claim the following results.
Proposition 1
Assume that the system (1) is controllable. Consider the iterative learning algorithm in the first equation of (11). If for each \(s\in \{s_0,s_1,\cdots ,s_\kappa \}\subset {\Omega _0}\) with \(\kappa >0\) sufficiently large, \(J(s,\mathbf {K}^{(k)})\rightarrow 0\) as \(k\rightarrow \infty \), then all the eigenvalues of \(A-BK^{(k)}\) are assigned to small neighborhoods around the roots of \(\alpha (s)=0\).
Proof of Proposition 1
Under the given assumptions, \(J(s,\mathbf {K}^{(k)})\rightarrow 0\) as \(k\rightarrow \infty \) implies that \(J(s,K^{(k)})\rightarrow 0\) as \(k\rightarrow \infty \). This can be interpreted as \(p(s,K^{(k)})\rightarrow \alpha (s)\). Then eigenvalue continuity with regard to polynomial coefficients yields the desired assertion. \(\square \)
Proposition 2
Assume that the system (1) is controllable and observable. Consider the iterative learning algorithm in the second equation of (11). If for each \(s\in \{s_0,s_1,\cdots ,s_\kappa \}\subset {\Omega _0}\) with \(\kappa >0\) sufficiently large, \(J(s,\mathbf {L}^{(k)})\rightarrow 0\) as \(k\rightarrow \infty \), then \(\min \{n,\mathrm {rank}\,B+\mathrm {rank}\,C-1\}\) eigenvalues of \(A-BL^{(k)}C\) can be assigned to small neighborhoods around the corresponding roots of \(\alpha (s)=0\).
4 Numerical issues about the iterative learning algorithms
When implementation of the iterative learning algorithms is concerned, there are numerical issues that need to be explicated carefully. This section collects such issues and gives solutions where available. It can be skipped by readers not interested in the numerical aspects of the iterative learning pole assignment algorithms.
4.1 Convergence and learning ratios
In the above, convergence of the suggested iterative algorithms is not examined, and the learning ratios are taken as constants. Since these problems are somewhat beyond the scope of this study, they are left for subsequent studies. It should be added that, to the best of the authors' knowledge, convergence of almost all iterative learning algorithms remains an open problem.
4.2 Contours, pole distribution and gain realness
For the obtained state or output feedback gain parametrization to make sense, the gains must be real matrices. After carefully reviewing the algorithms, it is not difficult to see that the derived feedback gains are real provided the following rules, which prevent complex feedback gains, are observed.
-
The contour must be symmetric with respect to the real axis, besides being simple and closed;
-
The specified poles must be real or occur in complex-conjugate pairs.
4.3 Alternative evaluation functions and summation iterative learning
Based on the error evaluation functions defined in (7), (8), (9) and (10), the iterative learning algorithms of (11) are developed. Carefully examining the defining formulas, one finds that the algorithms are actually implemented in a point-by-point fashion over each \(s^{(k)}\in \{s_0,s_1,\cdots ,s_{\kappa }\}\subset {\Omega _0}\). Our numerical simulations show that the iterative learning algorithms implemented in this way are sensitive to the distribution pattern and computational ordering of the points \(s^{(k)}\in \{s_0,s_1,\cdots ,s_{\kappa }\}\subset {\Omega _0}\).
To reduce computational sensitivity, we define the error evaluation functions by
which are summations of the corresponding error evaluation functions. Accordingly, the iterative learning algorithms can be given in the form of
In other words, the corresponding iterative learning algorithms in (16) are implemented with the error evaluations at all the testing points summed up in each iteration, and the iteration is repeated until the error evaluation summation is sufficiently small.
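A minimal sketch of the summation variant: the errors at all test points are summed before each gradient step, so no single point's ordering dominates. The finite-difference gradient and the step size are illustrative assumptions, standing in for the closed-form derivatives of (16).

```python
import numpy as np

def summed_error(A, B, K, pts, alpha):
    """J(K) = sum over all test points of |p(s_i, K) - alpha(s_i)|^2,
    so every point contributes to each iteration."""
    n = A.shape[0]
    return sum(abs(np.linalg.det(s * np.eye(n) - A + B @ K) - alpha(s)) ** 2
               for s in pts)

def summed_gradient_step(A, B, K, pts, alpha, beta=0.01, h=1e-6):
    """One iteration of the summation variant: a central-difference gradient
    of the summed error, followed by a single descent step."""
    grad = np.zeros_like(K)
    for i in range(K.shape[0]):
        for j in range(K.shape[1]):
            Kp, Km = K.copy(), K.copy()
            Kp[i, j] += h
            Km[i, j] -= h
            grad[i, j] = (summed_error(A, B, Kp, pts, alpha)
                          - summed_error(A, B, Km, pts, alpha)) / (2 * h)
    return K - beta * grad
```

Repeating `summed_gradient_step` until the summed error stalls gives the summation form of the algorithm; with a sufficiently small `beta`, each step decreases the summed error.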
5 Numerical illustrations
5.1 Example system description
Consider the example system of the reference [18]:
Clearly, \(n=5\), \(m=3\) and \(l=2\). The eigenvalues of A are \(\lambda _1=0.4009\), \(\lambda _{2,3}=-3.7004 \pm 1.1286j\) and \(\lambda _{4,5}=-1.5000 \pm 1.9365j\).
To facilitate our statements, let us write the PBH test matrix as follows.
It is easy to see that \(\mathrm{rank}\{Q(s)\}=5\) for any \(s\in \mathcal {C}\) so that by the Popov-Belevitch-Hautus criteria [2], the system is completely controllable.
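The PBH rank check quoted here is easy to automate. A small sketch follows; since the example system's A and B matrices are not reproduced in this text, the test below is stated for a generic pair.

```python
import numpy as np

def pbh_controllable(A, B):
    """PBH test: (A, B) is controllable iff rank [sI - A, B] = n at every
    eigenvalue s of A (the rank condition holds automatically elsewhere)."""
    n = A.shape[0]
    return all(
        np.linalg.matrix_rank(np.hstack((s * np.eye(n) - A, B))) == n
        for s in np.linalg.eigvals(A)
    )
```

Only the eigenvalues of A need checking, because \(sI-A\) alone already has rank n whenever s is not an eigenvalue.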
5.2 Numerical simulations with the proposed iterative learning algorithms
In what follows, only the state feedback cases are considered and the summation iterative learning algorithms in (16) are adopted for the sake of brevity. Throughout the numerical simulations, the learning tolerance error is \(\epsilon =0.0001\). The specified closed-loop eigenvalues are assigned at \(\lambda _1=-1\), \(\lambda _2=-1.5\), \(\lambda _3=-2\) and \(\lambda _{4,5}=-2.5\pm j\sqrt{3}\). That is, the expected characteristic polynomial is
The set \( {\Omega _0}\) consists of 36 equally spaced, symmetrically distributed isolated points on the circle of radius r centered at \((-2,j0)\).
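The contour samples of this section can be generated as follows; an even number of equally spaced points on a circle centered on the real axis is automatically symmetric about that axis, as required by the rules of Sect. 4.2.

```python
import numpy as np

def contour_points(center, r, count=36):
    """Equally spaced points on the circle |s - center| = r.  With a real
    center and an even count on this phase grid, the point set is symmetric
    about the real axis, one of the rules for keeping the learned gains real."""
    angles = 2 * np.pi * np.arange(count) / count
    return center + r * np.exp(1j * angles)

pts = contour_points(-2.0, 3.2)   # the Case 1/2 contour: r = 3.2, center (-2, j0)
```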
Case 1: \(r=3.2\), \(\beta =0.12\), with gain trace minimization. The obtained state gain is
and the closed-loop eigenvalues are
Case 2: \(r=3.2\), \(\beta =0.12\), without gain trace minimization. The obtained state gain is
and the closed-loop eigenvalues are
Case 3: \(r=3\), \(\beta =0.1\), with gain trace minimization. The obtained state gain is
and the closed-loop eigenvalues are
Case 4: \(r=3\), \(\beta =0.1\), without gain trace minimization. The obtained state gain is
and the closed-loop eigenvalues are
Case 5: \(r=2.8\), \(\beta =0.08\), with gain trace minimization. The obtained state gain is
and the closed-loop eigenvalues are
Case 6: \(r=2.8\), \(\beta =0.08\), without gain trace minimization. The obtained state gain is
and the closed-loop eigenvalues are
The set \( {\Omega _0}\) as well as the above numerical pole distributions are plotted in Fig. 1, where the asterisks stand for the closed-loop eigenvalues assigned with gain trace minimization, and the circles represent the ones assigned without gain trace minimization.
Based on Fig. 1, we can observe that
-
In each case, the iterative learning algorithm is convergent;
-
In general, the gain K obtained with gain trace minimization is slightly smaller than that obtained without gain trace minimization;
-
The choice of \( {\Omega _0}\) does have an effect on the gain, though its effect on the closed-loop eigenvalues may be negligibly small.
Case 7: \(r=3\), \(\gamma =0.15\), with gain trace minimization. The obtained output gain is
and the closed-loop eigenvalues are
Case 8: \(r=3\), \(\gamma =0.15\), without gain trace minimization. The obtained output gain is
and the closed-loop eigenvalues are
The points of the set \( {\Omega _0}\) as well as the above numerical pole distributions are plotted in Fig. 2. Compared to the numerical results of Fig. 1, it is clear that the output feedback pole assignment is equally efficient and accurate.
Case 9: \(r=3\), \(\beta =0.1\), with gain trace minimization, while K and A are randomly set. More precisely, the points of the set \( {\Omega _0}\) are the same as those in Fig. 2, and the learning ratio is \(\beta =0.1\). All the entries of K are randomly prescribed initially (namely, \(K_0\) is randomly given), while all entries of A are perturbed by white noise in the form of \(A+\Delta A\) with \(\Delta A\) being randomly given. The learning iteration number is fixed at 4000.
All the numerical poles are plotted in Fig. 3. Compared to the numerical results of Fig. 1, it is clear that the state feedback pole assignment is fairly robust with respect to the initial condition \(K_0\) of the state feedback gain K as well as the perturbation \(\Delta A\) in the state matrix A. In other words, the suggested pole assignment algorithm is numerically efficient and not sensitive to model uncertainties.
5.3 Numerical results with the conventional method
In what follows, only the state feedback case is considered, and the standard pole assignment algorithm in [29] is adopted to fix the feedback gain matrix. The specified closed-loop eigenvalues are assigned at the same positions as in Sect. 5.2; that is, \(\lambda _1=-1\), \(\lambda _2=-1.5\), \(\lambda _3=-2\) and \(\lambda _{4,5}=-2.5\pm j\sqrt{3}\). The obtained state feedback gain matrix is
which has a matrix norm larger than those obtained by the suggested algorithms.
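For reference, a conventional pole-placement routine such as SciPy's `place_poles` (a Tits-Yang/KNV-style method, not necessarily the algorithm of [29]) can produce a gain whose norm is then compared against the learned gains. The matrices below are a hypothetical stand-in, since the example system's A and B are not reproduced in this text.

```python
import numpy as np
from scipy.signal import place_poles

# Stand-in controllable pair for illustration only.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [-2., -3., -4.]])
B = np.array([[0., 0.],
              [1., 0.],
              [0., 1.]])
desired = [-1.0, -1.5, -2.0]

result = place_poles(A, B, desired)       # conventional pole placement
K_conv = result.gain_matrix
print("gain norm:", np.linalg.norm(K_conv))  # compare with the learned gains' norms
```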
6 Conclusion
In this article, pole assignment in linear dynamical systems is re-formulated and addressed by developing an iterative learning parametrization approach exploiting matrix and trace/determinant derivative concepts. In other words, the suggested design procedures are developed completely from a frequency-domain viewpoint. Numerical implementation of the algorithms is also explicated. The results obtained through numerical examples show how the approach works and clearly demonstrate its high efficiency.
The selection of learning ratios and the convergence of the suggested iterative algorithms remain interesting open problems for our future work; addressing them would hopefully improve the practicability and accuracy of the proposed algorithms.
References
Hespanha JP (2009) Linear systems theory. Princeton University Press, Princeton, NJ
Bernstein DS (2009) Matrix mathematics: theory, facts and formulas, 2nd edn. Princeton University Press, Princeton, NJ
Green M, Limebeer DJN (1995) Linear robust control. Prentice-Hall, Englewood Cliffs, NJ, pp 93–96
Haddad WM, Chellaboina V (2008) Nonlinear dynamical systems and control: a Lyapunov-based approach. Princeton University Press, Princeton, NJ
Khalil H (2000) Nonlinear systems, 3rd edn. Pearson Educational International Inc., New Jersey
Zhou K, Doyle JC (1998) Essentials of Robust control. Prentice Hall, New York
Konigorski U (2012) Pole placement by parametric output feedback. Syst Control Lett 61(2):292–297
Mammadov K (2021) Pole placement parametrisation for full-state feedback with minimal dimensionality and range. Int J Control 94(2):382–389
Peretz Y (2017) On parametrization of all the exact pole-assignment state-feedbacks for LTI systems. IEEE Trans Autom Control 62(7):3436–3441
Schmid R, Pandey A, Nguyen T (2014) Robust pole placement with Moore’s algorithm. IEEE Trans Autom Control 59(2):500–505
Schmid R, Ntogramatzidis L, Nguyen T, Pandey A (2014) A unified method for optimal arbitrary pole placement. Automatica 50(8):2150–2154
Schmid R, Ntogramatzidis L, Nguyen T (2016) Arbitrary pole placement with the extended Kautsky–Nichols–van Dooren parametric form. Int J Control 89(7):1359–1366
Wang AX, Konigorski U (2013) On linear solutions of the output feedback pole assignment problems. IEEE Trans Autom Control 58(9):2354–2359
Abdelaziz THS (2013) Robust pole placement for second-order linear systems using velocity-plus-acceleration feedback. IET Control Theor Appl 7(14):1843–1856
Argha A, Su SW, Savkin A, Celler BG (2018) Mixed \(H_2/H_\infty \)-based actuator selection for uncertain polytopic systems with regional pole placement. Int J Control 91(1–3):320–336
dos Santos JFS, Pellanda PC, Simoes AM (2018) Robust pole placement under structural constraints. Syst Control Lett 116:8–14
Lin ZW, Liu JZ, Zhang WH, Niu YG Regional pole placement of wind turbine generator systems via a Markovian approach. IET Control Theory Appl 10(15):1771–1781
Sahoo PR, Goyal JK, Ghosh S, Naskar AK (2019) New results on restricted static output feedback \(H_\infty \) controller design with regional pole placement. IET Control Theor Appl 13(8):1095–1104
Guo ZC, Cai YF, Qian J, Xu SF (2015) A modified Schur method for robust pole assignment in state feedback control. Automatica 52:334–339
Guo ZC, Qian J, Cai YF, Xu SF (2016) Refined Schur method for robust pole assignment with repeated poles. IEEE Trans Autom Control 61(9):2370–2385
Shen DM, Wei MS (2015) The pole assignment for the regular triangular decoupling problem. Automatica 53:208–215
Wei MS, Wang Q, Cheng XH (2010) Some new results for system decoupling and pole assignment problems. Automatica 46(5):937–944
Babiarz A, Banshchikova I, Czornik A, Makarov EK, Niezabitowski M, Popova S (2018) Necessary and sufficient conditions for assignability of the Lyapunov spectrum of discrete linear time-varying systems. IEEE Trans Autom Control 63(11):3825–3837
Cai YF, Qian J, Xu SF (2012) Robust partial pole assignment problem for high order control systems. Automatica 48(7):1462–1466
Dao MN, Noll D, Apkarian P (2015) Robust eigenstructure clustering by non-smooth optimization. Int J Control 88(8):1441–1455
Lavaei J, Sojoudi S, Aghdam AG (2010) Pole assignment with improved control performance by means of periodic feedback. IEEE Trans Autom Control 55(1):248–252
Rosenwasser EN, Lampe BP, Drewelow W, Jeinsch T (2019) Causal polynomial pole assignment and stabilization of SISO strictly causal linear discrete processes. Int J Control 92(6):1306–1313
Tomas-Rodriguez M, Banks SP (2013) An iterative approach to eigenvalue assignment for nonlinear systems. Int J Control 86(5):883–892
Ogata K (2011) Modern control engineering, 5th edn. Publishing House of Electronics Industry, Beijing
Acknowledgements
The study is supported by the National Natural Science Foundation of China under Grant No. 61573001.
Ethics declarations
Conflicts of interest
The authors declare that they have no conflict of interest.
Zhou, J., Yan, T. Iterative learning parametrization for pointwise pole assignment in linear dynamical systems. Int. J. Dynam. Control 10, 194–202 (2022). https://doi.org/10.1007/s40435-021-00800-9