Abstract
Robust control theory has introduced several new and challenging problems for researchers. Some of these problems have been solved by innovative approaches, leading to the development of new and efficient algorithms. Others have attracted a significant amount of research, yet none of the proposed algorithms are efficient, that is, none has execution time bounded by a polynomial in the “problem size.” Several important problems in robust control theory are either of decision type or of computation/approximation type, and one would like an algorithm which can answer all or most of the possible cases and can be executed on a classical computer in a reasonable amount of time. The theory of computation, a branch of theoretical computer science, can be used to study the difficulty of problems in robust control theory. In the following, the notions of classical computer system, algorithm, efficient algorithm, unsolvability, tractability, NP-hardness, and NP-completeness are introduced in a more rigorous fashion, with applications to problems from robust control theory.
Introduction
The term algorithm is used to refer to different notions, all somewhat consistent with our intuitive understanding. This ambiguity can generate significant confusion, and therefore a rigorous definition is of extreme importance. One commonly accepted “intuitive” definition is a set of rules that a person can perform with paper and pencil. However, there are “algorithms” which involve random number generation, for example, finding a primitive root in \(\mathbb{Z}_{p}\) (Knuth 1997). Based on this observation, one may ask whether a set of rules based on random number generation should also be considered an algorithm, provided that it terminates after finitely many steps for all instances of the problem, or for a significant majority of cases. Similarly, one may ask whether any real number, including irrational ones which cannot be represented on a digital computer without approximation error, should be allowed as input to an algorithm, and furthermore, whether all calculations should be limited to algebraic functions only, or whether exact evaluation of non-algebraic functions, e.g., trigonometric functions, the gamma function, etc., should be acceptable in an algorithm. Although all of these seem acceptable with respect to our intuitive understanding of an algorithm, from a rigorous point of view they are different notions. In the context of robust control theory, as in many other engineering disciplines, there is a separate and widely accepted definition of algorithm, based on today’s digital computers, or more precisely the Turing machine (Turing 1936). Alan M. Turing defined a “hypothetical computation machine” to formally define the notions of algorithm and computability. A Turing machine is, in principle, quite similar to the digital computers widely used in many engineering applications.
The engineering community seems to widely accept the use of current digital computers and Turing’s definitions of algorithm and computability.
However, depending on new scientific, engineering, and technological developments, superior computation machines may be constructed. For example, there is no guarantee that quantum computing research will not lead to superior computation machines (Chen et al. 2006; Kaye et al. 2007). In the future, the engineering community may feel the need to revise formal definitions of algorithm, computability, tractability, etc., if such superior computation machines can be constructed and used for scientific/engineering applications.
Turing Machines and Unsolvability
A Turing machine is basically a mathematical model of a simplified computation device. The original definition involves a tape-like device for memory. For an easy-to-read introduction to the Turing machine model, see Garey and Johnson (1979) and Papadimitriou (1995); for more details, see Hopcroft et al. (2001), Lewis and Papadimitriou (1998), and Sipser (2006). Despite being quite simple, low-performance “hardware” by today’s engineering standards, two observations justify its use in the study of computational complexity of engineering problems. First, anything which can be solved on today’s digital computers can be solved on a Turing machine. Second, a polynomial-time algorithm on today’s digital computers corresponds to a polynomial-time algorithm on the original Turing machine, and vice versa. A widely accepted definition of an algorithm is a Turing machine with a program which is guaranteed to terminate after finitely many steps.
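The Turing machine model just described can be sketched in a few lines of code. The following is an illustrative Python sketch; the transition-table encoding, the function name, and the example machine are our own simplifications, not part of the original formalism:

```python
# A minimal single-tape Turing machine simulator (illustrative sketch).
# The transition table maps (state, symbol) -> (new_state, write_symbol, move).

def run_turing_machine(transitions, tape, state="q0", blank="_", max_steps=1000):
    """Run until an accepting/rejecting halt state or the step budget is hit."""
    tape = dict(enumerate(tape))  # sparse tape indexed by integer positions
    head = 0
    for _ in range(max_steps):
        if state in ("accept", "reject"):
            return state
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "no-halt"  # budget exhausted; in general, halting is undecidable

# Example machine: accept strings over {0,1} that contain at least one '1'.
find_one = {
    ("q0", "0"): ("q0", "0", "R"),
    ("q0", "1"): ("accept", "1", "R"),
    ("q0", "_"): ("reject", "_", "R"),
}

print(run_turing_machine(find_one, "0001"))  # -> accept
print(run_turing_machine(find_one, "0000"))  # -> reject
```

The `max_steps` cap is a practical concession: a true Turing machine may run forever, and deciding whether it will is itself unsolvable.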
For some mathematical and engineering problems, it can be shown that there can be no algorithm which can handle all possible cases. Such problems are called unsolvable. The condition “all cases” may be considered too tough, and one may argue that such negative results have only theoretical importance and have no practical implications. But such results do imply that we should concentrate our efforts on alternative research directions, like the development of algorithms only for cases which appear more frequently in real scientific/engineering applications, without asking the algorithm to work for the remaining cases as well.
Here is a famous unsolvable mathematical problem: Hilbert’s tenth problem asks for an algorithm to test whether a Diophantine equation has an integer solution. In 1970, Matiyasevich showed that there can be no such algorithm (Matiyasevich 1993). Therefore, we say that the problem of checking whether a Diophantine equation has an integer solution is unsolvable.
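The problem illustrates the gap between searching and deciding: enumerating integer tuples finds a solution whenever one exists, but Matiyasevich’s theorem rules out any procedure that also certifies non-existence for every polynomial. A hedged sketch of such a semi-decision search (the helper name and the radius cap are illustrative assumptions, not a standard routine):

```python
from itertools import count, product

def find_integer_solution(p, n_vars, max_radius=None):
    """Search integer tuples by increasing sup-norm radius.  Halts iff a
    solution is found (or the optional radius cap is reached).  This is only
    a semi-decision procedure: by Matiyasevich's theorem, no algorithm can
    also certify for every polynomial p that *no* integer solution exists."""
    radii = count(0) if max_radius is None else range(max_radius + 1)
    for r in radii:
        for x in product(range(-r, r + 1), repeat=n_vars):
            # visit each tuple once, at the radius equal to its sup-norm
            if max(abs(v) for v in x) == r and p(*x) == 0:
                return x
    return None

# x^2 + y^2 - 25 = 0 has integer solutions; the search finds one, e.g. (-4, -3)
print(find_integer_solution(lambda x, y: x * x + y * y - 25, 2))
# x^2 + y^2 + 1 = 0 has none; we must cap the search to guarantee termination
print(find_integer_solution(lambda x, y: x * x + y * y + 1, 2, max_radius=10))
```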
Several unsolvability results for dynamical systems can be proved by using the Post correspondence problem (Davis 1985) and the embedding of free semigroups into matrices. For example, the problem of checking the stability of saturated linear dynamical systems is proved to be undecidable (Blondel et al. 2001), meaning that no general stability test algorithm can be developed for such systems. A similar unsolvability result is reported in Blondel and Tsitsiklis (2000a) for boundedness of switching systems of the type

\(x(k + 1) = A_{f(k)}x(k),\quad k = 0,1,2,\ldots,\)
where f is assumed to be an arbitrary and unknown function from \(\mathbb{N}\) into {0, 1}. A closely related asymptotic stability problem is equivalent to testing whether the joint spectral radius (JSR) (Rota and Strang 1960) of a set of matrices is less than one. For quite a long time there was a conjecture, the finiteness conjecture (FC) (Lagarias and Wang 1995), which at least a group of researchers believed or hoped to be true. FC may be interpreted as “For asymptotic stability of \(x(k + 1) = A_{f(k)}x(k)\) type switching systems, it is enough to consider periodic switchings only.” No counterexample was known, and the truth of this conjecture would imply the existence of an algorithm for the abovementioned JSR problem. However, it was shown in Bousch and Mairesse (2002) that FC is not true (see Blondel et al. (2003) for a simplified proof). There are numerous computationally valuable procedures for JSR approximation; see, for example, Blondel and Nesterov (2005) and references therein. However, the development of an algorithm to test whether the JSR is less than one remains an open problem.
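Although deciding whether the JSR is less than one remains open, well-known two-sided bounds can be computed from finite products: for any length k and any submultiplicative norm, max over words w of ρ(A_w)^{1/k} ≤ JSR ≤ max over words w of ‖A_w‖^{1/k}. A minimal pure-Python sketch for a pair of 2×2 matrices (the example matrices are our own illustration):

```python
import cmath
from itertools import product as words

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def spectral_radius(A):
    # eigenvalues of a 2x2 matrix via the quadratic formula
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))

def frobenius(A):
    # Frobenius norm is submultiplicative, so it yields a valid upper bound
    return sum(x * x for row in A for x in row) ** 0.5

def jsr_bounds(mats, k):
    """Bounds from all length-k products:
       max_w rho(A_w)^(1/k)  <=  JSR  <=  max_w ||A_w||_F^(1/k)."""
    lo = hi = 0.0
    for w in words(mats, repeat=k):
        P = w[0]
        for M in w[1:]:
            P = matmul(P, M)
        lo = max(lo, spectral_radius(P) ** (1 / k))
        hi = max(hi, frobenius(P) ** (1 / k))
    return lo, hi

A0 = [[0.6, 0.4], [0.0, 0.5]]
A1 = [[0.5, 0.0], [0.3, 0.6]]
for k in (1, 2, 4, 8):
    print(k, jsr_bounds([A0, A1], k))
```

The number of length-k products grows exponentially, which hints at why such bounds do not directly give a decision algorithm.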
For further results on unsolvability and unsolved problems in robust control, see Blondel et al. (1999), Blondel and Megretski (2004), and references therein.
Tractability, NP-Hardness, and NP-Completeness
The engineering community is interested not only in solution algorithms but in algorithms which are fast even in the worst case, or failing that, at least on the average. Sometimes this speed requirement may be relaxed to being fast for most cases, and sometimes to only a significant percentage of cases. Currently, the theory of computation is developed around the idea of algorithms which are polynomial time even in the worst case, namely, with execution time bounded by a polynomial in the problem size (Garey and Johnson 1979; Papadimitriou 1995). Such algorithms are also called efficient, and the associated problems are classified as tractable. The term problem size means the number of bits used in a suitable encoding of the problem (Garey and Johnson 1979; Papadimitriou 1995).
One may argue that this worst-case requirement of being always polynomial time is quite conservative. In practice, an engineer may consider being polynomial time on the average quite satisfactory for many applications, and the same may be true for algorithms which are polynomial time for most cases. However, the existing computational complexity theory is developed around being polynomial time even in the worst case. Therefore, many of the computational complexity results proved in the literature do not rule out algorithms which are polynomial time on the average, or polynomial time for most cases. Note that despite not being efficient in the formal sense, such algorithms may be considered quite valuable by a practicing engineer. Tractability and efficiency can be defined in several different ways, but the abovementioned worst-case polynomial-time approach is the one widely adopted by the engineering community.
NP-hardness and NP-completeness were originally defined to express the inherent difficulty of decision-type problems, not approximation-type problems. Although approximation complexity is an important and active research area in the theory of computation (Papadimitriou 1995), most of the classical results are on decision-type problems. Many robust control problems can be formulated as “Check whether γ < 1,” which is a decision-type problem. An approximate value of γ may not always be good enough to “solve” the problem, i.e., to decide about robust stability. For other engineering applications, in which approximate values of optimization problems are good enough, the complexity of the decision problem may not be very relevant. For example, in a minimum effort control problem, there may be no point in computing the exact value of the minimum, because good approximations will be fine for most cases. However, for a robust control problem, a result like γ = 0.99 ± 0.02 may not be enough to decide about robust stability, although the approximation error is only about 2 %. Both the conservativeness of the current tractability definition and the differences between decision- and approximation-type problems should always be kept in mind when interpreting computational complexity results reported here as well as in the literature.
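The point can be made concrete: an approximation of γ settles the question “is γ < 1?” only when its error interval excludes 1. A small illustrative helper (the function name and return convention are our own):

```python
def decide_robust_stability(gamma_estimate, error_bound):
    """Try to answer 'is gamma < 1?' from an approximation.
    Returns True/False when the error interval excludes 1, else None."""
    if gamma_estimate + error_bound < 1:
        return True    # certainly gamma < 1: robust stability confirmed
    if gamma_estimate - error_bound >= 1:
        return False   # certainly gamma >= 1
    return None        # approximation too coarse to decide

print(decide_robust_stability(0.99, 0.02))  # None: a 2% error cannot settle it
print(decide_robust_stability(0.50, 0.02))  # True
print(decide_robust_stability(1.10, 0.02))  # False
```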
In this subsection and the next one, we consider decision problems only. The class P corresponds to decision problems which can be solved by a Turing machine with a suitable program in polynomial time (Garey and Johnson 1979). This is interpreted as the class of decision problems which have polynomial-time solution algorithms. The definition of the class NP is more technical and involves nondeterministic Turing machines (Garey and Johnson 1979). It may be interpreted as the class of decision problems for which a “yes” answer can be verified in polynomial time, given a suitable certificate. It is currently unknown whether P and NP are equal. This is a major open problem, and its importance in the theory of computation is comparable to that of the Riemann hypothesis in number theory.
A problem is NP-complete if it is in NP and every problem in NP polynomially reduces to it (Garey and Johnson 1979). For an NP-complete problem, being in P is equivalent to P = NP. There are literally hundreds of such problems, and it is generally argued that since, after many years of research, nobody has been able to develop a polynomial-time algorithm for any NP-complete problem, there is probably no such algorithm, and most likely P ≠ NP. Although current evidence points toward P ≠ NP, this does not constitute a formal proof, and the history of mathematics and science is full of surprising discoveries.
A problem (not necessarily in NP) is called NP-hard if and only if there is an NP-complete problem which is polynomial-time reducible to it (Garey and Johnson 1979). Being NP-hard is sometimes called being intractable and means that unless P = NP, which is considered very unlikely by a group of researchers, no polynomial-time solution algorithm can be developed. All NP-complete problems are also NP-hard, but they are only as “hard” as any other problem in the set of NP-complete problems.
The first known NP-complete problem is SATISFIABILITY (Cook 1971): given a Boolean expression in several variables, test whether there is an assignment to these variables which makes the expression true. This important discovery enabled proofs of NP-completeness or NP-hardness of several other problems by simple polynomial reduction techniques (Garey and Johnson 1979). Among these, quadratic programming is an important one and led to the discovery of several other complexity results in robust control theory. The quadratic programming (QP) problem can be defined as

\(q =\min \{ x^{T}Qx + c^{T}x\,:\, Ax \leq b\},\)

more precisely, testing whether q < 1 or not (decision version). When the matrix Q is positive semidefinite, convex optimization techniques can be used; however, the general (indefinite) version of the problem is NP-hard.
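The exhaustive algorithm for SATISFIABILITY checks all 2^n assignments, which is exponential in the number of variables; by contrast, a single satisfying assignment is a certificate that can be verified in polynomial time, which is exactly what membership in NP requires. An illustrative brute-force sketch (the formula encoding is our own convention):

```python
from itertools import product

def brute_force_sat(formula, n_vars):
    """Exhaustive SATISFIABILITY check: tries all 2^n assignments, so the
    running time is exponential in n.  'formula' takes a tuple of booleans
    and returns True/False."""
    for assignment in product([False, True], repeat=n_vars):
        if formula(assignment):
            return assignment  # a satisfying assignment: a short certificate
                               # that can be checked in polynomial time
    return None

# (x1 or not x2) and (not x1 or x3) and x2
f = lambda v: (v[0] or not v[1]) and (not v[0] or v[2]) and v[1]
print(brute_force_sat(f, 3))  # -> (True, True, True)
```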
A related problem is linear programming (LP),

\(\min \{ c^{T}x\,:\, Ax \leq b\},\)
which is used in certain robust control problems (Dahleh and Diaz-Bobillo 1994) and has a quite interesting status. The simplex method (Dantzig 1963) is a very popular computational technique for LP and is known to have polynomial-time complexity on the “average” (Smale 1983). However, there are examples where the simplex method requires a number of steps that grows exponentially with the problem size (Klee and Minty 1972). In 1979, Khachiyan proposed the ellipsoid algorithm, the first algorithm proved to solve LP in polynomial time (Schrijver 1998). Because of the nature of the problem, the decision version of LP can be answered in polynomial time by running the ellipsoid algorithm and stopping when the approximation error is below a suitable level. But all of these results are for standard Turing machines with input parameters restricted to rational numbers. An interesting open problem is whether LP admits a polynomial-time algorithm in the real number model of computation.
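For intuition, the optimum of a bounded, feasible LP is attained at a vertex of the feasible polyhedron. The sketch below solves tiny two-dimensional instances by enumerating candidate vertices; unlike the ellipsoid or interior-point methods, this enumeration does not scale to higher dimensions, so it is illustration only (function name and example are our own):

```python
from itertools import combinations

def lp2d_vertex_enum(c, A, b, tol=1e-9):
    """Solve min c.x s.t. A x <= b in R^2 by checking every vertex of the
    feasible polygon.  Assumes the feasible region is nonempty and bounded."""
    best = None
    for i, j in combinations(range(len(A)), 2):
        (a11, a12), (a21, a22) = A[i], A[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < tol:
            continue  # parallel constraint pair: no vertex
        # Cramer's rule for the intersection of constraints i and j
        x = ((b[i] * a22 - a12 * b[j]) / det,
             (a11 * b[j] - b[i] * a21) / det)
        if all(A[k][0] * x[0] + A[k][1] * x[1] <= b[k] + tol
               for k in range(len(A))):
            val = c[0] * x[0] + c[1] * x[1]
            if best is None or val < best[0]:
                best = (val, x)
    return best

# min -x - y  s.t.  x <= 2, y <= 3, -x <= 0, -y <= 0  (a 2-by-3 box)
print(lp2d_vertex_enum([-1, -1], [[1, 0], [0, 1], [-1, 0], [0, -1]], [2, 3, 0, 0]))
# -> (-5.0, (2.0, 3.0))
```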
Complexity of Certain Robust Control Problems
There are several computational complexity results for robust control problems (see Blondel and Tsitsiklis (2000b) for a more detailed survey). Here we summarize some of the key results on interval matrices and structured singular values.
Kharitonov’s theorem concerns robust Hurwitz stability of polynomials whose coefficients are restricted to intervals (Kharitonov 1978). That problem has a surprisingly simple solution; the matrix version, however, has a quite different nature. A matrix family of the form

\(\mathcal{A} =\{ A \in \mathbb{R}^{n\times n}\,:\,\alpha _{i,j} \leq a_{i,j} \leq \beta _{i,j}\},\)

where αi,j, βi,j are given constants for i, j = 1, …, n, is called an interval matrix. Such matrices occur in descriptions of uncertain dynamical systems. The following two stability problems about interval matrices are known to be NP-hard:
- Interval Matrix Problem 1 (IMP1): Decide whether a given interval matrix, \(\mathcal{A}\), is robustly Hurwitz stable or not. Namely, check whether all members of \(\mathcal{A}\) are Hurwitz-stable matrices, i.e., all eigenvalues are in the open left half plane.
- Interval Matrix Problem 2 (IMP2): Decide whether a given interval matrix, \(\mathcal{A}\), contains a Hurwitz-stable matrix or not. Namely, check whether there exists at least one matrix in \(\mathcal{A}\) which is Hurwitz stable.
For a proof of NP-hardness of IMP1, see Poljak and Rohn (1993) and Nemirovskii (1993), and for a proof of IMP2, see Blondel and Tsitsiklis (1997).
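A natural but insufficient heuristic for IMP1 is to test only the vertex matrices of the family (every entry fixed at αi,j or βi,j). There are already 2^(n²) vertices, and vertex stability does not in general imply stability of the whole family, which is consistent with the NP-hardness above. An illustrative 2×2 sketch using the trace/determinant Hurwitz test (example intervals are our own):

```python
from itertools import product

def hurwitz_2x2(A):
    """A 2x2 real matrix is Hurwitz iff trace < 0 and det > 0."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return tr < 0 and det > 0

def vertices_all_hurwitz(lo, hi):
    """Check every vertex matrix of the interval family.  NOTE: this is a
    necessary condition only -- for general n, stability of all vertices
    does NOT imply stability of the whole interval family, and the number
    of vertices grows as 2^(n^2)."""
    entries = [(lo[i][j], hi[i][j]) for i in range(2) for j in range(2)]
    for choice in product(*entries):
        A = [[choice[0], choice[1]], [choice[2], choice[3]]]
        if not hurwitz_2x2(A):
            return False, A  # a counterexample vertex
    return True, None

lo = [[-3.0, -0.5], [0.2, -2.0]]
hi = [[-2.0,  0.5], [0.4, -1.0]]
print(vertices_all_hurwitz(lo, hi))  # all 16 vertices are Hurwitz here
```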
Another important problem is related to structured singular values (SSV) and linear fractional transformations (LFT), which are mainly used to study systems which have component-level uncertainties (Packard and Doyle 1993). Basically, bounds on the component-level uncertainties are given, and we would like to check whether the system is robustly stable or not. This is known to be NP-hard.
- Structured Singular Value Problem (SSVP): :
-
Given a matrix M and uncertainty structure \(\boldsymbol{\Delta }\), check whether the structured singular value \(\mu _{\boldsymbol{\Delta }}(M) < 1.\)
This is proved to be NP-hard for real and mixed uncertainty structures (Braatz et al. 1994), as well as for complex uncertainties with no repetitions (Toker and Ozbay 1996, 1998).
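For complex uncertainty structures, a standard lower bound comes from the fact that \(\mu _{\boldsymbol{\Delta }}(M) \geq \rho (\Delta M)\) for any admissible Δ with unit-modulus blocks; maximizing over the phases is the hard, nonconvex part, but grid sampling yields a valid (possibly loose) lower bound. An illustrative pure-Python sketch for a 2×2 matrix with two scalar complex uncertainty blocks (the example matrix and grid size are our own assumptions):

```python
import cmath
from itertools import product
from math import pi

def spec_rad_2x2(M):
    # spectral radius of a 2x2 complex matrix via the quadratic formula
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    d = cmath.sqrt(tr * tr - 4 * det)
    return max(abs((tr + d) / 2), abs((tr - d) / 2))

def mu_lower_bound(M, grid=16):
    """Lower bound on the complex structured singular value for the
    structure Delta = diag(d1, d2), |di| <= 1.  Since mu(M) >= rho(Delta M)
    for every unit-modulus Delta, sampling the phases on a grid gives a
    valid lower bound; the exact maximization is nonconvex."""
    best = 0.0
    angles = [2 * pi * k / grid for k in range(grid)]
    for t1, t2 in product(angles, repeat=2):
        d1, d2 = cmath.exp(1j * t1), cmath.exp(1j * t2)
        DM = [[d1 * M[0][0], d1 * M[0][1]],
              [d2 * M[1][0], d2 * M[1][1]]]
        best = max(best, spec_rad_2x2(DM))
    return best

M = [[0.5, 0.8], [0.8, 0.5]]
# For this symmetric M, rho(M) = sigma_max(M) = 1.3, so the bound is tight.
print(mu_lower_bound(M))
```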
Approximation Complexity
The decision version of QP is NP-hard, but approximating QP is quite “difficult” as well. An approximation is called a μ-approximation if the absolute value of the error is bounded by μ times the absolute value of the difference between the maximum and minimum of the function. The following is a classical result on QP (Bellare and Rogaway 1995): unless P = NP, QP does not admit a polynomial-time μ-approximation algorithm, even for μ < 1∕3. This is sometimes informally stated as “QP is NP-hard to approximate.” Much work is needed toward similar results on the robustness margin and related optimization problems of classical robust control theory.
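The μ-approximation definition can be stated directly in code; the helper below simply checks the error bound against the spread of the objective (names and numbers are illustrative):

```python
def is_mu_approximation(approx, exact, f_max, f_min, mu):
    """True iff |approx - exact| <= mu * |f_max - f_min|, i.e., 'approx' is
    a mu-approximation of the optimum 'exact' for an objective whose values
    over the feasible set span [f_min, f_max]."""
    return abs(approx - exact) <= mu * abs(f_max - f_min)

# exact optimum -2 on a problem whose objective spans [-2, 4]:
print(is_mu_approximation(-1.0, -2.0, 4.0, -2.0, 1 / 3))  # error 1 <= 6/3 = 2
print(is_mu_approximation(1.5, -2.0, 4.0, -2.0, 1 / 3))   # error 3.5 > 2
```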
An interesting case is the complex structured singular value computation with no repeated uncertainties. There is a convex relaxation of the problem, the standard upper bound \(\overline{\mu }\), which is known to yield quite tight approximations for most instances of the original problem (Packard and Doyle 1993). However, despite strong numerical evidence, a formal proof of this “good approximation for most cases” behavior is not available. We also have little theoretical information about how hard it is to approximate the complex structured singular value; for example, it is not known whether it admits a polynomial-time approximation algorithm with error bounded by, say, 5 %. In summary, much work remains to be done in these directions for many other robust control problems whose decision versions are NP-hard.
Summary and Future Directions
The study of the “Is P ≠ NP?” question has turned out to be quite difficult. Researchers agree that genuinely new and innovative tools are needed to study this problem. At the other extreme, one can ask whether anything about this question can be settled within Zermelo-Fraenkel (ZF) set theory, or whether it will have a status similar to the axiom of choice (AC) and the continuum hypothesis (CH), which can neither be proved nor refuted (Aaronson 1995). The question may thus be much deeper than expected, and the standard axioms of today’s mathematics may not be enough to provide an answer. As with any such problem, one can still hope that new “self-evident” axioms will be discovered in the future and, with their help, an answer may be provided.
All of the complexity results mentioned here are with respect to the standard Turing machine which is a simplified model of today’s digital computers. Depending on the progress in science, engineering, and technology, if superior computation machines can be constructed, then some of the abovementioned problems can be solved much faster on these devices, and current results/problems of the theory of computation may no longer be of great importance or relevance for engineering and scientific applications. In such a case, one may also need to revise definitions of the terms algorithm, tractable, etc., according to these new devices.
Currently, there are several NP-hardness results about robust control problems, mostly NP-hardness of decision problems about robustness. However, much work is needed on the approximation complexity and on the conservatism of various convex relaxations of these problems. Even if a robust stability problem is NP-hard, a polynomial-time algorithm to estimate the robustness margin with, say, 5 % error is not ruled out by the NP-hardness of the decision version of the problem. Indeed, a polynomial-time, 5 % error-bounded algorithm would be of great importance to practicing engineers. Therefore, such directions should also be studied, and meaningful alternative notions, like being polynomial time on the average or for most cases, or anything else which makes sense for a practicing engineer, should be considered.
In summary, computational complexity theory guides research on the development of algorithms, indicating which directions are dead ends and which directions are worth investigating.
Bibliography
Aaronson S (1995) Is P versus NP formally independent? Technical report 81, EATCS
Bellare M, Rogaway P (1995) The complexity of approximating a nonlinear program. Math Program 69:429–441
Blondel VD, Megretski A (2004) Unsolved problems in mathematical systems and control theory. Princeton University Press, Princeton
Blondel VD, Nesterov Y (2005) Computationally efficient approximations of the joint spectral radius. SIAM J Matrix Anal 27:256–272
Blondel VD, Tsitsiklis JN (1997) NP-hardness of some linear control design problems. SIAM J Control Optim 35:2118–2127
Blondel VD, Tsitsiklis JN (2000a) The boundedness of all products of a pair of matrices is undecidable. Syst Control Lett 41:135–140
Blondel VD, Tsitsiklis JN (2000b) A survey of computational complexity results in systems and control. Automatica 36:1249–1274
Blondel VD, Sontag ED, Vidyasagar M, Willems JC (1999) Open problems in mathematical systems and control theory. Springer, London
Blondel VD, Bournez O, Koiran P, Tsitsiklis JN (2001) The stability of saturated linear dynamical systems is undecidable. J Comput Syst Sci 62:442–462
Blondel VD, Theys J, Vladimirov AA (2003) An elementary counterexample to the finiteness conjecture. SIAM J Matrix Anal 24:963–970
Bousch T, Mairesse J (2002) Asymptotic height optimization for topical IFS, Tetris heaps and the finiteness conjecture. J Am Math Soc 15:77–111
Braatz R, Young P, Doyle J, Morari M (1994) Computational complexity of μ calculation. IEEE Trans Autom Control 39:1000–1002
Chen G, Church DA, Englert BG, Henkel C, Rohwedder B, Scully MO, Zubairy MS (2006) Quantum computing devices. Chapman and Hall/CRC, Boca Raton
Cook S (1971) The complexity of theorem proving procedures. In: Proceedings of the third annual ACM symposium on theory of computing, Shaker Heights, pp 151–158
Dahleh MA, Diaz-Bobillo I (1994) Control of uncertain systems. Prentice Hall, Englewood Cliffs
Dantzig G (1963) Linear programming and extensions. Princeton University Press, Princeton
Davis M (1985) Computability and unsolvability. Dover, New York
Garey MR, Johnson DS (1979) Computers and intractability, a guide to the theory of NP-completeness. W. H. Freeman, San Francisco
Hopcroft JE, Motwani R, Ullman JD (2001) Introduction to automata theory, languages, and computation. Addison Wesley, Boston
Kaye P, Laflamme R, Mosca M (2007) An introduction to quantum computing. Oxford University Press, Oxford
Kharitonov VL (1978) Asymptotic stability of an equilibrium position of a family of systems of linear differential equations. Differentsial’nye Uravneniya 14:2086–2088
Klee V, Minty GJ (1972) How good is the simplex algorithm? In: Inequalities III (proceedings of the third symposium on inequalities), Los Angeles. Academic, New York/London, pp 159–175
Knuth DE (1997) Art of computer programming, volume 2: seminumerical algorithms, 3rd edn. Addison-Wesley, Reading
Lagarias JC, Wang Y (1995) The finiteness conjecture for the generalized spectral radius of a set of matrices. Linear Algebra Appl 214:17–42
Lewis HR, Papadimitriou CH (1998) Elements of the theory of computation. Prentice Hall, Upper Saddle River
Matiyasevich YV (1993) Hilbert’s tenth problem. MIT, Cambridge
Nemirovskii A (1993) Several NP-hard problems arising in robust stability analysis. Math Control Signals Syst 6:99–105
Packard A, Doyle J (1993) The complex structured singular value. Automatica 29:71–109
Papadimitriou CH (1995) Computational complexity. Addison-Wesley/Longman, Reading
Poljak S, Rohn J (1993) Checking robust nonsingularity is NP-hard. Math Control Signals Syst 6:1–9
Rota GC, Strang G (1960) A note on the joint spectral radius. Proc Neth Acad 22:379–381
Schrijver A (1998) Theory of linear and integer programming. Wiley, Chichester
Sipser M (2006) Introduction to the theory of computation. Thomson Course Technology, Boston
Smale S (1983) On the average number of steps in the simplex method of linear programming. Math Program 27:241–262
Toker O, Ozbay H (1996) Complexity issues in robust stability of linear delay differential systems. Math Control Signals Syst 9:386–400
Toker O, Ozbay H (1998) On the NP-hardness of the purely complex mu computation, analysis/synthesis, and some related problems in multidimensional systems. IEEE Trans Autom Control 43:409–414
Turing AM (1936) On computable numbers, with an application to the Entscheidungsproblem. Proc Lond Math Soc 42:230–265
© 2015 Springer-Verlag London
Toker, O. (2015). Computational Complexity Issues in Robust Control. In: Baillieul, J., Samad, T. (eds) Encyclopedia of Systems and Control. Springer, London. https://doi.org/10.1007/978-1-4471-5058-9_130
Print ISBN: 978-1-4471-5057-2
Online ISBN: 978-1-4471-5058-9