Abstract
We consider the minimum-energy control of a car, which is modelled as a point mass sliding on the ground in a fixed direction, so that it can be mathematically described by the double integrator. The control variable, representing the acceleration or the deceleration, is constrained by simple bounds from above and below. Despite the simplicity of the problem, an analytical solution is not available because of the constraints on the control variable. To find a numerical solution to this problem we apply three different projection-type methods: (i) Dykstra’s algorithm, (ii) the Douglas–Rachford (DR) method and (iii) the Aragón Artacho–Campoy (AAC) algorithm. To the knowledge of the authors, these (projection) methods have not previously been applied to continuous-time optimal control problems, which are infinite-dimensional optimization problems. The problem we study in this article is posed in infinite-dimensional Hilbert spaces. The behaviour of the DR and AAC algorithms is explored via numerical experiments with respect to their parameters. An error analysis is also carried out numerically, for each of the algorithms, on a particular instance of the problem.
We dedicate our contribution to the memory of our friend and mentor Jonathan Borwein
Keywords
- Optimal control
- Dykstra projection method
- Douglas-Rachford method
- Aragón Artacho–Campoy algorithm
- Linear quadratic optimal control
- Control constraints
- Numerical methods
2.1 Introduction
In this paper, we provide (to the best of our knowledge, the first) application of various best approximation algorithms to solve a continuous-time optimal control problem. Operator splitting methods were applied previously to discrete-time optimal control problems [19, 26], which are finite-dimensional problems. In [26], for example, the state difference equations comprise the constraint \(\mathcal {A}\), and the box constraints on the state and control variables comprise \(\mathcal {B}\). The conditions of belonging to the sets \(\mathcal {A}\) and \(\mathcal {B}\) are then appended to the objective function via indicator functions. The original objective function considered in [26] is quadratic in the state and control variables. In the next step in [26], the new objective function is split into its quadratic and convex parts and the Douglas–Rachford splitting method is applied to solve the problem.
In the current paper, we deal with continuous-time optimal control problems, which are infinite-dimensional optimization problems that are set in Hilbert spaces. After splitting the constraints of the problem, we apply Dykstra’s algorithm [11], the Douglas–Rachford (DR) method [6, 9, 17, 18, 25, 29], and the Aragón Artacho–Campoy (AAC) algorithm [3], all of which solve the underlying best approximation problem.
The exposure of the current paper is more in the style of a tutorial. We pose the problem of minimum-energy control of a simplified model of a car, amounting to the double integrator, where the control variable has simple lower and upper bounds and the initial and terminal state variables are specified. We split the constraints into two, \(\mathcal {A}\) and \(\mathcal {B}\), representing respectively the state differential equations (the double integrator) along with their boundary conditions and the constraints on the control variable. We define two subproblems, one subject to \(\mathcal {A}\), and the other one subject to \(\mathcal {B}\). We take advantage of the relatively simple form of the optimal control problem and derive analytical expressions for the optimality conditions and implement these in defining the projections onto \(\mathcal {A}\) and \(\mathcal {B}\).
The solutions of these subproblems provide the projections of a given point in the control variable space onto the constraint sets \(\mathcal {A}\) and \(\mathcal {B}\), respectively, in some optimal way. By performing these projections in the manner prescribed by the above-listed algorithms, we can ensure convergence to a solution of the original optimal control problem.
Note that while the minimum-energy control of the double integrator without any constraints on the control variable can be solved analytically, the same problem with (even simple bound, i.e., box) constraints on the control variable can in general be solved only numerically. This problem should be considered within the framework of control-constrained linear-quadratic optimal control problems for which new numerical methods are constantly being developed—see for example [1, 12] and the references therein.
The current paper is a prototype for future applications of projection methods to solving more general optimal control problems. Indeed, the minimum-energy control of the double integrator is a special case of linear-quadratic optimal control problems; so, with the current study reported, an extension to more general problems is a natural next step.
The paper is organized as follows. In Section 2.2, we state the control-constrained minimum-energy problem for the double integrator, and write down the optimality conditions. We provide the analytical solution for the unconstrained problem. For the control-constrained case, we briefly describe the standard numerical approach and consider an instance of the problem which we use in the numerical experiments in the rest of the paper. We define the constraint sets \(\mathcal {A}\) and \(\mathcal {B}\). In Section 2.3, we provide the expressions for the projections onto \(\mathcal {A}\) and \(\mathcal {B}\). We describe the algorithms in Section 2.4 and in the beginning of Section 2.5. In the remaining part of Section 2.5, we present numerical experiments to study parametric behaviour of the algorithms as well as the errors in the state and control variables with each algorithm. In Section 2.6, we provide concluding remarks and list some open problems.
2.2 Minimum-Energy Control of Double Integrator
We consider the minimum-energy control of a car, with a constrained control variable. Consider the car as a point unit mass, moving on a frictionless ground in a fixed line of action. Let the position of the car at time t be given by y(t) and the velocity by \(\dot {y}(t):=(dy/dt)(t)\). By Newton’s second law of motion, \(\ddot {y}(t) = u(t)\), where u(t) is the summation of all the external forces applied on the car, in this case the force simply representing the acceleration and deceleration of the car. This differential equation model is referred to as the double integrator in system theory literature, since y(t) can be obtained by integrating u(t) twice.
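As a quick illustration of the model (our own addition, not part of the original development), the double integrator can be simulated with a forward-Euler scheme; the following Python sketch, with function and variable names of our own choosing, integrates \(\ddot {y} = u\) for a given force profile.

```python
import numpy as np

def simulate_double_integrator(u, y0=0.0, v0=0.0, T=1.0):
    """Forward-Euler simulation of y'' = u on [0, T].

    u contains the force values on a uniform grid of N subintervals;
    the returned position and velocity arrays have N + 1 entries.
    """
    N = len(u)
    h = T / N
    y = np.empty(N + 1)
    v = np.empty(N + 1)
    y[0], v[0] = y0, v0
    for i in range(N):
        y[i + 1] = y[i] + h * v[i]   # y' = v
        v[i + 1] = v[i] + h * u[i]   # v' = u  (Newton's second law)
    return y, v
```

For instance, a constant unit force applied from rest gives v(1) ≈ 1 and y(1) ≈ 1∕2, consistent with integrating u twice.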
Optimal Control Problem
Suppose that the total force on the car, i.e., the acceleration or deceleration of the car, is constrained by a magnitude of a > 0. Let x 1 := y and \(x_2 := \dot {y}\). Then the problem of minimizing the energy of the car, which starts at a position x 1(0) = s 0 with a velocity x 2(0) = v 0 and finishes at some other position x 1(1) = s f with velocity x 2(1) = v f, within one unit of time, can be posed as follows.
Here, the functions x 1 and x 2 are referred to as the state variables and u the control variable. As a first step in writing the conditions of optimality for this optimization problem, define the Hamiltonian function H for Problem (P) simply as
where λ(t) := (λ 1(t), λ 2(t)) ∈R 2 is the adjoint variable (or costate) vector such that (see [21])
Equations in (2.2) simply reduce to
where c 1 and c 2 are real constants. Let x(t) := (x 1(t), x 2(t)) ∈R 2 denote the state variable vector.
Maximum Principle
If u is an optimal control for Problem (P), then there exists a continuously differentiable vector of adjoint variables λ, as defined in (2.2), such that λ(t) ≠ 0 for all t ∈ [0, t f], and that, for a.e. t ∈ [0, t f],
i.e.,
see e.g. [21]. Condition (2.5) implies that the optimal control is given by
From (2.6), we can also conclude that the optimal control u for Problem (P) is continuous.
When a is large enough, the control constraint does not become active, so the optimal control is simply − λ 2, and it is a straightforward classroom exercise to find the analytical solution as
for all t ∈ [0, 1], where
The solution of an instance of Problem (P), with s 0 = 0, s f = 0, v 0 = 1, v f = 0, and large a, say a = 9, is depicted in Figure 2.1. Note that, for all t ∈ [0, 1], λ 2(t) = −u(t) = −6 t + 4 and λ 1(t) = c 1 = 6. The graphs of λ 1 and λ 2 are not displayed for this particular instance.
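The stated instance is easy to verify numerically; the short script below (our own check, using a forward-Euler integration) confirms that the control u(t) = 6 t − 4 steers the state from (x 1, x 2)(0) = (0, 1) to (x 1, x 2)(1) = (0, 0).

```python
import numpy as np

# Check that u(t) = 6t - 4 steers (x1, x2) from (0, 1) to (0, 0).
N = 100000
h = 1.0 / N
t = np.arange(N) * h
u = 6.0 * t - 4.0

x1, x2 = 0.0, 1.0            # s0 = 0, v0 = 1
for i in range(N):           # forward Euler for x1' = x2, x2' = u
    x1, x2 = x1 + h * x2, x2 + h * u[i]

# x1 and x2 are now within O(h) of the targets sf = 0 and vf = 0.
```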
When a is not so large, say a = 2.5, as we will consider next so that the control constraint becomes active, it is usually not possible to find an analytical solution, i.e., a solution has to be found numerically, as described below.
Numerical Approach
A straightforward and popular numerical approach to solving Problem (P) is to discretize Problem (P) over a partition of the time horizon [0, 1] and then use some finite-dimensional optimization software to get a discrete (finite-dimensional) solution for the state and control variables x(t) and u(t). The discrete solution is an approximation of the continuous-time solution. This approach is often referred to as the direct method or the (first-)discretize-then-optimize approach. A survey and discussion of Euler discretization of linear-quadratic optimal control problems and convergence of their discretized solutions to their continuous-time solutions can be found in [12, Section 5].
Figure 2.2 depicts the discrete solution of Problem (P) with the instance where a = 2.5, s 0 = 0, s f = 0, v 0 = 1, v f = 0. The solution was obtained by pairing up the optimization modelling language AMPL [20] and the finite-dimensional optimization software Ipopt [30]. The number of discretization nodes was taken to be 2000. The multipliers of the (Euler approximation of the) state differential equation constraints are provided by Ipopt when it finds an optimal solution to the discretized (finite-dimensional) problem. These multipliers have been plotted in Figure 2.2c. It should be noted that the graph of the adjoint variable λ 2(t) given in Figure 2.2c verifies the graph of the optimal control u(t) in Figure 2.2b via the optimal control rule in (2.6). In Figure 2.2b and c, the bounds ± 2.5 have been marked by horizontal dashed lines for ease of viewing.
Remark 2.1
If a is too small, there will obviously be no solution to Problem (P). For the particular instance of the problem considered here, the critical value of a, below which there exists no solution, is somewhere between 2.414 and 2.415, as our numerical experiments show (not reported in detail here). At this critical value, the only feasible solution is bang–bang, i.e., u(t) switches once from − a to a at around t = 0.71. It should be noted that, in this case, the optimal control in (2.6) requires the adjoint variable λ 2 to switch from a value α ≥ a to another value β ≤−a, i.e., be discontinuous, which is not allowed by the maximum principle. In this paper, we only consider the case when a is strictly greater than its critical value so that the maximum principle can be applied.
Function Spaces
For the numerical methods, we consider projection/reflection methods in Hilbert spaces. The spaces associated with Problem (P) are set up as follows. Let q ∈N and L 2(0, 1;R q) be the Hilbert space of Lebesgue measurable functions
with finite L 2 norm. Namely, define
where
for i = 1, …, q, with \(\left |\,\cdot \,\right |\) the modulus or absolute value. In other words,
Furthermore, W 1, 2(0, 1;R q) is the Sobolev space of absolutely continuous functions, namely
endowed with the norm
In Problem (P), the state variable x ∈ W 1, 2(0, 1;R 2) and the control variable u ∈ L 2(0, 1;R).
Constraint Splitting
Next, we split the constraints of Problem (P) into two subsets, \(\mathcal {A}\) and \(\mathcal {B}\). The subset \(\mathcal {A}\) collects together all the feasible control functions satisfying only the dynamics of the car. The subset \(\mathcal {B}\), on the other hand, collects all the control functions whose values are constrained by − a and a.
The rationale behind this sort of splitting is as follows: The problem of minimizing the energy of the car subject to only \(\mathcal {A}\) or only \(\mathcal {B}\) is much easier to solve—in fact, the solutions can be written down analytically in each case. If, for some given u, a solution exists to the two-point boundary-value problem (TPBVP) in (2.7), then that solution is unique by the linearity of the TPBVP [5, 28]. Note that a control solution u as in (2.7) exists by the (Kalman) controllability of the double integrator—see [27]. So the set \(\mathcal {A}\) is nonempty. Note that the constraint set \(\mathcal {A}\) is an affine subspace and \(\mathcal {B}\) a box.
2.3 Projections
All of the projection methods that we will consider involve projections onto the sets \(\mathcal {A}\) and \(\mathcal {B}\). The projection onto \(\mathcal {A}\) from a current iterate u − is the point u solving the following problem.
In (P1), we minimize the squared L 2-norm distance between u − and u. The projection onto \(\mathcal {B}\) from a current iterate u − is similarly the point u solving the following problem.
Proposition 2.1 (Projection onto \(\mathcal {A}\))
The projection \(P_{\mathcal {A}}\) of u −∈ L 2(0, 1;R) onto the constraint set \(\mathcal {A}\) , as the solution of Problem (P1), is given by
for all t ∈ [0, 1], where
and x 1(1) and x 2(1) are obtained by solving the initial value problem
for all t ∈ [0, 1].
Proof
The Hamiltonian function for Problem (P1) is
where the adjoint variables λ 1 and λ 2 are defined as in (2.2), with H replaced by H 1, and the subsequent solutions are given as in (2.3). The optimality condition for Problem (P1) is akin to that in (2.4) for Problem (P) and, owing to the fact that the control u is now unconstrained, can more simply be written as
which yields the optimal control as u(t) = u −(t) − λ 2(t), i.e.
for all t ∈ [0, 1]. We need to show that c 1 and c 2 are found as in (2.10)–(2.11). Using (2.14) in (2.7) yields the following time-varying, linear two-point boundary-value problem.
for all t ∈ [0, 1]. In other words, Problem (P1) is reduced to solving Equations (2.15)–(2.16) for the unknown parameters c 1 and c 2. Once c 1 and c 2 are found, the projected point u in (2.14) is found. Since Equations (2.15)–(2.16) are linear in x 1 and x 2, a simple shooting technique [5, 28] provides the solution for c 1 and c 2 in just one iteration. The essence of this technique is that the initial-value problem (IVP)
for all t ∈ [0, 1], is solved repeatedly, so as to make the discrepancy at t = 1 vanish. Namely, we seek a parameter c := (c 1, c 2) such that z 1(1, c) − s f = 0 and z 2(1, c) − v f = 0. The procedure is as follows. For a given c, there exists a unique solution z(t, c) := (z 1(t, c), z 2(t, c)) of (2.17)–(2.18). Define the near-miss (vector) function φ : R 2 →R 2 as follows:
The Jacobian of the near-miss function is
The shooting method looks for a pair c such that φ(c) := 0 (i.e., a pair c such that the terminal boundary conditions are met). Expanding φ about, say, \(\overline {c}=0\), and discarding the terms of order 2 or higher, we obtain
Substituting φ(c) = 0 in the above expression, replacing “≈” with “= ”, and re-arranging, gives the single (Newton) iteration of the shooting method:
The components (∂z i∕∂c j)(1, c), i, j = 1, 2, of J φ(c), can be obtained by solving the variational equations for (2.15)–(2.16) with respect to c 1 and c 2, i.e., by solving the following system for (∂z i∕∂c j)(⋅, c):
Elementary calculations lead to the following solution of the above system:
which is independent of c. Hence,
with inverse:
Setting (x 1(⋅), x 2(⋅)) := (z 1(⋅, 0), z 2(⋅, 0)), the IVP (2.17)–(2.18) becomes (2.12)–(2.13). Then substitution of (2.19) and (2.21) with c = 0 into Equation (2.20), and expanding out, yield (2.10)–(2.11). The proof is complete. □
Proposition 2.2 (Projection onto \(\boldsymbol {\mathcal {B}}\))
The projection \(P_{\mathcal {B}}\) of u −∈ L 2(0, 1;R) onto the constraint set \(\mathcal {B}\) , as the solution of Problem (P2), is given by
for all t ∈ [0, 1].
Proof
The expression (2.22) is the straightforward solution of Problem (P2). □
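In a discrete implementation, the projection (2.22) onto \(\mathcal {B}\) is nothing but pointwise clipping; a minimal sketch (function name ours):

```python
import numpy as np

def project_onto_B(u_minus, a):
    """Projection onto B = {u : -a <= u(t) <= a for a.e. t}.

    In L2 the projection acts pointwise, so on a grid it reduces to
    componentwise clipping of the values of u_minus to [-a, a].
    """
    return np.clip(u_minus, -a, a)
```

For example, with a = 2.5 the values −5, 0 and 3 are mapped to −2.5, 0 and 2.5, respectively.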
2.4 Best Approximation Algorithms
In this section, we discuss best approximation algorithms. In the following,
with inner product 〈⋅, ⋅〉, induced norm ∥⋅∥. We also assume that
Given z ∈ X, our aim is to find
the projection of z onto the intersection A ∩ B which we assume to be nonempty. We also assume that we are able to compute the projectors P A and P B onto the constraints A and B, respectively.
Many algorithms are known which could be employed to find P A∩B(z); here, however, we focus on three simple methods that do not require a product space set-up, as do some of those considered in, e.g., [6, 7, 13, 14].
In the next section, we will numerically test these algorithms when X = L 2(0, 1;R), \(A=\mathcal {A}\), \(B=\mathcal {B}\), and z = 0.
2.4.1 Dykstra’s Algorithm
We start with Dykstra’s algorithm (see [11]), which operates as follows: Set a 0 := z and q 0 := 0. Given a n, q n, where n ≥ 0, update
It is known that both \((a_n)_{n\in \mathbb {N}}\) and \((b_n)_{n\in \mathbb {N}}\) converge strongly to P A∩B(z).
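The update above can be sketched generically, given routines for P A and P B. The following Python fragment (ours) implements the variant stated here, which carries a single correction term q; this suffices when A is affine, as it is in our application. The toy sets in the example, a line and a box in R 2, are our own illustration, not part of the paper’s problem.

```python
import numpy as np

def dykstra(z, proj_A, proj_B, n_iter=500):
    """Dykstra's algorithm as stated above:
    a_0 = z, q_0 = 0, then
        b_n     = P_B(a_n + q_n),
        q_{n+1} = a_n + q_n - b_n,
        a_{n+1} = P_A(b_n).
    Both (a_n) and (b_n) converge to the projection of z onto A ∩ B."""
    a = np.asarray(z, dtype=float)
    q = np.zeros_like(a)
    for _ in range(n_iter):
        b = proj_B(a + q)
        q = a + q - b
        a = proj_A(b)
    return b

# Toy illustration in R^2: A = {(x, y) : y = x} (affine),
# B = [0, 2] x [1, 3]; then A ∩ B = {(t, t) : 1 <= t <= 2} and the
# projection of z = 0 onto A ∩ B is (1, 1).
proj_A = lambda v: np.full(2, v.mean())                # line y = x
proj_B = lambda v: np.clip(v, [0.0, 1.0], [2.0, 3.0])  # clip into the box
print(dykstra(np.zeros(2), proj_A, proj_B))            # -> [1. 1.]
```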
2.4.2 Douglas–Rachford Algorithm
Given β > 0, we specialize the Douglas–Rachford algorithm (see [17], [25] and [18]) to minimize the sum of the two functions \(f(x)=\iota _B(x) + \tfrac {\beta }{2}\|x-z\|{ }^2\) and g := ι A which have respective proximal mappings (see [6, Proposition 23.29(i)]) \(P_f(x) = P_B\big (\tfrac {1}{1+\beta }x+\tfrac {\beta }{1+\beta }z\big )\) and P g = P A. Set \(\lambda := \tfrac {1}{1+\beta }\in \left ]0,1\right [\). It follows that the Douglas–Rachford operator \( T := {\operatorname {Id}} - P_f + P_g(2P_f-{\operatorname {Id}})\) turns into
Now let x 0 ∈ X and given x n ∈ X, where n ≥ 0, update
Then it is known (see [29] or [9]) that \((b_n)_{n\in \mathbb {N}}\) converges weakly to P A∩B(z). Note that (2.28) simplifies to
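A generic sketch of this iteration (ours, with the shift by z handled through the proximal mapping P f as above) reads as follows; the toy sets in the example are our own illustration.

```python
import numpy as np

def douglas_rachford(z, proj_A, proj_B, lam, n_iter=1000):
    """DR iteration specialised as in the text, with lam = 1/(1 + beta):
        b_n     = P_f(x_n) = P_B(lam*x_n + (1 - lam)*z),
        x_{n+1} = x_n + P_A(2*b_n - x_n) - b_n,
    started at x_0 = z; (b_n) converges to the projection of z onto A ∩ B."""
    z = np.asarray(z, dtype=float)
    x = z.copy()
    for _ in range(n_iter):
        b = proj_B(lam * x + (1.0 - lam) * z)
        x = x + proj_A(2.0 * b - x) - b
    return b

# Toy illustration in R^2: A = {(x, y) : y = x}, B = [0, 2] x [1, 3],
# z = 0; the projection of z onto A ∩ B = {(t, t) : 1 <= t <= 2} is (1, 1).
proj_A = lambda v: np.full(2, v.mean())
proj_B = lambda v: np.clip(v, [0.0, 1.0], [2.0, 3.0])
print(douglas_rachford(np.zeros(2), proj_A, proj_B, lam=0.5))  # -> [1. 1.]
```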
2.4.3 Aragón Artacho–Campoy Algorithm
The Aragón Artacho–Campoy (AAC) Algorithm was recently presented in [3]; see also [2, 4]. Given two fixed parameters α and β in \(\left ]0,1\right [\), define
Now let x 0 ∈ X and given x n ∈ X, where n ≥ 0, update
and
By [3, Theorem 4.1(iii)], the sequence \((b_n)_{n\in \mathbb {N}}\) converges strongly to P A∩B(z) provided that z − P A∩B(z) ∈ (N A + N B)(P A∩B(z)). Note that (2.32) simplifies to
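As with DR, the AAC iteration admits a compact generic sketch (ours; here specialised to z = 0, which is the case used in the next section). The toy sets in the example are our own illustration; one can check that the normal-cone condition above holds for them at the point (1, 1).

```python
import numpy as np

def aragon_artacho_campoy(x0, proj_A, proj_B, alpha, beta, n_iter=1000):
    """AAC iteration with z = 0:
        b_n     = P_B(x_n),
        x_{n+1} = x_n + 2*alpha*beta*(P_A(2*beta*b_n - x_n) - b_n),
    with alpha, beta in ]0, 1[; (b_n) converges strongly to the
    projection of 0 onto A ∩ B under the stated condition."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        b = proj_B(x)
        x = x + 2.0 * alpha * beta * (proj_A(2.0 * beta * b - x) - b)
    return b

# Toy illustration in R^2: A = {(x, y) : y = x}, B = [0, 2] x [1, 3];
# the projection of 0 onto A ∩ B = {(t, t) : 1 <= t <= 2} is (1, 1).
proj_A = lambda v: np.full(2, v.mean())
proj_B = lambda v: np.clip(v, [0.0, 1.0], [2.0, 3.0])
print(aragon_artacho_campoy(np.zeros(2), proj_A, proj_B, 0.9, 0.8))
```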
2.5 Numerical Implementation
2.5.1 The Algorithms
In this section, we gather the algorithms considered abstractly and explain how we implemented them.
We start with Dykstra’s algorithm from Section 2.4.1.
Algorithm 1 (Dykstra)
- Step 1 (Initialization): Choose the initial iterates u 0 = 0 and q 0 = 0. Choose a small parameter ε > 0, and set k = 0.
- Step 2 (Projection onto \(\mathcal {B}\)): Set u − = u k + q k. Compute \(\widetilde {u} = P_{\mathcal {B}}(u^-)\) by using (2.22).
- Step 3 (Projection onto \(\mathcal {A}\)): Set \(u^- := \widetilde {u}\). Compute \(\widehat {u} = P_{\mathcal {A}}(u^-)\) by using (2.9).
- Step 4 (Update): Set \(u^{k+1} := \widehat {u}\) and \(q^{k+1} := u^{k} + q^k - \widetilde {u}\).
- Step 5 (Stopping criterion): If \(\|u^{k+1} - u^k\|{ }_{L^\infty } \le \varepsilon \), then return \(\widetilde {u}\) and stop. Otherwise, set k := k + 1 and go to Step 2.
Next is the Douglas–Rachford method from Section 2.4.2.
Algorithm 2 (DR)
- Step 1 (Initialization): Choose a parameter \(\lambda \in \left ]0,1\right [\) and the initial iterate u 0 arbitrarily. Choose a small parameter ε > 0, and set k = 0.
- Step 2 (Projection onto \(\mathcal {B}\)): Set u − = λu k. Compute \(\widetilde {u} = P_{\mathcal {B}}(u^-)\) by using (2.22).
- Step 3 (Projection onto \(\mathcal {A}\)): Set \(u^- := 2\widetilde {u}-u^k\). Compute \(\widehat {u} = P_{\mathcal {A}}(u^-)\) by using (2.9).
- Step 4 (Update): Set \(u^{k+1} := u^k + \widehat {u} - \widetilde {u}\).
- Step 5 (Stopping criterion): If \(\|u^{k+1} - u^k\|{ }_{L^\infty } \le \varepsilon \), then return \(\widetilde {u}\) and stop. Otherwise, set k := k + 1 and go to Step 2.
Finally, we describe the Aragón Artacho–Campoy algorithm from Section 2.4.3.
Algorithm 3 (AAC)
- Step 1 (Initialization): Choose the initial iterate u 0 arbitrarily. Choose a small parameter ε > 0, two parameters α and β in \(\left ]0,1\right [\), and set k = 0.
- Step 2 (Projection onto \(\mathcal {B}\)): Set u − = u k. Compute \(\widetilde {u} = P_{\mathcal {B}}(u^-)\) by using (2.22).
- Step 3 (Projection onto \(\mathcal {A}\)): Set \(u^- = 2\beta \widetilde {u}-u^k\). Compute \(\widehat {u} = P_{\mathcal {A}}(u^-)\) by using (2.9).
- Step 4 (Update): Set \(u^{k+1} := u^k +2\alpha \beta (\widehat {u}-\widetilde {u})\).
- Step 5 (Stopping criterion): If \(\|u^{k+1} - u^k\|{ }_{L^\infty } \le \varepsilon \), then return \(\widetilde {u}\) and stop. Otherwise, set k := k + 1 and go to Step 2.
We provide another version of each of Algorithms 1–3, as Algorithms 1b–3b, in Appendix A. In Algorithm 1b, we monitor the sequence of iterates which are the projections onto set \(\mathcal {A}\), instead of monitoring the projections onto set \(\mathcal {B}\) in Algorithm 1. On the other hand, in Algorithms 2b–3b, the order in which the projections are done is reversed: the first projection is done onto the set \(\mathcal {A}\) and the second projection onto \(\mathcal {B}\).
Although the order of projections will not matter in view of the existing results stating that convergence is achieved under any order—see [8, Proposition 2.5(i)], the order does make a difference in early iterations (as well as in the number of iterations required for convergence of Algorithms 2 and 2b, as we will elaborate on later). If our intent is to stop the algorithm early so that we can use the current iterate as an initial guess in more accurate computational optimal control algorithms, which can find the junction times with a high precision (see [22,23,24]), then it is desirable to implement Algorithms 1–3 above, rather than Algorithms 1b–3b, because any iterate of Algorithms 1–3 will satisfy the constraints on the control variable, while that of Algorithms 1b–3b will in general not.
2.5.2 Numerical Experiments
In what follows, we study the working of Algorithms 1–3 for an instance of Problem (P). Suppose that the car is initially at a reference position 0 and has unit speed. It is desired that the car come back to the reference position and be at rest after one unit of time; namely that s 0 = 0, s f = 0, v 0 = 1, v f = 0. For these boundary conditions, no solution exists if one takes the control variable bound a = 2.4 or smaller, but a solution does exist for a = 2.5. So, we use a = 2.5. In the ensuing discussions, we use the stopping tolerance ε = 10−8 unless otherwise stated.
Discretization
Algorithms 1–3, as well as 1b–3b, carry out iterations with functions. For computations, we consider discrete approximations of the functions over the partition 0 = t 0 < t 1 < … < t N = 1 such that t i+1 − t i = h for each i, where h := 1∕N and N is the number of subdivisions. Let u i be an approximation of u(t i), i.e., u i ≈ u(t i), i = 0, 1, …, N − 1; similarly, x 1,i ≈ x 1(t i) and x 2,i ≈ x 2(t i), or x i := (x 1,i, x 2,i) ≈ x(t i), i = 0, 1, …, N. In other words, the functions u, x 1 and x 2 are approximated by the N-dimensional array u h, with components u i, i = 0, 1, …, N − 1, and the (N + 1)-dimensional arrays x 1,h and x 2,h, with components x 1,i and x 2,i, i = 0, 1, …, N, respectively. We define a discretization \(P^h_{\mathcal {A}}\) of the projection \(P_{\mathcal {A}}\) as follows.
where t h = (0, t 1, …, t N),
and x 1,N and x 2,N are obtained from the Euler discretization of (2.12)–(2.13): Given x 1,0 = s 0 and x 2,0 = v 0,
for i = 0, 1, …, N − 1.
The discretization \(P^h_{\mathcal {B}}\) of the projection \(P_{\mathcal {B}}\) can be defined in a straightforward manner, by simply replacing u − in (2.22) with the discrete components \(u_i^-\) of \(u_h^-\).
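For concreteness, the discretized projection \(P^h_{\mathcal {A}}\) can be sketched as follows. This is our own reconstruction from the proof of Proposition 2.1: the Euler scheme above supplies x 1,N and x 2,N, the single Newton (shooting) step with the constant Jacobian J φ gives c 1 and c 2, and the projected control is u = u − − λ 2 with λ 2(t) = −c 1 t + c 2. The function name and the explicit constants are ours and should be checked against (2.10)–(2.11).

```python
import numpy as np

def project_onto_A_h(u_minus, s0, v0, sf, vf):
    """Euler-discretized projection onto the affine set A (Prop. 2.1).

    u_minus holds the N control values on the uniform grid t_i = i*h.
    """
    N = len(u_minus)
    h = 1.0 / N
    # Forward Euler for the IVP (2.12)-(2.13): x1' = x2, x2' = u_minus.
    x1, x2 = s0, v0
    for i in range(N):
        x1, x2 = x1 + h * x2, x2 + h * u_minus[i]
    # Near-miss vector at t = 1 and one Newton step c = -J_phi^{-1} phi,
    # with J_phi = [[1/6, -1/2], [1/2, -1]] as computed in the proof.
    phi1, phi2 = x1 - sf, x2 - vf
    c1 = 12.0 * phi1 - 6.0 * phi2
    c2 = 6.0 * phi1 - 2.0 * phi2
    # u = u_minus - lambda_2, where lambda_2(t) = -c1*t + c2.
    t = np.arange(N) * h
    return u_minus + c1 * t - c2

# Sanity check: projecting u = 0 with s0 = 0, v0 = 1, sf = vf = 0 recovers
# the unconstrained optimal control u(t) = 6t - 4 of the instance above.
u = project_onto_A_h(np.zeros(2000), 0.0, 1.0, 0.0, 0.0)
```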
Parametric Behaviour
Obviously, the behaviour of Algorithms 2 and 2b, the Douglas–Rachford method, depends on the parameter λ, and the behaviour of Algorithms 3 and 3b depends on the two parameters α and β. Figure 2.3 displays the dependence of the number of iterations needed for convergence on these parameters, for various values of a. The dependence for a given value of a appears to be continuous, despite the presence of downward spikes.
The graphs for Algorithms 2 and 2b, shown in parts (a) and (c) of Figure 2.3, respectively, differ significantly from one another. The bound a = 4 corresponds to the case when the control constraint becomes active only at t = 0—see Figure 2.1. In other words, when a > 4 the optimal control variable is truly unconstrained. When a = 4, the best value of λ is 1 for Algorithm 2, yielding the solution in just 6 iterations. For Algorithm 2b, the best value for λ is 0.5, as can be seen in (c), producing the solution in 30 iterations. Going back to Algorithm 2, with decreasing values of a, the values of λ minimizing the number of iterations shift to the right. For example, the minimum number of iterations is 91, with a = 2.5 and λ = 0.7466 (found by a refinement of the graph).
As for Algorithm 2b, the minimizer for a = 2.5 is λ = 0.5982766 and the corresponding minimum number of iterations is 38. This is a point where a downward spike occurs and so the number of iterations is very sensitive to changes in λ. For example, the rounded-off value of λ = 0.598277 results in 88 iterations instead of 38, and λ = 0.55 yields 444 iterations for convergence. The number of iterations is less sensitive to the local minimizer λ = 0.7608, which results in 132 iterations. It is interesting to note that the graph with a = 4 appears to be an envelope for the number of iterations for all λ ∈ ]0, 1[.
The graphs for Algorithms 3 and 3b, the Aragón Artacho–Campoy algorithm, are indistinguishable to the eye; therefore we only display the one in Figure 2.3b. Part (d) of Figure 2.3 shows surface plots of the number of iterations versus the algorithmic parameters α and β, for the same values of a as in the rest of the graphs in the figure. It is interesting to observe that the surfaces appear to be cascaded, with (roughly) the outermost surface being the one corresponding to a = 4. The surface plot suggests that, for the minimum number of iterations, one must have α = 1. Although the theory requires α < 1, the choice α = 1 causes no concerns in this particular instance; so, we set α = 1 for the rest of the paper. The cross-sectional curves at α = 1 are shown with much more precision in part (b) of the figure. The spikes observed in part (d) can also be seen in the graph in part (b).
In fact, the first observation to make here is that, for a = 4, convergence can be achieved in merely one iteration, with β = 0.5. This is quite remarkable compared with Algorithms 2 and 2b. The graphs in (b) appear to be enveloped by the graph for a = 4, just as in part (c). For values of a other than 4, the globally minimum number of iterations seems to be achieved at a downward spike, which is therefore very sensitive to changes in β. For example, for a = 2.5, the optimal β value is 0.78249754, giving a minimum of 35 iterations. A rounded-off β = 0.782 results in 111 iterations, and β = 0.7 yields 243 iterations. Sensitivity at the local minimizer β = 0.8617, which gives 64 iterations, is far less: choosing β = 0.8 or 0.9 results in 128 or 90 iterations, respectively.
Behaviour in Early Iterations
Figure 2.4a–c illustrates the working of all three algorithms for the same instance. All three algorithms converge to the optimal solution, with the stopping tolerance of ε = 10−8. The optimal values of the algorithmic parameters, λ = 0.7466 for Algorithm 2, and α = 1 and β = 0.8617 for Algorithm 3, have been used. The third, fifth and fifteenth iterates, as well as the solution curve, are displayed for comparisons of behaviour. At least for the given instance of the problem, it is fair to say from Figure 2.4c that Algorithm 3 gets closer to the solution much more quickly than the others in the first few iterations—see the third and fifth iterates. It also achieves convergence in a smaller number of iterations (64, as opposed to the 530 and 91 iterations of Algorithms 1 and 2, respectively).
Error Analysis via Numerical Experiments
For a fixed value of N, Algorithms 1–3 converge only to some approximate solution of the original problem. Therefore, the question as to how the algorithms behave as the time partition is refined, i.e., as N is increased, needs to be investigated. For the purpose of a numerical investigation, we define, in the kth iteration, the following errors. Suppose that the pair (u ∗, x ∗) is the optimal solution of Problem (P) and \((u_h^k,x_h^k)\) an approximate solution of Problem (P) in the kth iteration of a given algorithm. Define
where ||⋅||∞ is the ℓ ∞-norm in R 2. For large N, these expressions are reminiscent of the L ∞-norm, and therefore they will be referred to as the L ∞ -error.
For (u ∗, x ∗) in the error expressions, we have used the discretized (approximate) solution obtained for the Euler-discretized Problem (P) utilizing the Ipopt–AMPL suite, with N = 106 and the tolerance set at 10−14.
For N = 2000, these errors are depicted in Figure 2.4d and e. From the graphs it is immediately clear that, no matter how small the stopping tolerance is chosen, the best error achievable with N = 2000 is around 10−2 for the control variable and around 10−3 for the state variable vector. In fact, the graphs also suggest that a much larger stopping threshold than 10−8 would have achieved the same approximation to the continuous-time solution of Problem (P). By just looking at the graphs, one can see that Algorithm 1 could have been run for only about 300 iterations instead of 530, and Algorithms 2 and 3 for about 50 iterations, to achieve the best possible approximation with N = 2000.
In Figure 2.5, we depict the same errors for N = 103 (parts (a) and (b)), N = 104 (parts (c) and (d)) and N = 105 (parts (e) and (f)). It is observed that, with a ten-fold increase in N (which is a ten-fold decrease in h), the errors in both u and x are reduced ten-fold, implying that the error (both in x and in u) depends linearly on the stepsize h. This is in line with the theory of Euler discretization of optimal control problems; see, for example, [15, 16]. Furthermore, even for very large values of N, it can be seen from these graphs that a stopping threshold somewhat larger than 10−8 would suffice to reach the more stringent error levels of around 10−4 for the control variable and around 10−5 for the state variable vector. A larger stopping threshold obviously results in a smaller number of iterations.
Table 2.1 displays the values of the errors, separately in u and x, after the stopping criterion with ε = 10−8 was satisfied, for each of the three algorithms. A precise 10-fold reduction in error with a 10-fold increase in N can be verified with these numbers, as discussed in the previous paragraph. We have added the experiments carried out with Ipopt, version 3.12, an interior point optimization software [30], which solved the direct Euler discretization of Problem (P), with the same values of N and the same tolerance 10−8. Ipopt, running with the linear solver MA57, was paired up with the optimization modelling language AMPL [20]. The same 10-fold decreases in error cannot be observed with Ipopt, unless one sets the tolerance for Ipopt much smaller than 10−8, say 10−14 (which also means longer computational times). With the tolerance set at 10−14, the error values with Ipopt become essentially the same as those with Dykstra (still with ε = 10−8), which is interesting to note.
As we pointed out earlier, the same errors listed in Table 2.1 can be achieved with larger stopping thresholds. For N = 10³, 10⁴, 10⁵, respectively: with ε = 10⁻⁶, 10⁻⁶, 10⁻⁷, Algorithm 1 converges in 281, 359 and 454 iterations; with ε = 10⁻⁵, 10⁻⁵, 10⁻⁷, Algorithm 2 converges in 65, 50 and 101 iterations; and with ε = 10⁻⁴, 10⁻⁵, 10⁻⁶, Algorithm 3 converges in 49, 60 and 70 iterations.
In Table 2.2, the CPU times (in seconds) taken by each algorithm, with the respective ε values listed above, are tabulated. Note that Algorithms 1–3 were coded and run in Matlab, 64-bit (maci64) version R2017b. All software, including AMPL and Ipopt, was run on a MacBook Pro with operating system macOS Sierra version 10.12.6, a 3.3 GHz Intel Core i7 processor and 6 GB of 2133 MHz LPDDR3 memory. The CPU times for Ipopt are listed with the tolerance 10⁻¹⁴, since only with this fine tolerance is it possible to obtain error magnitudes of the same order as those obtained by Algorithms 1–3. With ε = 10⁻⁸, the CPU times for Ipopt are 0.06, 0.45 and 4.4 seconds, respectively, which are significantly higher than the times taken by Algorithms 1–3, in addition to yielding worse errors.
Numerical observations suggest two joint winners in both accuracy and speed: Algorithms 2 and 3, namely the Douglas–Rachford method and the Aragón Artacho–Campoy algorithm.
2.6 Conclusion and Open Problems
We have applied three well-known projection methods to solve an optimal control problem, namely the control-constrained minimum-energy control of the double integrator. We derived the projectors for this optimal control problem and demonstrated that they can be used effectively in Dykstra's algorithm, the Douglas–Rachford (DR) method and the Aragón Artacho–Campoy (AAC) algorithm. We carried out extensive numerical experiments on an instance of the problem and concluded that the DR and AAC algorithms (Algorithms 2 and 3) were jointly the most successful. We also made comparisons with the standard discretization approach, which demonstrated the benefit of using projection methods.
It is interesting to note that when we apply alternating projections, we also seem to converge to \(P_{\mathcal {A}\cap \mathcal {B}}(0)\) even though this is not supported by existing theory.
To the best of the authors' knowledge, the current paper constitutes the first of its kind, combining projection methods with continuous-time optimal control problems. It can be considered a prototype for future studies in this direction. Some possible directions are listed below.
-
The setting we have introduced could be extended to general control-constrained linear-quadratic problems.
-
We have used some discretization of the projector as well as the associated IVP in (2.34)–(2.38). This might be extended to problems in more general form. On the other hand, for the particular problem dealt with in the present paper, one might exploit the fact that if \(u^-(t)\) is piecewise linear then its projection is also piecewise linear. This might further simplify the expressions given in Proposition 2.1.
-
Although the theory for projection methods can in principle guarantee convergence only for convex problems, it is well known that the DR method can be successful for nonconvex problems; see, for example, [10]. It would be interesting to extend the formulations in the current paper to nonconvex optimal control problems.
-
For a certain value of an algorithmic parameter, Figure 2.3 exhibits downward spikes. It would be interesting to see if this phenomenon is also observed in other control-constrained optimal control problems, as well as under other stopping criteria.
Notes
- 1.
In the general case, there is also an auxiliary sequence \((p_n)\) associated with \(\mathcal {A}\); however, because \(\mathcal {A}\) is an affine subspace, it is not needed in our setting.
- 2.
It appears that this constraint qualification is not easy to check in our setting.
- 3.
Aragón Artacho and Campoy recommend α = 0.9 and β ∈ [0.7, 0.8]; see [3, End of Section 7].
References
Alt, W., Kaya, C.Y., Schneider, C.: Dualization and discretization of linear-quadratic control problems with bang–bang solutions. EURO J. Comput. Optim. 4, 47–77 (2016)
Alwadani, S., Bauschke, H.H., Moursi, W.M., Wang, X.: On the asymptotic behaviour of the Aragón Artacho–Campoy algorithm. Oper. Res. Lett. 46, 585–587 (2018)
Aragón Artacho, F.J., Campoy, R.: A new projection method for finding the closest point in the intersection of convex sets. Comput. Optim. Appl. 69, 99–132 (2018)
Aragón Artacho, F.J., Campoy, R.: Computing the resolvent of the sum of maximally monotone operators with the averaged alternating modified reflections algorithm. J. Optim. Th. Appl. 181, 709–726 (2019)
Ascher, U.M., Mattheij, R.M.M., Russell, R.D.: Numerical Solution of Boundary Value Problems for Ordinary Differential Equations. SIAM Publications, Philadelphia (1995)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Second edition. Springer (2017)
Bauschke, H.H., Koch, V.R.: Projection methods: Swiss Army knives for solving feasibility and best approximation problems with halfspaces. Infinite Products of Operators and Their Applications, 1–40 (2012)
Bauschke, H.H., Moursi, W.M.: On the order of the operators in the Douglas–Rachford algorithm. Optimization Letters 10, 447–455 (2016)
Bauschke, H.H., Moursi, W.M.: On the Douglas–Rachford algorithm. Math. Program. (Ser. A) 164, 263–284 (2017)
Borwein, J.M., Sims, B.: The Douglas-Rachford Algorithm in the absence of convexity. In: Bauschke H., Burachik R., Combettes P., Elser V., Luke D., Wolkowicz H. (eds) Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Springer Optimization and Its Applications, vol 49, pp. 93–109. Springer, New York, NY (2011)
Boyle J.P., Dykstra R.L.: A method for finding projections onto the intersection of convex sets in Hilbert spaces. In: Advances in Order Restricted Statistical Inference, vol. 37, Lecture Notes in Statistics, pp. 28–47. Springer (1986)
Burachik, R.S., Kaya, C.Y., Majeed, S.N.: A duality approach for solving control-constrained linear-quadratic optimal control problems. SIAM J. Control Optim. 52, 1771–1782 (2014)
Combettes, P.L.: A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Trans. Sig. Proc. 51, 2432–2442 (2003)
Combettes, P.L.: Iterative construction of the resolvent of a sum of maximal monotone operators. J. Convex Anal. 16, 727–748 (2009)
Dontchev, A.L., Hager, W.W.: The Euler approximation in state constrained optimal control. Math. Comp. 70, 173–203 (2001)
Dontchev, A.L., Hager, W.W., Malanowski, K.: Error bound for Euler approximation of a state and control constrained optimal control problem. Numer. Funct. Anal. Optim. 21, 653–682 (2000)
Douglas, J., Rachford, H.H.: On the numerical solution of heat conduction problems in two and three space variables. Trans. Amer. Math. Soc. 82, 421–439 (1956)
Eckstein, J., Bertsekas, D.P.: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Prog. (Ser. A) 55, 293–318 (1992)
Eckstein, J., Ferris, M.C.: Operator-splitting methods for monotone affine variational inequalities, with a parallel application to optimal control. INFORMS J. Comput. 10, 218–235 (1998)
Fourer, R., Gay, D.M., Kernighan, B.W.: AMPL: A Modeling Language for Mathematical Programming, Second Edition. Brooks/Cole Publishing Company / Cengage Learning (2003)
Hestenes, M.R.: Calculus of Variations and Optimal Control Theory. John Wiley & Sons, New York (1966)
Kaya, C.Y., Lucas, S.K., Simakov, S.T.: Computations for bang–bang constrained optimal control using a mathematical programming formulation. Optim. Contr. Appl. Meth. 25, 295–308 (2004)
Kaya, C.Y., Noakes, J.L.: Computations and time-optimal controls. Opt. Cont. Appl. Meth. 17, 171–185 (1996)
Kaya, C.Y., Noakes, J.L.: Computational algorithm for time-optimal switching control. J. Optim. Theory App. 117, 69–92 (2003)
Lions, P.-L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
O’Donoghue, B., Stathopoulos, G., Boyd, S.: A splitting method for optimal control. IEEE Trans. Contr. Sys. Tech. 21, 2432–2442 (2013)
Rugh, W.J.: Linear System Theory, 2nd Edition. Pearson (1995)
Stoer, J., Bulirsch, R.: Introduction to Numerical Analysis, 2nd Edition. Springer-Verlag, New York (1993)
Svaiter, B.F.: On weak convergence of the Douglas–Rachford method. SIAM J. Control Optim. 49, 280–287 (2011)
Wächter, A., Biegler, L.T.: On the implementation of a primal-dual interior point filter line search algorithm for large-scale nonlinear programming. Math. Prog. 106, 25–57 (2006)
Appendix
Algorithm 1b (Dykstra-b)
- Steps 1–4:
-
(Initialization) Do as in Steps 1–4 of Algorithm 1.
- Step 5:
-
(Stopping criterion) If \(\|u^{k+1} - u^k\|_{L^\infty} \le \varepsilon \), then return \(u^{k+1}\) and stop. Otherwise, set k := k + 1 and go to Step 2.
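The structure of Dykstra's iteration, including the simplification of footnote 1 (no auxiliary vector for the affine set), can be sketched on a hypothetical two-dimensional stand-in for the problem: an affine set playing the role of \(\mathcal {A}\) and a box playing the role of \(\mathcal {B}\). The sets and projectors below are illustrative assumptions, not the chapter's (2.9) and (2.22).

```python
import numpy as np

# Toy analogue (hypothetical sets, not the chapter's A and B):
# A = {x in R^2 : x1 + x2 = 1} (affine), B = [0, 1]^2 (box).
# Dykstra's algorithm converges to the projection of z0 onto A ∩ B;
# since A is affine, only one auxiliary vector (for B) is needed.

def proj_A(x):          # projection onto the line x1 + x2 = 1
    return x - (x.sum() - 1.0) / 2.0

def proj_B(x):          # projection onto the box [0, 1]^2
    return np.clip(x, 0.0, 1.0)

def dykstra(z0, iters=100):
    x, q = np.asarray(z0, float), np.zeros(2)
    for _ in range(iters):
        y = proj_A(x)        # no auxiliary vector needed for the affine set
        x = proj_B(y + q)
        q = y + q - x        # auxiliary vector for B
    return x

print(dykstra([2.0, 0.0]))  # → [1. 0.], the projection of (2, 0) onto A ∩ B
```

With z0 = 0, the same iteration returns (0.5, 0.5), the minimum-norm point of the intersection, which is the role Dykstra's algorithm plays for Problem (P).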
Algorithm 2b (DR-b)
- Step 1:
-
(Initialization) Choose a parameter \(\lambda \in \left ]0,1\right [\) and the initial iterate \(u^0\) arbitrarily. Choose a small parameter ε > 0, and set k = 0.
- Step 2:
-
(Projection onto \(\mathcal {A}\)) Set \(u^- = u^k\). Compute \(\widetilde {u} = P_{\mathcal {A}}(u^-)\) by using (2.9).
- Step 3:
-
(Projection onto \(\mathcal {B}\)) Set \(u^- := 2\widetilde {u}-u^k\). Compute \(\widehat {u} = P_{\mathcal {B}}(u^-)\) by using (2.22).
- Step 4:
-
(Update) Set \(u^{k+1} := u^k + \lambda\,(\widehat {u} - \widetilde {u})\).
- Step 5:
-
(Stopping criterion) If \(\|u^{k+1} - u^k\|_{L^\infty} \le \varepsilon \), then return \(\widetilde {u}\) and stop. Otherwise, set k := k + 1 and go to Step 2.
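A minimal sketch of the relaxed DR iteration (project onto \(\mathcal {A}\), reflect, project onto \(\mathcal {B}\), relaxed update), again on hypothetical stand-in sets rather than the projectors (2.9) and (2.22). The returned "shadow" iterate \(\widetilde {u}\) lands in the intersection of the two sets; on this toy pair it converges to the intersection point (1, 0).

```python
import numpy as np

# Sketch of the DR steps on hypothetical stand-in sets (not the chapter's):
# A = {x : x1 + x2 = 1} (affine), B = [0, 1]^2 (box), lambda in ]0,1[.

def proj_A(x):
    return x - (x.sum() - 1.0) / 2.0

def proj_B(x):
    return np.clip(x, 0.0, 1.0)

def douglas_rachford(u0, lam=0.5, iters=200):
    u = np.asarray(u0, float)
    for _ in range(iters):
        u_tilde = proj_A(u)              # project onto A
        u_hat = proj_B(2 * u_tilde - u)  # reflect through A, project onto B
        u = u + lam * (u_hat - u_tilde)  # relaxed update of the governing iterate
    return u_tilde                       # the "shadow" iterate

shadow = douglas_rachford([2.0, -1.0])
# shadow lies (numerically) in A ∩ B: on the line and inside the box
```

Note that the shadow depends on the starting point: it reaches a point of the intersection, which for the control problem in the chapter is the sought solution.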
Algorithm 3b (AAC-b)
- Step 1:
-
(Initialization) Choose the initial iterate \(u^0\) arbitrarily. Choose a small parameter ε > 0, two parameters α and β in \(\left ]0,1\right [\), and set k = 0.
- Step 2:
-
(Projection onto \(\mathcal {A}\)) Set \(u^- = u^k\). Compute \(\widetilde {u} = P_{\mathcal {A}}(u^-)\) by using (2.9).
- Step 3:
-
(Projection onto \(\mathcal {B}\)) Set \(u^- = 2\beta \widetilde {u}-u^k\). Compute \(\widehat {u} = P_{\mathcal {B}}(u^-)\) by using (2.22).
- Step 4:
-
(Update) Set \(u^{k+1} := u^k +2\alpha \beta (\widehat {u}-\widetilde {u})\).
- Step 5:
-
(Stopping criterion) If \(\|u^{k+1} - u^k\|_{L^\infty} \le \varepsilon \), then return \(\widetilde {u}\) and stop. Otherwise, set k := k + 1 and go to Step 2.
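The AAC steps can be sketched on the same hypothetical stand-in sets (illustrative assumptions, not the chapter's projectors). In this toy example the shadow \(\widetilde {u}\) converges to (0.5, 0.5), which is the projection of 0 onto the intersection, matching the best approximation behaviour that the chapter exploits for the minimum-energy problem.

```python
import numpy as np

# Sketch of the AAC (Aragón Artacho–Campoy) steps on hypothetical
# stand-in sets (not the chapter's A and B):
# A = {x : x1 + x2 = 1} (affine), B = [0, 1]^2 (box).
# Here the projection of 0 onto A ∩ B is (0.5, 0.5).

def proj_A(x):
    return x - (x.sum() - 1.0) / 2.0

def proj_B(x):
    return np.clip(x, 0.0, 1.0)

def aac(u0, alpha=0.9, beta=0.8, iters=500):
    u = np.asarray(u0, float)
    for _ in range(iters):
        u_tilde = proj_A(u)                           # Step 2
        u_hat = proj_B(2 * beta * u_tilde - u)        # Step 3 (modified reflection)
        u = u + 2 * alpha * beta * (u_hat - u_tilde)  # Step 4
    return u_tilde

print(aac([2.0, -1.0]))  # ≈ [0.5 0.5], the projection of 0 onto A ∩ B
```

The parameter choice α = 0.9, β = 0.8 follows the recommendation recalled in footnote 3.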
© 2019 Springer Nature Switzerland AG
Bauschke, H.H., Burachik, R.S., Kaya, C.Y. (2019). Constraint Splitting and Projection Methods for Optimal Control of Double Integrator. In: Bauschke, H., Burachik, R., Luke, D. (eds) Splitting Algorithms, Modern Operator Theory, and Applications. Springer, Cham. https://doi.org/10.1007/978-3-030-25939-6_2
Print ISBN: 978-3-030-25938-9
Online ISBN: 978-3-030-25939-6