1 Introduction and problem formulation

The coupling of discrete decision processes with continuous evolution plays an important role in model-based optimization for many applications. For example, when optimally controlling chemical processes there are often both continuous decisions, such as inlet and outlet flows, and discrete decisions, such as the operation of on–off valves and pumps that may redirect flows within the reactor [14, 21]. Such mixed-integer optimal control problems are therefore studied in different communities with different approaches. Most of these approaches address problems governed by systems of ordinary differential equations in Euclidean spaces; see [22] for a survey on this topic.

Total discretization of the underlying system typically leads to very large mixed-integer nonlinear programs. Hence, relaxation techniques have become an integral part of efficient mixed-integer optimal control algorithms, either in the context of branch-and-bound type methods or, more directly, by means of nonlinear optimal control methods combined with suitable rounding strategies. An important result is that the solution of the relaxed problem can be approximated with arbitrary precision by a solution fulfilling the integer requirements [23].

In this paper, we extend such relaxation techniques to problems that are governed by certain systems of partial differential equations. Motivating applications include switching between reductive and oxidative conditions to maximize the performance of a monolithic catalyst [27], port switching in chromatographic separation processes [5, 14], and optimizing switching controls in photochemical reactions [26]. Our problem setting also includes switching control design in the sense that the system is equipped with multiple actuators and the optimizer has to choose one of them, together with ordinary controls, at each instant in time.

Concerning systems involving partial differential equations, such switching control design has already been studied using several techniques: In [18, Chap. 8], optimal switching controls are constructed for systems governed by abstract semilinear evolution equations by combining ideas from dynamic programming with approximations of the value function by viscosity solutions of the Hamilton-Jacobi-Bellman equations. Switching boundary control for linear transport equations using switching time sensitivities has been studied in [9]. Based on variational methods, controllability of the heat equation in the case of switching among several actuators has been considered in [29], and null-controllability of the one-dimensional wave equation with switching boundary control has been considered in [7]. Based on linear quadratic regulator optimal control techniques and enumeration of the integer values for a fixed time discretization, optimal switching control of abstract linear systems has been considered in [13].

Our approach is complementary to the above, as we break the computationally very expensive combinatorial complexity of the problem by relaxation. This comes at the price of providing only a suboptimal solution and possibly of fast switching but, as we will see, with an arbitrarily small integer-optimality gap, depending on the discretization, and with extensions that limit the number of switches.

We will be concerned with the following problem of mixed-integer optimal control: Minimize a cost functional

$$ J = \phi\bigl(z(t_f)\bigr) + \int _0^{t_f} \psi\bigl(z(t),u(t)\bigr)\,dt $$
(1)

over trajectories \(z \in X_{[0,t_{f}]} \subset\{z: [0,t_{f}] \to X\}\) and control functions \(u \in U_{[0,t_{f}]} \subset\{u : [0,t_{f}] \to U\}\) and \(v \in V_{[0,t_{f}]} \subset\{v : [0,t_{f}] \to V\}\) subject to the constraints that z is a mild solution of the operator differential equation

$$ \left\{ \begin{aligned} &\dot{z}(t) = Az(t) + f \bigl(t,z(t),u(t),v(t)\bigr), \quad t \in(0,t_f] \\ &z(0)=z_0 \in X \end{aligned} \right. $$
(2)

and that the control functions satisfy

$$ u(t) \in U_{\mathrm{ad}}\subset U,\quad v(t) \in V_{\mathrm {ad}}\subset V,\quad t\in[0,t_f] $$
(3)

where X, U and V are Banach spaces, \(X_{[0,t_{f}]}\), \(U_{[0,t_{f}]}\) and \(V_{[0,t_{f}]}\) are normed linear spaces, A:D(A)→X is the infinitesimal generator of a strongly continuous semigroup {T(t)} t≥0 on X, t f ≥0 is a fixed real number, f:[0,t f ]×X×U×V→X, ϕ:X→ℝ and ψ:X×U→ℝ are given functions, U ad is some subset of U and V ad is a finite subset of V. This setting is for the most part classical, except for the assumptions on V ad.

We will refer to the above infinite-dimensional dynamic optimization problem as mixed-integer optimal control problem, short (MIOCP), and to the control function [u,v] as a mixed-integer control. This accounts for the fact that we do not impose restrictions on the set U ad⊂U, while we can always identify the finite set V ad⊂V of the feasible control values for v with a finite number of integers

$$ V_{\mathrm{ad}}= \bigl\{v^1,\ldots,v^N\bigr\} \simeq \{1,\ldots,N\}. $$
(4)

Moreover, the operator differential equation (2) is an abstract representation of certain initial-boundary value problems governed by linear and semilinear partial differential equations, see, e.g., [20].

The existence of an optimal solution of the problem (MIOCP) depends, inter alia, on the spaces \(X_{[0,t_{f}]}\), \(U_{[0,t_{f}]}\) and \(V_{[0,t_{f}]}\) in which we seek z:[0,t f ]→X, u:[0,t f ]→U and v:[0,t f ]→V, respectively. Common choices are, for \(U_{[0,t_{f}]}\), the spaces of square integrable (L 2), piecewise k-times differentiable (\(C^{k}_{\mathrm{pw}}\)) or (piecewise) k-times weakly differentiable (\(H^{k}_{\mathrm{pw}}\)) functions u:[0,t f ]→U and, for \(V_{[0,t_{f}]}\), the spaces of essentially bounded (L ) or piecewise constant (PC) functions v:[0,t f ]→V. We defer these considerations by assuming later that there exists an optimal solution of a related (to a certain extent convexified and relaxed) optimal control problem, and we present sufficient conditions guaranteeing that the solution of the relaxed problem can be approximated with arbitrary precision by a solution satisfying the integer restrictions. We only assume that \(X_{[0,t_{f}]} \subset C([0,t_{f}];X)\), \(U_{[0,t_{f}]}\subset L^{1}(0,t_{f};U)\) and \(V_{[0,t_{f}]} \subset L^{1}(0,t_{f};V)\) to ensure that certain quantities in problem (MIOCP) are well-defined.

The idea of the relaxation method is most easily seen by writing problem (MIOCP) as a differential inclusion, that is, minimize (1) subject to the constraints that z is a solution of

$$ \left\{ \begin{aligned} &\dot{z}(t) \in Az(t) + \bigl\{f \bigl(t,z(t),u(t),v^i\bigr) : v^i \in V_{\mathrm {ad}} \bigr\}, \quad t \in(0,t_f] \\ &z(0)=z_0. \end{aligned} \right. $$
(5)

and u satisfies

$$ u(t) \in U_{\mathrm{ad}}, \quad t\in[0,t_f]. $$

It is well known that, under certain technical assumptions, the solution set of (5) is dense in the solution set of the convexified differential inclusion

$$ \left\{ \begin{aligned} &\dot{z}(t) \in Az(t) + \overline{\operatorname{co}}\bigl\{ f\bigl(t,z(t),u(t),v^i\bigr) : v^i \in V_{\mathrm{ad}}\bigr\}, \quad t \in(0,t_f] \\ &z(0)=z_0, \end{aligned} \right. $$
(6)

where \(\overline{\operatorname{co}}\) denotes the closure of the convex hull. This is proved in [6] for the case when X is a separable Banach space and in [2] for non-separable Banach spaces. While these results rely on powerful selection theorems, our main contribution is a constructive proof based on discretization, giving rise to a numerical method at the price of additional regularity assumptions.

We will see that the advantage of such a relaxation method is that the convexified problem, using a particular representation of (6), falls into the class of optimal control problems with partial differential equations without integer restrictions. The already known theory, in particular concerning existence, uniqueness and regularity of optimal solutions as well as numerical considerations such as sensitivities, error analysis for finite element approximations, etc., can thus be carried over to the mixed-integer problem under consideration here. We also discuss the possibility of including state constraints, for example time-periodicity constraints

$$ z(t_f)=z(0), $$
(7)

as occurring in chromatographic separation processes [5, 14]. The disadvantage of this approach is that we aim at a solution that is only suboptimal (though with arbitrary precision) and that switching costs, a standard regularization of mixed-integer problems to prevent chattering solutions, or additional combinatorial constraints can lead to larger optimality/feasibility gaps. Nevertheless, we will show how a-priori bounds for such a gap can be obtained when constraints on the number of switches are incorporated.

The framework we use for the analysis here will be semigroup theory. Recall that for given z 0∈X and given control functions u,v, the mild solution of the state equation (2) is given by a function z∈C(0,t f ;X) satisfying the variation of constants formula

$$ z(t) = T(t)z_0 + \int_0^t T(t-s)f\bigl(s,z(s),u(s),v(s)\bigr)\,ds,\quad0 \leq t \leq t_f $$
(8)

in the Lebesgue-Bochner sense. This abstract setting covers in particular the usual setup for weak solutions of linear parabolic partial differential equations with distributed control on reflexive Banach spaces where A arises from a time-invariant variational problem, see [1, Sect. 1.3].
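To make the structure of (8) concrete, the formula can be evaluated directly in a finite-dimensional surrogate setting where A is a matrix and T(t)=e^{tA}. The following Python sketch is only an illustration under that assumption; the names are ours, the controls u and v are assumed to be absorbed into the source term f(t,z), and the time stepping is a first-order (exponential-Euler type) approximation of the integral in (8).

```python
import numpy as np
from scipy.linalg import expm

def mild_solution(A, f, z0, t_grid):
    """First-order approximation of the variation-of-constants formula (8)
    for a matrix A (so T(t) = expm(t * A)) and a source term f(t, z):
    the step z_{k+1} = T(dt) (z_k + dt * f(t_k, z_k)) approximates
    z(t_{k+1}) = T(dt) z(t_k) + int_0^dt T(dt - s) f(t_k + s, z(t_k + s)) ds."""
    z = [np.asarray(z0, dtype=float)]
    for k in range(len(t_grid) - 1):
        dt = t_grid[k + 1] - t_grid[k]
        z.append(expm(dt * A) @ (z[-1] + dt * f(t_grid[k], z[-1])))
    return np.array(z)
```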

We explicitly include in our analysis the possibility of approximating the state equation (2) and say that \(\tilde{z} \in X_{[0,t_{f}]}\) is an ε-accurate solution of (2) if z is the mild solution and \(\|\tilde {z}(t)-z(t)\|_{X} \leq\varepsilon\) for all t∈[0,t f ]. Accordingly, we define for a given ε≥0 the set of ε-admissible solutions for (MIOCP) as

$$ \varXi_\varepsilon= \bigl\{ [u,v,z] \in U_{[0,t_f]} \times V_{[0,t_f]} \times X_{[0,t_f]} : z~\text{is an}~\varepsilon\text{-accurate solution of (2) for}~[u,v],\ u(t) \in U_{\mathrm{ad}},\ v(t) \in V_{\mathrm{ad}},\ t\in[0,t_f] \bigr\}. $$
(9)

Further, we denote throughout the paper by \(H^{1}_{\mathrm{pw}}(0,t_{f};X)\) the space of X-valued functions defined on the interval [0,t f ] that are piecewise once weakly differentiable with a piecewise defined weak derivative that is square-integrable in the Lebesgue-Bochner sense. Consistently, we denote by \(C^{0,\vartheta }_{\mathrm{pw}}(0,t_{f};X)\) the space of X-valued functions defined on the interval [0,t f ] that are piecewise Hölder-continuous with Hölder exponent ϑ. In both constructions, piecewise means that there exists a finite partition of the interval [0,t f ]

$$ 0=\tau_0<\tau_1< \tau_2<\cdots<\tau_{K_{t_f}}<\tau_{K_{t_f}+1}=t_f $$
(10)

so that the function has the respective regularity on all intervals [τ k ,τ k+1), \(k=0,\ldots,K_{t_{f}}\). We denote by ∥⋅∥ X the norm on X and by \(\|\cdot\|_{\mathcal{L}(X)}\) the operator norm induced by ∥⋅∥ X . Further, we denote by \(\|\cdot\|_{U_{[0,t_{f}]}}\) and \(\|\cdot\|_{V_{[0,t_{f}]}}\) the norm of \(U_{[0,t_{f}]}\) and \(V_{[0,t_{f}]}\), respectively. For simplicity of notation, we also define T(−t)=Id for all t>0, Id denoting the identity on X.

The paper is organized as follows. In Sect. 2, we present details of the relaxation method and the main results concerning estimates of the approximation error. In Sect. 3, we discuss extensions of the method to incorporate certain combinatorial constraints. In Sect. 4, we discuss applications for linear and semilinear equations and present numerical results for the heat equation with spatial scheduling of different actuators and a semilinear reaction-diffusion system with an on-off type control. In Sect. 5, we conclude with some additional remarks and point out open problems.

2 Relaxation method

Consider the following problem involving a particular representation of the convexified differential inclusion (6)

$$ \begin{aligned} \text{minimize} \quad & J(\omega,\alpha,y) = \phi\bigl(y(t_f)\bigr) + \int_0^{t_f} \psi\bigl(y(t),\omega(t)\bigr)\,dt &\quad& \text{(11a)} \\ \text{subject to} \quad & \dot{y}(t) = Ay(t) + \sum_{i=1}^N \alpha_i(t)\, f\bigl(t,y(t),\omega(t),v^i\bigr),\quad t \in(0,t_f], &\quad& \text{(11b)} \\ & y(0)=z_0, &\quad& \text{(11c)} \\ & \omega(t) \in U_{\mathrm{ad}},\quad t\in[0,t_f], &\quad& \text{(11d)} \\ & \alpha_i(t) \in[0,1],\quad i=1,\ldots,N,\ t\in[0,t_f], &\quad& \text{(11e)} \\ & \sum_{i=1}^N \alpha_i(t) = 1,\quad t\in[0,t_f], &\quad& \text{(11f)} \end{aligned} $$

where we write ω,α,y for u,v,z, respectively, to emphasize the relaxation of the original problem (MIOCP). Here, \(\tilde{V}_{[0,t_{f}]}=\{\alpha: [0,t_{f}] \to\mathbb{R}^{N} : \| \alpha\|_{V_{[0,t_{f}]}} < \infty\}\).

Observe that the control functions α i take values in the full interval [0,1], but that any optimal control [ω ,α ] of (11a)–(11f) yields an optimal mixed-integer control

$$ \bigl[u^*,v^*\bigr]:=\Biggl[\omega^*,\sum _{i=1}^N \alpha^*_i v^i \Biggr] $$
(12)

of problem (MIOCP) if α (t)∈{0,1}N for almost every t∈(0,t f ). However, it is not difficult to construct examples where α (t)∈(0,1)N for t on some interval of positive measure. Only in special cases does optimality of α (t)∈[0,1]N imply that α takes values only on the boundary of its feasible set. For examples where this property, known as the bang-bang principle, can be verified in the context of partial differential equations, see [28, Sect. 3.2.4] and the references therein. Moreover, the relaxed problem (11a)–(11f) can in most applications only be solved approximately.

Therefore, consider the following hypothesis.

(H0):

Problem (11a)–(11f) has an optimal solution in \(U_{[0,t_{f}]} \times\tilde{V}_{[0,t_{f}]} \times X_{[0,t_{f}]}\).

Under this assumption, we will accept ε-accurate optimal solutions of the relaxed problem (11a)–(11f) and propose in Algorithm 1 an iterative procedure to obtain from these solutions a mixed-integer control taking values in U ad×{0,1}N. We then show in Theorem 1 under certain technical assumptions that the optimal value and the optimal state of problem (11a)–(11f) can be approximated by the proposed procedure with arbitrary precision. To make our terminology precise, we include the following definition.

Algorithm 1

Relaxation method

Definition 1

Given some ε≥0, we say that \([\omega^{\varepsilon},\alpha^{\varepsilon},y^{\varepsilon}] \in U_{[0,t_{f}]} \times\tilde{V}_{[0,t_{f}]} \times X_{[0,t_{f}]}\) is an ε-accurate optimal solution of (11a)–(11f) if y ε is an ε-accurate solution of (11b)–(11c) with ω=ω ε and α=α ε, the constraints (11d)–(11f) hold with ω=ω ε and α=α ε and

$$ J\bigl(\omega^\varepsilon,\alpha^\varepsilon,y^\varepsilon\bigr) \leq \inf_{\omega,\alpha,y~\text{\scriptsize s.t. ({11b})--({11f})}} J(\omega,\alpha,y) + \varepsilon. $$
(13)

So any ε-accurate optimal solution of the relaxed problem (11a)–(11f) is admissible with respect to the control constraints on α and ω, it is ε-close to admissible with respect to the state variable y, and the corresponding value of the cost function is ε-close to the optimal value of problem (11a)–(11f).
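In outline, Algorithm 1 combines these ingredients roughly as follows. The Python sketch below is only a hedged paraphrase of the description in the text: the helper functions solve_relaxed_ocp, sum_up_rounding, simulate_state and refine are hypothetical placeholders, the halving of the accuracy ε^k is an assumption of ours, and the authoritative formulation (including formulas (A1)–(A3) and the two termination checks in steps 4 and 7) is Algorithm 1 itself.

```python
def relaxation_method(eps, grid0, eps0, solve_relaxed_ocp, sum_up_rounding,
                      simulate_state, cost, refine):
    """Hedged outline of the relaxation loop: relax, round, simulate, refine."""
    grid, eps_k = grid0, eps0
    while True:
        # eps_k-accurate optimal solution of the relaxed problem (11a)-(11f)
        omega, alpha, y, J_rel = solve_relaxed_ocp(grid, eps_k)
        # round the relaxed convex multipliers alpha to a {0,1}-valued beta on grid
        beta = sum_up_rounding(alpha, grid)
        u, v = omega, beta               # v encodes which v^i is active on each interval
        z = simulate_state(u, v, eps_k)  # eps_k-accurate solution of the rounded state eq.
        J_int = cost(u, v, z)
        # stop once the relaxed and integer costs are eps-close (cf. the checks in
        # steps 4 and 7 of Algorithm 1); otherwise refine the grid and the accuracy
        if abs(J_rel - J_int) <= eps / 2 and eps_k <= eps / 2:
            return u, v, z
        grid, eps_k = refine(grid), eps_k / 2
```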

Now, consider Algorithm 1 to obtain a mixed-integer control for problem (MIOCP). The main result is the following.

Theorem 1

Assuming (H0), let [ω ,α ,y ] denote an optimal solution of the relaxed problem (11a)(11f) and assume that the following assumptions hold true.

(H1):

The functions ϕ, ψ, and f satisfy the Lipschitz-estimates

$$ \begin{aligned} &\big|\phi(y_1)-\phi(y_2)\big| \leq\eta \|y_1-y_2\|_X, \\ &\big|\psi(y_1,\omega_1)-\psi(y_2,\omega_2)\big| \leq\xi \bigl(\|y_1-y_2\|_X + \|\omega_1-\omega_2\|_U\bigr), \\ &\big\|f\bigl(t,y_1,\omega_1,v^i\bigr)-f\bigl(t,y_2,\omega_2,v^i\bigr)\big\|_X \leq L \bigl(\|y_1-y_2\|_X + \|\omega_1-\omega_2\|_U\bigr) \end{aligned} $$

for all y 1,y 2∈X, ω 1,ω 2∈U ad, t∈[0,t f ] and i=1,…,N with positive constants η, ξ and L.

(H2):

For all i=1,…,N and t∈[0,t f ] the function

$$ s \mapsto T(t-s)f\bigl(s,y^*(s),\omega^*(s),v^i\bigr) $$

is in \(H^{1}_{\mathrm{pw}}(0,t_{f};X)\) and there exists a positive constant C i such that

$$ \biggl \Vert \frac{d}{ds}\bigl(T(t-s)f\bigl(s,y^*(s),\omega^*(s),v^i \bigr)\bigr)\biggr \Vert _X \leq C_i\quad\textit{a.e. in}~0<s<t<t_f. $$

Let \(C=\sum_{i=1}^{N} C_{i}\).

(H3):

For all i=1,…,N, there exists a positive constant M i such that

$$ \sup_{t\in[0,t_f]} \big\|f\bigl(t,y^*(t),\omega^*(t),v^i\bigr) \big\|_X \leq M_i. $$

Let \(M=\sum_{i=1}^{N} M_{i}\).

(H4):

The solution [ω ,α ,y ] is stable in the sense that there exists a positive constant C J such that

(14)

for [ω,α] in some neighborhood of [ω ,α ] in \(U_{[0,t_{f}]} \times\tilde{V}_{[0,t_{f}]}\).

Moreover, assume that ε k→0 and that the sequence \(\{ \mathcal{G}^{k}\}_{k}\) in Algorithm  1 is such that Δt k→0 with

$$ \Delta t^k = \max_{i=1,\ldots,n^k} \bigl \{t^k_{i}-t^k_{i-1}\bigr\}. $$
(15)

Define the constants \(C_{1}=(C_{J}(M+1)(1+\eta+\xi)e^{t_{f} \bar{M}L}+1)\), \(C_{2}=(M + t_{f} C)e^{t_{f} \bar{M}L}\), C 3:=(η+t f ξ)C 1+t f ξC J and C 4:=(η+t f ξ)C 2, where \(\bar{M}=\sup_{t \in[0,t_{f}]} \|T(t)\|_{\mathcal{L}(X)}\) and the constants η, ξ, M, C, L and C J are given by hypothesis (H1)(H4). Then, [u k,v k,z k] defined by Algorithm  1 is in \(\varXi_{\varepsilon^{k}}\) for all k=0,1,2,… and satisfies the estimates

$$ \big\|y^*(t)-z^k(t)\big\|_X \leq C_1\varepsilon^k+C_2(N-1)\Delta t^k,\quad t \in[0,t_f], $$
(16)

and

$$ \big|J\bigl(\omega^*,\alpha^*,y^*\bigr) - J \bigl(u^k,v^k,z^k\bigr)\big| \leq C_3 \varepsilon^k + C_4(N-1) \Delta t^k. $$
(17)

In particular, Algorithm  1 terminates in a finite number of steps with an ε-feasible mixed-integer solution [u ,v ,z ]∈Ξ ε of Problem (MIOCP) satisfying the estimate

$$ \big|J\bigl(\omega^*,\alpha^*,y^*\bigr) - J \bigl(u^*,v^*,z^*\bigr)\big| \leq\varepsilon, $$
(18)

where ε>0 is chosen arbitrarily in step 1.

Before we prove Theorem 1, we first prove a result saying that the deviation of two mild solutions in \(X_{[0,t_{f}]}\), equipped with the uniform norm, can be estimated in terms of the absolute value of the integrated difference of two linearly entering control functions. This estimate is non-standard and generalizes the result in [23] to a Banach space setting, noting that the absolute value of the integrated difference is not a norm and in particular not comparable to the \(V_{[0,t_{f}]}\)-norm as a natural choice. This estimate, together with an approximation result for this integrated difference, is the key ingredient for proving all a-priori estimates needed for the proof of Theorem 1.

Lemma 1

Let ε>0 and \(\bar {M}=\sup_{t \in [0,t_{f}]} \|T(t)\|_{\mathcal{L}(X)}\). Suppose that [ω ,α ,y ] is a feasible solution of the relaxed problem (11a)(11f) and assume that the hypotheses (H1)(H3) of Theorem 1 hold true. Let β=(β 1,…,β N )∈L (0,t f ;[0,1]N) be such that

$$ \max_{i=1,\ldots,N}\sup_{t\in[0,t_f]}\biggl \vert \int _0^t \alpha^*_i(\tau)- \beta_i(\tau)\,d\tau\biggr \vert \leq\varepsilon $$
(19)

and let z be the mild solution of (A3) in Algorithm  1 with \(\beta^{k}_{i}=\beta_{i}\), i=1,…,N, and ω k=ω . Then

$$ \big\|y^*(t)-z(t)\big\|_X \leq \bigl((M + Ct)e^{\bar{M}L t} \bigr) \varepsilon,\quad t\in[0,t_f]. $$
(20)

Proof

Fix t∈[0,t f ] and set, for the sake of brevity, δ(t)=∥y (t)−z(t)∥ X and f i(t,y(t))=f(t,y(t),ω (t),v i). Recalling (10) and using hypothesis (H2), let {τ 0,τ 1,…,τ K+1} be the set of partition points of the functions sT(ts)f i(s,y (s)) as objects in \(H^{1}_{\mathrm{pw}}(0,t;X)\), i=1,…,N, so that τ 0=0 and τ K+1=t. From the definition of the mild solutions for (11a)–(11f) and (A3), we have

$$ \delta(t) = \Biggl \Vert \sum_{i=1}^N \int_0^t T(t-s)f^i\bigl(s,y^*(s) \bigr)\alpha_i^*(s) - T(t-s)f^i\bigl(s,z(s)\bigr) \beta_i(s)\,ds \Biggr \Vert _X. $$

Adding 0=T(t−s)f i(s,y (s))β i (s)−T(t−s)f i(s,y (s))β i (s) under the integral, applying the triangular inequality and rearranging terms, this yields

Now using integration by parts in the second part, we obtain

Then by rearranging terms, noting that the appearing telescopic sum evaluates as

because of τ 0=0, τ K+1=t, T(t−t)=Id and \(\int_{0}^{0}\alpha_{i}^{*}(\vartheta)-\beta_{i}(\vartheta)\,d\vartheta = 0\), and by applying the triangular inequality this estimate simplifies to

Then, by definition of δ, f i and the constant \(\bar{M}\), the definition of the constants L, C and M in hypotheses (H1)–(H3), assumption (19) and the fact that β i (t)≤1, this yields

$$ \delta(t) \leq\bar{M}L \int_0^t \delta(s)\,ds + M \varepsilon+ Ct\varepsilon. $$

Finally, using the Gronwall lemma and rearranging terms, we obtain the desired estimate

$$ \delta(t) \leq \bigl((M + Ct)e^{\bar{M}L t} \bigr)\varepsilon. $$

 □

Next, we recall from [23] the following result on integral approximations.

Lemma 2

Let α=(α 1,…,α N ):[0,t f ]→[0,1]N be a measurable function satisfying \(\sum_{i=1}^{N} \alpha_{i}(t)=1\) for all t∈[0,t f ]. Define a piecewise constant function β:[0,t f ]→{0,1}N by

$$ \beta_i(t) = p_{i,j},\quad t\in[t_j,t_{j+1}), \quad i=1,\ldots ,N,\quad j=0,\ldots,n-1 $$
(21)

where for all i=1,…,N, j=0,…,n−1, p i,j is defined by (A2) in Algorithm  1 with \(p_{i,j}^{k}=p_{i,j}\). Then it holds for Δt=max i=1,…,n {t i −t i−1 }

  1. \(\max_{i=1,\ldots,N} \vert \int_{0}^{t} \alpha_{i}(\tau) - \beta_{i}(\tau) \,d\tau \vert \leq(N-1) \Delta t\) for all t∈[0,t f ],

  2. \(\sum_{i=1}^{N} \beta_{i}(t) = 1\) for all t∈[0,t f ].

Proof

See Theorem 5 of [23]. □
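The construction behind Lemma 2 can be sketched in a few lines. The code below implements a sum-up rounding rule in the spirit of [23] (activating on each interval the control with the largest accumulated deficit); it is an illustration under that assumption and not necessarily the literal formula (A2) of Algorithm 1.

```python
import numpy as np

def sum_up_rounding(alpha, dt):
    """alpha: (N, n) array of relaxed controls whose columns sum to one;
    dt: (n,) array of interval lengths of the grid (21).
    Returns a {0,1}-valued beta of the same shape with exactly one active
    control per interval, keeping the accumulated deviation
    |int_0^t (alpha_i - beta_i)| of the order (N - 1) * max(dt)."""
    N, n = alpha.shape
    beta = np.zeros_like(alpha)
    deficit = np.zeros(N)                 # accumulated integral of alpha_i - beta_i
    for j in range(n):
        deficit += alpha[:, j] * dt[j]
        i_star = int(np.argmax(deficit))  # largest accumulated deficit wins
        beta[i_star, j] = 1.0
        deficit[i_star] -= dt[j]
    return beta
```

For instance, for N=2 and the constant relaxed control α≡(1/2,1/2) on a uniform grid, this rule produces an alternating on–off pattern whose accumulated deviation never exceeds Δt/2.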

With the above two results we are now in the position to prove Theorem 1.

Proof of Theorem 1

Let the assumptions of Theorem 1 hold true. First we show that the sequence [u k,v k,z k] obtained by Algorithm 1 is ε k-feasible for the problem (MIOCP). We have u k=ω k∈U ad for all k by construction of ω k in step 3 and \(v^{k}(t) = \sum_{i=1}^{N} \beta_{i}^{k}(t) v^{i}\), t∈[0,t f ], by construction in step 6, so v k(t)∈V ad, t∈[0,t f ], because \(\beta_{i}^{k}(t) \in\{0,1\}\) for all i,k and t∈[0,t f ] as seen from (A1) and (A2). By construction in step 6, z k is an ε k-accurate solution of (A3). Thus, recalling (9), \([u^{k},v^{k},z^{k}] \in\varXi_{\varepsilon^{k}}\) for all k=0,1,2,… .

Next, we show (16). To this end, let y(⋅;ω k,β k), z(⋅;ω ,β k)=y(⋅;ω ,β k) and y(⋅;ω ,α k) denote the mild solutions of (11b), (11c) with the respective controls. The stability assumption (H4) and the continuity assumption (H1) imply that

(22)

for a.e. t∈(0,t f ), where we added 0=−J(ω k,α k,y k)+J(ω k,α k,y k), used the triangular inequality and that [ω k,α k,y k] is an ε-accurate optimal solution of (11a)–(11f). Moreover, by fixing t∈[0,t f ], adding 0=−y(t;ω k,β k)+y(t;ω k,β k), 0=−z(t;ω ,β k)+y(t;ω ,β k) and 0=−y(t;ω ,α k)+y(t;ω ,α k) and using the triangular inequality, we see that

$$ \big\|z^k(t)-y^*(t)\big\|_X \leq \delta_1(t) + \delta_2(t) + \delta_3(t) + \delta_4(t) $$
(23)

with

Observe that δ 1(t)≤ε k, because z k is an ε k-accurate solution of (A3) and thus of (11b), (11c) with controls ω k,β k. By definition of the mild solution and using (H1), we have

(24)

Using (22), the Gronwall inequality implies that

$$ \delta_2(t) \leq C_J(1+\eta+\xi)e^{\bar{M}Lt} \varepsilon^k. $$
(25)

From Lemma 2 with α=α k, β=β k and Δtt k with Δt k from (15), we get that

$$ \max_{i=1,\ldots,N} \biggl \vert \int_0^{t} \alpha^k_i(\tau)-\beta_i^k(\tau) \,d\tau\biggr \vert \leq(N-1) \Delta t^k,\quad t\in[0,t_f]. $$
(26)

Moreover, Lemma 1 used with y =y(⋅;ω ,α k), α =α k, β=β k and ε=(N−1)Δt k implies that

$$ \delta_3(t)=\big\|z\bigl(t;\omega^*,\beta^k\bigr)-y\bigl(t; \omega^*,\alpha^k\bigr)\big\|_X \leq \bigl((M + Ct)e^{\bar{M}L t} \bigr) (N-1)\Delta t^k. $$
(27)

Again by definition of the mild solution we have

(28)

Adding 0=α k(s)(−f(s,y(s;ω ,α ),ω (s),v i)+f(s,y(s;ω ,α ),ω (s),v i)) under the integral, we obtain

(29)

Using again (22) and the Gronwall inequality we obtain that

$$ \delta_4(t) \leq M C_J(1+\eta+\xi) e^{\bar{M}Lt} \varepsilon^k. $$
(30)

Thus, summing up the estimates for δ 1(t),…,δ 4(t) and rearranging terms we obtain from (23) that for all t∈[0,t f ]

$$ \big\|z^k(t)-y^*(t)\big\|_X \leq C_1 \varepsilon^k + C_2 (N-1)\Delta t^k $$
(31)

with \(C_{1}=(C_{J}(M+1)(1+\eta+\xi)e^{t_{f} \bar{M}L}+1)\) and \(C_{2}=(M + t_{f} C)e^{t_{f} \bar{M}L}\). This proves (16).

By definition of the cost function (1) we get from the triangular inequality that

(32)

so that using hypothesis (H1), (22) and (31) we obtain

$$ \big|J\bigl(\omega^*,\alpha^*,y^*\bigr) - J \bigl(u^k,v^k,z^k\bigr)\big| \leq C_3 \varepsilon^k + C_4 (N-1)\Delta t^k $$
(33)

with the constants C 3=(η+t f ξ)C 1+t f ξC J and C 4=(η+t f ξ)C 2. This proves (17).

Next, suppose that the main loop in Algorithm 1 terminates in step 4 or in step 7. In the first case, the termination criterion implies that

$$ \big|J\bigl(u^*,v^*,z^*\bigr)-J\bigl(\omega^*,\alpha^*,y^*\bigr)\big|=\big|J\bigl( \omega^k,\alpha^k,y^k\bigr)-J\bigl(\omega^*, \alpha^*,y^*\bigr)\big| \leq\varepsilon^k \leq \varepsilon $$
(34)

for some k, because [ω k,α k,y k] is an ε k-accurate optimal solution of (11a)–(11f). Similarly, in the second case the termination criterion implies that

(35)

for some k. This proves (18) under the assumption that Algorithm 1 terminates.

Finally suppose that Algorithm 1 loops infinitely many times, that is,

$$ \big|J^k_\mathrm{rel}-J^k\big| > \frac{\varepsilon}{2} \quad\text{or}\quad \varepsilon^k>\frac{\varepsilon}{2} \quad\text{for all } k=0,1,2,\ldots $$
(36)

By adding 0=−y (t)+y (t) and using that y k is an ε k-accurate optimal solution, we obtain from (31) that

(37)

Using that ε k→0 and Δt k→0 as k→∞ by assumption, we see from (37) that \(\sup_{t \in[0,t_{f}]} \|z^{k}(t) - y^{k}(t)\|_{X} \to0\) as k→∞. By definition of \(J^{k}_{\mathrm{rel}}\) and J k and using the triangular inequality we have

(38)

so that \(\sup_{t \in[0,t_{f}]} \|z^{k}(t) - y^{k}(t)\|_{X} \to0\) as k→∞ implies by continuity of ϕ and ψ that also \(|J^{k}_{\mathrm{rel}}-J^{k}| \to0\) as k→∞. Together with the assumption that ε k→0 as k→∞ this contradicts (36) and completes the proof. □

Theorem 1 can be seen as a performance analysis of the relaxation method proposed in Algorithm 1. The estimates (16) and (17) prove a bilinear dependency of the mixed-integer control approximation error for the differential state and the optimal value on the chosen maximal integer-control discretization mesh size Δt k and the accuracy ε k. This relates to the convergence speed of Algorithm 1 in terms of the chosen refinements for Δt k and ε k. Note that the estimates (16) and (17) suggest choosing Δt k and ε k of the same order. On the other hand, Theorem 1 can be regarded as an existence result for suboptimal solutions of (MIOCP). To emphasize this, we formulate the precise statement explicitly.

Corollary 1

Under the hypotheses (H0)–(H3), there exists for every ε>0 a feasible solution \((u^{\varepsilon},v^{\varepsilon},z^{\varepsilon}) \in U_{[0,t_{f}]} \times V_{[0,t_{f}]} \times X_{[0,t_{f}]}\) of problem (MIOCP) satisfying

$$ J\bigl(u^{\varepsilon},v^{\varepsilon},z^{\varepsilon}\bigr) \leq J\bigl(\omega^*,\alpha^*,y^*\bigr) + \varepsilon, $$
(39)

where (ω ,α ,y ) is the optimal solution of the relaxed problem (11a)(11f).

Proof

Apply Theorem 1 with ε k=0 for all k. Then (ω k,α k,y k)=(ω ,α ,y ) and it can be seen from the proof of Theorem 1 that (H4) is then not needed for the estimate (17). Moreover, Ξ 0 is contained in the feasible set of problem (11a)–(11f) and thus J(ω ,α ,y )≤J(u,v,z) for all (u,v,z)∈Ξ 0. Thus, (39) follows from (18). □

Hypothesis (H0) can be checked by classical arguments, cf., e.g., [28]. Hypotheses (H1)–(H3) are needed to estimate the proximity of mixed-integer solutions of (MIOCP) to the optimal solution of the relaxed problem (11a)–(11f), while hypothesis (H4) guarantees in a sense the proximity of the ε-accurate optimal solutions of (11a)–(11f) to the optimal ones. Hypotheses (H1) and (H3) are standard assumptions and can be weakened to appropriate ‘local’ versions using standard arguments. This can also tighten the estimates in Theorem 1. The last conclusion of Theorem 1 concerning the termination of Algorithm 1, as well as Corollary 1, even holds when the functions ϕ and ψ are merely continuous, as can be seen from the respective proofs. Hypothesis (H4) can be verified for problems that are well-posed in the Tikhonov sense, cf. the discussion in [15, Sect. 4]. For sufficiently regularized parabolic problems it could also be checked using methods as in [19]. Alternatively, instead of invoking Theorem 1, Corollary 1 or the weaker conclusion presented in Proposition 2 below may be used, where (H4) is not needed.

Hypothesis (H2) of Theorem 1 clearly imposes certain regularity assumptions on the linear operator A generating the semigroup {T(t)} t≥0 and on the function f, but also on the time regularity of the optimal control functions of the relaxed problem (11a)–(11f) in \(U_{[0,t_{f}]}\) and \(\tilde{V}_{[0,t_{f}]}\). The main difficulty with proving (H2) is that y as a solution of (8) with A unbounded may only be continuous and not absolutely continuous in time, hence not necessarily differentiable almost everywhere, in contrast to the case A=0 (with T(⋅)=Id) and X finite dimensional, where this is always true. This can be delicate in particular for nonlinear systems. We will therefore discuss hypothesis (H2) by way of example in Example 2 below for the case of a semilinear system where A is the generator of an analytic semigroup. A more general analysis is possible for linear systems

$$ \dot{z}(t)=Az(t)+f\bigl(t,u(t),v(t)\bigr),\quad z(0)=z_0 $$
(40)

when f is sufficiently smooth. We formulate this as an auxiliary result.

Proposition 1

Consider the problem (MIOCP) with equation (2) replaced by equation (40), let \(\bar{M}=\sup_{t \in[0,t_{f}]}\|T(t)\|_{\mathcal{L}(X)}\) and suppose that the functions g i:[0,t f ]→X defined by g i(t)=f(t,ω (t),v i), i=1,…,N, satisfy the following conditions.

  (i)

    g i(t)∈D(A) for a.e. t∈[0,t f ] and there exist constants \(\bar{L}^{i}\) such that

    $$ \operatorname{ess\,sup}_{t \in[0,t_f]}\bigl \Vert A g^i(t)\bigr \Vert _X \leq\bar{L}^i. $$

  (ii)

    g i is differentiable for a.e. t∈[0,t f ] and there exist constants \(\bar{C}^{i}\) such that

    $$ \operatorname{ess\,sup}_{t \in[0,t_f]}\biggl \Vert \frac {d}{dt}g^i(t) \biggr \Vert _X \leq\bar{C}^i. $$

Then hypothesis (H2) of Theorem 1 holds with \(C_{i}:=\bar{M} (\bar{C}^{i}+\bar{L}^{i} )\).

Proof

From conditions (i) and (ii) and the chain rule we get

$$ \frac{d}{ds} T(t-s)g^i(s) = T(t-s)\frac{d}{ds}g^i(s)-T(t-s)Ag^i(s) $$

for all i=1,…,N and thus, by taking the norm, applying the triangular inequality and using the definition of the constants in (i) and (ii) we obtain

$$ \biggl \Vert \frac{d}{ds} T(t-s)f\bigl(s,\omega^*(s),v^i\bigr) \biggr \Vert _X \leq\bar {M} \bigl(\bar{C}^i+ \bar{L}^i \bigr). $$

 □

The conditions (i) and (ii) are a natural extension of the differentiability assumptions imposed in [23, Corollary 6] for the case when A=0 and X=ℝn. In Sect. 4, we will use such arguments in order to verify hypothesis (H2) in Example 1.

We note that the relaxation method works under much weaker assumptions with slightly weaker conclusions. Suppose that we replace the main hypothesis (H0) by the following much weaker hypothesis.

(\(\mathrm{H}_{0}'\)):

Problem (11a)–(11f) has a feasible solution in \(U_{[0,t_{f}]} \times\tilde{V}_{[0,t_{f}]} \times X_{[0,t_{f}]}\).

Then we still get the following result, which is useful in particular in many practical applications where the solutions [ω k,α k,y k] found in step 3 satisfy, for example, only necessary optimality conditions (up to an accuracy of ε k). This conclusion can then still be used to provide bounds for the mixed-integer problem (MIOCP), and we will take advantage of this when discussing the examples in Sect. 4. Nevertheless, an approximation of a globally optimal solution of problem (MIOCP) can of course only be obtained when the relaxed problem in step 3 is solved to ε k-global optimality.

Proposition 2

Under the hypothesis (\(\mathrm{H}_{0}'\)), consider Algorithm  1 where we replace step 3 by

3′:

Select some [ω k,α k,y k] such that ω k and α k are feasible for problem (11a)–(11f) and y k is an ε k-accurate solution of (11b), (11c).

Assume hypothesis (H1) and that hypotheses (H2) and (H3) hold with ω =ω k and y =y(⋅;ω k,α k) with constants C k and M k for all k=0,1,2,…, where y(⋅;ω k,α k) is the mild solution of (11b), (11c). Further assume that, as in Theorem 1, ε k→0 and that the sequence \(\{\mathcal{G}^{k}\}_{k}\) is such that Δt k→0. Define the constants C 1=2, \(C_{2}^{k}= ((M^{k} + t_{f} C^{k})e^{t_{f} \bar{M}L} )\), C 3=(η+t f ξ)C 1 and \(C_{4}^{k}=(\eta+t_{f} \xi) C^{k}_{2}\), where \(\bar{M}=\sup_{t \in[0,t_{f}]} \|T(t)\|_{\mathcal{L}(X)}\) and the constants η, ξ and L are given by hypothesis (H1). Then, [u k,v k,z k] defined by Algorithm  1 is in \(\varXi_{\varepsilon^{k}}\) for all k=0,1,2,… and satisfies the estimates

$$ \big\|y^k(t)-z^k(t)\big\|_X \leq C_1 \varepsilon^k + C_2^k(N-1) \Delta t^k,\quad t \in[0,t_f], $$
(41)

and

$$ \big|J\bigl(\omega^k, \alpha^k,y^k\bigr) - J\bigl(u^k,v^k,z^k \bigr)\big| \leq C_3 \varepsilon^k + C_4^k (N-1)\Delta t^k. $$
(42)

In particular, if M k and C k can be chosen independently of k, then Algorithm  1 terminates in a finite number of steps κ with an ε-feasible mixed-integer solution [u ,v ,z ]∈Ξ ε of Problem (MIOCP) satisfying the estimate

$$ \big|J\bigl(\omega^\kappa, \alpha^\kappa,y^\kappa\bigr) - J\bigl(u^*,v^*,z^*\bigr)\big| \leq \varepsilon, $$
(43)

where ε>0 was chosen arbitrarily in step 1.

Proof

Let the assumptions of Proposition 2 hold true. First observe that the sequence [u k,v k,z k] obtained by Algorithm 1 with step 3 replaced by 3′ is ε k-feasible for the problem (MIOCP) by the same arguments as in the proof of Theorem 1.

To show (41) and (42), let y(⋅;ω k,α k) denote the mild solution of (11b), (11c) and z(⋅;u k,β k) be the mild solution of (A3) with the respective controls. Then,

$$ \big\|y^k(t)-z^k(t)\big\|_X \leq\big\|y\bigl(t; \omega^k,\alpha^k\bigr) - z\bigl(t;u^k, \beta^k\bigr)\big\|_X + 2\varepsilon^k,\quad t \in[0,t_f] $$
(44)

because y k and z k are both ε k-accurate solutions of (11b), (11c) and (A3), respectively. From Lemma 2 with α=α k, β=β k, and Δtt k with Δt k from (15), we get that

$$ \max_{i=1,\ldots,N} \biggl \vert \int_0^{t} \alpha^k_i(\tau)-\beta_i^k(\tau) \,d\tau\biggr \vert \leq(N-1) \Delta t^k,\quad t\in[0,t_f]. $$
(45)

Moreover, under hypothesis (H1) and the assumption that the hypothesis (H2) and (H3) hold with ω =ω k and y =y k with constants C k and M k for all k=0,1,2,… , we may apply Lemma 1 with y =y(⋅;ω k,α k), α =α k, β=β k, ω =ω k=u k and ε=(N−1)Δt k and obtain that

$$ \big\|y\bigl(t;\omega^k,\alpha^k\bigr) - z \bigl(t;u^k,\beta^k\bigr)\big\|_X \leq \bigl( \bigl(M^k + C^k t\bigr)e^{\bar{M}L t} \bigr) (N-1)\Delta t^k,\quad t\in[0,t_f]. $$

This proves (41) with C 1=2 and \(C_{2}^{k}= ((M^{k} + t_{f} C^{k})e^{t_{f} \bar{M}L} )\). Using again the continuity assumptions of ϕ and ψ in (H1), we obtain similarly as in the proof of Theorem 1 that

$$ \big|J\bigl(\omega^k,\alpha^k,y^k\bigr)-J \bigl(u^k,v^k,z^k\bigr)\big| \leq C_3 \varepsilon^k + C_4^k (N-1)\Delta t^k $$
(46)

with C 3=C 1(η+t f ξ) and \(C_{4}^{k}=(\eta+ t_{f} \xi)C_{2}^{k}\). This proves (42).

Finally, when the constants C k and M k can be chosen independently of k, ε k→0 and Δt k→0, then \(C_{4}^{k}\) is independent of k and we get by similar arguments as in the proof of Theorem 1 that the iteration terminates after a finite number of steps such that (43) holds. □

In order to solve the optimal control problem (11a)–(11f) numerically, the problem may have to be (adaptively) discretized. In particular, direct or indirect numerical methods may be used. For an introduction to the basic concepts see, e.g., [12]. Depending on the method of choice for the time discretization of the control functions \(\omega\in U_{[0,t_{f}]}\) and \(\alpha\in\tilde{V}_{[0,t_{f}]}\) it may in many cases be advantageous to discretize ω k and α k simultaneously using the grid \(\mathcal{G}^{k}\). This is for example implemented in the software package MS MINTOC designed for solving mixed-integer optimal control problems with ordinary differential equations [22, 24].

We conclude this section with a remark showing that, in the spirit of Theorem 1 and Proposition 2, the relaxation method can also deal with state constraints.

Remark 1

Suppose that we wish to include a constraint of the form

$$ G\bigl(z(t),t\bigr)\geq0,\quad t \in[0,t_f] $$
(47)

in the mixed-integer optimal control problem (MIOCP). Including this constraint also in (11a)–(11f) with z(t) replaced by y(t) and assuming that there exists a function ζ∈L (0,t f ) such that

$$ \big|G(y_1,t)-G(y_2,t)\big| \leq\zeta(t) \|y_1 - y_2\|_X,\quad y_1,y_2 \in X $$
(48)

then (16) yields that

$$ \big|G\bigl(y^*(t),t\bigr)-G\bigl(z^k(t),t \bigr)\big| \leq\zeta(t)C_1 \varepsilon^k + \zeta (t)C_2(N-1)\Delta t^k $$
(49)

with C 1 and C 2 as in Theorem 1. This also shows a bilinear dependency of the state-constraint violation of the integer-control approximation on Δt k and ε k. The conclusion of Proposition 2 can be adapted accordingly.

3 Combinatorial constraints

Suppose we wish to include combinatorial constraints of the form

$$ \#_{v^i \curvearrowright v^j}(v) \leq K^{i,j},\quad i\in I, \quad j\in J $$
(50)

into the mixed-integer optimal control problem (MIOCP) given by (1)–(3), where \(\#_{v^{i} \curvearrowright v^{j}}(v)\) denotes the number of switches of the control function v:[0,t f ]→V ad from value v i to value v j, K i,j are given, non-negative constants and I,J⊂{1,…,N}.

Note that the relaxation method considered in Sect. 2 typically satisfies

$$ \#_{v^i \curvearrowright v^j}(v) \to+\infty $$

for some i,j∈{1,…,N} as we let ε→0, thus eventually violating (50) for small ε. Therefore, along the lines of [25], we propose in Algorithm 2 a modification of Algorithm 1. The min-max problem (A5) can be written as a standard mixed-integer linear problem (MILP) using slack variables and can be solved efficiently [25]. We then have the following result.

Algorithm 2

Relaxation method with switching constraints

Theorem 2

Suppose that the hypotheses of Theorem 1 hold true and let C 1, C 2, C 3 and C 4 be as in Theorem 1. Then Algorithm  2 terminates for every ε>0 and k max≥0 after a finite number of steps κ≤k max with an ε κ-feasible solution \([u^{*},v^{*},z^{*}] \in\varXi_{\varepsilon^{\kappa}}\) of problem (MIOCP) satisfying the combinatorial constraints (50) and the estimates

$$ \big\|y^*(t)-z^*(t)\big\|_X \leq C_1 \varepsilon^{\kappa} + C_2 \bigl(J_{\mathrm {sub}} \bigl(p^{\kappa,*}\bigr)+\delta\bigr),\quad t \in[0,t_f], $$
(51)

and

$$ \big|J\bigl(\omega^*,\alpha^*,y^*\bigr)-J \bigl(u^*,v^*,z^*\bigr)\big| \leq C_3 \varepsilon^{\kappa} + C_4 \bigl(J_{\mathrm{sub}}\bigl(p^{{\kappa},*}\bigr)+\delta\bigr) $$
(52)

for J sub(p κ,∗) given by Algorithm  2 and some \(0 \leq\delta\leq\max_{l=1,\ldots,n^{\kappa }} \Delta t_{l}^{\kappa}\).

Proof

Algorithm 2 terminates by the criterion in step 7′ after κ steps, κ≤k max, with a solution [u ,v ,z ]. We first show that this solution is ε κ-feasible for the problem (MIOCP). We have u =ω κ∈U ad by definition of ω κ in step 3 and \(v^{\kappa}(t) = \sum_{i=1}^{N} \beta_{i}^{\kappa}(t) v^{i}\), t∈[0,t f ], by definition in step 6, so v κ(t)∈V ad, t∈[0,t f ], because \(\beta_{i}^{k}(t) \in\{0,1\}\) for all i,k and t∈[0,t f ] as seen from (A4) and the constraints in (A5). By construction in step 6, z κ is an ε κ-accurate solution of (A3). Thus, recalling (9), \([u^{*},v^{*},z^{*}] \in\varXi_{\varepsilon^{\kappa}}\). The constraints in (A5) also ensure that the combinatorial constraints (50) are satisfied.

Next we show (51). For all k=0,1,… the cost function in (A5) is defined as

$$ J_{\mathrm{sub}}\bigl(p^{k,*}\bigr)=\max_{i=1,\ldots,N} \max_{r=1,\ldots ,n^k} \Biggl \vert \sum_{l=1}^r \bigl(q_{i,l}^k-p_{i,l}^k\bigr) \Delta t_l^k \Biggr \vert . $$
(53)

By definition of \(q_{i,l}^{k}\) in (A6) and \(\beta_{i}^{k}(t)\) in (A4) and rearranging terms, we get

$$ J_{\mathrm{sub}}\bigl(p^{k,*}\bigr)=\max_{i=1,\ldots,N} \max_{r=1,\ldots ,n^k} \biggl \vert \int_0^{t_{r+1}} \alpha_i^k(t)-\beta_i^k(t)\,dt \biggr \vert . $$
(54)

Using that \(\alpha_{i}^{k}(t) \in[0,1]\) and \(\beta_{i}^{k}(t) \in\{0,1\}\) for all t∈[0,t f ], this yields

$$ J_{\mathrm{sub}}\bigl(p^{k,*}\bigr)=\max_{i=1,\ldots,N} \sup_{t \in[0,t_f]} \biggl \vert \int_0^{t} \alpha_i^k(\tau)-\beta_i^k(\tau) \,d\tau\biggr \vert - \delta $$
(55)

for some \(0 \leq\delta\leq\max_{l=1,\ldots,n^{k}} \Delta t_{l}^{k}\). Fixing some t∈[0,t f ], we have as in the proof of Theorem 1

$$ \big\|z^k(t)-y^*(t)\big\|_X \leq\delta_1(t)+ \delta_2(t)+\delta_3(t)+\delta_4(t), $$
(56)

with δ i (t), i=1,…,4, as in the proof of Theorem 1. Moreover, as in the proof of Theorem 1, we see that δ 1(t)≤ε k, \(\delta_{2}(t) \leq C_{J}(1+\eta+\xi )e^{\bar{M}Lt}\varepsilon^{k}\) and \(\delta_{4}(t)\leq M C_{J}(1+\eta+\xi) e^{\bar{M}Lt}\varepsilon^{k}\). Using (H1)–(H3), we can apply Lemma 1 with y =y(⋅;ω ,α k), α =α k, β=β k and ε=J sub(p k,∗)+δ and get that

$$ \delta_3(t)=\big\|y\bigl(t;\omega^*,\beta^k\bigr)-y\bigl(t; \omega^*,\alpha^k\bigr)\big\|_X \leq(M + C t)e^{\bar{M}L t} \bigl(J_{\mathrm{sub}}\bigl(p^{k,*}\bigr)+\delta\bigr). $$
(57)

Summing up the estimates for δ 1(t),…,δ 4(t) and rearranging terms, this proves (51) with the definition of C 1 and C 2 as in Theorem 1. The estimate (52) then follows from (51) and the definition of the constants C 3 and C 4 as in Theorem 1 using the definition of the cost function J in (1) and the Lipschitz constants η and ξ from (H1). This completes the proof of Theorem 2. □
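As noted at the beginning of this section, the min-max problem (A5) can be recast as a standard MILP using slack variables. A hedged sketch of one such reformulation, based on the objective (53) and the switching counts (50), is given below; the slack variable θ and the binary transition indicators σ are our notation, and the exact constraint set used in [25] may differ.

$$ \begin{aligned} \min_{p,\,\sigma,\,\theta}\quad & \theta\\ \text{s.t.}\quad & -\theta \;\le\; \sum_{l=1}^{r} \bigl(q_{i,l}-p_{i,l}\bigr)\Delta t_l \;\le\; \theta, && i=1,\ldots,N,\ r=1,\ldots,n,\\ & \sum_{i=1}^{N} p_{i,l} = 1, && l=1,\ldots,n,\\ & \sigma^{i,j}_{l} \;\ge\; p_{i,l-1}+p_{j,l}-1, && i\in I,\ j\in J,\ i\neq j,\ l=2,\ldots,n,\\ & \sum_{l=2}^{n} \sigma^{i,j}_{l} \;\le\; K^{i,j}, && i\in I,\ j\in J,\ i\neq j,\\ & p_{i,l}\in\{0,1\},\quad \sigma^{i,j}_{l}\in\{0,1\},\quad \theta\ge 0. \end{aligned} $$

At an optimum, θ equals the accumulated deviation J sub(p), and σ^{i,j}_l is forced to one whenever the rounded control switches from v i on interval l−1 to v j on interval l, so the last two constraint groups enforce (50).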

Remark 2

As already remarked in the case without combinatorial constraints, the method can also deal with state constraints such as (47). Assuming again the existence of a function ζ∈L (0,t f ) such that (48) holds true, (51) yields a bounded deviation from the feasible reference trajectory

$$ \big|G\bigl(y^*(t),t\bigr)-G\bigl(z^k(t),t \bigr)\big| \leq\zeta(t)C_1\varepsilon^k + \zeta (t)C_2\bigl(J_{\mathrm{sub}}\bigl(p^{k,*}\bigr)+\delta \bigr), $$
(58)

and hence a bound on the worst case constraint violation. Also, the conclusion of Proposition 2 can be adapted analogously.

4 Examples

In this section we discuss the hypotheses (H1)–(H3) of Theorem 1, in view of Proposition 2, for a linear and a semilinear control problem where A is the generator of an analytic semigroup, and we present numerical results for a test problem in each case using the conclusions.

4.1 A linear parabolic equation with lumped controls

Let Ω be a domain in ℝn and f i :Ω→ℝ, i=1,…,N, be fixed control profiles. Consider the internally controlled heat equation

$$ \left\{ \begin{aligned} &\frac{\partial z}{\partial t}(x,t) - \rho\sum_{j=1}^n \frac {\partial^2 z}{\partial x^2_j}(x,t) = f_{\sigma(t)}(x)u(t),\quad\text{in}~Q \\ &z(x,t) = 0,\quad\text{on}~\varSigma \\ &z(x,0)=z_0(x),\quad\text{in}~\varOmega \end{aligned} \right. $$
(59)

where Q=Ω×(0,t f ), Σ=∂Ω×(0,t f ) and ρ is a positive constant.

Suppose that for some λ 1≥0 and λ 2>0 the control task is to minimize the cost function

$$ J = \int_\varOmega\big|z(t_f,x)\big|^2 \,dx + \lambda_1 \int_0^{t_f} \int _\varOmega \big|z(t,x)\big|^2\,dx\,dt + \lambda_2 \int_0^{t_f} \big|u(t)\big|^2\,dt $$
(60)

where z is the weak solution of (59), by selecting u:[0,t f ]→ℝ and a switching signal σ(⋅):[0,t f ]→{1,…,N} that determines the control profile f i applied at time t∈[0,t f ].

In order to write the above problem in abstract form (2), we let X=L 2(Ω), set V=U=U ad=ℝ, V ad={1,…,N} and define f:[0,t f ]×U×V→X by f(t,u,v)(x):=f v (x)u, \(\phi(z)=\|z\|^{2}_{X}\), \(\psi(z,u)=\lambda_{1}\|z\|^{2}_{X} + \lambda_{2}|u|^{2}\) and define (A,D(A)) as

$$ \begin{aligned} &D(A) = H^2(\varOmega) \cap H^{1}_0(\varOmega) \\ &(Az) (x) = \rho\sum_{j=1}^n \frac{\partial^2 z}{\partial x^2_j} (x),\quad z \in D(A). \end{aligned} $$
(61)

It is well-known that (A,D(A)) is the generator of a strongly continuous (analytic) semigroup of contractions {T(t)} t≥0 on X, see, e.g., [20]. We choose \(X_{[0,t_{f}]}=C([0,t_{f}];X)\), \(U_{[0,t_{f}]}=PC(0,t_{f};\mathbb{R})\) and \(V_{[0,t_{f}]}=L^{\infty}(0,t_{f};\mathbb{R})\).
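To make the identification with the abstract form (2) concrete, the following Python sketch builds a finite-difference surrogate of this setup (a 1D Dirichlet Laplacian with a few Gaussian-type profiles; the grid, the centers and all names are illustrative assumptions of ours, not the discretization used later in Example 1).

```python
import numpy as np

# Illustrative 1D finite-difference surrogate for Omega = (0, 1) with Dirichlet
# boundary conditions.
n, rho = 50, 0.01
x = np.linspace(0.0, 1.0, n + 2)[1:-1]          # interior nodes
h = x[1] - x[0]

# surrogate for A: rho times the discrete Laplacian on the interior nodes
A = rho * (np.diag(-2.0 * np.ones(n))
           + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / h**2

# fixed control profiles f_1, ..., f_N, here smooth bumps around chosen centers
centers = [0.25, 0.5, 0.75]
profiles = [np.exp(-(x - c) ** 2 / (2 * 0.01)) for c in centers]

def rhs(t, z, u, v):
    """A z + f(t, u, v) as in (2): the integer value v in {0, ..., N-1}
    selects the active actuator profile f_v, and u is the ordinary control."""
    return A @ z + profiles[v] * u
```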

Let [ω k,α k,y k] be a sequence of feasible solutions of the corresponding relaxed problem (11a)–(11f) so that \([\omega^{k},\alpha^{k},y^{k}] \in\mathcal{S}\) for k=0,1,2,… , where \(\mathcal{S}\) is a bounded subset of \(X_{[0,t_{f}]} \times U_{[0,t_{f}]} \times\tilde{V}_{[0,t_{f}]}\) and assume that the fixed control profiles f i satisfy

$$ f_i \in D(A)\quad \text{for all}~i=1, \ldots,N. $$
(62)

We now want to check whether the assumptions of Proposition 2 are satisfied. For this, we can restrict our analysis without loss of generality to \(\mathcal{S}\) and, using that the functions ϕ, ψ and f are locally Lipschitz continuous and \(\mathcal {S}\) is bounded, we can see that hypothesis (H1) holds. Moreover, due to (62) we have that f(t,ω k(t),v i)=f i (x)ω k(t)∈D(A) and

$$ \bigl \Vert Af\bigl(t,\omega^k(t),v^i\bigr)\bigr \Vert _X \leq\|A f_i\|_X \big| \omega^k(t)\big| \leq\bar{L}^i $$
(63)

for some constant \(\bar{L}^{i}\) and, due to the choice of \(U_{[0,t_{f}]}\), we have that f(s,ω k(s),v i) is differentiable for a.e. s∈[0,t f ] and

$$ \biggl \Vert \frac{d}{ds} f\bigl(s,\omega^k(s),v^i \bigr) \biggr \Vert _X = \biggl \Vert \frac{d}{ds} f_i \omega^k(s) \biggr \Vert _X \leq \|f_i\|_X \biggl \vert \frac{d}{ds} \omega^k(s)\biggr \vert \leq\bar{C}^i $$
(64)

for some constant \(\bar{C}^{i}\). Noting that (59) and thus also the abstract system is linear, we may apply Proposition 1 to see that hypothesis (H2) holds for every k=0,1,2,… . Also, the bound in (H3) holds for all k, observing that

$$ \sup_{t\in(0,t_f)} \big\|f\bigl(t,\omega^k(t),v^i\bigr) \big\|_X \leq \|f_i\|_X \big\|\omega^k \big\|_\infty\leq M^i, $$
(65)

for some constant M i. Hence, for any such choices [ω k,α k,y k], any sequence ε k→0, Δt k→0 and any ε>0, the relaxation method terminates after finitely many steps κ with an ε-feasible solution [u ,v ,z ] of problem (59) satisfying the estimate

$$ \big|J\bigl(\omega^\kappa,\alpha^\kappa,y^\kappa\bigr) - J \bigl(u^*,v^*,z^*\bigr)\big| \leq \varepsilon, $$
(66)

by Proposition 2. The desired switching structure σ:[0,t f ]→{1,…,N} is finally given by σ(t)=v (t).

Example 1

To demonstrate the applicability of the approach, we implemented the relaxation method for a test problem of the form (59)–(60) with a two-dimensional rectangular domain Ω and the following parameters.

Let Ω=[0,L ξ ]×[0,L ζ ], ρ=0.01, L ξ =1, L ζ =2 and t f =15 and suppose that there are given 9 actuator locations x i with the positions given by (ξ j ,ζ k )∈Ω, where

$$ \xi_j = \frac{j+0.005 L_\xi}{4},\quad \zeta_k = \frac{k+0.005 L_\zeta }{4},\quad j,k=1,2,3. $$
(67)

Further suppose that there is a point actuator for each of these locations x i which we model here by setting f i =B i with

$$ B_i(x) = \frac{1}{\sqrt{2 \pi\epsilon}}e^{\frac {-(x_i-x)^2}{2\epsilon}} $$
(68)

for some small, but fixed ϵ>0. Note that ∫ Ω B i (x) dx=1 and that B i (x) converges to the Dirac delta function δ(xx i ) as ϵ→0.

As initial data we take

$$ z_0(\xi,\zeta)=10\sin(\pi\xi)10\sin(\pi\zeta) $$
(69)

and as parameters in the cost function we take λ 1=2 and \(\lambda_{2}=\frac{1}{500}\).

We have chosen these numerical values to match as closely as possible the two-dimensional example in [13] motivated by thermal manufacturing. The only difference is that the pointwise actuators δ(x−x i ) were approximated in [13] by indicator functions of an epsilon neighborhood, while we choose here a smoother approximation in view of (62). Regarding a direct treatment of δ(x−x i ) as an unbounded control operator instead of using the bounded approximation (68), see the comments in Sect. 5.

The solution of the relaxed optimal control problem (11a)–(11f) has been computed numerically. We discretized the state equation (59) in space using a standard Galerkin approach with triangular elements and linear Ansatz-functions. We eliminated one control by setting \(\tilde{\alpha}_{i}(t)=\alpha_{i}(t)\), i=1,…,N−1, where we then get \(\alpha_{N}(t)=1-\sum_{i=1}^{N-1}\tilde{\alpha }_{i}(t)\) using the constraint \(\sum_{i=1}^{N} \alpha_{i}(t) = 1\), t∈[0,t f ]. This constraint is then always fulfilled, and the condition α N (t)∈[0,1], t∈[0,t f ], is equivalent to imposing that \(\tilde{\alpha}_{i} \in[0,1]\) and \(\sum_{i=1}^{N-1} \tilde{\alpha}_{i} - 1 \leq0\). The resulting semi-discretized control problem was solved with Bock’s direct multiple shooting method [3, 17] implemented in the software-package MUSCOD-II. The control functions ω and \(\tilde{\alpha}\) were chosen as piecewise constant and initialized with ω(t)=0 and \(\tilde{\alpha}_{i}(t)=\frac{1}{9}\), t∈[0,t f ], i=1,…,8.
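A minimal sketch of this control elimination, with illustrative names and for the values on a single shooting interval, makes the equivalence of the two constraint sets explicit:

```python
def recover_full_alpha(alpha_tilde):
    """Recover alpha_N from the convexity constraint sum_i alpha_i = 1, given the
    reduced multipliers alpha_tilde_1, ..., alpha_tilde_{N-1} (illustrative sketch)."""
    s = sum(alpha_tilde)
    # alpha_tilde_i in [0, 1] and s <= 1 is equivalent to alpha_N = 1 - s in [0, 1]
    assert all(0.0 <= a <= 1.0 for a in alpha_tilde) and s <= 1.0
    return list(alpha_tilde) + [1.0 - s]
```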

The computations were made for an unstructured grid with 162 triangular elements and 8, 16 and 32 equidistant shooting intervals. Time integration was carried out by a BDF method and sensitivities were computed using internal numerical differentiation. Error estimates for these methods provide an accuracy of some ε 1≥0 for the approximations of the mild solutions obtained in this way.

We implemented Algorithm 1 where we compute in step 3 solutions satisfying first order necessary conditions with an accuracy of ε 2. The above discussion of the abstract example thus applies. In particular, using that λ 2>0, we obtain the existence of a bounded set \(\mathcal {S}\) containing the iterates [ω k,α k,y k]. We adaptively solved the relaxed problem on a common control discretization grid \(\mathcal{G}^{k}\) for u and \(\tilde{\alpha}\). For the computations, we have chosen ε 1=1.0E−04 and used bisection for refinements of the control grids \(\mathcal{G}^{k}\) in step 7. Thus, ε k is given implicitly as a function of ε 1, ε 2 and Δt k. This construction ensures that ε k→0 as k→∞.

The performance of the relaxation method is summarized in Table 1. We see that the relative error of the mixed-integer solution compared with the best found relaxed solution decreases with the grid refinements, in accordance with Proposition 2. The best found controls and the evolution of the state norm of the corresponding solution are displayed in Fig. 1. We see the rounding error in the form of an overshooting behavior when comparing the evolution of the L 2(Ω)-norm of the relaxed and the mixed-integer solution. This effect decreases with the time discretization step size. The cost corresponding to the best found solution is 6175. Unfortunately, [13] does not report the cost of the best found solution, but a cumulative L 2(0,15;L 2(Ω))-norm of 90.27. The cumulative L 2(0,15;L 2(Ω))-norm of our best found solution is 78.58.

Fig. 1

Numerical results for Example 1. The upper figures show the best found integer controls \(v_{i}^{*}(\cdot)\) and their relaxation \(\tilde{\alpha}_{i}^{*}(\cdot)\), from bottom to top, i=1,…,4 (left) and i=5,…,8 (right). Control \(v_{9}^{*}(\cdot)\) is defined by \(v_{9}^{*}(t)=1-\sum_{i=1}^{8} v_{i}^{*}(t)\), t∈[0,15]. The lower figures show the corresponding ordinary control u (⋅) (left) and the evolution of the state norm (right)

Table 1 Performance of the relaxation method for Example 1

4.2 A semilinear reaction-diffusion system

Let Ω be a bounded domain in ℝn with a smooth boundary Γ and consider the classical Lotka-Volterra system with diffusion

$$ \left\{ \begin{aligned} &\frac{\partial z_1}{\partial t}(x,t) - d_1\sum_{j=1}^n \frac {\partial^2 z_1}{\partial x^2_j}(x,t) = z_1(x,t) \bigl(a_1 - b_1v(t) - c_1z_2(x,t)\bigr)\quad \text{in}~Q \\[-4pt] &\frac{\partial z_2}{\partial t}(x,t) - d_2\sum_{j=1}^n \frac {\partial^2 z_2}{\partial x^2_j}(x,t) = z_2(x,t) \bigl(a_2 - b_2v(t) - c_2z_1(x,t)\bigr)\quad \text{in}~Q \\[-4pt] & \frac{\partial z_1}{\partial\nu}(x,t)=\frac{\partial z_2}{\partial \nu}(x,t)=0 \quad \text{on}~\varSigma \\[-4pt] & z_1(x,0)=z_{1,0}(x),\quad z_2(x,0)=z_{2,0}(x) \quad \text{in}~\varOmega \end{aligned} \right. $$
(70)

with constants a i ,b i ,c i ,d i >0, i=1,2, domains Q=Ω×[0,t f ], Σ=Γ×[0,t f ] and control 0≤v(t)≤1. System (70) describes the interaction of two populations z 1 and z 2, both spatially distributed and diffusing in Ω. The initial distribution z 1,0 ,z 2,0 at t=0 is assumed to be non-negative. The boundary conditions then imply that the populations z 1 and z 2 are confined to Ω for all t≥0. The function v models a control of the system, and we shall investigate how to approximate optimal controls v (t) taking values in {0,1} so as to minimize the distance of the population (z 1,z 2) from its uncontrolled (v=0) steady state distribution \((\bar{z}_{1},\bar{z}_{2})\) given by the constant functions

$$ \bar{z}_1(x)=\frac{a_2}{c_2},\quad \bar{z}_2(x)= \frac{a_1}{c_1},\quad x\in\varOmega. $$
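Indeed, for v=0 these constant states annihilate both reaction terms and trivially satisfy the homogeneous Neumann boundary conditions, so they form a steady state of (70):

$$ \bar{z}_1 \bigl(a_1 - c_1\bar{z}_2\bigr) = \bar{z}_1 \Bigl(a_1 - c_1\frac{a_1}{c_1}\Bigr) = 0, \qquad \bar{z}_2 \bigl(a_2 - c_2\bar{z}_1\bigr) = \bar{z}_2 \Bigl(a_2 - c_2\frac{a_2}{c_2}\Bigr) = 0. $$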

In order to bring the system into abstract form (2), set X=L 2(Ω)×L 2(Ω), U=U ad={}, V=ℝ, V ad={0,1}, define the operator A:D(A)→X by

$$ \begin{aligned} &D(A)=\biggl\{(z_1,z_2) \in H^2(\varOmega) \times H^2(\varOmega) : \frac {\partial z_1}{\partial\nu}(x,t)=\frac{\partial z_2}{\partial\nu }(x,t)=0, \text{ on}~\varGamma\biggr \} \\ &A(z_1,z_2) (x)=\Biggl(d_1 \sum _{j=1}^n \frac{\partial^2 z_1}{\partial x^2_j}(x),d_2 \sum_{j=1}^n \frac{\partial^2 z_2}{\partial x^2_j}(x) \Biggr),\quad (z_1,z_2) \in D(A), \end{aligned} $$

define the non-linear function f:X×U×V=X×V→X by

$$ f\bigl((z_1,z_2),v\bigr) (x)=\bigl(z_1(x) \bigl(a_1-b_1v-c_1z_2(x) \bigr),z_2(x) \bigl(a_2-b_2v-c_2z_1(x) \bigr)\bigr) $$

and define the cost functions ϕ and ψ by

$$ \phi\bigl((z_1,z_2)\bigr)=0, \quad\psi\bigl((z_1,z_2)\bigr) = \int_\varOmega \big|z_1(x)-\bar{z}_1(x)\big|^2 + \big|z_2(x)-\bar{z}_2(x)\big|^2\,dx. $$
(71)

We choose \(X_{[0,t_{f}]}=C([0,t_{f}];X)\) and \(V_{[0,t_{f}]}=PC(0,t_{f};\mathbb{R})\).

It is well-known that (A,D(A)) is the generator of an analytic semigroup on X and that for any non-negative initial data z 1,0 ,z 2,0 ∈X, the system (70) has a unique non-negative mild solution (z 1,z 2)∈C([0,t f ],X×X) for every v∈L (0,t f ;[0,1]). Moreover, for initial data z 1,0 ,z 2,0 ∈D(A) and \(v \in C^{0,\vartheta}_{\mathrm{pw}}(0,t_{f};[0,1])\), ϑ>0, this solution is classical and satisfies

$$ (z_1,z_2) \in C\bigl([0,t_f];D(A) \times D(A)\bigr) \cap H^1\bigl([0,t_f]; X \times X \bigr). $$
(72)

Existence (local in time) and uniqueness follow from classical theory for semilinear parabolic equations, see, e.g., [20, Chap. 6]. Global existence results for (70) are obtained by a-priori bounds on the solution using contracting rectangles [4].

Assume that the initial data satisfies z 1,0 ,z 2,0 ∈D(A) and that [α k,y k] is a sequence of feasible solutions to the corresponding relaxed problem (11a)–(11f) in a bounded set \(\mathcal {S}\subset\tilde{V}_{[0,t_{f}]} \times X_{[0,t_{f}]}\). We want to discuss again the assumptions of Proposition 2.

Hypothesis (H1) holds by the same arguments as in the previous example. Moreover, we claim that hypothesis (H2) holds. Let \(\bar{M}\) be the growth bound of {T(t)} t≥0 on [0,t f ]. By analyticity of {T(t)} t≥0, we have for every feasible \(y \in X_{[0,t_{f}]}\) and almost every s in (0,t f ) that

(73)

where \(f_{y}=\frac{d}{dy}f\) and \(y_{s}=\frac{d}{ds}y\). Using that y k(s)∈D(A)×D(A) for all s∈[0,t f ] and f:D(A)→D(A), we see that

$$ \big\|-A T(t-s)f\bigl(y^k(s),v^i\bigr)\big\|_X \leq \big\|T(t-s)\big\|_{\mathcal{L}(X)} \big\| Af\bigl(y^k(s),v^i\bigr) \big\|_X \leq C_1^{k,i} $$
(74)

for some constants \(C_{1}^{k,i}\). Using that f is a smooth function, we see that

$$ \big\|T(t-s)f_y\bigl(y^k(s),v^i \bigr)y^k_s(s)\big\|_X \leq\bar{M} \big\|f_y\bigl(y^k(s),v^i\bigr)\big\|_X \big\| y^k_s(s)\big\|_X \leq C_2^{k,i}. $$
(75)

Thus (73) yields the estimate

$$ \biggl \Vert \frac{d}{ds}T(t-s)f\bigl(y(s),v^i \bigr)\biggr \Vert _X \leq C_1^{k,i} + C_2^{k,i} $$
(76)

for s∈[0,t f ] a.e. By well-posedness of the problem (70) for every \(v \in V_{[0,t_{f}]}\) and using the boundedness of \(\mathcal{S}\), we get an estimate

$$ \sup_{t \in[0,t_f]} \big\|f\bigl(t,y^k(t),v^i \bigr)\big\|_X \leq M^{i}, $$
(77)

for some constants M i verifying hypothesis (H3). Further, using (H1) and the boundedness of \(\mathcal{S}\), ∥f y (y k(s),v i)∥ X and ∥Af(y k(s),v i)∥ X can be bounded independently of k. Using the state equation (11b), we get

$$ \big\|y^k_s(s)\big\|_X \leq \big\|A y^k(s) \big\|_X + \big\|f\bigl(y^k(s),v^i\bigr) \big\|_X \leq \big\|y^k(s)\big\|_{D(A)} + M^{i} $$
(78)

for a.e. s∈[0,t f ]. So, the boundedness of \(\mathcal{S}\) and the regularity of y in (72) imply that \(\|y^{k}_{s}(s)\|_{X}\) can be bounded independently of k for a.e. s∈[0,t f ]. Hence, the constants \(C^{k,i}_{1}\) and \(C^{k,i}_{2}\) in (76) can be chosen independently of k and we can conclude from Proposition 2 that the relaxation method terminates after κ steps with an integer solution [v ,z ] satisfying the estimate

$$ \big|J\bigl(\alpha^\kappa,y^\kappa\bigr) - J\bigl(v^*,z^*\bigr)\big| \leq\varepsilon, $$
(79)

for any sequences ε k→0, Δt k→0 and every ε>0.

Example 2

We applied the relaxation method to a semilinear test problem of the form (70), again for a two-dimensional domain Ω, with the following parameters. Let Ω be a circle with radius 1 centered at (1,1) and choose a 1=a 2=c 1=c 2=1, \(b_{1}=\frac{7}{10}\), \(b_{2}=\frac{1}{2}\), d 1=0.05, d 2=0.01, and initial data z 1,0 ,z 2,0 ∈D(A) approximated by \(\tilde{z}_{1,0}(x)=\frac{1}{2} d_{\frac{1}{2}}(x-1)\), \(\tilde{z}_{2,0}(x)=\frac{7}{10} d_{\frac{1}{2}}(x-1)\), where d ϵ (x), given by

$$ d_\epsilon(x)=\frac{1}{\sqrt{2 \pi\epsilon}}e^{\frac {-x^2}{2\epsilon}} $$
(80)

models a population concentrated at the origin. For v(t)=0, t≥0, the solution z 1(t,x), z 2(t,x) converges asymptotically to a spatially constant and temporally non-constant, periodic solution.

The computations for the optimal control were carried out with the same numerical method as in the previous example, but using a grid with 258 finite elements. For the performance of the relaxation method see Table 2. The best found controls and the evolution of the state norm of the corresponding solutions are displayed in Fig. 2. Again we see a decrease of the integer-approximation error in accordance with Proposition 2. The best found integer control yields a cost of 58.76.

Fig. 2

Numerical results for Example 2. The left figure shows the best found relaxed and integer control α (⋅), v (⋅) and the right figure shows the corresponding evolutions of the populations y 1(⋅), y 2(⋅)

Table 2 Performance of the relaxation method for Example 2

5 Conclusions and open problems

We considered mixed-integer optimal control problems for abstract semilinear evolution equations and obtained conditions guaranteeing that the value function and the state of a relaxed optimal control problem can be approximated with arbitrary precision using a control that satisfies integer restrictions. In particular, our approach is constructive and gives rise to a numerical method for mixed-integer optimal control problems with certain partial differential equations. Moreover, we showed how these conditions imply a-priori estimates on the quality of the solution when combinatorial constraints are enforced.

We note that we did not discuss convergence of the constructed sequence of integer controls approximating the optimal value of the relaxed problem. It is not even clear in which topology such a convergence would be meaningful. Several issues related to this question, in particular in a PDE context, are discussed in [10, 11] and [8].

Compared to the previously available results on mixed-integer optimal control problems with ordinary differential equations in [23, 24], the setting treated in this paper involves a differential operator A, taken to be the generator of a strongly continuous semigroup. This requires careful regularity considerations. When A is a Laplace operator, we showed, for a linear and a semilinear example, how such regularity assumptions can be met and provided numerical examples demonstrating the practicability of the approach.

It is clear that the methodology considered in this paper generalizes to the case when the generator A of a strongly continuous semigroup is replaced by a family \(\{A(t)\}_{t \in[0,t_{f}]}\) of unbounded linear operators generating an evolution operator in the sense of [16]. On the other hand, it is not so clear how to extend the results to the case of unbounded control action, for example, Neumann or Dirichlet boundary control for the heat equation. Recalling the density of solutions to (5) in the set of solutions to (6), which motivated our approach, we note that the case of unbounded control is not covered by the available results on operator differential inclusions. While in principle semigroup techniques can deal with unbounded control operators, see for example the exposition in [1, Chap. 3], this extension is non-trivial and requires additional work.