
1 Introduction

This presentation introduces interval differential-algebraic equations (DAEs). To our knowledge, the publication closest to our theoretical approach is [11], in which an interval arithmetic the authors call ValEncIA is used to analyze interval DAEs. What is new here is that we solve the interval DAE problem using the constraint interval representation (see [6, 7]) to encode all interval initial conditions and/or interval coefficients. It is shown that this representation has theoretical advantages not afforded by the usual interval representation. The coefficients and initial values are, for this presentation, constant in order to keep the article relatively short. However, the approach extends easily to variable coefficients and/or initial values, and this is pointed out. Since we transform (variable) interval coefficients into (variable) interval initial values, the extension is, in theory, straightforward, albeit more complex.

The constraint interval representation leads to useful numerical methods, as will be demonstrated. Space limitations prohibit the full development of these methods; only the initial steps are indicated. What is presented here does not deal directly with so-called validated methods (see [9, 10], for example). However, when the processes developed here are carried out in a system that accounts for numerical errors using outward-directed rounding, for example in INTLAB or CXSC, the results will be validated. We restrict ourselves to what are called (see below) semi-explicit DAEs; that is, all problems are assumed to have been transformed to semi-explicit form. However, our approach applies much more widely.

The general form of the semi-explicit DAE is

$$\begin{aligned} y^{\prime }(t)&=F(y(t),t),\text { }y(t_{0})=y_{0},\end{aligned}$$
(1)
$$\begin{aligned} G(y(t),t)&=0. \end{aligned}$$
(2)

While the focus is on incorporating (constant) interval uncertainties in (1) and (2), generalized uncertainties, as developed in [8], can be analyzed in a similar fashion. Note that a variable (or constant) interval coefficient/initial value is a type of generalized uncertainty and fits perfectly within the theory presented here.

2 Definition and Properties

One can also think of the implicit ODE with an algebraic constraint,

$$\begin{aligned} F(y(t),y^{\prime }(t),t)&=0,\text { }y(t_{0})=y_{0}\end{aligned}$$
(3)
$$\begin{aligned} G(y(t),t)&=0, \end{aligned}$$
(4)

as a semi-explicit DAE as follows. Let

$$\begin{aligned} y^{\prime }(t)&=z(t),\\ F(y(t),z(t),t)&=0,\\ G(y(t),t)&=0. \end{aligned}$$

This will increase the number of variables. However, this will not be the approach for this presentation. Our general form will assume that the DAE is in the semi-explicit form (1), (2).

Given an implicit differential equation (3), when \(\frac{\partial F}{\partial y^{\prime }}\) is not invertible, that is, when we cannot obtain, at least theoretically, an explicit differential equation in the form (1), (2), we can differentiate (4) to obtain

$$\begin{aligned} \frac{\partial G}{\partial y}(y(t),t)y^{\prime }(t)+\frac{\partial G}{\partial t}(y(t),t)=0. \end{aligned}$$
(5)

If \(\frac{\partial G}{\partial y}(y,t)\) is non-singular, then (5) can be solved explicitly for \(y^{^{\prime }}\) as follows.

$$\begin{aligned} y^{\prime } =-\left[ \frac{\partial G}{\partial y}(y(t),t)\right] ^{-1}\frac{\partial G}{\partial t}(y(t),t) \end{aligned}$$
(6)
$$\begin{aligned} F(y(t),-\left[ \frac{\partial G}{\partial y}(y(t),t)\right] ^{-1} \frac{\partial G}{\partial t}(y(t),t),t)&=0\end{aligned}$$
(7)
$$\begin{aligned} G(y(t),t)&=0 \end{aligned}$$
(8)

and (6), (7), and (8) are in the form (1), (2); such a DAE is called an index-1 DAE.

If this is not the case, that is, if \(\frac{\partial G}{\partial y}(y(t),t)\) is singular, then we have the form

$$\begin{aligned} F(y(t),y^{\prime }(t),t)&=0,\\ \frac{\partial G}{\partial y}(y(t),t)y^{\prime }+\frac{\partial G}{\partial t}(y(t),t)&=0, \end{aligned}$$

which can be written as

$$ H(y(t),y^{\prime }(t),t)=0 $$

and we again take the partial derivative with respect to \(y^{\prime }\) and test for singularity. If \(\frac{\partial H}{\partial y^{\prime }}\) is non-singular, so that \(H=0\) can be solved for \(y^{\prime }\), then we have an index-2 DAE. This process can be continued, in principle, until (hopefully) \(y^{\prime }\) is found as an explicit function of y and t.
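
As a concrete illustration of the index-1 test, the following minimal sketch (assuming sympy; the scalar constraint \(G(y,t)=y^{2}-t\) is a hypothetical example, not taken from the text) differentiates the constraint and, since \(\frac{\partial G}{\partial y}\) is non-singular away from \(y=0\), solves (5) for \(y^{\prime }\) as in (6).

```python
import sympy as sp

t, ys = sp.symbols('t y', positive=True)   # ys stands for the value y(t)

# Hypothetical scalar constraint G(y, t) = y^2 - t = 0, used only to
# illustrate the index-1 test.
G = ys**2 - t

dG_dy = sp.diff(G, ys)   # partial of G with respect to y  -> 2*y
dG_dt = sp.diff(G, t)    # partial of G with respect to t  -> -1

# Index-1 test: if dG/dy is non-singular (here: nonzero), equation (5),
# dG/dy * y' + dG/dt = 0, can be solved for y' as in (6).
if sp.simplify(dG_dy) != 0:
    y_prime = -dG_dt / dG_dy
    print(y_prime)   # 1/(2*y), i.e. y'(t) = 1/(2*y(t))
```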

DAEs arise in various applied contexts. We present two types of problems in which DAEs arise, the simple pendulum and unconstrained optimal control, that illustrate the main issues associated with DAEs. We then introduce interval uncertainty into the DAEs using some of these examples. Our solution methods for interval DAEs are based on what is developed for these examples.

3 Linear Constant Coefficient DAEs

Linear constant coefficient DAEs arise naturally in electrical engineering circuit problems as well as in some control theory problems. A portion of this theory, sufficient to understand our solution methods, is presented next. The linear constant coefficient DAE is defined as

$$\begin{aligned} Ax^{\prime }(t)+Bx(t)=f,\text { }x(t_{0})=x_{0}, \end{aligned}$$
(9)

where A and B are \(m\times m\) matrices, x(t) is an \(m\times 1\) “state function” vector, and f is an \(m\times 1\) vector function, partitioned as

$$ A=\left[ \begin{array} [c]{cc} A_{1} &{} A_{2}\\ 0 &{} 0 \end{array} \right] ,B=\left[ \begin{array} [c]{cc} B_{1} &{} B_{2}\\ B_{3} &{} B_{4} \end{array} \right] ,f=\left[ \begin{array} [c]{c} f_{1}\\ f_{2} \end{array} \right] , $$

so that

$$ \left[ \begin{array} [c]{cc} A_{1} &{} A_{2}\\ 0 &{} 0 \end{array} \right] \left[ \begin{array} [c]{c} x_{1}^{\prime }(t)\\ x_{2}^{\prime }(t) \end{array} \right] +\left[ \begin{array} [c]{cc} B_{1} &{} B_{2}\\ B_{3} &{} B_{4} \end{array} \right] \left[ \begin{array} [c]{c} x_{1}(t)\\ x_{2}(t) \end{array} \right] =\left[ \begin{array} [c]{c} f_{1}\\ f_{2} \end{array} \right] . $$

Note that this is indeed a semi-explicit DAE, whose differential part is

$$\begin{aligned} x^{^{\prime }}(t)&=F(x(t),t)=A^{-1}\left( f_{1}(t)-Bx(t)\right) ,\\ A&=\left[ \begin{array} [c]{cc} A_{1}&A_{2} \end{array} \right] ,\text { }B=\left[ \begin{array} [c]{cc} B_{1}&B_{2} \end{array} \right] , \end{aligned}$$

where the matrix A is assumed to be invertible, and the algebraic part is

$$ G(x(t),t)=B_{3}x_{1}(t)+B_{4}x_{2}(t)=f_{2}. $$
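
A minimal numerical sketch of this splitting (assuming NumPy; the matrices are those of Example 1 below): rows of A that are identically zero carry no derivatives and form the algebraic part, while the remaining rows form the differential part.

```python
import numpy as np

# Matrices of Example 1 below; the splitting logic itself is generic.
A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [2.0, 1.0]])

# Zero rows of A give the algebraic part B3*x1 + B4*x2 = f2;
# the other rows give the ODE part.
alg_rows = np.where(~A.any(axis=1))[0]
ode_rows = np.where(A.any(axis=1))[0]
print("ODE rows      :", ode_rows)   # -> [0]
print("algebraic rows:", alg_rows)   # -> [1]
```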

Example 1

Consider the linear constant coefficient DAE \(Ax^{\prime }(t)+Bx(t)=f\) where

$$ A=\left[ \begin{array} [c]{cc} 1 &{} 1\\ 0 &{} 0 \end{array} \right] ,B=\left[ \begin{array} [c]{cc} 0 &{} 0\\ 2 &{} 1 \end{array} \right] ,f=\left[ \begin{array} [c]{c} t\\ e^{t} \end{array} \right] . $$

The ODE part is

$$\begin{aligned} x_{1}^{\prime }(t)+x_{2}^{\prime }(t)=t \end{aligned}$$
(10)

and the algebraic part is

$$\begin{aligned} G(x(t),t)=2x_{1}(t)+x_{2}(t)=e^{t}. \end{aligned}$$
(11)

Integrating the ODE part (10), we get

$$\begin{aligned} \int dx_{1}+\int dx_{2}&=\int tdt,\nonumber \\ x_{1}(t)+x_{2}(t)&=\frac{1}{2}t^{2}+c_{1}. \end{aligned}$$
(12)

Solving (11) and (12)

$$\begin{aligned} 2x_{1}(t)+x_{2}(t)&=e^{t}\\ x_{1}(t)+x_{2}(t)&=\frac{1}{2}t^{2}+c_{1} \end{aligned}$$

simultaneously, we get

$$\begin{aligned} x_{1}(t)&=e^{t}-\frac{1}{2}t^{2}-c_{1},\end{aligned}$$
(13)
$$\begin{aligned} x_{2}(t)&=-e^{t}+t^{2}+2c_{1}. \end{aligned}$$
(14)
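
The closed-form solution (13), (14) can be checked symbolically; a minimal sketch (assuming sympy):

```python
import sympy as sp

t, c1 = sp.symbols('t c1')
x1 = sp.exp(t) - t**2/2 - c1           # equation (13)
x2 = -sp.exp(t) + t**2 + 2*c1          # equation (14)

# Check the ODE part (10): x1' + x2' = t
print(sp.simplify(sp.diff(x1, t) + sp.diff(x2, t) - t))   # 0
# Check the algebraic part (11): 2*x1 + x2 = e^t
print(sp.simplify(2*x1 + x2 - sp.exp(t)))                 # 0
```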

Example 2

Consider the linear constant coefficient DAE

$$ \left[ \begin{array} [c]{cc} 1 &{} 0\\ 0 &{} 0 \end{array} \right] x^{\prime }(t)+\left[ \begin{array} [c]{cc} 1 &{} 1\\ 1 &{} 1 \end{array} \right] x(t)=\left[ \begin{array} [c]{c} t\\ e^{t} \end{array} \right] . $$

The ODE part is

$$\begin{aligned} x_{1}^{\prime }(t)+x_{1}(t)+x_{2}(t)=t \end{aligned}$$
(15)

and the algebraic constraint is

$$\begin{aligned} G(x(t),t)=x_{1}(t)+x_{2}(t)=e^{t}. \end{aligned}$$
(16)

Putting (16) into (15) and solving for \(x_{1}^{\prime }\), we have

$$\begin{aligned} x_{1}^{\prime }(t)&=t-e^{t}\\ x_{1}(t)&=\frac{1}{2}t^{2}-e^{t}+c_{1}\\ x_{2}(t)&=2e^{t}-\frac{1}{2}t^{2}-c_{1}. \end{aligned}$$
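
The same reduction can be carried out numerically; below is a minimal sketch (assuming SciPy and the illustrative choice \(c_{1}=1\), i.e. \(x_{1}(0)=0\)): integrate the reduced ODE \(x_{1}^{\prime }=t-e^{t}\) and recover \(x_{2}\) from the constraint (16).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example 2 after substituting (16) into (15): x1' = t - e^t.
# c1 = 1 is an illustrative choice, giving x1(0) = 0.
def rhs(t, x):
    return [t - np.exp(t)]

sol = solve_ivp(rhs, (0.0, 1.0), [0.0], dense_output=True, rtol=1e-9, atol=1e-12)

t = np.linspace(0.0, 1.0, 5)
x1 = sol.sol(t)[0]
x2 = np.exp(t) - x1                              # recover x2 from (16)
x1_exact = 0.5*t**2 - np.exp(t) + 1.0
x2_exact = 2*np.exp(t) - 0.5*t**2 - 1.0
print(np.max(np.abs(x1 - x1_exact)), np.max(np.abs(x2 - x2_exact)))
```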

Remark 1

When, for the \(m\)-variable problem, the algebraic constraint can be substituted into the differential equation and the resulting linear differential equation is integrated, we obtain m equations in m unknowns. Interval uncertainty enters when the matrices A and/or B and/or the initial condition \(y_{0}\) are intervals, \(A\in [A],B\in [B],y_{0}\in [y_{0}]\). This is illustrated below.

4 Illustrative Examples

Two DAE examples, beginning with the simple pendulum, are presented next.

4.1 The Simple Pendulum

Consider the following problem (see [4]) arising from a simple pendulum,

$$\begin{aligned} x^{\prime \prime }(t)&=-\gamma x(t),\text { }x(t_{0})=x_{0},x^{\prime } (t_{0})=x_{0}^{\prime }\nonumber \\ y^{\prime \prime }(t)&=-\gamma y(t)-g,\text { }y(t_{0})=y_{0},y^{\prime }(t_{0})=y_{0}^{\prime }\\ 0&=x^{2}(t)+y^{2}(t)-L^{2}\text { (mechanical constraint) or}\nonumber \\ 0&=\left( x^{\prime }\right) ^{2}(t)+\left( y^{\prime }\right) ^{2}(t)-y(t)g\text { (energy constraint)}\nonumber \end{aligned}$$
(17)

where g is the acceleration due to gravity, \(\gamma \) is the unknown tension in the string, and \(L=1\) is the length of the pendulum string. In this example, we consider the unknown tension to be an interval \(\left[ \gamma \right] =\left[ \underline{\gamma },\overline{\gamma }\right] \) in order to model the uncertainty in its value. Moreover, we also assume that the initial values are intervals. We can restate (17) as a first-order system, focusing on the mechanical constraint and omitting the energy constraint, as follows:

$$\begin{aligned} u_{1}(t)&=x(t)\Rightarrow u_{1}^{\prime }(t)=u_{3}(t),\text { }u_{1} (t_{0})\in \left[ \left( u_{1}\right) _{0}\right] \nonumber \\ u_{2}(t)&=y(t)\Rightarrow u_{2}^{\prime }(t)=u_{4}(t),\text { }u_{2} (t_{0})\in \left[ \left( u_{2}\right) _{0}\right] \nonumber \\ u_{3}(t)&=x^{^{\prime }}(t)\Rightarrow u_{3}^{\prime }(t)=-u_{5}(t)\cdot u_{1}(t),\text { }u_{3}(t_{0})\in \left[ \left( u_{3}\right) _{0}\right] \\ u_{4}(t)&=y^{^{\prime }}(t)\Rightarrow u_{4}^{\prime }(t)=-u_{5}(t)\cdot u_{2}(t)-g,\text { }u_{4}(t_{0})\in \left[ \left( u_{4}\right) _{0}\right] \nonumber \\ u_{5}(t)&=\gamma \Rightarrow u_{5}^{\prime }(t)=0,\text { }u_{5}(t_{0} )\in \left[ \gamma \right] =\left[ \underline{\gamma },\overline{\gamma }\right] \nonumber \\ G(u_{1}(t),u_{2}(t),t)&=u_{1}^{2}(t)+u_{2}^{2}(t)-1=0.\nonumber \end{aligned}$$
(18)

Note that the uncertain parameter \(\gamma \) is treated as part of the differential system: it is a constant state variable (\(u_{5}\)) whose initial condition is an interval. This is, in general, the way constant interval (generalized) uncertainty in the parameters is handled. If the coefficient were in fact variable, then \(u_{5}^{\prime }(t)\ne 0\) and \(u_{5}\) would satisfy a differential equation of its own, for example,

$$ u_{5}^{\prime }(t)=h(t) $$

where h(t) specifies how the coefficient varies with respect to time. System (18) is a standard semi-explicit real DAE, in this case with interval initial conditions. How to solve such a system is presented in Sect. 5.
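
To illustrate how the interval data in (18) are instantiated, and the constraint-drift issue typical of DAEs, here is a minimal sketch (assuming SciPy; the interval \([\gamma ]=[9,11]\) and the initial point \((1,0,0,0)\) are illustrative choices, not values from the text). It integrates only the differential part of (18) at the interval endpoints of \(\gamma \) and monitors the residual of the mechanical constraint.

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81

def pendulum(t, u):
    u1, u2, u3, u4, u5 = u
    return [u3, u4, -u5*u1, -u5*u2 - g, 0.0]   # differential part of (18)

# Illustrative interval data: gamma in [9, 11], initial point on the constraint.
for gamma in (9.0, 11.0):                 # instantiate the interval endpoints
    u0 = [1.0, 0.0, 0.0, 0.0, gamma]      # satisfies u1^2 + u2^2 = 1 at t0
    sol = solve_ivp(pendulum, (0.0, 0.5), u0, max_step=1e-3)
    residual = sol.y[0]**2 + sol.y[1]**2 - 1.0   # G(u1, u2, t)
    print(gamma, np.abs(residual).max())  # drift off the constraint manifold
```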

4.2 Unconstrained Optimal Control

We next present the transformation of unconstrained optimal control problems into DAEs. The general form of the optimal control problem is the following.

$$\begin{aligned} \max _{u\in \varOmega }J\left[ u\right]&=\int \limits _{0}^{1} L(x(u,t),u,t)dt\nonumber \\ \text {subject to}&\text {:}x^{\prime }(u,t)=f(x,u,t)\end{aligned}$$
(19)
$$\begin{aligned} x(u,0)&=x_{0}.\nonumber \\ x&: \mathbb {R} ^{m}\times \mathbb {R} \rightarrow \mathbb {R} ^{n},u: \mathbb {R} \rightarrow \mathbb {R} ^{m} \end{aligned}$$
(20)

When \(\varOmega \) is the set of all square-integrable functions, the problem is unconstrained; in this case we denote the constraint set by \(\varOmega _{0}\). The Pontryagin Maximization Principle utilizes the Hamiltonian function, which is defined for (19) as

$$\begin{aligned} H(x(t),\lambda ,u,t)=\lambda (t)f(x(t),u,t)+L(x(t),u,t). \end{aligned}$$
(21)

The function \(\lambda (t)\) is called the co-state function and is a \(1\times n\) row vector. The co-state can be thought of as the dynamic-optimization analogue of the Lagrange multiplier and is defined by the following differential equation:

$$\begin{aligned} \lambda ^{\prime }(t)&=-\frac{\partial H(x(t),\lambda ,u,t)}{\partial x}\\ \lambda (1)&=0.\nonumber \end{aligned}$$
(22)

Under suitable conditions (see [5]), the Pontryagin Maximization Principle (PMP) states that if there exists an (optimal) function v(t) such that \(J\left[ v\right] \ge J\left[ u\right] \) for all \(u\in \varOmega \) (the optimal control), then v maximizes the Hamiltonian with respect to the control, which in the case of unconstrained optimal control means that

$$\begin{aligned} \frac{\partial H(x(t),\lambda ,v,t)}{\partial u}=0,\text { }v\in \varOmega _{0}, \end{aligned}$$
(23)

where (23) is the algebraic constraint. Thus the unconstrained optimal control problem (19), together with the co-state differential equation (22) and the PMP condition (23), results in a boundary-value DAE as follows:

$$\begin{aligned} x^{\prime }(u,t)&=f(x(t),u,t)\\ \lambda ^{\prime }(x(t),u,t)&=-\frac{\partial H(x,\lambda ,u,t)}{\partial x}\\ x(u,0)&=x_{0}\\ \lambda (x,u,1)&=0\\ G(x(t),\lambda (t),t)&=\frac{\partial H(x,\lambda ,u,t)}{\partial u}=0. \end{aligned}$$
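
As a sketch of this construction (assuming sympy and a hypothetical scalar problem with f = u and L chosen so that the Hamiltonian matches that of Example 4 below): form the Hamiltonian (21), the co-state equation (22), and the stationarity condition (23).

```python
import sympy as sp

x, lam, u = sp.symbols('x lambda u')     # values of x(t), lambda(t), u(t)

# Hypothetical scalar data: f = u, L = -(x^2 + u^2)/2 (matching Example 4).
f = u
L = -(x**2 + u**2) / 2

H = lam * f + L                          # Hamiltonian (21)

costate_rhs = -sp.diff(H, x)             # lambda'(t) = -dH/dx, equation (22)
stationarity = sp.diff(H, u)             # algebraic constraint (23): dH/du = 0

u_opt = sp.solve(sp.Eq(stationarity, 0), u)[0]
print(costate_rhs)                       # x       -> lambda' = x
print(u_opt)                             # lambda  -> optimal control u = lambda
```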

A well-studied example is the linear quadratic optimal control problem (LQP), which, for the unconstrained case, is defined as

$$\begin{aligned} \max _{u\in \varOmega _{0}}J[u]&=\frac{1}{2}\int \limits _{0}^{1}\left[ x^{T}(t)Qx(t)+u^{T}(t)Ru(t)\right] dt\nonumber \\ \text {subject to}&\text {:}x^{\prime }(t)=Ax(t)+Bu(t)\\ x(u,0)&=x_{0},\nonumber \end{aligned}$$
(24)

where \(A_{n\times n}\) is an \(n\times n\) real matrix, \(B_{n\times m}\) is an \(n\times m\) real matrix, \(Q_{n\times n}\) is a real symmetric positive definite matrix, and \(R_{m\times m}\) is a real invertible positive semi-definite matrix. For the LQP

$$\begin{aligned} H(x,u,\lambda ,t)&=\lambda (t)\left( Ax(t)+Bu(t)\right) +\frac{1}{2}\left( x^{T}(t)Qx(t)+u^{T}(t)Ru(t)\right) \\ \lambda ^{\prime }(t)&=-\frac{\partial H(x,\lambda ,u,t)}{\partial x}=-\lambda (t)A+Qx(t),\text { }\lambda (1)=0, \end{aligned}$$

remembering that \(\lambda \) is a row vector. The optimal control is obtained by solving

$$\begin{aligned} G(x(t),\lambda (t),t)&=\frac{\partial H(x,\lambda ,u,t)}{\partial u} =\lambda (t)B+Ru(t)=0\\ u(t)&=-R^{-1}\lambda (t)B, \end{aligned}$$

which, when put into the differential equations, yields

$$\begin{aligned} x^{\prime }(t)&=Ax(t)-BR^{-1}\lambda (t)B,\text { }x(0)=x_{0}\\ \lambda ^{\prime }(t)&=Qx(t)-\lambda (t)A,\text { }\lambda (1)=0. \end{aligned}$$

This results in the system

$$\begin{aligned} y^{\prime }(t)&=\left[ \begin{array} [c]{c} x^{\prime }(t)\\ \lambda ^{\prime }(t) \end{array} \right] =\left[ \begin{array} [c]{cc} A &{} -BR^{-1}B\\ Q &{} A \end{array} \right] \left[ \begin{array} [c]{c} x(t)\\ \lambda (t) \end{array} \right] \\ x(0)&=x_{0},\lambda (1)=0 \end{aligned}$$

The next section will consider DAEs with interval uncertainty in the coefficients and in the initial conditions.

5 Interval Uncertainty in DAEs

This section develops an interval solution method. For non-linear problems, interval coefficients are treated as state variables whose derivative is zero and whose initial condition is the respective interval. For linear problems, it is sometimes advantageous to deal with the interval coefficients directly, as illustrated next.

5.1 An Interval Linear Constant Coefficient DAE

Given the linear constant coefficient DAE \(Ax^{\prime }(t)+Bx(t)=f\), suppose that the coefficient matrices are interval matrices \(\left[ A\right] \) and \(\left[ B\right] \). That is,

$$ \left[ A\right] x^{\prime }(t)+\left[ B\right] x(t)=f. $$

Example 3

Consider Example 1, except with interval entries

$$ \left[ A\right] =\left[ \begin{array} [c]{cc} \left[ A_{1}\right] &{} \left[ A_{2}\right] \\ \left[ A_{3}\right] &{} \left[ A_{4}\right] \end{array} \right] ,\left[ B\right] =\left[ \begin{array} [c]{cc} \left[ B_{1}\right] &{} \left[ B_{2}\right] \\ \left[ B_{3}\right] &{} \left[ B_{4}\right] \end{array} \right] ,f=\left[ \begin{array} [c]{c} t\\ e^{t} \end{array} \right] , $$

where \(\left[ A_{1}\right] =\left[ A_{2}\right] =\left[ B_{4}\right] =\left[ 0.9,1.1\right] ,\left[ A_{3}\right] =\left[ A_{4}\right] =\left[ B_{1}\right] =\left[ B_{2}\right] =\left[ 0,0\right] ,\) and \(\left[ B_{3}\right] =\left[ 1.9,2.1\right] .\) The ODE part is

$$\begin{aligned} \left[ 0.9,1.1\right] x_{1}^{\prime }(t)+\left[ 0.9,1.1\right] x_{2}^{\prime }(t)=t \end{aligned}$$
(25)

and the algebraic part is

$$\begin{aligned} G(x(t),t)=\left[ 1.9,2.1\right] x_{1}(t)+\left[ 0.9,1.1\right] x_{2}(t)=e^{t}. \end{aligned}$$
(26)

Integrating (25) we have

$$ \left[ 0.9,1.1\right] x_{1}(t)+\left[ 0.9,1.1\right] x_{2}(t)=\frac{1}{2}t^{2}+c_{1} $$

which together with (26) forms the interval linear system

$$\begin{aligned} \left[ 0.9,1.1\right] x_{1}(t)+\left[ 0.9,1.1\right] x_{2}(t)&=\frac{1}{2}t^{2}+c_{1}\end{aligned}$$
(27)
$$\begin{aligned} \left[ 1.9,2.1\right] x_{1}(t)+\left[ 0.9,1.1\right] x_{2}(t)&=e^{t}. \end{aligned}$$
(28)

Using the constraint interval representation (see [8]), in which any interval [a, b] is written as \([a,b]=a+\lambda (b-a)\), \(0\le \lambda \le 1\), we obtain

$$\begin{aligned} \left[ \begin{array} [c]{c} x_{1}(\overrightarrow{\lambda })\\ x_{2}(\overrightarrow{\lambda }) \end{array} \right]&=\left[ \begin{array} [c]{cc} 0.9+0.2\lambda _{11} &{} 0.9+0.2\lambda _{12}\\ 1.9+0.2\lambda _{21} &{} 0.9+0.2\lambda _{22} \end{array} \right] ^{-1}\left[ \begin{array} [c]{c} \frac{1}{2}t^{2}+c_{1}\\ e^{t} \end{array} \right] \\&=\frac{1}{\left( 0.9+0.2\lambda _{11}\right) \left( 0.9+0.2\lambda _{22}\right) -\left( 0.9+0.2\lambda _{12}\right) \left( 1.9+0.2\lambda _{21}\right) }\\&\times \left[ \begin{array} [c]{cc} 0.9+0.2\lambda _{22} &{} -\left( 0.9+0.2\lambda _{12}\right) \\ -\left( 1.9+0.2\lambda _{21}\right) &{} 0.9+0.2\lambda _{11} \end{array} \right] \left[ \begin{array} [c]{c} \frac{1}{2}t^{2}+c_{1}\\ e^{t} \end{array} \right] \\&=\left[ \begin{array} [c]{c} -\frac{90.0c_{1}\,-\,90.0e^{t}\,-\,20.0\lambda _{12}e^{t}\,+\,20.0\lambda _{22} c_{1}\,+\,10.0t^{2}\lambda _{22}\,+\,45.0t^{2}}{38.0\lambda _{12}\,-\,18.0\lambda _{11}\,+\,18.0\lambda _{21}\,-\,18.0\lambda _{22}\,-\,4.0\lambda _{11}\lambda _{22} \,+\,4.0\lambda _{12}\lambda _{21}\,+\,90.0}\\ \frac{190.0c_{1}\,-\,90.0e^{t}\,-\,20.0\lambda _{11}e^{t}\,+\,20.0\lambda _{21} c_{1}\,+\,10.0t^{2}\lambda _{21}\,+\,95.0t^{2}}{38.0\lambda _{12}\,-\,18.0\lambda _{11}\,+\,18.0\lambda _{21}\,-\,18.0\lambda _{22}\,-\,4.0\lambda _{11}\lambda _{22} \,+\,4.0\lambda _{12}\lambda _{21}\,+\,90.0} \end{array} \right] , \end{aligned}$$

where \(\overrightarrow{\lambda }=(\lambda _{11},\lambda _{12},\lambda _{21},\lambda _{22}).\) Any instantiation of \(\lambda _{ij}\in [0,1]\) will yield a valid solution given the associated uncertainty. However, if one wishes to extract the interval containing \(\left[ \begin{array} [c]{c} x_{1}(\overrightarrow{\lambda })\\ x_{2}(\overrightarrow{\lambda }) \end{array} \right] ,\) a global min/max over \(0\le \lambda _{ij}\le 1\) would need to be implemented.
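A minimal numerical sketch of this extraction (assuming NumPy, with the illustrative instantiation \(t=1\) and \(c_{1}=0\)): evaluate the constraint interval solution on a grid over \(\lambda \in [0,1]^{4}\) (a global optimizer could replace the grid) and take the componentwise min/max.

```python
import numpy as np
from itertools import product

t, c1 = 1.0, 0.0          # illustrative instantiation of t and c1

def x_of_lambda(l11, l12, l21, l22):
    # Constraint interval instantiation of the system (27)-(28).
    M = np.array([[0.9 + 0.2*l11, 0.9 + 0.2*l12],
                  [1.9 + 0.2*l21, 0.9 + 0.2*l22]])
    rhs = np.array([0.5*t**2 + c1, np.exp(t)])
    return np.linalg.solve(M, rhs)

# Coarse grid over [0,1]^4; the determinant never vanishes on this box.
grid = np.linspace(0.0, 1.0, 11)
vals = np.array([x_of_lambda(*lam) for lam in product(grid, repeat=4)])
print("x1 in", [vals[:, 0].min(), vals[:, 0].max()])
print("x2 in", [vals[:, 1].min(), vals[:, 1].max()])
```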

Example 4

Consider the linear quadratic problem with interval initial condition for x,

$$\begin{aligned} \max J[u]&=\frac{1}{2}\int \limits _{0}^{1}\left[ x^{2}(t)+u^{2}(t)\right] dt\nonumber \\ \text {subject to}&\text {:}x^{\prime }(t)=u(t),\text { }x(u,0)=\left[ \frac{1}{2},\frac{3}{2}\right] =\frac{1}{2}+\gamma ,\end{aligned}$$
(29)
$$\begin{aligned} 0&\le \gamma \le 1. \end{aligned}$$
(30)

For this problem,

$$\begin{aligned} H(x(t),\lambda (t),u(t),t)&=\lambda (t)u(t)-\frac{1}{2}x^{2}(t)-\frac{1}{2}u^{2}(t),\\ \lambda ^{\prime }(t)&=-\frac{\partial H(x,\lambda ,u,t)}{\partial x}=x(t),\text { }\lambda (1)=0\\ G(x(t),\lambda (t),t)&=\frac{\partial H(x(t),\lambda (t),v(t),t)}{\partial u}=\lambda (t)-v(t)=0\text { or }v(t)=\lambda (t). \end{aligned}$$

Thus

$$\begin{aligned} x^{\prime }(t)&=\lambda (t),\text { }x(0)=\frac{1}{2}+\gamma \\ \lambda ^{\prime }(t)&=x(t),\text { }\lambda (1)=0. \end{aligned}$$

This implies that

$$\begin{aligned} x^{\prime \prime }(t)-x(t)&=0\\ x(t)&=c_{1}e^{t}+c_{2}e^{-t}\\ \lambda (t)&=c_{1}e^{t}-c_{2}e^{-t} \end{aligned}$$

and with the initial conditions

$$\begin{aligned} x(t)&=\frac{\frac{1}{2}+\gamma }{1+e^{2}}e^{t}+\frac{e^{2}(\frac{1}{2}+\gamma )}{1+e^{2}}e^{-t},\\ \lambda (t)&=\frac{\frac{1}{2}+\gamma }{1+e^{2}}e^{t}-\frac{e^{2}(\frac{1}{2}+\gamma )}{1+e^{2}}e^{-t},\\ u_{opt}(t)&=v(t)=\frac{\frac{1}{2}+\gamma }{1+e^{2}}e^{t}-\frac{e^{2} (\frac{1}{2}+\gamma )}{1+e^{2}}e^{-t}.\\ 0&\le \gamma \le 1. \end{aligned}$$
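
Because x(t), \(\lambda (t)\), and the optimal control are affine in \(\gamma \), their pointwise bounds over \(0\le \gamma \le 1\) are attained at the endpoints; a minimal sketch (assuming NumPy) evaluates the closed-form solution at \(\gamma =0\) and \(\gamma =1\) to obtain the interval envelope:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 6)

def x_lambda(gamma, t):
    c = (0.5 + gamma) / (1.0 + np.e**2)
    x = c*np.exp(t) + c*np.e**2*np.exp(-t)
    lam = c*np.exp(t) - c*np.e**2*np.exp(-t)
    return x, lam

# x, lambda, and u_opt = lambda are affine in gamma, so their pointwise
# bounds over gamma in [0, 1] are attained at gamma = 0 and gamma = 1.
x_lo, lam_hi = x_lambda(0.0, t)   # gamma = 0
x_hi, lam_lo = x_lambda(1.0, t)   # gamma = 1 (lambda is more negative here)
print(np.column_stack([t, x_lo, x_hi]))      # envelope of x(t)
print(np.column_stack([t, lam_lo, lam_hi]))  # envelope of lambda(t)
```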

6 Conclusion

This study introduced a method for incorporating interval uncertainty into differential-algebraic problems. Two examples of where DAEs under uncertainty arise were presented. Solution methods with interval uncertainty were shown for the linear problem and for the linear quadratic unconstrained optimal control problem. Unconstrained optimal control problems lead to interval boundary-value problems, which subsequent research will address. Moreover, more general uncertainties, such as generalized uncertainties (see [8]), probability distributions, and fuzzy intervals, are the next steps in the development of a theory of DAEs under generalized uncertainty.