1 Introduction

We consider problems in the calculus of variations on unbounded intervals Ω of the following structure:

$$ \begin{array}{@{}rcl@{}} J(y):= \int_{\Omega} r(t,y(t), y^{\prime}(t)) \omega(t) dt + r_{0} y(0) \to \text{Min} !\\ \text{ s.t.} y\in Y. \end{array} $$
(1)

This type of problem appears in various fields of application, such as quantum mechanics, optimal control and asymptotic controllability. From the numerous authors who contributed to this subject we cite [2,3,4,5,6, 8, 11, 13, 22,23,24,25,26,27,28, 30, 31]. In our formulation of the problem (1), the function space for the state variable is chosen depending on Ω and the weight function ω. This approach was first motivated by applications in quantum mechanics.

Here, the Hamiltonian principle (or the principle of stationary action) was formulated by Schrödinger (cf. [19]), as a problem in the calculus of variations in an appropriate (complex) Hilbert space H. For the one-dimensional harmonic oscillator in quantum mechanics, the variational formulation is given as follows: We look for the stationary points of the objective

$$ J(\psi)=\int_{-\infty}^{\infty}\left[\frac{\hbar^{2} {\psi^{\prime}}^{2} (x)}{4m}+\frac{1}{4}(m \omega^{2} x^{2}-E)\psi^{2}(x)\right] dx \to\text{Min} ! $$

Here, m denotes the mass of the particle, k is the force constant, \(\omega =\sqrt {\frac {k}{m}}\) is the angular frequency of the oscillator, and \(\hbar \) is the reduced Planck constant. From the point of view of quantum mechanics, we look for those stationary points of the objective for which the wave function ψ tends to zero for x → ±∞. The solutions of the Euler–Lagrange equation with the corresponding decay behavior are the Hermite functions

$$ \psi_{n}(x)= \left( \frac{m\omega}{\pi \hbar}\right)^{\frac{1}{4}}\frac {1}{\sqrt{2^{n}n!}}H_{n} \left( \sqrt{\frac{m\omega}{\hbar}}x\right)e^{-\frac{1}{2}\frac{m\omega}{\hbar}x^{2}} $$

with Hermite polynomials Hn given by the Rodrigues formula

$$ \begin{array}{@{}rcl@{}} H_{n}(x)=(-1)^{n}e^{x^{2}}{\frac {\mathrm{d}^{n}}{\mathrm{d} x^{n}}}\left( e^{-x^{2}}\right). \end{array} $$

The Hermite polynomials Hn form a complete orthogonal system in the weighted Lebesgue space \(L_{2}(\mathbb {R},\omega )\), where the weight function ω is the density function \(\omega (x)=e^{-x^{2}}\) and the space of the independent variable Ω is \(\mathbb {R}\). Furthermore, the Hermite polynomials can be expressed with the aid of Laguerre polynomials,

$$ L_{n}^{\alpha}(x) := \frac{1}{n!} e^{x} x^{-\alpha} \frac{\text{d}^{n}}{\text{d}x^{n}}\left[e^{-x} x^{n+\alpha}\right], \quad \alpha> -1, \alpha \in \mathbb{R}, $$

according to

$$ H_{n}(x)=(-1)^{\frac{n}{2}} 2^{n} (n/2) ! L_{n/2}^{-1/2}(x^{2}) $$

if n is even and

$$ H_{n}(x)=(-1)^{\frac{(n-1)}{2}} 2^{n} \left( \frac{(n-1)}{2}\right) ! x L_{\frac{(n-1)}{2}}^{1/2}(x^{2}) $$

if n is odd (see, e.g., [32]). Another example is the treatment of the hydrogen atom (see [19]). For its radially symmetric part, it leads to a variational problem over Ω := (0,∞), where the generalized Laguerre functions

$$ \phi_{n}(x)=\frac{1}{n !} e^{-\beta x/2}L_{n}(x) $$

with the generalized Laguerre polynomials

$$ L_{n}(x):=\frac{{e}^{\beta x}}{n!} \frac{\text{d}^{n}}{\text{d}x^{n}}(x^{n}{e}^{-\beta x}) $$

form the solutions of the corresponding Euler–Lagrange equation. The Laguerre polynomials constitute a complete orthogonal system in L2((0,∞),ω) with \(\omega(x) = e^{-\beta x}\) and β > 0.
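
These classical identities are convenient to verify numerically. The following Python sketch is purely illustrative: it relies on SciPy's eval_hermite and eval_genlaguerre, which implement the standard normalization of the (generalized) Laguerre polynomials containing the factor 1/n!, and the sample points are arbitrary.

```python
import numpy as np
from scipy.special import eval_hermite, eval_genlaguerre, factorial

x = np.linspace(-3.0, 3.0, 13)          # arbitrary sample points

# Even case: H_{2m}(x) = (-1)^m 2^{2m} m! L_m^{(-1/2)}(x^2)
for m in range(5):
    lhs = eval_hermite(2 * m, x)        # physicists' Hermite polynomial
    rhs = (-1) ** m * 2 ** (2 * m) * factorial(m) * eval_genlaguerre(m, -0.5, x ** 2)
    assert np.allclose(lhs, rhs)

# Odd case: H_{2m+1}(x) = (-1)^m 2^{2m+1} m! x L_m^{(1/2)}(x^2)
for m in range(5):
    lhs = eval_hermite(2 * m + 1, x)
    rhs = (-1) ** m * 2 ** (2 * m + 1) * factorial(m) * x * eval_genlaguerre(m, 0.5, x ** 2)
    assert np.allclose(lhs, rhs)
```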

A further application arises from the problem of asymptotic controllability of dynamic systems. We consider a linear system

$$ x^{\prime}(t)=A(t)x(t)+ u(t) \ \text{ a. e. on}\ \mathbb{R}_{+}, x(0) = x_{0}, $$

and ask for controls u which stabilize the system asymptotically and exponentially. In order to realize such a control, we introduce a regulator type objective (see [29]),

$$ J_{\infty}(x,u)= \int^{\infty}_{0} \frac{1}{2}\left\{x^{T}(t)W(t)x(t)+u^{T}(t)R(t)u(t) \right\}\omega(t) dt . $$

It contains the proper weight function \(\omega(t) = e^{\beta t}\), with β > 0. For the state space, we choose a weighted Sobolev space, \(W^{1,n}_{2} (\Omega , \omega )\), \(\Omega =\mathbb {R}_{+}\), described in the next section. This space is a separable Hilbert space. The space of controls is a weighted Lebesgue space \({L_{2}^{n}} (\mathbb {R}_{+},\omega )\), which is a separable Hilbert space as well. The choice of the state space ensures that the states satisfy a property of Lyapunov type:

$$ |x(t)| \leq C \sqrt{t}e^{-\frac{\beta}{2} t}, $$

(compare Lemma 4). The linear quadratic regulator problem can be treated as a convex problem in Hilbert spaces. A duality concept of convex analysis is used to construct a dual problem, which yields a problem in the calculus of variations of the form (1) with \(\omega^{-1}(t)=e^{-\beta t}\) as a density function and Y as an appropriate weighted Sobolev space.

One typical approach to solve problems of type (1) numerically is to cut the time horizon at a sufficiently large T and to replace the problem by a receding horizon problem, compare [15]. But for a long horizon, the discretization of the time interval [0,T] generates large problems whose stability has to be shown. Here, we propose an alternative approach, which is inspired by methods from quantum mechanics. We do not cut the infinite horizon at any step of the discretization. For a numerical treatment, we propose to expand the solution into a Fourier–Laguerre series and to approximate this series by its partial sums, i.e., by Laguerre polynomials.

The paper has the following structure: The second section contains important definitions and properties of uniformly and non-uniformly weighted Sobolev spaces. Section 3 considers the problem of asymptotic controllability as an optimal control problem with infinite horizon in a Hilbert space setting and discusses the maximum principle and the dual problem, obtained in [29], in an appropriate weighted Sobolev space Y. Section 4 includes the main results of our investigations. We develop a Fourier–Laguerre method for a function y ∈ Y and provide sufficient conditions under which the function y can be expanded into a Fourier–Laguerre series which converges, together with its generalized derivative, pointwise and uniformly. In this way, we construct, in Section 5, a polynomial approximation scheme for the solution of the dual problem. The last section contains open questions and gives proposals for further directions of research opened by our approach of weighted function spaces to the calculus of variations and control theory on unbounded domains.

2 Weighted Lebesgue and Sobolev Spaces

We refer to [7] for a more detailed presentation of weighted spaces and their properties.

2.1 Weighted Lebesgue Spaces

Let Ω be an open set in \(\mathbb {R}^{n}\) and \(\mathcal {M}({\Omega })\) be the set of measurable functions in Ω.

Definition 1

  1. (a)

    A measurable function \(\omega \colon {\Omega } \to \mathbb {R}\) with nonnegative values almost everywhere is called a weight function. We denote the set of all weight functions on Ω by \(\mathcal {W}({\Omega })\).

  2. (b)

    A weight function ω is called a density function, iff ω is Lebesgue integrable over Ω, i.e., \(\int _{\Omega } \omega (t) dt<\infty \).

With the aid of a weight function \(\omega \in \mathcal {W}\) we define the weighted Lebesgue space.

Definition 2

Let \(\omega \in \mathcal {W}\) and 1 ≤ p < ∞, then

$$ L_{p} (\Omega, \omega) := \left\{x \in \mathcal{M}({\Omega}) \;\Big\vert\; \int_{\Omega}|x(t)|^{p} \omega(t) dt < \infty\right\}. $$

We equip this space with the norm

$$ \|x\|_{p,\omega} := \left( \int_{\Omega}|x(t)|^{p} \omega(t)dt\right)^{1/p}. $$

For 1 ≤ p < ∞, this space is a Banach space (compare [9, p. 146]). For 1 < p < ∞, it is reflexive. In the case p = 2, the space L2(Ω,ω) becomes a Hilbert space (see [9, p. 243]), with the scalar product

$$ \left\langle x,y\right\rangle_{2,\omega} :=\int_{\Omega} x(t)y(t)\omega(t) dt. $$
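
As a simple worked example (with an illustrative weight that reappears below), take Ω = ℝ++ and \(\omega(t)=e^{-\beta t}\), β > 0; then

$$ \|1\|_{2,\omega}^{2}=\int_{0}^{\infty} e^{-\beta t} dt=\frac{1}{\beta},\qquad \left\langle t,1\right\rangle_{2,\omega}=\int_{0}^{\infty} t e^{-\beta t} dt=\frac{1}{\beta^{2}}, $$

so, in particular, every polynomial belongs to \(L_{2}(\mathbb {R}_{++},e^{-\beta t})\), whereas the function \(e^{\beta t}\) does not.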

2.2 Weighted Sobolev Spaces

For x ∈ Lp(Ω,ω), we define the distributional derivative \(\mathcal {D}x\) according to [34, p. 46].

Definition 3

Let Sk be a set of weight functions defined on Ω of the following type:

$$ S_{k} :=\left\{\omega_{\alpha}\in\mathcal{W}({\Omega}) ~\vert~ 0\leq |\alpha|\leq k\right\}. $$

For 1 ≤ p < ∞, we define the weighted Sobolev space \({W^{k}_{p}}(\Omega ,S_{k})\) as the set of all functions

$$ x\in L_{p}(\Omega,\omega_{0})\cap L_{1,\text{loc}}({\Omega}), $$

which possess, for all multi-indices \(\alpha \in \mathbb {N}_{0}^{n}\) with |α| ≤ k, a distributional derivative \(\mathcal {D}^{\alpha }x\) up to the order k such that

$$ \mathcal{D}^{\alpha}x\in L_{p}(\Omega,\omega_{\alpha})\cap L_{1,\text{loc}}({\Omega}) $$

holds. By

$$ \|x\|_{k,p,S_{k}} := \left( \sum\limits_{0\leq|\alpha|\leq k}\|\mathcal{D}^{\alpha}x\|^{p}_{p,\omega_{\alpha}}\right)^{1/p} $$
(2)

we introduce a norm in \({W^{k}_{p}}(\Omega ,S_{k})\).

Let us assume that all weight functions within Sk coincide, i.e., the function and its distributional derivatives are weighted by the same weight; then, we call this space a uniformly weighted Sobolev space \({W^{k}_{p}}(\Omega ,\omega )\) with the weight function ω.
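
A minimal numerical sketch of the norm (2) for k = 1 and a uniform weight may be helpful; the choices ω(t) = e^{βt} with β = 1 and x(t) = e^{−t} below are arbitrary and serve only as an illustration.

```python
import numpy as np
from scipy.integrate import quad

beta = 1.0
w = lambda t: np.exp(beta * t)      # uniform weight omega(t) = e^{beta t} (illustrative choice)
x = lambda t: np.exp(-t)            # sample function, decays fast enough to lie in the space
dx = lambda t: -np.exp(-t)          # its classical (= distributional) derivative

norm_sq = (quad(lambda t: x(t) ** 2 * w(t), 0.0, np.inf)[0]
           + quad(lambda t: dx(t) ** 2 * w(t), 0.0, np.inf)[0])
print(np.sqrt(norm_sq))             # analytic value: sqrt(1 + 1) = sqrt(2)
```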

For a general weight function ω, these spaces need not be Banach spaces. The following lemma provides a criterion ensuring that the spaces are complete.

Lemma 1

Let p > 1 and \(\omega \in \mathcal {W}({\Omega })\) be a weight function with

$$ \omega^{-1/(p-1)}\in L_{1,\textup{loc}}({\Omega}), $$
(3)

then we have

$$ L_{p}(\Omega,\omega)\subset L_{1,\textup{loc}}({\Omega}). $$

Proof

See [17], p. 539, Corollary 1.6.

If we now assume that condition (3) is satisfied, then the distributional derivatives make sense for functions x ∈ Lp(Ω,ω) and we arrive at the following characterization of weighted Sobolev spaces:

Theorem 1

Let 1 < p < ∞ and \(\omega _{\alpha }^{-1/(p-1)}\in L_{1,\text {loc}}({\Omega })\) for all ωα ∈ Sk. Then \({W^{k}_{p}}(\Omega ,S_{k})\), equipped with the norm \(\|x\|_{k,p,S_{k}}\) from (2), is a Banach space. In particular, \({W^{k}_{2}}(\Omega ,S_{k})\) constitutes a Hilbert space with the scalar product

$$ \left\langle x,y\right\rangle_{k,2,S_{k}} := \sum_{0\leq |\alpha|\leq k}\left\langle \mathcal{D}^{\alpha}x, \mathcal{D}^{\alpha} y\right\rangle_{2,\omega_{\alpha}}. $$

Proof

See [18], p. 540ff., Theorem 1.11.

2.3 Properties of Functions in Special Weighted Spaces

Let us write \(\mathbb {R}_{+}:=[0,\infty )\) and \(\mathbb {R}_{++}:=(0,\infty )\). We consider the weighted Lebesgue space L2(Ω,ω) with \({\Omega } = \mathbb {R}_{++}\) and \(\omega(t) = e^{\varrho t}\), ϱ ≠ 0, as well as the corresponding uniformly weighted Sobolev space \({W^{1}_{2}} (\mathbb {R}_{++},e^{\varrho t})\). The following results show that some of the expected properties for (unweighted) Sobolev spaces over bounded intervals have their counterpart in weighted spaces (see also [7]).

Lemma 2

Let \(x\in {W^{1}_{2}} (\mathbb {R}_{++}, e^{\varrho t})\) with ϱ ≠ 0. Then:

  1. 1.

    The restriction of x to [0,t] satisfies \(x\in {W^{1}_{1}} ([0,t])\) for all \(t\in \mathbb {R}_{++}\).

  2. 2.

    A Poincaré-type inequality is valid,

    $$ \|x -x(0)\|_{2, e^{\varrho t}}\leq c \|\mathcal{D} x\|_{2,e^{\varrho t}},\quad c> 0. $$

Proof

See [27] for both propositions.

Remark 1

Since functions in \({W_{1}^{1}} ([0,t])\) have a (uniquely determined) continuous representative on (0,t) for all \(t\in \mathbb {R}_{++}\), which can be extended continuously to [0,t] (Theorem of Rellich/Kondrachov, see [1]), functions \(x\in {W^{1}_{2}} (\mathbb {R}_{++},e^{\varrho t})\) possess a continuous representative on \(\mathbb {R}_{+}\) as well, and we write \(x\in {W^{1}_{2}} (\mathbb {R}_{+},e^{\varrho t})\).

The following relations between non-weighted and uniformly weighted Sobolev spaces are satisfied.

Lemma 3

Let \(\omega(t) = e^{\beta t}\), β > 0, and x(0) = 0. Then, we have:

  1. 1.

    If \(x\in {W^{1}_{2}} (\mathbb {R}_{+},e^{\beta t})\), then \(x\in {W_{1}^{1}}(\mathbb {R}_{+})\) and \(xe^{\beta t}\in {W^{1}_{2}} (\mathbb {R}_{+},e^{-\beta t})\).

  2. 2.

    If \(y\in {W^{1}_{2}} (\mathbb {R}_{+},e^{-\beta t})\), then \(y e^{-\beta t} \in {W^{1}_{1}} (\mathbb {R}_{+})\) and \(ye^{-\beta t}\in {W^{1}_{2}} (\mathbb {R}_{+},e^{\beta t})\).

Proof

As an example, we show the first statement of the lemma. Using the Cauchy–Schwarz inequality, we obtain

$$ \left( \int_{0}^{\infty} |x(t)| dt\right)^{2}\leq \left( \int_{0}^{\infty} x^{2}(t)e^{\beta t} dt\right)\left( \int_{0}^{\infty} e^{-\beta t} dt\right)< \infty $$

and

$$ \left( \int_{0}^{\infty} |\mathcal{D} x(t)| dt\right)^{2} \leq \left( \int_{0}^{\infty}(\mathcal{D} x)^{2}(t)e^{\beta t} dt\right)\left( \int_{0}^{\infty} e^{-\beta t} dt\right)< \infty, $$

and consequently we obtain \(x\in {W_{1}^{1}}(\mathbb {R}_{+})\). Furthermore, x(t)eβt has a distributional derivative. With \(\mathcal {D}(x(t)e^{\beta t})=(\mathcal {D}(x(t))+\beta x(t))e^{\beta t}\), we find

$$ \int_{0}^{\infty} x^{2}(t)e^{2\beta t}e^{-\beta t}dt=\int_{0}^{\infty} x^{2}(t)e^{\beta t}dt $$

and

$$ \int_{0}^{\infty} \left[\mathcal{D}(x(t)e^{\beta t})\right]^{2} e^{-\beta t}dt \leq 2 \int_{0}^{\infty} \left[(\mathcal{D}x(t))^{2} + \beta^{2} x^{2}(t)\right]e^{2\beta t}e^{-\beta t}dt, $$

i.e., \(xe^{\beta t}\in {W^{1}_{2}} (\mathbb {R}_{+},e^{-\beta t})\) follows.

The next lemma shows that a norm-bounded set of functions \(x\in {W^{1}_{2}} (\mathbb {R}_{+},e^{\beta t})\), β > 0, satisfies a pointwise estimate in \(C(\mathbb {R}_{+})\) (we call this a property of Lyapunov type) (see [29]).

Lemma 4

Let \(\omega(t) = e^{\beta t}\), β > 0, and \(x\in {W^{1}_{2}} (\mathbb {R}_{+}, e^{\beta t})\) with ∥x∥1,2,ω ≤ 1 be given. Then, the following inequality holds

$$ |x(t)| \leq C \sqrt{t}e^{-\frac{\beta}{2} t} $$
(4)

for all \(t\in \mathbb {R}_{+}\).

Remark 2

It easily follows from Lemma 4 that \(\lim _{t\to \infty } x(t) =0\). The function x has a property of Lyapunov type [20].
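
A short computation indicates where the rate in (4) comes from; we sketch it for the special case x(0) = 0 (in general, only the constant C changes). By Lemma 2, x restricted to [0,t] belongs to \({W^{1}_{1}} ([0,t])\) for every \(t\in \mathbb {R}_{++}\), hence so does the product \(x e^{\frac{\beta}{2}\tau}\), and

$$ x(t)e^{\frac{\beta}{2}t}= {\int_{0}^{t}}\left( \mathcal{D}x(\tau)+\frac{\beta}{2}x(\tau)\right)e^{\frac{\beta}{2}\tau} d\tau. $$

The Cauchy–Schwarz inequality together with \(\|x\|_{1,2,\omega}\leq 1\) then gives

$$ |x(t)|e^{\frac{\beta}{2}t}\leq \sqrt{t}\left( \int_{0}^{t}\left|\mathcal{D}x(\tau)+\frac{\beta}{2}x(\tau)\right|^{2}e^{\beta \tau} d\tau\right)^{1/2}\leq \left( 1+\frac{\beta}{2}\right)\sqrt{t}, $$

i.e., (4) holds with \(C=1+\frac{\beta}{2}\) in this special case.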

2.4 Differential Equations in Sobolev Spaces

Let Ω = (0,T), \(T\in \mathbb {R}_{++}\). Then, each function \(x\in {W^{1}_{1}} ((0,T))\) has a uniquely determined continuous representative which can be continuously extended to [0,T]. It satisfies the equation

$$ x(t) = x(0)+ {\int_{0}^{t}} {\mathcal{D}}x(\tau)d\tau. $$
(5)

The continuous representative of \(x\in {W^{1}_{1}} ([0,T])\) can be identified with an absolutely continuous function x ∈ AC([0,T]). The following result goes back to Vitali (1905) (see [10]). If x ∈ AC([0,T]), then x is continuous on [0,T] and differentiable almost everywhere, its derivative is Lebesgue integrable and

$$ x(t) = x(0)+{\int_{0}^{t}} x^{\prime} (\tau) d\tau. $$
(6)

Moreover, if there is a function g ∈ L1((0,T)) such that at the same time

$$ x(t) = x(0)+{\int_{0}^{t}} g (\tau) d\tau $$

holds, then g(t) = x′(t) a.e. on (0,T) and we obtain from (5) and (6)

$$ \mathcal{D}x(t) = x^{\prime} (t)\quad \text{ a.e. on } (0,T). $$

Let now f be a Carathéodory function in the sense of [14], then the two forms of the differential equations

$$ \begin{array}{@{}rcl@{}} x^{\prime} (t) &=&f(t,x(t))\quad\text{a.e. on} \ \ (0,T),\\ \mathcal{D} x &=&f(\cdot,x(\cdot))\quad\text{on}\ \ (0,T) \end{array} $$

considered in \({W^{1}_{1}} ([0,T])\), \(T\in \mathbb {R}_{++}\), are equivalent to the integral equation

$$ x(t) = x(0) +{\int_{0}^{t}} f(\tau,x(\tau))d\tau \quad\text{on}\ (0,T), T\in \mathbb{R}_{++}. $$

Since the restriction of an arbitrary function \(x\in {W^{1}_{2}} (\mathbb {R}_{+}, e^{\varrho t})\) onto a finite interval [0,T], \(T\in \mathbb {R}_{+}\), belongs to \({W^{1}_{1}} ((0,T))\), the differential equations,

$$ \begin{array}{@{}rcl@{}} x^{\prime} (t) &=&f(t,x(t))\quad \text{a.e. on} \ \mathbb{R}_{+}, \end{array} $$
(7)
$$ \begin{array}{@{}rcl@{}} \mathcal{D} x &=&f(\cdot,x(\cdot))\quad\text{on}\ \mathbb{R}_{+} \end{array} $$
(8)

considered for x in the weighted Sobolev space \({W^{1}_{2}} (\mathbb {R}_{+}, e^{\varrho t})\) are equivalent and both of them are equivalent to the integral equation

$$ x(t) = x(0) +{\int_{0}^{t}} f(\tau,x(\tau))d\tau \quad\text{on}\ t \in \mathbb{R}_{++}. $$

The initial value problem x(0) = x0 for the differential equations (7) and (8) is well posed.
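
The equivalence with the integral equation is easy to check numerically for a concrete right-hand side. In the following Python sketch, the Carathéodory function f(t,x) = ax with a = −0.8, the initial value x0 = 1, and the horizon T = 5 are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

a, x0, T = -0.8, 1.0, 5.0
sol = solve_ivp(lambda t, x: a * x, (0.0, T), [x0], dense_output=True,
                rtol=1e-9, atol=1e-12)

t = np.linspace(0.0, T, 501)
x = sol.sol(t)[0]
# integral form: x(0) + int_0^t f(tau, x(tau)) d tau, evaluated by the trapezoidal rule
x_int = x0 + cumulative_trapezoid(a * x, t, initial=0.0)
print(np.max(np.abs(x - x_int)))    # small discretization error only
```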

3 Problem Formulation

3.1 The Starting Problem (Pω) and its Maximum Principle

The problem (Pω) as an optimization problem in Hilbert spaces is stated as follows:

Let \(\Omega =\mathbb {R}_{++}\), then we minimize

$$ (\textbf{P}_{\omega}):\quad J_{\infty}(x,u) = \int^{\infty}_{0} \frac{1}{2}\left\{x^{T}(t)W(t)x(t)+u^{T}(t)R(t)u(t)\right\}\omega(t) dt $$
(9)

with respect to

$$ \begin{array}{@{}rcl@{}} (x,u) &\in& W^{1,n}_{2} (\mathbb{R}_{+}, \omega) \times {L_{2}^{n}} (\mathbb{R}_{+},\omega), \end{array} $$
(10)
$$ \begin{array}{@{}rcl@{}} \mathcal{D}x &=& A(\cdot)x+ u \text{ on } \mathbb{R}_{+},\quad x(0) = x_{0}, \end{array} $$
(11)
$$ \begin{array}{@{}rcl@{}} u(t) &\in & U \text{ a.e. on } \mathbb{R}_{+}. \end{array} $$
(12)

Here, \(W^{1,n}_{2} (\mathbb {R}_{+}, \omega )\), \({L_{2}^{n}} (\mathbb {R}_{+},\omega )\) denote the n-dimensional vector spaces with components in \({W^{1}_{2}} (\mathbb {R}_{+}, \omega )\) or \(L_{2} (\mathbb {R}_{+},\omega )\), respectively.

We state the following basic hypotheses:

  • (V1) Let \(U= \mathbb {R}^{n}\) (no pointwise control constraints) and \(\omega(t) = e^{\beta t}\), β > 0, be a (proper) weight.

  • (V2) Let W and R be symmetric, \(W,R:\mathbb {R}_{+}\rightarrow \mathbb {R}^{n\times n}\in L_{\infty }^{n\times n}(\mathbb {R}_{+})\cap C^{1,n\times n}(\mathbb {R}_{+})\), \(W_{ij}^{\prime }, R_{ij}^{\prime }:\mathbb {R}_{+}\rightarrow \mathbb {R}\in L_{\infty }(\mathbb {R}_{+})\cap C(\mathbb {R}_{+}), i,j=1\ldots n\).

  • (V3) Let the matrix function \(A: \mathbb {R}_{+}\rightarrow \mathbb {R}^{n\times n} \in L_{\infty }^{n\times n}(\mathbb {R}_{+})\cap C^{n\times n}(\mathbb {R}_{+})\), \(A_{ij}^{\prime }: \mathbb {R}_{+}\rightarrow \mathbb {R} \in L_{\infty }(\mathbb {R}_{+})\cap C(\mathbb {R}_{+}), i,j=1\ldots n\).

  • (V4) Let |M|s be the spectral norm of a matrix M and let \({\lambda ^{M}_{m}}(t)\) denote the smallest eigenvalue of the symmetric matrix M(t), further let

    $$ \overline{A}:=\sup\limits_{t\in \mathbb{R}_{+}} |A(t)|_{s},\quad \underline{W}:=\inf\limits_{t\in\mathbb{R}_{+}} {\lambda_{m}^{W}}(t), \quad \underline{R}:=\inf\limits_{t\in\mathbb{R}_{+}} {\lambda^{R}_{m}}(t). $$

    Further, let the following coercivity properties be satisfied:

    1. 1.

      For \(\overline {A}=0\) let \(\underline { R}> 0\) and \(\underline { W}> 0\).

    2. 2.

      For \(\overline {A}\ne 0\) let

      $$ \underline{W}>0\quad \text{ and} \quad \left( \underline{ R}-\frac{1}{2}\underline{W}\right)> 0. $$
  • (V5) Let us denote the set of all processes satisfying (10)–(11) by \(\mathcal {B}\) and assume \(\mathcal {B}\ne \emptyset \), \(\mathcal {B}\ne \{(x^{\ast },u^{\ast })\}\).

Our consideration is based on the following optimality criterion.

Definition 4

(Criterion L) Let processes \((x,u), (x^{\ast },u^{\ast })\in \mathcal {B}\) be given. Then, the pair \((x^{\ast },u^{\ast })\in \mathcal {B}\) is called globally optimal for (Pω) if, for all pairs \((x,u)\in \mathcal {B}\),

$$ J_{\infty}(x,u) - J_{\infty} (x^{\ast},u^{\ast}) \geq 0. $$

Remark 3

  1. 1.

    The integral considered in (9) is understood in Lebesgue sense. Due to assumption (V2), the objective is finite for \((x,u)\in \mathcal {B}\) and the integrand in the objective is nonnegative.

  2. 2.

    The same weight as in the objective is used for weighting the process (x,u) itself.

  3. 3.

    Under the stated assumptions, the problem (Pω) has a unique solution, and the optimal control is asymptotically stabilizing (see [29]).

The optimal solution satisfies a Pontryagin type maximum principle (see [29]).

Theorem 2

Let assumptions (V1)–(V5) be satisfied and \((x^{\ast },u^{\ast })\in \mathcal {B}\) be an optimal solution of (Pω), \(\omega(t) = e^{\beta t}\), β > 0, in the sense of Criterion L. Then, there are multipliers (λ0, y0) with

$$ \lambda_{0}=1, (N) $$
$$ y_{0}\in W^{1,n}_{2}(\mathbb{R}_{+}, \omega^{-1}),\quad \omega^{-1}(t)=e^{-\beta t}, \beta > 0,(T) $$
$$ H(t,x^{\ast}(t),u^{\ast}(t),y_{0}(t),\lambda_{0})=\max_{ v\in \mathbb{R}^{n}} H(t,x^{\ast}(t),v, y_{0}(t),\lambda_{0}) \quad \text{a.e. on} \ \mathbb{R}_{+} , (M) $$
$$ \mathcal{D}y_{0} = - \nabla_{\xi} H (\cdot,x^{\ast}(\cdot),u^{\ast}(\cdot),y_{0}(\cdot),\lambda_{0}) \quad \text{ on} \ \mathbb{R}_{+}, (C) $$

where \(H:\mathbb {R}_{+}\times \mathbb {R}^{n}\times \mathbb {R}^{n}\times \mathbb {R}^{n}\times \mathbb {R}\to \mathbb {R}\) is the Pontryagin function,

$$ H(t,\xi,v,\eta,\lambda_{0})=-\lambda_{0} \frac{1}{2} \left( \xi^{T}W(t) \xi+v^{T}\text{R}(t)v\right)e^{\beta t}+\eta^{T} (A(t)\xi+v). $$

From the maximum principle, we can conclude regularity properties of its solution (x∗, u∗, y0). Firstly, we analyze the regularity of u∗ from the maximum condition (M) and obtain

$$ u^{\ast}(t)= R^{-1}(t)y_{0}(t)e^{-\beta t}\quad \forall t\in \mathbb{R}_{+}. $$
(13)

Since the function y0 belongs to \(W^{1,n}_{2}(\mathbb {R}_{+}, e^{-\beta t})\), we find by Lemma 3 that \(y_{0} e^{-\beta t}\in W^{1,n}_{1}(\mathbb {R}_{+}),\) and applying assumption (V2), we obtain that u∗ is continuous and bounded,

$$ |u^{\ast}(t)| \leq K,\quad \lim_{t\to \infty} u^{\ast}(t)=0. $$

Secondly, we analyze the regularity properties of (x∗, y0) resulting from the canonical equations if we replace the control u∗ in the canonical system by (13).

Lemma 5

Let (x∗, y0) be a solution of the canonical equations

$$ \left( \begin{array}{c} \mathcal{D}x^{\ast}\\ \mathcal{D}{y}_{0} \end{array} \right) = \left( \begin{array}{cc} A(\cdot) & R^{-1}(\cdot)e^{-\beta t} \\ W(\cdot)e^{\beta t} & -A^{T}(\cdot) \end{array} \right)\left( \begin{array}{c} x^{\ast}\\ {y}_{0} \end{array} \right)\quad\text{on}\ \mathbb{R}_{+}. $$
(14)

Then, we conclude

$$ (x^{\ast}, y_{0})\in W^{2,n}_{2}\left( \mathbb{R}_{+}, e^{\beta t}\right)\times W^{2,n}_{2}\left( \mathbb{R}_{+}, e^{-\beta t}\right). $$
(15)

Proof

Let \(x^{\ast } \in W^{1,n}_{2}(\mathbb {R}_{+}, e^{\beta t})\) and \(y_{0}\in W^{1,n}_{2}(\mathbb {R}_{+}, e^{-\beta t})\). According to Lemma 3, we obtain \(x^{\ast } e^{\beta t} \in W^{1,n}_{2}(\mathbb {R}_{+}, e^{-\beta t})\) and \(y_{0}e^{-\beta t}\in W^{1,n}_{2}(\mathbb {R}_{+}, e^{\beta t})\). Then, due to assumption (V2), the canonical equations (14) hold pointwise for all \(t\in \mathbb {R}_{+}\),

$$ \begin{array}{@{}rcl@{}} (x^{\ast}(t))^{\prime} &=& A(t)x^{\ast}(t) +R^{-1}(t)e^{-\beta t}y_{0}(t),\\ (y_{0}(t))^{\prime} &=& W(t)e^{\beta t}x^{\ast}(t) - A^{T}(t){y}_{0}(t) \end{array} $$

and we find \(\mathcal {D}x^{\ast }= (x^{\ast })^{\prime }\in W^{1,n}_{2}(\mathbb {R}_{+}, e^{\beta t})\) and \(\mathcal {D}y_{0}= y_{0}^{\prime }\in W^{1,n}_{2}(\mathbb {R}_{+}, e^{-\beta t})\).

Remark 4

Each function \(y_{0}\in W^{2,n}_{2}(\mathbb {R}_{+}, e^{-\beta t})\) has a representative which is continuously differentiable on \(\mathbb {R}_{+}\); in particular, the value \(y^{\prime }_{0}(0)\) is well defined.

3.2 Lagrange Duality for the Problem (Pω)

We follow the general construction scheme developed in [16], which was adapted to the problem (Pω), \(\omega(t) = e^{\beta t}\), β > 0, in a similar form in [29]. The state equations and the initial conditions of the problem (Pω) form a linear operator \(\mathcal {A}_{0}:X_{0}\to Y_{0}\),

$$ \begin{array}{@{}rcl@{}} X_{0}:&=&W_{2}^{1,n}\left( \mathbb{R}_{+},e^{\beta t}\right) \times {L_{2}^{n}} \left( \mathbb{R}_{++},e^{\beta t}\right),\\ Y_{0}:&=&{L_{2}^{n}}\left( \mathbb{R}_{++},e^{\beta t}\right)\times\mathbb{R}^{n},\\ \mathcal{A}_{0}(x,u)&=& \left( \begin{array}{c}x^{\prime} (\cdot)-A(\cdot)x(\cdot)-u(\cdot) \\ x(0) \end{array} \right). \end{array} $$

We remark that the operator \(\mathcal {A}_{0}\) is bounded due to assumption (V3) (see [29]). With

$$ \textbf{b}:= \left( \begin{array}{c} 0\\ x_{0} \end{array}\right), $$

the admissible domain \(\mathcal {B}\) is given by

$$ \mathcal{B}:= \{(x,u)\in X_{0} | \mathcal{A}_{0}(x,u)=\textbf{b}\}. $$

Let assumptions (V1)(V5) be satisfied. Then, the admissible domain \(\mathcal {B}\) of (Pω) is convex and closed.

The objective J defines a symmetric bilinear form in X0 × X0,

$$ \textbf{Q}((x_{1},u_{1}),(x_{2},u_{2})):= \int_{0}^{\infty} \{{x_{1}^{T}} (t)W(t)x_{2}(t)+{u_{1}^{T}}(t)R(t)u_{2}(t)\}e^{\beta t}dt $$

which is bounded in the norm topology on X0 × X0 due to assumption (V2). We introduce the Lagrange function in the space X0 × Y by

$$ \begin{array}{@{}rcl@{}} {\Phi}((x,u),(y,\eta))&:=& J_{\infty} (x,u) + \left< \mathcal{A}_{0}(x,u)-\textbf{b}, (y,\eta)\right>_{{L_{2}^{n}}(\mathbb{R}_{+})\times \mathbb{R}^{n}},\\ Y&:=& \{(y,\eta ) | y\in {L_{2}^{n}}(\mathbb{R}_{+},e^{-\beta t}), \eta \in\mathbb{R}^{n}\}. \end{array} $$

A dual problem to (Pω) is now given by

$$ \begin{array}{@{}rcl@{}} \textbf{(D)}: \quad G(y,\eta)&:=&\inf\limits_{(x,u) \in X_{0}} {\Phi}((x,u),(y,\eta))\longrightarrow \text{Max} !\\ &&\text{w.r.t.}\ (y,\eta)\in {L_{2}^{n}}(\mathbb{R}_{+},e^{-\beta t})\times \mathbb{R}^{n} \end{array} $$

and the weak duality relation

$$ \begin{array}{@{}rcl@{}} &&\inf\limits_{(x,u) \in \mathcal{B}} J_{\infty} (x,u) = \inf\limits_{(x,u) \in X_{0}}\left[\sup\limits_{(y,\eta) \in Y}{\Phi}((x,u),(y,\eta))\right]\\ &\geq & \sup\limits_{(y,\eta) \in Y} \left[\inf\limits_{(x,u) \in X_{0}} {\Phi}((x,u),(y,\eta))\right] =\sup\limits_{(y,\eta) \in Y} G(y,\eta) \end{array} $$
(16)

holds. We conclude that each problem of the type

$$ (\tilde{\textbf{D}}):\quad \tilde{G}(y,\eta) \to \text{Max} !\quad \text{w.r.t.}\ (y,\eta)\in \tilde Y\subset Y $$
(17)

satisfying

$$ \tilde{G}(y,\eta) \leq G(y,\eta) \quad \forall (y,\eta)\in \tilde Y $$

is a dual problem to (Pω) as well. We use different realizations of this idea and introduce the following subsets of Y:

$$ \begin{array}{@{}rcl@{}} Y^{\infty} &:=& W^{2,n}_{2} (\mathbb{R}_{+},e^{-\beta t}) \times \mathbb{R}^{n},\\ Y^{N} &:=& \mathcal{P}^{n}_{N} \times \mathbb{R}^{n},\quad N=1,2,\ldots \end{array} $$

with \(\mathcal {P}_{N}\) as the set of all polynomials of degree at most N. Then, according to (17), the problems (DN) and (D∞) are dual problems to (Pω) as well,

$$ \begin{array}{@{}rcl@{}} (\textbf{D}^{\textbf{N}}): \quad G^{N}(y,\eta)&:=&\inf\limits_{(x,u) \in X_{0}} {\Phi}((x,u),(y,\eta))\longrightarrow \text{Max} !\\ &&\hspace{2cm}\text{w.r.t.}\ \ (y,\eta)\in Y^{N},\\ (\textbf{D}^{\infty}): \quad G^{\infty} (y,\eta)&:=&\inf\limits_{(x,u) \in X_{0}} {\Phi}((x,u),(y,\eta))\longrightarrow \text{Max} !\\ &&\hspace{2cm}\text{w.r.t.}\ \ (y,\eta)\in Y^{\infty}. \end{array} $$

Since \(Y^{N}\subset Y^{\infty }\), the following relation holds

$$ \sup\limits_{(y,\eta)\in Y^{N}} G^{N}(y,\eta)\leq \sup\limits_{(y,\eta)\in Y^{\infty}} G^{\infty} (y,\eta) \leq \inf(P_{\omega}). $$

The obtained dual problem (D∞) itself contains two optimization problems on the first level, which can be separated. The first optimization problem is

$$ \begin{array}{@{}rcl@{}} J_{1,y}(u) &=& \int_{0}^{\infty} \left\{\frac{1}{2} u^{T}(t)R(t)u(t)e^{\beta t} - u^{T}(t) y(t)\right\}dt\longrightarrow \text{Min} !\\ \text{s. t.}\ \ u &\in & {L_{2}^{n}}(\mathbb{R}_{+},e^{\beta t}). \end{array} $$
(18)

Here, (y,η) ∈ Y is a given parameter.
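
Since the integrand in (18) contains no derivative of u, the minimization can be carried out pointwise in t. A short formal computation, anticipating (19) below and using only the positive definiteness of R(t) guaranteed by (V2) and (V4), reads

$$ \nabla_{u}\left( \frac{1}{2} u^{T}R(t)u e^{\beta t} - u^{T} y(t)\right)=R(t)u e^{\beta t}-y(t)=0 \quad\Longrightarrow\quad u(t)= R^{-1}(t)y(t)e^{-\beta t}. $$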

Remark 5

  1. 1.

    The objective in (18) is coercive in \({L_{2}^{n}}(\mathbb {R}_{+},e^{\beta t})\) due to assumptions (V2) and (V4).

  2. 2.

    The Gâteaux derivative exists and delivers necessary and sufficient conditions for optimality,

    $$ u^{\ast}_{y}(t)= R^{-1}(t)y(t)e^{-\beta t}. $$
    (19)
  3. 3.

    For y = y0, condition (19) coincides with the maximum condition (M) from the maximum principle.

The second optimization problem is

$$ \begin{array}{@{}rcl@{}} J_{2,y}(x)&=& \int_{0}^{\infty} \left\{\frac{1}{2} x^{T}(t)W(t)x(t)e^{\beta t} + [x^{\prime} (t)-A(t)x(t)]^{T} y(t)\right\}dt\\ &&+ (x(0)-x_{0})^{T} \eta \longrightarrow \text{Min} ! \qquad \text{s. t.}\ x\in W_{2}^{1,n}(\mathbb{R}_{+},e^{\beta t}). \end{array} $$
(20)

For (y,η) ∈ Y∞, we find an equivalent formulation of J2,y by integration by parts:

$$ \begin{array}{@{}rcl@{}} J_{2,y}(x)&=& \int_{0}^{\infty} \left\{\frac{1}{2} x^{T}(t)W(t)x(t)e^{\beta t} - x^{T}(t)[{y^{\prime} (t)}+A^{T}(t)y(t)]\right\}dt\\ &&+ x(0)^{T}(\eta-y(0)) -{x_{0}^{T}}\eta \longrightarrow \text{Min} ! \qquad \text{s.t.}\ x\in W_{2}^{1,n}(\mathbb{R}_{+},e^{\beta t}). \end{array} $$
(21)
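
For completeness, the integration-by-parts step behind (21) can be sketched as follows. For \(x\in W_{2}^{1,n}(\mathbb {R}_{+},e^{\beta t})\) and \(y\in W^{2,n}_{2}(\mathbb {R}_{+},e^{-\beta t})\),

$$ \int_{0}^{\infty} [x^{\prime}(t)]^{T} y(t) dt = \lim_{T\to\infty} x^{T}(T)y(T) - x^{T}(0)y(0) - \int_{0}^{\infty} x^{T}(t) y^{\prime}(t) dt, $$

where the limit vanishes: by the Cauchy–Schwarz inequality, \(\int_{0}^{\infty} |[x^{\prime}(t)]^{T} y(t)| dt \leq \|\mathcal{D}x\|_{2,e^{\beta t}}\|y\|_{2,e^{-\beta t}}\) and \(\int_{0}^{\infty} |x^{T}(t) y^{\prime}(t)| dt \leq \|x\|_{2,e^{\beta t}}\|y^{\prime}\|_{2,e^{-\beta t}}\) are finite, so that \(x^{T}y\) belongs to \({W^{1}_{1}} (\mathbb {R}_{+})\cap L_{1}(\mathbb {R}_{+})\) and therefore \(x^{T}(T)y(T)\to 0\). Collecting the boundary terms \(x^{T}(0)\eta - x^{T}(0)y(0)\) then yields (21).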

Remark 6

  1. 1.

    The objective depends on x only (and not on \(\mathcal {D}x\)). In the topology of the space \({L_{2}^{n}}(\mathbb {R}_{+},e^{\beta t})\), it is coercive due to the assumptions (V2) and (V4). The minimum of the objective exists in \({L_{2}^{n}}(\mathbb {R}_{+},e^{\beta t})\), and can be calculated by the Gâteaux derivative:

    $$ x^{\ast}_{y}(t)= W^{-1}(t)[ y^{\prime} (t)+A^{T}(t)y(t)] e^{-\beta t},\quad y(0)=\eta. $$
    (22)
  2. 2.

    Since \(y \in W^{2,n}_{2}(\mathbb {R}_{+}, e^{-\beta t})\), the right-hand side of (22) is well-defined; in particular, the classical derivative y′(t) exists for all \(t\in \mathbb {R}_{+}\) and y′(0) is well defined. Due to the assumptions (V2)–(V4), we find \(x_{y}^{\ast }\in W^{1,n}_{2} (\mathbb {R}_{+},e^{\beta t})\).

  3. 3.

    For y = y0, (22) is the canonical equation (C) in the maximum principle. Here, the equation (22), which can be evaluated pointwise at t = 0, gives the connection between the initial values of the states and the adjoints,

    $$ x^{\ast}(0)= W^{-1}(0)[ y^{\prime}_{0} (0)+A^{T}(0)y_{0}(0)]. $$
    (23)

Now, we combine the results of the two optimization tasks and obtain the dual problem (D∞):

$$ \begin{array}{@{}rcl@{}} (\textbf{D}^{\infty})\!:\! G^{\infty}(y,y(0))\!:&\!=\!& - \int_{0}^{\infty} \frac{1}{2}\left[y^{\prime} (t)\!+\!A^{T}(t)y(t)\right]^{T} W^{-1}(t)\left[y^{\prime} (t)\!+\!A^{T}(t)y(t)\right]e^{-\beta t}dt\\ &&- \int_{0}^{\infty} \frac{1}{2}y^{T}(t)R^{-1}(t)y(t)e^{-\beta t}dt - {x_{0}^{T}}y(0)\longrightarrow \text{Max} !\\ && \text{s. t.}\ (y,y(0)) \in Y^{\infty}. \end{array} $$

Remark 7

  1. 1.

    The problem (D∞) has a solution. It can be shown by an estimation that the adjoint y0, obtained from the maximum principle, solves the problem and belongs to Y∞ (see [29] and Lemma 5).

  2. 2.

    For large n and time-dependent coefficients A(t), W(t), and R(t), the solution of the problem (D∞) cannot be obtained analytically, in general. Hence, we look for an approximation scheme for the dual problem (D∞).

  3. 3.

    One well-tested approach is a time discretization and the reduction of the problem to a finite horizon T. But for a long horizon T, the following difficulties can appear: Firstly, one does not know how the final conditions for the adjoints y(T) can be fixed. Secondly, a fine discretization leads to large problems whose stability has to be verified.

  4. 4.

    Here, we try to find an alternative approach. We develop a Fourier–Laguerre analysis of the problem in a weighted Sobolev space and express the solution by Fourier–Laguerre series. Then, the original variational problem is transformed into the Hilbert space l2 of Fourier–Laguerre coefficients. Fourier–Laguerre series can be approximated by polynomials.

4 Fourier–Laguerre Expansions for Functions in Weighted Spaces

In this section we study functions

$$ \begin{array}{@{}rcl@{}} y &\in & W^{2,n}_{2} (\mathbb{R}_{+},e^{-\beta t}) \cap W^{2,n}_{2} (\mathbb{R}_{++},S_{2}),\\ S_{2} & =& \left\{\omega_{0} =e^{-\beta t},\omega_{1}=t e^{-\beta t}, \omega_{2} =t e^{-\beta t} \right\}. \end{array} $$
(24)

The introduction of the non-uniformly weighted Sobolev space \(W^{2,n}_{2} (\mathbb {R}_{++},S_{2})\) is motivated by the following idea: Functions in \(L_{2}(\mathbb {R}_{++}, e^{-\beta t})\) can be expanded into a Fourier–Laguerre series of the following type,

$$ y \sim \sum\limits_{k=0}^{\infty} a_{k} L_{k}^{(0,\beta)}. $$
(25)

Herein, \(L_{k}^{(\alpha ,\beta )}\) denote the generalized Laguerre polynomials,

$$ L_{k}^{(\alpha,\beta)}(t):= \frac{1}{k!} t^{-\alpha}{e}^{\beta t} \frac{\text{d}^{k}}{\text{d}t^{k}}\left( t^{\alpha+k}{e}^{- \beta t}\right),\quad \alpha>-1, \beta >0. $$

Let us assume for a moment that (25) holds pointwise with equality and that the series can be differentiated term by term. Then, we obtain, due to the relation between Laguerre polynomials,

$$ \frac{d}{dt} L^{(\alpha,\beta)}_{k}(t)= -\beta L^{(\alpha+1,\beta)}_{k-1}(t)\quad \forall k\geq 1, $$

a series for \(\mathcal {D}y\) with Laguerre polynomials \(L_{k}^{(1,\beta )}\),

$$ \mathcal{D}y \sim \sum\limits_{k=0}^{\infty} b_{k} L_{k}^{(1,\beta)}(t), $$
(26)

i.e., \(\mathcal {D}y\in L_{2}(\mathbb {R}_{++}, \omega _{1})\).

The following properties of the generalized Laguerre polynomials can be found in [12], or [33] (for β = 1). It holds

$$ L_{k}^{(\alpha,\beta)}(0):=\frac{{\Gamma}(k+\alpha+1)}{{\Gamma}(\alpha+1){\Gamma}(k+1)}, $$

where Γ denotes the Gamma function. For \(\alpha \in \mathbb {N}_{0}\), we get

$$ L_{k}^{(\alpha,\beta)}(0):= \left( \begin{array}{c} k+\alpha\\ \alpha \end{array}\right). $$

Generalized Laguerre polynomials build a complete orthogonal system in \(L_{2}(\mathbb {R}_{++}, \omega _{(\alpha ,\beta )})\) (see [33]), where

$$ \omega_{(\alpha,\beta)}(t) := t^{\alpha} e^{-\beta t},\quad \alpha >-1, \beta > 0. $$

We construct a complete orthonormal system by dividing each polynomial by the square root of the normalization constant

$$ h_{k}^{(\alpha, \beta)}:= \left< L_{k}^{(\alpha,\beta)},L_{k}^{(\alpha,\beta)}\right>_{2,\omega_{(\alpha,\beta)}} = \frac{{\Gamma} (k+\alpha+1)}{\beta^{\alpha+1}{\Gamma}(k+1)}. $$
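
With the definition above, \(L_{k}^{(\alpha ,\beta )}(t)\) is the classical generalized Laguerre polynomial \(L_{k}^{(\alpha )}\) evaluated at βt, so orthogonality and the value of \(h_{k}^{(\alpha ,\beta )}\) can be checked numerically. The following Python sketch does this for the illustrative choices α = 1 and β = 0.5 (any α > −1, β > 0 would do); it uses SciPy's eval_genlaguerre and plain adaptive quadrature.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre, gamma

alpha, beta = 1.0, 0.5              # illustrative parameters

def L(k, t):
    # L_k^{(alpha,beta)}(t) = L_k^{(alpha)}(beta * t)
    return eval_genlaguerre(k, alpha, beta * t)

def inner(j, k):
    # weighted inner product with omega_(alpha,beta)(t) = t^alpha e^{-beta t}
    f = lambda t: L(j, t) * L(k, t) * t ** alpha * np.exp(-beta * t)
    return quad(f, 0.0, np.inf)[0]

for k in range(4):
    h_k = gamma(k + alpha + 1) / (beta ** (alpha + 1) * gamma(k + 1))
    assert np.isclose(inner(k, k), h_k)                    # normalization constant
    for j in range(k):
        assert np.isclose(inner(j, k), 0.0, atol=1e-7)     # orthogonality
```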

Following Hurwitz, we recall that a function \(\phi \in L_{2}(\mathbb {R}_{++},\omega _{(\alpha ,\beta )})\) can be expanded into a Fourier series with respect to the orthonormal polynomials by

$$ \phi \sim \sum\limits_{k=0}^{\infty} \frac{\phi_{k}}{\sqrt{h_{k}^{(\alpha, \beta)}}} L_{k}^{(\alpha,\beta)} $$
(27)

abbreviating

$$ \phi_{k}= \left<\phi,L_{k}^{(\alpha,\beta)}\right>_{2, \omega_{(\alpha,\beta)}}\frac{1}{\sqrt{h_{k}^{(\alpha,\beta)}}}. $$

Using the completeness of the orthonormal system, we deduce

$$ \lim_{N \to \infty} \left\| \phi-\sum\limits_{k=0}^{N}\frac{\phi_{k}}{\sqrt{h_{k}^{(\alpha, \beta)}}} L_{k}^{(\alpha,\beta)}\right\|^{2}_{2, \omega_{(\alpha,\beta)}} = 0. $$

In other words, ϕ can be approximated in quadratic mean by the partial sums of (27).

Since the series in (27) does not, in general, converge pointwise to the function ϕ, the equality sign often (mistakenly) used instead of ∼ in (27) (see, e.g., [12]) should be analyzed carefully. Even with further assumptions on the function ϕ, like absolute continuity, uniform convergence of the Fourier–Laguerre series does not follow in general.

Due to the completeness of the Laguerre polynomials in \(L_{2}(\mathbb {R}_{++}, \omega _{(\alpha ,\beta )})\), the Parseval equation holds for every such ϕ (see, e.g., [33, p. 246 ff]),

$$ \sum\limits_{k=0}^{\infty} {\phi_{k}^{2}} = \left<\phi,\phi\right>_{2, \omega_{(\alpha,\beta)}}. $$

For functions y belonging to the intersection of the two weighted Sobolev spaces,

$$ y\in W^{2,n}_{2} \left( \mathbb{R}_{+},e^{-\beta t}\right) \cap W^{2,n}_{2} \left( \mathbb{R}_{++},S_{2}\right), $$

we find sufficient conditions for pointwise and uniform convergence of the Fourier–Laguerre series to the function y. Following the technique developed in [33, p. 39ff], we start with the generalized Parseval equation:

Theorem 3

Let f and g belong to \(L_{2}(\mathbb {R}_{++}, \omega _{(\alpha ,\beta )})\), then the following equation holds true:

$$ \sum\limits_{k=0}^{\infty} f_{k} g_{k} = \left< f,g\right>_{2,\omega_{(\alpha,\beta )}} $$
(28)

with

$$ \begin{array}{@{}rcl@{}} f_{k} &=& \left< f, L_{k}^{(\alpha,\beta)}\right>_{2, \omega_{(\alpha,\beta)}}\frac{1}{\sqrt{h_{k}^{(\alpha, \beta)}}},\\ g_{k} &=& \left< g, L_{k}^{(\alpha,\beta)}\right>_{2, \omega_{(\alpha,\beta)}}\frac{1}{\sqrt{h_{k}^{(\alpha, \beta)}}}. \end{array} $$

Proof

For the case β = 1, we refer to [33, p. 242ff], and the proof can easily be extended to an arbitrary β > 0.

Conclusion

From Theorem 3 and its proof, we immediately deduce:

$$ \sum\limits_{k=0}^{\infty} \frac{f_{k}}{\sqrt{h_{k}^{(\alpha, \beta)}}} \left<g,L_{k}^{(\alpha,\beta)}\right>_{2, \omega_{(\alpha,\beta)}} = \left< f,g\right>_{2, \omega_{(\alpha,\beta)}} $$
(29)

and

$$ \sum\limits_{k=0}^{\infty} \frac{f_{k}}{\sqrt{h_{k}^{(\alpha, \beta)}}}\int_{0}^{\xi} g(t)L_{k}^{(\alpha,\beta)}(t) \omega_{(\alpha,\beta)}(t) dt = \int_{0}^{\xi} f(t)g(t)\omega_{(\alpha,\beta)}(t) dt $$
(30)

for all ξ > 0. Both series in (29) and (30) converge absolutely and uniformly on each interval \((0,\xi ),\ \xi \in \mathbb {R}_{++}\) (see [33, p. 40 ff]).

Let us assume now that a function y can be expanded in a Fourier–Laguerre series:

$$ y(t) \sim \sum\limits_{k=0}^{\infty} \frac{\alpha_{k}}{\sqrt{h_{k}^{(\alpha, \beta)}}} L_{k}^{(\alpha,\beta)}(t), $$

with coefficients

$$ \alpha_{k}= \left<y,L_{k}^{(\alpha,\beta)}\right>_{2, \omega_{(\alpha,\beta)}}\frac{1}{\sqrt{h_{k}^{(\alpha,\beta)}}}. $$

We look for conditions under which a pointwise relation

$$ y(t) = \sum\limits_{k=0}^{\infty} \frac{f_{k}}{\sqrt{h_{k}^{(\alpha,\beta)}}} L_{k}^{(\alpha,\beta)}(t)\quad \forall t \in \mathbb{R}_{++} $$

holds true and how the Fourier–Laguerre coefficients fk can be determined. The following result has been established in [33, p. 243ff] and [32].

Theorem 4

Each function Φ fulfilling for all t > 0 the equation

$$ {\Phi}(t) =\beta e^{\beta t}\int_{t}^{\infty} e^{-\beta \tau}f(\tau)d\tau $$

with some function f belonging to \(L_{2}(\mathbb {R}_{++}, \omega _{(\alpha ,\beta )})\) can be expanded into a Fourier–Laguerre series

$$ {\Phi}(t) =\sum\limits_{k=0}^{\infty} A_{k} L_{k}^{(\alpha,\beta)}(t), $$
(31)

where

$$ A_{k} = \frac{f_{k}}{\sqrt{h_{k}^{(\alpha, \beta)}}}- \frac{f_{k+1}}{\sqrt{h_{k+1}^{(\alpha, \beta)}}},\quad k=0,1,\ldots, $$

which converges uniformly and absolutely on each finite interval (0,b), \(b\in \mathbb {R}_{++}\). Here, fk are the Fourier–Laguerre coefficients of f in \(L_{2}(\mathbb {R}_{++}, \omega _{(\alpha ,\beta )})\),

$$ f_{k}= \left<f,L_{k}^{(\alpha,\beta)}\right>_{2, \omega_{(\alpha,\beta)}}\frac{1}{\sqrt{h_{k}^{(\alpha,\beta)}}}. $$

Proof

The proof uses the conclusion (29) of the generalized Parseval equation (28) with the choice

$$ \begin{array}{@{}rcl@{}} g(t) &=& \tau_{\xi}(t)t^{-\alpha},\\ \tau_{\xi}(t)&=&\left\{\begin{array}{ll} 1, & t\geq \xi,\\ 0, & t< \xi. \end{array}\right. \end{array} $$

It easily follows that the function \(g(t) = \tau_{\xi}(t)t^{-\alpha}\) belongs to \(L_{2}(\mathbb {R}_{++}, \omega _{(\alpha ,\beta )})\) for all ξ > 0. The following integrals can be determined exactly by integration by parts,

$$ \int_{\xi}^{\infty} e^{-\beta t}L_{k}^{(\alpha,\beta)}(t)dt = e^{-\beta \xi} \frac{1}{\beta} \left[L_{k}^{(\alpha,\beta)}(\xi)-L_{k-1}^{(\alpha,\beta)}(\xi)\right]\quad\forall k\geq 1. $$
(32)

We insert (32) into the generalized Parseval equation and obtain

$$ \begin{array}{@{}rcl@{}} \sum\limits_{k=0}^{\infty} \frac{f_{k}}{\sqrt{h_{k}^{(\alpha, \beta)}}} \int_{\xi}^{\infty} e^{-\beta t}L_{k}^{(\alpha,\beta)}(t)dt &=& \sum\limits_{k=0}^{\infty} \frac{f_{k}}{\sqrt{h_{k}^{(\alpha, \beta)}}}e^{-\beta \xi} \frac{1}{\beta} \left[L_{k}^{(\alpha,\beta)}(\xi)-L_{k-1}^{(\alpha,\beta)}(\xi)\right]\\ &=& \int_{\xi}^{\infty} e^{-\beta t} f(t)dt, \end{array} $$

and finally

$$ \int_{\xi}^{\infty} e^{-\beta t} f(t)dt = e^{-\beta \xi} \frac{1}{\beta}{\Phi}(\xi) $$

for all ξ ∈ (0,b), \(b\in \mathbb {R}_{+}\), with

$$ {\Phi}(\xi):=\sum\limits_{k=0}^{\infty} \frac{f_{k}}{\sqrt{h_{k}^{(\alpha, \beta)}}} \left[L_{k}^{(\alpha,\beta)}(\xi)-L_{k-1}^{(\alpha,\beta)}(\xi)\right],\quad L_{-1}^{(\alpha,\beta)}(\xi):=0. $$
(33)

Equation (31) is obtained from (33) by using the property that an absolutely and uniformly converging series can be rearranged in an arbitrary way.

Remark 8

It can be seen immediately from the proof that the series (31) and (33), multiplied by \(e^{-\beta t}\), converge uniformly on \(\mathbb {R}_{++}\).

The result of Theorem 4 can be used with different settings of the functions Φ and f. We use the following one:

Theorem 5

Let \( y\in {W^{2}_{2}} (\mathbb {R}_{+}, e^{-\beta t})\cap {W^{2}_{2}} (\mathbb {R}_{++}, S_{2})\) with y(0) = y′(0) = 0. Then, \(\mathcal {D}y=y^{\prime }\) can be expanded into the Fourier–Laguerre series

$$ y^{\prime}(t)=\sum\limits_{k=0}^{\infty} \left( \frac{f_{k}}{\sqrt{h^{(1,\beta)}_{k}}}- \frac{f_{k+1}}{\sqrt{h^{(1,\beta)}_{k+1}}}\right) L_{k}^{(1,\beta)}(t), $$
(34)

where fk are the Fourier–Laguerre coefficients of \(f= y^{\prime }-\frac {1}{\beta }\mathcal {D}(y^{\prime })\) in \( L_{2}(\mathbb {R}_{++},\omega _{(1,\beta )})\).

Proof

We take Lemma 3 into account and obtain \(y^{\prime } e^{-\beta t}\in {W^{1}_{1}} (\mathbb {R}_{+})\). We reconstruct \(y^{\prime}e^{-\beta t}\) from its distributional derivative and find

$$ y^{\prime}(t)e^{-\beta t}= {\int_{0}^{t}} \mathcal{D}(y^{\prime}(\tau)e^{-\beta \tau})d\tau,\quad \int_{0}^{\infty} \mathcal{D}(y^{\prime}(t)e^{-\beta t})dt = -y^{\prime} (0)=0. $$
(35)

We combine the two equations in (35) and obtain

$$ y^{\prime}(t)e^{-\beta t} = - \int_{t}^{\infty} \mathcal{D}\left( y^{\prime}(\tau)e^{-\beta \tau}\right)d\tau = \beta \int_{t}^{\infty} \left( y^{\prime}(\tau)-\frac{1}{\beta}\mathcal{D}(y^{\prime}(\tau))\right)e^{-\beta \tau} d\tau $$

and

$$ y^{\prime}(t) = e^{\beta t}\beta \int_{t}^{\infty} f(\tau)e^{-\beta \tau} d\tau,\quad f= y^{\prime}-\frac{1}{\beta}\mathcal{D}(y^{\prime}). $$

For Φ = y′ and \(f\in L_{2}(\mathbb {R}_{++}, \omega _{(1,\beta )})\), we can apply Theorem 4 and verify the pointwise and uniform convergence of the Fourier–Laguerre series (34) for y′ on \((0, b), b\in \mathbb {R}_{++}\). With \(h^{(1,\beta )}_{k}=\frac {k+1}{\beta ^{2}}\) and \(L_{-1}^{(1,\beta )}:=0\), we get

$$ y^{\prime}(t) = \beta\left( \sum\limits_{k=0}^{\infty} \frac{f_{k}}{\sqrt{k+1}} \left[L_{k}^{(1,\beta)}(t)-L_{k-1}^{(1,\beta)}(t)\right]\right). $$

The question arises how the series (34) behaves at t = 0 and in which way it can be continuously extended to t = 0. We first establish the following lemma.

Lemma 6

Let

$$ {\Psi}(t)= \sum\limits_{k=0}^{\infty} \frac{f_{k}}{\sqrt{k+1}} \left[L_{k}^{(1,\beta)}(t)- L_{k-1}^{(1,\beta)}(t)\right] $$
(36)

be the given series with the coefficients fk from Theorem 5, which converges uniformly on (0,b), \(\forall b\in \mathbb {R}_{++}\), and which represents a continuous function \(\Psi : [0,b)\to \mathbb {R}\) with Ψ(0) = 0. Then, the series

$$ \sum\limits_{k=0}^{\infty} \frac{f_{k}}{\sqrt{k+1}} \left[L_{k}^{(1,\beta)}(0)-L_{k-1}^{(1,\beta)}(0)\right] = \sum\limits_{k=0}^{\infty} \frac{f_{k}}{\sqrt{k+1}} $$

converges to Ψ(0) = 0, i.e.,

$$ f_{0}= - \sum\limits_{k=1}^{\infty} \frac {f_{k}}{\sqrt{k+1}} $$

and the series in (36) converges uniformly on [0,b), \(\forall b\in \mathbb {R}_{++}\).

Proof

Let ε > 0 be arbitrary, and \({\Delta } L_{k}^{(1,\beta )}(t):=L_{k}^{(1,\beta )}(t)- L_{k-1}^{(1,\beta )}(t)\). Then, \({\Delta } L_{k}^{(1,\beta )}\) is continuous on \(\mathbb {R}_{+}\) and there exist δk > 0 such that for all t ∈ [0,δk)

$$ \left|{\Delta} L_{k}^{(1,\beta)}(0) - {\Delta} L_{k}^{(1,\beta)}(t)\right| < \frac{1}{k}\frac{\varepsilon}{3PQ}\qquad \forall k\in \mathbb{N}. $$

We apply the Parseval equation for f and the Cauchy–Schwarz inequality and obtain

$$ \begin{array}{@{}rcl@{}} \left|\sum\limits_{k=0}^{N} \frac {f_{k}}{\sqrt{k+1}}\left[{\Delta} L_{k}^{(1,\beta)}(0)- {\Delta} L_{k}^{(1,\beta)}(t)\right]\right| &<& \left( \sum\limits_{k=0}^{\infty} \frac{{f_{k}^{2}}}{k+1}\right)^{\frac{1}{2}} \frac{\varepsilon}{3PQ}\left( \sum\limits_{k=1}^{\infty}\frac{1}{k^{2}}\right)^{\frac{1}{2}} \\ &=&\frac{\varepsilon}{3PQ}\left( \langle f,f\rangle_{2,\omega_{(1,\beta)}}\right)^{\frac{1}{2}} \left( \sum\limits_{k=1}^{\infty}\frac{1}{k^{2}}\right)^{\frac{1}{2}}<\frac{\varepsilon}{3}, \end{array} $$

here \(P:=\left (\langle f,f\rangle _{2,\omega _{(1,\beta )}}\right )^{\frac {1}{2}}\), \(Q:=\left (\sum _{k=1}^{\infty }\frac {1}{k^{2}}\right )^{\frac {1}{2}}=\frac {\pi }{\sqrt {6}}\). This estimate holds \(\forall t\in \mathbb {R}_{+}: |t|<\delta _{0}(N):= \text {Min}\{\delta _{k}, k=1,\ldots ,N\}>0\). The series (36) converges uniformly to Ψ(t) for all t ∈ (0,b). Therefore, we conclude

$$ \left|{\Psi}(t) - f_{0}-\sum\limits_{k=1}^{N} \frac {f_{k}}{\sqrt{k+1}}{\Delta} L_{k}^{(1,\beta)}(t)\right| < \frac{\varepsilon}{3} $$

for all N ≥ N0(ε). Since the function Ψ is continuous at t = 0, we get

$$ |{\Psi}(t)-{\Psi}(0)| < \frac{\varepsilon}{3} $$

for all \(t\in \mathbb {R}_{+}: |t| < \hat \delta \). Summarizing, we find

$$ \begin{array}{@{}rcl@{}} \left|\sum\limits_{k=1}^{N} \frac {f_{k}}{\sqrt{k+1}}{\Delta} L_{k}^{(1,\beta)}(0)+f_{0}-{\Psi}(0) \right| &\leq& \left|\sum\limits_{k=1}^{N} \frac {f_{k}}{\sqrt{k+1}}\left[{\Delta} L_{k}^{(1,\beta)}(0) - {\Delta} L_{k}^{(1,\beta)}(t)\right]\right|\\ &&+ \left|f_{0} +\sum\limits_{k=1}^{N} \frac {f_{k}}{\sqrt{k+1}}{\Delta} L_{k}^{(1,\beta)}(t)-{\Psi}(t)\right|\\ && + |{\Psi}(t)| < \varepsilon \end{array} $$

for all t ∈ [0,δ(N)), \(\delta (N):=\text {Min}\{\delta _{0}(N),\hat \delta \}\), and N ≥ N0.

The results of Theorem 5 and Lemma 6 finally yield the following representation

$$ y^{\prime}(t)= \beta\left( \sum\limits_{k=0}^{\infty} \frac{f_{k}}{\sqrt{k+1}} \left[L_{k}^{(1,\beta)}(t)-L_{k-1}^{(1,\beta)}(t)\right]\right) $$
(37)

with

$$ f_{0}= -\sum\limits_{k=1}^{\infty} \frac{f_{k}}{\sqrt{k+1}},\quad y^{\prime}(0)=0. $$

Due to the uniform convergence of the series (37) on [0,t] for all \(t\in \mathbb {R}_{++}\), it can be integrated term by term, \(y\in {W^{1}_{1}}([0,t])\),

$$ \begin{array}{@{}rcl@{}} y(t)= {\int_{0}^{t}} y^{\prime}(\tau) d\tau &=& \beta \left[f_{0} t + \sum\limits_{k=1}^{\infty} \frac{f_{k}}{\sqrt{k+1}} {\int_{0}^{t}} \left( L_{k}^{(1,\beta)}(\tau)-L_{k-1}^{(1,\beta)}(\tau)\right) d\tau\right]\\ &=& \beta f_{0} t - \sum\limits_{k=1}^{\infty} \frac{f_{k}}{\sqrt{k+1}}\left( L_{k+1}^{(0,\beta)}(t) - L_{k}^{(0,\beta)}(t)\right) \\ && - \sum\limits_{k=1}^{\infty} \frac{f_{k}}{\sqrt{k+1}}\left( L_{k+1}^{(0,\beta )}(0)-L_{k}^{(0,\beta)}(0)\right) \\ &=& \beta f_{0} t - \sum\limits_{k=1}^{\infty} \frac{f_{k}}{\sqrt{k+1}}\left( L_{k+1}^{(0,\beta)}(t) - L_{k}^{(0,\beta)}(t)\right). \end{array} $$
(38)

Among the various relations between Laguerre polynomials (see [33, p. 215]), we apply the following one:

$$ -\beta t L_{k}^{(\alpha+1,\beta)}(t)=(k+1) L_{k+1}^{(\alpha,\beta)}(t)-(k+1+\alpha)L_{k}^{(\alpha,\beta)}(t). $$

Therefore, we find for α = 0:

$$ y(t)= \beta t \left[f_{0}+ \sum\limits_{k=1}^{\infty} \frac{f_{k}}{(k+1)\sqrt{k+1}} L_{k}^{(1,\beta )}(t)\right]. $$
(39)

Remark 9

  1. 1.

    The series in (37) and (39) converge uniformly on each interval [0,b), \(b\in \mathbb {R}_{++}\). Multiplied by \(e^{-\beta t}\), they converge uniformly on \(\mathbb {R}_{+}\), because \(ye^{-\beta t}\) and \(y^{\prime}e^{-\beta t}\) are in \({W^{1}_{1}} (\mathbb {R}_{+})\).

  2. 2.

    Neither (37) nor (39) represents the Fourier–Laguerre series of y′ and y, respectively. But both expansions can be used for a polynomial approximation (a numerical illustration follows after this remark).

  3. 3.

    Both series in (37) and (39) include Laguerre polynomials \(L_{k}^{(\alpha ,\beta )}\) with α = 1 and Fourier coefficients of \(f=y^{\prime } - \frac {1}{\beta }\mathcal {D} y^{\prime }\) in \(L_{2}(\mathbb {R}_{++}, \omega _{(1,\beta )})\).
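
To illustrate how the partial sums of (37) and (39) behave in practice, consider the following Python sketch. The test function y(t) = t²e^{−t} with β = 1 is an arbitrary choice satisfying y(0) = y′(0) = 0 and the assumptions of Theorem 5; the coefficients fk of \(f=y^{\prime}-\frac{1}{\beta}\mathcal{D}y^{\prime}\) are computed by numerical quadrature, and the truncation index N is likewise arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre

beta, N = 1.0, 16                                     # illustrative choices

y   = lambda t: t ** 2 * np.exp(-t)                   # y(0) = y'(0) = 0
dy  = lambda t: (2 * t - t ** 2) * np.exp(-t)
ddy = lambda t: (2 - 4 * t + t ** 2) * np.exp(-t)
f   = lambda t: dy(t) - ddy(t) / beta                 # f = y' - (1/beta) D y'

L1 = lambda k, t: eval_genlaguerre(k, 1, beta * t) if k >= 0 else 0.0 * t

def fourier_coeff(k):
    # f_k = <f, L_k^{(1,beta)}>_{2,omega_(1,beta)} / sqrt(h_k),  h_k = (k+1)/beta^2
    val = quad(lambda t: f(t) * L1(k, t) * t * np.exp(-beta * t), 0.0, np.inf)[0]
    return val / np.sqrt((k + 1) / beta ** 2)

fk = np.array([fourier_coeff(k) for k in range(N + 1)])

t = np.linspace(0.0, 6.0, 13)
# partial sums of (37) for y' and of (39) for y
dy_N = beta * sum(fk[k] / np.sqrt(k + 1) * (L1(k, t) - L1(k - 1, t)) for k in range(N + 1))
y_N  = beta * t * (fk[0] + sum(fk[k] / ((k + 1) * np.sqrt(k + 1)) * L1(k, t)
                               for k in range(1, N + 1)))
print(np.max(np.abs(dy_N - dy(t))), np.max(np.abs(y_N - y(t))))
```

In accordance with the uniform convergence stated in Theorem 5 and Lemma 6, the printed maximal deviations on the chosen grid decrease when the truncation index N is increased.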

5 Problem Formulation of (D∞) in the Space of Fourier–Laguerre Coefficients

Let us assume now that the maximum principle for (Pω) is satisfied by a function y0 with

$$ y_{0} \in W^{2,n}_{2} (\mathbb{R}_{+},e^{-\beta t}) \cap W^{2,n}_{2} (\mathbb{R}_{++},S_{2}). $$

Then, the admissible domain for the dual problem (D∞) can be restricted to

$$ \tilde Y^{\infty} := Y^{\infty} \cap \left( W^{2,n}_{2} (\mathbb{R}_{++},S_{2})\times \{y(0)\}\right). $$

According to inequality (17), we obtain the dual problem \((\tilde {\textbf {D}}^{\infty })\) for (Pω):

$$ \begin{array}{@{}rcl@{}} (\tilde{\textbf{D}}^{\infty})~\tilde G^{\infty}(y,y(0))\!:\!&=&\! - \int_{0}^{\infty} \frac{1}{2} \left[y^{\prime} (t)\!+\!A^{T}(t)y(t)\right]^{T} W^{-1}(t)\left[y^{\prime} (t)\!+\!A^{T}(t)y(t)\right]e^{-\beta t}dt\\ &&- \int_{0}^{\infty} \frac{1}{2}y^{T}(t)R^{-1}(t)y(t)e^{-\beta t}dt - {x_{0}^{T}}y(0)\longrightarrow \text{Max} !\\ &&\text{w.r.t.}\ (y,y(0))\in \tilde Y^{\infty}. \end{array} $$

Functions \(y \in W^{2,n}_{2} (\mathbb {R}_{+},e^{-\beta t})\) have a uniquely determined continuously differentiable representative on \(\mathbb {R}_{+}\); we set D := y(0) and E := y′(0). We decompose y into two parts,

$$ y(t) = (D+t E) + \hat y(t), $$

with D + tE, \(\hat {y} \in W^{2,n}_{2} (\mathbb {R}_{+},e^{-\beta t}) \cap W^{2,n}_{2} (\mathbb {R}_{++},S_{2} )\), \(E,D\in \mathbb {R}^{n}\) and \(\hat y(0)=0\), \(\hat y^{\prime } (0)=0\). Then, we apply Theorem 5 to each component of the vector \(\hat y\) and obtain the following type of expansions for \(\hat y\) and \(\hat y^{\prime }\):

$$ \begin{array}{@{}rcl@{}} \hat y(t)=\psi_{1}(F)(t):&=& \beta t \left[\phi_{0} + \sum\limits_{k=1}^{\infty} \frac{\phi_{k}}{(k+1)\sqrt{k+1}} L_{k}^{(1,\beta )}(t)\right], \end{array} $$
(40)
$$ \begin{array}{@{}rcl@{}} \hat y^{\prime}(t)=\psi_{2}(F)(t)&:=& \beta\left( \sum\limits_{k=0}^{\infty} \frac{\phi_{k}}{\sqrt{k+1}} \left[L_{k}^{(1,\beta)}(t)-L_{k-1}^{(1,\beta)}(t)\right]\right) \end{array} $$
(41)

with

$$ \phi_{0}= -\sum\limits_{k=1}^{\infty} \frac{\phi_{k}}{\sqrt{k+1}}. $$
(42)

Here, F represents the sequence of the Fourier–Laguerre coefficients,

$$ F:= \{\phi_{k}\}_{k=0}^{\infty},\quad \phi_{k}\in \mathbb{R}^{n}. $$

These coefficients \(\phi _{k}\in \mathbb {R}^{n}\) are considered as new unknowns. Of course, we can use the fact that they are the Fourier–Laguerre coefficients of \(\hat f= \hat y^{\prime }-\frac {1}{\beta }\mathcal {D}\hat y^{\prime }\) in \({L_{2}^{n}}(\mathbb {R}_{++},\omega _{(1,\beta )})\).

The dual problem \((\tilde {\textbf {D}}^{\infty })\) can now be expressed in terms of the new variables E, D, and F. To improve the representation, we introduce the following notation:

$$ \begin{array}{@{}rcl@{}} d(D,E)(t) &:=& D+Et,\\ a(D,E)(t) &:=& E +A^{T}(t)(D+Et),\\ L(F)(t)&:=& \psi_{2}(F)(t) +A^{T}(t)\psi_{1}(F)(t). \end{array} $$

Then, we rewrite the dual problem \((\tilde {\textbf {D}}^{\infty })\) equivalently as follows:

$$ \begin{array}{@{}rcl@{}} \hat G^{\infty}(\!D,E,F)\!&:=&\! -\! \int_{0}^{\infty} \frac{1}{2} [a(D,E)(t)\!+\! L(F)(t)]^{T} W^{-1}(t)[ a(D,E)(t)\!+\! L(F)(t)]e^{-\beta t}dt\\ &&\!-\! \int_{0}^{\infty}\! \frac{1}{2}[d(\!D,E)(t)\!+\! \psi_{1}(F)(t)]^{T} R^{-1}(t)[d(D,E)(t)\!+\! \psi_{1}(F)(t)]e^{-\beta t}dt\\ &&\!- {x_{0}^{T}} D\longrightarrow \text{Max} ! \qquad \text{w.r.t.}\ \ (D,E,F)\in \mathbb{R}^{n} \times \mathbb{R}^{n} \times {l}_{2}^{n}. \end{array} $$

A polynomial approximation of the admissible domain is obtained by replacing the Fourier–Laguerre series in (40)–(42) by their partial sums:

$$ \begin{array}{@{}rcl@{}} \hat y^{N}(t)={\psi_{1}^{N}}(F^{N})(t)&:=& \beta t \left[\phi_{0}+ \sum\limits_{k=1}^{N} \frac{\phi_{k}}{(k+1)\sqrt{k+1}} L_{k}^{(1,\beta)}(t)\right]\!, \end{array} $$
(43)
$$ \begin{array}{@{}rcl@{}} \hat y^{\prime,N}(t)={\psi_{2}^{N}}(F^{N})(t)&:=& \beta\left( \sum\limits_{k=0}^{N} \frac{\phi_{k}}{\sqrt{k+1}} \left[L_{k}^{(1,\beta)}(t)-L_{k-1}^{(1,\beta)}(t)\right]\right) \end{array} $$
(44)

and

$$ {\phi_{0}^{N}}:= -\sum\limits_{k=1}^{N} \frac{\phi_{k}}{\sqrt{k+1}} $$
(45)

and

$$ F^{N}:= \{\phi_{k}\}_{k=0}^{N},\quad \phi_{k}\in \mathbb{R}^{n}. $$

The finally resulting dual problem \((\hat {\textbf {D}}^{N})\) possesses the structure of (DN):

$$ \begin{array}{@{}rcl@{}} &&\hat G^{N}(D,E,F^{N})\\ &&:= - \int_{0}^{\infty} \frac{1}{2} \left[a(D,E)(t)+ L(F^{N})(t)\right]^{T} W^{-1}(t)\left[a(D,E)(t)+ L(F^{N})(t)\right]e^{-\beta t}dt\\ &&\quad- \int_{0}^{\infty} \frac{1}{2}\left[d(D,E)(t)+ {\psi_{1}^{N}}(F^{N})(t)\right]^{T} R^{-1}(t)\left[d(D,E)(t) + {\psi_{1}^{N}}(F^{N})(t)\right]e^{-\beta t}dt\\ &&\quad- {x_{0}^{T}} D \longrightarrow \text{Max} ! \qquad \text{w.r.t.}\ (D,E,F^{N})\in \mathbb{R}^{n} \times \mathbb{R}^{n} \times \mathbb{R}^{nN}, \end{array} $$

where

$$ L(F^{N})(t):= {\psi_{2}^{N}}(F^{N})(t)+A^{T}(t){\psi_{1}^{N}}(F^{N})(t). $$
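
The following Python sketch indicates how \(\hat G^{N}\) can be assembled for a scalar problem. All concrete values (n = 1, constant coefficients a, w, r, the initial value x0, and the truncation index N) are arbitrary illustrative choices, and the integrals are evaluated by plain adaptive quadrature rather than by a tuned spectral rule.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre

beta, a, w, r, x0 = 1.0, -0.5, 1.0, 1.0, 1.0   # scalar data, illustrative only
N = 3                                          # maximal polynomial degree

L1 = lambda k, t: eval_genlaguerre(k, 1, beta * t) if k >= 0 else 0.0 * t

def psi1(phi, t):      # partial sum (43), phi = (phi_0, ..., phi_N)
    return beta * t * (phi[0] + sum(phi[k] / ((k + 1) * np.sqrt(k + 1)) * L1(k, t)
                                    for k in range(1, N + 1)))

def psi2(phi, t):      # partial sum (44)
    return beta * sum(phi[k] / np.sqrt(k + 1) * (L1(k, t) - L1(k - 1, t))
                      for k in range(N + 1))

def G_hat(z):          # z = (D, E, phi_1, ..., phi_N); phi_0 is fixed by (45)
    D, E = z[0], z[1]
    phi = np.concatenate(([-np.sum(z[2:] / np.sqrt(np.arange(2, N + 2)))], z[2:]))
    def integrand(t):
        d = D + E * t                          # d(D,E)(t)
        aDE = E + a * d                        # a(D,E)(t)
        LF = psi2(phi, t) + a * psi1(phi, t)   # L(F^N)(t)
        return (0.5 * (aDE + LF) ** 2 / w
                + 0.5 * (d + psi1(phi, t)) ** 2 / r) * np.exp(-beta * t)
    return -quad(integrand, 0.0, np.inf)[0] - x0 * D

print(G_hat(np.ones(N + 2)))                   # dual objective at an arbitrary test point
```

The resulting function of the finitely many variables (D, E, φ1, …, φN) can then be handed to any standard finite-dimensional optimizer, e.g., by minimizing its negative.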

Remark 10

  1. 1.

    The polynomial approximation proposed in (43)–(45) guarantees the pointwise and uniform convergence of the corresponding partial sums to \(\hat y\), \(\hat y^{\prime }\) on \([0,b), b\in \mathbb {R}_{++}\), by Theorems 4 and 5.

  2. 2.

    Due to the decomposition \(y(t) = (D+t E) + \hat y(t)\), the initial values D = y(0) and E = y′(0) are exactly incorporated into the dual problem \((\tilde {\textbf {D}}^{\infty })\). The initial values y(0) and y′(0) are well defined for functions in \(W^{2,n}_{2} (\mathbb {R}_{+},e^{-\beta t})\).

  3. 3.

    We point out that Theorem 4 also allows other representations for the included Fourier–Laguerre series. In our setting, we used generalized Laguerre polynomials \(L^{(\alpha ,\beta )}_{k}\) with α = 1. This choice was made in order to stay as close as possible to the regularity properties of y0 in the maximum principle. The condition \(y_{0}\in W_{2}^{2,n}(\mathbb {R}_{++},S_{2})\), which is not necessarily fulfilled, constitutes an additional assumption.

  4. 4.

    As a first attempt for a numerical scheme, we used a polynomial approximation of the form

    $$ y = \sum\limits_{k=0}^{N} a_{k} L_{k}^{(0,\beta)} $$

    in [28]. Herein, the convergence of the polynomial approximation has not been considered and can only be expected in the sense of \(L_{2}(\mathbb {R}_{++},e^{-\beta t})\). This renders an approximation of the initial values y(0) and y′(0) difficult.

6 Conclusions and Outlook

In the previous investigations of the authors (cf. [21, 22, 26,27,28,29]), a new approach for treating infinite horizon control problems has been developed. Weighted Sobolev spaces \(W^{1,n}_{2} (\mathbb {R}_{+}, e^{\varrho t}), \varrho \ne 0\), are chosen as the state spaces. Following this approach, the control problem was treated in Hilbert spaces. A duality construction was applied to a convex primal problem of the form (Pω). Its dual problem, a problem in the calculus of variations, was the focus of the present paper. The problem has been treated systematically in a special Hilbert space H. The choice of this space,

$$ H= {W^{2}_{2}} (\mathbb{R}_{+}, e^{-\beta t})\cap {W^{2}_{2}} (\mathbb{R}_{++}, S_{2}), $$

as state space seems appropriate for the following reasons. Firstly, the necessary optimality conditions for (Pω) allow higher regularity of the adjoints, e.g., \(y\in {W^{2}_{2}} (\mathbb {R}_{+}, e^{-\beta t})\). Secondly, the introduction of an appropriate non-uniformly weighted Sobolev space \({W^{2}_{2}} (\mathbb {R}_{++}, S_{2})\) is motivated by the application of a spectral method. Both spaces are used to obtain a spectral representation for functions in H using generalized Laguerre polynomials. The obtained Fourier–Laguerre series for y and the distributional derivative \(\mathcal {D}y\) converge pointwise and uniformly. The expansions contain Laguerre polynomials \(L_{k}^{(1,\beta )}\). A polynomial approximation based on these expansions can be used directly for a numerical approximation scheme, and convergence results can be deduced from the theoretical results of Theorems 4 and 5.

A challenging problem is to estimate the spectral accuracy of the proposed scheme. Concerning this problem, one can find first results in [12]. It is a future task to compare discretization techniques for infinite horizon control problems. By using the developed pseudo-spectral method, we are finally left with a finite dimensional optimization problem. The number of variables in the problem \((\hat {\textbf {D}}^{N})\) is n + n + Nn, where N is the maximal degree of the polynomials involved. This number is expected to be small, which is typical for Fourier methods. First numerical experience (see [12, 28]) supports this conjecture.

The results of this paper can be applied to regulator problems (Pω) and their dual problems without control constraints. An extension to problems with box constraints for the control seems to be realistic and reasonable. A challenging task is the treatment of control problems (Pω) with nonlinear dynamics. The influence of an appropriate choice of the order α of the Laguerre polynomial \(L_{k}^{(\alpha ,\beta )}\), involved in the approximation scheme, on the stability and convergence is interesting and represents a focus of future work.