1 Introduction

Optimization theory is divided into four major branches: mathematical programming (static, one player), optimal control (dynamic, one player), game theory (static, many players), and differential games (dynamic, many players) (Basar 1995). An applicable branch of optimization theory is control theory, which deals with minimizing (or maximizing) a specified cost functional while satisfying some constraints. Two major classes of methods are used for solving optimal control problems: indirect methods (Pontryagin's maximum principle, Bellman's dynamic programming), which convert the original optimal control problem into a two-point boundary value problem, and direct methods (parameterization, discretization), which convert it into a nonlinear optimization problem. Although indirect methods have advantages such as existence and uniqueness results, they also have disadvantages: in many cases the resulting boundary value problem admits no analytical solution. Direct methods have therefore become an attractive field for researchers in the mathematical sciences. In recent years, various direct numerical methods and efficient algorithms based on orthogonal polynomials have been used to solve optimal control problems. One such family of basis functions is wavelets, which are well suited to approximating functions with discontinuities or sharp changes. Chen and Hsiao (1997) used Haar wavelet orthogonal functions and their integration matrices to optimize dynamic systems and to solve lumped and distributed parameter systems. Razzaghi and Yousefi (2001) used Legendre wavelets for solving optimal control problems. Babolian and Fattahzadeh (2007) obtained numerical solutions of differential equations using the operational matrix of integration of the Chebyshev wavelet basis.
Ghasemi and Tavassoli Kajani (2011) presented a solution of time-varying delay systems by Chebyshev wavelets.

This paper proposes a new numerical method based on state-control parameterization via Chebyshev wavelets for solving general optimal control problems. We choose state-control parameterization because, unlike control parameterization, it does not require integrating the system state equation; moreover, the approximate optimal state and control variables are obtained simultaneously. In contrast with other works that use Chebyshev wavelets to construct an operational matrix of integration (Abu Haya 2011; Babolian and Fattahzadeh 2007) in order to convert a differential equation into an algebraic one, our proposed method does not require an operational matrix. This paper is organized as follows: first, we present the basic formulation of optimal control problems. Then, we describe Chebyshev wavelets and use them to approximate the state and control variables. A mathematical description of the proposed state-control parameterization method is presented, and through several examples we compare our method with other methods that have been introduced for solving them.

2 Mathematical formulation of general optimal control problems

Consider the following system of differential equation on a fixed interval \([0, t_f]\),

$$\begin{aligned} \dot{x}=f(t,x(t),u(t)), \end{aligned}$$
(1)

where \( x(t)\in \mathbb {R}^l \) is the state vector and \(u(t)\in \mathbb {R}^q\) is a piecewise-continuous control from the class of admissible controls \(\mathcal {U}\). The function \(f:\mathbb {R}\times \mathbb {R}^l\times \mathbb {R}^q\rightarrow \mathbb {R}^l\) is continuous and has continuous first partial derivatives with respect to x. The above equation is called the equation of motion, and the initial condition for (1) is:

$$\begin{aligned} x(0)=x_0, \end{aligned}$$
(2)

where \( x_0 \) is a given vector in \( \mathbb {R}^l \).

Along with this process, we have a cost functional of the form:

$$\begin{aligned} J(t,x,u)=\psi \left( t_f,x(t_f)\right) +\int _{0}^{t_f}L\left( t,x(t),u(t)\right) \mathrm{d}t. \end{aligned}$$
(3)

Here, \(L(t,x,u)\) is the running cost, and \(\psi (t,x)\) is the terminal cost. The minimization of \(J(t,x,u)\) over all controls \(u(t)\in \mathcal {U}\) subject to constraints (1) and (2) is called an optimal control problem, and the pair \((x,u)\) that achieves this minimum is called an optimal control solution. An optimization problem with performance index as in equation (3) is called a Bolza problem. There are two other equivalent formulations, called the Lagrange and Mayer problems (Fleming and Rishel 1975).
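For concreteness, the Bolza functional (3) can be evaluated numerically by quadrature for any given pair of trajectories. The following sketch is our own illustration (the function name `bolza_cost` and the trapezoidal rule are not part of the original method):

```python
import numpy as np

# Illustration (not the authors' code): evaluate the Bolza functional (3)
# by the trapezoidal rule for given callables L, psi, x, u on [0, t_f].
def bolza_cost(L, psi, x, u, t_f, n=2001):
    t = np.linspace(0.0, t_f, n)
    vals = np.array([L(ti, x(ti), u(ti)) for ti in t])
    dt = t[1] - t[0]
    integral = dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    return psi(t_f, x(t_f)) + integral

# Example with L = x^2 + u^2, psi = 0, x(t) = e^{-t}, u = 0 on [0, 1]:
# the integral equals (1 - e^{-2}) / 2.
J = bolza_cost(lambda t, x, u: x**2 + u**2,
               lambda t, x: 0.0,
               lambda t: np.exp(-t),
               lambda t: 0.0,
               1.0)
```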

3 Chebyshev wavelets

In this section, a new state-control parameterization using Chebyshev wavelets is introduced to derive a robust method for solving optimal control problems numerically. We briefly describe the Chebyshev wavelet functions that are used in the next section. By dilation and translation of a single function called the mother wavelet, a family of wavelets can be constituted (Babolian and Fattahzadeh 2007):

$$\begin{aligned} \phi _{ab}(t)=|a|^{\frac{-1}{2}}\phi \left( \frac{t-b}{a}\right) ,\quad a,b\in \mathbb {R},a \ne 0, \end{aligned}$$

One applicable family of wavelets is the Chebyshev wavelets \(\phi _{nm}(t)=\phi (k,m,n,t)\), defined on the interval [0, 1) by the following formula:

$$\begin{aligned} \phi _{nm}(t)= \left\{ \begin{array}{ll} \frac{\alpha _m 2^{\frac{k}{2}}}{\sqrt{\pi }}T_m(2^{k+1}t-2n+1), &{}\quad \frac{n-1}{2^k}\le t\le \frac{n}{2^k},\\ 0, &{}\quad \text{otherwise}, \end{array}\right. \end{aligned}$$

where \(k=1,2,\ldots \), \(n=1,2,3,\ldots ,2^{k}\), and \(m=0,1,2,\ldots ,M-1\) is the order of the Chebyshev polynomial, and

$$\begin{aligned} \alpha _{m}= \left\{ \begin{array}{ll} \sqrt{2},&{}\quad m=0,\\ 2,&{}\quad m=1,2,\ldots . \end{array}\right. \end{aligned}$$

Here, \(T_m(t)\) are the well-known Chebyshev polynomials satisfying the following recursive formulae:

$$\begin{aligned} \left\{ \begin{array}{ll} T_{0}(t)=1,\\ T_{1}(t)=t,\\ \vdots \\ T_{m+1}(t)=2tT_{m}(t)-T_{m-1}(t). \end{array}\right. \end{aligned}$$
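As an illustration, the wavelet \(\phi_{nm}(t)\) and the recursion above can be implemented in a few lines. This is a sketch of the definition, not the authors' code; the function name `chebyshev_wavelet` is ours:

```python
import numpy as np

# Sketch of the Chebyshev wavelet phi_{nm}(t) defined above,
# with k, n, m as in the text.
def chebyshev_wavelet(k, n, m, t):
    lo, hi = (n - 1) / 2**k, n / 2**k
    if not (lo <= t <= hi):          # outside the support
        return 0.0
    alpha = np.sqrt(2.0) if m == 0 else 2.0
    s = 2**(k + 1) * t - 2 * n + 1   # map the support onto [-1, 1]
    # Chebyshev recursion: T_0 = 1, T_1 = s, T_{m+1} = 2 s T_m - T_{m-1}
    Tm1, Tm = 1.0, s
    if m == 0:
        T = Tm1
    elif m == 1:
        T = Tm
    else:
        for _ in range(m - 1):
            Tm1, Tm = Tm, 2 * s * Tm - Tm1
        T = Tm
    return alpha * 2**(k / 2) / np.sqrt(np.pi) * T
```

For instance, with \(k=1, n=1, m=0\) the value on the support is the constant \(2/\sqrt{\pi}\approx 1.128\), which matches the coefficients appearing in the numerical constraints of Example 1 below.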

4 Proposed state-control parameterization method and main results

In this section, we describe our proposed state-control parameterization method for solving general optimal control problems. Let \(Q\subset PC^{1}([0,t_{f}])\) be the set of all piecewise-continuous functions that satisfy initial condition (2). The performance index is a function of x(.) and u(.), so problem (1)–(3) may be interpreted as the minimization of J on the set Q. Let \(Q_{2^{k}M-1}\subset Q\) be the class of combinations of Chebyshev wavelet polynomials of degree up to \((M-1)\). The basic idea is to approximate the state and control variables by a finite series of Chebyshev wavelets as follows:

$$\begin{aligned} \hat{X}(t)=\sum _{n=1}^{2^k}\sum _{m=0}^{M-1}a_{nm}\phi _{nm}(t), \end{aligned}$$
(4)
$$\begin{aligned} \hat{U}(t)=\sum _{n=1}^{2^k}\sum _{m=0}^{M-1}c_{nm}\phi _{nm}(t). \end{aligned}$$
(5)

We define,

$$\begin{aligned} \phi (t)= & {} [\phi _{10}(t),\ldots ,\phi _{1M-1}(t),\phi _{20}(t),\ldots ,\phi _{2M-1}(t),\ldots ,\phi _{2^{k}0}(t),\ldots ,\phi _{2^{k}M-1}(t)],\\ \alpha= & {} [a_{10},\ldots ,a_{1M-1},a_{20},\ldots ,a_{2M-1},\ldots ,a_{2^{k}0},\ldots ,a_{2^{k}M-1}],\\ \gamma= & {} [c_{10},\ldots ,c_{1M-1},c_{20},\ldots ,c_{2M-1},\ldots ,c_{2^{k}0},\ldots ,c_{2^{k}M-1}]. \end{aligned}$$

The interval \([0,t_{f}]\) can be divided into the following \(2^{k}\) subintervals:

$$\begin{aligned} \left[ 0,\frac{1}{2^{k}}t_{f}\right] ,\left[ \frac{1}{2^{k}}t_{f},\frac{2}{2^{k}}t_{f}\right] ,\ldots ,\left[ \frac{2^{k}-1}{2^{k}}t_{f},t_{f}\right] . \end{aligned}$$

In fact, \(Q_{2^{k}M-1}\subset Q\) is the class of combinations of Chebyshev wavelet functions involving \(2^{k}\) polynomials, each of degree at most \((M-1)\). Thus, the state variable (4) and the control variable (5) can be written as follows:

$$\begin{aligned} \hat{X}(t)= \left\{ \begin{array}{ll} \hat{x}_{1}(t)=\sum _{m=0}^{M-1}a_{1m}\phi _{1m}(t),&{}\quad 0\le t\le \frac{1}{2^{k}}t_{f},\\ \hat{x}_{2}(t)=\sum _{m=0}^{M-1}a_{2m}\phi _{2m}(t),&{}\quad \frac{1}{2^{k}}t_{f}\le t \le \frac{2}{2^{k}}t_{f},\\ \vdots \\ \hat{x}_{2^{k}}(t)=\sum _{m=0}^{M-1}a_{2^{k}m}\phi _{2^{k}m}(t),&{}\quad \frac{2^{k}-1}{2^{k}}t_{f}\le t\le t_{f}, \end{array}\right. \end{aligned}$$
(6)
$$\begin{aligned} \hat{U}(t)= \left\{ \begin{array}{ll} \hat{u}_{1}(t)=\sum _{m=0}^{M-1}c_{1m}\phi _{1m}(t),&{}\quad 0\le t\le \frac{1}{2^{k}}t_{f},\\ \hat{u}_{2}(t)=\sum _{m=0}^{M-1}c_{2m}\phi _{2m}(t),&{}\quad \frac{1}{2^{k}}t_{f}\le t\le \frac{2}{2^{k}}t_{f},\\ \vdots \\ \hat{u}_{2^{k}}(t)=\sum _{m=0}^{M-1}c_{2^{k}m}\phi _{2^{k}m}(t),&{}\quad \frac{2^{k}-1}{2^{k}}t_{f}\le t\le t_{f}, \end{array}\right. \end{aligned}$$
(7)

By substituting (6) and (7) into performance index (3), we have

$$\begin{aligned} \hat{J}(a_{10},\ldots , a_{2^{k}M-1},c_{10},\ldots , c_{2^{k}M-1})=\psi (t_f,\hat{X}(t_f))+\int _{0}^{t_f}L\left( t,\hat{X}(t),\hat{U}(t)\right) \mathrm{d}t, \end{aligned}$$

or

$$\begin{aligned} \hat{J}=\psi (t_f,\hat{x}_{2^{k}}(t_f))+\int _{0}^{\frac{1}{2^{k}}t_{f}} L\left( t,\hat{x}_{1}(t),\hat{u}_{1}(t)\right) \mathrm{d}t+\cdots + \int _{\frac{2^{k}-1}{2^{k}}t_{f}}^{t_{f}} L(t,\hat{x}_{2^{k}}(t),\hat{u}_{2^{k}}(t))\mathrm{d}t. \end{aligned}$$
(8)

Also, substituting (6) and (7) into (1) converts it to:

$$\begin{aligned} \dot{\hat{X}}=f(t,\hat{X}(t),\hat{U}(t)), \end{aligned}$$

or

$$\begin{aligned} \left\{ \begin{array}{ll} \dot{\hat{x}}_{1}=f(t,\hat{x}_{1}(t),\hat{u}_{1}(t)), \\ \dot{\hat{x}}_{2}=f(t,\hat{x}_{2}(t),\hat{u}_{2}(t)), \\ \vdots \\ \dot{\hat{x}}_{2^{k}}=f(t,\hat{x}_{2^{k}}(t),\hat{u}_{2^{k}}(t)), \end{array} \right. \end{aligned}$$
(9)

and the initial condition (2) is replaced by equality constraint as follows:

$$\begin{aligned} \hat{x}(0)=\sum _{n=1}^{2^k}\sum _{m=0}^{M-1}a_{nm}\phi _{nm}(t)\Big \vert _{t=0}=x_0. \end{aligned}$$
(10)

Furthermore, we must add some constraints to ensure the continuity of the state variables between the different sections. There are \((2^{k}-1)\) points at which the continuity of the state variable has to be enforced. These points are:

$$\begin{aligned} t_{i}=\frac{i}{2^{k}}t_{f},\quad i=1,2,\ldots ,2^{k}-1. \end{aligned}$$

So there are \((2^{k}-1)\) equality constraints that must be satisfied. These constraints can be shown by the following system of equations:

$$\begin{aligned} \left\{ \begin{array}{ll} \sum _{m=0}^{M-1}a_{1m}\phi _{1m}(t)\left| _{t=t_1}=\sum _{m=0}^{M-1}a_{2m}\phi _{2m}(t) \right| _{t=t_1},\\ \sum _{m=0}^{M-1}a_{2m}\phi _{2m}(t) \left| _{t=t_2}=\sum _{m=0}^{M-1}a_{3m}\phi _{3m}(t) \right| _{t=t_2}, \\ \vdots \\ \sum _{m=0}^{M-1}a_{2^{k}-1m}\phi _{2^{k}-1m}(t)\left| _{t=t_{2^{k}-1}}=\sum _{m=0}^{M-1}a_{2^{k}m}\phi _{2^{k}m}(t) \right| _{t=t_{2^{k}-1}}. \end{array} \right. \end{aligned}$$
(11)
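The continuity constraints (11) are linear in the coefficients and can be assembled as rows of a matrix: at each junction \(t_i\), the local Chebyshev argument \(2^{k+1}t-2n+1\) equals \(+1\) at the right end of segment \(i\) and \(-1\) at the left end of segment \(i+1\). A minimal sketch (helper names `cheb_T` and `continuity_rows` are ours):

```python
import numpy as np

def cheb_T(m, s):
    # Chebyshev polynomial T_m(s) by the recursion of Section 3
    Tm1, Tm = 1.0, s
    if m == 0:
        return Tm1
    for _ in range(m - 1):
        Tm1, Tm = Tm, 2 * s * Tm - Tm1
    return Tm

# Assemble the (2^k - 1) rows of (11): at junction t_i, the entry for
# segment i uses T_m(+1) and the entry for segment i+1 uses -T_m(-1).
def continuity_rows(k, M):
    rows = np.zeros((2**k - 1, 2**k * M))
    for i in range(1, 2**k):                 # junctions t_1 .. t_{2^k - 1}
        for m in range(M):
            alpha = np.sqrt(2.0) if m == 0 else 2.0
            c = alpha * 2**(k / 2) / np.sqrt(np.pi)
            rows[i - 1, (i - 1) * M + m] = c * cheb_T(m, 1.0)
            rows[i - 1, i * M + m] = -c * cheb_T(m, -1.0)
    return rows
```

For \(k=1, M=3\) this produces a single row whose entries reproduce the coefficients of the continuity constraint in Example 1 below, approximately \([1.128,\, 1.595,\, 1.595,\, -1.128,\, 1.595,\, -1.595]\).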

By this process, the minimization of (8) subject to constraints (9)–(11) can be written as the following optimization problem:

$$\begin{aligned} \hbox {Min}\hat{J}(\alpha , \gamma ), \end{aligned}$$
(12)

subject to:

$$\begin{aligned} P[\alpha ,\gamma ]^{T}=Q, \end{aligned}$$
(13)

where,

$$\begin{aligned}{}[\alpha ,\gamma ]^{T}= & {} [a_{10},a_{11},\ldots ,a_{1M-1},a_{20},a_{21},\ldots ,a_{2M-1},\ldots ,a_{2^{k}0},a_{2^{k}1},\ldots ,a_{2^{k}M-1} ,\\&c_{10},c_{11},\ldots ,c_{1M-1},c_{20},c_{21},\ldots ,c_{2M-1},\ldots ,c_{2^{k}0},c_{2^{k}1},\ldots ,c_{2^{k}M-1}]. \end{aligned}$$

In fact, the optimal control problem (1)–(3) is converted into the optimization problem (12)–(13), and the optimal vector \([\alpha ^*,\gamma ^*]\) can now be obtained using a standard quadratic programming method. The above results are summarized in the following algorithm, whose main idea is to convert the optimal control problem into an optimization problem.
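When the running and terminal costs are quadratic, as in the examples below, \(\hat{J}\) is a quadratic form in \([\alpha ,\gamma ]\), so (12)–(13) is an equality-constrained quadratic program that can be solved directly from its KKT conditions. A minimal sketch under that assumption, with illustrative names H, g, P, q:

```python
import numpy as np

# Sketch: minimize (1/2) z^T H z + g^T z subject to P z = q by solving
# the KKT system  [H  P^T; P  0] [z; lam] = [-g; q].
def solve_eq_qp(H, g, P, q):
    n, m = H.shape[0], P.shape[0]
    kkt = np.block([[H, P.T], [P, np.zeros((m, m))]])
    sol = np.linalg.solve(kkt, np.concatenate([-g, q]))
    return sol[:n]

# Toy check: minimize x^2 + y^2 subject to x + y = 1 gives x = y = 1/2.
z = solve_eq_qp(2 * np.eye(2), np.zeros(2),
                np.array([[1.0, 1.0]]), np.array([1.0]))
```

In the notation of the text, z plays the role of \([\alpha ,\gamma ]^{T}\) and P, q encode the constraints (13).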

Algorithm:

Input: Optimal control problem (1)–(3).

Output: The approximated optimal trajectory, approximated optimal control and approximated performance index J.

Step 0: Choose k and M.

Step 1: Approximate the state and control variables by Chebyshev wavelet series of degree \(M-1\) from equations (6) and (7).

Step 2: Find an expression of \(\hat{J}\) from equation (8).

Step 3: Determine the set of equality constraints, by (9)–(11) and find matrix P.

Step 4: Determine the optimal parameters \([\alpha ^{*},\gamma ^{*}]\) by solving optimization problem (12)–(13) and substitute these parameters into equations (6), (7) and (8) to find the approximated optimal trajectory, approximated optimal control and approximated performance index J, respectively.

Step 5: Increase k or M to get a better approximation of the trajectory, control and performance index.

5 Convergence analysis

The following theorem and lemma establish the convergence of the proposed method.

Theorem 1

Let \(f \in C([a,b],\mathbb {R})\). Then there is a sequence of polynomials \(P_{n}(x)\) that converges uniformly to f(x) on [ab].

Proof

See Rudin (1976). \(\square \)

Lemma 1

If \(\beta _{2^{k}M-1}=\inf _{Q_{2^{k}M-1}} J\) for \(k\) and \(M\) from \(1,2,3,\ldots \), where \(Q_{2^{k}M-1}\) is the subset of Q consisting of all piecewise-continuous functions involving \(2^{k}\) polynomials, each of degree at most \((M-1)\), then \( \lim _{k,M \longrightarrow \infty } \beta _{2^{k}M-1}=\beta \), where \(\beta =\inf _{Q} J\).

Proof

If we define,

$$\begin{aligned} \beta _{2^{k}M-1}=\min _{(\alpha _{2^{k}M-1},\gamma _{2^{k}M-1})\in \mathbb {R}^{2^{k+1}M}} J(\alpha _{2^{k}M-1},\gamma _{2^{k}M-1}), \end{aligned}$$

then,

$$\begin{aligned} \beta _{2^{k}M-1}=J(\alpha _{2^{k}M-1}^{*},\gamma _{2^{k}M-1}^{*}), \end{aligned}$$

where,

$$\begin{aligned} (\alpha _{2^{k}M-1}^{*},\gamma _{2^{k}M-1}^{*})\in \hbox {Argmin}\{J(\alpha _{2^{k}M-1},\gamma _{2^{k}M-1}):(\alpha _{2^{k}M-1},\gamma _{2^{k}M-1})\in \mathbb {R}^{2^{k+1}M}\}. \end{aligned}$$

Now, let \((x^{*}_{2^{k}M-1}(t),u^{*}_{2^{k}M-1}(t))\in \hbox {Argmin}\{J(x(t),u(t)):(x(t),u(t)) \in Q_{2^{k}M-1}\}\), then,

$$\begin{aligned} J(x^{*}_{2^{k}M-1}(t),u^{*}_{2^{k}M-1}(t))=\min _{(x(t),u(t))\in Q_{2^{k}M-1}}J(x(t),u(t)), \end{aligned}$$

in which \(Q_{2^{k}M-1}\) is the class of combinations of piecewise-continuous Chebyshev wavelet functions involving \(2^{k}\) polynomials, each of degree at most \((M-1)\), so,

$$\begin{aligned} \beta _{2^{k}M-1}=J(x^{*}_{2^{k}M-1}(t),u^{*}_{2^{k}M-1}(t)). \end{aligned}$$

Furthermore, according to \(Q_{2^{k}M-1}\subset Q_{2^{k}M}\), we have

$$\begin{aligned} \min _{(x(t),u(t))\in Q_{2^{k}M}} J(x(t),u(t))\le \min _{(x(t),u(t))\in Q_{2^{k}M-1}} J(x(t),u(t)). \end{aligned}$$

Thus, \(\beta _{2^{k}M}\le \beta _{2^{k}M-1}\), which means \(\beta _{2^{k}M-1}\) is a non-increasing sequence. This sequence is also bounded below by \(\beta =\inf _{Q} J\), and therefore convergent. Finally, Theorem 1 ensures that the elements of Q can be approximated arbitrarily well by the piecewise polynomials in \(Q_{2^{k}M-1}\), so the limit is \(\beta \) itself; that is,

$$\begin{aligned} \lim _{k,M \longrightarrow \infty } (\beta _{2^{k}M-1})=\min _{(x(t),u(t))\in Q}J(x(t),u(t)). \end{aligned}$$

\(\square \)

6 Numerical examples

In this section, four examples are considered to illustrate the efficiency of our proposed method. The first example consists of a linear time-invariant system with one state variable, and the second of a linear time-invariant system with two state variables. A third example, consisting of a linear time-varying system, is then considered, and the optimal state and control are obtained. Finally, the optimal control of a linear time-invariant singular system is obtained in the fourth example.

Example 1

(Feldbaum problem) (El-Gindy 1995; Fleming and Rishel 1975; Saberi Nik et al. 2012; Yousefi et al. 2010) Find the optimal control u(t) which minimizes:

$$\begin{aligned} J=\frac{1}{2}\int _{0}^{1}\left( x^{2}(t)+u^{2}(t)\right) \mathrm{d}t, \end{aligned}$$
(14)

Subject to:

$$\begin{aligned} \dot{x}(t)= & {} -x(t)+u(t),\end{aligned}$$
(15)
$$\begin{aligned} x(0)= & {} 1. \end{aligned}$$
(16)

The exact solutions of state and control variables in this problem are (Saberi Nik et al. 2012):

$$\begin{aligned} x^{*}(t)= & {} \cosh (\sqrt{2}t)+\beta \sinh (\sqrt{2}t),\\ u^{*}(t)= & {} (1+\sqrt{2}\beta )\cosh (\sqrt{2}t)+(\sqrt{2}+\beta )\sinh (\sqrt{2}t), \end{aligned}$$

where,

$$\begin{aligned} \beta =-\frac{\cosh (\sqrt{2})+\sqrt{2}\sinh (\sqrt{2})}{\sqrt{2}\cosh (\sqrt{2})+\sinh (\sqrt{2})}\simeq -0.98. \end{aligned}$$
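The closed-form constant \(\beta \) is easy to verify numerically; a quick check:

```python
import numpy as np

# Check of the constant beta in the exact solution above.
s = np.sqrt(2.0)
beta = -(np.cosh(s) + s * np.sinh(s)) / (s * np.cosh(s) + np.sinh(s))
```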

We solved this problem for \(k=1\) and \(M=3\). The state and control variables can be approximated as follows:

$$\begin{aligned} \hat{X}(t)= \left\{ \begin{array}{ll} \hat{x}_{1}(t)=\sum _{m=0}^{2}a_{1m}\phi _{1m}(t),&{}\quad 0\le t \le \frac{1}{2},\\ \hat{x}_{2}(t)=\sum _{m=0}^{2}a_{2m}\phi _{2m}(t),&{}\quad \frac{1}{2}\le t \le 1, \end{array}\right. \end{aligned}$$
(17)
$$\begin{aligned} \hat{U}(t)= \left\{ \begin{array}{ll} \hat{u}_{1}(t)=\sum _{m=0}^{2}c_{1m}\phi _{1m}(t),&{}\quad 0\le t\le \frac{1}{2},\\ \hat{u}_{2}(t)=\sum _{m=0}^{2}c_{2m}\phi _{2m}(t),&{}\quad \frac{1}{2}\le t\le 1, \end{array}\right. \end{aligned}$$
(18)

where,

$$\begin{aligned} \phi (t)= & {} [\phi _{10}(t),\phi _{11}(t),\phi _{12}(t),\phi _{20}(t),\phi _{21}(t),\phi _{22}(t)],\\ \alpha= & {} [a_{10},a_{11},a_{12},a_{20},a_{21},a_{22}],\\ \gamma= & {} [c_{10},c_{11},c_{12},c_{20},c_{21},c_{22}]. \end{aligned}$$

By substituting (17) and (18) into (14), the approximated performance index \(\hat{J}\) can be interpreted as:

$$\begin{aligned} \hat{J}=\frac{1}{2}\left( \int _{0}^{1/2}\left( \hat{x}_{1}^{2}(t)+\hat{u}_{1}^{2}(t)\right) \mathrm{d}t+\int _{1/2}^{1}\left( \hat{x}_{2}^{2}(t)+\hat{u}_{2}^{2}(t)\right) \mathrm{d}t\right) , \end{aligned}$$
(19)

and by substituting (17) and (18) into (15) the following equality constraints can be obtained:

$$\begin{aligned} \left\{ \begin{array}{ll} 4.787a_{11}-23.936a_{12}+1.128a_{10}-1.128c_{10}+1.595c_{11}-1.595c_{12}= 0,\\ 76.596a_{12}+6.383a_{11}-6.383c_{11}+25.532c_{12}= 0,\\ 51.064a_{12}-51.064c_{12} = 0,\\ 1.595a_{21}-49.468a_{22}+1.128a_{20}-1.128c_{20}+4.787c_{21}-27.128c_{22}= 0,\\ 25.532a_{22}+6.383a_{21}-6.383c_{21}+76.596c_{22}= 0,\\ 51.064a_{22}-51.064c_{22} = 0. \end{array} \right. \end{aligned}$$
(20)

There is only one point at which the continuity of the state variable must be satisfied. This point is:

$$\begin{aligned} t_{1}=\frac{1}{2}. \end{aligned}$$

So there is one equality constraint \(\hat{x}_1(t)\vert _{t=\frac{1}{2}}=\hat{x}_2(t)\vert _{t=\frac{1}{2}}\), given by:

$$\begin{aligned} 1.128a_{10}+1.595a_{11}+1.595a_{12}-1.128a_{20}+1.595a_{21}-1.595a_{22} = 0. \end{aligned}$$
(21)

Also, the initial condition (16) produces another constraint, \(\hat{x}_1(t)\Big \vert _{t=0}=1\), which can be written as:

$$\begin{aligned} 1.128a_{10}-1.595a_{11}+1.595a_{12} = 1. \end{aligned}$$
(22)

Minimization of (19) subject to constraints (20)–(22) yields the value 0.1930101957 for \(\hat{J}\). The approximate optimal trajectory \(\hat{X}(t)\) is obtained as:

$$\begin{aligned} \hat{X}(t)= \left\{ \begin{array}{ll} \hat{x}_{1}(t)=0.999999999-1.34207331t+0.718358564t^2,&{}\quad 0\le t \le \frac{1}{2},\\ \hat{x}_{2}(t)=0.923289329-1.01771319t+0.376481015t^2,&{}\quad \frac{1}{2}\le t \le 1, \end{array}\right. \end{aligned}$$

and the approximate optimal control \(\hat{U}(t)\) as:

$$\begin{aligned} \hat{U}(t)= \left\{ \begin{array}{ll} \hat{u}_{1}(t)=-0.342073313+0.094643817t+0.718358564t^2,&{}\quad 0\le t \le \frac{1}{2},\\ \hat{u}_{2}(t)=-0.094423867-0.264751165t+0.376481015t^2,&{}\quad \frac{1}{2}\le t \le 1. \end{array}\right. \end{aligned}$$
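The reported polynomials can be checked directly against the constraints of the problem: by (15) the control must equal \(\dot{x}+x\) on each piece, the state must satisfy \(x(0)=1\), and the two pieces must agree at \(t=1/2\). A quick consistency check:

```python
from numpy.polynomial import Polynomial
import numpy as np

# Consistency check of the reported piecewise polynomials: by (15) the
# control equals xdot + x on each piece; the state satisfies x(0) = 1
# and is continuous at t = 1/2.
x1 = Polynomial([0.999999999, -1.34207331, 0.718358564])
x2 = Polynomial([0.923289329, -1.01771319, 0.376481015])
u1 = Polynomial([-0.342073313, 0.094643817, 0.718358564])
u2 = Polynomial([-0.094423867, -0.264751165, 0.376481015])

res1 = max(abs(x1.deriv()(t) + x1(t) - u1(t)) for t in np.linspace(0.0, 0.5, 11))
res2 = max(abs(x2.deriv()(t) + x2(t) - u2(t)) for t in np.linspace(0.5, 1.0, 11))
jump = abs(x1(0.5) - x2(0.5))
init = abs(x1(0.0) - 1.0)
```

All four quantities are of the order of round-off in the printed coefficients, confirming that the reported solution satisfies (15), (16) and the continuity constraint (21).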

In Table 1, we present the results obtained for J with our proposed method for different M and k. In Table 2, our proposed method is compared with the different methods that have been introduced for solving this problem. Also, Fig. 1 shows the obtained solutions of the control variable for the cases \(M=3,k=1\) and \(M=4,k=3\) compared with the exact solution, together with the error function \(|u^{*}(t)-u(t)|\) for \(M=4,k=3\).

Table 1 Estimated values of J using proposed method
Table 2 Comparison between different methods for J value

As seen from the results reported in Table 1, by increasing k or M we obtain better values of the performance index J. In particular, for the case \(k=3,M=4\) it coincides precisely with the exact solution. Also, Table 2 shows that our solution compares favorably with the methods presented by Abu Haya (2011), Fakharian et al. (2010), Kafash et al. (2013) and Saberi Nik et al. (2012).

Fig. 1
figure 1

Left Graphs of approximate solutions of u(t) for different values of M and k as compared with the exact solution. Right Plot of error function \(|u^{*}(t)-u(t)|\)

Example 2

(Abu Haya 2011; Hsieh 1965; Jaddu 1998; Neuman and Sen 1973; Vlassenbroeck and Van Doreen 1988) Find an optimal controller u(t) that minimizes the following performance index:

$$\begin{aligned} J=\int _{0}^{1}\left( x^{2}(t)+y^{2}(t)+0.005u^{2}(t)\right) \mathrm{d}t, \end{aligned}$$

Subject to:

$$\begin{aligned} \dot{x}(t)= & {} y(t),\\ \dot{y}(t)= & {} -y(t)+u(t), \\ x(0)= & {} 0,\\ y(0)= & {} -1. \end{aligned}$$

We used our proposed method to solve this example; the results obtained for J in the cases \(M=4,k=3\), \(M=5,k=3\) and \(M=6,k=3\) are reported in Table 3. In Table 4, we compare our proposed method with different methods that have been introduced for solving this problem. Also, the approximate solutions of the control variable for different M and k are plotted in Fig. 2.

Table 3 Estimated values of J using proposed method
Table 4 Comparison between different methods for J value

From Table 4, it can be seen that our proposed method already offers a very precise solution which is better than the results reported in Abu Haya (2011), Hsieh (1965), Jaddu (1998), Majdalawi (2010), Neuman and Sen (1973) and Vlassenbroeck and Van Doreen (1988).

Fig. 2
figure 2

Plot of control variable u(t) for different M and k

Example 3

(Abu Haya 2011; Elnagar 1997) Find the optimal control u(t) which minimizes:

$$\begin{aligned} J=\frac{1}{2}\int _{0}^{1}\left( x^{2}(t)+u^{2}(t)\right) \mathrm{d}t. \end{aligned}$$

Subject to:

$$\begin{aligned} \dot{x}(t)= & {} tx(t)+u(t),\\ x(0)= & {} 1. \end{aligned}$$

We solved this problem in the cases \(M=4,k=1\), \(M=4,k=2\) and \(M=5,k=2\); the results obtained for J with our proposed method are presented in Table 5. The approximate solutions of the control variable in these cases are plotted in Fig. 3. Also, Table 6 reports a comparison between our proposed method and different methods that have been introduced for solving this problem.

Table 5 Estimated values of J using proposed method
Table 6 Comparison between different methods for J value
Fig. 3
figure 3

Plot of control variable u(t) for different M and k

Example 4

(Alirezaei et al. 2012; Dziurla and Newcomb 1979) Consider the following singular system:

$$\begin{aligned} \left( \begin{array}{cc} 1 &{} 0 \\ 0 &{} 0 \\ \end{array} \right) \left( \begin{array}{c} \dot{x}(t) \\ \dot{y}(t) \\ \end{array} \right) = \left( \begin{array}{cc} 0 &{} 1 \\ 1 &{} 0 \\ \end{array} \right) \left( \begin{array}{c} x(t) \\ y(t) \\ \end{array} \right) + \left( \begin{array}{c} 0 \\ -1 \\ \end{array} \right) u(t), \end{aligned}$$
$$\begin{aligned} \left( \begin{array}{c} x(0) \\ y(0) \\ \end{array} \right) = \left( \begin{array}{c} -\frac{1}{\sqrt{2}} \\ 1 \\ \end{array} \right) , \end{aligned}$$
Table 7 The exact and approximated values of x(t)
Table 8 The exact and approximated values of y(t)
Fig. 4
figure 4

Left Graphs of approximate solutions of u(t) for different values of M and k as compared with the exact solution. Right Plot of error function \(|u^{*}(t)-u(t)|\)

with the performance index:

$$\begin{aligned} J=\int _{0}^{2}\left( x^{2}(t)+y^{2}(t)+u^{2}(t)\right) \mathrm{d}t. \end{aligned}$$

The system has the following exact solution (Dziurla and Newcomb 1979):

$$\begin{aligned} x^{*}(t)=u^{*}(t)=-\frac{\sqrt{2}}{2(1+e^{4\sqrt{2}})}(e^{\sqrt{2}t}+e^{-\sqrt{2}(t-4)}), \end{aligned}$$
$$\begin{aligned} y^{*}(t)=-\frac{1}{1+e^{4\sqrt{2}}}(e^{\sqrt{2}t}-e^{-\sqrt{2}(t-4)}). \end{aligned}$$
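This exact solution can be checked numerically: the differential row of the system requires \(\dot{x}=y\), and the algebraic row gives \(u=x\), consistent with \(x^{*}=u^{*}\) above. A quick finite-difference check:

```python
import numpy as np

# Numerical check of the exact solution above: the first (differential)
# row of the singular system gives xdot = y; the second row gives u = x.
s2 = np.sqrt(2.0)
d = 1.0 + np.exp(4 * s2)
x = lambda t: -s2 / (2 * d) * (np.exp(s2 * t) + np.exp(-s2 * (t - 4)))
y = lambda t: -(np.exp(s2 * t) - np.exp(-s2 * (t - 4))) / d

# central finite difference of x compared with y on a grid inside (0, 2)
h = 1e-5
err = max(abs((x(t + h) - x(t - h)) / (2 * h) - y(t))
          for t in np.linspace(0.1, 1.9, 10))
```

The check also confirms the initial condition \(x(0)=-1/\sqrt{2}\) from the problem statement.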

We solved this problem; the results obtained for x(t) and y(t) with our proposed method, together with the exact solutions of these variables, are presented in Tables 7 and 8, respectively. Also, the obtained solutions of the control variable for \(M=3,k=1\) and \(M=7,k=2\) compared with the exact solution, and the error function \(|u^{*}(t)-u(t)|\) for \(M=7,k=2\), are plotted in Fig. 4.

7 Conclusion

In this paper, a new algorithm using the state-control parameterization method directly has been presented. Chebyshev wavelets were used as orthogonal basis functions to parameterize the state and control variables. An example with one state variable and another with two state variables were solved. Also, to illustrate the efficiency of our proposed method, two further examples, consisting of a linear time-varying system and a linear time-invariant singular system, were considered and their solutions computed. The obtained solutions show that our proposed method gives results comparable with other similar works. An advantage of the proposed method is that it produces an accurate approximation of the exact solution. Also, it does not require computing an operational matrix of derivatives or integration to convert the control problem into an optimization problem.