1 Introduction

In the real world, fractional calculus has been used to describe the behavior of many phenomena, such as hydrology [1], viscoelastic modelling [2], disease control and prevention [3], temperature and motor control [4], population growth modelling [5], fluid mechanics [6], bioengineering [7], etc. Consequently, various types of fractional differential equations have become an important topic, and their theory, details, and applications are given in many references, such as [8]. Optimal control problems of integer order occur in engineering, science, geometry, and many other fields, and researchers have worked extensively on this topic. It has been shown that materials with memory and hereditary effects, as well as dynamical processes such as gas diffusion and heat conduction in fractal porous media, are modelled more accurately by fractional-order models than by integer-order models [9,10,11]; hence, over the last few decades, the area of fractional optimal control problems has received considerable attention. Applications of fractional optimal control problems can be found in engineering and physics.

The general definition of an optimal control problem requires the extremization of a performance index over an admissible set of control and state functions, subject to dynamic constraints on the state and control variables. Optimality conditions for fractional optimal control problems have been developed; for example, Agrawal presented a general formulation of this problem with the Riemann–Liouville derivative in [12] and then solved this type of problem with a numerical algorithm in [13]. Since the dynamic constraints of this problem involve fractional differential equations, finding exact analytic solutions of the Hamiltonian system is difficult. Therefore, accurate numerical methods for solving different types of fractional optimal control problems have gained much attention recently. For instance, in [14] a numerical solution of a class of fractional optimal control problems is introduced via the Legendre orthonormal basis combined with the operational matrix and the Gauss quadrature rule. In [15] approximation methods for the free final time of fractional optimal control problems (FOCPs) are presented; the considered problems mainly involve fractional differential equations (FDEs) with fractional derivatives (FDs). In [16] an efficient approximate method was presented for solving a class of fractional optimal control problems. Some authors have solved fractional optimal control problems directly, without using the Hamiltonian equations [17,18,19].

Fig. 1 Crane system.

The theory of trajectory inequality constraints was introduced by Dreyfus [20]. In [21] the authors considered the difficulties caused by the presence of inequality constraints. These problems arise in many fields of engineering, such as the human-operated bridge crane sketched in Fig. 1a (taken from [22]), the design of robust nonlinear controllers based on both conventional and hierarchical sliding mode techniques for double-pendulum overhead crane systems [23] (see Fig. 1b), the optimal control of feedback linearizable dynamical systems [24], the Van der Pol oscillator problem [25], and the Breakwell problem [26]. Recently, some authors have considered the general model of this problem in the fractional setting [27,28,29].

The outline of this paper is as follows: in Sect. 2, some fundamental notions of fractional calculus are introduced. The generalized fractional order of the Chebyshev functions and their operational matrix are introduced in Sect. 3. Section 4 is devoted to the problem statement and the convergence of our method. In Sect. 5, our numerical results are reported to show the validity of our method. Section 6 concludes with a brief summary of the paper.

2 Preliminaries and notations

In this section, some necessary definitions and mathematical preliminaries are given.

Definition 1

The Riemann–Liouville fractional integral of order \(\alpha\) is defined as [8]

$$\begin{aligned} I^{\alpha }f(t) = \left\{ \begin{array}{ll} \frac{1}{\varGamma (\alpha )}\int _{0}^t \frac{f(s)}{(t-s)^{1-\alpha }}ds=\frac{1}{\varGamma (\alpha )} t^{\alpha -1}\star f(t), &{} \alpha>0, t>0,\\ f(t) , \ \ \ \ \ \ \ \ \ \ \ \ &{} \alpha =0. \end{array} \right. \end{aligned}$$
(1)

Definition 2

Caputo’s fractional derivative of order \(\alpha\) is defined as [8]

$$\begin{aligned}&D^{\alpha }f(t)=\frac{1}{\varGamma (n-\alpha )}\int _{0}^{t}\frac{f^{(n)}(s)}{(t-s)^{\alpha +1-n}}ds,\nonumber \\&n-1 <\alpha \le n, \quad n\in N, \end{aligned}$$
(2)

with the following properties

  1. \(I^{\alpha }D^{\alpha }f(t) = f(t)-\sum _{i=0}^{n-1}f^{(i)}(0)\frac{t^{i}}{i!},\)

  2. \(D^{\alpha }c = 0,\)

  3. \(D^{\alpha }(\lambda _{1}f_1(t) + \lambda _{2}f_2(t)) =\lambda _{1}D^{\alpha }f_1(t)+\lambda _{2}D^{\alpha }f_2(t),\)

where \(c, \lambda _{1}\), and \(\lambda _{2}\) are constants.
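The power-rule consequence of definition (2), \(D^{\alpha }t^{p}=\frac{\varGamma (p+1)}{\varGamma (p+1-\alpha )}t^{p-\alpha }\) for \(p>n-1\), can be checked directly against the integral definition. The following is a minimal sketch, assuming SciPy; the test case \(f(t)=t^2\) with \(\alpha =0.5\) is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def caputo_power(p, alpha, t):
    # closed form D^alpha t^p = Gamma(p+1)/Gamma(p+1-alpha) t^(p-alpha),
    # valid for p > n-1 (for integer p in {0, ..., n-1} the Caputo derivative is 0)
    return gamma(p + 1.0) / gamma(p + 1.0 - alpha) * t ** (p - alpha)

def caputo_numeric(df_n, alpha, t, n):
    # definition (2): (1/Gamma(n-alpha)) * int_0^t f^(n)(s) (t-s)^(n-alpha-1) ds;
    # weight='alg' lets quad absorb the integrable endpoint singularity (t-s)^(n-alpha-1)
    val, _ = quad(df_n, 0.0, t, weight='alg', wvar=(0.0, n - alpha - 1.0))
    return val / gamma(n - alpha)

alpha = 0.5                                                # 0 < alpha <= 1, so n = 1 in (2)
num = caputo_numeric(lambda s: 2.0 * s, alpha, 2.0, n=1)   # f(t) = t^2, f'(s) = 2s
exact = caputo_power(2.0, alpha, 2.0)
```

The `weight='alg'` option of `quad` integrates the algebraic endpoint singularity accurately, which a naive quadrature of (2) would not.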

3 Generalized fractional order of the Chebyshev functions

Useful properties of the Chebyshev polynomials, such as orthogonality, the recursive relation, simple real roots, and completeness in the space of polynomials, have led many authors to apply these functions in their work [30,31,32,33]. In the current work, the transformation \(x=2(\frac{t}{\eta })^\alpha -1\), \(\alpha ,\eta > 0\), is applied to the Chebyshev polynomials of the first kind, and the resulting functions are then used to solve optimal control problems. The generalized fractional order of the Chebyshev functions (GFCFs) are defined on the interval \([0,\eta ]\) and denoted by \(_\eta FT_n^\alpha (t)=T_n(2(\frac{t}{\eta })^\alpha -1)\). The analytical form of \(_\eta FT_n^\alpha (t)\), of degree \(n\alpha\), is given by

$$\begin{aligned} {}_\eta FT_n^\alpha (t)= & {} \sum _{k=0}^n(-1)^{k+n}\frac{n2^{2k}(n+k-1)!}{(n-k)!(2k)!} \left( \frac{t}{\eta }\right) ^{\alpha k} \nonumber \\= & {} \sum _{k=0}^n \beta _{n,k,\eta ,\alpha }t^{\alpha k},~~~~t\in [0,\eta ], \end{aligned}$$
(3)

where

$$\begin{aligned} \beta _{n,k,\eta ,\alpha }=(-1)^{k+n} \frac{n2^{2k}(n+k-1)!}{(n-k)!(2k)! \eta ^{\alpha k}}\quad \text {and}\quad \beta _{0,0,\eta ,\alpha }=1. \end{aligned}$$

The GFCFs are orthogonal with respect to the weight function \(w(t)=\frac{t^{\frac{\alpha }{2}-1}}{\sqrt{\eta ^\alpha -t^\alpha }}\) in the interval \([0,\eta ]\):

$$\begin{aligned} \int _{0}^{\eta }~_\eta FT_n^\alpha (t)~_\eta FT_m^\alpha (t)w(t)dt=\frac{\pi }{2\alpha }c_n\delta _{mn}, \end{aligned}$$
(4)

where \(\delta _{mn}\) is the Kronecker delta, \(c_0=2\), and \(c_n=1\) for \(n \ge 1\).
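Relation (4) can be verified numerically: the substitution \(x=2(\frac{t}{\eta })^\alpha -1\) carries the weighted inner product over to the standard Chebyshev one, \(\int _0^\eta {}_\eta FT_n^\alpha \,{}_\eta FT_m^\alpha \, w\, dt=\frac{1}{\alpha }\int _{-1}^{1}\frac{T_n(x)T_m(x)}{\sqrt{1-x^2}}\,dx\), so a Gauss–Chebyshev rule evaluates it exactly. A minimal NumPy sketch (the values of \(\alpha\), \(\eta\), m, and N are arbitrary illustrative choices):

```python
import numpy as np

alpha, eta, m, N = 0.5, 2.0, 4, 16

# Gauss-Chebyshev nodes: the rule (pi/N) * sum h(x_i) is exact for
# integrals of h(x)/sqrt(1-x^2) with h a polynomial of degree <= 2N-1
xs = np.cos((2.0 * np.arange(1, N + 1) - 1.0) * np.pi / (2.0 * N))

# with x = 2(t/eta)^alpha - 1 the GFCF inner product becomes
# (1/alpha) * int_{-1}^{1} T_n(x) T_m(x) / sqrt(1-x^2) dx
T = np.cos(np.outer(np.arange(m), np.arccos(xs)))      # T[n, i] = T_n(x_i)
G = (np.pi / (alpha * N)) * T @ T.T                    # Gram matrix of the FT_n

c = np.ones(m); c[0] = 2.0
expected = np.diag(np.pi / (2.0 * alpha) * c)          # right-hand side of Eq. (4)
```

The computed Gram matrix `G` matches `expected` to machine precision, since every integrand here is a polynomial of low degree in x.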

Also any function y(t), \(t\in [0,\eta ]\), can be expanded as follows:

$$\begin{aligned} y(t)=\sum _{n=0}^\infty a_n ~_\eta FT_n^\alpha (t), \end{aligned}$$

where the coefficients \(a_n\) can be calculated using the orthogonality property of the GFCFs as follows:

$$\begin{aligned} a_n=\frac{2\alpha }{\pi c_n}\int _{0}^{\eta }~_\eta FT_n^\alpha (t)y(t)w(t)dt,~~~~n=0,1,2,\cdots . \end{aligned}$$

However, in numerical methods, only the first m terms of the GFCFs are used to approximate y(t):

$$\begin{aligned} y(t)\approx y_m(t)=\sum _{n=0}^{m-1} a_n ~_\eta FT_n^\alpha (t)=A^T\varPhi (t), \end{aligned}$$
(5)

where A and \(\varPhi (t)\) are the coefficient and basis vectors, respectively:

$$\begin{aligned} A= & {} [a_0,a_1,...,a_{m-1}]^T, \end{aligned}$$
(6)
$$\begin{aligned} \varPhi (t)= & {} [~_\eta FT_0^\alpha (t),~_\eta FT_1^\alpha (t),...,~_\eta FT_{m-1}^\alpha (t)]^T. \end{aligned}$$
(7)

Theorem 1

Suppose that \(D^{k\alpha }y(t)\in C[0,\eta ]\) for \(k=0,1,...,m\), and let \(_\eta F_m^\alpha\) be the subspace generated by \(\{_\eta FT_0^\alpha (t),_\eta FT_1^\alpha (t),\cdots ,_\eta FT_{m-1}^\alpha (t)\}.\)

If \(y_m=A^T\varPhi\) (as in Eq. (5)) is the best approximation to y(t) from \(_\eta F_m^\alpha\), then the error bound is as follows:

$$\begin{aligned} \parallel y(t)-y_m(t)\parallel _w \le \frac{ \eta ^{m\alpha } M_\alpha }{2^m\varGamma (m\alpha +1)}\sqrt{\frac{\pi }{\alpha m!}}, \end{aligned}$$

where \(M_\alpha \ge |D^{m\alpha }y(t)|, ~~t\in [0,\eta ].\)

Proof

See Ref. [33]. \(\square\)

Theorem 2

The generalized fractional order of the Chebyshev function \(_\eta FT_n^\alpha (t)\) has precisely n real zeros on the interval \((0,\eta )\), in the form

$$\begin{aligned} t_k=\eta \left( \frac{1+\cos \left( \frac{(2k-1)\pi }{2n}\right) }{2}\right) ^\frac{1}{\alpha },~~~~~~ k=1,2,\cdots ,n. \end{aligned}$$

Proof

The Chebyshev polynomial \(T_n(x)\) has n real zeros [34], so we can write

$$\begin{aligned} x_k=\cos \left( \frac{(2k-1)\pi }{2n}\right) , \quad k=1,2,...,n, \end{aligned}$$

and using

$$\begin{aligned} T_n(x)=2^{n-1}(x-x_1)(x-x_2)\cdots (x-x_n), \end{aligned}$$

with \(x=2(\frac{t}{\eta })^\alpha -1\), we have

$$\begin{aligned} T_n\left( 2(\tfrac{t}{\eta })^\alpha -1\right) =2^{n-1}\left( (2(\tfrac{t}{\eta })^\alpha -1)-x_1\right) \left( (2(\tfrac{t}{\eta })^\alpha -1)-x_2\right) \cdots \left( (2(\tfrac{t}{\eta })^\alpha -1)-x_n\right) . \end{aligned}$$

Now we can obtain the real zeros of\(_\eta FT_n^\alpha (t)\) as follows:

$$\begin{aligned} t_k=\eta \left( \frac{1+x_k}{2}\right) ^{\frac{1}{\alpha }}. \end{aligned}$$

\(\square\)

In the next theorem, the operational matrix of the Caputo fractional derivative of order \(\alpha > 0\) for the GFCFs is presented.
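The zeros derived in Theorem 2, which later serve as collocation points, can be spot-checked numerically; a sketch assuming NumPy (n, \(\alpha\), and \(\eta\) are arbitrary illustrative choices):

```python
import numpy as np

alpha, eta, n = 0.5, 2.0, 6
k = np.arange(1, n + 1)

# zeros from Theorem 2
t_k = eta * ((1.0 + np.cos((2 * k - 1) * np.pi / (2 * n))) / 2.0) ** (1.0 / alpha)

# evaluate FT_n(t) = T_n(2 (t/eta)^alpha - 1) = cos(n arccos(.)) at the claimed zeros
x = 2.0 * (t_k / eta) ** alpha - 1.0
vals = np.cos(n * np.arccos(np.clip(x, -1.0, 1.0)))
```

All n points fall inside \((0,\eta )\) and \(_\eta FT_n^\alpha\) vanishes there to machine precision.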

Theorem 3

Let \(\varPhi (t)\) be the GFCF vector in Eq. (7), and let \(\mathbf D ^{(\alpha )}\) be the \(m\times m\) operational matrix of the Caputo fractional derivative of order \(\alpha > 0\), so that

$$\begin{aligned} D^\alpha \varPhi (t)= \mathbf D ^{(\alpha )} \varPhi (t). \end{aligned}$$
(8)

where
$$\begin{aligned} \mathbf D _{i,j}^{(\alpha )}=\left\{ \begin{array}{l}{\frac{2}{\sqrt{\pi }c_j}\mathop {\sum }\nolimits _{k=1}^i \mathop {\sum }\nolimits _{s=0}^j \beta _{i,k,\eta ,\alpha } \beta _{j,s,\eta ,\alpha }\frac{\varGamma (\alpha k+1)\varGamma (s+ k-\frac{1}{2})\eta ^{\alpha (k+s-1)}}{\varGamma (\alpha k-\alpha +1)\varGamma (s+ k)},~i>j}\\ {0~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ otherwise} \end{array} \right. \end{aligned}$$
(9)

for

\(i,j=0,1,...,m-1.\)

Proof

See Ref. [33]. \(\square\)

Theorem 4

If \(\mathbf D ^{(\alpha )}\) is the operational matrix of the Caputo fractional derivative of order \(\alpha > 0\) for the generalized fractional order of the Chebyshev functions, then the error vector of this matrix is zero.

Proof

The error vector is defined as:

$$\begin{aligned} E_{D} =D^\alpha \varPhi (t)- \mathbf D ^{(\alpha )} \varPhi (t) , \quad E_{D} = \left[ \begin{array}{l} e_{0} \\ e_{1}\\ \vdots \\ e_{m-2}\\ e_{m-1} \end{array}\right] . \\ \end{aligned}$$

For \(i=0,1,...,m-1,\) we have

$$\begin{aligned}&D^\alpha _\eta FT_i^\alpha (t)=\sum _{k=0}^i \beta _{i,k,\eta ,\alpha }.D^\alpha t^{\alpha k} \nonumber \\&=\sum _{k=0}^i \beta _{i,k,\eta ,\alpha } \frac{\varGamma (\alpha k+1)}{\varGamma (\alpha k+1-\alpha )} t^{\alpha k-\alpha }, \end{aligned}$$
(10)

and since \(\deg (t^{\alpha k-\alpha })\le (m-1) \alpha\) for \(i=0,1,...,m-1\), Eq. (10) can be expanded as follows

$$\begin{aligned} \sum _{k=0}^i \beta _{i,k,\eta ,\alpha } \frac{\varGamma (\alpha k+1)}{\varGamma (\alpha k+1-\alpha )} t^{\alpha k-\alpha }=\sum _{j=0}^{m-1} a_{i,j}~_\eta FT_j^\alpha (t), \end{aligned}$$
(11)

where

$$\begin{aligned}&a_{i,j}=\frac{2\alpha }{\pi c_j} \int _0^\eta \sum _{k=0}^i \beta _{i,k,\eta ,\alpha } \frac{\varGamma (\alpha k+1)}{\varGamma (\alpha k+1-\alpha )} t^{\alpha k-\alpha }(~_\eta FT_j^\alpha (t))~ w(t)~dt\\&\quad =\frac{2\alpha }{\pi c_j}\int _0^\eta \sum _{k=0}^i \beta _{i,k,\eta ,\alpha } \frac{\varGamma (\alpha k+1)t^{\alpha k -\alpha }}{\varGamma (\alpha k-\alpha +1)} \sum _{s=0}^j \beta _{j,s,\eta ,\alpha } t^{\alpha s}\frac{t^{\frac{\alpha }{2}-1}}{\sqrt{\eta ^\alpha -t^\alpha }}dt\\&\quad =\frac{2\alpha }{\pi c_j}\sum _{k=0}^i \sum _{s=0}^j \beta _{i,k,\eta ,\alpha }\beta _{j,s,\eta ,\alpha }\frac{\varGamma (\alpha k+1)}{\varGamma (\alpha k-\alpha +1)}\int _0^\eta \frac{t^{\alpha (k+s-\frac{1}{2})-1}}{\sqrt{\eta ^\alpha -t^\alpha }}dt. \end{aligned}$$

Now, by evaluating the above integral, we conclude that

$$\begin{aligned} a_{i,j}=\mathbf D _{i,j}^{(\alpha )}, \end{aligned}$$

and as a result for \(i=0,1,...,m-1,\)

$$\begin{aligned} \Vert e_{i}\Vert _{2}=0. \end{aligned}$$

\(\square\)
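Theorems 3 and 4 can be illustrated numerically: building \(\mathbf D ^{(\alpha )}\) from Eq. (9) and comparing \(\mathbf D ^{(\alpha )}\varPhi (t)\) with the term-by-term Caputo derivative of the series (3) should agree to machine precision. A sketch assuming NumPy and SciPy (the values of \(\alpha\), \(\eta\), m, and the sample points are arbitrary illustrative choices):

```python
import numpy as np
from math import factorial
from scipy.special import gamma

def beta(n, k, eta, alpha):
    # coefficients of Eq. (3); beta_{0,0} = 1 covers FT_0 = 1
    if n == 0:
        return 1.0 if k == 0 else 0.0
    return (-1.0) ** (k + n) * n * 4.0 ** k * factorial(n + k - 1) / (
        factorial(n - k) * factorial(2 * k) * eta ** (alpha * k))

def op_matrix(m, eta, alpha):
    # Caputo operational matrix of Eq. (9): strictly lower triangular
    c = np.ones(m); c[0] = 2.0
    D = np.zeros((m, m))
    for i in range(1, m):
        for j in range(i):                 # nonzero entries only for i > j
            acc = 0.0
            for k in range(1, i + 1):
                for s in range(j + 1):
                    acc += (beta(i, k, eta, alpha) * beta(j, s, eta, alpha)
                            * gamma(alpha * k + 1) * gamma(s + k - 0.5)
                            * eta ** (alpha * (k + s - 1))
                            / (gamma(alpha * k - alpha + 1) * gamma(s + k)))
            D[i, j] = 2.0 * acc / (np.sqrt(np.pi) * c[j])
    return D

alpha, eta, m = 0.5, 2.0, 5
D = op_matrix(m, eta, alpha)
ts = np.linspace(0.1, eta, 7)

def FT(n, t):                              # FT_n(t) = T_n(2 (t/eta)^alpha - 1)
    return np.cos(n * np.arccos(np.clip(2 * (t / eta) ** alpha - 1, -1, 1)))

def dFT(n, t):                             # term-by-term Caputo derivative of Eq. (3)
    out = np.zeros_like(t)
    for k in range(1, n + 1):
        out += (beta(n, k, eta, alpha) * gamma(alpha * k + 1)
                / gamma(alpha * k - alpha + 1)) * t ** (alpha * (k - 1))
    return out

Phi = np.array([FT(n, ts) for n in range(m)])
lhs = np.array([dFT(n, ts) for n in range(m)])   # D^alpha Phi(t)
rhs = D @ Phi                                    # D^(alpha) Phi(t)
```

The agreement of `lhs` and `rhs` at arbitrary sample points reflects Theorem 4: the operational matrix is exact, not approximate.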

4 Problem statement

In the current section, the following class of nonlinear fractional systems with inequality constraints is considered:

$$\begin{aligned}&D^{\alpha }x_i(t)=F_i(t,x(t),u(t)),\quad 0\le \alpha \le 1, \quad i=1,...,l,\\&S_j(t,x(t),u(t))\le 0,\quad j=1,2,...,r, \end{aligned}$$

where

$$\begin{aligned} x(t)= & {} [ x_{1}(t), x_{2}(t), \ldots , x_{l}(t)]^{T},\\ u(t)= & {} [u_{1}(t), u_{2}(t), \ldots , u_{q}(t) ]^{T}, \end{aligned}$$

are the state and control vectors, respectively, and the initial conditions of the system are

$$\begin{aligned} x_{i}(0)=\varrho _i,\quad i=1,...,l. \end{aligned}$$

The aim is to find the optimal control vector u(t) and the corresponding state functions satisfying this system and minimizing the following performance index

$$\begin{aligned} min\quad J(x,u)=\int _{0}^{1} L(t, x(t),u(t)) dt, \end{aligned}$$

where \(L : [0, 1] \times R^{l} \times R^{q} \rightarrow R\) is a differentiable function. Note that for \(\alpha =1\) the fractional problem reduces to the classical optimal control problem. In what follows we therefore concentrate on the fractional formulation; the integer-order optimal control problem is recovered simply by setting \(\alpha =1\).

First, we expand the elements of the state and control vectors in terms of the GFCFs:

$$\begin{aligned}&x_k(t)\simeq \sum _{n=0}^{m-1} x_{kn}~_\eta FT_n^\alpha (t)=X_k^T \varPhi (t),\nonumber \\&u_s(t)\simeq \sum _{n=0}^{m-1} u_{sn}~_\eta FT_n^\alpha (t)=U_s^T \varPhi (t), \end{aligned}$$
(12)

therefore, we have

$$\begin{aligned} D^{\alpha } x_k(t)\simeq X_k^T \mathbf D ^{(\alpha )}\varPhi (t), \end{aligned}$$
(13)

where \(X_k\) and \(U_s\) are the following unknown coefficient vectors:

$$\begin{aligned} X_k=[x_{k0},x_{k1},\ldots ,x_{km-1}]^T,\quad U_s=[u_{s0},u_{s1},\ldots ,u_{sm-1}]^T. \end{aligned}$$

Suppose that \({\hat{\varPhi }}(t)\) and \({\hat{\varPhi }}^{*}(t)\) are the following \(lm\times l\) and \(qm \times q\) matrices, respectively,

$$\begin{aligned} {\hat{\varPhi }}(t) = I_{l} \otimes \varPhi (t), \quad \hat{\varPhi }^{*}(t) = I_{q} \otimes \varPhi (t), \end{aligned}$$
(14)

where \(I_{l}\) and \(I_{q}\) are \(l \times l\) and \(q\times q\) identity matrices, respectively, and \(\otimes\) denotes the Kronecker product [35]. So we can write

$$\begin{aligned} u(t)\simeq (U^T{\hat{\varPhi }}^{*}(t))^T,\quad x(t)\simeq (X^T{\hat{\varPhi }}(t))^T,\quad D^\alpha x(t)\simeq (X^T{\hat{\mathbf{D }}}^{(\alpha )}{\hat{\varPhi }}(t))^T, \end{aligned}$$

where X and U are vectors of order \(lm \times 1\) and \(qm \times 1\), respectively, given by

$$\begin{aligned} X = [X_{1}, X_{2}, \ldots , X_{l}]^{T},\quad U= [U_{1}, U_{2}, \ldots , U_{q}]^{T}, \end{aligned}$$
(15)

and

$$\begin{aligned} {{\hat{\mathbf{D }}}^{(\alpha )}}= I_{l} \otimes \mathbf D ^{(\alpha )}. \end{aligned}$$
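The Kronecker-product bookkeeping of Eqs. (14)–(15) amounts to stacking the per-variable coefficient vectors; a small NumPy illustration, where the values of m, l, and the entries of \(\varPhi (t)\) are placeholders:

```python
import numpy as np

m, l = 3, 2                        # basis size and number of state variables
Phi = np.array([1.0, 0.3, -0.5])   # placeholder values of Phi(t) at some fixed t

# Phi_hat = I_l (x) Phi(t): an (l*m) x l block-diagonal matrix, Eq. (14)
Phi_hat = np.kron(np.eye(l), Phi.reshape(m, 1))

X = np.arange(1.0, l * m + 1)      # stacked coefficient vector [X_1; X_2], Eq. (15)
x_t = Phi_hat.T @ X                # x(t) ~ (X^T Phi_hat)^T: one entry per state variable
```

Each entry of `x_t` is the inner product of one block of X with \(\varPhi (t)\), i.e. \(x_k(t)\simeq X_k^T\varPhi (t)\).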

Now, if these approximations are substituted into the cost function, we obtain:

$$\begin{aligned} J\simeq \int _{0}^{1} L[t, {\hat{\varPhi }}(t)^T X,{\hat{\varPhi }}^{*}(t)^TU]dt \end{aligned}$$

which can be computed numerically by the Gauss–Chebyshev integration method. In the next step, we focus on the dynamical system and write

$$\begin{aligned} H_i(t)= & {} X_i^T \mathbf D ^{(\alpha )}\varPhi (t)-F_i(t, {\hat{\varPhi }}(t)^T X,{\hat{\varPhi }}^{*}(t)^TU), \quad i=1,...,l,\\ G_j(t)= & {} S_j(t, {\hat{\varPhi }}(t)^T X,{\hat{\varPhi }}^{*}(t)^TU)+z_j^2(t),\quad j=1,2,...,r, \end{aligned}$$

where \(z_j(t), \ j=1,2,...,r,\) are unknown slack variables added to the inequality constraints to convert them into equality constraints; they can be expanded as

$$\begin{aligned} z_j(t) \simeq Z_j^T \varPhi (t), \end{aligned}$$

where \(Z_j\) is the following unknown coefficients vector

$$\begin{aligned} Z_j=[z_{j0},z_{j1},\ldots ,z_{jm-1}]^T,\quad j=1,2,...,r. \end{aligned}$$

Now we consider

$$\begin{aligned} J^\star =J+\sum \limits _{i= 1}^{l}\sum \limits _{f= 1}^{m}(H_i)(t_f)\lambda _{i,f}+\sum \limits _{j= 1}^{r}\sum \limits _{f= 1}^{m}(G_j)(t_f)\gamma _{j,f}, \end{aligned}$$
(16)

where \(\lambda _{i,f}\) and \(\gamma _{j,f}\) are unknown Lagrange multipliers and \(t_f,\ f=1,...,m,\) are the collocation points introduced in Theorem 2. The necessary conditions for an extremum of \(J^\star\) are

$$\begin{aligned}&\frac{\partial J^\star }{\partial X_i}=0,\quad i=1,...,l,\quad \frac{\partial J^\star }{\partial U_j}=0,\quad j=1,...,q,\quad \frac{\partial J^\star }{\partial z_j}=0,\quad j=1,...,r,\\&\frac{\partial J^\star }{\partial \lambda _{i,f}}=0,\quad i=1,...,l,\quad f=1,...,m,\quad \frac{\partial J^\star }{\partial \gamma _{j,f}}=0,\quad j=1,...,r,\quad f=1,...,m, \end{aligned}$$

and this system should be solved subject to the initial conditions; as a result, by substituting the obtained values into Eq. (12), u(t) and x(t) can be calculated.
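The Gauss–Chebyshev evaluation of the cost integral mentioned above maps \([0,1]\) to \([-1,1]\) and reinserts the Chebyshev weight, \(\int _0^1 f(t)dt=\frac{1}{2}\int _{-1}^{1}\frac{f(\frac{x+1}{2})\sqrt{1-x^2}}{\sqrt{1-x^2}}dx\). A minimal sketch assuming NumPy, where the sample integrand stands in for \(L(t,x(t),u(t))\):

```python
import numpy as np

N = 400
x = np.cos((2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N))  # Chebyshev nodes on (-1, 1)
w = (np.pi / (2 * N)) * np.sqrt(1.0 - x ** 2)                # int_0^1 f(t) dt ~ w @ f(t(x))

f = lambda t: t * np.exp(-t)          # arbitrary smooth integrand for illustration
approx = w @ f((x + 1.0) / 2.0)
exact = 1.0 - 2.0 / np.e              # int_0^1 t e^{-t} dt, by parts
```

Because the reinserted factor \(\sqrt{1-x^2}\) is not polynomial, convergence here is algebraic rather than spectral, but the rule is cheap and requires no endpoint evaluations.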

Theorem 5

The approximate solutions \(x(.)=X^T\varPhi (.)\) and \(u(.)=U^T\varPhi (.)\) converge to the exact solutions as m, the number of generalized fractional order Chebyshev functions in the basis vector \(\varPhi (t)\), tends to infinity.

Proof

We prove this theorem for a single state and control variable; the extension to the vector case is straightforward. Suppose \(W_m\) is the set of all \((X^T, U^T)\varPhi (.)\) where \((X^T, U^T)\) satisfies the constraints. By the convergence property of the fractional Chebyshev polynomials, for every \((X_1,U_1)\varPhi (.)\in W_m\) there exists a unique pair of functions \((x_1(.), u_1(.))\) such that

$$\begin{aligned} (X_1^T,U_1^T)\varPhi (t)\longrightarrow (x_1(t), u_1(t)),\quad m\longrightarrow \infty ,\quad t \in [0,1]. \end{aligned}$$

According to Theorems 1 and 4, as \(m\longrightarrow \infty\), \(X_1^T \mathbf D ^{(\alpha )} \varPhi (.)\) tends to \(D^\alpha x_1(.)\). It is clear that \((x_1(.), u_1(.)) \in W\), where W is the set of all (x(.), u(.)) that satisfy the constraints, so as m tends to infinity each element in \(W_m\) tends to an element in W.

Moreover, as \(m \rightarrow \infty\), \(J_1^m=J(X_1^T\varPhi (.),U_1^T\varPhi (.))\) tends to \(J_1\), where \(J_1^m\) is the value of the cost function corresponding to the pair \((X_1^T,U_1^T)\varPhi (.)\) and \(J_1\) is the objective value corresponding to the feasible solution \((x_1(.), u_1(.))\). Now,

$$\begin{aligned} W_1\subseteq \cdots \subseteq W_m \subseteq W_{m+1} \subseteq \cdots \subseteq W, \end{aligned}$$

consequently

$$\begin{aligned} \text {inf}_{W_{1}} J_1 \ge \cdots \ge \text {inf}_{W_m} J_m \ge \text {inf}_{W_{m+1}} J_{m+1}\ge \cdots \ge \text {inf}_W J, \end{aligned}$$

which is a non-increasing and bounded sequence; therefore, it converges to a number \(\zeta \ge \text {inf}_W J\). We need to show that \(\zeta =\lim _{m \rightarrow \infty } \text {inf}_{W_m}J_m= \text {inf}_W J\). Given \(\varepsilon >0\), let (x(.), u(.)) be an element in W such that

$$\begin{aligned} J (x(.), u(.)) < \text {inf} _{W }J + \varepsilon , \end{aligned}$$

where, by the definition of inf, such \((x(.), u(.)) \in W\) exists. Since J(x(.), u(.)) is continuous, for this value of \(\varepsilon\), there exists \(N(\varepsilon )\) so that if \(m> N(\varepsilon ),\)

$$\begin{aligned} \vert J (x(.), u(.)) - J (X^T\varPhi (.),U^T\varPhi (.))\vert <\varepsilon . \end{aligned}$$
(17)

Now if \(m> N(\varepsilon )\), then using Eq. (17) gives

$$\begin{aligned} J (X^T\varPhi (.),U^T\varPhi (.))< J (x(.), u(.)) +\varepsilon < \text {inf}_W J + 2\varepsilon , \end{aligned}$$

on the other hand,

$$\begin{aligned} \text {inf}_W J \le \text {inf}_{W_m}J_m\le J (X^T\varPhi (.),U^T\varPhi (.)), \end{aligned}$$

so

$$\begin{aligned} \text {inf}_W J \le \text {inf}_{W_m}J_m < \text {inf}_W J + 2\varepsilon , \end{aligned}$$

or

$$\begin{aligned} 0 \le \text {inf}_{W_m}J_m - \text {inf}_W J < 2\varepsilon , \end{aligned}$$

where \(\varepsilon\) is arbitrary. Thus,

$$\begin{aligned} \zeta =\lim _{m \rightarrow \infty } \text {inf}_{W_m}J_m= \text {inf}_W J, \end{aligned}$$

which completes the proof. \(\square\)

5 Numerical results

In this section, numerical examples are presented to demonstrate the applicability and accuracy of the proposed technique. All numerical computations have been carried out in Mathematica. The first two examples are devoted to integer-order problems, and in Examples 3 and 4 fractional optimal control problems are solved.

Example 1

As a practical and nonlinear example, we consider the following rigid asymmetric spacecraft problem. Euler's equations for the angular velocities \(\omega _1\), \(\omega _2\), and \(\omega _3\) of the spacecraft are given by [36]

$$\begin{aligned}&\omega ' _1(t) =-\frac{I_3-I_2}{I_1}\omega _2\omega _3+\frac{u_1}{I_1},\\&\omega ' _2(t) =-\frac{I_1-I_3}{I_2}\omega _1\omega _3+\frac{u_2}{I_2},\\&\omega ' _3(t) =-\frac{I_2-I_1}{I_3}\omega _1\omega _2+\frac{u_3}{I_3}, \end{aligned}$$

where \(u_1\), \(u_2\), and \(u_3\) are the control functions, and \(I_1= 86.24\) kg \(m^2\), \(I_2 = 85.07\) kg \(m^2\), and \(I_3 = 113.59\) kg \(m^2\) are the principal moments of inertia of the spacecraft.

The performance index to be minimized is given by

$$\begin{aligned} min \quad J=\frac{1}{2} \int _{0}^{100}[u_1^{2}(t) +u_2^{2}(t)+u_3^{2}(t)] dt, \end{aligned}$$
(18)

We consider the state inequality constraint on \(\omega _1\) given by

$$\begin{aligned} \omega _1-(5\times 10^{-6}t^2-5\times 10^{-4}t+0.016)\le 0. \end{aligned}$$

In addition, the following initial and terminal state constraints have to be satisfied:

$$\begin{aligned}&\omega _1(0) = 0.01\ \mathrm{rad/s}, \quad \omega _2(0) = 0.005\ \mathrm{rad/s}, \quad \omega _3(0) = 0.001\ \mathrm{rad/s},\\&\omega _1(100) = 0, \quad \omega _2(100) = 0, \quad \omega _3(100) = 0. \end{aligned}$$
Table 1 The values of J for Example 1

We use the transformation \(t =100\tau\), \(0\le \tau \le 1\), to apply our proposed method.

In Table 1, the values of J obtained by our method are listed together with those obtained using the hybrid of block-pulse and Bernoulli polynomials [37], for various orders N of the block-pulse functions and M of the Bernoulli polynomials, and using quasilinearization with Chebyshev polynomials for different numbers N of basis polynomials [36].

To show the validity of the numerical findings, we set \(m=7\) and obtain

$$\begin{aligned}&\omega _1(t)=0.01 - 0.0101654 t + 0.000238568 t^2 - 0.0000643454 t^3\\&\quad - 0.0000102394 t^4 + 9.27955\times 10^{-7} t^5 + 4.48983\times 10^{-7} t^6,\\&\omega _2(t)=0.005 - 0.00467849 t - 0.00047674 t^2 + 0.000149936 t^3\\&\quad + 6.23214\times 10^{-6} t^4 - 7.01435\times 10^{-7} t^5 - 2.34996\times 10^{-7} t^6,\\&\omega _3(t)=0.001 - 0.00094848 t - 0.0000762761 t^2 + 0.0000243584 t^3\\&\quad - 4.34376\times 10^{-7} t^4 + 1.35975\times 10^{-6} t^5 - 5.27776\times 10^{-7} t^6, \end{aligned}$$

and by choosing \(t=0\) and \(t=1\), the initial and terminal state conditions are recovered. Also, Fig. 2 shows that the obtained state and control functions approximately fulfil the constraints.
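The boundary claims can be spot-checked by evaluating these polynomials at the endpoints; a NumPy sketch using the coefficients above:

```python
import numpy as np

# coefficients in ascending powers of the scaled time variable on [0, 1]
w1 = [0.01, -0.0101654, 0.000238568, -0.0000643454, -0.0000102394, 9.27955e-7, 4.48983e-7]
w2 = [0.005, -0.00467849, -0.00047674, 0.000149936, 6.23214e-6, -7.01435e-7, -2.34996e-7]
w3 = [0.001, -0.00094848, -0.0000762761, 0.0000243584, -4.34376e-7, 1.35975e-6, -5.27776e-7]

ev = lambda c, t: np.polyval(c[::-1], t)    # reverse to descending order for polyval
start = [ev(c, 0.0) for c in (w1, w2, w3)]  # should equal the initial values
end   = [ev(c, 1.0) for c in (w1, w2, w3)]  # should (approximately) vanish
```

The constant terms reproduce the initial conditions exactly, and the values at the terminal time are zero to roughly the number of digits printed.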

Fig. 2 Curves for constraints \(G_1(t)\), \(G_2(t)\), \(G_3(t)\), and \(G_4(t)\), Example 1

$$\begin{aligned} G_1(t)= & {} \omega ' _1(t) +\frac{I_3-I_2}{I_1}\omega _2\omega _3-\frac{u_1}{I_1}=0,\\ G_2(t)= & {} \omega ' _2(t) +\frac{I_1-I_3}{I_2}\omega _1\omega _3-\frac{u_2}{I_2}=0,\\ G_3(t)= & {} \omega ' _3(t) +\frac{I_2-I_1}{I_3}\omega _1\omega _2-\frac{u_3}{I_3}=0,\\ G_4(t)= & {} \omega _1-(5\times 10^{-6}t^2-5\times 10^{-4}t+0.016)\le 0. \end{aligned}$$

Example 2

Consider the following two-dimensional fractional optimal control problem [38]:

$$\begin{aligned} min \quad J= \int _{0}^{1}(x_{1}^2(t)+x_{2}^2(t) +0.005 u^{2}(t)) dt, \end{aligned}$$
(19)

s.t.

$$\begin{aligned}&D^{\alpha }x_{1}(t) = x_{2}(t), \quad 0 < \alpha \le 1,\\&D^{\alpha }x_{2}(t) = -x_{2}(t) + u(t), \\&x_{1}(0) =0, \quad x_{2}(0) =-1, \end{aligned}$$

and subject to the inequality constraint

$$\begin{aligned} x_2(t)\le 8(t-0.5)^2-0.5. \end{aligned}$$
Table 2 The values of J for Example 2

The resulting values of J, together with the solutions obtained in [39] using the Chebyshev finite difference method, the results reported in [40] using Chebyshev polynomials, and those of the method presented in [28] for different orders m of the Bernstein polynomials, are summarized in Table 2. It can be seen that the proposed method yields state and control functions inside the feasible region, giving better values of the performance index.

Example 3

Consider the following problem [28]

$$\begin{aligned} min \quad J=\frac{1}{2} \int _{0}^{1}[x_1^{2}(t) +u^{2}(t)] dt, \end{aligned}$$
(20)

s.t.

$$\begin{aligned}&D^{\alpha }x_1(t) = x_2(t), \quad 0 \le \alpha \le 1,\\&D^{\alpha }x_2(t) = -x_2(t)+u(t),\\&\vert u(t)\vert \le 1, \end{aligned}$$

and initial conditions

$$\begin{aligned} x_1(0) =0, \quad x_2(0) =10. \end{aligned}$$

Table 3 shows the values of J obtained by the hybrid functions [37], the rationalized Haar functions [41], and the method proposed in [28] for \(\alpha = 1\), together with those of the present method; the comparison shows that the proposed approach solves the problem effectively.

Table 3 The values of J with \(\alpha =1,\) for Example 3
Fig. 3 Curves of state and control functions for \(\alpha =0.7, 0.8, 0.9, 1\), Example 3

Table 4 shows the convergence of the values of J for different \(\alpha\) as \(\alpha\) approaches 1, for \(m=5\).

Table 4 The estimated value of J for different \(\alpha\) for Example 3

Since the exact control and state functions are not known for fractional values of \(\alpha\), the reliability of the method is illustrated in Fig. 3.

Also, Fig. 4 shows that the solution obtained for \(\alpha =1\) lies in the admissible region.

Fig. 4 Curves for constraints \(G_1(t)\), \(G_2(t)\), and \(G_3(t)\), Example 3

In Fig. 4, \(G_1(t)\), \(G_2(t)\), and \(G_3(t)\) are the following constraints:

$$\begin{aligned}&G_1(t)=x' _1(t) -x_2(t)=0,\\&G_2(t)=x' _2(t) +x_2(t)-u(t)=0,\\&G_3(t)=\vert u(t)\vert -1\le 0. \end{aligned}$$

Example 4

Consider the following problem [28]

$$\begin{aligned} min \quad J= \int _{0}^{1} (\ln 2)\, x(t)\, dt, \end{aligned}$$

s.t.

$$\begin{aligned}&D^{\alpha }x(t) = (\ln 2) (x(t)+u(t)), \quad 0 \le \alpha \le 1,\\&x(t)+u(t)\le 2,\\&\vert u(t)\vert \le 1, \end{aligned}$$

and the initial condition

$$\begin{aligned} x(0) =0. \end{aligned}$$

The exact solution of this problem for \(\alpha =1\) is \(x(t) = 1-e^{(\ln 2)t}\), \(u(t)=-1\), with \(J=\ln 2 -1\approx -0.30685281.\) Table 5 shows the values of J obtained by the hybrid functions [42], where N and M denote the orders of the block-pulse functions and Bernoulli polynomials, respectively, and by the rationalized Haar functions [41] for \(\alpha = 1\), together with those of the present method; the comparison shows that the proposed approach solves the problem effectively.
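To make the whole scheme concrete, the following is a compact sketch of the method of Sect. 4 applied to this example with \(\alpha =1\), \(\eta =1\), and \(m=4\), so the GFCFs reduce to shifted Chebyshev polynomials. It is an illustration only: SciPy's SLSQP solver, with its native inequality handling, stands in for the slack variables \(z_j\) and the Lagrange system (16), and the quadrature size N is an arbitrary choice.

```python
import numpy as np
from scipy.optimize import minimize

ln2, m = np.log(2.0), 4

def phi(t):                         # FT_n(t) = T_n(2t-1), n = 0..m-1
    th = np.arccos(np.clip(2.0 * t - 1.0, -1.0, 1.0))
    return np.cos(np.arange(m) * th)

def dphi(t):                        # d/dt T_n(2t-1) = 2 n U_{n-1}(2t-1), interior t only
    th = np.arccos(2.0 * t - 1.0)
    n = np.arange(m)
    out = np.zeros(m)
    out[1:] = 2.0 * n[1:] * np.sin(n[1:] * th) / np.sin(th)
    return out

# collocation points: zeros of FT_m (Theorem 2 with alpha = eta = 1)
tc = (1.0 + np.cos((2 * np.arange(1, m + 1) - 1) * np.pi / (2 * m))) / 2.0
P  = np.array([phi(t) for t in tc])
dP = np.array([dphi(t) for t in tc])

# Gauss-Chebyshev rule for the cost integral J = ln2 * int_0^1 x(t) dt
N  = 64
xq = np.cos((2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N))
wq = (np.pi / (2 * N)) * np.sqrt(1 - xq ** 2)
Pq = np.array([phi(t) for t in (xq + 1) / 2])

J    = lambda z: ln2 * wq @ (Pq @ z[:m])
eqs  = lambda z: np.concatenate([dP @ z[:m] - ln2 * (P @ z[:m] + P @ z[m:]),  # dynamics
                                 [phi(0.0) @ z[:m]]])                          # x(0) = 0
ineq = lambda z: np.concatenate([2.0 - (P @ z[:m] + P @ z[m:]),   # x + u <= 2
                                 1.0 - P @ z[m:],                 # u <= 1
                                 1.0 + P @ z[m:]])                # u >= -1
res = minimize(J, np.zeros(2 * m), method='SLSQP',
               constraints=[{'type': 'eq', 'fun': eqs},
                            {'type': 'ineq', 'fun': ineq}])
```

With everything linear in the unknowns (X, U), the solver converges quickly and `res.fun` lands near the optimal value \(J=\ln 2-1\approx -0.30685\), with the dynamics satisfied at the collocation points.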

Table 5 The values of J with \(\alpha =1,\) for Example 4

Table 6 shows the convergence of the values of J for different \(\alpha\) as \(\alpha\) approaches 1, for \(m=4\); the absolute errors are reported in Table 7. Figure 5 demonstrates the validity of the obtained solution in the fractional case.

Table 6 The estimated value of J for different \(\alpha\) for Example 4
Fig. 5 Exact and numerical values of the state function for Example 4

Table 7 The absolute errors with \(m=4\) for Example 4

6 Conclusion

In this paper, the generalized fractional order of the Chebyshev functions (GFCFs) of the first kind has been introduced, and the fractional derivative operational matrix of these functions has been used to approximate the fractional- or integer-order derivatives of the state functions. It should be noticed that this matrix gives the derivative exactly in both the fractional and integer cases. The functions of the problem are approximated by GFCFs with unknown coefficients in the cost function and the constraints; the optimal control problem is thereby reduced to an optimization problem, whose optimality conditions yield a system of algebraic equations that is solved by the collocation method. As shown, the method is convergent and has good accuracy and stability. The illustrative examples show that this method gives good results for both linear and nonlinear problems.