1 Introduction

Glioblastoma multiforme (GBM) is the most frequent malignant brain tumor, accounting for 16% of primary central nervous system (CNS) tumors (Thakkar et al. 2014). Although GBM occurs mainly in the cerebral hemispheres, it can rarely appear in the brain stem, cerebellum, or spinal cord (Blissitt 2014). GBMs are derived from glial cells in the CNS; however, other neural stem cells may also serve as the cell of origin for gliomas (Phillips et al. 2006).

The median age at GBM diagnosis is 64 years (Thakkar et al. 2014), but the tumor can affect patients of any age, including children. Apart from the higher proliferative activity of glioma cells in childhood, the morphological features do not differ between adults and children. The incidence rate of this tumor is 1.6 times higher in adult men than in women (Ellor et al. 2014; Urbańska et al. 2014).

Some environmental risk factors associated with brain tumors are assumed to be ionizing radiation, smoking, synthetic rubber manufacturing, petroleum refining, air pollution, and toxic agents such as vinyl chloride and pesticides (Alifieris and Trafalis 2015). Furthermore, a group of specific genetic disorders such as retinoblastoma, neurofibromatosis type 1 and 2, Li-Fraumeni syndrome, tuberous sclerosis, and Turcot syndrome may increase the risk of GBM (Ellor et al. 2014).

Clinical presentation depends strongly on the size and location of the tumor. The most common symptoms with which patients present to primary care centers are focal weakness, speech impairment, ataxia, visual disturbances, memory loss, seizures, and headache caused by increased intracranial pressure (Nelson and Cha 2003; Perry et al. 2006).

Computed tomography (CT) scans and magnetic resonance imaging (MRI) are useful examination techniques for brain tumor diagnosis. In MRI, gadolinium enhancement helps clinicians identify abnormal tissue and monitor the progression of glioblastomas. An irregular hypodense center of necrosis with heterogeneous peripheral enhancement is a frequent feature of GBM. Necrosis is an important diagnostic feature for a malignant brain tumor to be classified as GBM under the World Health Organization (WHO) grading system (Blissitt 2014). Surrounding vasogenic edema, marked mass effect, intratumoral hemorrhage, and ventricular extension may also be seen on imaging (Ellor et al. 2014). Some GBMs may appear multifocal (multiple lesions at different locations), distant (lesions far from the primary focus), or diffuse, or may present as microscopic infiltration or leptomeningeal dissemination (Johnson et al. 2015).

For a definitive diagnosis of GBM, the neurosurgical tumor sample is examined using traditional histological, cytological, and histochemical methods; when tumor resection is not possible, fine needle aspiration biopsy is carried out instead (Urbańska et al. 2014).

Fractional order calculus generalizes derivatives and integrals to arbitrary non-integer orders (Podlubny 1999). In a series of papers (Agrawal et al. 2004; Hassani et al. 2022; Singh et al. 2022; Hammad et al. 2021; Wang et al. 2022; Karthikeyan et al. 2021; Rashid et al. 2022; Hajiseyedazizi et al. 2021; Rashid et al. 2022; Radmanesh and Ebadi 2020; He et al. 2022; Abdollahi et al. 2021; Jaros and Kusano 2014; Kumar et al. 2018; Odibat 2019), the authors investigated fractional differential equations in different branches of science, including mathematics, physics, bioscience, and engineering. Xu et al. (2019) analyzed a Legendre-Gauss collocation method for nonlinear distributed-order fractional differential equations; they first proved the existence and uniqueness of the exact solution and then established the high accuracy of the proposed method. Cong et al. (2020) compared solution properties of ordinary and fractional differential equations (FDEs) and proposed some distinct features and a new notion of stability for fractional-order systems. Singla (2021) used a power series expansion technique to investigate the existence of series solutions for some nonlinear systems of time fractional partial differential equations. Garrappa and Kaslik (2020) studied the initial conditions for fractional delay differential equations (FDDEs); they discussed the initialization of FDDEs with respect to both the solution and the fractional operator and found some inconsistencies in the process of incorporating the initial function into the fractional derivative. Vargas (2022) presented a finite difference approach for solving a class of fractional differential equations on irregular meshes; the approach is based on the moving least squares method and on the existence of a fractional Taylor polynomial for Caputo fractional derivatives. Bavi et al. 
(2022) developed a meshless algorithm based on moving least squares (MLS) shape functions for solving the time fractional equation of coronavirus diffusion in different media of soil, water, and tissue. Heydari and Atangana (2022) defined a hybrid of Chebyshev and piecewise Chebyshev cardinal functions to solve nonlinear fractional reaction-advection-diffusion equations. Roohi et al. (2021) simulated the behavior of the generalized Couette flow of a fractional Jeffrey nanofluid in a porous medium with fluctuating thermochemical effects based on the second kind Chebyshev polynomials. Hosseininia and Heydari (2019) proposed a meshless MLS method for the numerical solution of the nonlinear 2D telegraph equation involving the Mittag-Leffler non-singular kernel in the Atangana–Baleanu–Caputo sense using variable-order time fractional derivatives. Heydari et al. (2014) proposed a novel computational approach for solving fractional biharmonic equations based on the combination of an operational matrix of fractional derivatives and shifted Chebyshev polynomials. Sabermahani et al. (2020) achieved a new operational Tau-Collocation method on the Lagrange polynomial basis to find the solution of variable-order fractional differential equations. Bhrawy et al. (2014) used generalized Laguerre orthogonal functions of fractional order to approximate a system of fractional differential equations via a new spectral method. Hussien (2019) developed a collocation operational matrix method for two common delay differential equations of fractional order on a generalized Laguerre polynomial basis. Zaky (2020) provided an adaptive spectral collocation method to approximate the solution of a general nonlinear system of fractional differential equations and non-smooth solutions of related integral equations. Zaky (2019) derived an exponentially accurate Jacobi spectral-collocation approach for non-smooth solutions to nonlinear terminal value problems. Abo-Gabal et al. 
(2022) proposed Romanovski-Jacobi-Gauss-type quadrature formulae for spectral tau approximation of non-smooth solutions of time-fractional partial differential equations.

Laguerre polynomials (LPs) are widely used as basis functions to numerically solve various types of differential equations. Bhrawy et al. (2014) proposed a formula expressing any Caputo fractional order derivative in terms of fractional order generalized Laguerre functions; in addition, a fractional order generalized tau technique was proposed for solving Caputo type fractional differential equations. Daşcıoǧlu and Varol (2021) used LPs to develop an approximation method for the numerical solution of linear fractional Fredholm–Volterra integro-differential equations. Yu et al. (2019) employed the generalized associated Laguerre functions of the first kind as basis functions to numerically solve time-fractional sub-diffusion equations in two-dimensional space on an unbounded domain. Shiralashetti and Kumbinarasaiah (2020) developed an algorithm for the numerical solution of systems of differential equations based on the Laguerre wavelets exact Parseval frame. Hussien (2019) proposed a collocation method for approximating two common delay differential equations of fractional order on a generalized LPs basis. Hajimohammadi and Parand (2021) applied a new learning approximation method, generalized Laguerre least squares support vector regression (GLLSSVR), to obtain the solution of a time-fractional sub-diffusion model (TFSDM) over a semi-infinite domain; their method combines a collocation/Galerkin method with an LSSVR kernel. Chi and Jiang (2021) proposed a Laguerre-Legendre spectral method to approximate the time direction for the flow of a two-dimensional generalized Oldroyd-B model on semi-infinite intervals. Shahni and Singh (2022) proposed three computational algorithms based on Taylor-wavelet, Gegenbauer-wavelet, and Laguerre-wavelet collocation methods to solve the integral form of Emden-Fowler equations with a Green's function kernel. Chen et al. 
(2021) used a novel Laguerre neural network with three layers of neurons for solving Black–Scholes equations and proved its high accuracy and superiority over other existing algorithms. Zhang and Miao (2017) applied weighted LPs to an unconditionally stable scheme for solving one-dimensional telegraph equation.

This paper proposes and applies a fractional order model of glioblastoma brain tumor. The interesting feature of this work is that the LPs are extended to the generalized Laguerre polynomials (GLPs), a new class of basis functions that enables easy approximation of the unknown function and its derivatives. The solution methodology is based on the operational matrices of the GLPs and on Lagrange multipliers, which reduces solving the model to a nonlinear system of algebraic equations. The proposed approach is thus simple and easy to implement for the problem under study, whose optimal solution is obtained by solving that system of nonlinear equations. The convergence analysis is also presented for the model. Furthermore, some test examples are given to verify the validity as well as the applicability of the approach.

The distinction between our proposed approach and other spectral methods must be highlighted from the numerical point of view. Ideally, the error between the exact and numerical solutions must be minimized. Accordingly, the coefficients must be determined following the underlying idea of spectral methods based on, e.g., Chebyshev, Jacobi, Legendre, or Lagrange polynomials: the solution of a differential equation is expressed as a sum of basis functions. Three techniques, tau, Galerkin, and collocation, are commonly used to determine the coefficients. Here, the residual function and the 2-norm of the residual are utilized to transform the problem under study into an optimization problem and to obtain the unknown parameters optimally. The optimality conditions are therefore found in the form of a nonlinear system of algebraic equations in the unknown coefficients. On the other hand, any smooth function can be approximated spectrally by eigenfunctions of singular Sturm–Liouville problems, such as Chebyshev, Legendre, Jacobi, Hermite, or Laguerre polynomials; in other words, the truncation error decays faster than any power of the number of basis functions as that number approaches infinity. For non-analytic functions, however, these classical bases lose this optimality, and the use of GLPs is much more efficient.

This work is organized as follows. In Sect. 2, the formulation of the fractional order glioblastoma tumor (FGT) model and some definitions of fractional calculus in the Caputo sense are given. In Sect. 3, the GLPs are constructed and used to provide the operational matrices of derivatives, function approximation, and convergence analysis. In Sect. 4, the presented method is described and analyzed. In Sect. 5, the application of the GLPs algorithm is investigated for three examples. In Sect. 6, some concluding remarks are drawn.

2 Fractional Order Glioblastoma Tumor in the Caputo Derivative Sense

Glioblastoma multiforme (GBM), also known as grade IV astrocytoma, starts in brain cells called glial cells. The grading of gliomas from I to IV indicates the likely progression and growth of the brain tumor. A grade IV tumor, the most aggressive type, grows rapidly; it may display central necrosis, its cells divide actively, and there are more areas of dead tissue and abnormal blood vessel growth. Many researchers have simulated the original two-dimensional model of brain tumor to predict the equation of tumor growth (Cruywagen et al. 1995; Tracqui et al. 1995; Woodward et al. 1996). They described the effect of therapy on the spatiotemporal growth of the tumor by using models that can be read as,

$${\rm Rate\,of\,change\,of\,tumor\,cell\,density}= {\rm Diffusion\,of\,tumor\,cells} + {\rm Growth\,of\,tumor\,cells}$$

in mathematical terms,

$$\begin{aligned}&\frac{\partial U(x,t)}{\partial t}={\mathcal {D}}\nabla ^{2}U(x,t)+\varrho U(x,t)\nonumber \\&\quad ={\mathcal {D}}\frac{1}{x^{2}}\frac{\partial }{\partial x}\left( x^{2}\frac{\partial U(x,t)}{\partial x}\right) +\varrho U(x,t). \end{aligned}$$
(2.1)

Here, U(x,t) denotes the concentration of tumor cells at location x and time t, \(\nabla ^{2}\) indicates the Laplacian operator, and \({\mathcal {D}}\) represents the diffusion coefficient, a measure of the spread of invading glioblastoma cells in \(cm^2\) per day. The reproduction rate of glioblastoma cells is expressed by \(\varrho \) as a decimal fraction per day. Some authors have added a killing-rate term to investigate the effects of chemotherapy as,

$${\rm Rate\,of\,change\,of\,tumor\,cell\,density}={\rm Diffusion\,of\,tumor\,cells} + {\rm Growth\,of\,tumor\,cells}-{\rm Killing\,rate\,of\,tumor\,cells},$$

Mathematically,

$$\begin{aligned} \frac{\partial U(x,t)}{\partial t}= & {} {\mathcal {D}}\frac{1}{x^{2}}\frac{\partial }{\partial x}\left( x^{2}\frac{\partial U(x,t)}{\partial x}\right) \nonumber \\&\quad +\varrho U(x,t)-\kappa _{t}U(x,t). \end{aligned}$$
(2.2)

where \(\kappa _{t}\) is the killing rate of tumor cells. Equation (2.2) can be rewritten as

$$\begin{aligned}&\frac{\partial U(x,t)}{\partial t}={\mathcal {D}}\left( \frac{\partial ^{2} U(x,t)}{\partial x^{2}}+\frac{2}{x}\frac{\partial U(x,t)}{\partial x}\right) \nonumber \\&\quad +(\varrho -\kappa _{t})U(x,t). \end{aligned}$$
(2.3)
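As a minimal numerical sketch (an illustration only, not part of the analysis below), Eq. (2.3) can be integrated with an explicit finite-difference scheme; the domain, grid, initial profile, and parameter values used here are assumptions chosen for demonstration, not fitted to data.

```python
import numpy as np

# Illustrative parameter values (assumptions, not fitted to data)
D = 0.0013      # diffusion coefficient, cm^2/day
rho = 0.012     # reproduction rate, 1/day
kappa = 0.002   # killing rate kappa_t, 1/day

nx, dt, steps = 200, 0.01, 100          # grid size, time step (days), number of steps
x = np.linspace(0.1, 5.0, nx)           # radial grid, avoiding the x = 0 singularity
dx = x[1] - x[0]
U = np.exp(-((x - 1.0) ** 2) / 0.01)    # assumed initial tumor-cell density profile
U0 = U.copy()

for _ in range(steps):
    Uxx = np.zeros_like(U)
    Ux = np.zeros_like(U)
    Uxx[1:-1] = (U[2:] - 2 * U[1:-1] + U[:-2]) / dx**2   # second spatial derivative
    Ux[1:-1] = (U[2:] - U[:-2]) / (2 * dx)               # first spatial derivative
    # Explicit Euler step for U_t = D (U_xx + (2/x) U_x) + (rho - kappa) U, Eq. (2.3)
    U = U + dt * (D * (Uxx + (2.0 / x) * Ux) + (rho - kappa) * U)

print(U.sum() * dx)  # total tumor burden after 1 day of simulated growth
```

With a net positive growth rate \(\varrho -\kappa _{t}>0\), the total tumor burden increases over time while diffusion spreads the profile, which is the qualitative behavior the model is meant to capture.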

Assume \(\tau =2{\mathcal {D}} t\) and \(V(x,\tau )=xU(x,t)\), then

$$\begin{aligned} \partial \tau =2{\mathcal {D}}\partial t\Rightarrow \frac{\partial t}{\partial \tau }=\frac{1}{2{\mathcal {D}}}. \end{aligned}$$
(2.4)

From (2.4), we get

$$\begin{aligned} \frac{\partial V(x,\tau )}{\partial \tau }= & {} x \frac{\partial U(x,t)}{\partial \tau }\nonumber \\= & {} x \frac{\partial U(x,t)}{2{\mathcal {D}}\partial t}=\frac{x}{2{\mathcal {D}}}\frac{\partial U(x,t)}{\partial t}, \end{aligned}$$
(2.5)

and

$$\begin{aligned} \begin{array}{l} \frac{\partial V(x,\tau )}{\partial x} =x \frac{\partial U(x,t)}{\partial x}+U(x,t),\\ \frac{\partial ^{2} V(x,\tau )}{\partial x^{2}}=x \frac{\partial ^{2} U(x,t)}{\partial x^{2}}+2 \frac{\partial U(x,t)}{\partial x}. \end{array} \end{aligned}$$
(2.6)

In view of (2.5) and (2.6), we can write

$$\begin{aligned} \begin{array}{l} \frac{\partial U(x,t)}{\partial t}=\frac{2{\mathcal {D}}}{x} \frac{\partial V(x,\tau )}{\partial \tau },\\ \frac{\partial U(x,t)}{\partial x}=\frac{1}{x} \left( \frac{\partial V(x,\tau )}{\partial x}-U(x,t)\right) ,\\ \frac{\partial ^{2} U(x,t)}{\partial x^{2}}=\frac{1}{x} \left( \frac{\partial ^{2} V(x,\tau )}{\partial x^{2}}-2 \frac{\partial U(x,t)}{\partial x}\right) . \end{array} \end{aligned}$$
(2.7)

Using this, Eq. (2.2) becomes

$$\begin{aligned} \frac{\partial V(x,\tau )}{\partial \tau }=\frac{1}{2} \frac{\partial ^{2} V(x,\tau )}{\partial x^{2}}+\frac{\varrho -\kappa _{t}}{2{\mathcal {D}}}V(x,\tau ). \end{aligned}$$
(2.8)

Now, suppose \(W(x,\tau )=\frac{\varrho -\kappa _{t}}{2{\mathcal {D}}}V(x,\tau )\) and \(V(x,\tau _{0})=\upsilon (x)\) is the initial growth profile; then

$$\begin{aligned}&\frac{\partial V(x,\tau )}{\partial \tau }=\frac{1}{2} \frac{\partial ^{2} V(x,\tau )}{\partial x^{2}}+W(x,\tau ),\nonumber \\&\quad V(x,\tau _{0})=\upsilon (x). \end{aligned}$$
(2.9)
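Before moving on, the product-rule identities (2.6) underlying this reduction can be checked numerically; the sketch below is an illustration only, using an arbitrary smooth test function \(U(x,t)=\sin (x)e^{t}\) (an assumption chosen for the check) and central differences.

```python
import math

def U(x, t):            # arbitrary smooth test function (assumption for illustration)
    return math.sin(x) * math.exp(t)

def V(x, t):            # V = x * U, as in the substitution above
    return x * U(x, t)

h = 1e-4
x, t = 1.3, 0.7

# central-difference derivatives of V with respect to x
Vx = (V(x + h, t) - V(x - h, t)) / (2 * h)
Vxx = (V(x + h, t) - 2 * V(x, t) + V(x - h, t)) / h**2

# central-difference derivatives of U with respect to x
Ux = (U(x + h, t) - U(x - h, t)) / (2 * h)
Uxx = (U(x + h, t) - 2 * U(x, t) + U(x - h, t)) / h**2

print(abs(Vx - (x * Ux + U(x, t))))      # identity (2.6), first line, ~0
print(abs(Vxx - (x * Uxx + 2 * Ux)))     # identity (2.6), second line, ~0
```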

Memory properties have been broadly applied to many complex phenomena in applied sciences. Owing to their extra degrees of freedom, fractional derivatives may achieve better results than integer-order ones. Because of the intrinsic nonlocality of the operators, fractional differential equations are more helpful in explaining phenomena or processes with hereditary or memory properties in areas of biology, chemistry, economics, and physics; readers can refer to Lorenzo and Hartley (2000); Sun et al. (2011). Replacing the integer-order time derivative in (2.9) by a Caputo fractional derivative of order \(\theta \) yields the fractional order glioblastoma tumor (FGT) model

$$\begin{aligned} \begin{array}{l} ^{C}_{0}{D_{\tau }^{\theta }} V(x,\tau )=\frac{1}{2} \frac{\partial ^{2} V(x,\tau )}{\partial x^{2}}+W(x,\tau ),\\ V(x,\tau _{0})=\upsilon (x). \end{array} \end{aligned}$$
(2.10)

where \(^{C}_{0}{D_{\tau }^{\theta }}\) refers to the fractional derivative operator of order \(\theta \) in the Caputo sense with \(0<\theta \le 1\).

Definition 1

The Caputo fractional derivative of order \(\theta \in (m-1,m]\), \(m\in {\mathbb {N}}\), of \(V(x,\tau )\) with respect to \(\tau \) is defined as (Hassani et al. 2019, 2020)

$$\begin{aligned} ^{C}_{0}{D_{\tau }^{\theta }}V(x,\tau )= \left\{ \begin{array}{ll} \frac{1}{\Gamma \left( m-\theta \right) }\int _{0}^{\tau }\left( \tau -\xi \right) ^{m-\theta -1}\frac{\partial ^{m} V(x,\xi )}{\partial \xi ^{m}}d\xi ,&{} \theta \in (m-1,m), \\ \frac{\partial ^{m} V(x,\tau )}{\partial \tau ^{m}}, &{}\theta =m, \end{array} \right. \end{aligned}$$
(2.11)

where \(\Gamma (\cdot )\) implies the gamma function.

Corollary 1

Definition 1, for \(k\in {\mathbb {N}}\cup \{0\}\), results in

$$\begin{aligned} ^{C}_{0}{D_{\tau }^{\theta }}\tau ^{k}= \left\{ \begin{array}{ll} \frac{\Gamma (k+1)}{\Gamma (k-\theta +1)}\,\tau ^{k-\theta }, &{} k\ge m, \\ 0, &{} k< m, \end{array} \right. \end{aligned}$$
(2.12)

where \(\theta \in (m-1,m]\).
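Corollary 1 can be checked numerically. The sketch below (an illustration with assumed values \(\theta =0.5\), \(k=3\), so \(m=1\)) evaluates the Caputo integral of Definition 1 after the substitution \(w=(\tau -\xi )^{1-\theta }\), which removes the integrand's endpoint singularity, and compares the result with the closed form (2.12).

```python
import math

theta, k, tau = 0.5, 3, 1.0   # assumed illustrative values, with m = 1

# Closed form from Corollary 1: Gamma(k+1)/Gamma(k-theta+1) * tau^(k-theta)
closed = math.gamma(k + 1) / math.gamma(k - theta + 1) * tau ** (k - theta)

# Caputo integral: (1/Gamma(1-theta)) * int_0^tau (tau-xi)^(-theta) * k xi^(k-1) dxi.
# Substituting w = (tau - xi)^(1-theta) gives the smooth integrand
# (k/(1-theta)) * int_0^{tau^(1-theta)} (tau - w^(1/(1-theta)))^(k-1) dw.
n = 200_000
b = tau ** (1 - theta)
h = b / n
total = 0.5 * (tau ** (k - 1) + (tau - b ** (1 / (1 - theta))) ** (k - 1))
for i in range(1, n):                     # composite trapezoidal rule
    w = i * h
    total += (tau - w ** (1 / (1 - theta))) ** (k - 1)
numeric = k / (1 - theta) * total * h / math.gamma(1 - theta)

print(closed, numeric)  # the two values agree to high accuracy
```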

Definition 2

The two-parameter Mittag-Leffler function \({\textbf {\text {E}}}_{\alpha ,\zeta }(z)\) is defined as (Hassani et al. 2019, 2020)

$$\begin{aligned} {\textbf {\text {E}}}_{\alpha ,\zeta }(z)=\sum _{j=0}^{\infty }\frac{z^{j}}{\Gamma {\left( j\alpha +\zeta \right) }}, \end{aligned}$$

where \(\alpha \) and \(\zeta \) are positive constants.
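Definition 2 lends itself to a direct truncated-series implementation; a minimal sketch follows, with an assumed truncation level. For \(\alpha =\zeta =1\) the series reduces to the exponential function, which gives a convenient sanity check.

```python
import math

def mittag_leffler(alpha, zeta, z, terms=100):
    """Truncated two-parameter Mittag-Leffler series E_{alpha,zeta}(z)."""
    return sum(z ** j / math.gamma(j * alpha + zeta) for j in range(terms))

# E_{1,1}(z) = exp(z), a standard identity used here as a check
print(mittag_leffler(1.0, 1.0, 1.0))   # close to e = 2.71828...
```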

3 Required Tools

In this section, we first introduce GLPs and operational matrices to solve FGT, then provide function approximation and convergence analysis.

3.1 Description of the GLPs

In this subsection, the main concepts of the GLPs are introduced for approximating a given function.

Definition 3

(see Aizenshtadt et al. 1966 and references therein) The Laguerre polynomials (LPs), \({\mathcal {L}}_n(\tau )\), are solutions of the second-order linear differential equation \(\tau y^{\prime \prime }+(1-\tau )y^{\prime }+ny = 0,~ n \in {\mathbb {N}}\).

Definition 4

(see Aizenshtadt et al. 1966 and references therein) The power series representation of the LPs, \({\mathcal {L}}_n(\tau )\), is given by

$$\begin{aligned} {\mathcal {L}}_{n}(\tau )=\sum _{k=0}^{n}\frac{( -1)^{k}}{k!}\frac{(n)!}{(k!)(n-k)!}\tau ^{k}. \end{aligned}$$
(3.1)

The first LPs are given by:

$$\begin{aligned} \begin{array}{ll} {\mathcal {L}}_{0}(\tau )=1,\\ {\mathcal {L}}_{1}(\tau )=-\tau +1,\\ {\mathcal {L}}_{2}(\tau )=\frac{1}{2}(\tau ^2-4\tau +2),\\ {\mathcal {L}}_{3}(\tau )=\frac{1}{6}(-\tau ^3+9\tau ^2-18\tau +6). \end{array} \end{aligned}$$

A given function \(u(\tau )\) can be approximated by the first \(n+1\) LPs as

$$\begin{aligned} u(\tau )\simeq {P}^{T}~Q~\Psi _{n}(\tau ), \end{aligned}$$
(3.2)

where

$$\begin{aligned} Q= \begin{pmatrix} q_{00} &{} q_{01}&{}\cdots &{} q_{0n}\\ q_{10} &{} q_{11}&{}\cdots &{} q_{1n}\\ \vdots &{} \vdots &{}\ddots &{} \vdots \\ q_{n0} &{} q_{n1}&{}\cdots &{} q_{nn}\\ \end{pmatrix}, {P}^{T}=[p_{0}~~p_{1}~~\ldots ~~p_{n}], \Psi _{n}(\tau )=[1\,\,\,\tau \,\,\,\tau ^{2}\,\,\,\ldots \,\,\,\tau ^{n}]^{T}, \end{aligned}$$
(3.3)

and

$$\begin{aligned} q_{ij}= \left\{ \begin{array}{ll} \frac{(-1)^{j}}{j!}\frac{(i)!}{(j!)(i-j)!}, &{} i\ge j, \\ 0, &{} i<j. \end{array} \right. \end{aligned}$$
(3.4)
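The coefficient matrix Q of Eq. (3.4) is straightforward to build programmatically; the sketch below constructs it for an assumed size n = 3 and checks its rows against the first LPs listed above.

```python
from math import factorial

n = 3
# q_ij = (-1)^j / j! * i! / (j! (i-j)!)  for i >= j, else 0  (Eq. (3.4))
Q = [[((-1) ** j / factorial(j)) * factorial(i) / (factorial(j) * factorial(i - j))
      if i >= j else 0.0
      for j in range(n + 1)]
     for i in range(n + 1)]

def laguerre(i, tau):
    """Evaluate L_i(tau) as the dot product of row i of Q with (1, tau, tau^2, ...),
    mirroring u = P^T Q Psi_n(tau) of Eq. (3.2)."""
    return sum(Q[i][j] * tau ** j for j in range(n + 1))

print(laguerre(2, 1.0), 0.5 * (1.0 - 4.0 + 2.0))  # both give L_2(1) = -0.5
```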

Definition 5

The GLPs, \({\mathscr {L}}_{m}(\tau )\), are formed by a change of variable: \(\tau ^{i}\) in the LPs is replaced by \(\tau ^{i+\beta _{i}}\), \((i+\beta _{i} > 0)\). They are defined as

$$\begin{aligned} {\mathscr {L}}_{m}(\tau )=\sum _{k=0}^{m}\frac{(-1)^{k}}{k!} \frac{(m)!}{(k!)(m-k)!}\tau ^{k+\beta _{k}}, \end{aligned}$$
(3.5)

where the \(\beta _{k}\) are control parameters. If \(\beta _{k}=0\) for all k, the GLPs coincide with the classical LPs.

The expansion of a function \(v(\tau )\) in terms of GLPs can be written in matrix form as

$$\begin{aligned} v(\tau )={R}^{T}~{S}~\Phi _{m}(\tau ), \end{aligned}$$
(3.6)

where

$$\begin{aligned} S= \begin{pmatrix} s_{0,0} &{} s_{0,1}&{}s_{0,2}&{}\cdots &{}s_{0,m}\\ s_{1,0} &{} s_{1,1}&{}s_{1,2}&{}\cdots &{}s_{1,m} \\ s_{2,0} &{} s_{2,1}&{}s_{2,2}&{}\cdots &{}s_{2,m} \\ \vdots &{} \vdots &{}\vdots &{} \cdots &{}\vdots \\ s_{m,0} &{} s_{m,1}&{}s_{m,2}&{}\cdots &{} s_{m,m} \\ \end{pmatrix}, R^{T}=[r_{0}~~r_{1}~~\ldots ~~r_{m}], \Phi _{m}(t)=[1\,\,\,\tau ^{1+\beta _{1}}\,\,\,\tau ^{2 +\beta _{2}}\,\,\,\ldots \,\,\,\tau ^{m+\beta _{m}}]^{T}, \end{aligned}$$
(3.7)

and

$$\begin{aligned} s_{ij}= \left\{ \begin{array}{ll} \frac{(-1)^{j}}{j!}\frac{(i)!}{(j!)(i-j)!}, &{} i\ge j, \\ 0, &{} i<j, \end{array} \right. \end{aligned}$$
(3.8)

where \(\beta _{k}\), \(k=1,2,\ldots ,m\), are control parameters.

A given function \(V(x,\tau )\) can be expanded by means of GLPs in the following matrix form:

$$\begin{aligned} V(x,\tau )\simeq \Phi _{m_{1}}(x)^{T}\,{\mathscr {A}}\,\Psi _{m_{2}}(\tau ), \end{aligned}$$
(3.9)

where \({\mathscr {A}}=[a_{ij}]\) is the \((m_{1}+1)\times (m_{2}+1)\) unknown matrix of free coefficients that must be computed. The vectors \(\Phi _{m_{1}}(x)\) and \(\Psi _{m_{2}}(\tau )\) are defined as:

$$\begin{aligned} \Phi _{m_{1}}(x)={\mathscr {C}}\,{\mathscr {F}}_{m_{1}}(x), \Psi _{m_{2}}(\tau )={\mathscr {D}}\,{\mathscr {G}}_{m_{2}}(\tau ), \end{aligned}$$
(3.10)

where

$$\begin{aligned}&{\mathscr {F}}_{m_{1}}(x)= [f_{0}(x)\,\,\,f_{1}(x)\,\ldots \,f_{m_{1}}(x)]^{T},\nonumber \\&{\mathscr {G}}_{m_{2}}(\tau ) = [g_{0}(\tau )\,\,\,g_{1}(\tau )\,\ldots \,g_{m_{2}}(\tau )]^{T}, \end{aligned}$$
(3.11)
$$\begin{aligned}&{\mathscr {A}}= \begin{pmatrix} a_{0,0}&{} a_{0,1}&{}\cdots &{} a_{0,m_{2}}\\ a_{1,0}&{} a_{1,1}&{}\cdots &{} a_{1,m_{2}} \\ \vdots &{} \vdots &{}\cdots &{} \vdots \\ a_{m_{1},0} &{} a_{m_{1},1}&{}\cdots &{} a_{m_{1},m_{2}} \\ \end{pmatrix}, \end{aligned}$$
(3.12)
$$\begin{aligned}&{\mathscr {C}}= \begin{pmatrix} 1 &{} 0&{}0&{}\cdots &{} 0\\ 0 &{} 1&{}0&{}\cdots &{} 0\\ c_{2,0} &{} c_{2,1}&{}c_{2,2}&{}\cdots &{}c_{2,m_{1}} \\ \vdots &{} \vdots &{}\vdots &{} \cdots &{}\vdots \\ c_{m_{1},0} &{} c_{m_{1},1}&{}c_{m_{1},2}&{}\cdots &{} c_{m_{1},m_{1}} \\ \end{pmatrix},\nonumber \\&{\mathscr {D}}= \begin{pmatrix} 1 &{} 0&{}0&{}\cdots &{} 0\\ d_{1,0} &{} d_{1,1}&{}d_{1,2}&{}\cdots &{}d_{1,m_{2}} \\ d_{2,0} &{} d_{2,1}&{}d_{2,2}&{}\cdots &{}d_{2,m_{2}} \\ \vdots &{} \vdots &{}\vdots &{} \cdots &{}\vdots \\ d_{m_{2},0} &{} d_{m_{2},1}&{}d_{m_{2},2}&{}\cdots &{} d_{m_{2},m_{2}} \\ \end{pmatrix}, \end{aligned}$$
(3.13)
$$\begin{aligned}&c_{ij}= \left\{ \begin{array}{ll} \frac{(-1)^{j}}{j!}\frac{(i)!}{(j!)(i-j)!}, &{} i\ge j, \\ 0, &{} i<j, \end{array} \right. \,\,\,\,\,\,\,\,\,\,\, i=2,3,\ldots , m_{1},j=0,1,\ldots , m_{1}, \end{aligned}$$
(3.14)
$$\begin{aligned}&d_{ij}= \left\{ \begin{array}{ll} \frac{(-1)^{j}}{j!}\frac{(i)!}{(j!)(i-j)!}, &{} i\ge j, \\ 0, &{} i<j, \end{array} \right. \,\,\,\,\,\,\,\,\,\,\, i=1,2,\ldots , m_{2},j=0,1,\ldots , m_{2}, \end{aligned}$$
(3.15)
$$\begin{aligned}&f_{i}(x)=\left\{ \begin{array}{lll} x^{i},&{}&{}i=0,1, \\ x^{i+k_{i}},&{}&{}i=2,3,\ldots ,m_{1}, \end{array} \right. \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, g_{j}(\tau )=\left\{ \begin{array}{lll} 1,&{}&{}j=0,\\ \tau ^{j+s_{j}},&{}&{}j=1,2,\ldots ,m_{2}, \end{array} \right. \end{aligned}$$
(3.16)

where \(k_{i}\) and \(s_{j}\) are control parameters.
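A small sketch of the basis construction in Eqs. (3.10)–(3.16), with assumed size m1 = 2: it builds the entries of \({\mathscr {C}}\) and \({\mathscr {F}}_{m_{1}}(x)\) and checks that with the control parameter \(k_{2}=0\) the third entry of \(\Phi _{m_{1}}(x)\) reduces to the classical LP \({\mathcal {L}}_{2}(x)=\frac{1}{2}(x^2-4x+2)\).

```python
from math import factorial

m1 = 2
k = {2: 0.0}          # control parameters k_i (assumed zero -> classical LPs)

def c(i, j):
    """Entries of C: rows 0 and 1 are unit rows (Eq. (3.13)); below, Eq. (3.14)."""
    if i < 2:
        return 1.0 if i == j else 0.0
    if i < j:
        return 0.0
    return ((-1) ** j / factorial(j)) * factorial(i) / (factorial(j) * factorial(i - j))

def F(x):
    """F_{m1}(x) of Eq. (3.16): x^i for i = 0, 1 and x^(i + k_i) afterwards."""
    return [x ** i if i < 2 else x ** (i + k[i]) for i in range(m1 + 1)]

def Phi(x):
    """Phi_{m1}(x) = C F_{m1}(x), Eq. (3.10)."""
    f = F(x)
    return [sum(c(i, j) * f[j] for j in range(m1 + 1)) for i in range(m1 + 1)]

x = 0.7
print(Phi(x)[2], 0.5 * (x * x - 4 * x + 2))  # equal when k_2 = 0
```

Nonzero \(k_{i}\) simply shift the exponents of \(x\) in F, which is exactly the extra freedom the GLP basis provides.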

3.2 Operational Matrix

The fractional derivative of order \(0<\theta \le 1\) of \({\mathscr {G}}_{m_{2}}(\tau )\) can be written as

$$\begin{aligned} ^C_0{D_{\tau }^{\theta }}{\mathscr {G}}_{m_{2}}(\tau ) ={\mathcal {D}}^{\left( \theta \right) }_{\tau }{\mathscr {G}}_{m_{2}}(\tau ), \end{aligned}$$
(3.17)

where \({\mathcal {D}}^{\left( \theta \right) }_{\tau }\) denotes the \((m_{2}+1)\times (m_{2}+1)\) operational matrix of fractional derivative, defined by:

$$\begin{aligned} {\mathcal {D}}^{\left( \theta \right) }_{\tau }=\tau ^{-\theta } \begin{pmatrix} 0 &{} 0&{}0&{}0&{}\cdots &{} 0\\ 0 &{} \frac{\Gamma \left( 2+s_{1}\right) }{\Gamma \left( 2-\theta +s_{1}\right) }&{}0&{}0&{}\cdots &{} 0 \\ 0 &{} 0&{}\frac{\Gamma \left( 3+s_{2}\right) }{\Gamma \left( 3-\theta +s_{2}\right) }&{}0&{}\cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{}\vdots &{}\ddots &{} \vdots \\ 0 &{} 0&{}0&{}0&{}\cdots &{} \frac{\Gamma \left( m_{2}+1+s_{m_{2}}\right) }{\Gamma \left( m_{2}+1-\theta +s_{m_{2}}\right) } \\ \end{pmatrix}, \end{aligned}$$
(3.18)

The second-order derivative of \({\mathscr {F}}_{m_{1}}(x)\) is given by:

$$\begin{aligned} \frac{d^{2}{\mathscr {F}}_{m_{1}}(x)}{dx^{2}}={\mathcal {D}}^{(2)}_{x}\,{\mathscr {F}}_{m_{1}}(x), \end{aligned}$$
(3.19)

where \({\mathcal {D}}^{(2)}_{x}\) denotes \((m_{1}+1)\times (m_{1}+1)\) operational matrix of derivative:

$$\begin{aligned} {\mathcal {D}}^{(2)}_{x}= \begin{pmatrix} 0&{}0&{}0&{}\cdots &{}0\\ 0&{}0&{}0&{}\cdots &{}0 \\ 0&{}0&{}\frac{(2+k_{2})(1+k_{2})}{x^{2}}&{}\cdots &{}0 \\ \vdots &{}\vdots &{}\vdots &{}\ddots &{}\vdots \\ 0&{}0&{}0&{}\cdots &{}\frac{(m_{1}+k_{m_{1}})(m_{1}-1+k_{m_{1}})}{x^{2}} \\ \end{pmatrix}, \end{aligned}$$
(3.20)

where \(k_{i}\), \((i=2,3,\ldots ,m_{1})\), and \(s_{j}\), \((j=1,2,\ldots ,m_{2})\), are control parameters, \(\Gamma (\cdot )\) is the gamma function, \(m_{1}\) and \(m_{2}\) are the numbers of basis functions, and \(\theta \) is the fractional order.
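The operational matrices (3.18) and (3.20) can be verified entry-wise against direct differentiation. The sketch below (with assumed values \(\theta =0.5\), \(s_{1}=0.3\), \(k_{2}=0.2\)) checks one representative basis function for each matrix; the fractional-power case uses the standard Caputo power rule, the non-integer analogue of Corollary 1.

```python
import math

theta, s1, tau = 0.5, 0.3, 0.8   # assumed illustrative values

# Row 1 of D_tau^(theta) (Eq. (3.18)) applied to g_1(tau) = tau^(1+s_1):
matrix_entry = tau ** (-theta) * math.gamma(2 + s1) / math.gamma(2 - theta + s1)
via_matrix = matrix_entry * tau ** (1 + s1)

# Caputo power rule for tau^(1+s_1) (fractional-power analogue of Corollary 1):
direct = math.gamma(2 + s1) / math.gamma(2 - theta + s1) * tau ** (1 + s1 - theta)
print(abs(via_matrix - direct))          # ~0

k2, x = 0.2, 0.6
# Row 2 of D_x^(2) (Eq. (3.20)) applied to f_2(x) = x^(2+k_2):
via_matrix2 = (2 + k2) * (1 + k2) / x ** 2 * x ** (2 + k2)
direct2 = (2 + k2) * (1 + k2) * x ** k2  # d^2/dx^2 of x^(2+k_2)
print(abs(via_matrix2 - direct2))        # ~0
```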

3.3 Function Approximation

Let \({\mathbb {X}}=L^{2}([0,1]\times [0,1])\) and \({\mathbb {Y}}=\left\langle x^{i+k_{i}}\tau ^{j+s_{j}};\,\ 0\le i\le m_{1},\,\ 0\le j\le m_2\right\rangle \). Then \({\mathbb {Y}}\) is a finite-dimensional subspace of \({\mathbb {X}}\) \(\left( \dim {\mathbb {Y}} \le (m_1+1)(m_2+1)<\infty \right) \), so each \({\tilde{V}}={\tilde{V}}(x,\tau )\in {\mathbb {X}}\) has a unique best approximation \(V_0=V_0(x,\tau )\in {\mathbb {Y}}\) satisfying:

$$\begin{aligned} \forall ~{\hat{V}}\in {\mathbb {Y}},~~~\parallel {\tilde{V}}-V_{0}\parallel _2\le \parallel {\tilde{V}}-{\hat{V}}\parallel _2. \end{aligned}$$

For details, see Theorem 6.1-1 of Kreyszig (1987). Since \(V_0\in {\mathbb {Y}}\) and \({\mathbb {Y}}\) is a finite-dimensional subspace of \({\mathbb {X}}\), there exist unique coefficients \(a_{ij} \in {\mathbb {R}}\); by an elementary argument from linear algebra, \(V_0(x,\tau )\) can be expanded in terms of the basis polynomials as

$$\begin{aligned} V_0(x,\tau )\simeq \Phi _{m_{1}}(x)^{T}\,{\mathscr {A}}\,\Psi _{m_{2}}(\tau ), \end{aligned}$$

where \(\Phi _{m_{1}}(x)^{T}\) and \(\Psi _{m_{2}}(\tau )\) are defined in Eq. (3.10).

3.4 Convergence Analysis

Theorem 1

Suppose \({\tilde{V}}:Q\rightarrow {\mathbb {R}}\), with \(Q=[0,1]\times [0,1]\), is \((m_1+m_2+1)\) times continuously differentiable and, for \(i=0,1,\ldots ,m_1+m_2+1\), \(\left| \frac{\partial ^{m_1+m_2+1}{\tilde{V}}(x,\tau )}{\partial x^{m_1+m_2+1-i}\,\partial \tau ^{i}}\right| \le M_2\). Let \(Y=\left\langle x^{i+k_i}{\tau }^{j+s_j}:0\le i\le m_1,0\le j\le m_2,k_i,s_j\ge 0\right\rangle \), a finite-dimensional linear subspace of \(L^2(Q)\). If \(\Phi _{m_1}(x){\mathscr {A}}\Psi _{m_2}(\tau )\) is the unique best approximation of \({\tilde{V}}\) out of Y, where \(\Phi _{m_1}(x)\) and \(\Psi _{m_2}(\tau )\) are given in (3.9) and \({\mathscr {A}}=[a_{i,j}]\), \(i=0,1,\ldots ,m_1\), \(j=0,1,\ldots ,m_2\), is the coefficient matrix, then the following holds:

$$\begin{aligned} \Vert {\tilde{V}}(x,\tau )-\Phi _{m_1}(x){\mathscr {A}}\Psi _{m_2}(\tau )\Vert _2\le \frac{\Gamma (k+1)M_2\sqrt{M_3(m_1+m_2+2)}}{l!(m_1+m_2+1-l)!}, \end{aligned}$$
(3.21)

where \(M_3=\max \left\{ \frac{\Gamma (2i+k+1)\Gamma (2m_1+2m_2+3-2i+s)}{\Gamma (k+s+2i+2)\Gamma (k+s+2m_1+2m_2+4-2i)}:i=1,2,\ldots ,m_1+m_2+1\right\} \).

Proof

Given Maclaurin’s expression for \({\tilde{V}}(x,\tau )\)

$$\begin{aligned} {\tilde{V}}(x,\tau )=p(x,\tau )+\frac{1}{(m_1+m_2+1)!} \left( x\frac{\partial }{\partial x}+\tau \frac{\partial }{\partial \tau } \right) ^{m_1+m_2+1}{\tilde{V}}(\xi _0x,\xi _0\tau ),~~\xi _0\in (0,1), \end{aligned}$$
(3.22)

where \(p(x,\tau )=\sum ^{{m_1+m_2}}_{r=0}\frac{1}{r!} \left( x\frac{\partial }{\partial x}+\tau \frac{\partial }{\partial \tau } \right) ^{r}{\tilde{V}}(0,0)\). This implies that

$$\begin{aligned} |{\tilde{V}}(x,\tau )-p(x,\tau )| =\left| \frac{1}{(m_1+m_2+1)!}\left( x\frac{\partial }{\partial x} +\tau \frac{\partial }{\partial \tau }\right) ^{m_1+m_2+1}{\tilde{V}} (\xi _0x,\xi _0\tau )\right| ,~~\xi _0\in (0,1). \end{aligned}$$
(3.23)

On the other hand, since \(\Phi _{m_1}(x){\mathscr {A}}\Psi _{m_2}(\tau )\) is the best approximation of \({\tilde{V}}(x,\tau )\) we obtain

$$\begin{aligned} \Vert {\tilde{V}}(x,\tau )-\Phi _{m_1}(x){\mathscr {A}}\Psi _{m_2}(\tau )\Vert _2\le \Vert {\tilde{V}}(x,\tau )-p(x,\tau )\Vert _2. \end{aligned}$$

Now, in view of definition of \(L^2\)-norm, we get

$$\begin{aligned}&\Vert {\tilde{V}}(x,\tau )-\Phi _{m_1}(x){\mathscr {A}}\Psi _{m_2}(\tau )\Vert ^2_2\\&=\int _{0}^1\int _{0}^1\left[ \frac{1}{(m_1+m_2+1)!}\left( x\frac{\partial }{\partial x}+\tau \frac{\partial }{\partial \tau }\right) ^{m_1+m_2+1}{\tilde{V}}(\xi _0x,\xi _0\tau )\right] ^2dxd\tau \\&=\int _{0}^1\int _{0}^1\left[ \frac{1}{(m_1+m_2+1)!}\sum _{i=0}^{m_1+m_2+1}\left( \begin{array}{c} m_1+m_2+1\\ i \\ \end{array} \right) x^{m_1+m_2+1-i}{\tau }^i\frac{{\partial }^{m_1+m_2+1}{\tilde{V}}(\xi _0x,\xi _0\tau )}{{\partial }x^{m_1+m_2+1-i}{\partial \tau }^i}\right] ^2dxd\tau \\&\le \frac{M_2^2}{\left( (m_1+m_2+1)!\right) ^2}\int _{0}^1\int _{0}^1\left[ \sum _{i=0}^{m_1+m_2+1}\left( \begin{array}{c} m_1+m_2+1\\ l \\ \end{array} \right) x^{m_1+m_2+1-i}{\tau }^i\right] ^2dxd\tau , \end{aligned}$$

where \(\left( \begin{array}{c} m_1+m_2+1\\ l \\ \end{array} \right) =\max \left\{ \left( \begin{array}{c} m_1+m_2+1\\ i \\ \end{array} \right) :i=0,1,\ldots ,m_1+m_2+1\right\} \). This implies that

$$\begin{aligned}&\Vert {\tilde{V}}(x,\tau )-\Phi _{m_1}(x){\mathscr {A}}\Psi _{m_2}(\tau )\Vert ^2_2\\&\le \frac{M_2^2}{{l!}^2\left( (m_1+m_2+1-l)!\right) ^2}\int _{0}^1\int _{0}^1\left[ \sum _{i=0}^{m_1+m_2+1}x^{m_1+m_2+1-i}{\tau }^i\right] ^2dxd\tau \\&\le \frac{M_2^2}{{l!}^2\left( (m_1+m_2+1-l)!\right) ^2}\sum _{i=0}^{m_1+m_2+1}\frac{\Gamma (2i+k+1)\Gamma (2m_1+2m_2+3-2i+s)}{\Gamma (k+s+2i+2)\Gamma (k+s+2m_1+2m_2+4-2i)}\\&\le \frac{\Gamma (k+1)^2M_3(m_1+m_2+2)M_2^2}{{l!}^2{(m_1+m_2+1-l)!}^2}, \end{aligned}$$

which is the desired result. \(\square \)

4 Solution Procedure

In this section, based on the GLPs, an optimization method is presented to solve the problem (2.10). The dependent variable \(V(x,\tau )\) can be expanded in terms of GLPs as

$$\begin{aligned} V(x,\tau )\simeq \Phi _{m_{1}}(x)^{T}\,{\mathscr {A}}\,\Psi _{m_{2}}(\tau )=\left( {\mathscr {C}}\,{\mathscr {F}}_{m_{1}}(x)\right) ^{T}\,{\mathscr {A}}\, \left( {\mathscr {D}}\,{\mathscr {G}}_{m_{2}}(\tau )\right) , \end{aligned}$$
(4.1)

where \({\mathscr {A}}=[a_{ij}]\) is the undetermined matrix, and \({\mathscr {F}}_{m_{1}}(x)\) and \({\mathscr {G}}_{m_{2}}(\tau )\) are as in Eq. (3.11). From Eqs. (3.17) and (3.19), we have

$$\begin{aligned}&^C_0{D_{\tau }^{\theta }}V(x,\tau )\simeq \left( {\mathscr {C}}\,{\mathscr {F}}_{m_{1}}(x)\right) ^{T}\, {\mathscr {A}}\,\left( {\mathscr {D}}\,{\mathcal {D}}^{\left( \theta \right) }_{\tau }{\mathscr {G}}_{m_{2}}(\tau )\right) ,\nonumber \\&V_{xx}(x,\tau )\simeq \left( {\mathscr {C}}\, {\mathcal {D}}^{(2)}_{x}\,{\mathscr {F}}_{m_{1}}(x)\right) ^{T}\, {\mathscr {A}}\,\left( {\mathscr {D}}\,{\mathscr {G}}_{m_{2}}(\tau )\right) . \end{aligned}$$
(4.2)
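The operational matrix \({\mathcal {D}}^{\left( \theta \right) }_{\tau }\) in Eq. (4.2) ultimately rests on the Caputo power rule, \(^C_0D^{\theta }_{\tau }\,\tau ^{p}=\frac{\Gamma (p+1)}{\Gamma (p+1-\theta )}\,\tau ^{p-\theta }\) for \(p>0\), while constants are annihilated. A minimal numerical sketch of this rule (the function name is our own, not from the paper):

```python
from math import gamma

def caputo_power(p: float, theta: float, tau: float) -> float:
    """Caputo fractional derivative of tau**p of order theta (0 < theta <= 1).

    Classical power rule: D^theta tau^p = Gamma(p+1)/Gamma(p+1-theta) * tau^(p-theta)
    for p > 0; constants (p == 0) are annihilated.
    """
    if p == 0:
        return 0.0
    return gamma(p + 1.0) / gamma(p + 1.0 - theta) * tau ** (p - theta)

# For theta = 1 the rule reduces to the ordinary derivative p * tau**(p-1):
print(caputo_power(2.0, 1.0, 0.5))   # -> 1.0, the derivative of tau^2 at tau = 0.5
```

Applying this rule entrywise to the fractional powers \(\tau ^{1+s_{i}}\) in \({\mathscr {G}}_{m_{2}}(\tau )\) produces the entries of \({\mathcal {D}}^{\left( \theta \right) }_{\tau }{\mathscr {G}}_{m_{2}}(\tau )\).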

Substituting Eq. (4.1) into the initial condition yields

$$\begin{aligned}&\Lambda (x)\triangleq \left( {\mathscr {C}}\,{\mathscr {F}}_{m_{1}}(x)\right) ^{T}\, {\mathscr {A}}\,\left( {\mathscr {D}}\,{\mathscr {G}}_{m_{2}} (\tau _{0})\right) . \end{aligned}$$
(4.3)

Substituting Eqs. (4.1) and (4.2) into Eq. (2.10), we get

$$\begin{aligned} {\mathcal {R}}(x,\tau ,{\mathscr {A}}, {\mathcal {K}},{\mathcal {S}})\triangleq \left( {\mathscr {C}}\,{\mathscr {F}}_{m_{1}}(x)\right) ^{T}\, {\mathscr {A}}\,\left( {\mathscr {D}}\,{\mathcal {D}}^{\left( \theta \right) }_ {\tau }{\mathscr {G}}_{m_{2}}(\tau )\right) -\frac{1}{2} \left( {\mathscr {C}}\,{\mathcal {D}}^{(2)}_{x}\,{\mathscr {F}}_{m_{1}}(x) \right) ^{T}\,{\mathscr {A}}\,\left( {\mathscr {D}}\, {\mathscr {G}}_{m_{2}}(\tau )\right) -W(x,\tau ). \end{aligned}$$
(4.4)

Here \({\mathscr {A}}\) is the matrix of unknown free coefficients, and \({\mathcal {K}}\) and \({\mathcal {S}}\) are the unknown control vectors associated with \({\mathscr {F}}_{m_{1}}(x)\) and \({\mathscr {G}}_{m_{2}}(\tau )\), respectively, defined as

$$\begin{aligned} {\mathcal {K}}=\left[ k_{2}\,\ k_{3}\,\ldots \,k_{m_{1}}\right] ,~~~{\mathcal {S}}=\left[ s_{1}\,\ s_{2}\,\ldots \,s_{m_{2}}\right] . \end{aligned}$$
(4.5)

The squared two-norm of the residual function is given by

$$\begin{aligned} {\mathcal {M}}({\mathscr {A}},{\mathcal {K}},{\mathcal {S}})=\int _{0}^{l_{2}}\int _{0}^{l_{1}}{\mathcal {R}}^{2}(x,\tau ,{\mathscr {A}}, {\mathcal {K}},{\mathcal {S}})dxd\tau . \end{aligned}$$
(4.6)
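In practice the double integral in Eq. (4.6) is evaluated numerically. A minimal sketch using a tensor-product Gauss–Legendre rule (the helper name is ours, not from the paper):

```python
import numpy as np

def tensor_gauss_quad(f, l1: float, l2: float, n: int = 20) -> float:
    """Approximate the double integral of f(x, tau) over [0, l1] x [0, l2]
    with an n x n tensor-product Gauss-Legendre rule."""
    nodes, weights = np.polynomial.legendre.leggauss(n)   # rule on [-1, 1]
    x = 0.5 * l1 * (nodes + 1.0)                          # map nodes to [0, l1]
    t = 0.5 * l2 * (nodes + 1.0)                          # map nodes to [0, l2]
    wx = 0.5 * l1 * weights
    wt = 0.5 * l2 * weights
    X, T = np.meshgrid(x, t, indexing="ij")
    return float(wx @ f(X, T) @ wt)

# M(A, K, S) would then be tensor_gauss_quad(lambda x, t: R(x, t)**2, l1, l2);
# here the rule is checked on a smooth integrand whose exact integral is 1/9:
print(tensor_gauss_quad(lambda x, t: x**2 * t**2, 1.0, 1.0))
```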

To find the optimal solution, the control parameters \({\mathcal {K}}\) and \({\mathcal {S}}\) and the undetermined matrix \({\mathscr {A}}\) must be computed. The optimization problem is therefore formulated as

$$\begin{aligned} \min \,{\mathcal {M}}({\mathscr {A}},{\mathcal {K}},{\mathcal {S}}), \end{aligned}$$
(4.7)

subject to

$$\begin{aligned} \Lambda \left( \frac{i}{m_{1}}\right) =0,\quad i=0,1,\ldots ,m_{1}. \end{aligned}$$
(4.8)

To solve this constrained minimization problem, the method of Lagrange multipliers is used. Define

$$\begin{aligned} {\mathcal {J}}^{*}[{\mathscr {A}},{\mathcal {K}},{\mathcal {S}};\lambda ]={\mathcal {M}}({\mathscr {A}},{\mathcal {K}},{\mathcal {S}})+\lambda \Lambda , \end{aligned}$$
(4.9)

where the vector \(\lambda \) contains the unknown Lagrange multipliers and \(\Lambda \) denotes the known column vector whose entries are the equality constraints of Eq. (4.8). The necessary conditions for a local extremum form the following nonlinear system of algebraic equations:

$$\begin{aligned} \frac{\partial {\mathcal {J}}^{*}}{\partial {\mathscr {A}}}=0,\quad \frac{\partial {\mathcal {J}}^{*}}{\partial {\mathcal {K}}}=0,\quad \frac{\partial {\mathcal {J}}^{*}}{\partial {\mathcal {S}}}=0,\quad \frac{\partial {\mathcal {J}}^{*}}{\partial \lambda }=0. \end{aligned}$$
(4.10)

This nonlinear system of algebraic equations can be solved using software packages such as Maple or MATLAB. The approximate solution of the problem is then recovered from Eq. (4.1) using the computed control parameters and free coefficients. A brief description of the algorithm follows.

figure a
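The overall procedure (form the residual, impose the collocated initial conditions (4.8), minimize (4.9)) can be sketched in Python. This is an illustrative reimplementation under names of our own choosing (`solve_glp`, `residual`, `initial_gap`), not the authors' code; it replaces the explicit Lagrange-multiplier system (4.10) with scipy's SLSQP solver, which enforces the equality constraints internally.

```python
import numpy as np
from scipy.optimize import minimize

def solve_glp(residual, initial_gap, n_coeff, n_ctrl,
              l1=1.0, l2=1.0, n_quad=16, m1=3):
    """Sketch of the GLPs optimization loop.

    residual(X, T, p)  -- residual R(x, tau; p) on a grid, cf. Eq. (4.4)
    initial_gap(x, p)  -- initial-condition defect Lambda(x), cf. Eq. (4.3)
    p stacks the entries of A with the control parameters K and S.
    """
    nodes, w = np.polynomial.legendre.leggauss(n_quad)
    xs, ws = 0.5 * l1 * (nodes + 1.0), 0.5 * l1 * w
    ts, wt = 0.5 * l2 * (nodes + 1.0), 0.5 * l2 * w
    X, T = np.meshgrid(xs, ts, indexing="ij")

    def objective(p):                       # quadrature discretization of Eq. (4.6)
        return float(ws @ residual(X, T, p) ** 2 @ wt)

    cons = [{"type": "eq", "fun": (lambda p, xi=i / m1: initial_gap(xi, p))}
            for i in range(m1 + 1)]         # collocated constraints, Eq. (4.8)
    return minimize(objective, np.zeros(n_coeff + n_ctrl),
                    method="SLSQP", constraints=cons)
```

For a concrete run, `residual` and `initial_gap` would be assembled from the basis expansions and operational matrices of Eqs. (4.2)–(4.4).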

5 Numerical Experiments

Now, the proposed scheme is applied to the solution of the FGT to assess the effectiveness of the method. The results are then examined by computing the absolute error (AE) and the convergence order (CO), defined as follows:

$$\begin{aligned}&\left| e_{1}\left( x_{i},\tau _{i}\right) \right| =\left| \left( {\mathscr {C}}\,{\mathscr {F}}_{m_{1}}(x_{i})\right) ^{T}\,{\mathscr {A}}\,\left( {\mathscr {D}}\,{\mathscr {G}}_{m_{2}}(\tau _{i})\right) -V\left( x_{i},\tau _{i}\right) \right| , (x_{i},\tau _{i})\in [0,l_{1}]\times [0,l_{2}], \\&CO=\left| \frac{\log \left( AE_{2}\right) }{\log \left( AE_{1}\right) }\right| , \end{aligned}$$

where \(AE_{1}\) and \(AE_{2}\) are, respectively, the first and the second AE values.
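As a small illustration of the CO formula above (the helper name is hypothetical):

```python
from math import log

def convergence_order(ae1: float, ae2: float) -> float:
    """CO = |log(AE2) / log(AE1)| for two successive absolute errors."""
    return abs(log(ae2) / log(ae1))

# Squaring the error, e.g. 1e-2 -> 1e-4, gives CO close to 2:
print(convergence_order(1e-2, 1e-4))   # approximately 2.0
```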

Example 1

Consider the following FGT:

$$\begin{aligned} \left\{ \begin{array}{l} ^{C}_{0}{D_{\tau }^{\theta }} V(x,\tau )=\frac{1}{2} \frac{\partial ^{2} V(x,\tau )}{\partial x^{2}} +\left( \frac{\Gamma {\left( \frac{8}{3}\right) \,\tau ^{-\theta }}}{\Gamma {\left( \frac{8}{3}-\theta \right) }} -\frac{35}{8}\,x^{-2}\right) V(x,\tau ),(x,\tau )\in [0,1]\times [0,1],\\ V(x,0)=0. \end{array}\right. \end{aligned}$$
(5.1)

The exact solution is given by

$$\begin{aligned} V(x,\tau )=x^{\frac{7}{2}}\,\tau ^{\frac{5}{3}}. \end{aligned}$$
(5.2)
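One can check symbolically that (5.2) satisfies (5.1): by the Caputo power rule, \(^C_0D^{\theta }_{\tau }\,\tau ^{5/3}=\frac{\Gamma (8/3)}{\Gamma (8/3-\theta )}\tau ^{5/3-\theta }\), and the term \(\frac{35}{8}x^{-2}V\) exactly cancels \(\frac{1}{2}V_{xx}\). A sympy sketch of this verification:

```python
import sympy as sp

x, tau, theta = sp.symbols("x tau theta", positive=True)
V = x**sp.Rational(7, 2) * tau**sp.Rational(5, 3)          # exact solution (5.2)

# Caputo derivative of V in tau, via the power rule applied to tau^(5/3):
lhs = sp.gamma(sp.Rational(8, 3)) / sp.gamma(sp.Rational(8, 3) - theta) \
      * x**sp.Rational(7, 2) * tau**(sp.Rational(5, 3) - theta)

# Right-hand side of Eq. (5.1):
rhs = sp.Rational(1, 2) * sp.diff(V, x, 2) \
      + (sp.gamma(sp.Rational(8, 3)) * tau**(-theta)
         / sp.gamma(sp.Rational(8, 3) - theta) - sp.Rational(35, 8) / x**2) * V

print(sp.simplify(lhs - rhs))   # -> 0
```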

The proposed scheme is implemented to obtain the optimal solution when \(m_{1}=3\), \(m_{2}=1\) and \(\theta =0.80\). The obtained solution is expanded as

$$\begin{aligned} V(x,\tau )\simeq \Phi _{3}(x)^{T}\,{\mathscr {A}}\,\Psi _{1}(\tau ) =\left( {\mathscr {C}}\,{\mathscr {F}}_{3}(x)\right) ^{T}\,{\mathscr {A}}\, \left( {\mathscr {D}}\,{\mathscr {G}}_{1}(\tau )\right) , \end{aligned}$$

where

$$\begin{aligned}&{\mathscr {F}}_{3}(x)\,\triangleq\, [1\,\,\,x\,\,\,x^{2+k_{2}}\,\,\,x^{3+k_{3}}]^{T},\\&{\mathscr {G}}_{1}(\tau )\,\triangleq\, [1\,\,\,\tau ^{1+s_{1}}]^{T}, \end{aligned}$$

and \(k_{2}\), \(k_{3}\) and \(s_{1}\) are control parameters. Moreover, the matrix of unknown coefficients \({\mathscr {A}}\) and the matrices of Laguerre coefficients \({\mathscr {C}}\) and \({\mathscr {D}}\) are given by

$$\begin{aligned}&{\mathscr {A}}= \begin{pmatrix} a_{00} & a_{01} \\ a_{10} & a_{11} \\ a_{20} & a_{21} \\ a_{30} & a_{31} \\ \end{pmatrix}, \\&{\mathscr {C}}= \begin{pmatrix} 1 & 0 & 0 & 0\\ 1 & -1 & 0 & 0 \\ 1 & -2 & \frac{1}{2} & 0 \\ 1 & -3 & \frac{3}{2} & -\frac{1}{6}\\ \end{pmatrix},\\&{\mathscr {D}}= \begin{pmatrix} 1 & 0\\ 1 & -1 \\ \end{pmatrix}. \end{aligned}$$
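The rows of \({\mathscr {C}}\) are the power-basis coefficients of the classical Laguerre polynomials \(L_{0},\ldots ,L_{3}\); this can be cross-checked with numpy's Laguerre utilities:

```python
import numpy as np
from numpy.polynomial import laguerre

# Row i of the matrix C above holds the power-basis coefficients of L_i,
# padded with zeros:
C = np.array([[1.0,  0.0, 0.0,  0.0],
              [1.0, -1.0, 0.0,  0.0],
              [1.0, -2.0, 0.5,  0.0],
              [1.0, -3.0, 1.5, -1.0 / 6.0]])

for i in range(4):
    e = np.zeros(i + 1); e[i] = 1.0          # select L_i in the Laguerre basis
    coeffs = laguerre.lag2poly(e)            # convert to power-basis coefficients
    assert np.allclose(C[i, : i + 1], coeffs)
print("C matches the coefficients of L_0, ..., L_3")
```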

The control parameters and free coefficients obtained with \(m_{1}=3\), \(m_{2}=1\) and \(\theta =0.80\) are as follows:

$$\begin{aligned}&k_{2}=1.500124,~~~k_{3}=0.500788,~~~s_{1}=0.666666,\\&a_{00}=1.248848,~~~a_{01}=-1.248848,~~~a_{10}=-1.370970,~~~a_{11}=1.370970,\\&a_{20}=-1.004605,~~~a_{21}=1.004605,~~~a_{30}=1.126727,~~~a_{31}=-1.126726. \end{aligned}$$

The optimal solution and the AE with \(m_{1}=3\), \(m_{2}=1\) and \(\theta =0.80\) are plotted in Fig. 1. The values of AE and CO obtained by the GLPs method are listed in Table 1 for \(m_{1}=3\), \(m_{2}=1\) and different values of \(\theta \) at various points \((x,\tau )\). The problem is also solved by the GLPs method with \(m_{1}=3\), \(m_{2}=3\) and \(\theta =\{0.34,0.95\}\); the resulting AE for \(\theta =0.34\) (left side) and \(\theta =0.95\) (right side) are illustrated in Fig. 2. The runtime of the proposed method for different choices of \(m_{1}\) and \(m_{2}\) is reported in Table 2. The results in Table 1 and Figs. 1 and 2 indicate good agreement between the exact and approximate solutions, and also suggest that increasing the number of basis functions improves the approximate solution.

Fig. 1
figure 1

The optimal solution and AE for the proposed method with \(m_{1}=3\), \(m_{2}=1\) and \(\theta =0.80\) for Example 1

Table 1 The AE and CO with \(m_{1}=3\), \(m_{2}=1\) and \(\theta =\{0.70,0.80,0.90,1\}\) at various points \((x,\tau )\) in Example 1
Fig. 2
figure 2

The AE for the proposed method with \(m_{1}=3\), \(m_{2}=3\), \(\theta =0.34\) (left side) and \(\theta =0.95\) (right side) for Example 1

Table 2 The runtime (in seconds) of the proposed method with different choices of \(m_{1}\) and \(m_{2}\) for Example 1

Example 2

Consider the following FGT:

$$\begin{aligned}&^{C}_{0}{D_{\tau }^{\theta }} V(x,\tau )=\frac{1}{2} \frac{\partial ^{2} V(x,\tau )}{\partial x^{2}}\nonumber \\&\quad +\exp (-V(x,\tau ))+\frac{1}{2}\exp (-2V(x,\tau )),(x,\tau )\in [0,1]\times [0,1]. \end{aligned}$$
(5.3)

The initial condition is selected so that the analytical solution is \(V(x,\tau )=\log (x+\tau +2)\) when \(\theta =1\). The problem is solved by the GLPs method with \(m_{1}=3\), \(m_{2}=3\) and \(\theta =\{0.70,0.80,0.90,1\}\), and the obtained results are shown in Table 3. Graphs of the approximate solution and the AE with \(m_{1}=3\), \(m_{2}=3\) and \(\theta =0.90\) are given in Fig. 3. The AE obtained by the GLPs method with \(m_{1}=3\), \(m_{2}=4\), \(\theta =0.76\) (left side) and \(\theta =0.87\) (right side) are shown in Fig. 4. The runtime of the proposed method for different choices of \(m_{1}\) and \(m_{2}\) is reported in Table 4. Table 3 and Figs. 3 and 4 suggest that the proposed method attains acceptable accuracy for this problem.

Table 3 The AE and CO with \(m_{1}=3\), \(m_{2}=3\) and \(\theta =\{0.70,0.80,0.90,1\}\) at various points \((x,\tau )\) in Example 2
Fig. 3
figure 3

The optimal solution and AE for the proposed method with \(m_{1}=3\), \(m_{2}=3\) and \(\theta =0.90\) for Example 2

Fig. 4
figure 4

The AE for the proposed method with \(m_{1}=3\), \(m_{2}=4\), \(\theta =0.76\) (left side) and \(\theta =0.87\) (right side) for Example 2

Table 4 The runtime (in seconds) of the proposed method with different choices of \(m_{1}\) and \(m_{2}\) for Example 2

6 Conclusion

In this paper, we proposed an optimization technique based on the GLPs, coupled with Lagrange multipliers, for the study of the FGT. The scheme was applied to two test problems, and the results were reported in tables and figures. Figures 1, 2, 3 and 4 and Tables 1 and 3 confirm that the GLPs method requires only a few basis functions to obtain satisfactory results. These results pave the way for further research on similar problems, both to sharpen the theoretical analysis and to improve the practical performance of such algorithms. In future work, the method can be applied to other nonlinear partial differential equations, such as the fractional diffusion-wave equation, the fractional telegraph equation, the fractional Klein–Gordon equation, and fractional optimal control problems. Finally, the numerical results predict the biological behavior of the tumor and support the theoretical statements.