
1 Introduction

The main idea of this study is to demonstrate the applicability of the so-called polynomial fractional operators which in general can be presented as

$$\begin{aligned} {{P}_{c}}\left( t \right) =\sum \limits _{n=0}^{N}{{{a}_{n}}}D_{t}^{{{\alpha }_{n}}}\left[ f\left( t \right) \right] \end{aligned}$$
(1)

where \(D_{t}^{{{\alpha }_{n}}}\left[ f\left( t \right) \right] \) are fractional derivatives with any type of memory kernel relevant to the modeled relaxation process. The concept of these polynomial fractional operators (PFOs) was inspired by the research of Bagley and Torvik [1] on the use of fractional calculus in viscoelastic models, although it is based on the work of Koeller [2] (see also [3, 4], and [5]). The Bagley-Torvik equation will be covered in this chapter, but for now, to better understand the rationale behind how polynomial fractional operators are created, we would like to go over some key fractional calculus principles. The chapter addresses a new modelling philosophy allowing relaxation functions (memory kernels) to be expressed as finite sums (polynomial operators) of elementary kernels, either singular (power-law) or non-singular (exponential). This gives an advantage in modelling when single-kernel fractional operators are not applicable to real-world phenomena such as viscoelasticity and diffusion.

1.1 The Koeller Main Idea and Its Background

Now, following Koeller [2] we present his idea step by step (in the original notation, which may differ to some extent from contemporary expressions in the literature).

N-Fold Iterated Integral and Its Consequences. The n-fold iterated integral can be presented as a single integral [2]

$$\begin{aligned} \begin{aligned}{{D}^{-n}}x\left( t \right) =\int \limits _{0}^{t}{\int \limits _{0}^{{{t}_{n-1}}}{\ldots \int \limits _{0}^{{{t}_{1}}}{x\left( {{t}_{0}} \right) }}}d{{t}_{0}}d{{t}_{1}}\ldots d{{t}_{n-2}}d{{t}_{n-1}}\\ =\int \limits _{0}^{t}{\frac{{{\left( t-\tau \right) }^{n-1}}}{\left( n-1 \right) !}}x\left( \tau \right) d\tau , \quad n=1,2,\ldots , N \end{aligned} \end{aligned}$$
(2)

where \(x\left( t \right) \) is a function of the Heaviside class \({{H}^{N}}\) if

$$\begin{aligned} x\left( t \right) =\left\{ \begin{aligned} x\left( t \right) =0,\ t\in \left( -\infty ,0 \right] \\ x\left( t \right) \in {{C}^{N}},\ \,t\in \left( 0,\infty \right) \\ \end{aligned} \right. \end{aligned}$$
(3)

where \({{C}^{N}}\) is the class of all N times continuously differentiable functions on the open interval \(\left( 0,\infty \right) \) and N is a positive integer. The integral of fractional order n between the limits 0 and t is commonly defined by replacing the factorial by the Gamma function, that is [2]

$$\begin{aligned} {{D}^{-n}}x\left( t \right) =\int \limits _{0}^{t}{\frac{{{\left( t-\tau \right) }^{n-1}}}{\varGamma \left( n \right) }}x\left( \tau \right) d\tau , \quad n\in \left[ 0,\infty \right) \end{aligned}$$
(4)

This is the well-known Riemann-Liouville (RL) fractional integral [6].

The differentiation for \(n=\alpha \in \left[ 0,1 \right] \) is defined as (in Koeller's original notation) [2]

$$\begin{aligned} {{D}^{\alpha }}x\left( t \right) =D{{D}^{\alpha -1}}\left[ x\left( t \right) \right] =D\int \limits _{0}^{t}{\frac{{{\left( t-\tau \right) }^{-\alpha }}}{\varGamma \left( 1-\alpha \right) }}x\left( \tau \right) d\tau , \quad D=\frac{d}{dt} \end{aligned}$$
(5)
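To make the construction concrete, the convolution (4) can also be evaluated numerically. The following is a minimal sketch (assuming a crude left-rectangle quadrature on a uniform grid; the function names are illustrative and not from [2] or [6]), verified against the known result \(D^{-\alpha }t={{t}^{1+\alpha }}/\varGamma \left( 2+\alpha \right) \).

```python
import numpy as np
from math import gamma

def rl_integral(f, t_grid, alpha):
    """Riemann-Liouville integral (4) of order alpha by a crude left-rectangle rule:
    D^{-alpha} f(t) = (1/Gamma(alpha)) * int_0^t (t - s)^(alpha - 1) f(s) ds."""
    out = np.zeros_like(t_grid)
    for k in range(1, t_grid.size):
        s = t_grid[:k]                       # nodes strictly below t_k (avoids the weak singularity)
        ds = np.diff(t_grid[:k + 1])         # panel widths
        out[k] = np.sum((t_grid[k] - s) ** (alpha - 1.0) * f(s) * ds) / gamma(alpha)
    return out

t = np.linspace(0.0, 2.0, 2001)
alpha = 0.5
approx = rl_integral(lambda s: s, t, alpha)          # f(t) = t
exact = t ** (1.0 + alpha) / gamma(2.0 + alpha)      # D^{-alpha} t = t^{1+alpha}/Gamma(2+alpha)
print(abs(approx[-1] - exact[-1]))                   # small discretization error
```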

The Riesz Distribution. In linear viscoelasticity the creep compliance is taken as

$$\begin{aligned} {{R}_{n}}\left( t \right) =\frac{{{t}^{n}}}{\varGamma \left( n+1 \right) } \end{aligned}$$
(6)

Then the so-called Riesz distribution [7] \({{R}_{n}}\left( t \right) \) is valid for all values of n, that is

$$\begin{aligned} {{R}_{n}}\left( t \right) =0 , \quad t\in \left( -\infty ,0 \right) \end{aligned}$$
(7)

and, denoting \({{R}_{\left( n \right) }}\left( t \right) \equiv {{R}_{-n}}\left( t \right) \), we have

$$\begin{aligned} {{R}_{\left( n \right) }}\left( t \right) =\frac{{{t}^{-n}}}{\varGamma \left( 1-n \right) } , \quad t\in \left( 0,\infty \right) \end{aligned}$$
(8)

Integration of the Riesz Distribution. In order to obtain the Stieltjes integral representation of the fractional integral, we may integrate the Riemann-Liouville integral (4) by parts. Then, for \(\alpha \ge 0\) we get [2]

$$\begin{aligned} {{D}^{-\alpha }}x\left( t \right) =\int \limits _{0}^{t}{\frac{{{\left( t-\tau \right) }^{\alpha }}}{\varGamma \left( 1+\alpha \right) }}dx\left( \tau \right) +x\left( 0 \right) \frac{{{t}^{\alpha }}}{\varGamma \left( 1+\alpha \right) } \end{aligned}$$
(9)

In terms of Riesz distribution we may present this result as [2]

$$\begin{aligned} \begin{aligned}&{{D}^{-\alpha }}x\left( t \right) =\int \limits _{0}^{t}{{{R}_{\left( -\alpha \right) }}}\left( t-\tau \right) dx\left( \tau \right) +x\left( 0 \right) {{R}_{\left( -\alpha \right) }}\left( t \right) \\&=\left( {{R}_{\left( -\alpha \right) }}*dx \right) \left( t \right) +x\left( 0 \right) {{R}_{\left( -\alpha \right) }}\left( t \right) , \quad \alpha \in \left[ 0,1 \right] \\ \end{aligned} \end{aligned}$$
(10)

where \(\left( {{R}_{\left( -\alpha \right) }}*dx \right) \left( t \right) \) is a Stieltjes convolution. In a similar way, applying the Leibniz rule to the definition of the RL fractional derivative [6] we get [2]

$$\begin{aligned} \begin{aligned}&{{D}^{\alpha }}x\left( t \right) =\int \limits _{0}^{t}{\frac{{{\left( t-\tau \right) }^{-\alpha }}}{\varGamma \left( 1-\alpha \right) }}dx\left( \tau \right) +x\left( 0 \right) \frac{{{t}^{-\alpha }}}{\varGamma \left( 1-\alpha \right) } \\&=\int \limits _{0}^{t}{{{R}_{\left( \alpha \right) }}}\left( t-\tau \right) dx\left( \tau \right) +x\left( 0 \right) {{R}_{\left( \alpha \right) }}\left( t \right) \\&=\left( {{R}_{\left( \alpha \right) }}*dx \right) \left( t \right) +x\left( 0 \right) {{R}_{\left( \alpha \right) }}\left( t \right) ,\quad \alpha \in \left[ 0,1 \right] \\ \end{aligned} \end{aligned}$$
(11)

Further, since \({{D}^{{{\lambda }_{1}}}}{{D}^{{{\lambda }_{2}}}}={{D}^{{{\lambda }_{1}}+{{\lambda }_{2}}}}\), it follows that \({{R}_{\left( -\lambda \right) }}\) is the Stieltjes inverse of \({{R}_{\left( \lambda \right) }}\), that is (h below denotes the Heaviside unit step function)

$$\begin{aligned} {{R}_{\left( \lambda \right) }}*d{{R}_{\left( -\lambda \right) }}={{R}_{\left( -\lambda \right) }}*d{{R}_{\left( \lambda \right) }}=h \end{aligned}$$
(12)

In accordance with Koeller [2], both the fractional (Riemann-Liouville) derivative and the fractional integral can be expressed as Stieltjes convolutions in the form

$$\begin{aligned} {{D}^{\lambda }}={{R}_{\left( \lambda \right) }}*dx , \quad \lambda \in \left( -\infty ,\infty \right) \end{aligned}$$
(13)
$$\begin{aligned} {{D}^{-\lambda }}={{R}_{\left( -\lambda \right) }}*dx, \quad \lambda \in \left( -\infty , \infty \right) \end{aligned}$$
(14)

1.2 Koeller's Polynomial Operators

The examples developed by Koeller are from the area of linear viscoelasticity (following the works of Bagley and Torvik [1]), where the general form of the constitutive equations is

$$\begin{aligned} P\left( D \right) \sigma =Q\left( D \right) \varepsilon \end{aligned}$$
(15)

where \(P\left( D \right) \) and \(Q\left( D \right) \) are polynomial operators defined as

$$\begin{aligned} P\left( D \right) =\sum \limits _{n=0}^{N}{{{p}_{n}}}{{D}^{{{\alpha }_{n}}}}, \quad Q\left( D \right) =\sum \limits _{n=0}^{N}{{{q}_{n}}}{{D}^{{{\beta }_{n}}}} \end{aligned}$$
(16)

with fractional (memory) parameters (orders) \({{\alpha }_{n}}\) and \({{\beta }_{n}}\).

Note: When \({{\alpha }_{n}}\) and \({{\beta }_{n}}\) are positive integers, then (15) is the standard differential operator constitutive law.

Further, when \(\sigma \left( t \right) \) and \(\varepsilon \left( t \right) \) are specified, (15) is a fractional differential equation without jump initial conditions. Hence, the solution of (15) for any action as input shear stress (or input shear strain) requires knowledge of the entire history of the shear stress (shear strain). The general formulation (15) can be developed as a linear hereditary law if we consider the properties of the Stieltjes convolution and the Riesz distribution [2], namely

$$\begin{aligned} \sum \limits _{n=0}^{N}{{{p}_{n}}}{{R}_{\left( {{\alpha }_{n}} \right) }}*d\sigma =\sum \limits _{n=0}^{N}{{{q}_{n}}}{{R}_{\left( {{\beta }_{n}} \right) }}*d\varepsilon \end{aligned}$$
(17)

Then, we may define fractional polynomials \(B\left( t \right) \) and \(D\left( t \right) \) [2]

$$\begin{aligned} B\left( t \right) =\sum \limits _{n=0}^{N}{{{p}_{n}}}{{R}_{\left( {{\alpha }_{n}} \right) }}\left( t \right) =\sum \limits _{n=0}^{N}{{{p}_{n}}}\frac{{{t}^{-{{\alpha }_{n}}}}}{\varGamma \left( 1-{{\alpha }_{n}} \right) } \end{aligned}$$
(18)
$$\begin{aligned} D\left( t \right) =\sum \limits _{n=0}^{N}{{{q}_{n}}}{{R}_{\left( {{\beta }_{n}} \right) }}\left( t \right) =\sum \limits _{n=0}^{N}{{{q}_{n}}}\frac{{{t}^{-{{\beta }_{n}}}}}{\varGamma \left( 1-{{\beta }_{n}} \right) } \end{aligned}$$
(19)

and the constitutive law (15) can be presented in two forms [2]

$$\begin{aligned} B*d\sigma =D*d\varepsilon \end{aligned}$$
(20)
$$\begin{aligned} \int \limits _{-\infty }^{t}{B\left( t-\tau \right) }d\sigma \left( \tau \right) =\int \limits _{-\infty }^{t}{D\left( t-\tau \right) }d\varepsilon \left( \tau \right) \end{aligned}$$
(21)

If \({{B}^{-1}}\) and \({{D}^{-1}}\) are defined as Stieltjes inverse of B and D, then applying the associative property of the Stieltjes convolution we have

$$\begin{aligned} \sigma =G*d\varepsilon , \quad \varepsilon =J*d\sigma \end{aligned}$$
(22)

where \(G={{B}^{-1}}*D\) and \(J={{D}^{-1}}*B\) are the relaxation modulus and the creep compliance, respectively.

1.3 Koeller Example of a Polynomial Operator

The Koeller example developed in [2] selects only one memory parameter \(\beta \) (which actually is a violation of the causality principle [8], since the input and output should have different time delays). Anyway, the following expansion was considered (a three-component Kelvin-Voigt model)

$$\begin{aligned} \left( {{p}_{0}}+{{p}_{1}}{{D}^{\beta }}+{{p}_{2}}{{D}^{2\beta }} \right) \sigma =\left( {{q}_{0}}+{{q}_{1}}{{D}^{\beta }}+{{q}_{2}}{{D}^{2\beta }} \right) \varepsilon \end{aligned}$$
(23)

which possesses symmetry, that is, no preference is given to the stress or the strain. The solution of (23) by Laplace transforms yields [2]

$$\begin{aligned} J\left( t \right) =\frac{1}{{{E}_{0}}}+\frac{1}{{{E}_{1}}}\left\{ 1-{{E}_{\beta }}\left[ -{{\left( \frac{t}{{{t}_{1}}} \right) }^{\beta }} \right] \right\} +\frac{1}{{{E}_{2}}}\left\{ 1-{{E}_{\beta }}\left[ -{{\left( \frac{t}{{{t}_{2}}} \right) }^{\beta }} \right] \right\} \end{aligned}$$
(24)
$$\begin{aligned} G\left( t \right) ={{E}_{0}}-{{E}_{0}}{{R}_{1}}\left\{ 1-{{E}_{\beta }}\left[ -{{\left( \frac{t}{{{t}_{1}}} \right) }^{\beta }} \right] \right\} -{{E}_{0}}{{R}_{2}}\left\{ 1-{{E}_{\beta }}\left[ -{{\left( \frac{t}{{{t}_{2}}} \right) }^{\beta }} \right] \right\} \end{aligned}$$
(25)

where \({{E}_{0}}\), \({{E}_{1}}\) and \({{E}_{2}}\) are the moduli of the springs, \({{t}_{1}}\), \({{t}_{2}}\) are relaxation times, and

$$\begin{aligned} {{E}_{\beta }}\left( -x \right) =\sum \limits _{n=0}^{\infty }{{{\left( -1 \right) }^{n}}\frac{{{x}^{n}}}{\varGamma \left( 1+\beta n \right) }}, \quad x>0, \quad 0<\beta \le 1 \end{aligned}$$
(26)

is the Mittag-Leffler function. When the orders are integers, \({{\beta }_{n}}=0,1,\ldots ,N\), the classical results are recovered.
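For orientation, the truncated series (26) can be evaluated directly to tabulate the creep compliance (24). The short sketch below is illustrative only: the material parameters are arbitrary, and the plain series is reliable only for moderate arguments (dedicated Mittag-Leffler routines should be used otherwise).

```python
import numpy as np
from math import gamma

def ml(x, beta, n_terms=80):
    """One-parameter Mittag-Leffler function E_beta(x) by its truncated power series;
    passing a negative argument reproduces the alternating series in Eq. (26)."""
    return sum(x ** n / gamma(1.0 + beta * n) for n in range(n_terms))

def creep_compliance(t, E0, E1, E2, t1, t2, beta):
    """Creep compliance J(t) of the three-component model, Eq. (24)."""
    term1 = (1.0 - ml(-(t / t1) ** beta, beta)) / E1
    term2 = (1.0 - ml(-(t / t2) ** beta, beta)) / E2
    return 1.0 / E0 + term1 + term2

for t in (0.1, 1.0, 10.0):
    print(t, creep_compliance(t, E0=1.0, E1=2.0, E2=4.0, t1=1.0, t2=3.0, beta=0.8))
```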

After this example Koeller clearly stated [2] “The final determination of whether fractional calculus is a valuable tool in the study of viscoelastic materials could be answered if specific data were taken over long periods of time and it (they) were fitted to one of these functions”.

To complete this section, let us turn to the formulation of the polynomial fractional operator. From the definitions (18) and (19) we may see that the memory functions can be presented as sums of Riesz distributions, namely

$$\begin{aligned} {{M}_{B}}=\sum \limits _{n=0}^{\infty }{{{p}_{n}}}\frac{{{t}^{-{{\alpha }_{n}}}}}{\varGamma \left( 1-{{\alpha }_{n}} \right) }, \quad {{M}_{D}}=\sum \limits _{n=0}^{\infty }{{{q}_{n}}}\frac{{{t}^{-{{\beta }_{n}}}}}{\varGamma \left( 1-{{\beta }_{n}} \right) } \end{aligned}$$
(27)

Therefore, the relaxation functions, the shear stress modulus and the shear strain compliance are decomposed as sums of elementary kernels (Riesz distributions).

1.4 Outcomes of the Koeller’s Approach and Beyond

The findings of (27) provide a useful framework for decomposing response (relaxation) functions into sums of basic functions acting as memory kernels in relevant fractional operators. For instance, two possibilities are offered in the context of the fractional operators with the non-singular kernels, namely:

  • A sum of exponential (Maxwell or Debye) memories, which can be easily obtained by applying Prony's decomposition approach to experimental data [9] (see also [10] and [11], and the sketch after this list). That is

    $$\begin{aligned} {{B}_{\beta }}\left( t \right) \equiv \sum \limits _{0}^{N}{{{b}_{n}}}\exp \left( -\frac{t}{{{\tau }_{n}}} \right) \end{aligned}$$
    (28)

    where \(\tau _{n}\) are discrete relaxation times.

  • Approximations as Mittag-Leffler functions [6]

    $$\begin{aligned} {{B}_{\beta }}\left( t \right) \equiv {{E}_{\beta }}\left( -{{t}^{\beta }} \right) =\sum \limits _{n=0}^{\infty }{{{\left( -1 \right) }^{n}}\frac{{{t}^{\beta n}}}{\varGamma \left( 1+\beta n \right) }}=1-\frac{{{t}^{\beta }}}{\varGamma \left( 1+\beta \right) }+\frac{{{t}^{2\beta }}}{\varGamma \left( 1+2\beta \right) }-\ldots \end{aligned}$$
    (29)

    which actually resembles the idea of Koeller to present the operators as sums of Riesz distributions.
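As an illustration of the first option, the sketch below fits a two-term Prony sum (28) to synthetic relaxation data with scipy; it is a minimal example (the data, the initial guesses and the two-term choice are assumptions), whereas robust Prony identification of real data follows the procedures in [9,10,11].

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0.01, 10.0, 200)
# Synthetic "experimental" relaxation data (illustrative only)
data = 0.6 * np.exp(-t / 0.3) + 0.4 * np.exp(-t / 3.0) + 0.002 * rng.standard_normal(t.size)

def prony2(t, b1, tau1, b2, tau2):
    """Two-term Prony sum, Eq. (28) with N = 2."""
    return b1 * np.exp(-t / tau1) + b2 * np.exp(-t / tau2)

popt, _ = curve_fit(prony2, t, data, p0=(0.5, 0.5, 0.5, 5.0), maxfev=10000)
print("weights:", popt[0], popt[2], "  relaxation times:", popt[1], popt[3])
```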

According to Koeller's remark above, the appropriate approximation of the experimental data by these sums has a significant impact on the choice of decomposition. We will now investigate how models, particularly the constitutive equations of linear viscoelasticity, can be represented using polynomial fractional operators.

2 Fractional Calculus in Viscoelasticity

2.1 Stress-Strain Viscoelasticity Response and Hereditary Integral Construction

The superposition of the material's single-step reactions enables the creation of functional relationships between stress and strain while taking into account the fact that there is a temporal lag in both \(G\left( t \right) \) and \(J\left( t \right) \) following the application of the stress or strain. Convolution integrals, such as the stress integral (30) and the creep integral (31), are effective in representing these interactions [12,13,14]

$$\begin{aligned} \sigma \left( t \right) =\int \limits _{0}^{t}{G\left( t-s \right) }d\varepsilon \left( s \right) \end{aligned}$$
(30)
$$\begin{aligned} \varepsilon \left( t \right) =\int \limits _{0}^{t}{J\left( t-s \right) }d\sigma \left( s \right) \end{aligned}$$
(31)

In the convolution integrals the lower limit is at \(t=0\) since both \(\sigma \left( t \right) \) and \(\varepsilon \left( t \right) \) are causal functions. Now, consider the application of the fading memory concept to the stress and strain relationships, which can be presented as [12, 14]

$$\begin{aligned} \sigma \left( t \right) ={{G}_{\infty }}+\int \limits _{0}^{t}{G\left( t-s \right) }\frac{d\varepsilon }{ds}ds \end{aligned}$$
(32)
$$\begin{aligned} \varepsilon \left( t \right) ={{J}_{\infty }}+\int \limits _{0}^{t}{J\left( t-s \right) }d\sigma \left( s \right) \end{aligned}$$
(33)

The values of \({{G}_{\infty }}\) and \({{J}_{\infty }}\) are the equilibrium values established at long times, when the effects of the second terms in (32) and (33) disappear, that is, when \(G\left( t-s \right) \) and \(J\left( t-s \right) \) approach zero.

The relationships (32) and (33) contain Stieltjes integrals [12, 14] because

$$\begin{aligned} \sigma \left( t \right) ={{G}_{\infty }}+\int \limits _{-\infty }^{0}{G\left( t-s \right) }d\varepsilon \left( s \right) +\int \limits _{0}^{t}{G\left( t-s \right) }d\varepsilon \left( s \right) \end{aligned}$$
(34)

However, due to causality [8], the strain history is zero for \(s<0\) (and \(G(t)>0\) for \(0<t<\infty \), \(G(t)=0\) for \(-\infty<t<0\)), so the first integral is zero.

The appropriate viscoelastic kernel \(G\left( t \right) \) should be able to account for the short- and long-time strain responses to the applied stress and must satisfy the conditions for complete monotonicity, following the general constraints set on the relaxation function, namely

$$\begin{aligned} {{\left( -1 \right) }^{n}}\frac{{{\partial }^{n}}}{\partial {{t}^{n}}}G\left( t \right) \ge 0, \quad n=1,2,... \end{aligned}$$
(35)

2.2 Discrete Spectra as Sums of Exponents

An exponential series (commonly cited as a Prony series) can be used to depict a non-linear monotonic response [9]

$$\begin{aligned} G\left( t \right) ={{G}_{\infty }}+\sum \limits _{i=0}^{N}{{{G}_{i}}\exp \left( -\frac{t}{{{\tau }_{i}}} \right) } \end{aligned}$$
(36)

The amount of molecular freedom in a material is measured by the number of independent relaxation periods, which may reach exceptionally high values for high polymers. When there are many terms in (36), the sum can be approximated by an integral that contains the distribution function \({{M}_{e}}\left( x \right) \) [15,16,17], namely

$$\begin{aligned} G\left( t \right) ={{G}_{\infty }}+\int \limits _{0}^{\infty }{\exp \left( -xt \right) }{{M}_{e}}\left( x \right) dx, \quad {{M}_{e}}\left( x \right) \ge 0 \end{aligned}$$
(37)

Prony’s Series Decompositions: Discrete Relaxation Spectra. Through a decomposition into a Prony series, the viscoelastic relaxation function may be described as a discrete relaxation spectrum \({{\phi }_{P}}\left( t \right) \) with \({{N}^{\phi }}\) terms and rate constants \({{\beta }_{i}}\) [15,16,17], namely

$$\begin{aligned} {{\phi }_{P}}\left( t \right) ={{\phi }_{\infty }}+\sum \limits _{i=1}^{{{N}^{\phi }}}{{{\phi }_{i}}}{{e}^{-{{\beta }_{i}}t}}={{\phi }_{\infty }}+\sum \limits _{i=1}^{{{N}^{\phi }}}{{{\phi }_{i}}}{{e}^{-\frac{t}{{{\tau }_{i}}}}}, \quad {{\beta }_{i}}=\frac{1}{{{\tau }_{i}}}\ge 0 \end{aligned}$$
(38)

Alternatively, using weighted averages (amplitudes, or normalized relaxation moduli) \({{\lambda }_{i}}\), we have

$$\begin{aligned} \lambda \left( t \right) =\frac{{{\phi }_{P}}\left( t \right) }{{{\phi }_{\infty }}}=1+\sum \limits _{i=1}^{{{N}^{\phi }}}{{{\lambda }_{i}}}\left( {{e}^{-{{\beta }_{i}}t}}-1 \right) , \quad {{\lambda }_{i}}=\frac{{{\phi }_{i}}}{{{\phi }_{\infty }}} \end{aligned}$$
(39)

In (38) and (39) the parameters \({{\phi }_{\infty }}\) and \({{\phi }_{i}}\) are equilibrium (at large times) and relaxation moduli (stiffness), respectively, constrained according to [16, 17],

$$\begin{aligned} {{\phi }_{\infty }}+\sum \limits _{1}^{{{N}^{\phi }}}{{{\phi }_{i}}}=1 \end{aligned}$$
(40)

The generalized Maxwell viscoelastic body, also known as the Maxwell-Wiechert model [16, 17, 20], is analogous to this popular Prony series expression. It consists of \({{N}^{\phi }}\) parallel spring-dashpot components, with a final parallel spring determining the equilibrium behavior. This formula takes into consideration dissipative effects, which appear as creep and stress relaxations that are load-rate dependent. Through its time-dependent shear and bulk moduli, the Prony series representation provides a crude method for representing any viscoelastic model [15,16,17, 20].

Polynomial Fractional Operators with the Caputo-Fabrizio Derivative. Applying the Prony approximation of the relaxation curve and substituting it into the convolution integral, the following approximation is obtained [16, 17]

$$\begin{aligned} \sigma \left( t \right) =\int \limits _{0}^{t}{\sum \limits _{i=0}^{N}{{{E}_{i}}}\exp \left( -\frac{t-s}{{{\tau }_{i}}} \right) \frac{d\varepsilon }{ds}}ds \end{aligned}$$
(41)

Since \(\sigma \left( t \right) \) is assumed as a finite sum of elements it is possible to invert the summation and the integral that leads to the expression [16, 17]

$$\begin{aligned} \sigma \left( t \right) =\int \limits _{0}^{t}{\sum \limits _{i=0}^{N}{{{E}_{i}}}}{{e}^{-\frac{\left( t-s \right) }{{{\tau }_{i}}}}}\frac{d\varepsilon }{ds}ds=\sum \limits _{i=0}^{N}{{{E}_{i}}}\left[ \int \limits _{0}^{t}{{{e}^{-\frac{\left( t-s \right) }{{{\tau }_{i}}}}}\frac{d\varepsilon }{ds}ds} \right] \end{aligned}$$
(42)

This makes it simple to include the memory effect from the convolution integral in each term of the Prony series. The obvious physical meaning of this result is that the partial strain \({{\varepsilon }_{i}}\left( t \right) \) at a given time t is described as a convolution integral with an exponential kernel.

Through the relation \(\alpha ={1}/{\left( 1+{\tau }/{{{t}_{0}}}\; \right) }\), the fractional parameter \(\alpha \) is related to the dimensionless relaxation time \(\tau /{t}_{0}\), where \({t}_{0}\) is the whole duration of the experiment. With a spectrum of relaxation times we then get [15,16,17]

$$\begin{aligned} {{\alpha }_{i}}=\frac{1}{1+{{{\tau }_{i}}}/{{{t}_{0}}}\;} \end{aligned}$$
(43)

This allows us to present \({{\varepsilon }_{i}}\left( t \right) \) in a way close to the basic construction of the Caputo-Fabrizio operator [18], namely [15,16,17]

$$\begin{aligned} {{\varepsilon }_{i}}\left( t \right) =\left( 1-{{\alpha }_{i}} \right) \int \limits _{0}^{{\bar{t}}}{{{e}^{-\frac{{{\alpha }_{i}}}{1-{{\alpha }_{i}}}\left( \bar{t}-\bar{s} \right) }}\frac{d\varepsilon }{d\bar{s}}d\bar{s}}=\left( 1-{{\alpha }_{i}} \right) D_{t}^{{{\alpha }_{i}}}\varepsilon \left( t \right) \end{aligned}$$
(44)

Thus, the constitutive equation can be presented as [16, 17]

$$\begin{aligned} \sigma \left( t \right) =\sum \limits _{i=0}^{N}{{{E}_{i}}\left( 1-{{\alpha }_{i}} \right) }D_{t}^{{{\alpha }_{i}}}\varepsilon \left( t \right) \end{aligned}$$
(45)

In the context of the initial definition of polynomial fractional operators, we may write

$$\begin{aligned} \sigma \left( t \right) =B_{t}^{{{\alpha }_{n}}}\left[ \varepsilon \left( t \right) \right] , \quad B_{t}^{{{\alpha }_{n}}}=\sum \limits _{i=0}^{N}{{{E}_{i}}\left( 1-{{\alpha }_{i}} \right) D_{t}^{{{\alpha }_{i}}}} \end{aligned}$$
(46)

and

$$\begin{aligned} \varepsilon \left( t \right) =P_{t}^{{{\beta }_{n}}}\left[ \sigma \left( t \right) \right] , \quad P_{t}^{{{\beta }_{n}}}=\sum \limits _{i=0}^{N}{{{E}_{i}}\left( 1-{{\beta }_{i}} \right) D_{t}^{{{\beta }_{i}}}} \end{aligned}$$
(47)
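A direct way to see (45)-(47) at work is to evaluate the stress from a prescribed strain history by summing the exponential-kernel convolutions (42). The sketch below is a crude rectangle-rule discretization with made-up moduli and fractional orders (nothing here is calibrated to a material).

```python
import numpy as np

def cf_polynomial_stress(strain, t, E, alpha):
    """Stress from Eq. (45): sigma(t) = sum_i E_i (1 - alpha_i) D^{alpha_i}_CF eps(t).
    Each term is an exponential-kernel convolution of d(eps)/dt, Eq. (42); the factor
    (1 - alpha_i) cancels the 1/(1 - alpha_i) normalization of the Caputo-Fabrizio
    operator (taken here with M(alpha) = 1), leaving E_i times the convolution."""
    deps = np.gradient(strain, t)
    dt = t[1] - t[0]                          # uniform grid assumed
    sigma = np.zeros_like(t)
    for E_i, a_i in zip(E, alpha):
        rate = a_i / (1.0 - a_i)              # dimensionless inverse relaxation time, Eq. (43)
        conv = np.zeros_like(t)
        for k in range(1, t.size):
            conv[k] = np.sum(np.exp(-rate * (t[k] - t[:k])) * deps[:k]) * dt
        sigma += E_i * conv
    return sigma

t = np.linspace(0.0, 5.0, 501)
eps = 1.0 - np.exp(-t)                         # illustrative strain history
print(cf_polynomial_stress(eps, t, E=[1.0, 0.5], alpha=[0.4, 0.7])[-1])
```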

2.3 Viscoelastic Polynomial Fractional Model in Terms of Atangana-Baleanu Derivative

The Atangana-Baleanu derivative in the Caputo sense (ABC) [19] can be rewritten (assuming, for convenience of the explanations, \(B\left( \alpha \right) =1\)) as

$$\begin{aligned} {}^{ABC}D_{a+}^{\alpha }f\left( t \right) =\frac{1}{1-\alpha }\int \limits _{0}^{{\bar{t}}}{\frac{df\left( {\bar{s}} \right) }{d\bar{s}}{{E}_{\alpha }}\left[ -{{\left( \frac{\bar{t}-\bar{s}}{{\bar{\tau }}} \right) }^{\alpha }} \right] }d\bar{s} \end{aligned}$$
(48)

where \(\frac{1-\alpha }{\alpha }={{\left( \frac{\tau }{{{t}_{0}}} \right) }^{\alpha }}={{\left( {\bar{\tau }} \right) }^{\alpha }}, \quad \bar{t}=\frac{t}{{{t}_{0}}} \).

That is, through a nondimensionalization of the times [16, 17] we get

$$\begin{aligned} {}^{ABC}D_{a+}^{\alpha }f\left( t \right) =\frac{1}{1-\alpha }\int \limits _{0}^{{\bar{t}}}{\frac{df\left( {\bar{s}} \right) }{d\bar{s}}\left\{ \sum \limits _{j=0}^{\infty }{\frac{1}{\varGamma \left( \alpha j+1 \right) }}{{\left[ -{{\left( \frac{\bar{t}-\bar{s}}{{\bar{\tau }}} \right) }^{\alpha }} \right] }^{j}} \right\} }d\bar{s} \end{aligned}$$
(49)

where the argument of the Mittag-Leffler kernel \({{E}_{\alpha }}\left( -z \right) \) is \(z=\left[ {\alpha }/{\left( 1-\alpha \right) }\; \right] {{\left( t-s \right) }^{\alpha }}\) and the following relationship exists [16, 17]

$$\begin{aligned} {{\left( \frac{1-\alpha }{\alpha } \right) }^{j}}={{\left( \frac{\tau }{{{t}_{0}}} \right) }^{j}}\Rightarrow \alpha =\frac{1}{1+\left( {\tau }/{{{t}_{0}}}\; \right) } \end{aligned}$$
(50)

which is the same as that established for the Caputo-Fabrizio operator.

Since the data fitting process practically requires a finite number of terms in the series defining the Mittag-Leffler function, we obtain a discrete spectrum approximating the relaxation (compliance) function, made up of power-law terms \({{\left( {t}/{\tau }\; \right) }^{\alpha j}}\). Further, with \(f\left( t \right) =\varepsilon \left( t \right) \), we have [16, 17]

$$\begin{aligned} {}^{ABC}D_{a+}^{\alpha }\varepsilon \left( t \right) =\frac{1}{1-\alpha }\int \limits _{0}^{{\bar{t}}}{{{E}_{\alpha }}\left[ -{{\left( \frac{\bar{t}-\bar{s}}{{\bar{\tau }}} \right) }^{\alpha }} \right] }\frac{d\varepsilon \left( {\bar{s}} \right) }{d\bar{s}}d\bar{s} \end{aligned}$$
(51)

As a result, the stress-strain convolution integral has the following form [16, 17]

$$\begin{aligned} \begin{aligned}&\sigma \left( t \right) =\sum \limits _{k=1}^{N}{{{E}_{k}}\left( t \right) }\int \limits _{0}^{t}{{{G}_{k}}\left( t-s \right) }\frac{d}{ds}\varepsilon \left( s \right) ds \\&=\sum \limits _{k=1}^{N}{{{E}_{k}}\left( t \right) }\left( 1-{{\alpha }_{k}} \right) \left[ \frac{1}{1-{{\alpha }_{k}}}\int \limits _{0}^{{\bar{t}}}{{{E}_{{{\alpha }_{k}}}}\left[ -{{\left( \frac{\bar{t}-\bar{s}}{{\bar{\tau }}} \right) }^{{{\alpha }_{k}}}} \right] }\frac{d\varepsilon \left( {\bar{s}} \right) }{d\bar{s}}d\bar{s} \right] \\ \end{aligned} \end{aligned}$$
(52)

As commented above, the values of N in the sum of relaxation kernels and J (the number of terms of \({{E}_{\alpha }}\)) depend on the approximation approach accepted in data fitting. Hence, in a more compact form, (52) can be expressed as

$$\begin{aligned} \sigma \left( t \right) =\sum \limits _{k=1}^{N}{{{E}_{k}}\left( t \right) }\left[ \left( 1-{{\alpha }_{k}} \right) {}^{ABC}D_{a+}^{{{\alpha }_{k}}}\varepsilon \left( t \right) \right] \end{aligned}$$
(53)

because the relaxation spectrum is a sum of weighted ABC derivatives of \(\varepsilon \left( t \right) \) [16, 17], with orders related to the relaxation times through

$$\begin{aligned} \frac{1-{{\alpha }_{k}}}{{{\alpha }_{k}}}={{\bar{\tau }}_{k}}=\frac{{{\tau }_{k}}}{{{t}_{0}}}\Rightarrow {{\alpha }_{k}}=\frac{1}{1+{{{\tau }_{k}}}/{{{t}_{0}}}\;} \end{aligned}$$
(54)

If only one term in the right-hand side of (52) is enough (that is, \(N=1\)) to approximate the stress relaxation function, then (53) takes the form

$$\begin{aligned} \sigma \left( t \right) =E\left( t \right) \left[ \left( 1-\alpha \right) {}^{ABC}D_{t}^{\alpha }\varepsilon \left( t \right) \right] \end{aligned}$$
(55)

and \(\alpha \) follows from \(\alpha ={1}/{\left( 1+{\tau }/{{{t}_{0}}}\; \right) }\;\). Now, (55) defines a polynomial fractional operator with ABC derivatives, namely

$$\begin{aligned} \sigma \left( t \right) ={}^{ABC}B_{t}^{\alpha }\left[ \varepsilon \left( t \right) \right] , \quad {}^{ABC}B_{t}^{\alpha }=E\left( t \right) \left[ \left( 1-\alpha \right) {}^{ABC}D_{t}^{\alpha } \right] \end{aligned}$$
(56)

If the simple case (55) (with \(N=1\) in the right-hand side of (52)) is not enough to fit the stress relaxation function, then a weighted sum of terms (a truncated series of Mittag-Leffler functions) should be used. Obviously, in this case the parameter estimation requires specific data.
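To close this subsection, the ABC construction (48) can also be evaluated numerically once the Mittag-Leffler kernel is truncated as in (49). The sketch below (with \(B(\alpha )=1\), a uniform grid, and a plain truncated series for the kernel) is only meant to show the structure of the computation, not a production algorithm.

```python
import numpy as np
from math import gamma

def ml(x, alpha, n_terms=60):
    """Truncated series of E_alpha(x), adequate for the moderate negative arguments used here."""
    return sum(x ** k / gamma(1.0 + alpha * k) for k in range(n_terms))

def abc_derivative(f_vals, t, alpha):
    """ABC derivative, Eq. (48) with B(alpha) = 1, on a uniform grid:
    (1/(1 - alpha)) * int_0^t f'(s) E_alpha[-(alpha/(1 - alpha)) (t - s)^alpha] ds."""
    df = np.gradient(f_vals, t)
    dt = t[1] - t[0]
    lam = alpha / (1.0 - alpha)               # equals (1/tau_bar)^alpha in the notation of (48)
    out = np.zeros_like(t)
    for k in range(1, t.size):
        kern = np.array([ml(-lam * (t[k] - s) ** alpha, alpha) for s in t[:k]])
        out[k] = np.sum(kern * df[:k]) * dt / (1.0 - alpha)
    return out

t = np.linspace(0.0, 2.0, 121)
print(abc_derivative(t.copy(), t, alpha=0.6)[-1])    # ABC derivative of f(t) = t at t = 2
```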

3 Some Comments on the Bagley-Torvik Equation

3.1 The Initial Formulation and Assumed Approximations

Bagley and Torvik began their investigation using Rouse’s idea [21] concerning the effective dynamic shear modulus of rarefied coiled polymers [1]. The sum of exponential decay modes (57) served as the Rouse model’s representation [21] of the stress relaxation with decay times \({{\tau }_{i}}\).

$$\begin{aligned} G\left( t \right) =\sum \limits _{i=1}^{N}{{{G}_{0}}}{{e}^{-\frac{t}{{{\tau }_{i}}}}} \end{aligned}$$
(57)

It is simple to spot the Prony series decomposition in the context of the current investigation. However, if the distribution of \({{\tau }_{i}}\) is proportional to \({{t}^{-\alpha -1}}\), then we get (58), defining a fractional derivative (with a singular kernel) of \(\varepsilon \) of order \(\alpha \) (in the original notations of [1])

$$\begin{aligned} {{\sigma }^{\alpha }}=\int \limits _{a}^{t}{G\left( t-s \right) }\frac{d}{ds}\varepsilon \left( s \right) ds \end{aligned}$$
(58)

Hence, the total stress \({{\sigma }^{\alpha }}\) in the generalized Maxwell model, for instance, can be expressed as \({{\sigma }^{\alpha }}=\sum \limits _{i=1}^{N}{{{\sigma }_{i}}}\) where \({{\sigma }_{i}}\) is the stress in the \({{i}^{th}}\) Maxwell element, namely

$$\begin{aligned} {{\sigma }_{i}}={{k}_{i}}\left( \varepsilon -{{\varepsilon }_{i}} \right) ={{\eta }_{i}}\frac{d{{\varepsilon }_{i}}}{dt}\Rightarrow \frac{d{{\sigma }_{i}}}{dt}+\frac{1}{{{\tau }_{i}}}{{\sigma }_{i}}=\frac{d\left( {{k}_{i}}\varepsilon \right) }{dt}, \quad i=1,...N \end{aligned}$$
(59)

As a result, if the material relaxes in a power-law fashion, a fractional derivative model may be developed within the framework of weakly singular kernels, that is \({{t}^{-\alpha }}\), which leads to

$$\begin{aligned} {{\sigma }^{\alpha }}\left( t \right) ={{\mu }_{A}}D_{a}^{\alpha }\left[ k\varepsilon \left( t \right) \right] , \quad {{\sigma }^{\alpha }}={{\lim }_{N\rightarrow \infty }}\sum \limits _{i=1}^{N}{{{\sigma }_{i}}}, \quad k={{\lim }_{N\rightarrow \infty }}\sum \limits _{i=1}^{N}{{{k}_{i}}} \end{aligned}$$
(60)

where \({{\mu }_{A}}\) is a positive constant.

After this simple explanation let us see the original construction of Bagley and Torvik represented as [17]

$$\begin{aligned} \sigma \left( t \right) +\sum \limits _{m=1}^{N}{{{b}_{m}}{{D}^{{{\beta }_{m}}}}\sigma \left( t \right) }={{E}_{0}}\varepsilon \left( t \right) +\sum \limits _{n=1}^{N}{{{E}_{n}}{{D}^{{{\alpha }_{n}}}}\varepsilon \left( t \right) } \end{aligned}$$
(61)

In (61) the fractional derivatives are Riemann-Liouville derivatives (originally used by Bagley and Torvik) and in case of \(N=1\) this relationship reduces to a simple expression (see comments below)

$$\begin{aligned} \sigma \left( t \right) +b{{D}^{\beta }}\sigma \left( t \right) ={{E}_{0}}\varepsilon \left( t \right) +{{E}_{1}}{{D}^{\alpha }}\varepsilon \left( t \right) \end{aligned}$$
(62)

containing two fractional derivatives with different orders.

Now, if the retardation spectrum corresponds to a Riesz distribution and \(\sigma (t)=G_{1}D^{\alpha }\varepsilon (t)\), then the stress relaxation modulus G(t) and the creep compliance J(t) are

$$\begin{aligned} G(t)=\frac{G_{1}}{\varGamma (1-\alpha )}t^{-\alpha }, \quad J(t) = \frac{1}{G_{1}}\frac{1}{\varGamma (1+\alpha )}t^{\alpha } \end{aligned}$$
(63)

Therefore, with a power-law relaxation we get \(\alpha =\beta \) in (62), thus making it a single fractional-order equation.
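For completeness, the interconversion behind (63) can be checked in the Laplace domain through the standard relation \(\hat{G}\left( p \right) \hat{J}\left( p \right) ={1}/{{{p}^{2}}}\;\) between the transformed relaxation modulus and creep compliance:

$$\begin{aligned} \hat{G}\left( p \right) ={{G}_{1}}{{p}^{\alpha -1}} \quad \Rightarrow \quad \hat{J}\left( p \right) =\frac{1}{{{G}_{1}}{{p}^{\alpha +1}}} \quad \Rightarrow \quad J\left( t \right) =\frac{1}{{{G}_{1}}}\frac{{{t}^{\alpha }}}{\varGamma \left( 1+\alpha \right) } \end{aligned}$$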

According to the arguments made by Bagley and Torvik, the condition \(\alpha =\beta \), which naturally results from the power-law stress relaxation kernel and the interconversion, is occasionally taken for granted as a norm. When the stress and strain relaxations in the model are each represented by a single fractional derivative, we obtain a single-order equation; this is an exception, valid only when one power-law term models the entire stress relaxation, and it reduces the model (61) to (62) with \(\alpha =\beta \).

3.2 Bagley-Torvik Equation in Terms of Polynomial Caputo-Fabrizio Operators

It is normal to have concerns about properly simulating dynamic processes in non-power law media. Now, we could create a constitutive relationship in the manner of Bagley and Torvik and demonstrate how to reduce the relaxation (57) to (61), which naturally results in the application of the Caputo-Fabrizio operator [17].

$$\begin{aligned} \sigma \left( t \right) ={{E}_{0}}\varepsilon \left( t \right) +{{E}_{1}} D_{t}^{\mu }\left[ \varepsilon \left( t \right) \right] , \quad 0<\mu <1 \end{aligned}$$
(64)

In (64) the fractional operator \(D_{t}^{\mu }\left[ \varepsilon \left( t \right) \right] \) is based on a memory kernel different from the power law; in this specific case we use an exponential memory [17], which leads to

$$\begin{aligned} \sigma \left( t \right) +\sum \limits _{i=1}^{N}{{{b}_{i}}\left[ {}^{CF}{{D}^{{{\beta }_{i}}}}\sigma \left( t \right) \right] }={{E}_{0}}\varepsilon \left( t \right) +\sum \limits _{i=1}^{N}{{{E}_{i}}\left[ {}^{CF}{{D}^{{{\alpha }_{i}}}}\varepsilon \left( t \right) \right] } \end{aligned}$$
(65)

The Prony decomposition’s fundamental principle is the source of the fractional order series, which has an equal number of terms on both sides of the equation. Moreover, the retardation times \({{\lambda }_{i}}\) and the relaxation times \(\tau _{i}\) obey the conditions [17, 20]

$$\begin{aligned} {{\tau }_{1}}<{{\lambda }_{1}}<....<{{\tau }_{i}}<{{\lambda }_{i}}<...{{\tau }_{N}}<{{\lambda }_{N}} \end{aligned}$$
(66)

In light of the connections between the relaxation (retardation) times and the fractional orders \({{\alpha }_{i}}={1}/{\left( 1+{{{\tau }_{i}}}/{{{t}_{0}}}\; \right) }\;\) and \({{\beta }_{i}}={1}/{\left( 1+{{{\lambda }_{i}}}/{{{t}_{0}}}\; \right) }\;\) we have the following requirement:

$$\begin{aligned} 0<\beta _{{1}}<\alpha _{{1}}<...<\beta _{{i}}<\alpha _{{i}}<...<\beta _{{N}}<\alpha _{{N}}<1 \end{aligned}$$
(67)

Further, a discrete relaxation spectrum (a series of exponentials) with an accumulation point at zero behaves like a power law for short times in the context of polymer rheology [16, 17, 22], that is:

$$\begin{aligned} \sum \limits _{i=0}^{\infty }{\exp \left( -{{i}^{\gamma }}t \right) }\rightarrow {{t}^{-\frac{1}{\gamma }}}, \quad t\rightarrow 0, \quad \gamma >1 \end{aligned}$$
(68)
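A quick numerical check of this asymptotic behaviour (illustrative only; the prefactor \(\varGamma \left( 1+1/\gamma \right) \) follows from replacing the sum by an integral):

```python
import numpy as np
from math import gamma

g = 2.0                                         # the exponent gamma in Eq. (68)
i = np.arange(0, 20000)
for t in (1e-2, 1e-3, 1e-4):
    s = np.exp(-i ** g * t).sum()
    # For t -> 0 the sum behaves like Gamma(1 + 1/gamma) * t^(-1/gamma)
    print(t, s * t ** (1.0 / g) / gamma(1.0 + 1.0 / g))   # ratio tends to 1
```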

Therefore, for \(t\rightarrow 0\) we may expect that (65) reduces, through (68), to (62). This explains how and when the model (65) reduces to the Bagley-Torvik equation with power-law-based derivatives: when the discrete relaxation spectrum exhibits its asymptotic short-time behaviour. When \(N=1\), we get [17]

$$\begin{aligned} \sigma (t)+b \left[ ^{CF}D^{\beta } \sigma (t)\right] =E_{0}\varepsilon (t) + E_{1}\left[ ^{CF}D^{\alpha } \varepsilon (t)\right] , \quad 1>\beta>\alpha >0 \end{aligned}$$
(69)

Equation (69) contains two fractional derivatives of different orders, as in (62), and this is the generalized Zener-type model. Moreover, for \(\alpha =\beta =1\), this model reduces to

$$\begin{aligned} \sigma +\tau _{\varepsilon }\frac{d\sigma }{d t}=M_{r} \left( \varepsilon +\tau _{ \sigma }\frac{d \varepsilon }{d t}\right) , \quad \tau _{\varepsilon }/\tau _{\sigma } < 1 \end{aligned}$$
(70)

with a relaxation modulus \(M_{r}\). In general, \(\tau < \lambda \) because of the basic causality requirement, meaning that the reaction occurs after its cause and not the other way around; hence the ratio \(\tau /\lambda \) is always less than 1.

4 Polynomial-Based Relationships Between Fractional Operators with Various Kernels

Now, the major goal of this section is to show that there are connections between popular fractional operators with single-memory kernels and the polynomial operator under discussion. The option to use various techniques (approximations) of the system responses (relaxation functions) enables demonstration of the primary notion of this chapter, because the memory kernels in the hereditary integrals match (or approximate) the relaxation response of the modelled system.

4.1 Riemann-Liouville and Caputo Formulations as Fractional Caputo-Fabrizio Polynomials

Let us have a look at the Riemann-Liouville construction for the integral (4) and the derivative (5). The Riesz distributions (6) or (8), which act as memory kernels, are the cores of these convolutions; that is, single power-law functions are used to model the system's relaxations (responses) based on this idea.

Since there is now a “competition” between the fractional operators with singular kernels and those with non-singular kernels, the exponential-sum approximation of the power-law function is quite an intriguing topic. The primary findings, following McLean's study [23] (see also [24]), are stated next.

Power-Law Function Approximation by a Sum of Exponentials. To address the power-law approximation by a sum of exponentials [23, 24], we can think about a convolution operator with a kernel \(k\left( t \right) \) [23] and specified points of time

$$\begin{aligned} K\left[ u\left( t \right) \right] =\int \limits _{0}^{t}{k\left( t-s \right) u\left( s \right) }ds \quad 0={{t}_{0}}<{{t}_{1}}<{{t}_{2}}\cdots <{{t}_{{{N}_{t}}}}=T \end{aligned}$$
(71)

The aim is to approximate the kernel by a sum of exponentials, thus allowing a sufficient accuracy to be attained with only a moderate number of terms (a moderate value of L) for a choice of \(\delta \) that should be smaller than the time interval \(\left( {{t}_{n}}-{{t}_{n-1}} \right) \) between the sampling points. If \(\Delta {{t}_{n}}\ge \delta \), it follows that \(\delta \le {{t}_{n}}-s\le T\) when \(0\le s\le {{t}_{n-1}}\). With \(k\left( t \right) ={{t}^{-\beta }}\), \(\beta >0\), the approximation sought is [23, 24]

$$\begin{aligned} k\left( t \right) \approx \sum \limits _{l=1}^{L}{{{w}_{l}}}\exp \left( -{{b}_{l}}t \right) \end{aligned}$$
(72)

To construct the approximation (72), the following integral representation of the power-law kernel \(k\left( t \right) ={{t}^{-\beta }}\), \(\beta >0\), is considered [23]

$$\begin{aligned} {{t}^{-\beta }}=\frac{1}{\varGamma \left( \beta \right) }\int \limits _{0}^{\infty }{{{e}^{-pt}}}{{p}^{\beta }}\frac{dp}{p} ,\quad t>0 , \quad \beta >0 \end{aligned}$$
(73)

By applying the trapezoidal rule with a step \(h>0\), the following approximation can be obtained [23, 24]

$$\begin{aligned} {{t}^{-\beta }}\approx \frac{1}{\varGamma \left( \beta \right) }\sum \limits _{n=-\infty }^{\infty }{{{w}_{n}}}\exp \left( -{{a}_{n}}t \right) ,\quad {{a}_{n}}={{e}^{hn}}, \quad {{w}_{n}}=h{{e}^{\beta hn}} \end{aligned}$$
(74)

with a relative error [23, 24]

$$\begin{aligned} \bar{\rho }\left( t \right) =1-\frac{{{t}^{\beta }}}{\varGamma \left( \beta \right) }\sum \limits _{n=-\infty }^{\infty }{{{w}_{n}}\exp \left( -{{a}_{n}}t \right) }, \quad 0<t<\infty \end{aligned}$$
(75)

When \(t\in \left[ \delta ,T \right] \) with \(0<\delta <T<\infty \) and a finite number of terms, we get [23, 24]

$$\begin{aligned} {{t}^{-\beta }}\approx \frac{1}{\varGamma \left( \beta \right) }\sum \limits _{n=-M}^{N}{{{w}_{n}}}\exp \left( -{{a}_{n}}t \right) , \quad \delta \le t\le T \end{aligned}$$
(76)

with a bounded error of approximation. In addition, the terms \({{a}_{n}}=\exp \left( nh \right) \) approach zero when \(n\rightarrow -\infty \).
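The construction (74)-(76) is easy to reproduce numerically. The sketch below uses illustrative values of h, M and N (the optimal choices and the rigorous error bounds are discussed in [23]) and reports the maximum relative error (75) on a logarithmic grid in \(\left[ \delta ,T \right] \).

```python
import numpy as np
from math import gamma

def expsum_power_law(beta, h=0.5, M=60, N=30):
    """Trapezoidal-rule exponential sum for t^(-beta), Eqs. (74)-(76):
    t^(-beta) ~ (1/Gamma(beta)) * sum_n w_n exp(-a_n t), a_n = e^(h n), w_n = h e^(beta h n)."""
    n = np.arange(-M, N + 1)
    return np.exp(h * n), h * np.exp(beta * h * n)     # rates a_n, weights w_n

beta = 0.75
a, w = expsum_power_law(beta)
t = np.logspace(-4, 1, 200)                            # delta <= t <= T
approx = (w[None, :] * np.exp(-np.outer(t, a))).sum(axis=1) / gamma(beta)
rel_err = np.abs(1.0 - approx * t ** beta)             # the relative error of Eq. (75)
print(rel_err.max())
```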

The approximation (76) is a finite sum with many terms and small exponents \({{a}_{n}}\). The task then is to develop more efficient approximations with fewer terms N and acceptable accuracy, i.e., to determine 2N parameters from 2N conditions, such that

$$\begin{aligned} g\left( t \right) \approx \sum \limits _{p=1}^{N}{{{{\tilde{w}}}_{p}}}\exp \left( -{{{\tilde{a}}}_{p}}t \right) , \quad 2N-1<L, {{\tilde{w}}_{l}}>0 , \quad {{\tilde{a}}_{l}}>0 \end{aligned}$$
(77)

The test for \(\beta ={3}/{4}\;\) (carried out with \(\delta ={{10}^{-6}}\) and \(T=10\)) revealed that with \(M=65\) and \(N=36\) the relative error of approximation is \(\le 0.92\times {{10}^{-8}}\) for \(\delta \le t\le T\). The data summarized in Table 1 of [23] indicate that for \(L=65\) and \(N=6\) the maximum relative error is about \(1.66\times {{10}^{-9}}\). The same maximum relative error \(\left( 1.66\times {{10}^{-9}} \right) \) appears when \(L=62\) and \(N=3\), as well as for \(L=56\) and \(N=2\) (two exponential terms). In all these cases the condition \(2N-1<L\) is obeyed. A similar analysis was thoroughly performed in [10]. Further, the appropriate coefficients in (72) can be scaled as [25, 26]

$$\begin{aligned} {{b}_{l}}={{{b}_{0}}}/{{{q}^{l-1}}}, \quad {{w}_{l}}={{C}_{\beta }}\left( q \right) {{k}_{\beta }}\frac{b_{l}^{\beta }}{\varGamma \left( 1-\beta \right) } \end{aligned}$$
(78)

where \({{b}_{l}}\) are the inverse relaxation times, \({{w}_{l}}\) are the corresponding weights, q is a scaling parameter and \({{C}_{\beta }}\left( q \right) \) is a fitting dimensionless constant.

In this way, the power-law can be approximated over about \(r=L{{\log }_{10}}q-2\) temporal decades, where q is a scaling parameter related to the inverse relaxation times (rate constants) \({{b}_{l}}={{{b}_{0}}}/{{{q}^{l-1}}}\), between two limits [25, 26]: \({{\tau }_{l}}={1}/{{{b}_{0}}}\;<t<{{\tau }_{h}}={{\tau }_{l}}{{q}^{L-1}}\); these restrictions always apply in physical situations. Hence, as mentioned by Goychuk [26], this approximation is not only natural but in some cases desirable (see the comments in [27], where the effects of fractional kernels on the types of differentiable functions and the emerging problems are discussed). According to Goychuk [26], if the scaling parameter q is properly selected, even with a decade scaling \(q=10\), approximations with \(1 \% \) accuracy can be developed. For instance, Goychuk's example [26] shows a nice fit of \(t^{-0.5}\) over 14 time decades with a sum of 16 exponential terms.
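A rough numerical illustration of this decade scaling follows; it is purely a sketch, since the one-point calibration of \(C_{\beta }(q)\) below and the chosen values of \(b_{0}\), q and L are assumptions, not Goychuk's fitting procedure.

```python
import numpy as np
from math import gamma

beta, q, L, b0 = 0.5, 10.0, 16, 1.0e6
b = b0 / q ** np.arange(L)                    # geometrically scaled rate constants b_l
w = b ** beta / gamma(1.0 - beta)             # un-normalized weights ~ b_l^beta / Gamma(1 - beta)

t = np.logspace(-5, 7, 300)                   # roughly L*log10(q) - 2 usable decades
raw = (w[None, :] * np.exp(-np.outer(t, b))).sum(axis=1)
target = t ** (-beta) / gamma(1.0 - beta)     # the power-law kernel being mimicked
C = target[150] / raw[150]                    # crude one-point calibration of C_beta(q)
print(np.abs(C * raw / target - 1.0)[50:250].max())   # relative deviation inside the covered decades
```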

Riemann-Liouville Operators Approximated by Fractional Caputo-Fabrizio Polynomials. Therefore, the Riemann-Liouville integral (4) can be approximated by fractional Caputo-Fabrizio polynomials: if the kernel function \({{t}^{-\beta }}\), where \(\beta =\left( 1-\alpha \right) <1\) and therefore \(\alpha =1-\beta \), is approximated by a series (76) or (77), then we get an approximated Riemann-Liouville integral, namely

$$\begin{aligned} \begin{aligned} {{I}^{\alpha }}f\left( t \right) ={{D}^{-\alpha }}f\left( t \right) \approx \frac{1}{\varGamma \left( \alpha \right) }\int \limits _{0}^{t}{g\left( t-s \right) f\left( s \right) }ds \\ =\frac{1}{\varGamma \left( \alpha \right) }\int \limits _{0}^{t}{\left[ \sum \limits _{p=1}^{N}{{{{\tilde{\omega }}}_{p}}\exp \left( -{{{\tilde{a}}}_{p}}\left( t-s \right) \right) } \right] f\left( s \right) }ds \end{aligned} \end{aligned}$$
(79)

and an approximated Riemann-Liouville derivative, that is

$$\begin{aligned} \begin{aligned} {}^{RL}D_{t}^{\alpha }f\left( t \right) \approx \frac{1}{\varGamma \left( \beta \right) }\frac{d}{dt}\int \limits _{0}^{t}{g\left( t-s \right) f\left( s \right) }ds\\ =\frac{1}{\varGamma \left( \beta \right) }\frac{d}{dt}\int \limits _{0}^{t}{\left[ \sum \limits _{p=1}^{N}{{{{\tilde{\omega }}}_{p}}\exp \left( -{{{\tilde{a}}}_{p}}\left( t-s \right) \right) } \right] f\left( s \right) }ds, \quad \beta =1-\alpha \end{aligned} \end{aligned}$$
(80)

In both approximations, the discrete fractional orders are related to the rate coefficients \({{\tilde{a}}_{p}}\), taken here as dimensionless inverse relaxation times, that is \({{\tilde{a}}_{p}}={1}/{{{{\bar{\tau }}}_{rp}}}\), where \({{\bar{\tau }}_{rp}}={{{\tau }_{rp}}}/{{{t}_{0}}}\;\) (\({{t}_{0}}\) is the macroscopic time scale); see Eq. (43) as an example of this relation:

$$\begin{aligned} {{\tilde{a}}_{p}}=\frac{{{\alpha }_{p}}}{1-{{\alpha }_{p}}} \end{aligned}$$
(81)

Caputo Derivative Approximated by Fractional Caputo-Fabrizio Polynomials. Further, using the same approximation for the Caputo derivative we get

$$\begin{aligned} {}^{C}{{D}^{\beta }}f(t) \approx \frac{1}{\varGamma \left( 1-\beta \right) }\int \limits _{0}^{t}{\left[ \sum \limits _{p=1}^{N}{{{{\tilde{\omega }}}_{p}}\exp \left( -{{{\tilde{a}}}_{p}}\left( t-s \right) \right) } \right] }\frac{df\left( s \right) }{ds}ds \end{aligned}$$
(82)

where \( {\tilde{a}}_{p}\) are defined by (81). Changing the order of the integration and summation in (82) we get (see the same operation in (42))

$$\begin{aligned} \begin{aligned} {}^{C}{{D}^{\beta }}f\left( t \right) \approx \frac{1}{\varGamma \left( 1-\beta \right) }\sum \limits _{p=1}^{N}{\int \limits _{0}^{t}{\left[ {{{\tilde{\omega }}}_{p}}\exp \left( -{{{\tilde{a}}}_{p}}\left( t-s \right) \right) \right] }\frac{df\left( s \right) }{ds}ds}\\ =\frac{1}{\varGamma \left( 1-\beta \right) }\sum \limits _{p=1}^{N}{{}^{CF}D_{t}^{{{\alpha }_{p}}}f\left( t \right) } \end{aligned} \end{aligned}$$
(83)

This conclusion further illustrates the essential principle of the fractional polynomial approximation: derivatives with the power-law kernel can be approximated by a finite sum of Caputo-Fabrizio derivatives. Regarding the Riemann-Liouville structures, the conclusion is not immediately apparent. It is straightforward to demonstrate the reasonableness of the polynomial approximations, though, because a simple integration by parts makes it possible to see the links between the Caputo and Riemann-Liouville derivatives [6].
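The approximation (82)-(83) can be tested directly: below, the Caputo derivative of \(f(t)={{t}^{2}}\) is evaluated through the exponential-sum (Caputo-Fabrizio type) representation of the kernel and compared with the exact value \(2{{t}^{2-\beta }}/\varGamma \left( 3-\beta \right) \). This is only a sketch; the grid, the kernel parameters and the crude rectangle rule (which dominates the error near \(s=t\)) are all illustrative choices.

```python
import numpy as np
from math import gamma

# Exponential-sum approximation of the weakly singular kernel (t - s)^(-beta),
# reusing the trapezoidal construction (74)-(76)
beta = 0.4
h, M, N = 0.5, 60, 30
n = np.arange(-M, N + 1)
a_p = np.exp(h * n)                            # rate coefficients a_p
w_p = h * np.exp(beta * h * n)                 # weights w_p (to be divided by Gamma(beta))

t = np.linspace(1e-3, 1.0, 400)
dfds = 2.0 * t                                 # f(t) = t^2, hence f'(s) = 2s
dt = t[1] - t[0]

# Eq. (82): the Caputo derivative as a weighted sum of exponential (CF-type) convolutions
caputo_approx = np.zeros_like(t)
for k in range(1, t.size):
    kern = (w_p[None, :] * np.exp(-np.outer(t[k] - t[:k], a_p))).sum(axis=1) / gamma(beta)
    caputo_approx[k] = np.sum(kern * dfds[:k]) * dt / gamma(1.0 - beta)

caputo_exact = 2.0 * t ** (2.0 - beta) / gamma(3.0 - beta)
print(abs(caputo_approx[-1] - caputo_exact[-1]))   # modest error, dominated by the rectangle rule
```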

Generally speaking, the Caputo construction, which is seen in other fractional operators with non-singular kernels, enables a more convincing demonstration of the rationality in approximation by fractional polynomials.

4.2 Fractional Operator with a Mittag-Leffler Kernel and Fractional Caputo-Fabrizio Polynomials

Mittag-Leffler Function Approximation by Exponential Sums. We shall now briefly discuss the exponential-sum approximation of the Mittag-Leffler function [28]

$$\begin{aligned} {{E}_{\alpha }}\left( -{{t}^{\alpha }} \right) =\sum \limits _{k=0}^{\infty }{\frac{{{\left( -1 \right) }^{k}}{{t}^{k\alpha }}}{\varGamma \left( \alpha k+1 \right) }}, \quad 0<\alpha <1, \quad t>0 \end{aligned}$$
(84)

as

$$\begin{aligned} {{E}_{\alpha }}\left( -{{t}^{\alpha }} \right) \approx \sum \limits _{i=1}^{N}{{{w}_{i}}\exp \left( -{{p}_{i}}t \right) }, \quad 0<\alpha <1, \quad t>0 \end{aligned}$$
(85)

Following Lam [28] (see more details in [24]) we have

$$\begin{aligned} {{E}_{\alpha }}\left( -{{t}^{\alpha }} \right) =\int \limits _{0}^{\infty }{\frac{{\text {sinc}}\left( \alpha \pi \right) }{{{x}^{2}}+2\cos \left( \alpha \pi \right) x+1}}\exp \left( -{{x}^{{1}/{\alpha }\;}}t \right) dx \end{aligned}$$
(86)

Then, expressing the integral in (86) as a sum of sub-integrals [28]

$$\begin{aligned} {{E}_{\alpha }}\left( -{{t}^{\alpha }} \right) =\int \limits _{0}^{{{b}^{-N}}}{+\sum \limits _{j=1}^{N}{\int \limits _{{{b}^{-j}}}^{{{b}^{-j+1}}}{+}}}\sum \limits _{j=1}^{M}{\int \limits _{{{b}^{j-1}}}^{{{b}^{j}}}{+}}\int \limits _{{{b}^{M}}}^{\infty }{{}} \end{aligned}$$
(87)

allows the Gauss-Legendre quadrature to be applied in each sub-interval. The result is [28]

$$\begin{aligned} {{E}_{\alpha }}\left( -{{t}^{\alpha }} \right) \approx S\left( t \right) =\sum \limits _{j=1}^{N+M}{\sum \limits _{i=1}^{{{n}_{j}}}{{{w}_{ij}}}}\exp \left( -{{s}_{ij}}t \right) \end{aligned}$$
(88)

with

$$\begin{aligned} {{w}_{ij}}=\omega _{ij}^{\left( {{n}_{j}} \right) }\frac{{\text {sinc}}\left( \alpha \pi \right) }{x_{ij}^{2}+2\cos \left( \alpha \pi \right) {{x}_{ij}}+1}, \quad {{s}_{ij}}={{\left[ x_{ij}^{\left( {{n}_{j}} \right) } \right] }^{{1}/{\alpha }\;}}, \quad {\text {sinc}}\left( x \right) =\frac{\sin \left( x \right) }{x} \end{aligned}$$
(89)

where \(x_{ij}^{\left( {{n}_{j}} \right) }\) and \(\omega _{ij}^{\left( {{n}_{j}} \right) }\) are the Gauss-Legendre quadrature nodes and weights of order \({{n}_{j}}\) on the jth interval \(\left[ {{b}^{j-N}},{{b}^{j-N+1}} \right] \).
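A compact implementation of this construction is sketched below; the base b, the numbers of sub-intervals N and M, and the quadrature order are illustrative placeholders (the values and the error control used in [28] differ), and the result is compared with the truncated series (84) at small times.

```python
import numpy as np
from math import gamma, pi, sin, cos

def ml_exp_sum(alpha, b=4.0, N=6, M=6, n_nodes=8):
    """Exponential-sum approximation (88) of E_alpha(-t^alpha): Gauss-Legendre quadrature
    applied to the integral representation (86) on the sub-intervals of Eq. (87).
    Returns the weights w_ij and the rates s_ij."""
    edges = np.concatenate(([0.0], b ** np.arange(-N, M + 1, dtype=float)))
    x_ref, w_ref = np.polynomial.legendre.leggauss(n_nodes)
    sinc = sin(alpha * pi) / (alpha * pi)
    weights, rates = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        x = 0.5 * (hi - lo) * x_ref + 0.5 * (hi + lo)          # mapped nodes x_ij
        w = 0.5 * (hi - lo) * w_ref                            # mapped quadrature weights
        weights.append(w * sinc / (x ** 2 + 2.0 * cos(alpha * pi) * x + 1.0))
        rates.append(x ** (1.0 / alpha))                       # s_ij = x_ij^(1/alpha)
    return np.concatenate(weights), np.concatenate(rates)

alpha = 0.7
w, s = ml_exp_sum(alpha)
t = np.array([0.05, 0.5, 1.0])
approx = (w[None, :] * np.exp(-np.outer(t, s))).sum(axis=1)
series = np.array([sum((-tt ** alpha) ** k / gamma(1.0 + alpha * k) for k in range(80)) for tt in t])
print(np.abs(approx - series))        # differences should be small for these illustrative settings
```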

A Constitutive Equation with Mittag-Leffler Memory Approximated by Caputo-Fabrizio Polynomials. Using the desired presentation (85) and the result (88), it is feasible, as an approximation, to give a single-convolution constitutive equation with the Mittag-Leffler function as a memory kernel, that is

$$\begin{aligned} G\left( t \right) ={{E}_{{{\alpha }_{k}}}}\left[ -{{\left( \frac{t}{{{\tau }_{k}}} \right) }^{{{\alpha }_{k}}}} \right] \approx {{E}_{\alpha }}\left( -{{t}^{\alpha }} \right) \approx \sum \limits _{j=1}^{N+M}{\sum \limits _{i=1}^{{{n}_{j}}}{{{w}_{ij}}}}\exp \left( -{{s}_{ij}}t \right) \end{aligned}$$
(90)

where simple relations, \({{\alpha }_{ij}}={1}/{\left( 1+{{{\tau }_{ij}}}/{{{t}_{0}}}\; \right) }\;\), link the rates \({{s}_{ij}}={1}/{{{\tau }_{ij}}}\;\) (inverse relaxation times) to the fractional orders \({{\alpha }_{ij}}\) of the Caputo-Fabrizio operators.

This replacement of the Mittag-Leffler kernel with a Prony’s series in the convolution integral results in [24]

$$\begin{aligned} \begin{aligned}&\sigma \left( t \right) ={{G}_{\infty }}+\int \limits _{0}^{t}{{{E}_{{{\alpha }_{k}}}}\left[ -{{\left( \frac{t-s}{{{\tau }_{k}}} \right) }^{{{\alpha }_{k}}}} \right] \frac{d}{ds}}\varepsilon \left( s \right) ds\Rightarrow \sigma \left( t \right) \\&\approx {{G}_{\infty }}+\int \limits _{0}^{t}{\left[ \sum \limits _{j=1}^{N+M}{\sum \limits _{i=1}^{{{n}_{j}}}{{{w}_{ij}}}}\exp \left( -{{s}_{ij}}\left( t-s \right) \right) \right] \frac{d}{ds}}\varepsilon \left( s \right) ds \\ \end{aligned} \end{aligned}$$
(91)

As a result of the integration and summation orders being reversed in (91), [24] we get

$$\begin{aligned} \sigma \left( t \right) \approx {{G}_{\infty }}+\sum \limits _{j=1}^{N+M}{\sum \limits _{i=1}^{{{n}_{j}}}{\left[ \int \limits _{0}^{t}{{{w}_{ij}}\exp \left( -{{s}_{ij}}\left( t-s \right) \right) \frac{d}{ds}\varepsilon \left( s \right) ds} \right] }} \end{aligned}$$
(92)

Now, we may express (92) in terms of Caputo-Fabrizio operators with fractional orders \({{\alpha }_{ij}}\) as [24]

$$\begin{aligned} \sigma \left( t \right) \approx {{G}_{\infty }}+\sum \limits _{j=1}^{N+M}{\sum \limits _{i=1}^{{{n}_{j}}}{\left[ {{w}_{ij}}\left( 1-{{\alpha }_{ij}} \right) D_{t}^{{{\alpha }_{ij}}}\varepsilon \left( t \right) \right] }} \end{aligned}$$
(93)

This is comparable to the stress relaxation function that the Prony series approximates directly. Such an approach may facilitate the calculation techniques and avoid the problems with the slow convergence of the Mittag-Leffler function. Moreover, from a practical point of view, when high precision in the approximation (requiring too many terms in the sums defined by (88)-(89)) is not attainable due to the restrictions of the experimental techniques, a lower accuracy of approximation results in a smaller number of Prony terms approximating the Mittag-Leffler function. To this end, these comments only draw a perspective that needs thorough investigations.

5 Final Remarks

The author’s view on fractional polynomial operators is presented in this chapter. It starts with Koeller’s theory and is then expanded to include some recent advancements in fractional calculus, particularly the non-singular kernel operators. This initial step makes it possible to connect operators with singular and non-singular kernels, which in some situations with a practical orientation may ease the computations. However, the primary goal is to demonstrate that all new operators with memory kernels based on entire functions (of polynomial type, converging in the entire complex plane), like the Mittag-Leffler operators (it is simple to develop this line also for the Prabhakar, Rabotnov, and other functions, albeit they are not provided here), are in reality polynomial operators.

From this vantage point, and at the date and time that such a position is taken, we may mention the following with satisfaction: we may occasionally unearth inspired ideas by looking at what has already been done, even though they were not acknowledged in the original source. The ability to discern the invisible in previously seen results is what pushes science into new frontiers.