1 Introduction

There are many types of fuzzy derivatives and integrals in the literature. Before discussing fuzzy integro-differential equations and their associated numerical algorithms, it is appropriate to give a brief introduction to derivatives and integrals of fuzzy functions. Puri and Ralescu (1983) first introduced the concept of Hukuhara differentiability. The approach based on the Hukuhara derivative has the disadvantage that a differentiable function has a support interval of increasing length (Diamond 2000), which is not always a realistic assumption. Seikkala (1987) then introduced the Seikkala derivative, which suffers from the same drawback. To overcome this shortcoming, Bede and Gal (2005) introduced strongly generalized differentiability and weakly generalized differentiability of fuzzy number-valued functions. These two concepts resolve the above-mentioned deficiency of the Hukuhara and Seikkala derivatives, but they have a disadvantage of their own: a fuzzy differential equation may fail to have a unique solution. Bede and Stefanini (2009) then introduced the fuzzy gH-derivative, which coincides with the concept of weakly generalized differentiability, and later (Bede and Stefanini 2013) the fuzzy g-derivative, which is the most general among the previous definitions; in fact, the fuzzy gH-derivative and the fuzzy g-derivative coincide whenever the gH-difference exists. There are other works on fuzzy derivatives as well (Chalco-Cano and Román-Flores 2008, 2009), which redefine the fuzzy derivative by keeping the first two cases of strongly generalized differentiability and discarding the last two.
Biswas and Roy (2018a) introduced the concept of the gS-derivative, which is equivalent to the lateral H-derivative but, compared with the definition of the lateral H-derivative, is easier to understand and apply.

There are also various definitions of fuzzy integration in the literature. Seikkala (1987) defined the fuzzy integral, which is the same as that proposed by Dubois and Prade (1982a, b). Diamond and Kloeden (2000) introduced the fuzzy Aumann integral. Gal (2000) introduced the fuzzy Riemann integral, which is an alternative way to define an Aumann-type integral. Wu and Gong (2001) introduced the Henstock integral of fuzzy number-valued functions. For a continuous fuzzy number-valued function, the fuzzy Aumann integral (Diamond and Kloeden 2000), the fuzzy Riemann integral (Gal 2000) and the fuzzy Henstock integral (Wu and Gong 2001) all coincide.

A considerable amount of work has been done on fuzzy integro-differential equations. Balasubramaniam and Muralisankar (2001), Alikhani et al. (2012), Zeinali et al. (2013), Vua et al. (2014), Donchev et al. (2014), Alikhani and Bahrami (2015) and Zeinali (2017) have all studied the existence and uniqueness of solutions for different types of fuzzy integro-differential equations. Various analytical and numerical procedures for solving fuzzy integro-differential equations are available in the literature (Allahviranloo et al. 2012; Matinfar et al. 2013; Alikhani and Bahrami 2015; Zeinali 2017; Otadi and Mosleh 2016; Sathiyapriya and Narayanamoorthy 2017; Biswas and Roy 2018a, b). Allahviranloo et al. (2012) presented the method of extending the 0-cut and 1-cut solutions for fuzzy integro-differential equations with a trapezoidal fuzzy initial value. Matinfar et al. (2013) presented the variational iteration method, and Biswas and Roy (2018a) the differential transform method, for fuzzy Volterra integro-differential equations. Alikhani and Bahrami (2015) presented the method of upper and lower solutions, Otadi and Mosleh (2016) a method based on Newton–Cotes formulas with positive coefficients, Biswas and Roy (2018b) the Adomian decomposition method, and Sathiyapriya and Narayanamoorthy (2017) an extended form of the homotopy perturbation method for the approximate solution of fuzzy integro-differential equations.

In this work, we consider a fuzzy integro-differential equation that may be linear or nonlinear and of Volterra, Fredholm or mixed Volterra–Fredholm type. Such a general fuzzy integro-differential equation has not previously been treated in the literature, either analytically or numerically. Kheybari et al. (2017a, b) presented a semianalytical method for crisp integro-differential equations; here, we modify and extend the same method to fuzzy integro-differential equations.

The organization of this paper is as follows. In Sect. 2, some mathematical preliminaries needed to understand fuzzy derivatives and integrals are given. Section 3 contains the formulation of the problem. In Sect. 4, the algorithm for solving the fuzzy integro-differential equation is presented. Section 5 contains the minimization technique for the residual functions. A brief error and convergence analysis of the proposed method is presented in Sect. 6. Applications of the proposed method are presented in Sect. 7, where some test problems are investigated. Finally, a brief conclusion is given in Sect. 8.

2 Preliminaries

Definition 2.1

Let E be the set of all upper semicontinuous normal convex fuzzy numbers with bounded \( \alpha \)-cut intervals. This means that if \( v \in E \), then its \( \alpha \)-cut set is a closed bounded interval, denoted by

$$ v_{\alpha } = [v_{1} (\alpha ),v_{2} (\alpha )]. $$

For arbitrary \( u_{\alpha } = [u_{1} (\alpha ),u_{2} (\alpha )] \), \( v_{\alpha } = [v_{1} (\alpha ),v_{2} (\alpha )] \) and \( k \ge 0 \), addition \( u_{\alpha } + v_{\alpha } \) and multiplication by k are defined by \( (u + v)_{1} (\alpha ) = u_{1} (\alpha ) + v_{1} (\alpha ) \), \( (u + v)_{2} (\alpha ) = u_{2} (\alpha ) + v_{2} (\alpha ) \), \( (ku)_{1} (\alpha ) = ku_{1} (\alpha ) \), \( (ku)_{2} (\alpha ) = ku_{2} (\alpha ) \). Each \( y \in R \) can be regarded as a fuzzy number defined by

$$ \mu_{y} (x) = \left\{ {\begin{array}{*{20}c} 1 & {{\text{if}}\;x = y,} \\ 0 & {{\text{if}}\;x \ne y.} \\ \end{array} } \right. $$

The Hausdorff distance \( D:E \times E \to R^{ + } \cup \{ 0\} \) between fuzzy numbers is given by

$$ D(u,v) = \sup_{\alpha \in [0,1]} \max \{ \left| {u_{1} (\alpha ) - v_{1} (\alpha )} \right|,\left| {u_{2} (\alpha ) - v_{2} (\alpha )} \right|\} . $$

It is easy to see that D is a metric on E with the following properties (see Puri and Ralescu 1983):

  1. (i)

    \( D(u \oplus w,v \oplus w) = D(u,v),\forall u,v,w \in E, \)

  2. (ii)

    \( D(k \odot u,k \odot v) = \left| k \right|D(u,v),\forall k \in R,\quad u,v \in E \)

  3. (iii)

    \( D(u \oplus v,w \oplus e) \le D(u,w) + D(v,e),\forall u,v,w,e \in E, \)

  4. (iv)

    (E, D) is a complete metric space.
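As an illustration, the metric D can be approximated numerically by sampling α-cuts. The following Python sketch is our own illustration, assuming fuzzy numbers are triangular and encoded as (left, peak, right) triples, a representation not used elsewhere in this paper.

```python
import numpy as np

# Sketch: approximate D(u, v) = sup_alpha max(|u1 - v1|, |u2 - v2|)
# for triangular fuzzy numbers given as (left, peak, right) triples.
def alpha_cut(tri, alpha):
    """alpha-cut [v1(alpha), v2(alpha)] of a triangular fuzzy number."""
    left, peak, right = tri
    return left + alpha * (peak - left), right - alpha * (right - peak)

def hausdorff_D(u, v, n_alpha=101):
    """Take the supremum over a discrete alpha-grid."""
    d = 0.0
    for alpha in np.linspace(0.0, 1.0, n_alpha):
        u1, u2 = alpha_cut(u, alpha)
        v1, v2 = alpha_cut(v, alpha)
        d = max(d, abs(u1 - v1), abs(u2 - v2))
    return d
```

For u = (0, 1, 2) and v = (1, 2, 3), every α-cut of v is the α-cut of u shifted by 1, so D(u, v) = 1.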

Definition 2.2

Let \( f:R \to E \) be a fuzzy-valued function. f is said to be continuous at \( t_{0} \in R \) if for every ε > 0 there exists \( \delta > 0 \) such that \( \left| {t - t_{0} } \right| < \delta \Rightarrow D(f(t),f(t_{0} )) < \varepsilon \); f is continuous if it is continuous at every point of R.

Definition 2.3

(Puri and Ralescu 1983) Let \( x,y \in E \). If there exists \( z \in E \) such that \( x = y + z \), then z is called the H-difference of x and y, and it is denoted by \( x{ \ominus }y. \)

Definition 2.4

(Puri and Ralescu 1983) A function \( f:(a,b) \to E \) is called H-differentiable at \( x_{0} \in (a,b) \) if, for \( h > 0 \) sufficiently small, the H-differences \( f(x_{0} + h){ \ominus }f(x_{0} ) \), \( f(x_{0} ){ \ominus }f(x_{0} - h) \) exist together with an element \( f^{\prime}(x_{0} ) \in E \) such that

$$ \mathop {\lim }\limits_{h \to 0^{ + } } \frac{{f(x_{0} + h){ \ominus }f(x_{0} )}}{h} = \mathop {\lim }\limits_{h \to 0^{ + } } \frac{{f(x_{0} ){ \ominus }f(x_{0} - h)}}{h} = f^{\prime}(x_{0} ) $$

Definition 2.5

(Seikkala 1987) The Seikkala derivative (S-derivative) at \( x_{0} \in (a,b) \) of a fuzzy number-valued function \( f:(a,b) \to E \) is defined by \( f^{\prime}_{\alpha } (x_{0} ) = [f^{\prime}_{1} (x_{0} ,\alpha ),f^{\prime}_{2} (x_{0} ,\alpha )],\;0 \le \alpha \le 1, \) provided that it defines a fuzzy number \( f^{\prime}(x_{0} ) \in E. \)

Definition 2.6

(Chalco-Cano and Román-Flores 2008) Let \( f:(a,b) \to E \) and \( x_{0} \in (a,b) \). One says f is (1)-differentiable at \( x_{0} \) if there exists an element \( f^{\prime}(x_{0} ) \in E \) such that for all \( h > 0 \) sufficiently small there exist \( f(x_{0} + h){ \ominus }f(x_{0} ) \), \( f(x_{0} ){ \ominus }f(x_{0} - h) \) and the limits (in the metric D) \( \mathop {\lim }\limits_{{h \to 0^{ + } }} \frac{{f(x_{0} + h){ \ominus }f(x_{0} )}}{h} = \mathop {\lim }\limits_{{h \to 0^{ + } }} \frac{{f(x_{0} ){ \ominus }f(x_{0} - h)}}{h} = f^{\prime}(x_{0} ). \) f is (2)-differentiable at \( x_{0} \) if there exists an element \( f^{\prime}(x_{0} ) \in E \) such that for all \( h < 0 \) sufficiently small there exist \( f(x_{0} + h){ \ominus }f(x_{0} ) \), \( f(x_{0} ){ \ominus }f(x_{0} - h) \) and the limits (in the metric D)

$$ \mathop {\lim }\limits_{{h \to 0^{ - } }} \frac{{f(x_{0} + h){ \ominus }f(x_{0} )}}{h} = \mathop {\lim }\limits_{{h \to 0^{ - } }} \frac{{f(x_{0} ){ \ominus }f(x_{0} - h)}}{h} = f^{\prime}(x_{0} ). $$

Definition 2.7

(Biswas and Roy 2018a) Let \( f:(a,b) \to E \) and \( x_{0} \in (a,b) \). Then, the generalized Seikkala derivative (gS-derivative) of \( f(x) \) at \( x_{0} \) is denoted by \( f^{\prime}(x_{0} ) \) and defined as follows:

  1. (i)

    if \( f^{\prime}_{1} (x_{0} ,\alpha ),f^{\prime}_{2} (x_{0} ,\alpha ) \) exist and \( f^{\prime}_{1} (x_{0} ,\alpha ) \le f^{\prime}_{2} (x_{0} ,\alpha ) \), then \( f^{\prime}_{\alpha } (x_{0} ): = [f^{\prime}_{1} (x_{0} ,\alpha ),f^{\prime}_{2} (x_{0} ,\alpha )] \)

  2. (ii)

    if \( f^{\prime}_{1} (x_{0} ,\alpha ),f^{\prime}_{2} (x_{0} ,\alpha ) \) exist and \( f^{\prime}_{1} (x_{0} ,\alpha ) \ge f^{\prime}_{2} (x_{0} ,\alpha ) \), then \( f^{\prime}_{\alpha } (x_{0} ): = [f^{\prime}_{2} (x_{0} ,\alpha ),f^{\prime}_{1} (x_{0} ,\alpha )] \)
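To illustrate Definition 2.7, the following hedged Python sketch differentiates the two α-cut endpoint functions numerically (by central differences) and reorders the results into a valid interval. The example function f(x) = k ⊙ (3 − x) with triangular k = (0, 1, 2) is our own illustrative choice, not one from the paper.

```python
def central_diff(g, x, h=1e-6):
    """Numerical derivative of a real-valued endpoint function."""
    return (g(x + h) - g(x - h)) / (2.0 * h)

def gS_derivative(f1, f2, x0, alpha, h=1e-6):
    """alpha-cut of the gS-derivative at x0: differentiate both endpoint
    functions and reorder so that the result is a valid interval
    (cases (i)/(ii) of Definition 2.7)."""
    d1 = central_diff(lambda x: f1(x, alpha), x0, h)
    d2 = central_diff(lambda x: f2(x, alpha), x0, h)
    return (d1, d2) if d1 <= d2 else (d2, d1)

# Example: f(x) = k (.) (3 - x) with triangular k = (0, 1, 2), so for x < 3
# the endpoint functions are f1(x, alpha) = alpha*(3 - x) and
# f2(x, alpha) = (2 - alpha)*(3 - x).
f1 = lambda x, al: al * (3 - x)
f2 = lambda x, al: (2 - al) * (3 - x)
```

Here \( f^{\prime}_{1} = -\alpha \ge f^{\prime}_{2} = -(2-\alpha) \), so case (ii) applies and the α-cut of the gS-derivative is \( [-(2-\alpha), -\alpha] \).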

Remark 2.1

(Biswas and Roy 2018a) This gS-derivative is well defined because if f(x) is gS-differentiable at \( x_{0} \in [a,b] \) in the form of (i) and (ii) both, then \( f^{\prime}_{1} (x_{0} ,\alpha ) = f^{\prime}_{2} (x_{0} ,\alpha ) \), i.e., \( f^{\prime}(x_{0} ) \in R \subset E. \)

Theorem 2.1

(Biswas and Roy 2018a) The following definitions of fuzzy derivative are equivalent

  1. (a)

    lateral H-derivatives (Definition 2.6)

  2. (b)

    generalized S-derivative (Definition 2.7)

Definition 2.8

(Diamond and Kloeden 2000) A mapping \( f:[a,b] \to E\, \) is said to be strongly measurable if the \( \alpha \)-cut set mappings \( [f(x)]_{\alpha } \) are measurable for all \( \alpha \in [0,1]. \) Here, measurable means Borel measurable.

A fuzzy-valued function \( f:[a,b] \to E\, \) is called integrably bounded if there exists an integrable function \( h:[a,b] \to {\mathbb{R}} \) such that \( \left\| {f(t)} \right\|_{F} \le h(t) \) for all \( t \in [a,b] \), where the norm of a fuzzy number is \( \left\| {f(t)} \right\|_{F} = D(f(t),0). \)

A strongly measurable and integrably bounded fuzzy-valued function is called integrable. The fuzzy Aumann integral of \( f:[a,b] \to E\, \) is defined \( \alpha \)-cut-wise by the equation

$$ \left[({\text{FA}})\int\limits_{a}^{b} {f(x){\text{d}}x} \right]_{\alpha } = \int\limits_{a}^{b} {[f(x)]_{\alpha } {\text{d}}x} ,\quad \alpha \in [0,1]. $$
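The α-cut-wise definition translates directly into computation: one integrates the two endpoint functions separately. A minimal sketch follows, using the trapezoidal rule; the endpoint functions in the example are our own illustrative assumptions.

```python
import numpy as np

def fuzzy_integral_alpha(f1, f2, a, b, alpha, n=2001):
    """alpha-cut of the fuzzy Aumann integral: integrate the lower and
    upper endpoint functions of [f(x)]_alpha over [a, b] (trapezoidal rule)."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return float(w @ f1(x, alpha)), float(w @ f2(x, alpha))

# Example: f(x) = k (.) x with triangular k = (0, 1, 2) on [0, 1]; the
# alpha-cut of the integral is [alpha/2, (2 - alpha)/2].
lo, hi = fuzzy_integral_alpha(lambda x, al: al * x,
                              lambda x, al: (2 - al) * x, 0.0, 1.0, 0.5)
```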

The following Riemann-type integral presents an alternative to Aumann-type definition.

Definition 2.9

(Gal 2000) A function \( f:[a,b] \to E\, \) is called Riemann integrable on \( [a,b] \) if there exists \( I \in E \) with the property that \( \forall \varepsilon > 0,\,\,\exists \delta > 0, \) such that for any division \( d:a = x_{0} < \cdots < x_{n} = b \) of \( [a,b] \) with norm \( \nu (d) < \delta \), and for any points \( \xi_{i} \in [x_{i} ,x_{i + 1} ],\,i = 0, \ldots ,n - 1, \) we have

$$ D\left( {\sum\limits_{i = 0}^{n - 1} {f(\xi_{i} )(x_{i + 1} - x_{i} )} ,I} \right) < \varepsilon . $$

Then, we write \( I = ({\text{FR}})\int\limits_{a}^{b} {f(x){\text{d}}x} \), and I is called the fuzzy Riemann integral of f.

Definition 2.10

(Wu and Gong 2001) Let \( f:[a,b] \to E\, \) be a fuzzy-valued function, \( \Delta_{n} :a = x_{0} < x_{1} < \cdots < x_{n - 1} < x_{n} = b \) a partition of the interval \( [a,b] \), \( \xi_{i} \in [x_{i} ,x_{i + 1} ],\;i = 0, \ldots ,n - 1, \) a sequence of tag points of the partition \( \Delta_{n} \), and \( \delta (x) > 0 \) a positive real-valued function on \( [a,b]. \) The division \( P = (\Delta_{n} ,\xi ) \) is said to be δ-fine if \( [x_{i} ,x_{i + 1} ] \subseteq (\xi_{i} - \delta (\xi_{i} ),\xi_{i} + \delta (\xi_{i} )) \) for each i.

The function f is said to be Henstock (or FH-) integrable having the integral \( I \in E \) if for any \( \varepsilon > 0 \) there exists a real-valued function δ, such that for any δ-fine division P we have

$$ D\left( {\sum\limits_{i = 0}^{n - 1} {f(\xi_{i} ) \cdot h_{i} ,I} } \right) < \varepsilon , $$

where \( h_{i} = x_{i + 1} - x_{i} . \) Then, I is called the fuzzy Henstock integral of f and it is denoted by \( ({\text{FH}})\int\limits_{a}^{b} {f(t){\text{d}}t} \).
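The δ-fineness condition is easy to check mechanically. The following small Python sketch, our own illustration, tests whether a tagged division is δ-fine for a given gauge function δ.

```python
def is_delta_fine(xs, tags, delta):
    """Check Definition 2.10: each subinterval [x_i, x_{i+1}] must lie
    inside the open interval (xi_i - delta(xi_i), xi_i + delta(xi_i))."""
    return all(t - delta(t) < lo and hi < t + delta(t)
               for lo, hi, t in zip(xs, xs[1:], tags))
```

With the uniform gauge δ ≡ 0.3, the division 0 < 0.5 < 1 tagged at the midpoints is δ-fine, while the tighter gauge δ ≡ 0.2 rejects it.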

Theorem 2.2

(Bede 2013) A continuous fuzzy number-valued function is fuzzy Aumann integrable, fuzzy Riemann integrable and fuzzy Henstock integrable; moreover,

$$ ({\text{FA}})\int\limits_{a}^{b} {f(x){\text{d}}x} = ({\text{FH}})\int\limits_{a}^{b} {f(x){\text{d}}x} = ({\text{FR}})\int\limits_{a}^{b} {f(x){\text{d}}x} $$

3 Fuzzy integro-differential equation

In this article, we consider the following nth-order nonlinear fuzzy Volterra–Fredholm integro-differential equation

$$\begin{aligned}& u^{n} (x) + G(x,U(x)) \\ &\quad + \sum\limits_{j = 1}^{m} {\left( {\int\limits_{a}^{x} {K_{1}^{j} (x,t)F_{1}^{j} (t,U(t))} {\text{d}}t + \int\limits_{a}^{b} {K_{2}^{j} (x,t)F_{2}^{j} (t,U(t))} {\text{d}}t} \right)} \\ &\quad = f(x),\quad x \in [a,b]\end{aligned} $$
(1)

with boundary conditions

$$ \begin{aligned} &\sum\limits_{l = 1}^{{n_{k} }} {a_{lk} u^{{r_{lk} }} (\eta_{lk} ) = d_{k} } ,\quad k = 1,2, \ldots ,n,\quad \\ &\quad \eta_{lk} \in [a,b]\;{\text{and}}\;r_{lk} {\in} \{ 0,1, \ldots ,n - 1\} \end{aligned} $$
(2)

where \( d_{k} ,a_{lk} \) are real fuzzy constants,

$$ G(x,U(x)) = G(x,u(x),u^{\prime}(x), \ldots ,u^{n} (x)) $$
$$ F_{1}^{j} (t,U(t)) = F_{1}^{j} (t,u(t),u^{\prime}(t), \ldots ,u^{n} (t)) $$
$$ F_{2}^{j} (t,U(t)) = F_{2}^{j} (t,u(t),u^{\prime}(t), \ldots ,u^{n} (t)) $$

and \( K_{1}^{j} (x,t),K_{2}^{j} (x,t),F_{1}^{j} ,F_{2}^{j} ,G \) for \( j = 1,2, \ldots ,m \) are the continuous fuzzy functions on the interval [a, b].

If \( K_{1}^{j} = 0 \) for all \( j = 1,2, \ldots ,m \), we have a Fredholm-type integro-differential equation, and we have its Volterra type if \( K_{2}^{j} = 0 \) for all \( j = 1,2, \ldots ,m. \)

From (1), we have

$$ \left( {u^{n} (x,\alpha )} \right)_{i} + G_{i} (x,U(x,\alpha ),\alpha ) + \sum\limits_{j = 1}^{m} {\left( {\int\limits_{a}^{x} {\left( {K_{1}^{j} (x,t)F_{1}^{j} (t,U(t))} \right)_{i} } {\text{d}}t + \int\limits_{a}^{b} {\left( {K_{2}^{j} (x,t)F_{2}^{j} (t,U(t))} \right)_{i} } {\text{d}}t} \right)} = f_{i} (x), $$
(3)

where \( i = 1,2 \) and \( x \in [a,b] \) subject to the boundary conditions

$$ \sum\limits_{l = 1}^{{n_{k} }} {\left( {a_{lk} u^{{r_{lk} }} (\eta_{lk} )} \right)_{i} = d_{ki} } ,\quad k = 1,2, \ldots ,n,\;i = 1,2,\;\eta_{lk} \in [a,b]\;{\text{and}}\;r_{lk} \in \{ 0,1, \ldots ,n - 1\} $$
(4)

where

$$ [u(x)]_{\alpha } = [u_{1} (x,\alpha ),u_{2} (x,\alpha )] $$
$$\begin{aligned} & G_{i} (x,U(x,\alpha ),\alpha ) \\ &\quad = G_{i} (x,u_{1} (x,\alpha ),u^{\prime}_{1} (x,\alpha ), \ldots ,u_{1}^{n} (x,\alpha ),\\&\qquad u_{2} (x,\alpha ),u^{\prime}_{2} (x,\alpha ), \ldots , u_{2}^{n} (x,\alpha ),\alpha ) \end{aligned}$$
$$\begin{aligned} &\left( {K_{1}^{j} (x,t)F_{1}^{j} (t,U(t))} \right)_{i} \\&\quad= P_{i}^{j} \left( x,t,u_{1} (x,\alpha ),u^{\prime}_{1} (x,\alpha ), \ldots ,u_{1}^{n} (x,\alpha ),\right.\\&\qquad\left. u_{2} (x,\alpha ),u^{\prime}_{2} (x,\alpha ), \ldots ,u_{2}^{n} (x,\alpha ),\alpha \right)({\text{say}}) \end{aligned}$$
$$\begin{aligned} &\left( {K_{2}^{j} (x,t)F_{2}^{j} (t,U(t))} \right)_{i} \\&\quad= Q_{i}^{j} \left( x,t,u_{1} (x,\alpha ),u^{\prime}_{1} (x,\alpha ), \ldots ,u_{1}^{n} (x,\alpha ),\right.\\&\qquad\left. u_{2} (x,\alpha ),u^{\prime}_{2} (x,\alpha ), \ldots ,u_{2}^{n} (x,\alpha ),\alpha \right)({\text{say}}) \end{aligned}$$

4 Method of solution

To solve (3), we assume that its approximate solution takes the following form

$$ u_{i,N}^{A} (x,\alpha ) = \psi_{i} (x,\alpha ) + \sum\limits_{j = 0}^{N} {\xi_{ij} \varphi_{ij} (x,\alpha ),} \quad i = 1,2. $$
(5)

where \( u_{i,N}^{A} (x,\alpha ) \) is the approximate solution of \( u_{i} (x,\alpha ) \) with (N + 2) approximating terms. Also, \( \psi_{i} (x,\alpha ) \) and \( \{ \varphi_{ij} (x,\alpha )\}_{j = 0}^{N} \) must be chosen in such a way that \( \{ u_{i,N}^{A} (x,\alpha )\}_{i = 1}^{2} \) satisfies the boundary conditions (4). To ensure that (5) satisfies the boundary conditions (4), we must have

$$ \sum\limits_{l = 1}^{{n_{k} }} {\left( {a_{lk} u_{,N}^{{Ar_{lk} }} (\eta_{lk} )} \right)_{i} = d_{ki} } ,\;k = 1,2, \ldots ,n,\quad i = 1,2, $$
(6)

where \( [u_{,N}^{A} (\eta_{lk} )]_{\alpha } = [u_{1,N}^{A} (\eta_{lk} ,\alpha ),u_{2,N}^{A} (\eta_{lk} ,\alpha )] \).

Now, since we do not know whether \( u_{,N}^{A} (\eta_{lk} ) \) is gS-differentiable in the form (i) or (ii), and we want to treat the general case here, we denote \( [u_{,N}^{{Ar_{lk} }} (\eta_{lk} )]_{\alpha } = [u_{s,N}^{{Ar_{lk} }} (\eta_{lk} ,\alpha ),u_{{s^{\prime},N}}^{{Ar_{lk} }} (\eta_{lk} ,\alpha )], \) where s takes exactly one value from {1, 2}, depending on the type of gS-derivative under consideration, and \( s^{\prime} = \{ 1,2\} - \{ s\} \). So, using (5), (6) can be written as follows

$$ \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks_{1} }} \psi_{{s_{2} }}^{{r_{lk} }} (\eta_{lk} ,\alpha ) + \sum\limits_{j = 0}^{N} {\xi_{{s_{2} j}} } \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks_{1} }} \phi_{{s_{2} j}}^{{r_{lk} }} (\eta_{lk} ,\alpha )} } = d_{k1} $$
$$ \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks^{\prime}_{1} }} \psi_{{s^{\prime}_{2} }}^{{r_{lk} }} (\eta_{lk} ,\alpha ) + \sum\limits_{j = 0}^{N} {\xi_{{s^{\prime}_{2} j}} } \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks^{\prime}_{1} }} \phi_{{s^{\prime}_{2} j}}^{{r_{lk} }} (\eta_{lk} ,\alpha )} } = d_{k2} $$

where \( s_{i} \) takes exactly one value from {1, 2}, depending on the type of gS-derivative under consideration, and \( s^{\prime}_{i} = \{ 1,2\} - \{ s_{i} \} ,\;i = 1,2 \).

If the following equalities hold for \( k = 1,2, \ldots ,n \) and \( j = 0,1, \ldots ,N \), then \( \{ u_{i,N}^{A} (x)\}_{i = 1}^{2} \) satisfies the boundary conditions (4).

$$ \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks_{1} }} \psi_{{s_{2} }}^{{r_{lk} }} (\eta_{lk} ,\alpha )} = d_{k1} , $$
$$ \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks^{\prime}_{1} }} \psi_{{s^{\prime}_{2} }}^{{r_{lk} }} (\eta_{lk} ,\alpha )} = d_{k2} , $$
$$ \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks_{1} }} \phi_{{s_{2} j}}^{{r_{lk} }} (\eta_{lk} ,\alpha )} = 0, $$
$$ \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks^{\prime}_{1} }} \phi_{{s^{\prime}_{2} j}}^{{r_{lk} }} (\eta_{lk} ,\alpha )} = 0. $$

In fact, these are sufficient conditions which ensure that \( \{ u_{i,N}^{A} (x,\alpha )\}_{i = 1}^{2} \) satisfies the boundary conditions (4). We find \( \{ \psi_{i} (x,\alpha )\}_{i = 1}^{2} \) in such a way that they satisfy the following auxiliary differential equations.

Case I

If \( u_{,N}^{A} (x) \) is gS-differentiable in the form of (i) or n is an even number, then

$$ \psi_{i}^{n} (x,\alpha ) = f_{i} (x,\alpha ),\;x \in [a,b],\;i = 1,2 $$
(7)

subject to the following non-homogeneous boundary conditions

$$ \begin{aligned} & \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks_{1} }} \psi_{{s_{2} }}^{{r_{lk} }} (\eta_{lk} ,\alpha )} = d_{k1} , \\ & \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks^{\prime}_{1} }} \psi_{{s^{\prime}_{2} }}^{{r_{lk} }} (\eta_{lk} ,\alpha )} = d_{k2} , \\ \end{aligned} $$
(8)

Hence, if we express \( \psi_{i} (x,\alpha ) \) as

$$ \psi_{i} (x,\alpha ) = \int\limits_{a}^{x} {\frac{{(x - t)^{n - 1} f_{i} (t,\alpha )}}{(n - 1)!}{\text{d}}t} + \sum\limits_{q = 0}^{n - 1} {c_{iq} x^{q} } , $$
(9)

then \( \psi_{i} (x,\alpha ) \) for \( i = 1,2 \) satisfies (7). By taking the rth-derivative of (9), we have

$$ \psi_{i}^{r} (x,\alpha ) = \left\{ {\begin{array}{*{20}l} {\int\limits_{a}^{x} {\frac{{(x - t)^{n - r - 1} f_{i} (t,\alpha )}}{(n - r - 1)!}{\text{d}}t + \sum\limits_{q = r}^{n - 1} {c_{iq} \frac{{q!x^{q - r} }}{(q - r)!}} ,} } \hfill & {r = 0,1, \ldots ,n - 1} \hfill \\ {f_{i} (x,\alpha ),} \hfill & {r = n.} \hfill \\ \end{array} } \right. $$
(10)

Substituting (10) in the boundary conditions (8) gives the following linear system with respect to \( \{ c_{iq} \}_{q = 0}^{n - 1} \)

$$ \begin{aligned} &\sum\limits_{l = 1}^{{n_{k} }} {a_{{lks_{1} }} \left[ {\int\limits_{a}^{{\eta_{lk} }} {\frac{{(\eta_{lk} - t)^{{n - r_{lk} - 1}} f_{{s_{2} }} (t,\alpha )}}{{(n - r_{lk} - 1)!}}{\text{d}}t + \sum\limits_{q = r_{lk} }^{n - 1} {c_{{s_{2} q}} \frac{{q!\eta_{lk}^{{q - r_{lk} }} }}{{(q - r_{lk} )!}}} } } \right] = d_{k1} ,} \quad \\&\quad k = 1,2, \ldots ,n \end{aligned} $$

or,

$$ \sum\limits_{l = 1}^{{n_{k} }} {\sum\limits_{q = r_{lk} }^{n - 1} {\frac{{q!a_{{lks_{1} }} \eta_{lk}^{{q - r_{lk} }} }}{{(q - r_{lk} )!}}} c_{{s_{2} q}} } = d_{k1} - \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks_{1} }} \int\limits_{a}^{{\eta_{lk} }} {\frac{{(\eta_{lk} - t)^{{n - r_{lk} - 1}} f_{{s_{2} }} (t,\alpha )}}{{(n - r_{lk} - 1)!}}{\text{d}}t} } $$

and

$$ \sum\limits_{l = 1}^{{n_{k} }} {\sum\limits_{q = r_{lk} }^{n - 1} {\frac{{q!a_{{lks^{\prime}_{1} }} \eta_{lk}^{{q - r_{lk} }} }}{{(q - r_{lk} )!}}} c_{{s^{\prime}_{2} q}} } = d_{k2} - \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks^{\prime}_{1} }} \int\limits_{a}^{{\eta_{lk} }} {\frac{{(\eta_{lk} - t)^{{n - r_{lk} - 1}} f_{{s^{\prime}_{2} }} (t,\alpha )}}{{(n - r_{lk} - 1)!}}{\text{d}}t} } $$
(11)

Note that when the integral terms on the right-hand side of (11) cannot be computed analytically, they can be evaluated by a numerical quadrature. The equations of (11) ensure that the boundary conditions (8) are satisfied. The unknown boundary coefficients \( \{ c_{iq} \}_{q = 0}^{n - 1} \) are obtained by solving the linear system (11).
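As a concrete check of (9)–(11), consider the crisp special case n = 2 with conditions u(a) = d₁, u(b) = d₂; this restricted setting is an assumption of ours for illustration, while the general system (11) is handled the same way. Then ψ(x) = ∫_a^x (x − t)f(t)dt + c₀ + c₁x, and the boundary conditions reduce to a 2 × 2 linear system:

```python
def repeated_integral(f, a, x, m=2000):
    """Composite trapezoidal approximation of int_a^x (x - t) f(t) dt,
    i.e. the n = 2 case of the kernel (x - t)^(n-1)/(n-1)! in Eq. (9)."""
    if x == a:
        return 0.0
    h = (x - a) / m
    s = 0.0
    for k in range(m + 1):
        t = a + k * h
        w = 0.5 if k in (0, m) else 1.0
        s += w * (x - t) * f(t)
    return s * h

def build_psi(f, a, b, d1, d2):
    """psi(x) = int_a^x (x - t) f(t) dt + c0 + c1*x with psi(a) = d1,
    psi(b) = d2.  Here Eq. (11) is the 2x2 system
    c0 + a*c1 = d1,  c0 + b*c1 = d2 - I,  I = int_a^b (b - t) f(t) dt."""
    I = repeated_integral(f, a, b)
    c1 = (d2 - I - d1) / (b - a)
    c0 = d1 - a * c1
    return lambda x: repeated_integral(f, a, x) + c0 + c1 * x
```

For f ≡ 2 on [0, 1] with d₁ = 0, d₂ = 1, this yields ψ(x) = x², which satisfies ψ'' = f together with the prescribed boundary values.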

Suppose that \( V = \{ p_{0} (x),p_{1} (x), \ldots ,p_{N} (x)\} \) is a set of basis polynomials on [a, b], where \( p_{k} (x) \) is a polynomial of degree k for \( k = 0,1, \ldots ,N. \) Also, we require \( \{ \varphi_{ij} \}_{j = 0}^{N} ,\;i = 1,2, \) to satisfy the following auxiliary differential equation under homogeneous boundary conditions

$$ \varphi_{ij}^{n} = p_{j} (x),\;x \in [a,b],\;i = 1,2,\quad j = 0,1, \ldots ,N $$
(12)
$$ \begin{aligned} & \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks_{1} }} \varphi_{{s_{2} j}}^{{r_{lk} }} (\eta_{lk} ,\alpha )} = 0, \\ & \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks^{\prime}_{1} }} \varphi_{{s^{\prime}_{2} j}}^{{r_{lk} }} (\eta_{lk} ,\alpha )} = 0 \\ \end{aligned} $$
(13)

Consequently, if we define \( \varphi_{ij} (x,\alpha ) \) as

$$ \varphi_{ij} (x,\alpha ) = \int\limits_{a}^{x} {\frac{{(x - t)^{n - 1} p_{j} (x)}}{(n - 1)!}} {\text{d}}t + \sum\limits_{q = 0}^{n - 1} {c_{ij,q} x^{q} } $$
(14)

then, for \( i = 1,2 \) and \( j = 0,1, \ldots ,N \), \( \varphi_{ij} (x,\alpha ) \) satisfies (12). By taking the rth-derivative of (14), we have

$$ \varphi_{ij}^{r} = \left\{ {\begin{array}{*{20}l} {\int\limits_{a}^{x} {\frac{{(x - t)^{n - r - 1} p_{j} (t)}}{(n - r - 1)!}{\text{d}}t + \sum\limits_{q = r}^{n - 1} {c_{ij,q} \frac{{q!x^{q - r} }}{(q - r)!}} ,} } \hfill & {r = 0,1, \ldots ,n - 1} \hfill \\ {p_{j} (x),} \hfill & {r = n} \hfill \\ \end{array} } \right. $$
(15)

Substituting (15) in the homogeneous boundary conditions (13) for each \( i = 1,2 \) and \( j = 0,1, \ldots ,N \) yields the following linear system with respect to the unknowns \( \{ c_{ij,q} \}_{q = 0}^{n - 1} \)

$$ \left. \begin{aligned} \sum\limits_{l = 1}^{{n_{k} }} {\sum\limits_{q = r_{lk} }^{n - 1} {\frac{{q!a_{{lks_{1} }} \eta_{lk}^{{q - r_{lk} }} }}{{(q - r_{lk} )!}}c_{{s_{2} j,q}} = - \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks_{1} }} \int\limits_{a}^{{\eta_{lk} }} {\frac{{(\eta_{lk} - t)^{{n - r_{lk} - 1}} p_{j} (t)}}{{(n - r_{lk} - 1)!}}{\text{d}}t} } } } \hfill \\ \sum\limits_{l = 1}^{{n_{k} }} {\sum\limits_{q = r_{lk} }^{n - 1} {\frac{{q!a_{{lks^{\prime}_{1} }} \eta_{lk}^{{q - r_{lk} }} }}{{(q - r_{lk} )!}}c_{{s^{\prime}_{2} j,q}} = - \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks^{\prime}_{1} }} \int\limits_{a}^{{\eta_{lk} }} {\frac{{(\eta_{lk} - t)^{{n - r_{lk} - 1}} p_{j} (t)}}{{(n - r_{lk} - 1)!}}{\text{d}}t} } } } \hfill \\ \end{aligned} \right\} $$
(16)

System (16) ensures that the homogeneous boundary conditions (13) hold. One can compute the integral term of the right-hand side of (16) analytically because \( p_{j} (x) \) is a polynomial. Solution of linear system (16) gives the unknown coefficients \( \{ c_{ij,q} \}_{q = 0}^{n - 1} . \)
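For the same illustrative setting as above (n = 2, homogeneous conditions φ(0) = φ(1) = 0, monomial basis p_j(x) = x^j on [0, 1]; all of these are assumptions of ours, not restrictions of the method), (14) and (16) can be solved in closed form, since the integrals of polynomials are exact:

```python
from fractions import Fraction

def phi(j, x):
    """phi_j(x) = x^(j+2)/((j+1)(j+2)) + c0 + c1*x with phi_j(0) = phi_j(1) = 0.
    In this setting system (16) gives c0 = 0 and c1 = -1/((j+1)(j+2)) exactly,
    computed here as a rational number."""
    c1 = Fraction(-1, (j + 1) * (j + 2))
    return x ** (j + 2) / ((j + 1) * (j + 2)) + float(c1) * x
```

Each φ_j vanishes at both endpoints and satisfies φ_j'' = x^j, as required by (12)–(13).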

Substituting (9) and (14) in (5) yields

$$ \begin{aligned} u_{i,N}^{A} (x,\alpha ) & = \int\limits_{a}^{x} {\frac{{(x - t)^{n - 1} }}{(n - 1)!}\left[ {f_{i} (t,\alpha ) + \sum\limits_{j = 0}^{N} {\xi_{ij} p_{j} (t)} } \right]{\text{d}}t} \\ & \quad + \sum\limits_{q = 0}^{n - 1} {[c_{iq} + \sum\limits_{j = 0}^{N} {c_{ij,q} \xi_{ij} } ]x^{q} } \\ \end{aligned} $$
(17)

Taking the rth-derivative of (17) and using (10) and (15), we have

$$ u_{i,N}^{A(r)} (x,\alpha ) = \left\{ {\begin{array}{*{20}l} {\int\limits_{a}^{x} {\frac{{(x - t)^{n - r - 1} }}{(n - r - 1)!}\left[ {f_{i} (t,\alpha ) + \sum\limits_{j = 0}^{N} {\xi_{ij} p_{j} (t)} } \right]{\text{d}}t + \sum\limits_{q = r}^{n - 1} {\left[ {c_{iq} + \sum\limits_{j = 0}^{N} {c_{ij,q} \xi_{ij} } } \right]\frac{{q!x^{q - r} }}{(q - r)!}} ,} } \hfill & {r = 0,1, \ldots ,n - 1} \hfill \\ {f_{i} (x,\alpha ) + \sum\limits_{j = 0}^{N} {\xi_{ij} p_{j} (x)} ,} \hfill & {r = n} \hfill \\ \end{array} } \right. $$
(18)

If \( \{ u_{i,N}^{A} (x,\alpha )\}_{i = 1}^{2} \) is the exact solution of (3), then we must have

$$ \left( {u_{,N}^{A(n)} (x,\alpha )} \right)_{i} + G_{i} \left( {x,U_{N}^{A} (x,\alpha ),\alpha } \right) + \sum\limits_{j = 1}^{m} {\left( {\int\limits_{a}^{x} {\left( {K_{1}^{j} (x,t)F_{1}^{j} (t,U_{N}^{A} (t))} \right)_{i} } {\text{d}}t + \int\limits_{a}^{b} {\left( {K_{2}^{j} (x,t)F_{2}^{j} (t,U_{N}^{A} (t))} \right)_{i} } {\text{d}}t} \right)} = f_{i} (x,\alpha ), $$
(19)

\( i = 1,2,\;x \in [a,b] \), where

$$ \begin{aligned} U_{N}^{A} (x,\alpha ) &= (u_{1,N}^{A} (x,\alpha ),u_{1,N}^{A(1)} (x,\alpha ), \ldots ,u_{1,N}^{A(n)} (x,\alpha ),\\&\quad u_{2,N}^{A} (x,\alpha ),u_{2,N}^{A(1)} (x,\alpha ), \ldots ,u_{2,N}^{A(n)} (x,\alpha )) \end{aligned} $$
$$ \left( {K_{1}^{j} (x,t)F_{1}^{j} (t,U_{N}^{A} (t))} \right)_{i} = P_{i}^{j} (x,t,U_{N}^{A} (x,\alpha ),\alpha ) $$
$$ \left( {K_{2}^{j} (x,t)F_{2}^{j} (t,U_{N}^{A} (t))} \right)_{i} = Q_{i}^{j} (x,t,U_{N}^{A} (x,\alpha ),\alpha ) $$

Hence, using (18), we can write (19) as

$$ \begin{aligned} &\sum\limits_{j = 0}^{N} {\xi_{ij} p_{j} (x)} + G_{i} (x,U_{N}^{A} (x,\alpha ),\alpha ) \\ &\quad + \sum\limits_{j = 1}^{m} \left( \int\limits_{a}^{x} {P_{i}^{j} \left( {x,t,U_{N}^{A} (t,\alpha ),\alpha } \right){\text{d}}t} \right.\\&\quad\left.+ \int\limits_{a}^{b} {Q_{i}^{j} \left( {x,t,U_{N}^{A} (t,\alpha ),\alpha } \right){\text{d}}t} \right) = 0,\;i = 1,2,\quad x \in [a,b]. \end{aligned} $$
(20)

Therefore, the left-hand side of (20) equals zero for the exact solution. However, since ours is a numerical method that seeks an approximate solution as close as possible to the exact one, the approximate solution \( \{ u_{i,N}^{A} (x,\alpha )\}_{i = 1}^{2} \) generally does not satisfy (20) exactly. The closer the left-hand side of (20) is to zero, the closer the approximate solution is to the exact solution. We use this fact in the next section to obtain a better approximate solution from (17).

After substituting the computed values of \( \{ c_{iq} \}_{q = 0}^{n - 1} \) and \( \{ c_{ij,q} \}_{q = 0}^{n - 1} \) for \( i = 1,2 \) and \( j = 0,1, \ldots ,N \) in (17), which are obtained by solving (11) and (16), we define the following residual functions

$$ \begin{aligned} & R_{i,N} (x,\bar{\xi },\alpha ) \\ &\quad = \sum\limits_{j = 1}^{m} {\left( {\int\limits_{a}^{x} {P_{i}^{j} \left( {x,t,U_{N}^{A} (t,\alpha ),\alpha } \right){\text{d}}t} + \int\limits_{a}^{b} {Q_{i}^{j} \left( {x,t,U_{N}^{A} (t,\alpha ),\alpha } \right){\text{d}}t} } \right)} \\ & \quad + \sum\limits_{j = 0}^{N} {\xi_{ij} p_{j} (x)} + G_{i} \left( {x,U_{N}^{A} (x,\alpha ),\alpha } \right),\;i = 1,2,\quad x \in [a,b], \\ & {\text{where}}\;\bar{\xi } = (\xi_{10} ,\xi_{11} , \ldots ,\xi_{1N} ,\xi_{20} , \ldots ,\xi_{2N} ). \\ \end{aligned} $$
(21)

Case II

If \( u_{,N}^{A} (\eta_{lk} ) \) is gS-differentiable in the form of (ii) and n is an odd number, then in place of Eq. (7) we will consider

$$ \psi_{i}^{n} (x,\alpha ) = f_{{i^{\prime}}} (x,\alpha ),\;x \in [a,b],\;i = 1,2\;{\text{and}}\;i^{\prime} = \{ 1,2\} - \{ i\} $$
(7a)

So, in place of (17) the approximate solution will be

$$ \begin{aligned} u_{i,N}^{A} (x,\alpha ) & = \int\limits_{a}^{x} {\frac{{(x - t)^{n - 1} }}{(n - 1)!}\left[ {f_{{i^{\prime}}} (t,\alpha ) + \sum\limits_{j = 0}^{N} {\xi_{ij} p_{j} (t)} } \right]{\text{d}}t} \\ & \quad + \sum\limits_{q = 0}^{n - 1} {\left( {c_{iq} + \sum\limits_{j = 0}^{N} {c_{ij,q} \xi_{ij} } } \right)x^{q} } \\ \end{aligned} $$
(17a)

In Case I, after substituting (18) into (19), the terms \( f_{i} ,\;i = 1,2, \) cancel, so the residual functions (21) contain no \( f_{i} \). Hence, for Case II we obtain the same residual functions (21), with \( \{ u_{i,N}^{A} (x,\alpha )\}_{i = 1}^{2} \) given by (17a) instead of (17).

The approximate solution is not yet completely known, because the values of \( \{ \xi_{ij} \}_{j = 0}^{N} \) for \( i = 1,2 \) are still to be computed. In the next section, we obtain \( \{ \xi_{ij} \}_{j = 0}^{N} \) for \( i = 1,2 \) in such a way that the residual functions are minimized or forced to be zero in an average sense over the interval [a, b].

5 Minimization of the residual functions

In this section, we present a minimization algorithm for the residual functions (21). To do this, we obtain the unknown coefficients \( \{ \xi_{ij} \}_{j = 0}^{N} \) of (21) for \( i = 1,2 \) in such a way that the following weighted integrals of the residual function be equal to zero

$$ E_{ij} (\bar{\xi },\alpha ) = \int\limits_{a}^{b} {w_{j} } R_{i,N} (x,\bar{\xi },\alpha ){\text{d}}x,\;j = 0,1, \ldots ,N,\;i = 1,2 $$

where \( \{ w_{j} \}_{j = 0}^{N} \) is a set of weight functions.

In this algorithm, the weight functions are taken to be shifted Dirac delta functions

$$ w_{j} = \delta (x - x_{j} ) = \left\{ {\begin{array}{*{20}l} { + \infty ,} \hfill & {x = x_{j} } \hfill \\ {0,} \hfill & {x \ne x_{j} } \hfill \\ \end{array} } \right. $$

So, for \( j = 0,1, \ldots ,N, \) we have the following property

$$ E_{ij} (\bar{\xi },\alpha ) = \int\limits_{a}^{b} {w_{j} } R_{i,N} (x,\bar{\xi },\alpha ){\text{d}}x = R_{i,N} (x_{j} ,\bar{\xi },\alpha ) = 0, $$
(22)

Hence, the residual functions are forced to be zero at the N + 1 specified collocation points \( \{ x_{j} \}_{j = 0}^{N} . \) As N increases, the approximate solutions improve, because the residual functions \( \{ R_{i,N} (x,\bar{\xi },\alpha )\}_{i = 1}^{2} \) are forced to vanish at more points of the interval \( [a,b]. \) There are different options for the collocation points. In this article, we use the Chebyshev collocation points \( x_{j} = \cos (\frac{j\pi }{N}) \), \( j = 0,1, \ldots ,N, \) mapped from \( [-1,1] \) to \( [a,b] \).

Solving the system (22) yields the unknown coefficients \( \{ \xi_{ij} \}_{j = 0}^{N} \) for \( i = 1,2. \)
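To see Sects. 4 and 5 working end to end, the following sketch applies the method to a crisp toy problem (our own choice for illustration): u'(x) + ∫₀ˣ u(t)dt = 0, u(0) = 1 on [0, 1], whose exact solution is u(x) = cos x. Here ψ ≡ 1 (since ψ' = f = 0 and ψ(0) = 1), the basis is monomial (p_j(x) = xʲ, so φ_j(x) = xʲ⁺¹/(j + 1)), the residual is linear in ξ, and forcing it to zero at the mapped Chebyshev points reduces (22) to a single dense linear solve:

```python
import numpy as np

# Toy problem: u' + int_0^x u dt = 0, u(0) = 1; exact solution cos x.
# psi = 1; phi_j(x) = x^(j+1)/(j+1) satisfies phi_j' = x^j, phi_j(0) = 0.
# The residual R(x) = sum_j xi_j [x^j + x^(j+2)/((j+1)(j+2))] + x is
# linear in xi, so the collocation conditions (22) form a linear system.
N, a, b = 6, 0.0, 1.0
xs = 0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.arange(N + 1) * np.pi / N)

A = np.array([[x ** j + x ** (j + 2) / ((j + 1) * (j + 2))
               for j in range(N + 1)] for x in xs])
xi = np.linalg.solve(A, -xs)          # force R(x_j) = 0 at Chebyshev points

def u_approx(x):
    """Approximate solution u = psi + sum_j xi_j phi_j, as in Eq. (5)."""
    return 1.0 + sum(xi[j] * x ** (j + 1) / (j + 1) for j in range(N + 1))
```

With N = 6 the approximation already agrees closely with cos x on [0, 1].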

6 Error analysis

In this section, the convergence of our method for solving fuzzy integro-differential equations is presented.

Theorem 6.1

Suppose that

$$ \begin{aligned}\varepsilon_{i} (x,\bar{\xi },\alpha ) &= G_{i} \left( {x,U_{N}^{A} (x,\alpha ),\alpha } \right) \\&\quad+ \sum\limits_{j = 1}^{m} \left( \int\limits_{a}^{x} {P_{i}^{j} \left( {x,t,U_{N}^{A} (t,\alpha ),\alpha } \right){\text{d}}t} \right.\\&\quad \left.+ \int\limits_{a}^{b} {Q_{i}^{j} \left( {x,t,U_{N}^{A} (t,\alpha ),\alpha } \right){\text{d}}t} \right) \end{aligned}$$

where

$$ \begin{aligned} & G_{i} (x,U_{N}^{A} (x,\alpha ),\alpha ) = G_{i} (x,u_{1,N}^{A} (x,\alpha ),u_{1,N}^{A(1)} (x,\alpha ), \ldots , \\ & u_{1,N}^{A(n)} (x,\alpha ),u_{2,N}^{A} (x,\alpha ),u_{2,N}^{A(1)} (x,\alpha ), \ldots ,u_{2,N}^{A(n)} (x,\alpha ),\alpha ) \\ & u_{i,N}^{A} (x,\alpha ) = \psi_{i} (x,\alpha ) + \sum\limits_{j = 0}^{N} {\xi_{ij} \varphi_{ij} (x,\alpha ),} \quad i = 1,2 \\ \end{aligned} $$
$$ \bar{\xi } = (\xi_{10} ,\xi_{11} , \ldots ,\xi_{1N} ,\xi_{20} , \ldots ,\xi_{2N} ). $$

If \( \{ \varepsilon_{i} (x,\bar{\xi },\alpha )\}_{i = 1}^{2} \) are analytic at all points inside and on circle C of radius r with center at \( x_{0} \in (a,b) \) and if x is any interior point of C, then we can obtain \( \bar{\xi } \) in such a way that \( \left| {R_{i,N} } \right| \to 0 \) as \( N \to \infty . \)

Proof

From (21) and Taylor series expansion for polynomials \( p_{j} (x) \), we have

$$ \begin{aligned} & R_{i,N} (x,\bar{\xi },\alpha ) = \sum\limits_{j = 0}^{N} {\xi_{ij} p_{j} (x)} + \varepsilon_{i} (x,\bar{\xi },\alpha ) \\ & = \sum\limits_{j = 0}^{N} {\xi_{ij} \sum\limits_{k = 0}^{j} {\frac{{(x - x_{0} )^{k} }}{k!}p_{j}^{k} (x_{0} )} } + \varepsilon_{i} (x,\bar{\xi },\alpha ) \\ \end{aligned} $$
(23)

By Cauchy’s integral formula, we have

$$ \varepsilon_{i} (x,\bar{\xi },\alpha ) = \frac{1}{2\pi i}\oint\limits_{C} {\frac{{\varepsilon_{i} (z,\bar{\xi },\alpha )}}{z - x}{\text{d}}z} ,\quad i^{2} = - 1 $$

We can write

$$ \begin{aligned}\frac{{\varepsilon_{i} (z,\bar{\xi },\alpha )}}{z - x} &= \frac{{\varepsilon_{i} (z,\bar{\xi },\alpha )}}{{z - x_{0} }}\left[ 1 + \frac{{x - x_{0} }}{{z - x_{0} }} + \frac{{(x - x_{0} )^{2} }}{{(z - x_{0} )^{2} }} \right.\\ &\quad\left.+ \cdots + \frac{{(x - x_{0} )^{N} }}{{(z - x_{0} )^{N} }} + \frac{{(x - x_{0} )^{N + 1} }}{{(z - x_{0} )^{N + 1} (z - x)}} \right]. \end{aligned} $$

Therefore,

$$ \begin{aligned} \varepsilon_{i} (x,\bar{\xi },\alpha ) & = \frac{1}{2\pi i}\sum\limits_{k = 0}^{N} {(x - x_{0} )^{k} } \oint\limits_{C} {\frac{{\varepsilon_{i} (z,\bar{\xi },\alpha )}}{{(z - x_{0} )^{k + 1} }}{\text{d}}z} + T_{i,N} \\ & = \sum\limits_{k = 0}^{N} {\frac{{(x - x_{0} )^{k} }}{k!}} \frac{{{\text{d}}^{k} \varepsilon_{i} (x_{0} ,\bar{\xi },\alpha )}}{{{\text{d}}x^{k} }} + T_{i,N} , \\ \end{aligned} $$
(24)

where \( T_{i,N} = \frac{{(x - x_{0} )^{N + 1} }}{2\pi i}\oint\limits_{C} {\frac{{\varepsilon_{i} (z,\bar{\xi },\alpha )}}{{(z - x_{0} )^{N + 1} (z - x)}}{\text{d}}z} . \) By substituting (24) into (23), we obtain

$$ R_{i,N} (x,\bar{\xi },\alpha ) = \sum\limits_{k = 0}^{N} {\frac{{(x - x_{0} )^{k} }}{k!}\left[ {\sum\limits_{j = k}^{N} {\xi_{ij} p_{j}^{k} (x_{0} )} + \frac{{{\text{d}}^{k} \varepsilon_{i} }}{{{\text{d}}x^{k} }}(x_{0} ,\bar{\xi },\alpha )} \right]} + T_{i,N} $$

So,

$$ \left| {R_{i,N} (x,\bar{\xi },\alpha )} \right| \le \left| {\sum\limits_{k = 0}^{N} {\frac{{(x - x_{0} )^{k} }}{k!}\left[ {\sum\limits_{j = k}^{N} {\xi_{ij} p_{j}^{k} (x_{0} )} + \frac{{{\text{d}}^{k} \varepsilon_{i} (x_{0} ,\bar{\xi },\alpha )}}{{{\text{d}}x^{k} }}} \right]} } \right| + \left| {T_{i,N} } \right| $$
(25)

For the first term of the right-hand side of (25) to vanish, we set

$$ \sum\limits_{j = k}^{N} {\xi_{ij} p_{j}^{k} (x_{0} )} = - \frac{{{\text{d}}^{k} \varepsilon_{i} (x_{0} ,\bar{\xi },\alpha )}}{{{\text{d}}x^{k} }},\quad k = 0,1, \ldots ,N,\;i = 1,2. $$
(26)

So (25) becomes

$$ \left| {R_{i,N} (x,\bar{\xi },\alpha )} \right| \le \left| {T_{i,N} } \right| $$

From the system of equations (26), we can obtain \( \xi_{ij} = \xi_{ij}^{A} \) for \( i = 1,2 \) and \( j = 0,1, \ldots ,N. \) Now we define \( \varepsilon_{i}^{A} (z,\alpha ) = \varepsilon_{i} (z,\bar{\xi }^{A} ,\alpha ), \) where \( \bar{\xi }^{A} = (\xi_{10}^{A} ,\xi_{11}^{A} , \ldots ,\xi_{1N}^{A} ,\xi_{20}^{A} , \ldots ,\xi_{2N}^{A} ). \) There exists a constant M such that for any z on C,

$$ \left| {\frac{{\varepsilon_{i} (z,\bar{\xi }^{A} ,\alpha )}}{z - x}} \right| = \left| {\frac{{\varepsilon_{i}^{A} (z,\alpha )}}{z - x}} \right| \le M,\quad \left| {z - x_{0} } \right| = r $$
(27)

Therefore,

$$ \begin{aligned} \left| {R_{i,N} (x,\bar{\xi }^{A} ,\alpha )} \right| & \le \left| {T_{i,N} } \right| = \frac{{\left| {x - x_{0} } \right|^{N + 1} }}{2\pi }\left| {\oint\limits_{C} {\frac{{\varepsilon_{i}^{A} (z,\alpha )}}{{(z - x_{0} )^{N + 1} (z - x)}}{\text{d}}z} } \right| \\ & \le \frac{{\left| {x - x_{0} } \right|^{N + 1} }}{2\pi }\oint\limits_{C} {\left| {\frac{{\varepsilon_{i}^{A} (z,\alpha )}}{{(z - x_{0} )^{N + 1} (z - x)}}} \right|\left| {{\text{d}}z} \right|} \\ & \le \frac{{M\left| {x - x_{0} } \right|^{N + 1} }}{{2\pi r^{N + 1} }}\oint\limits_{C} {\left| {{\text{d}}z} \right|} \;\left( {{\text{using }}\left( {27} \right)} \right) \\ & = \frac{{M\left| {x - x_{0} } \right|^{N + 1} }}{{r^{N} }}. \\ \end{aligned} $$

Now, since \( \frac{{\left| {x - x_{0} } \right|}}{r} < 1 \), we have \( \left| {R_{i,N} } \right| \to 0 \) as \( N \to \infty \). This completes the proof.

It is worth noting that we can control the accuracy of the obtained approximate solutions by evaluating an upper bound on the mean value of the residual functions \( R_{i,N} (x,\alpha ) \) on the interval [a, b]. To estimate this upper bound, we suppose that \( R_{i,N} (x,\alpha ) \) for \( i = 1,2 \) are Riemann integrable with respect to x; then, by the Cauchy–Schwarz inequality,

$$ \left| {\int\limits_{a}^{b} {R_{i,N} (x,\alpha ){\text{d}}x} } \right| \le \sqrt {b - a} \left\| {R_{i,N} } \right\|_{2} ,\;{\text{where}}\;\left\| {R_{i,N} } \right\|_{2} = \left\{ {\int\limits_{a}^{b} {\left| {R_{i,N} (x,\alpha )} \right|^{2} {\text{d}}x} } \right\}^{{\frac{1}{2}}} . $$

According to the mean value theorem for integrals, if \( R_{i,N} (x,\alpha ) \) is continuous on [a, b], there exists a point c in (a, b) such that \( \int\limits_{a}^{b} {R_{i,N} (x,\alpha ){\text{d}}x} = (b - a)R_{i,N} (c,\alpha ) \). Thus

$$ \left| {R_{i,N} (c,\alpha )} \right| = \frac{{\left| {\int\limits_{a}^{b} {R_{i,N} (x,\alpha ){\text{d}}x} } \right|}}{b - a} \le \frac{{\sqrt {b - a} \left\| {R_{i,N} } \right\|_{2} }}{b - a} = \frac{{\left\| {R_{i,N} } \right\|_{2} }}{{\sqrt {b - a} }} = \bar{R}_{i,N} \;({\text{say}}). $$

If \( \bar{R}_{i,N} \to 0 \) as N grows, then the error of the approximate solution is negligible.
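The bound \( \bar{R}_{i,N} \) above can be evaluated numerically. A minimal sketch, using composite Simpson quadrature and an illustrative residual \( R(x) = x \) on [0, 1] (for which \( \left\| R \right\|_{2} = 1/\sqrt{3} \) can be checked by hand):

```python
import math

def simpson(f, a, b, n=200):
    """Composite Simpson rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def residual_bound(R, a, b):
    """Upper bound R_bar = ||R||_2 / sqrt(b - a) on the mean of R."""
    l2 = math.sqrt(simpson(lambda x: R(x) ** 2, a, b))
    return l2 / math.sqrt(b - a)

R = lambda x: x                       # illustrative residual
bound = residual_bound(R, 0.0, 1.0)
mean = abs(simpson(R, 0.0, 1.0)) / 1.0
print(bound, mean)                    # bound >= mean, as the inequality guarantees
```

In practice \( R_{i,N} \) would be the polynomial residual produced by the method; the same two quadratures then give a cheap a posteriori accuracy check without knowing the exact solution.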

In addition, an improved approximation can be obtained for linear problems, which can be defined by

$$ \begin{aligned} & (u^{n} (x,\alpha ))_{i} + \sum\limits_{k = 0}^{n} \left[ \left( {\mu_{k} (x)u^{k} (x)} \right)_{i} + \int\limits_{a}^{x} {\left( {K_{1}^{k} (x,t)u^{k} (t)} \right)_{i} } {\text{d}}t \right.\\&\quad\left.+ \int\limits_{a}^{b} {\left( {K_{2}^{k} (x,t)u^{k} (t)} \right)_{i} } {\text{d}}t \right] = f_{i} (x,\alpha ), \end{aligned} $$
(28)

with boundary conditions

$$ \sum\limits_{l = 1}^{{n_{k} }} {\left( {a_{lk} u^{{r_{lk} }} (\eta_{lk} )} \right)_{i} = d_{ki} } ,\;k = 1,2, \ldots ,n,\;i = 1,2,\;\eta_{lk} \in [a,b]\;{\text{and}}\;r_{lk} \in \{ 0,1, \ldots ,n - 1\} . $$
(29)

We define the error function \( \varepsilon_{i} (x,\alpha ) = u_{i} (x,\alpha ) - u_{i,N}^{A} (x,\alpha ) \) for \( i = 1,\,2, \) where \( \{ u_{i} (x,\alpha )\}_{i = 1}^{2} \) is the set of exact solutions of system (28). If \( u_{i,N}^{A} (x,\alpha ) \) for \( i = 1,\,2 \) is the approximate solution of (28), then we have

$$ \begin{aligned} & u_{{s_{1} ,N}}^{A(n)} (x,\alpha ) + \sum\limits_{k = 0}^{n} \left[ \mu_{{ks_{2} }} (x,\alpha )u_{{s_{3} ,N}}^{A(k)} (x,\alpha ) + \int\limits_{a}^{x} {K_{{1s_{4} }}^{k} (x,t,\alpha )u_{{s_{5} ,N}}^{A(k)} (t,\alpha )} {\text{d}}t \right.\\&\quad\quad\left.+ \int\limits_{a}^{b} {K_{{2s_{6} }}^{k} (x,t,\alpha )u_{{s_{7} ,N}}^{A(k)} (t,\alpha )} {\text{d}}t \right] = f_{1} (x,\alpha ) + R_{1,N} (x,\alpha ) \\ & u_{{s^{\prime}_{1} ,N}}^{A(n)} (x,\alpha ) + \sum\limits_{k = 0}^{n} \left[ \mu_{{ks^{\prime}_{2} }} (x,\alpha )u_{{s^{\prime}_{3} ,N}}^{A(k)} (x,\alpha ) + \int\limits_{a}^{x} {K_{{1s^{\prime}_{4} }}^{k} (x,t,\alpha )u_{{s^{\prime}_{5} ,N}}^{A(k)} (t,\alpha )} {\text{d}}t \right.\\&\qquad\left.+ \int\limits_{a}^{b} {K_{{2s^{\prime}_{6} }}^{k} (x,t,\alpha )u_{{s^{\prime}_{7} ,N}}^{A(k)} (t,\alpha )} {\text{d}}t \right] = f_{2} (x,\alpha ) + R_{2,N} (x,\alpha ) \\ \end{aligned} $$
(30)
$$ \left. \begin{aligned} \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks_{8} }} u_{{s_{9} ,N}}^{{A(r_{lk} )}} (\eta_{lk} ,\alpha )} = d_{k1} \hfill \\ \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks^{\prime}_{8} }} u_{{s^{\prime}_{9} ,N}}^{{A(r_{lk} )}} (\eta_{lk} ,\alpha )} = d_{k2} \hfill \\ \end{aligned} \right\} $$
(31)

where \( k = 1,2, \ldots ,n, \) \( s_{j} \in \{ 1,2\} , \) \( s^{\prime}_{j} = \{ 1,2\} - \{ s_{j} \} \) for \( j = 1,2, \ldots ,9 \).

Using the definition of the error function, from (28), (29), (30) and (31) we can write the following system of integro-differential equations with homogeneous boundary conditions

$$ \left. \begin{aligned} \varepsilon_{{s_{1} }}^{n} (x,\alpha ) + \sum\limits_{k = 0}^{n} {\left[ {\mu_{{ks_{2} }} (x,\alpha )\varepsilon_{{s_{3} }}^{k} (x,\alpha ) + \int\limits_{a}^{x} {K_{{1s_{4} }}^{k} (x,t,\alpha )\varepsilon_{{s_{5} }}^{k} (t,\alpha )} {\text{d}}t + \int\limits_{a}^{b} {K_{{2s_{6} }}^{k} (x,t,\alpha )\varepsilon_{{s_{7} }}^{k} (t,\alpha )} {\text{d}}t} \right]} + R_{1,N} (x,\alpha ) = 0 \hfill \\ \varepsilon_{{s^{\prime}_{1} }}^{n} (x,\alpha ) + \sum\limits_{k = 0}^{n} {\left[ {\mu_{{ks^{\prime}_{2} }} (x,\alpha )\varepsilon_{{s^{\prime}_{3} }}^{k} (x,\alpha ) + \int\limits_{a}^{x} {K_{{1s^{\prime}_{4} }}^{k} (x,t,\alpha )\varepsilon_{{s^{\prime}_{5} }}^{k} (t,\alpha )} {\text{d}}t + \int\limits_{a}^{b} {K_{{2s^{\prime}_{6} }}^{k} (x,t,\alpha )\varepsilon_{{s^{\prime}_{7} }}^{k} (t,\alpha )} {\text{d}}t} \right]} + R_{2,N} (x,\alpha ) = 0 \hfill \\ \end{aligned} \right\} $$
(32)
$$ \left. \begin{aligned} \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks_{8} }} \varepsilon_{{s_{9} }}^{{r_{lk} }} (\eta_{lk} ,\alpha )} = 0 \hfill \\ \sum\limits_{l = 1}^{{n_{k} }} {a_{{lks^{\prime}_{8} }} \varepsilon_{{s^{\prime}_{9} }}^{{r_{lk} }} (\eta_{lk} ,\alpha )} = 0 \hfill \\ \end{aligned} \right\} $$
(33)

where \( \{ \varepsilon_{i} (x,\alpha )\}_{i = 1}^{2} \) is the set of error functions.

Solving the error problem (32)–(33) by the proposed method gives the approximate solution \( \varepsilon_{i,N}^{A} (x,\alpha ) \) of \( \varepsilon_{i} (x,\alpha ) \). Consequently, we have the following improved approximate solution

$$ u_{i,N}^{{A\,{\text{imp}}}} (x,\alpha ) = u_{i,N}^{A} (x,\alpha ) + \varepsilon_{i,N}^{A} (x,\alpha ),\;i = 1,2. $$

Note that this process can be iterated to reach a satisfactory approximate solution. Moreover, if the exact solution of the problem is not available, the error function \( \varepsilon_{i} (x,\alpha ) \) can be approximated by \( \varepsilon_{i,N}^{A} (x,\alpha ) \).
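The improvement step can be illustrated on a crisp toy problem. This is a sketch under stated assumptions: the model equation u′(x) = u(x), u(0) = 1 (exact solution \( e^x \)), a crude quadratic approximation, and a single Picard-type sweep standing in for "solving the error problem by the proposed method".

```python
import math

# Toy defect correction: u' = u, u(0) = 1, exact solution e^x.
uA = lambda x: 1 + x + x * x / 2            # crude approximation (degree 2)
uA_prime = lambda x: 1 + x

# Residual of the approximation: R(x) = uA'(x) - uA(x) = -x^2/2
R = lambda x: uA_prime(x) - uA(x)

# Error problem: eps' = eps - R, eps(0) = 0  (since eps = u - uA).
# One Picard sweep starting from eps = 0 gives
#   eps_A(x) = int_0^x (0 - R(t)) dt = x^3 / 6
epsA = lambda x: x ** 3 / 6

# Improved approximation u_imp = uA + eps_A
u_imp = lambda x: uA(x) + epsA(x)

err_before = abs(math.e - uA(1.0))
err_after = abs(math.e - u_imp(1.0))
print(err_before, err_after)   # the correction reduces the error
```

Each correction sweep plays the role of one iteration of the improvement process; repeating it further reduces the error, exactly as iterating the error problem does in the fuzzy setting.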

To analyze the obtained approximate solutions in more detail, we use the following convergence indicators together with their rates of convergence:

  • The consecutive error: \( D_{i}^{N} (\alpha ) = \left\| {u_{i,N + 1}^{A} (x,\alpha ) - u_{i,N}^{A} (x,\alpha )} \right\|_{2} . \)

  • The point-wise error: \( E_{i}^{N} (x,\alpha ) = u_{i}^{\text{exact}} (x,\alpha ) - u_{i,N}^{A} (x,\alpha ). \)

  • The reference error: \( \varepsilon_{{{\text{ref}},i}}^{N} (\alpha ) = \left\| {E_{i}^{N} (x,\alpha )} \right\|_{2} . \)

  • The max absolute error: \( e_{i}^{N} (\alpha ) = \left\| {E_{i}^{N} (x,\alpha )} \right\|_{\infty } = \hbox{max} \left\{ {\left| {E_{i}^{N} (x,\alpha )} \right| : a \le x \le b} \right\}. \)

  • The relative error: \( rel.err_{i}^{N} (x,\alpha ) = \left| {\frac{{E_{i}^{N} (x,\alpha )}}{{u_{i}^{\text{exact}} (x,\alpha )}}} \right|. \)
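For concreteness, these indicators can be computed as follows. This sketch uses truncated Taylor polynomials of \( e^x \) on [0, 1] as stand-ins for the approximate solutions \( u_{i,N}^{A} \), with the L2 norms evaluated by a simple Riemann sum on a uniform grid.

```python
import math

u_exact = math.exp

def u_approx(N, x):
    """Degree-N Taylor polynomial of e^x, a stand-in for u_{i,N}^A."""
    return sum(x ** k / math.factorial(k) for k in range(N + 1))

a, b, M = 0.0, 1.0, 1001
grid = [a + (b - a) * j / (M - 1) for j in range(M)]
h = (b - a) / (M - 1)

def l2(f):
    """Discrete L2 norm over [a, b]."""
    return math.sqrt(sum(f(x) ** 2 for x in grid) * h)

N = 5
E = lambda x: u_exact(x) - u_approx(N, x)                   # point-wise error
consec = l2(lambda x: u_approx(N + 1, x) - u_approx(N, x))  # consecutive error
ref = l2(E)                                                 # reference error
maxerr = max(abs(E(x)) for x in grid)                       # max absolute error
rel = abs(E(0.5)) / abs(u_exact(0.5))                       # relative error at x = 0.5
print(consec, ref, maxerr, rel)
```

The consecutive error requires no exact solution at all, which makes it the practical stopping criterion when \( u_{i}^{\text{exact}} \) is unavailable.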

7 Application

In this section, we apply the present method to some test problems. In all problems, our approximation space is based on the Chebyshev polynomials of the first kind. The numerical results are calculated using Wolfram Mathematica 9.0, and the figures are drawn using MATLAB R2010a. As the numerical results show, increasing N increases the accuracy of the approximate solution, because the residual functions \( \{ R_{i,N} (x,\bar{\xi })\}_{i = 1}^{2} \) vanish at more points. Here we have considered only the gS-derivative of type (i); type (ii) can be used similarly.

Problem 7.1

Consider the following fuzzy linear Volterra integro-differential equation with exact solution

$$ \begin{aligned} & [u_{1} ,u_{2} ] = \left[ {\frac{1}{2}(\alpha + 2)(\sinh x - \sin x) + \frac{1}{2}(\alpha + 1)(\cos x + \cosh x)} \right. \\ &\quad \left. { + \frac{1}{2}\alpha (\sinh x + \sin x), \frac{1}{2}(4 - \alpha )(\sinh x - \sin x)} \right. \\ & \left. {\quad + \frac{1}{2}(3 - \alpha )(\cos x + \cosh x) + \frac{1}{2}(2 - \alpha )(\sinh x + \sin x)} \right] \\ \end{aligned} $$
$$ u^{\prime\prime}(x) = ax + \int\limits_{0}^{x} {(x - t)u(t){\text{d}}t} ,\quad 0 \le x \le 1 $$

with initial conditions \( [u(0)]_{\alpha } = [\alpha + 1,3 - \alpha ] \) and \( [u^{\prime}(0)]_{\alpha } = [\alpha ,2 - \alpha ] \), where \( [a]_{\alpha } = [\alpha + 2,4 - \alpha ] \), i.e.,

$$ \left. \begin{aligned} u^{\prime\prime}_{1} (x,\alpha ) - \int\limits_{0}^{x} {(x - t)u_{1} (t,\alpha ){\text{d}}t} = (\alpha + 2)x,\quad 0 \le x \le 1 \hfill \\ u^{\prime\prime}_{2} (x,\alpha ) - \int\limits_{0}^{x} {(x - t)u_{2} (t,\alpha ){\text{d}}t} = (4 - \alpha )x,\quad 0 \le x \le 1 \hfill \\ \end{aligned} \right\} $$
(34)

with initial conditions \( u^{\prime}_{1} (0,\alpha ) = \alpha ,u_{1} (0,\alpha ) = \alpha + 1,u^{\prime}_{2} (0,\alpha ) = 2 - \alpha ,u_{2} (0,\alpha ) = 3 - \alpha . \)

This problem can be obtained from (3) by setting

$$ \begin{aligned} & n = 2,\;m = 1,\;a = 0,\;G_{i} (x,U(x,\alpha ),\alpha ) = 0,\;\\&\quad\int\limits_{a}^{x} {(K_{1}^{1} (x,t)F_{1}^{1} (t,U(t)))_{i} } {\text{d}}t = - \int\limits_{0}^{x} {(x - t)u_{i} (t,\alpha ){\text{d}}t} , \\ & \int\limits_{a}^{b} {(K_{2}^{1} (x,t)F_{2}^{1} (t,U(t)))_{i} } {\text{d}}t = 0,\;i = 1,2, \\ & f_{1} (x,\alpha ) = (\alpha + 2)x,\;f_{2} (x,\alpha ) = (4 - \alpha )x \\ \end{aligned} $$

To solve (34) by the proposed method, we define the following non-homogeneous auxiliary differential equations: \( \psi^{\prime\prime}_{1} (x,\alpha ) = (\alpha + 2)x,\;\psi_{1} (0,\alpha ) = \alpha + 1,\;\psi^{\prime}_{1} (0,\alpha ) = \alpha \) and \( \psi^{\prime\prime}_{2} (x,\alpha ) = (4 - \alpha )x,\;\psi_{2} (0,\alpha ) = 3 - \alpha ,\;\psi^{\prime}_{2} (0,\alpha ) = 2 - \alpha \). So, from (9) we have

$$ \psi_{1} (x,\alpha ) = (\alpha + 2)\frac{{x^{3} }}{6} + c_{11} x + c_{10} $$
$$ \psi_{2} (x,\alpha ) = (4 - \alpha )\frac{{x^{3} }}{6} + c_{21} x + c_{20} $$

The unknown coefficients \( \{ c_{iq} \}_{q = 0}^{1} \) for \( i = 1,\,2 \) are easily obtained from the initial conditions

$$ c_{10} = \alpha + 1,\,\,c_{20} = 3 - \alpha ,\,\,c_{11} = \alpha ,\,\,c_{21} = 2 - \alpha $$

Also, for N = 2 we have the following homogeneous differential equations

$$ \begin{aligned}\phi^{\prime\prime}_{ij} (x,\alpha ) &= p_{j} (x),\;\phi_{ij} (0,\alpha ) = 0,\;\phi^{\prime}_{ij} (0,\alpha ) = 0,\;\\ i &= 1,2,\;j = 0,1,2 \end{aligned} $$

where \( p_{j} (x) = \cos (j\cos^{ - 1} x) \) is the Chebyshev polynomial of degree j. So from (14) it follows that

$$ \phi_{ij} (x,\alpha ) = \left\{ {\begin{array}{*{20}l} {\frac{1}{2}x^{2} + c_{i0,1} x + c_{i0,0} ,} \hfill & {j = 0} \hfill & {} \hfill \\ {\frac{1}{6}x^{3} + c_{i1,1} x + c_{i1,0} ,} \hfill & {j = 1,} \hfill & {i = 1,2} \hfill \\ {\frac{1}{6}x^{4} - \frac{1}{2}x^{2} + c_{i2,1} x + c_{i2,0} ,} \hfill & {j = 2} \hfill & {} \hfill \\ \end{array} } \right. $$

The unknown coefficients \( \left\{ {c_{ij,q} } \right\}_{q = 0}^{1} ,\,\,i = 1,\,2 \) and j = 0, 1, 2 can be found from the initial conditions: \( c_{ij,q} = 0 \) for \( q = 0,\,1 \), \( i = 1,\,2 \), \( j = 0,\,1,\,2 \). So, from (5) we obtain the following approximate solution of (34):

$$ \left. \begin{aligned} u_{1,2}^{A} (x,\alpha ) = \frac{{\xi_{12} }}{6}x^{4} + \left( {\frac{\alpha + 2}{6} + \frac{{\xi_{11} }}{6}} \right)x^{3} + \left( {\frac{{\xi_{10} }}{2} - \frac{{\xi_{12} }}{2}} \right)x^{2} + \alpha x + (\alpha + 1) \hfill \\ u_{2,2}^{A} (x,\alpha ) = \frac{{\xi_{22} }}{6}x^{4} + \left( {\frac{4 - \alpha }{6} + \frac{{\xi_{21} }}{6}} \right)x^{3} + \left( {\frac{{\xi_{20} }}{2} - \frac{{\xi_{22} }}{2}} \right)x^{2} + (2 - \alpha )x + (3 - \alpha ) \hfill \\ \end{aligned} \right\} $$
(35)

In order to evaluate the residual functions, we substitute \( u_{1,2}^{A} (x,\alpha ) \) and \( u_{2,2}^{A} (x,\alpha ) \) for \( u_{1} (x,\alpha ) \) and \( u_{2} (x,\alpha ) \), respectively, in (34). So, we have

$$ \begin{aligned} R_{1,2} (x,\bar{\xi },\alpha ) & = - \frac{{\xi_{12} }}{180}x^{6} - \frac{{\xi_{11} + \alpha + 2}}{120}x^{5} - \frac{{\xi_{10} - \xi_{12} }}{24}x^{4} \\ & \quad - \frac{\alpha }{6}x^{3} + \left( {2\xi_{12} - \frac{\alpha + 1}{2}} \right)x^{2} + \xi_{11} x \\ & \quad + (\xi_{10} - \xi_{12} ) \\ R_{2,2} (x,\bar{\xi },\alpha ) & = - \frac{{\xi_{22} }}{180}x^{6} - \frac{{\xi_{21} + 4 - \alpha }}{120}x^{5} - \frac{{\xi_{20} - \xi_{22} }}{24}x^{4} \\ & \quad - \frac{2 - \alpha }{6}x^{3} + \left( {2\xi_{22} - \frac{3 - \alpha }{2}} \right)x^{2} + \xi_{21} x \\ & \quad+ (\xi_{20} - \xi_{22} ) \\ \end{aligned} $$

where \( \bar{\xi } = (\xi_{10} ,\xi_{11} ,\xi_{12} ,\xi_{20} ,\xi_{21} ,\xi_{22} ). \) We determine the unknown coefficients \( \left\{ {\xi_{ij} } \right\}_{j = 0}^{2} \) for i = 1, 2 in such a way that the values of \( \left\{ {E_{ij} (\bar{\xi },\alpha )} \right\}_{j = 0}^{2} \) in (22) are minimized or forced to be zero, using the minimization process presented in Sect. 5. This yields \( \left\{ {\xi_{ij} } \right\}_{j = 0}^{2} \) as follows:

$$ \begin{aligned} & \xi_{10} = 0.2507 + 0.2507\alpha ,\;\xi_{11} = 0.0168 + 0.1765\alpha ,\;\\&\quad\xi_{12} = 0.2507 + 0.2507\alpha , \\ & \xi_{20} = 0.7521 - 0.2507\alpha ,\;\xi_{21} = 0.3697 - 0.1765\alpha ,\;\\&\quad\xi_{22} = 0.7521 - 0.2507\alpha \\ \end{aligned} $$
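These coefficients can be reproduced directly from the residual functions above. A sketch, under the assumption that the collocation points for N = 2 are the (unmapped) Chebyshev points \( x_{j} = \cos (j\pi /2) \in \{ 1,0, - 1\} \); since \( R_{1,2} \) is linear in \( (\xi_{10} ,\xi_{11} ,\xi_{12} ) \), the three conditions \( R_{1,2} (x_{j} ) = 0 \) form a 3 × 3 linear system:

```python
import numpy as np

alpha = 0.5

# R_{1,2}(x) is linear in (xi10, xi11, xi12); collect its coefficients.
def row(x):
    return [
        -x ** 4 / 24 + 1,                              # coefficient of xi10
        -x ** 5 / 120 + x,                             # coefficient of xi11
        -x ** 6 / 180 + x ** 4 / 24 + 2 * x ** 2 - 1,  # coefficient of xi12
    ]

def rhs(x):
    # minus the xi-independent part of R_{1,2}
    return ((alpha + 2) * x ** 5 / 120 + alpha * x ** 3 / 6
            + (alpha + 1) * x ** 2 / 2)

nodes = [np.cos(j * np.pi / 2) for j in range(3)]   # 1, 0, -1
A = np.array([row(x) for x in nodes])
b = np.array([rhs(x) for x in nodes])
xi = np.linalg.solve(A, b)
print(xi)   # close to (0.2507 + 0.2507a, 0.0168 + 0.1765a, 0.2507 + 0.2507a)
```

Note that the condition at \( x = 0 \) immediately forces \( \xi_{10} = \xi_{12} \), which is visible in the tabulated values.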

Substituting these values of \( \left\{ {\xi_{ij} } \right\}_{j = 0}^{2} \) for i = 1, 2 into (35) yields

$$ \begin{aligned} u_{1,2}^{A} (x,\alpha ) & = 1 + 0.3361x^{3} + 0.0418x^{4} \\ & \quad + \alpha (1 + x + 0.1961x^{3} + 0.0418x^{4} ) \\ u_{2,2}^{A} (x,\alpha ) & = 3 + 2x + 0.7281x^{3} \\ & \quad + 0.1253x^{4} + \alpha ( - 1 - x - 0.1961x^{3} - 0.0418x^{4} ) \\ \end{aligned} $$
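As a quick numerical check, the N = 2 approximation above can be compared with the exact solution of Problem 7.1 on [0, 1]; the 0.05 threshold below is our own sanity bound, while the tables report the point-wise errors precisely.

```python
import math

alpha = 0.4

def u1_exact(x):
    # lower branch of the exact solution of Problem 7.1
    return (0.5 * (alpha + 2) * (math.sinh(x) - math.sin(x))
            + 0.5 * (alpha + 1) * (math.cos(x) + math.cosh(x))
            + 0.5 * alpha * (math.sinh(x) + math.sin(x)))

def u1_approx(x):
    # u_{1,2}^A with the coefficients reported in the text
    return (1 + 0.3361 * x ** 3 + 0.0418 * x ** 4
            + alpha * (1 + x + 0.1961 * x ** 3 + 0.0418 * x ** 4))

err = max(abs(u1_exact(x) - u1_approx(x))
          for x in [j / 100 for j in range(101)])
print(err)   # of the order 1e-2 for N = 2
```

Even this very low-order approximation stays within about one percent of the exact solution over the whole interval, consistent with Tables 1 and 2.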

The absolute point-wise errors of the values obtained by our method for different numbers of approximating terms are shown in Tables 1 and 2, which show promising results for a small number of approximating terms. The consecutive and reference errors of the solutions of our method for different numbers of approximating terms are presented in Table 3, which shows very small errors for a small number of approximating terms. Figure 1 shows that, as the number of approximating terms increases, the absolute error decreases very rapidly for different membership values.

Table 1 Absolute point-wise errors of obtained values by our method for different numbers of approximating terms, N for Problem 7.1
Table 2 Absolute point-wise errors of obtained values by our method for different numbers of approximating terms, N for Problem 7.1
Table 3 Consecutive errors and reference errors of obtained values by our method for different numbers of approximating terms, N for Problem 7.1
Fig. 1

Behavior of the absolute values of error functions versus x, and different values of the approximating terms, N for Problem 7.1. a\( \left| {E_{1}^{N} (x,0.4)} \right|, \) for \( N = 2,\,3,\,4 \). b\( \left| {E_{1}^{N} (x,0.4)} \right|, \) for \( N = 5,6,7 \). c\( \left| {E_{1}^{N} (x,0.4)} \right|, \) for \( N = 8,\,9,\,10 \). d\( \left| {E_{1}^{N} (x,0.8)} \right|, \) for \( N = 2,\,3,\,4 \). e\( \left| {E_{1}^{N} (x,0.8)} \right|, \) for \( N = 5,6,7 \). f\( \left| {E_{1}^{N} (x,0.8)} \right|, \) for \( N = 8,\,9,\,10 \). g\( \left| {E_{2}^{N} (x,0.4)} \right|, \) for \( N = 2,\,3,\,4 \). h\( \left| {E_{2}^{N} (x,0.4)} \right|, \) for \( N = 5,6,7 \). i\( \left| {E_{2}^{N} (x,0.4)} \right|, \) for \( N = 8,\,9,\,10 \). j\( \left| {E_{2}^{N} (x,0.8)} \right|, \) for \( N = 2,\,3,\,4 \). k\( \left| {E_{2}^{N} (x,0.8)} \right|, \) for \( N = 5,6,7 \). l\( \left| {E_{2}^{N} (x,0.8)} \right|, \) for \( N = 8,\,9,\,10 \)

Problem 7.2

Consider the following fuzzy linear Volterra integro-differential equation (Allahviranloo et al. 2012) with exact solution \( [u_{1} ,u_{2} ] = [(\alpha - 2)e^{x} ,\,\,(2 - \alpha )e^{x} ] \)

$$ u^{\prime}(x) = a + \int\limits_{0}^{x} {u(t){\text{d}}t} ,\quad 0 \le x \le 1 $$

with initial condition \( [u(0)]_{\alpha } = [\alpha - 2,2 - \alpha ] \), where \( [a]_{\alpha } = [\alpha - 2,2 - \alpha ] \). This problem can be obtained from (3) by setting

$$ \begin{aligned} & n = m = 1,\;a = 0,\;G_{i} (x,U(x)) = 0,\;\int\limits_{a}^{x} {(K_{1}^{1} (x,t)F_{1}^{1} (t,U(t)))_{i} } {\text{d}}t \\ &= - \int\limits_{0}^{x} {u_{i} (t,\alpha ){\text{d}}t} ,\;\int\limits_{a}^{b} {(K_{2}^{1} (x,t)F_{2}^{1} (t,U(t)))_{i} } {\text{d}}t = 0, \\ & i = 1,2,\;f_{1} (x,\alpha ) = (\alpha - 2),\;f_{2} (x,\alpha ) = (2 - \alpha ) \\ \end{aligned} $$

By applying the method in the same way as for Problem 7.1, the approximate solutions can be obtained for different values of N. The absolute point-wise errors of the values obtained by our method for different numbers of approximating terms, N, are shown in Tables 4 and 5. The max absolute errors and relative errors of the solutions of our method for different values of N are presented in Table 6, which also shows very promising results for different numbers of approximating terms. In Fig. 2, we also see that the error decreases very rapidly as the value of N increases.
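Since the two α-cut equations of this problem decouple, the stated exact solution can also be verified directly: each branch \( (\alpha - 2)e^{x} \) and \( (2 - \alpha )e^{x} \) should satisfy \( u'(x) = a_{i} + \int_0^x u(t)\,{\text{d}}t \), with \( a_{1} = \alpha - 2 \) and \( a_{2} = 2 - \alpha \) as in the setting above. A numerical sketch using composite Simpson quadrature:

```python
import math

def simpson(f, a, b, n=200):
    """Composite Simpson rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

alpha = 0.3
for coef in (alpha - 2, 2 - alpha):          # lower / upper branch
    u = lambda t: coef * math.exp(t)
    for x in (0.25, 0.5, 1.0):
        # residual of u'(x) = coef + int_0^x u(t) dt
        res = coef * math.exp(x) - coef - simpson(u, 0.0, x)
        assert abs(res) < 1e-8
print("exact solution verified")
```

The residual of the exact solution vanishes to quadrature accuracy on both branches, confirming the α-cut formulation used by the method.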

Table 4 Absolute point-wise errors of obtained values by our method for different numbers of approximating terms, N for Problem 7.2
Table 5 Absolute point-wise errors of obtained values by our method for different numbers of approximating terms, N for Problem 7.2
Table 6 Max absolute errors and relative errors of obtained values by our method for different numbers of approximating terms, N for Problem 7.2
Fig. 2

Behavior of the absolute values of error functions versus x, and different values of the approximating terms, N for Problem 7.2. a\( \left| {E_{1}^{N} (x,0.4)} \right|, \) for \( N = 2,\,3,\,4 \). b\( \left| {E_{1}^{N} (x,0.4)} \right|, \) for \( N = 5,6,7 \). c\( \left| {E_{1}^{N} (x,0.4)} \right|, \) for \( N = 8,\,9,\,10 \). d\( \left| {E_{1}^{N} (x,0.8)} \right|, \) for \( N = 2,\,3,\,4 \). e\( \left| {E_{1}^{N} (x,0.8)} \right|, \) for \( N = 5,6,7 \). f\( \left| {E_{1}^{N} (x,0.8)} \right|, \) for \( N = 8,\,9,\,10 \). g\( \left| {E_{2}^{N} (x,0.4)} \right|, \) for \( N = 2,\,3,\,4 \). h\( \left| {E_{2}^{N} (x,0.4)} \right|, \) for \( N = 5,6,7 \). i\( \left| {E_{2}^{N} (x,0.4)} \right|, \) for \( N = 8,\,9,\,10 \). j\( \left| {E_{2}^{N} (x,0.8)} \right|, \) for \( N = 2,\,3,\,4 \). k\( \left| {E_{2}^{N} (x,0.8)} \right|, \) for \( N = 5,6,7 \). l\( \left| {E_{2}^{N} (x,0.8)} \right|, \) for \( N = 8,\,9,\,10 \)

Problem 7.3

Consider the following fuzzy linear Volterra integro-differential equation (Matinfar et al. 2013) with exact solution \( [u_{1} ,u_{2} ] = [\alpha x,(2 - \alpha )x] \)

$$ u^{\prime}(x) = f(x) + \int\limits_{0}^{x} {(2x - 1)^{2} (1 - 2t)u(t){\text{d}}t} ,\quad 0 \le x \le 1 $$

with initial conditions \( [u(0)]_{\alpha } = [0,0] \), where \( \begin{aligned} [f(x)]_{\alpha } & = \left[ {\frac{2 - \alpha }{3}(8x^{5} - 14x^{4} + 8x^{3} ) - \frac{4 - \alpha }{6}x^{2} - \frac{1 - \alpha }{3}x + \frac{11}{12}\alpha } \right. \\ & \quad \left. { + \frac{1}{12},\frac{8}{3}\alpha x^{5} - \frac{14}{3}\alpha x^{4} + \frac{8}{3}\alpha x^{3} - \frac{2 + \alpha }{6}x^{2} + \frac{1 - \alpha }{3}x - \frac{11}{12}\alpha + \frac{23}{12}} \right]. \\ \end{aligned} \)

This problem can be obtained from (3) by setting

$$ \begin{aligned} & n = m = 1,\;a = 0,\;G_{i} (x,U(x)) = 0,\;\\&\quad\int\limits_{a}^{x} {(K_{1}^{1} (x,t)F_{1}^{1} (t,U(t)))_{i} } {\text{d}}t \\ & = - \int\limits_{0}^{x} {((2x - 1)^{2} (1 - 2t)u(t))_{i} {\text{d}}t} , \\ & \int\limits_{a}^{b} {(K_{2}^{1} (x,t)F_{2}^{1} (t,U(t)))_{i} } {\text{d}}t = 0,\;i = 1,2,\; \\ & f_{1} (x,\alpha ) = \frac{2 - \alpha }{3}\left( {8x^{5} - 14x^{4} + 8x^{3} } \right) - \frac{4 - \alpha }{6}x^{2} \\&\quad- \frac{1 - \alpha }{3}x + \frac{11}{12}\alpha + \frac{1}{12}, \\ & f_{2} (x,\alpha ) = \frac{8}{3}\alpha x^{5} - \frac{14}{3}\alpha x^{4} + \frac{8}{3}\alpha x^{3} - \frac{2 + \alpha }{6}x^{2} + \frac{1 - \alpha }{3}x \\&\quad- \frac{11}{12}\alpha + \frac{23}{12} \\ \end{aligned} $$

By applying the method in the same way as for Problem 7.1, the approximate solutions can be obtained for different values of N. The residual functions for \( u_{1,7}^{A} \) and \( u_{2,7}^{A} \) coincide; their common graph is given in Fig. 3. The comparison of absolute errors between our method, the variational iteration method (VIM) (Matinfar et al. 2013) and the homotopy perturbation method (HPM) (Matinfar et al. 2013) is shown in Tables 7 and 8, which show that our method gives smaller errors than VIM and HPM for the same number of iterations. The comparison of relative errors between our method, VIM and HPM is given in Table 9 for different numbers of iterations; here also our method gives better results.

Fig. 3

Residual function for N = 7 for Problem 7.3

Table 7 Comparison of absolute errors between our method, variational iteration method (Matinfar et al. 2013) and homotopy perturbation method (Matinfar et al. 2013) for Problem 7.3
Table 8 Comparison of absolute errors between our method, variational iteration method (Matinfar et al. 2013) and homotopy perturbation method (Matinfar et al. 2013) for Problem 7.3
Table 9 Comparison of relative errors between our method, variational iteration method (Matinfar et al. 2013) and homotopy perturbation method (Matinfar et al. 2013) for Problem 7.3

8 Conclusion

A reliable method has been presented for conveniently solving a wide class of fuzzy integro-differential equations with multi-point or mixed boundary conditions. A convergence theorem for the method has been given, which shows that as the number of approximating terms increases, the residual error goes to zero. We have demonstrated the practicality and efficiency of this method on several numerical examples; as Tables 7, 8 and 9 show, the obtained results are a clear improvement over the existing variational iteration method (Matinfar et al. 2013) and homotopy perturbation method (Matinfar et al. 2013). From Figs. 1 and 2, we can see that as the number of approximating terms increases, the absolute error decreases very rapidly. From Tables 1, 2, 3, 4, 5 and 6, we can see that the proposed method provides excellent solutions for fuzzy integro-differential equations, with small absolute errors for very few approximating terms.

It is worth remarking that the proposed method has further advantages. The solutions of the auxiliary equations can be tabulated so that they can be reused for any problem with the same multi-point or mixed boundary conditions. Moreover, since we have considered a class of fuzzy integro-differential equations that has not previously been treated in the literature, analytically or numerically, the proposed method can also provide solutions to fuzzy integro-differential equations that cannot be solved by the other methods in the literature.