1 Introduction

Many complex problems arising in science and engineering involve functions of a nonlinear and transcendental nature, which can be expressed in the form f(x) = 0 in a single variable. The initial and boundary value problems arising in the kinetic theory of gases, elasticity and other areas are reduced to nonlinear form in order to solve them. Because it is not always possible to derive an exact solution by the usual algebraic processes, many numerical and analytical methods have been developed for these problems, for example, iterative numerical solvers based on Newton’s method [1–3], Taylor series, the homotopy perturbation method and its various modified versions, quadrature formulas, the variational iteration method and the decomposition method [4–8]. The performance of all of these methods is highly sensitive to the initial start point of the algorithm: they work efficiently if the start point is close to a solution of the equation and normally fail otherwise. On the other hand, stochastic techniques based on viable global search methodologies have the strength to solve these nonlinear equations effectively without prior knowledge of a biased initial guess. The aim of this study is to provide an alternative, accurate and reliable platform for solving these problems by exploiting the strength of evolutionary computation, mainly based on genetic algorithms.

These stochastic solvers have been widely used for solving a number of linear and nonlinear problems, such as the solution of the nonlinear first Painlevé equation [9, 10], optimization of unsupervised neural networks for Troesch’s problem in stiff and nonstiff cases [11, 12], the one-dimensional Bratu problem arising in the fuel ignition model of combustion theory [13, 14], nonlinear, singular transformed boundary value problems of two-dimensional Bratu equations [15, 16], nonlinear singular systems based on Emden–Fowler equations [17], nonlinear Van der Pol oscillators [18, 19], differential equations involving arbitrary-order derivatives based on nonlinear Riccati and Bagley–Torvik fractional differential equations [20, 21], and recent applications in computational fluid dynamics based on nonlinear Jeffery–Hamel flow problems [22, 23]. These are the motivating factors for the author to explore these methodologies and provide an effective and reliable intelligent computing framework for nonlinear equations in a single variable.

In the present study, computational intelligence techniques are used for solving nonlinear equations with the help of effective variants of genetic algorithms (GAs), as a tool for viable global search. The proposed scheme is evaluated on twenty different benchmark problems involving nonlinear polynomial and transcendental functions. A comparative analysis of the proposed results is made with state-of-the-art iterative solvers to validate and verify the correctness of the designed scheme. The reliability and effectiveness of the proposed schemes are analyzed on the basis of a sufficiently large data set generated from independent executions of the GA optimizers with different reproduction operators. A comparative study of the proposed schemes is also made on the basis of statistical analyses in terms of accuracy, convergence and computational complexity.

The organization of the paper is as follows. In Sect. 2, a brief literature review of iterative methods for nonlinear equations is given. In Sect. 3, the design methodology is provided in terms of the formulation of the fitness function, introductory material on GAs, the stepwise procedure, a flow diagram and the necessary parameter settings of the GA. In Sect. 4, the results of detailed simulations, along with comparisons with iterative solvers, are given in both numerical and graphical form. In Sect. 5, comparative studies are presented based on data generated from a sufficiently large number of independent executions of each variant of GA. In the last section, findings and observations are listed along with future research directions.

2 Overview of iterative methods

This section gives a brief overview of iterative methods for finding a root of a nonlinear equation of the form

$$f(x) = 0$$
(1)

Assuming that \(\alpha\) is the exact root of (1) and \(\gamma\) is a start point or initial estimate very close to \(\alpha\), the second-order Taylor series expansion of f about \(\gamma\), set equal to zero, is given as:

$$f(\gamma ) + (x - \gamma )f^{\prime} (\gamma ) + \frac{{(x - \gamma )^{2} }}{2!}f^{\prime \prime }(\gamma ) = 0$$
(2)

Neglecting the second-order term in (2), since \(x - \gamma \approx 0\) near the root, and solving for x gives

$$x = \gamma - \frac{f(\gamma )}{{f^{{\prime }} (\gamma )}}\,$$
(3)

This fixed-point formulation suggests the following iterative method.

Algorithm 2.1

For a given \(x_{0}\), compute the approximate solution \(x_{n + 1}\) by the iterative scheme:

$$x_{n + 1} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime }(x_{n} )}}\,$$
(4)

The above formulation is known as the Newton method (NM), which is quadratically convergent [24, 25]. From Eq. (2), by substituting the Newton correction \(x - \gamma \approx - f(\gamma )/f^{\prime } (\gamma )\) into the second-order term and solving for x, one can derive

$$x = \gamma - \frac{{2f(\gamma )f^{{\prime }} (\gamma )}}{{2f^{{{\prime }2}} (\gamma ) - f(\gamma )f^{{{\prime \prime }}} (\gamma )}}\,\,$$
(5)

Algorithm 2.2

For a given \(x_{0}\), compute the approximate solution \(x_{n + 1}\) by the iterative scheme given in [26] as:

$$\left\{ \begin{array}{l} y_{n} = x_{n} - \frac{{2f(x_{n} )f^{{\prime }} (x_{n} )}}{{2f^{{{\prime }2}} (x_{n} ) - f(x_{n} )f^{{{\prime \prime }}} (x_{n} )}} \hfill \\ x_{n + 1} = x_{n} - \frac{{2\left( {f(x_{n} ) + f(y_{n} )} \right)f^{{\prime }} (x_{n} )}}{{2f^{{{\prime }2}} (x_{n} ) - \left( {f(x_{n} ) + f(y_{n} )} \right)f^{\prime\prime}(x_{n} )}}\,\, \hfill \\ \end{array} \right.$$
(6)

Equation (6) is known as the fifth-order convergent two-step Halley method for nonlinear equations [27].
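As a concrete illustration, the Newton iteration (4) and the two-step Halley scheme (6) can be sketched as below. This is a minimal sketch: the tolerance, iteration cap and the test function used in the usage note are illustrative assumptions, not values taken from this paper.

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton method, Eq. (4): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x


def two_step_halley(f, df, d2f, x0, tol=1e-12, max_iter=100):
    """Two-step Halley method, Eq. (6)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if abs(fx) < tol:
            break
        # First step: the Halley correction of Eq. (5)
        y = x - 2 * fx * dfx / (2 * dfx**2 - fx * d2f(x))
        # Second step: corrected update using f(x_n) + f(y_n)
        s = fx + f(y)
        x = x - 2 * s * dfx / (2 * dfx**2 - s * d2f(x))
    return x
```

For example, with f(x) = x² − 2 and start point x₀ = 1.5, both iterations converge to √2, with the two-step scheme requiring fewer iterations.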

Algorithm 2.3

For a given \(x_{0}\), compute the approximate solution \(x_{n + 1}\) by the iterative scheme:

$$\left\{ \begin{array}{l} y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{{\prime }} (x_{n} )}} - \frac{{f^{{{\prime \prime }}} (x_{n} )f^{2} (x_{n} )}}{{2f^{{{\prime }3}} (y_{n} ) - 2f(x_{n} )f^{{\prime }} (x_{n} )f^{{{\prime \prime }}} (x_{n} )}} \hfill \\ x_{n + 1} = y_{n} - \frac{{f(y_{n} )}}{{f^{{\prime }} (x_{n} ) + f^{{{\prime \prime }}} (x_{n} )(y_{n} - x_{n} )}} \hfill \\ \end{array} \right.$$
(7)

The algorithm given in (7) is proposed in [28].

Algorithm 2.4

For a given \(x_{0}\), compute the approximate solution \(x_{n + 1}\) by the iterative procedure developed in [6] as:

$$\left\{ \begin{array}{l} y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime } (x_{n} )}} - \frac{{f^{\prime \prime } (x_{n} )f^{2} (x_{n} )}}{{2f^{{\prime 3}} (y_{n} ) - 2f(x_{n} )f^{\prime } (x_{n} )f^{\prime \prime } (x_{n} )}} \hfill \\ x_{n + 1} = y_{n} - \frac{{f(y_{n} )}}{{f^{\prime } (x_{n} )}} - \frac{{f^{\prime \prime } (x_{n} )f(y_{n} )}}{{2f^{{\prime 3}} (x_{n} )}} \hfill \\ \end{array} \right.$$
(8)

The expression (8) is reported in [29].

3 Design methodology

In this section, we present the methodology developed for solving nonlinear algebraic equations using an optimization technique based on variants of the genetic algorithm, as a tool for effective global search.

3.1 Formulation of fitness function

A broad variety of mathematical models of problems arising in science and engineering can be organized into equations of the form f(x) = 0, where x and f(x) may be real, complex, or vector quantities. The fitness function, or merit function, for solving nonlinear equations can be formulated on the basis of the absolute or squared error as:

$${\text{minimize}}\quad \varepsilon = \left| {f(x)} \right|\quad {\text{or}}\quad \left( {f(x)} \right)^{2} ,$$
(9)

subject to bounded constraints on x.

Nonlinear equations are solved by finding the global minimum of the error function \(\varepsilon\) in Eq. (9). When an appropriate x is found such that the fitness function \(\varepsilon \to 0\), then consequently \(f(x) \to 0\) and x is an approximate solution, or root, of the nonlinear equation f(x) = 0. The same methodology extends to solving systems of n equations in n unknowns. However, in this study, we confine our investigation to the fitness function \(\varepsilon\) given in (9) for finding the roots of nonlinear equations involving polynomial and transcendental functions.
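A minimal Python sketch of this fitness formulation is given below; the example function f(x) = x³ − 10 is a hypothetical illustration, not necessarily drawn from the benchmark set used later.

```python
def make_fitness(f, squared=False):
    """Build the merit function of Eq. (9): eps = |f(x)| or (f(x))^2.
    A candidate x with eps -> 0 is an approximate root of f(x) = 0."""
    if squared:
        return lambda x: f(x) ** 2
    return lambda x: abs(f(x))


# Hypothetical example: f(x) = x**3 - 10, whose real root is 10**(1/3)
eps = make_fitness(lambda x: x**3 - 10)
```

Any global minimizer can then drive eps toward zero over a bounded interval for x, which is exactly the role the GA variants play in the next subsection.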

3.2 Learning methodology

Global minimization of the error function (9) is carried out with an evolutionary computing technique mainly based on genetic algorithms (GAs). Since the introduction of the GA by John Holland in the early 1970s in his book [30], the algorithm has been widely used for optimization problems in diverse fields. The construction of GAs is based on mathematical models inspired by natural genetics and evolutionary mechanisms. The efficiency and performance of a GA depend on a number of factors, including the setting of system parameters and the design of the GA operators, i.e., genetic reproduction, evaluation and selection [31]. The two basic operations of crossover and mutation are performed in genetic reproduction. Evaluation is executed by means of the fitness function, which depends on the specific optimization problem. Selection is the method that chooses parent individuals, with probability proportional to their relative fitness, for the reproduction process. A GA is constructed for any problem through five main tasks: genetic representation, a technique for generating an initial population of solutions, the design of the genetic operators, the definition of the fitness function and the design of the system parameters. A few recent applications of genetic algorithms in applied science and technology include classification of handwritten Hindi ‘SWARS’ [32], effective detection of target motions [33], design of autonomous robots [34] and optimization of a double-screen frequency-selective surface structure [35].

3.2.1 Optimal solution search for nonlinear equations

Keeping in view the strength and applicability of GAs in diverse environments, they are used here to optimize the fitness function for finding the roots of nonlinear equations. A generic flow diagram of the overall procedure is given in Fig. 1, while the necessary details of the procedural steps are as follows:

Fig. 1

Graphical illustration of proposed methodology for finding the solution of nonlinear equations based on variants of genetic algorithms

  • Step 1: Initialization: Random assignments and declarations are made for the initial values of the GA parameters; the settings of the important parameters of the algorithm are tabulated in Table 1. The selection of parameters for stochastic algorithms is made on the basis of knowledge of the problem, experience, extensive experimentation and a measure of luck. The performance of the algorithm is highly sensitive to these parameters, and a slight change in the values/settings may result in premature convergence.

    Table 1 Parameter settings for GAs in simulations
  • Step 2: Fitness Evaluation: Calculate the fitness of each individual of the population using the problem-specific error function given in Eq. (9) for finding the root of the nonlinear equation.

  • Step 3: Termination Criteria: Terminate the execution of the algorithm when any of the following criteria is met:

    • The predefined fitness value ε ≤ 10⁻³⁵ is achieved.

    • The predefined number of generations has been executed.

    If a termination criterion is fulfilled, then go to Step 5.

  • Step 4: Reproduction: Create the next generation on the basis of different combinations of reproduction operators, including selection, crossover and mutation. The variants of evolutionary algorithms are invoked by calling the functions listed in Table 2.

    Table 2 Variants of GA based on different reproduction operators
  • Step 5: Comparative Analyses: Repeat Steps 1 to 4 for all twelve variants of GAs, taking three different initial ranges of the population, in order to generate a data set large enough for a reliable comparison of the performance of the algorithms.
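The steps above can be sketched as a minimal real-coded GA in Python. The operator choices below (tournament selection, arithmetic crossover, Gaussian mutation, elitism) and the parameter values are simplified stand-ins for the MATLAB reproduction operators and settings of Tables 1 and 2, not a reproduction of them.

```python
import random


def ga_root(f, lo, hi, pop_size=40, generations=500,
            pc=0.8, pm=0.1, tol=1e-15):
    """Minimal GA sketch for Steps 1-4: minimise eps = |f(x)| over [lo, hi]."""
    fitness = lambda x: abs(f(x))
    # Step 1: random initial population within the chosen range
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=fitness)                  # Step 2: evaluation
    for _ in range(generations):
        if fitness(best) <= tol:                  # Step 3: termination
            break
        new_pop = [best]                          # elitism keeps the best
        while len(new_pop) < pop_size:
            # Step 4: selection (tournament of size 3)
            p1 = min(random.sample(pop, 3), key=fitness)
            p2 = min(random.sample(pop, 3), key=fitness)
            # crossover (arithmetic blend of the parents)
            a = random.random()
            child = a * p1 + (1 - a) * p2 if random.random() < pc else p1
            # mutation (Gaussian perturbation)
            if random.random() < pm:
                child += random.gauss(0, 0.1 * (hi - lo))
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=fitness)              # Step 2 for new generation
    return best
```

A call such as `ga_root(lambda x: x**2 - 2, -5, 5)` then returns an approximation to one of the roots ±√2 without any biased initial guess, only a bounded range for the population.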

4 Numerical experimentation

In this section, the results of detailed simulations performed on a number of nonlinear equations by the proposed design approaches, as well as by state-of-the-art iterative solvers, are presented.

Twenty different nonlinear equations are taken to evaluate the robustness, efficiency and performance of the proposed methods based on variants of genetic algorithms. These equations are listed in Table 3 and are commonly used as a test bench to verify and validate the working of such algorithms [19, 36–40]. First, our intention is to obtain the results of well-known solvers, including the Newton method (NM) and a few of its variants. The results obtained are provided in Table 4 along with the number of iterative steps involved and the level of accuracy achieved. The performance of these algorithms is highly sensitive to the value of the initial guess, and generally, all of these methods fail whenever wider inputs are used as the start point of the algorithm. It is observed from Table 4 that an accuracy level of 10⁻¹⁵ is generally achieved by almost all the solvers when the initial guess is taken very close to the exact solution of the nonlinear problem.
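This sensitivity of NM to the start point can be illustrated with the classical example f(x) = arctan(x), chosen here for illustration and not necessarily one of the twenty benchmark equations: a start point near the root x = 0 converges rapidly, while a wider start point makes the iterates diverge.

```python
import math


def newton_step_atan(x):
    """One Newton step for f(x) = arctan(x), using f'(x) = 1/(1 + x^2)."""
    return x - math.atan(x) * (1 + x * x)


# Close start point: the iterates collapse quickly toward the root x = 0
x = 0.5
for _ in range(10):
    x = newton_step_atan(x)

# Wide start point: the iterates grow in magnitude without bound
y = 3.0
for _ in range(5):
    y = newton_step_atan(y)
```

After ten steps from 0.5 the iterate is essentially zero, whereas after only five steps from 3.0 the iterate has exploded in magnitude, which is exactly the failure mode that motivates a global search method.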

Table 3 The problem set with proposed solutions by GA-1 for inputs between (−5, 5)
Table 4 Comparison of various iterative schemes

On the other hand, the proposed design schemes based on variants of GAs are applied to solve the problems tabulated in Table 3 using the parameter settings given in Tables 1 and 2. One of the best results obtained by GA-1 for the initial range (−5, 5) is given in Table 3; these values match the state-of-the-art NM variants to fifteen decimal places of accuracy. The learning curves and the distance between individuals for GA-1, 2, 3 and 4 with initial range (−1, 1), GA-5, 6, 7 and 8 with initial range (−5, 5) and GA-9, 10, 11 and 12 with initial range (−50, 50) are given in Figs. 2, 3 and 4, respectively, for different nonlinear problems. It is seen from these figures that the proposed variants of GAs are able to provide convergent and accurate results for different spans of inputs for which most of the reported NM-based iterative methods fail to give convergent solutions.

Fig. 2

Learning curves and distance between individuals of the population for variants of GAs for different nonlinear equations, taking initial range (−1, 1). a Method GA-1, problem f1, b method GA-2, problem f5, c method GA-3, problem f10, d method GA-4, problem f15

Fig. 3

Learning curves and distance between individuals of the population for variants of GAs for different nonlinear equations, taking initial range (−5, 5). a Method GA-5, problem f2, b method GA-6, problem f6, c method GA-7, problem f12, d method GA-8, problem f18

Fig. 4

Learning curves and distance between individuals of the population for variants of GAs for different nonlinear equations, taking initial range (−50, 50). a Method GA-9, problem f3, b method GA-10, problem f9, c method GA-11, problem f16, d method GA-12, problem f20

The value of the fitness ε achieved by GA-1, 2 and 3 with different initial ranges is given in Table 5, while the results for algorithms GA-7, 8 and 9 are tabulated in Table 6. Similarly, graphical results are presented in Fig. 5 for algorithms GA-4, 5, 6, 10, 11 and 12. The results are plotted on a semi-log scale to show the small variations in the values. The results presented in Tables 5 and 6 as well as in Fig. 5 are based on one of the best results from a sufficiently large number of independent runs of the algorithms. It is observed from the results given in the tables that for most of the problems, the fitness limit of 0 is achieved; in Fig. 5, these results are shown as blank bars. In general, all the algorithms give considerably accurate results for all the nonlinear examples except problem f11, for which a few variants of GAs with initial range (−1, 1) fail to provide convergent solutions. Generally, all the variants of GAs provide good accuracy, i.e., f(x) or ε < 10⁻¹⁵, for all tested nonlinear problems with different ranges of input data used to generate the population for the algorithms.

Table 5 The value of fitness, ε or f(x), achieved by variants of GAs for different sets of input ranges of the populations
Table 6 The value of fitness, ε or f(x), achieved by variants of GAs for different sets of input ranges of the populations
Fig. 5

Values of fitness, ε or f(x), achieved by variants of GAs for different sets of input ranges of the populations. a Initial range (−1, 1), b initial range (−1, 1), c initial range (−5, 5), d initial range (−5, 5), e initial range (−50, 50), f initial range (−50, 50)

The results presented so far in tables and figures are based on the dominant solution, i.e., the individual of the population with the best fitness value, while the fitness achieved by the nondominant solutions, i.e., the rest of the individuals in the population, should also be analyzed. In this regard, 40 individuals are taken in the population in our numerical experimentation, and the fitness achieved by the nondominant individuals is plotted in Fig. 6 for each variant of GA. It is seen that, in general, about 10–15 % of the nondominant solutions of the proposed schemes give reasonably accurate results. This means that a single run of any variant of GA produces one best solution along with a number of other solutions, equal to the population size, of relatively lower accuracy.

Fig. 6

Values of fitness, ε or f(x), achieved by the nondominant solutions of the GA variants for various nonlinear equations. a GA-1 for (−1, 1), b GA-2 for (−1, 1), c GA-3 for (−1, 1), d GA-4 for (−1, 1), e GA-5 for (−5, 5), f GA-6 for (−5, 5), g GA-7 for (−5, 5), h GA-8 for (−5, 5), i GA-9 for (−50, 50), j GA-10 for (−50, 50), k GA-11 for (−50, 50), l GA-12 for (−50, 50)

5 Statistical analysis

Comparative studies based on the results of statistical analysis are presented for each variant of GA on all the nonlinear equations in order to draw constructive conclusions about the performance of each algorithm.

Fifty independent runs of all 12 proposed schemes are carried out for finding the roots of each nonlinear algebraic equation using the parameter settings listed in Tables 1 and 2. The performance of the algorithms is analyzed on the basis of the statistical parameters of mean and standard deviation (STD). Results based on the value of the fitness ε are given in Table 7 for algorithms GA-1 to GA-6, while those for algorithms GA-7 to GA-12 are listed in Table 8, for all 20 nonlinear equations. The numerical values presented in the tables clearly show that an accuracy level in the range 10⁻¹²–10⁻¹⁵ is generally achieved by the proposed algorithms, and the small values of STD establish the consistency and reliability of the design approaches. A single unsuccessful run, or a few of them, has a great effect on the values of the statistical parameters, i.e., mean and STD; therefore, the analysis is continued by plotting the fitness values against the number of independent runs of each algorithm in Fig. 7, with the achieved fitness values shown in ascending order. It is seen that only a few runs of algorithms GA-3, GA-5 and GA-7, shown in Fig. 7c, f, g, respectively, exhibit premature convergence for some nonlinear equations, while the results in the rest of the figures generally show consistent convergence. It is also observed that a significant number of runs of the GA variants provide fitness ε = 0 for different examples.

Table 7 The value of statistical parameters based on fitness, ε or f(x), achieved by variants of GAs with different input ranges of the population
Table 8 The value of statistical parameters based on fitness, ε or f(x), achieved by variants of GAs with different input ranges of the population
Fig. 7

Results of statistical analysis based on level of accuracy achieved by the best individual of population in each independent run of proposed algorithms for different nonlinear equations. a GA-1 for (−1, 1), b GA-2 for (−1, 1), c GA-3 for (−1, 1), d GA-4 for (−1, 1), e GA-5 for (−5, 5), f GA-6 for (−5, 5), g GA-7 for (−5, 5), h GA-8 for (−5, 5), i GA-9 for (−50, 50), j GA-10 for (−50, 50), k GA-11 for (−50, 50), l GA-12 for (−50, 50)

The computational complexity of each variant of GA is also evaluated on the basis of the mean execution time (MET), i.e., the average time taken to find the optimal solution, and the mean number of function evaluations (MFE), i.e., the average number of function evaluations required for optimization by an algorithm. The values of MET and MFE based on 50 independent runs of each algorithm for all 20 examples with different initial ranges are tabulated in Tables 9 and 10. It is seen from these results that, except for the nonlinear equation f18, which is relatively easy to tackle, the values of MET and MFE are around 0.8 s and 20,000, respectively. It is observed that all the algorithms, i.e., GA-1 to GA-12, take an almost similar computational budget to find the optimal results for each nonlinear problem.
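The MET and MFE bookkeeping can be reproduced with a simple evaluation-counting wrapper and a wall-clock timer, as in the sketch below; the `solver(g)` interface is a hypothetical stand-in for any of the GA variants, not the paper's actual code.

```python
import time


class CountingFunction:
    """Wraps f so that every evaluation is counted, for MFE bookkeeping."""

    def __init__(self, f):
        self.f = f
        self.calls = 0

    def __call__(self, x):
        self.calls += 1
        return self.f(x)


def met_mfe(solver, f, n_runs=50):
    """Mean execution time and mean number of function evaluations over
    n_runs independent runs, as reported in Tables 9 and 10."""
    times, evals = [], []
    for _ in range(n_runs):
        g = CountingFunction(f)
        t0 = time.perf_counter()
        solver(g)                                 # one independent run
        times.append(time.perf_counter() - t0)
        evals.append(g.calls)
    return sum(times) / n_runs, sum(evals) / n_runs
```

Because the wrapper counts every call to the objective, MFE is measured exactly rather than estimated from population size and generation count.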

Table 9 Computational complexity for variants of GAs based on MET and MFE values
Table 10 Computational complexity for variants of GAs based on MET and MFE values

To further elaborate the performance of each variant of GA, the analyses are continued on the basis of global operators, i.e., global accuracy (GA), global mean generation (GMG), global mean number of function evaluations (GMFE) and global mean execution time (GMET). The values of these global operators, calculated from 50 independent runs of each algorithm, are given in Table 11 for all three initial ranges. The value of a global operator is based on the overall performance of a GA variant on all 20 nonlinear equations. Global accuracy is examined on the basis of the percentage of independent runs that achieve a predefined level of fitness, i.e., ε < 10⁻⁵, 10⁻¹⁰, 10⁻¹⁵ and 10⁻²⁰. It is seen from the results given in Table 11 that, based on the 10⁻²⁰ criterion, the performance of GA-1, GA-5 and GA-9 is better than the rest. The dominance of GA-1, GA-5 and GA-9 is also seen in the smaller values of the GMG and GMFE performance operators, while their GMET values are similar to those of the other GA variants. The numerical experimentation and calculations are performed on an Acer Aspire V3-471G notebook with a 2.5 GHz Core i5 processor and 4.00 GB RAM, running MATLAB version 2012b.
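The global accuracy operator can be computed from the final fitness values of the independent runs as in the short sketch below; the five run values shown are hypothetical, used only to illustrate the percentage calculation.

```python
def global_accuracy(run_fitness, thresholds=(1e-05, 1e-10, 1e-15, 1e-20)):
    """Percentage of independent runs whose final fitness eps meets each
    predefined level, as used for the global accuracy operator."""
    n = len(run_fitness)
    return {t: 100.0 * sum(eps < t for eps in run_fitness) / n
            for t in thresholds}


# Hypothetical final fitness values from five independent runs
rates = global_accuracy([0.0, 1e-18, 1e-12, 1e-7, 1e-3])
# rates[1e-05] -> 80.0 and rates[1e-20] -> 20.0
```

Reading the rates across all problems and thresholds gives exactly the kind of percentage table used to rank the GA variants.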

Table 11 Performance comparison for variants of GAs using global operators

6 Conclusion

On the basis of the simulations and results presented in the previous sections, the following conclusions can be drawn:

  1.

    The root-finding problem for nonlinear equations involving polynomial and transcendental functions is solved effectively by the evolutionary computing approach based on variants of GAs, without prior knowledge of an initial guess close to the exact solution.

  2.

    Comparative analyses of the proposed results with state-of-the-art iterative numerical solvers based on variants of Newton methods validate the correctness of the design approach.

  3.

    The variants of GAs with different reproduction operators have been incorporated as solvers for optimization, and it is found that the best results are obtained using GA-1, GA-5 and GA-9. All of these variants use the heuristic function as the crossover operator and the adaptive feasible function as the mutation operator, combined with any of the selection operators based on the stochastic uniform, remainder or roulette functions.

  4.

    Validation and verification of the results are carried out on the basis of statistical analysis of 50 independent runs of each algorithm for all 20 equations with different sets of initial ranges, and it is found that all these variants provide small values of the statistical parameters of mean and standard deviation.

  5.

    The computational complexity of the designed variants of GAs, in terms of mean execution time and mean number of function evaluations, shows that there is no noticeable difference between them.

  6.

    The accuracy and convergence of the design approaches, based on the values of the global performance operators, including global accuracy achieved, global mean generation, global mean function evaluations and global mean execution time for all nonlinear equations, show that generally all the design schemes provide effective and reliable results, but the performance of GA-1, GA-5 and GA-9 is invariably the best on these parameters.

  7.

    Besides the accurate, convergent and effective results of the proposed design scheme for finding the roots of nonlinear algebraic equations, other advantages of the algorithms are the simplicity of the concept, ease of implementation and a wide application domain, which make them attractive and valuable.

In the future, modified versions of global search methodologies based on evolutionary and swarm intelligence techniques may be explored to solve not only nonlinear algebraic equations but also systems of such equations involving large sets of variables.