Abstract
In the present study, a novel intelligent computing approach is developed for solving nonlinear equations using an evolutionary computational technique based mainly on variants of genetic algorithms (GA). The mathematical model of the equation is formulated by defining an error function. Optimization of the fitness function is carried out with GA, used as a tool for a viable global search methodology. Comprehensive numerical experimentation has been performed on a number of benchmark nonlinear algebraic and transcendental equations to validate the accuracy, convergence and robustness of the designed scheme. Comparative studies have also been made with available standard solutions to establish the correctness of the proposed scheme. The reliability and effectiveness of the design approaches are validated on the basis of statistical parameters.
1 Introduction
Many complex problems arising in science and engineering involve functions of nonlinear and transcendental nature, which can be expressed in the form f(x) = 0 in a single variable. The initial and boundary value problems arising in the kinetic theory of gases, elasticity and other areas are reduced to nonlinear form in order to solve them. Because it is not always possible to derive exact solutions by the usual algebraic processes, many numerical and analytical methods have been developed for these problems, for example, iterative numerical solvers based on Newton's method [1–3], Taylor series, the homotopy perturbation method and its various modified versions, quadrature formulae, the variational iteration method and the decomposition method [4–8]. The performance of all of these methods is highly sensitive to the initial point of the algorithm: they work efficiently if the start point is close to a solution of the equation and normally fail otherwise. On the other hand, stochastic techniques based on viable global search methodologies have the strength to solve these nonlinear equations effectively without prior knowledge of a biased initial guess. The aim of this study is to provide an alternative, accurate and reliable platform for solving these problems by exploiting the strength of evolutionary computation based mainly on genetic algorithms.
These stochastic solvers have been widely used for solving a number of linear and nonlinear problems, such as the nonlinear first Painlevé equation [9, 10], optimization of unsupervised neural networks for Troesch's problem in stiff and nonstiff cases [11, 12], the one-dimensional Bratu problem arising in the fuel ignition model of combustion theory [13, 14], nonlinear, singular transformed boundary value problems of the two-dimensional Bratu equation [15, 16], nonlinear singular systems based on Emden–Fowler equations [17], nonlinear Van der Pol oscillators [18, 19], differential equations of arbitrary order based on the nonlinear Riccati and Bagley–Torvik fractional differential equations [20, 21], and recent applications in computational fluid dynamics based on nonlinear Jeffery–Hamel flow problems [22, 23]. These are the motivating factors for the authors to exploit these methodologies in order to provide an effective and reliable intelligent computing framework for nonlinear equations in a single variable.
In the present study, computational intelligence techniques are used for solving nonlinear equations with the help of effective variants of genetic algorithms (GA), as a tool for a viable global search method. The proposed scheme is evaluated on twenty different benchmark problems involving nonlinear polynomial and transcendental functions. Comparative analysis of the proposed results is made with state-of-the-art iterative solvers to validate and verify the correctness of the designed scheme. The reliability and effectiveness of the proposed schemes are analyzed on the basis of a sufficiently large set of data generated from independent executions of the GA optimizers with different reproduction operators. A comparative study of the proposed schemes is also made on the basis of statistical analyses in terms of accuracy, convergence and computational complexity.
The organization of the paper is as follows. In Sect. 2, a brief literature review of iterative methods for nonlinear equations is given. In Sect. 3, the design methodology is provided in terms of the formulation of the fitness function, introductory material on GA, a stepwise procedure, a flow diagram and the necessary parameter settings of GA. In Sect. 4, the results of detailed simulations, along with comparisons with iterative solvers, are given in both numerical and graphical form. In Sect. 5, comparative studies are presented based on data generated from a sufficiently large number of independent executions of each variant of GA. In the last section, findings and observations are listed along with future research directions.
2 Overview of iterative methods
Here, a brief overview is presented of iterative methods for finding the root of a nonlinear equation of the form \(f(x) = 0\).
Taylor's series expansion, under the assumption that \(\alpha\) is the exact root of (1) and \(\gamma\) is a start point (initial estimate) very close to \(\alpha\), is given as:
Simplification of (2) by assuming \(x - \gamma \approx 0\) gives
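The displayed equations are missing from this copy; reconstructed in standard notation consistent with the surrounding text, the expansion and its simplification read:

```latex
% Taylor expansion of f about the start point \gamma (Eq. (2) in the text):
f(x) = f(\gamma) + (x - \gamma)\, f'(\gamma)
     + \frac{(x - \gamma)^{2}}{2!}\, f''(\gamma) + \cdots
% With x - \gamma \approx 0, terms beyond the linear one are neglected,
% and setting f(x) = 0 gives the fixed-point form:
x = \gamma - \frac{f(\gamma)}{f'(\gamma)}
```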
This fixed-point formulation is used to suggest the following iterative method.
Algorithm 2.1
For a given \(x_0\), compute the approximate solution \(x_{n+1}\) by the iterative scheme.
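The displayed iteration formula is missing from this copy; the classical Newton step \(x_{n+1} = x_n - f(x_n)/f'(x_n)\) can be sketched in Python as follows (an illustrative sketch, not the authors' code; the function names are ours):

```python
def newton(f, df, x0, tol=1e-15, max_iter=100):
    """Newton method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # stop once the residual is negligible
            break
        x = x - fx / df(x)         # Newton update
    return x

# Root of f(x) = x^2 - 2 from the start point x0 = 1 (converges to sqrt(2))
root = newton(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.0)
```

With a start point close to the root, the error is roughly squared at every step, which is the quadratic convergence noted in the text.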
The above formulation is known as the Newton method (NM), which is quadratically convergent [24, 25]. From Eq. (2), one can derive
Algorithm 2.2
For a given \(x_0\), compute the approximate solution \(x_{n+1}\) by the iterative scheme given in [26] as:
Equation (6) is known as the fifth-order convergent two-step Halley method for nonlinear equations [27].
Algorithm 2.3
For a given \(x_0\), compute the approximate solution \(x_{n+1}\) by the iterative scheme as:
The algorithm given in (11) is proposed in [28].
Algorithm 2.4
For a given \(x_0\), compute the approximate solution \(x_{n+1}\) by the iterative procedure developed in [6] as:
3 Design methodology
In this section, we present the methodology developed for solving nonlinear algebraic equations using an optimization technique based on variants of the genetic algorithm, as a tool for effective global search.
3.1 Formulation of fitness function
A broad variety of mathematical models of problems arising in science and engineering can be organized into equations of the form f(x) = 0, where x and f(x) may be real, complex, or vector quantities. The fitness or merit function for solving nonlinear equations can be formulated on the basis of the absolute or squared error as:
subject to bounded constraints on x.
Nonlinear equations are solved by finding the global minimum of the error function \(\varepsilon\) in Eq. (9). If an appropriate x is available such that the fitness function \(\varepsilon \to 0\), then consequently \(f(x) \to 0\) and x is the approximate solution, or root, of the nonlinear equation f(x) = 0. The same methodology can be extended to solving a system of n equations in n unknowns. However, in this study, we confine our investigation to the fitness function \(\varepsilon\) given in (9) for finding the roots of nonlinear equations involving polynomial and transcendental functions.
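As a concrete illustration of this formulation (a minimal sketch; Eq. (9) is not reproduced in this copy, and the squared-error form is assumed):

```python
import math

def fitness(f, x):
    """Merit function eps(x) = f(x)^2: eps -> 0 exactly when f(x) -> 0."""
    return f(x) ** 2

# A sample transcendental equation f(x) = x - cos(x) = 0
f = lambda x: x - math.cos(x)
eps_far = fitness(f, 0.0)                   # start point far from the root
eps_near = fitness(f, 0.7390851332151607)   # near the known root of x = cos(x)
```

Minimizing this nonnegative function over a bounded interval turns root finding into the global optimization problem that the GA variants below address.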
3.2 Learning methodology
Global minimization of the error function (9) is carried out with an evolutionary computing technique based mainly on genetic algorithms (GAs). Since the introduction of GA by John Holland in the early 1970s [30], the algorithm has been widely used for optimization problems in diverse fields. The construction of GAs is based on a mathematical model inspired by natural genetics and evolutionary mechanisms. The efficiency and performance of a GA depend on a number of factors, including the setting of system parameters and the design of the GA operators, i.e., genetic reproduction, evaluation and selection [31]. The two basic functions of crossover and mutation are performed in genetic reproduction. Evaluation is executed by means of the fitness function, which depends on the specific optimization problem. Selection is the method that picks parent individuals, with probability proportional to their relative fitness, for the reproduction process. There are five main tasks in constructing a GA for any problem: genetic representation, a technique for generating an initial population of solutions, design of the genetic operators, definition of the fitness function and design of the system parameters. A few recent applications of genetic algorithms in applied science and technology include classification of handwritten Hindi ‘SWARS’ [32], effective detection of target motions [33], design of autonomous robots [34] and optimization of a double-screen frequency selective surface structure [35].
3.2.1 Optimal solution search for nonlinear equations
Keeping in view the strength and applicability of GAs in diverse environments, they are used here for optimization of the fitness function to find the roots of nonlinear equations. A generic flow diagram of the overall procedure is given in Fig. 1, while the necessary details of the procedural steps are as follows:
Step 1: Initialization: Random assignments and declarations are made for the initial values of the GA parameters; the settings of the important parameters of the algorithm are tabulated in Table 1. The selection of parameters for stochastic algorithms is made on the basis of knowledge of the problems, experience, extensive experimentation and good luck. The performance of the algorithm is highly sensitive to these parameters, and a slight change in the values/settings may result in premature convergence.
Step 2: Fitness Evaluation: Calculate the fitness of each individual of population using the problem-specific error function given in Eq. (9) for finding the root of nonlinear equations.
Step 3: Termination Criteria: Terminate the execution of the algorithm when either of the following criteria is met: (a) the predefined fitness value \(\varepsilon \le 10^{-35}\) is achieved, or (b) the predefined number of generations has been executed. If a termination criterion is fulfilled, go to Step 5.
Step 4: Reproduction: Create the next generation on the basis of different combinations of reproduction operators, including selection, crossover and mutation. The variants of evolutionary algorithms are invoked by calling the functions listed in Table 2.
Step 5: Comparative Analysis: Repeat Steps 1–4 for all twelve variants of GA, taking three different initial ranges for the population, in order to generate a data set large enough for a reliable comparison of the performance of the algorithms.
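The five steps above can be sketched as a minimal real-coded GA. This is an illustrative reconstruction, not the MATLAB routines of Tables 1 and 2: the population size of 40 matches the text, but tournament selection, blend crossover, Gaussian mutation and the operator probabilities are our assumptions.

```python
import random

def ga_root(f, lo, hi, pop_size=40, generations=500, tol=1e-35):
    """Minimal real-coded GA minimizing eps(x) = f(x)**2 over [lo, hi]."""
    eps = lambda x: f(x) ** 2              # Step 2: fitness evaluation
    # Step 1: initialization -- random population within the initial range
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=eps)
    for _ in range(generations):
        if eps(best) <= tol:               # Step 3: termination criterion
            break
        # Step 4: reproduction -- tournament selection, blend crossover
        # and Gaussian mutation (illustrative operator choices)
        next_pop = [best]                  # elitism keeps the dominant solution
        while len(next_pop) < pop_size:
            p1 = min(random.sample(pop, 3), key=eps)
            p2 = min(random.sample(pop, 3), key=eps)
            a = random.random()
            child = a * p1 + (1.0 - a) * p2          # blend crossover
            if random.random() < 0.1:                # mutation probability
                child += random.gauss(0.0, 0.05 * (hi - lo))
            next_pop.append(min(max(child, lo), hi))
        pop = next_pop
        best = min(pop, key=eps)
    return best

# Step 5 in the text repeats such runs over operator variants and ranges
root = ga_root(lambda x: x**2 - 2.0, -5.0, 5.0)
```

Unlike the Newton-type iterations of Sect. 2, no derivative or close initial guess is needed; only a search interval is supplied.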
4 Numerical experimentation
In this section, the results of detailed simulations performed on a number of nonlinear equations by the proposed design approaches, as well as by state-of-the-art iterative solvers, are presented.
Twenty different nonlinear equations are taken to evaluate the robustness, efficiency and performance of the proposed methods based on variants of genetic algorithms. These equations are listed in Table 3 and are commonly used as a test bench to verify and validate the working of such algorithms [1–9, 36–40]. First of all, our intention is to obtain the results of well-known solvers, including the Newton method (NM) and a few of its variants. The results obtained are provided in Table 4, along with the number of iterative steps involved and the level of accuracy achieved. The performance of these algorithms is highly sensitive to the value of the initial guess, and generally, all of these methods fail whenever wider inputs are used as the start point. It is observed from Table 4 that an accuracy of the order of \(10^{-15}\) has been achieved with almost all the solvers by using an initial guess very close to the exact solution of the nonlinear problem.
On the other hand, the proposed design schemes based on variants of GAs are also applied to solve the problems tabulated in Table 3, using the parameter settings given in Tables 1 and 2. One of the best results obtained by GA-1 for the initial range (−5, 5) is given in Table 3, and these values match the state-of-the-art variants of NM schemes to fifteen decimal places of accuracy. The learning curves and the distance between individuals for GA-1, 2, 3 and 4 with initial range (−1, 1), GA-5, 6, 7 and 8 with initial range (−5, 5) and GA-9, 10, 11 and 12 with initial range (−50, 50) are given in Figs. 2, 3 and 4, respectively, for different nonlinear problems. It is seen from these figures that the proposed variants of GAs are able to provide convergent and accurate results for different spans of inputs for which most of the reported iterative methods based on NM fail to give convergent solutions.
The values of the fitness \(\varepsilon\) achieved by GA-1, 2 and 3 with different initial ranges are given in Table 5, while the results for algorithms GA-7, 8 and 9 are tabulated in Table 6. Similarly, graphical results are presented in Fig. 5 for algorithms GA-4, 5, 6, 10, 11 and 12. The results are plotted on a semi-log scale to bring out small variations in the values. The results presented in Tables 5 and 6, as well as in Fig. 5, are based on one of the best results from a sufficiently large number of independent runs of the algorithms. It is observed from the results given in the tables that for most of the problems, the fitness limit 0 is achieved; in Fig. 5, these results are shown as blank bars. In general, all the algorithms give considerably accurate results for all the nonlinear examples except problem \(f_{11}\), for which a few variants of GAs with initial range (−1, 1) fail to provide convergent solutions. Generally, all the variants of GAs provide good accuracy, i.e., f(x) or \(\varepsilon < 10^{-15}\), for all tested nonlinear problems with different ranges of input data used to generate the population.
The results presented so far in the tables and figures are based on the dominant solution, i.e., the individual of the population with the best fitness value, while the fitness values achieved by the nondominant solutions, i.e., the rest of the individuals in the population, should also be analyzed. In this regard, 40 individuals are taken in the population in our numerical experimentation, and the fitness achieved by the nondominant individuals is plotted in Fig. 6 for each variant of GA. It is seen that, in general, about 10–15 % of the nondominant solutions of the proposed schemes give reasonably accurate results. This means that a single run of any variant of GA produces one best solution along with a number of other solutions, equal to the population size, with relatively lower accuracy.
5 Statistical analysis
Comparative studies based on the results of statistical analysis are presented for each variant of GA on all nonlinear equations in order to draw constructive conclusions about the performance of each algorithm.
Fifty independent runs of all 12 proposed schemes are carried out for finding the roots of each nonlinear algebraic equation, using the parameter settings of the algorithms listed in Tables 1 and 2. The performance of the algorithms is analyzed on the basis of the statistical parameters of mean and standard deviation (STD). Results based on the value of the fitness \(\varepsilon\) are given in Table 7 for algorithms GA-1 to GA-6, while those for algorithms GA-7 to GA-12 are listed in Table 8, for all 20 nonlinear equations. The numerical values presented in the tables clearly show that, generally, accuracy in the range \(10^{-12}\)–\(10^{-15}\) is achieved by the proposed algorithms, and the small values of STD establish the consistency and reliability of the design approaches. A single unsuccessful run, or a few, can have a great effect on the values of the statistical parameters, i.e., mean and STD; therefore, the analysis is continued by plotting the values of fitness against the number of independent runs of each algorithm in Fig. 7, with the fitness values shown in ascending order. It is seen that only a few runs of algorithms GA-3, GA-5 and GA-7, as in Fig. 7c, f, g, respectively, show premature convergence for some nonlinear equations, while the results presented in the rest of the figures generally show consistent convergence. It is also observed that a significant number of runs of the GA variants achieve fitness \(\varepsilon = 0\) for different examples.
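The mean and STD reported in Tables 7 and 8 can be reproduced from the per-run fitness values in a few lines (a generic sketch; the fitness values shown are hypothetical):

```python
import statistics

def run_stats(final_fitness):
    """Mean and standard deviation of final fitness over independent runs."""
    return statistics.mean(final_fitness), statistics.stdev(final_fitness)

# e.g. final eps values collected from five hypothetical independent runs
mean_eps, std_eps = run_stats([1e-15, 3e-15, 2e-15, 4e-15, 2e-15])
```

A small STD relative to the mean indicates that the runs are consistent, which is the reliability argument made in the text.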
The computational complexity of each variant of GA is also evaluated on the basis of the mean execution time (MET), i.e., the average time taken to find the optimal solution, and the mean number of function evaluations (MFE), i.e., the average number of function evaluations performed by an algorithm during optimization. The values of MET and MFE based on 50 independent runs of each algorithm for all 20 examples with different initial ranges are tabulated in Tables 9 and 10. It is seen from the results that, except for nonlinear equation \(f_{18}\), which is relatively easy to tackle, the values of MET and MFE are around 0.8 s and 20,000, respectively. It is observed that all the algorithms, i.e., GA-1 to GA-12, take an almost similar computational budget to find the optimal result for each nonlinear problem.
To further elaborate the performance of each variant of GA, the analysis is continued on the basis of global operators, i.e., global accuracy (GA), global mean generations (GMG), global mean number of function evaluations (GMFE) and global mean execution time (GMET). The values of these global operators, calculated from 50 independent runs of each algorithm, are given in Table 11 for all three initial ranges; they reflect the overall performance of the GA variants on all 20 nonlinear equations. Global accuracy is examined on the basis of the percentage of independent runs that achieved a predefined level of fitness, i.e., \(\varepsilon < 10^{-05}\), \(10^{-10}\), \(10^{-15}\) and \(10^{-20}\). It is seen from the results in Table 11 that, based on the \(10^{-20}\) criterion, the performance of GA-1, GA-5 and GA-9 is better than that of the rest. The dominance of GA-1, GA-5 and GA-9 is also seen in the smaller values of the GMG and GMFE performance operators, while their GMET values are similar to those of the other GA variants. The numerical experimentation and calculations are performed on an Aspire V3-471G Acer Notebook with a 2.5 GHz Core-i5 processor and 4.00 GB RAM, running MATLAB version 2012b.
6 Conclusion
On the basis of the simulations and results presented in the preceding sections, the following conclusions can be drawn:
1. Root finding for nonlinear equations involving polynomial and transcendental functions, without prior knowledge of an initial guess close to the exact solution, is performed effectively by the evolutionary computing approach based on variants of GAs.
2. Comparative analyses of the proposed results with state-of-the-art iterative numerical solvers based on variants of Newton methods validate the correctness of the design approach.
3. The variants of GAs have been incorporated as solvers for optimization by taking different reproduction operators, and it is found that the best results are obtained using GA-1, GA-5 and GA-9. All of these variants use the heuristic function as the crossover operator and the adaptive feasible function as the mutation operator, while the selection operator is any of the stochastic uniform, remainder or roulette functions.
4. Validation and verification of the results are made on the basis of statistical analysis over 50 independent runs of each algorithm for all 20 equations with different sets of initial ranges, and it is found that all these variants provide small values of the statistical parameters of mean and standard deviation.
5. The computational complexity of the designed variants of GAs, in terms of mean execution time and mean number of function evaluations, shows that there is no noticeable difference between them.
6. The accuracy and convergence of the design approaches, measured by the global performance operators, including global accuracy achieved, global mean generations, global mean function evaluations and global mean execution time over all nonlinear equations, show that generally all the designed schemes provide effective and reliable results, but the performance of GA-1, GA-5 and GA-9 is consistently the best on these parameters.
7. Besides the accurate, convergent and effective results of the proposed design scheme for finding roots of nonlinear algebraic equations, other advantages of the algorithms are the simplicity of the concept, ease of implementation and a wide application domain, which make them attractive and valuable.
In the future, one should look to modified versions of global search methodologies based on evolutionary and swarm intelligence techniques to solve not only nonlinear algebraic equations but also systems of such equations involving large sets of variables.
References
Chun C (2006) Construction of Newton-like iteration methods for solving nonlinear equations. Numer Math 104:297–315
Kumar M, Singh AK, Srivastava A (2013) Various Newton-type iterative methods for solving nonlinear equations. J Egypt Math Soc 21(3):334–339
Darvishi MT, Barati A (2007) A third-order Newton-type method to solve systems of nonlinear equations. Appl Math Comput 187(2):630–635
Noor MA (2007) A new family of iterative methods for nonlinear equations. Appl Math Comput 190:553–558
Noor MA (2010) Some iterative methods for solving nonlinear equations using homotopy perturbation technique. International Journal of Computational Mathematics 87:141–149
Noor MA, Khan WA, Noor KI, Al-Said E (2011) Higher-order iterative methods free from second derivative for solving nonlinear equations. International Journal of the Physical Sciences 6(8):1887–1893
Thukral R (2011) Eighth-order iterative methods without derivatives for solving nonlinear equations. ISRN Applied Mathematics 2011:1–12
Traub JF (1964) Iterative methods for the solution of equations. Prentice-Hall, Englewood Cliffs, p 310
Raja MAZ, Khan JA, Qureshi IM (2013) Numerical treatment for Painlevé equation I using neural networks and stochastic solvers. In: Jordanov I, Jain LC (eds) Innovation in intelligent machines-3. Studies in computational intelligence, vol 442. Springer, Berlin, pp 103–117
Raja MAZ, Khan JA, Ahmad SI, and Qureshi IM (2012) Solution of the Painlevé equation-I using neural network optimized with swarm intelligence. Comput Intell Neurosci, Article ID. 721867, pp 1–10
Raja MAZ (2014) Unsupervised neural networks for solving Troesch’s problem. Chin Phys B 23(1):018903
Raja MAZ (2014) Stochastic numerical techniques for solving Troesch's problem. Inf Sci 279:860–873. doi:10.1016/j.ins.2014.04.036
Raja MAZ (2014) Solution of the one-dimensional Bratu equation arising in the fuel ignition model using ANN optimized with PSO and SQP. Connect Sci. doi:10.1080/09540091.2014.907555
Raja MAZ, Ahmad SI (2014) Numerical treatment for solving one-dimensional Bratu problem using neural networks. Neural Comput Appl 24(3–4):549–561. doi:10.1007/s00521-012-1261-2
Raja MAZ, Ahmad SI, Samar R (2013) Neural network optimized with evolutionary computing technique for solving the 2-dimensional Bratu problem. Neural Comput Appl 23(7–8):2199–2210
Raja MAZ, Samar R, and Rashidi MM (2014) Application of three Unsupervised Neural Network Models to singular nonlinear BVP of transformed 2D Bratu equation. Neural Comput Appl. doi:10.1007/s00521-014-1641-x
Khan JA, Raja MAZ, Qureshi IM (2011) Numerical treatment of nonlinear Emden–Fowler equation using stochastic technique. Ann Math Artif Intell 63(2):185–207
Khan JA, Raja MAZ, Qureshi IM (2011) Hybrid evolutionary computational approach: application to van der Pol oscillator. Int J Phys Sci 6(31):7247–7261. doi:10.5897/IJPS11.922
Khan JA, Raja MAZ, Qureshi IM (2011) Novel approach for van der Pol oscillator on the continuous time domain. Chin Phys Lett 28:110205. doi:10.1088/0256-307X/28/11/110205
Raja MAZ, Khan JA, and Qureshi IM (2011) Solution of fractional order system of Bagley-Torvik equation using evolutionary computational intelligence. Math Probl Eng 2011, Article ID. 765075, 18 pp
Raja MAZ, Khan JA, Qureshi IM (2010) A new stochastic approach for solution of Riccati differential equation of fractional order. Ann Math Artif Intell 60(3–4):229–250
Raja MAZ, Samar R (2014) Numerical treatment for nonlinear MHD Jeffery–Hamel problem using neural networks optimized with interior point algorithm. Neurocomputing 124:178–193. doi:10.1016/j.neucom.2013.07.013
Raja MAZ, Samar R (2014) Numerical treatment of nonlinear MHD Jeffery–Hamel problems using stochastic algorithms. Comput Fluids 91:28–46
Noor MA, Khan WA, Hussain A (2007) A new modified Halley method without second derivatives for nonlinear equation. Appl Math Comput 189.2:1268–1273
Traub JF (1964) Iterative methods for solution of equations. Prentice-Hall, Englewood Cliffs
Cordero A, Hueso JL, Martinez E, Torregrosa JR (2012) Steffensen type method for solving nonlinear equations. J Comput Appl Math 236(12):3058–3064
Noor MA, Inayat Noor K (2006) Fifth-order iterative methods for solving nonlinear equations. Appl Math Comput. doi:10.1016/j.amc.2006.10.007.
Kou J, Li Y, Wang X (2006) A family of fifth-order iterations composed of Newton and third-order methods. Appl Math Comput. doi:10.1016/j.amc.2006.07.150
Kou J, Li Y (2006) Improvements of Chebyshev–Halley methods with fifth-order convergence. Appl Math Comput. doi:10.1016/j.amc.2006.09.097
Holland JH (1975) Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. University of Michigan Press, Ann Arbor
Goldberg DE, Holland JH (1988) Genetic algorithms and machine learning. Mach Learn 3(2):95–99
Kumar S et al (2013) Hybrid evolutionary techniques in feed forward neural network with distributed error for classification of handwritten Hindi ‘SWARS’. Connection Science 25(4):197–215
Song A, Zhang M (2012) Genetic programming for detecting target motions. Connection Science 24(2–3):117–141
Pini G, Tuci E (2008) On the design of neuro-controllers for individual and social learning behaviour in autonomous robots: an evolutionary approach. Connection science 20(2–3):211–230
Wang J-B, Jun L (2011) Double screen frequency selective surface structure optimized by genetic algorithm. Acta Phys. Sin. 60(5):057304
Petković MS (2013) A note on the priority of optimal multipoint methods for solving nonlinear equations. Appl Math Comput 219(10):5249–5252
Herceg D, Herceg D (2013) Means based modifications of Newton’s method for solving nonlinear equations. Appl Math Comput 219(11):6126–6133
Chicharro F et al (2013) Complex dynamics of derivative-free methods for nonlinear equations. Appl Math Comput 219(12):7023–7035
Ardelean G (2013) A new third-order Newton-type iterative method for solving nonlinear equations. Appl Math Comput 219(18):9856–9864
Soleymani F (2013) An efficient twelfth-order iterative method for finding all the solutions of nonlinear equations. J Comput Methods Sci Eng 13(3):309–320
Raja, M.A.Z., Sabir, Z., Mehmood, N. et al. Design of stochastic solvers based on genetic algorithms for solving nonlinear equations. Neural Comput & Applic 26, 1–23 (2015). https://doi.org/10.1007/s00521-014-1676-z