Abstract
Training an artificial neural network is an optimization task, since the goal of the training process is to find an optimal set of weights for the network. Traditional training algorithms have drawbacks such as getting stuck in local minima and high computational complexity. Evolutionary algorithms have therefore been employed to train neural networks and overcome these issues. In this work, the Artificial Bee Colony (ABC) algorithm, which has good exploration and exploitation capabilities in searching for an optimal weight set, is used to train neural networks.
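The approach the abstract describes can be sketched roughly as follows: the canonical ABC search (employed, onlooker, and scout bee phases) is applied to a flat vector that packs all weights and biases of a small feed-forward network, with fitness derived from the network's mean squared error. This is a minimal illustrative implementation, not the authors' code; the network size, the XOR task, and all parameter values (`n_sources`, `limit`, `cycles`, `bound`) are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feed-forward net: 2 inputs -> 3 hidden (tanh) -> 1 linear output.
# A single flat vector packs both layers' weights and biases.
N_IN, N_HID, N_OUT = 2, 3, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # 13 parameters

def forward(w, X):
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w, X, y):
    return float(np.mean((forward(w, X).ravel() - y) ** 2))

def abc_train(X, y, n_sources=20, limit=30, cycles=300, bound=2.0):
    """Artificial Bee Colony search over the packed weight vector."""
    # Each food source is a candidate weight set.
    food = rng.uniform(-bound, bound, (n_sources, DIM))
    cost = np.array([mse(w, X, y) for w in food])
    trials = np.zeros(n_sources, dtype=int)

    def try_neighbour(i):
        k = rng.integers(n_sources - 1)
        k = k if k < i else k + 1            # random partner source != i
        j = rng.integers(DIM)                # perturb one random dimension
        cand = food[i].copy()
        cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
        c = mse(cand, X, y)
        if c < cost[i]:                      # greedy selection
            food[i], cost[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_sources):           # employed bee phase
            try_neighbour(i)
        fit = 1.0 / (1.0 + cost)             # fitness for onlooker roulette wheel
        prob = fit / fit.sum()
        for i in rng.choice(n_sources, n_sources, p=prob):  # onlooker phase
            try_neighbour(int(i))
        worn = int(np.argmax(trials))        # scout phase: abandon a stale source
        if trials[worn] > limit:
            food[worn] = rng.uniform(-bound, bound, DIM)
            cost[worn] = mse(food[worn], X, y)
            trials[worn] = 0

    best = int(np.argmin(cost))
    return food[best], cost[best]

# Demonstration on XOR, a classic non-linearly-separable task.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
w_best, err = abc_train(X, y)
```

The greedy replacement inside `try_neighbour` gives the exploitation pressure, while the random partner difference and the scout restarts supply the exploration the abstract attributes to ABC.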
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
Cite this paper
Karaboga, D., Akay, B., Ozturk, C. (2007). Artificial Bee Colony (ABC) Optimization Algorithm for Training Feed-Forward Neural Networks. In: Torra, V., Narukawa, Y., Yoshida, Y. (eds) Modeling Decisions for Artificial Intelligence. MDAI 2007. Lecture Notes in Computer Science(), vol 4617. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73729-2_30
DOI: https://doi.org/10.1007/978-3-540-73729-2_30
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-73728-5
Online ISBN: 978-3-540-73729-2