Abstract
In a previous work we showed that hardlimit multilayer neural networks have more computational power than sigmoidal multilayer neural networks [1]. In 1969 Minsky and Papert [2] showed the limitations of the single perceptron, which can only solve linearly separable classification problems, and since at that time there was no algorithm to find the weights of a multilayer hardlimit perceptron, research on neural networks stagnated until the invention of the Backpropagation algorithm in the early eighties. Nevertheless, since the sixties there have been proposals of algorithms to implement logical functions with threshold elements, or hardlimit neurons [3], that could have been adapted to classification problems with multilayer hardlimit perceptrons, and in this way the stagnation of research on neural networks might have been avoided. Although the problem of training a hardlimit neural network is NP-complete [5], our algorithm, based on a mixed integer linear programming (MILP) model, takes only a few seconds to train the two-input XOR function and a simple logical function of three variables with two minterms. Since any linearly separable logical function can be implemented by a perceptron with integer weights, by varying the weights between -1 and 1 we found all 10 possible solutions for the implementation of the two-input XOR function, and all 14 and 18 possible solutions, respectively, for the implementation of two logical functions of three variables, using a two-layer architecture with two neurons in the first layer. We describe our MILP model and show why it consumes a lot of computational resources: even a small hardlimit neural network translates into a MILP model larger than 1 GB, requiring a more powerful computer than a common 32-bit PC. Reducing these computational requirements is the main objective of our near-future work on the MILP model; we will also try a nonlinear version of our algorithm, based on a MINLP model, that should consume less memory.
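The abstract does not give the MILP formulation itself, but the standard way to encode a hardlimit (threshold) neuron in a MILP is with big-M constraints linking the integer weights to a binary activation variable per neuron and per training pattern. The following is a minimal sketch of such an encoding for a 2-2-1 network trained on XOR, written with the open-source PuLP modeller rather than CPLEX [6]; the big-M value, the firing convention (a neuron fires iff its integer net input is at least 1), and all variable names are our assumptions, not the authors' model.

```python
# Minimal sketch (our assumptions, not the authors' exact model): training a
# 2-2-1 hardlimit network on XOR as a MILP feasibility problem with PuLP.
from pulp import LpProblem, LpStatus, LpVariable, lpSum, value

X = [(0, 0), (0, 1), (1, 0), (1, 1)]  # XOR input patterns
T = [0, 1, 1, 0]                      # XOR targets
M = 4                                 # big-M; |net| <= |w1|+|w2|+|b| <= 3 here

prob = LpProblem("xor_hardlimit")     # no objective: pure feasibility

# Integer weights/biases in {-1, 0, 1}: first layer (w, b), output neuron (v, c).
w = {(j, i): LpVariable(f"w_{j}_{i}", -1, 1, cat="Integer")
     for j in range(2) for i in range(2)}
b = [LpVariable(f"b_{j}", -1, 1, cat="Integer") for j in range(2)]
v = [LpVariable(f"v_{j}", -1, 1, cat="Integer") for j in range(2)]
c = LpVariable("c", -1, 1, cat="Integer")

# y[j][p]: binary activation of hidden neuron j on pattern p.
y = [[LpVariable(f"y_{j}_{p}", cat="Binary") for p in range(4)]
     for j in range(2)]

for p, (x, t) in enumerate(zip(X, T)):
    for j in range(2):
        net = lpSum(w[j, i] * x[i] for i in range(2)) + b[j]
        # Assumed hardlimit convention: y = 1 <=> net >= 1, y = 0 <=> net <= 0.
        prob += net >= 1 - M * (1 - y[j][p])
        prob += net <= M * y[j][p]
    out = lpSum(v[j] * y[j][p] for j in range(2)) + c
    # The output neuron must reproduce the target under the same convention.
    prob += (out >= 1) if t == 1 else (out <= 0)

prob.solve()
print(LpStatus[prob.status])  # "Optimal" means a feasible weight set was found
print([value(w[j, i]) for j in range(2) for i in range(2)],
      [value(bj) for bj in b], [value(vj) for vj in v], value(c))
```

One feasible assignment under these assumptions is an OR neuron (w = (1, 1), b = 0), an AND neuron (w = (1, 1), b = -1), and an output computing OR AND NOT AND (v = (1, -1), c = 0). To enumerate all solutions, as the paper reports doing for XOR, one standard approach is to re-solve repeatedly, each time adding a no-good cut that excludes the integer assignment just found.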
References
Barahona da Fonseca, J.: Are Rosenblatt multilayer perceptrons more powerful than sigmoidal multilayer perceptrons? From a counter example to a general result. In: Proceedings of ESANN 2013, Ciaco, Belgium, pp. 345–350 (2013)
Minsky, M., Papert, S.: Perceptrons. MIT Press, Cambridge, MA (1969)
Gonzalez, R., Lawler, E.L.: Two-level threshold minimization. In: Proceedings of the Sixth Annual Symposium on Switching Circuit Theory and Logical Design, pp. 41–4 (1965)
Orponen, P.: Computational Complexity of Neural Networks: A Survey. Technical Report NC-TR-94-010, Department of Computer Science, University of London, England (1994)
Blum, A.L., Rivest, R.L.: Training a 3-node neural network is NP-complete. In: Lecture Notes in Computer Science, vol. 661. Springer (1993)
ILOG: CPLEX Solver (2005)