Abstract
The article analyzes neural networks from the standpoint of the neuromorphic approach. The analysis leads to the conclusion that modern artificial neural networks can effectively solve particular problems for which it is permissible to fix the network topology or allow only small changes to it. In the nervous system, as a prototype, the functional element (the neuron) is a fundamentally complex object, which makes it possible to implement changes in topology through structural adaptation of the dendritic tree of a single neuron. A promising direction for the development of neuromorphic systems based on deep spiking neural networks, in which such structural adaptation can be realized, is identified.
1 Introduction
Currently, there are many poorly formalized problems that existing methods solve badly (detection and recognition of objects under a significant shortage of data, control of unstable systems, control of the behavior of mobile agents in a changing environment, etc.).
One of the most promising general approaches to solving such problems is artificial neural networks (ANN), in particular deep neural networks (DNN), which are now developing actively. This is due, in particular, to the advent of new hardware: NVIDIA graphics accelerators [1] and specialized processors (BrainScaleS [2, 3], SpiNNaker [4], NIDA [5], DANNA [6], Neurogrid [7], IBM TrueNorth [8]), which allow efficient numerical calculations based on the mathematical apparatus of DNN; and to the direction of neuromorphic systems, whose architecture and design are based on the principles of operation of the biological neural structures of the nervous system. This is a fairly broad interpretation, into which deep learning fits well. The possible successes of neuromorphic systems are associated, first of all, with the biological plausibility of their basic component, the neuron, and its hardware implementation. In this sense, some specialized processors (in particular, IBM TrueNorth) belong specifically to the neuromorphic type.
2 Overview of Deep Neural Network Architectures
Today, the practical application of neural networks is developing most intensively within the trend of deep learning.
There are many networks within this trend [9]. The basic architectures, from which all the main implementations are derived, are:
-
Feed forward (FF) (Perceptron, Autoencoders [10]);
-
Fully connected networks (FCN) (Markov Chain [11], Hopfield network [12], Boltzmann Machine [13]);
-
Convolutional neural networks (CNN) (LeNet [14], VGG-19 [15], Google Inception [16], Deep Residual Network (ResNet) [18,19,20]);
-
Recurrent neural networks (RNN) (LSTM [17]).
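To make the first item of the list above concrete, a feed-forward network reduces to repeated affine transforms followed by nonlinearities. A minimal NumPy sketch of the forward pass (layer sizes and random weights are purely illustrative, not taken from any of the cited architectures):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two-layer feed-forward network with arbitrary illustrative sizes:
# 3 inputs -> 4 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # hidden-layer weights
b1 = np.zeros(4)
W2 = rng.standard_normal((2, 4))   # output-layer weights
b2 = np.zeros(2)

def forward(x):
    h = sigmoid(W1 @ x + b1)       # hidden activations
    return sigmoid(W2 @ h + b2)    # output activations

y = forward(np.array([0.5, -1.0, 2.0]))
print(y.shape)  # (2,)
```

All of the deeper architectures listed above elaborate this same scheme with more layers, weight sharing (CNN), or feedback connections (RNN).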
Separately, there are architectures such as growing neural networks, among which the following widespread types can be distinguished:
-
Networks based on Kohonen maps (SOM [21], ESOM [22], GHSOM [23], SOS [24]);
-
SOINN, ESOINN [25];
-
Neural Gas Network [26] and its derivatives GNG [27], IGNG [28], GCS [29], TreeGCS [30], PGCS [31] and others.
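The growing networks listed above share a common growth step: new units are inserted where the accumulated quantization error is largest. A heavily simplified sketch in the spirit of GNG [27] (edge aging, neighbor adaptation, and error decay per step from the full algorithm are omitted; all parameter values are illustrative):

```python
import numpy as np

# Simplified growing-network sketch: nodes move toward inputs, and a
# new node is periodically inserted near the node with the largest
# accumulated error, so the topology grows with the data.
rng = np.random.default_rng(1)
nodes = [rng.standard_normal(2) for _ in range(2)]
errors = [0.0, 0.0]
EPS, INSERT_EVERY = 0.1, 50

for step in range(1, 301):
    x = rng.standard_normal(2)                     # input sample
    dists = [np.linalg.norm(x - n) for n in nodes]
    w = int(np.argmin(dists))                      # winner node
    errors[w] += dists[w] ** 2                     # accumulate local error
    nodes[w] += EPS * (x - nodes[w])               # adapt winner toward input
    if step % INSERT_EVERY == 0:                   # structural growth step
        q = int(np.argmax(errors))
        nodes.append(nodes[q] + 0.01 * rng.standard_normal(2))
        errors[q] /= 2.0
        errors.append(errors[q])

print(len(nodes))  # network grew beyond its initial 2 nodes
```

Note that growth here adds whole units to the network; as discussed below, this differs from the biological case, where adaptation restructures the dendritic tree of an individual neuron.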
Relatively new works are devoted to implementations of spiking neural networks based on the above architectures [32,33,34]. The main claimed advantage of deep spiking neural networks is significant energy savings in the case of hardware implementation.
If we consider the achievements of neural networks from the point of view of solving particular problems, great progress has been made in this direction. Thus, judging by competition results in recent years, DNN have won in most computer vision tasks (pattern recognition, object detection, segmentation, etc.). It is important to note that such networks are effective in problems where the input data exhibit high local correlations.
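The role of local correlations can be illustrated with a single convolution: one small shared filter, slid over the input, responds strongly wherever a local pattern (here, an edge) occurs. A minimal sketch (the image and filter are illustrative):

```python
import numpy as np

# A single 3x3 convolution (valid padding) showing how a shared local
# filter exploits high local correlations in the input.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # vertical edge in the input
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], float)  # horizontal-gradient filter
resp = conv2d(image, sobel_x)
print(resp.shape)  # (4, 4)
```

The response is nonzero only in the columns crossing the edge and zero in the homogeneous regions, which is exactly the locality assumption CNNs rely on.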
There is also the big problem of combining a set of particular solutions formed by neural networks in order to solve the general problem of controlling agent behavior in a complex environment. For example, a solution to the object detection problem converts a high-dimensional input space into a low-dimensional space of the classes of objects to be detected. If it is necessary to create a flexible control system for the behavior of an agent (robot) in a changing environment, we are forced to operate with a number of such particular solutions. This naturally limits the agent's adaptability to changes in the environment. Part of this problem is addressed by growing networks.
Although ANN were originally based on an analogy with the nervous system, the majority of neural networks differ greatly from it in topology, training rules, and principles of functioning as a whole, and the trend away from biological plausibility is growing. In particular, networks develop by increasing the number of layers rather than the complexity of the functional element of the network, the neuron; and growing neural networks grow by adding neurons and layers, in contrast to changes in the structure of a neuron's dendritic tree in a biological system, where each dendrite performs complex information processing.
If we compare the known features of the nervous system and ANN (assuming that the advantages of the still disjointed ANN architectures will be unified), the following comparison can be made (Table 1).
It seems promising to consider complicating the model of the functional element of neural networks, with an emphasis on the possibilities of structural adaptation of the network, in line with the neuromorphic approach.
3 Neuron Models
There are many widespread neuron models. By level of abstraction, they can be divided into:
-
Biological (biophysical): models based on the modeling of biochemical and physiological processes which, as a consequence, lead to a certain behavior of the neuron in certain modes of operation (the Hodgkin–Huxley model [35]).
-
Phenomenological: models describing certain phenomena of neuron behavior in certain modes of operation as a “black box” (the Izhikevich model [36]).
-
Formal: models with the highest level of abstraction, describing only the basic properties of the neuron (the formal neuron [37]).
A given model may combine several features from this classification. Within ANN in general, and DNN in particular, modifications of formal neuron models with different activation functions (sigmoid, hyperbolic tangent, ReLU and its derivatives [38]) are used. Spiking variations of deep networks mostly use neuron models such as variations of the threshold integrator (integrate-and-fire) model [39] and the Izhikevich model mentioned above.
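The threshold integrator of [39] admits a very compact sketch: the membrane potential leaks toward rest, integrates the input current, and emits a spike with a reset when it crosses a threshold. A minimal leaky integrate-and-fire simulation (all parameter values are illustrative, not fitted to biological data):

```python
# Minimal leaky integrate-and-fire neuron (the threshold integrator
# model of [39]), simulated with forward Euler integration.
TAU, V_REST, V_THRESH, V_RESET, DT = 20.0, 0.0, 1.0, 0.0, 1.0

def simulate(current, steps):
    v, spikes = V_REST, []
    for t in range(steps):
        v += DT / TAU * (-(v - V_REST) + current)  # leaky integration
        if v >= V_THRESH:                          # threshold crossing
            spikes.append(t)
            v = V_RESET                            # reset after spike
    return spikes

# Constant suprathreshold input produces regular spiking;
# subthreshold input produces none.
print(len(simulate(1.5, 200)))
print(len(simulate(0.5, 200)))
```

The phenomenological models mentioned above (e.g. the Izhikevich model) enrich exactly this scheme with a second recovery variable and nonlinear membrane dynamics, reproducing a much wider repertoire of firing patterns at similar computational cost.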
One of the promising options for the functional element of neuromorphic systems is the phenomenological model of a dynamic spiking neuron capable of describing the spatial structure of the dendritic apparatus [40]. This model makes it possible to describe a variable neural network topology based on the principles of neural structure formation known from neurophysiology [41].
4 Discussion
The main feature of the nervous system that is still not reflected in ANN architectures is its great potential for structural (topological) reorganization. Structural adaptation in the nervous system is largely based on the high complexity of a single network element, the neuron.
The analysis allows us to identify the following directions for the development of ANN within the neuromorphic approach:
-
Complicating the neuron model by adding the possibility of describing the structure of the neuron's membrane (as generalizing and binding elements).
-
Developing learning algorithms that take into account modification of the structure of the generalizing and binding elements of the neuron.
-
Developing ANN architectures that allow simultaneous training and output in multiple contexts.
References
Chetlur, S., Woolley, C., Vandermersch, P., Cohen, J., Tran, J.: cuDNN: efficient primitives for deep learning. arXiv:1410.0759v3 [cs.NE] (2014)
Pfeil, T., Grübl, A., Jeltsch, S., Müller, E., Müller, P., Petrovici, M.A., Schmuker, M., Brüderle, D., Schemmel, J., Meier, K.: Six networks on a universal neuromorphic computing substrate. Front. Neurosci. 7(11) (2013). doi:10.3389/fnins.2013.00011
Schemmel, J., Bruderle, D., Grubl, A., Hock, M., Meier, K., Millner, S.: A wafer-scale neuromorphic hardware system for large-scale neural modeling. In: Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1947–1950. IEEE (2010)
Furber, S.B., Lester, D.R., Plana, L., Garside, J.D., Painkras, E., Temple, S., Brown, A.D., et al.: Overview of the SpiNNaker system architecture. IEEE Trans. Comput. 62(12), 2454–2467 (2013)
Schuman, C.D., Birdwell, J.D.: Variable structure dynamic artificial neural networks. Biol. Inspired Cognit. Archit. 6, 126–130 (2013)
Schuman, C.D., Disney, A., Reynolds, J.: Dynamic adaptive neural network arrays: a neuromorphic architecture. In: Workshop on Machine Learning in HPC Environments, Supercomputing (2015)
Benjamin, B.V., Gao, P., McQuinn, E., Choudhary, S., Chandrasekaran, A.R., Bussat, J.-M., Alvarez-Icaza, R., Arthur, J.V., Merolla, P., Boahen, K., et al.: Neurogrid: a mixed-analog-digital multichip system for large-scale neural simulations. Proc. IEEE. 102(5), 699–716 (2014)
Merolla, P.A., Arthur, J.V., Alvarez-Icaza, R., Cassidy, A.S., Sawada, J., Akopyan, F., Jackson, B.L., Imam, N., Guo, C., Nakamura, Y., et al.: A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 345(6197), 668–673 (2014)
Van Veen, F.: The Neural Network Zoo (2016). http://www.asimovinstitute.org/neural-network-zoo/. Accessed 16 Apr 2017
Kingma, D.P., Welling, M.: Auto-encoding Variational Bayes. arXiv preprint arXiv:1312.6114 (2013)
Hayes, B.: First links in the Markov chain. Am. Sci. 101(2), 252 (2013)
Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. 79(8), 2554–2558 (1982)
Hinton, G.E., Sejnowski, T.J.: Learning and relearning in Boltzmann machines. Parallel Distrib. Process. 1, 282–317 (1986)
LeCun, Y., et al.: Gradient-based learning applied to document recognition. Proc. IEEE. 86(11), 2278–2324 (1998)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: Proceedings of ICLR 2015
Szegedy, C., et al.: Rethinking the Inception Architecture for Computer Vision. arXiv:1512.00567v3 [cs.CV], 11 December 2015
Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
He, K., et al.: Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385 (2015)
Moniz, J., Pal, C.: Convolutional Residual Memory Networks. arXiv:1606.05262 [cs.CV], 14 July 2016
Targ, S., Almeida, D., Lyman, K.: Generalizing Residual Architectures. arXiv:1603.08029v1 [cs.LG], 25 March 2016
Kohonen, T.: Self-organized formation of topologically correct feature maps. Biol. Cybern. 43(1), 59–69 (1982)
Deng, D., Kasabov, N.: ESOM: an algorithm to evolve self-organizing maps from on-line data streams. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN 2000), Como, Italy, 24–27 July 2000, vol. VI, pp. 3–8. IEEE Computer Society (2000)
Dittenbach, M., Merkl, D., Rauber, A.: The growing hierarchical self-organizing map. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN 2000), vol. VI, pp. 15–19 (2000)
Zell, A., Bayer, H., Bauknecht, H.: Similarity analysis of molecules with self-organizing surfaces: an extension of the self-organizing map. In: Proceedings of the International Conference on Neural Networks (ICNN 1994), Piscataway, pp. 719–724 (1994)
Furao, S., Hasegawa, O.: An incremental network for on-line unsupervised classification and topology learning. Neural Netw. 19(1), 90–106 (2006)
Martinetz, T., Schulten, K.: A “neural gas” network learns topologies. In: Artificial Neural Networks, pp. 397–402. Elsevier, Amsterdam (1991)
Fritzke, B.: A growing neural gas network learns topologies. Adv. Neural. Inf. Process. Syst. 7, 625–632 (1995)
Prudent, Y., Ennaji, A.: An incremental growing neural gas learns topologies. In: Neural Networks, IJCNN 2005 (2005)
Fritzke, B.: Growing cell structures: a self-organizing network for unsupervised and supervised learning. Neural Netw. 7(9), 1441–1460 (1994)
Hodge, V., Austin, J.: Hierarchical growing cell structures: TreeGCS. In: IEEE TKDE Special Issue on Connectionist Models for Learning in Structured Domains
Vlassis, N., Dimopoulos, A., Papakonstantinou, G.: The probabilistic growing cell structures algorithm. Lecture Notes in Computer Science, vol. 1327, p. 649 (1997)
Hunsberger, E., Eliasmith, C.: Spiking deep networks with LIF neurons. arXiv:1510.08829v1 [cs.LG], 29 October 2015
Gavrilov, A., Panchenko, K.: Methods of learning for spiking neural networks. A survey. In: 13th International Scientific-Technical Conference APEIE–39281, At Novosibirsk, vol. 1, part 2, pp. 60–65 (2016)
Cao, Y., Chen, Y., Khosla, D.: Spiking deep convolutional neural networks for energy-efficient object recognition. Int. J. Comput. Vis. 113(1), 54–66 (2015)
Hodgkin, A.L., Huxley, A.F.: A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500–544 (1952)
Izhikevich, E.M.: Simple model of spiking neurons. IEEE Trans. Neural Netw. 14(6), 1569–1572 (2003)
McCulloch, W.S., Pitts, W.: A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133 (1943)
Nair, V., Hinton, G.: Rectified linear units improve restricted Boltzmann machines. In: Proceedings of the 27th International Conference on Machine Learning (ICML 2010), Haifa, Israel, 21–24 June 2010
Burkitt, A.N.: A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biol. Cybern. 95(1), 1–19 (2006)
Bakhshiev, A.V., Gundelakh, F.V.: Application of the spiking neuron model with structural adaptation to describe neuromorphic systems. Procedia Comput. Sci. 103, 190–197 (2017)
Nicholls, J.G., Martin, A.R., Fuchs, P.A., Brown, D.A., Diamond, M.E., Weisblat, D.A.: From Neuron to Brain. Sinauer Associates Incorporated, Sunderland (1999)
© 2018 Springer International Publishing AG
Bakhshiev, A., Stankevich, L. (2018). Prospects for the Development of Neuromorphic Systems. In: Kryzhanovsky, B., Dunin-Barkowski, W., Redko, V. (eds) Advances in Neural Computation, Machine Learning, and Cognitive Research. NEUROINFORMATICS 2017. Studies in Computational Intelligence, vol 736. Springer, Cham. https://doi.org/10.1007/978-3-319-66604-4_7
Print ISBN: 978-3-319-66603-7
Online ISBN: 978-3-319-66604-4