Abstract
Neural models are used in both computational neuroscience and pattern recognition. The aim of the first is to understand real neural systems; the aim of the second is to gain better, possibly brain-like, performance from engineered systems. In both cases, the highly parallel nature of the neural system contrasts with the sequential nature of computer systems, resulting in slow and complex simulation software. More direct implementation in hardware, whether digital or analogue, holds out the promise of faster emulation, both because hardware implementation is inherently faster than software and because its operation is much more parallel. There are costs to this: modifying the system (for example, to test out variants) is much harder once a full application-specific integrated circuit has been built. Fast emulation can permit direct incorporation of a neural model into a system, permitting real-time input and output. Appropriate selection of implementation technology can simplify interfacing the system to external devices. We review the technologies involved and discuss some example systems.
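As the abstract notes, software emulation of an inherently parallel neural system on a sequential machine means updating every neuron's state in turn at each time step. A minimal sketch of such a loop, for a single leaky integrate-and-fire neuron, illustrates the point; all parameter values and names here are illustrative assumptions, not taken from the chapter:

```python
# Sequential software emulation of a leaky integrate-and-fire (LIF) neuron.
# Euler integration of  tau * dv/dt = -(v - v_rest) + r * i(t),
# with a spike and reset whenever v crosses threshold.
# Parameter values are illustrative only.

def simulate_lif(input_current, dt=1e-4, tau=0.02, r=1.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return the list of time-step indices at which the neuron spiked."""
    v = v_rest
    spikes = []
    for t, i in enumerate(input_current):
        v += dt * (-(v - v_rest) + r * i) / tau
        if v >= v_thresh:        # threshold crossing: emit spike, reset
            spikes.append(t)
            v = v_reset
    return spikes

# A constant suprathreshold current drives regular firing.
spike_times = simulate_lif([1.5] * 2000)
```

In hardware, by contrast, each neuron's membrane dynamics can be realised by its own (analogue or digital) circuit, so all neurons evolve concurrently rather than inside a loop like this one.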
Keywords
- Hardware Implementation
- Neural Model
- Output Unit
- Neural Information Processing System
- Learn Vector Quantization
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.
References
P. Hammarlund and O. Ekeberg (1998): Large neural network simulations on multiple hardware platforms. Journal of Computational Neuroscience 5, 443–459.
E. Claverol, A. Brown, and J. Chad (2001): Scalable cortical simulations on Beowulf architectures. Neurocomput. 43, 307–315.
D. Hammerstrom (2001): Biologically inspired computing. [Online]. Available: http://www.ogi.ece.edu/strom
Neural network hardware (1998). [Online]. Available: http://neuralnets.web.cern.ch/NeuralNets/nnwlnHepHard.html
E. Kandel, J. Schwartz, and T. Jessell (2000): Principles of Neural Sci. (4th Ed.) McGraw Hill.
C. Koch (1999): Biophysics of Computation. Oxford.
T. Bell (1991): A channel space theory of dendritic self-organisation. AI Laboratory, Free University of Brussels, Tech. Rep. 91-4.
B. Mel (1994): Information processing in dendritic trees. Neural Comput. 6, 1031–1085.
D. Aidley (1999): The Physiology of Excitable Cells. (4th Ed.) Cambridge University Press.
S. Hammeroff (1999): The neuron doctrine is an insult to neurons. Behavioural and Brain Sciences, 22, 838–839.
W. McCulloch and W. Pitts (1943): A logical calculus of ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, reprinted in [142].
D. Hebb (1949): The Organization of Behavior. Wiley, New York. partially reprinted in [142].
J. Anderson (1995): An Introduction to Neural Networks. Cambridge, MA: MIT Press.
F. Rosenblatt (1962): Principles of Neurodynamics. Spartan, New York.
J. Hertz, A. Krogh, and R. Palmer (1991): Introduction to the Theory of Neural Computation. Addison Wesley.
S. Haykin (1999): Neural Networks: A Comprehensive Foundation. (2nd Ed.) Macmillan.
B. Widrow and M. Hoff (1960): Adaptive switching circuits, In 1960 IRE WESCON Convention Record. New York: IRE, 4, 96–104.
R. Rescorla and A. Wagner (1972): A theory of Pavlovian conditioning: The effectiveness of reinforcement and nonreinforcement. In Classical Conditioning II: Current Research and Theory (A. Black and W. Prokasy, eds) Appleton-Century-Crofts, New York: 64–69.
M. Minsky and S. Papert (1969): Perceptrons. MIT Press, Cambridge partially reprinted in [142].
J. Hopfield (1982): Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences USA, 79, reprinted in [142].
D. Ackley, G. Hinton, and T. Sejnowski (1985): A learning algorithm for Boltzmann machines. Cognitive Science, 9, reprinted in [142].
A. Bryson and Y.-C. Ho (1969): Applied Optimal Control. Blaisdell, New York.
P. Werbos (1974): Beyond regression: New tools for prediction and analysis in the behavioral sciences. Ph.D. dissertation, Harvard University.
D. Parker (1985): Learning logic. Center for Computational Research in Economics and Management Science, Massachusetts Institute of Technology, Cambridge, MA, Tech. Rep. TR-47.
Y. Le Cun (1985): Une procédure d’apprentissage pour réseau à seuil assymétrique. In Cognitiva 85: A la Frontière de l’Intelligence Artificielle des Sciences de la Connaissance des Neurosciences, (Paris 1985). CESTA, Paris: 599–604.
D. Rumelhart, G. Hinton, and R. Williams (1986): Learning representations by back-propagating errors. Nature, 323, 533–536, reprinted in [142].
J. Moody and C. Darken (1988): Learning with localized receptive fields. In Proceedings of the 1988 Connectionist Models Summer School, (D. Touretzky, G. Hinton, and T. Sejnowski, eds) (Pittsburg). Morgan Kaufmann, San Mateo 133–143.
C. Bishop (1995): Neural networks for Pattern Recognition. Clarendon Press, Oxford.
J. Elman (1990): Finding structure in time. Cognitive Science. 14, 179–211.
H. Barlow (1959): Sensory mechanisms, the reduction of redundancy and intelligence. The Mechanisation of Thought Processes: NPL Symposium, 10.
T. Kohonen, T. Huang, and M. Schroeder (2000): Self-organizing Maps. (3rd ed.) Springer-Verlag.
L. Lapique (1907): Sur l’excitation electrique des nerfs. J. Physiology. Paris, 620–635.
W. Gerstner (1995): Time structure of the activity in neural network models. Physical Reviews E. 51, 738–758.
W. Gerstner and W. Kistler (2002): Spiking Neuron Models. Cambridge University Press.
J. Feng and D. Brown (2000): Integrate-and-fire models with nonlinear leakage. Bulletin of Mathematical Biology. 62, 467–481.
J. Feng and G. Wei (2001): Increasing inhibitory input increases neuronal firing rate: when and why? Diffusion process cases. J. Phys. A. 34, 7493–7509.
E. Izhikevich. Which model to use for cortical spiking neurons? submitted to IEEE Transactions of Neural Networks.
—, Simple model of spiking neurons, accepted for publication in IEEE Transactions of Neural Networks.
M. Hines and N. Carnevale (1997): The NEURON simulation environment. Neural Computation. 9, 1179–1209.
G. Bi and M. Poo (2001): Synaptic modification by correlated activity: Hebb’s postulate revisited. Annual Review of Neuroscience. 24, 139–166.
L. Smith (2002): Using Beowulf clusters to speed up neural simulations. Trends in the Cognitive Science. 6, 231–232.
R. Fitzhugh (1966): An electronic model of the nerve membrane for demonstration purposes. J. Appl. Physiology. 21, 305–308.
R. Johnson and G. Hanna (1969): Membrane model: a single transistor analog of excitable membrane. J. Theoretical Biology. 22, 401–411.
E. R. Lewis (1968): An electronic model of the neuroelectric point process. Kybernetik. 5, 30–46.
G. Roy (1972): A simple electronic analog of the squid axon membrane: the neuro FET. IEEE Transactions on Biomedical Engineering. BME-18, 60–63.
W. Brockman (1979): A simple electronic neuron model incorporating both active and passive responses. IEEE Transactions on Biomedical Engineering. BME-26, 635–639.
F. Rosenblatt (1958): The perceptron: a probabilistic model for information storage and processing in the brain. Psychological Rev. 65, 386–408.
B. Widrow (1962): Generalization and information storage in networks of ADALINE neurons. In Self-Organizing Systems (G. Yovitts, ed) Spartan Books.
R. Runge, M. Uemura, and S. Viglione (1968): Electronic synthesis of the avian retina. IEEE Transactions on Biomedical Eng., BME-15, 138–151.
L. Smith (1989): Implementing neural networks. In New Developments in Neural Computing (J. Taylor and C. Mannion, eds) Adam Hilger, 53–70.
I. Aybay, S. Cetinkaya, and U. Halici (1996): Classification of neural network hardware. Neural Network World. 6(1), 11–29.
“AN220E04 datasheet: Dynamically reconfigurable FPAA,” Anadigm, 2003.
R. Hecht-Nielsen, Neurocomputing. Addison-Wesley, 1990.
E. Vittoz, H. Oguey, M. Maher, O. Nys, E. Dijkstra, and M. Cehvroulet (1991): Analog storage of adjustable synaptic weights. In VLSI Design of Neural Networks. (U. Ramacher and E. Rueckert, eds) Kluwer Academic.
“80170nx electrically trainable analog neural network,” Intel Corporation, 1991.
J. Meador, A. Wu, C. Cole, N. Nintunze, and P. Chintrakulchai (1991): Programmable impulse neural circuits. IEEE Transactions on Neural Networks. 2(1), 101–109.
C. Diorio, P. Hasler, B. Minch, and C. Mead (1996): A single-transistor silicon synapse. IEEE Transactions on Electron Devices. 43(11), 1982–1980.
L. Smith, B. Eriksson, A. Hamilton, and M. Glover (1999): SPIKEII: an integrate-and-fire aVLSI chip. Int. J. Neural Syst. 9(5), 479–484.
D. Hsu, M. Figueroa, and C. Diorio (2002): Competitive learning with floating-gate circuits. IEEE Transactions on Neural Networks. 13, 732–744.
T. Morie, T. Matsuura, M. Nagata, and A. Iwata (2003) A multinanodot floating-gate mosfet circuit for spiking neuron models. IEEE Transactions on Nanotechnology. 2, 158–164.
D. Green (1999) Digital Electronics (5th ed.) Prentice Hall.
C. Mead (1989): Analog VLSI and Neural Systems. Addison-Wesley.
S.-C. Liu, J. Kramer, G. Indiveri, T. Delbruck, and R. Douglas (2002): Analog VLSI: Circuits and Principles. MIT Press.
E. Ifeachor and B. Jervis (2002): Digital Signal Processing: A Practical Approach (2nd ed.) Prentice Hall.
M. Hohfield and S. Fahlman (1997): Probabilistic rounding in neural network learning with limited precision. Neurocomputing. 4, 291–299.
E. Sackinger (1997): Measurement of finite precision effects in handwriting and speech recognition algorithms. In ICANN 97: LNCS 1327 (W. Gerstner, A. Germond, M. Hasler, and J.-D. Nicoud, eds), Springer Verlag, 1223–1228.
P. Moerland and E. Fiesler (1997): Neural network adaptations to hardware implementations. In Handbook of Neural Computation (E. Fiesler and R. Beale, eds) IOP Publishing.
S. Draghici (2002): On the capabilities of neural networks using limited precision weights. Neural Networks. 15, 395–414.
Intel Corporation (1990): 80170NN electrically trainable analog neural network. Datasheet.
C. S. Lindsey, B. Denby, and T. Lindblad. Neural network hardware. [Online]. Available: http://www.avaye.com/ai/nn/hardware/index.html
A. Eide (1994): An implementation of the Zero Instruction Set Computer (ZISC036) on a PC/ISA-bus card. [Online]. Available: citeseer.nj.nec.com/eide94implementation.html
H. McCartor (1991): Back propagation implementation on the Adaptive Solutions CNAPS neurocomputer chip. In Advances in Neural Information Processing Systems 3, (R. Lippmann, J. Moody, and D. Touretzky, eds), Morgan Kaufmann pp. 1028–1031.
N. Mauduit, M. Duranton, and J. Gobert (1992): Lneuro 1.0: A piece of hardware LEGO for building neural network systems. IEEE Transactions on Neural Networks. 3(3).
Y. Deville (1995) Digital VLSI neural networks: from versatile neural processors to application-specific chips. Proc. of the International Conference on Artificial Neural Networks ICANN’95, Paris, France, Industrial Conference, Session 9, VLSI and Dedicated Hardware.
U. Ramacher, J. Beichter, W. Raab, J. Anlauf, N. Bruels, U. Hachmann, and M. Weseling (1991): Design of a 1st generation neurocomputer. In VLSI Design of Neural Networks, (U. Ramacher and E. Rueckert, eds), Kluwer Academic.
U. Ramacher, W. Raab, J. Anlauf, U. Hachmann, J. Beichter, N. Bruls, R. Manner, J. Glas, and A. Wurz (1993): Multiprocessor and memory architecture of the neurocomputer SYNAPSE-1. Proc. International Conference on Microelectronics for Neural Networks. Edinburgh, pp. 227–232.
H. Chen and A. Murray (2002): A continuous restricted Boltzmann machine with a hardware amenable training algorithm. In Proceedings of ICANN 2002, pp. 426–431.
—, A continuous restricted Boltzmann machine with an implementable training algorithm. In IEEE Proceedings on Vision Image and Signal Processing.
G. Hinton, B. Sallans, and Z. Ghahramani (1999): A hierarchical community of experts. In Learning in Graphical Models (M. Jordan, ed) MIT Press pp. 479–494.
P. Fleury and A. Murray (2003): Mixed-signal VLSI implementation of the products of experts’ contrastive divergence learning scheme. In Proceedings of ISCAS 2003. 5, pp. 653–656.
A. Murray, L. Tarassenko, H. Reekie, A. Hamilton, M. Brownlow, D. Baxter, and S. Churcher (1991): Pulsed silicon neural nets—following the biological leader. In Introduction to VLSI Design of Neural Networks (U. Ramacher, ed), Kluwer pp. 103–123.
A. Murray, S. Churcher, A. Hamilton, A. Holmes, G. Jackson, R. Woodburn, and H. Reekie (1994) Pulse-stream VLSI neural networks. IEEE MICRO, pp. 29–39.
A. Hamilton, S. Churcher, P. Edwards, G. B. Jackson, A. Murray, and H. Reekie (1994): Pulse-stream VLSI circuits and systems: the EPSILON neural network chipset. Int. J. Neural Sys. 4(4), 395–405.
P. Richert, L. Spaanenburg, M. Kespert, J. Nijhuis, M. Schwarz, and A. Siggelkow (1991): ASICs for proto-typing pulse-density modulated neural networks. In Introduction to VLSI Design of Neural Networks (U. Ramacher, ed), Kluwer pp. 125–151.
T. Lehmann (1997): Classical conditioning with pulsed integrated neural networks: Circuits and system. IEEE Transactions on Circuits and Systems, pt. II, 45(6), 720–728.
T. Lehmann and R. Woodburn (1999): Biologically-inspired learning in pulsed neural networks. In Learning on Silicon: Adaptive VLSI Neural Systems (G. Cauwenberghs and M. Bayoumi, eds) Kluwer, pp. 105–130.
L. Watts (1993): Event driven simulation of networks of spiking neurons. In Advances in Neural Information Processing Systems 6 (J. Alspector, J. Cowan, and G. Tesauro, eds), pp. 927–934.
A. Nishwitz and H. Glünder (1995): Local lateral inhibition—a key to spike synchronization. Biological Cybernetics. 73(5), 389–400.
L. Smith, B. Eriksson, A. Hamilton, and M. Glover (1999): Fast digital simulation of spiking neural networks and neuromorphic integration with SPIKELAB. Int. J. Neural Sys. 9(5), 473–478.
S. Lim, A. Temple, S. Jones, and R. Meddis (1998): Digital hardware implementation of a neuromorphic pitch extraction system. In Neuromorphic Systems: Engineering Silicon from Neurobiology (L. Smith and A. Hamilton, eds), World Scientific.
N. Mtetwa, L. Smith, and A. Hussain (2000): Stochastic resonance and finite resolution in a network of leaky integrate-and-fire neurons. In Artificial neural networks—ICANN 2002. Springer, Madrid, Spain pp. 117–122.
S. Wolpert and E. Micheli-Tzanakou (1996): A neuromime in VLSI. IEEE Transactions on Neural Networks, 7(2), 300–306.
M. Glover, A. Hamilton, and L. Smith (1998): Analogue VLSI integrate and fire neural network for clustering onset and offset signals in a sound segmentation system. In Neuromorphic Systems: Engineering Silicon from Neurobiology (L. Smith and A. Hamilton, eds), pp. 238–250.
S.-C. Liu and B. A. Minch (2001): Homeostasis in a silicon integrate and fire neuron. In Advances in Neural Information Processing Systems 13, Papers from Neural Information Processing Systems (NIPS) 2000, Denver, CO, USA (T. K. Leen, T. G. Dietterich, and V. Tresp, eds), MIT Press, pp. 727–733.
E. Chicca, D. Badoni, V. Dante, M. D’Andreagiovanni, G. Salina, L. Carota, S. Fusi, and P. D. Giudice (2003): A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with long term memory. IEEE Transactions on Neural Networks. 14(5), 1409–1416.
J. Mavor, M. Jack, and P. Denyer (1983): Introduction to MOS LSI Design. Addison Wesley.
B. Eriksson (2002): A critical study of a hardware integrate-and-fire neural network. Master’s thesis, University of Stirling, Department of Computing Science and Mathematics.
G. Indiveri (2003): A low-power adaptive integrate-and-fire neuron circuit. In Proc. IEEE International Symposium on Circuits and Systems. May 2003.
G. Patel and S. P. DeWeerth (1997): Analog VLSI Morris-Lecar neuron. Electronics Letters, 33, 997–998.
C. Morris and H. Lecar (1981): Voltage oscillations in the barnacle giant muscle fiber. Biophysics J. 35, 193–213.
C. Rasche and R. Hahnloser (2001): Silicon synaptic depression. Biological Cybernetics. 84, 57–62.
W. Maass (1997): Networks of spiking neurons: The third generation of neural network models. Neural Networks. 10(9), 1659–1671.
A. Bofill, R. Woodburn, and A. Murray (2001): Circuits for VLSI implementation of temporally-asymmetric Hebbian learning. In Neural Information Processing Systems. Vancouver.
A. Bofill-i-Petit and A. Murray (2003): Synchrony detection by analogue VLSI neurons with bimodal STDP synapses. accepted for NIPS 2003.
E. Chicca, G. Indiveri, and R. Douglas (2003): An adaptive silicon synapse. In Proc. IEEE International Symposium on Circuits and Systems. May.
M. Hewitt and R. Meddis (1991): An evaluation of eight computer models of mammalian inner hair-cell function. J. Acoustical Soc. Am. 90(2), 904–917.
J. Lazzaro and C. Mead (1989): Circuit models of sensory transduction in the cochlea. In Analog VLSI Implementations of Neural Networks. Kluwer pp. 85–101.
I. Grech, J. Micallef, and T. Vladimirova (1999): Silicon cochlea and its adaptation to spatial localisation. IEE Proceedings—Circuits Devices and Systems. 146(2), 70–76.
A. van Schaik and A. McEwan (2003): An analog VLSI implementation of the Meddis inner hair cell model. EURASIP J. Applied Signal Processing.
K. Boahen (2000): Point-to-point connectivity between neuromorphic chips using address-events. IEEE Transactions on Circuits and Systems II. 47(5), 416–434.
I. Segev, M. Rapp, Y. Manor, and Y. Yarom (1992): Analog and digital processing in single nerve cells: dendritic integration and axonal propagation. In Single Neuron Computation (T. McKenna, J. Davis, and S. Zornetzer, eds) pp. 173–198.
J. Elias (1993): Artificial dendritic trees. Neural Comput. 5(4), 648–664.
D. Northmore and J. Elias (1996): Spike train processing by a silicon neuromorph: The role of sublinear summation in dendrites. Neural Comput. 8(6), 1245–1265.
J. Elias and D. Northmore (1995): Switched-capacitor neuromorphs with wide-range variable dynamics. IEEE Transactions on Neural Networks. 6(6), 1542–1548.
M. Ohtani, H. Yamada, K. Nishio, H. Yonezu, and Y. Furukawa (2002): Analog LSI implementation of biological direction-sensitive neurons, Part 1. Japanese Journal of Applied Physics, 41, 1409–1416.
W. Westerman, D. P. Northmore, and J. G. Elias (1998): A hybrid (hardware/software) approach towards implementing hebbian learning in silicon neurons with passive dendrites. In Neuromorphic Systems: Engineering Silicon from Neurobiology. (L. Smith and A. Hamilton, eds), World Scientific.
A. Saurdagiene, B. Porr, and F. Woergoetter (2004): How the shape of pre- and post-synaptic signals can influence STDP: A biophysical model, accepted for Neural Comput.
R. Douglas, M. Mahowald, and K. Martin (1996): Neuroinformatics as explanatory neuroscience. Neuroimage. S25–S27.
M. Mahowald and R. Douglas (1991): A silicon neuron. Nature, 354 (6354), 515–518.
R. Douglas and M. Mahowald (1995): A construction set for silicon neurons. In An Introduction to Neural and Electronic Networks (S. Zornetzer, J. L. Davis, C. Lau, and T. McKenna, eds) Academic Press pp. 277–296.
C. Rasche, R. Douglas, and M. Mahowald (1998): Characterization of a silicon pyramidal neuron. In Neuromorphic Systems: Engineering Silicon from Neurobiology (L. Smith and A. Hamilton, eds) World Scientific.
C. Rasche and R. Douglas (2001): An improved silicon neuron. Analog Integrated Circuits and Signal Processing. 23(3), 227–236.
C. Breslin and L. Smith (1999): Silicon cellular morphology. International Journal of Neural Systems. 9(5), 491–495.
J. Shin and C. Koch (1999): Adaptive neural coding dependent on the time-varying statistics of the somatic input current. Neural Computation. 11(8), 1893–1913.
C. Rasche (1999): An aVLSI basis for dendritic adaptation. IEEE Transactions on Circuits and Systems II. 48(6), 600–605.
C. Rasche and R. Douglas (2001): Forward-and backpropagation in a silicon dendrite. IEEE Transactions on Neural Networks. 12(2).
B. A. Minch, P. Hasler, C. Diorio, and C. Mead (1995): A silicon axon. In Advances in Neural Information Processing Systems (G. Tesauro, D. Touretzky, and T. Leen, eds) 7. The MIT Press, pp. 739–746.
C. Rasche and R. Douglas (1999): Silicon synaptic conductances. J. Comput. Neuroscience. 7(1), 33–39.
R. Mudra and G. Indiveri (1999): A modular neuromorphic navigation system applied to line following and obstacle avoidance tasks. In Experiments with the Mini-Robot Khepera: Proceedings of the 1st International Khepera Workshop (A. A. Loeffler, F. Mondada, and U. Rueckert, eds), pp. 99–108.
C. Schauer, T. Zahn, P. Paschke, and H. Gross (2000): Binaural sound localization in an artificial neural network. In IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 865–868.
A. van Schaik and S. Shamma (2003): A neuromorphic sound localizer for a smart mems system. In IEEE International Symposium on Circuits and Systems. pp. 864–867.
G. Cauwenberghs, R. Edwards, Y. Deng, R. Genov, and D. Lemonds (2002): Neuromorphic processor for real-time biosonar object detection. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). pp. 3984–3987.
G. Crebbin and M. Fajria (2000): Integrate-and-fire models for image segmentation. In Visual Communications and Image Processing 2000, pp. 867–874.
T. Netter and N. Franceschini (2002): A robotic aircraft that follows terrain using a neuromorphic eye. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2002), pp. 129–134.
M. Lewis, M. Hartmann, R. Etienne-Cummings, and A. Cohen (2001): Control of a robot leg with an adaptive aVLSI CPG chip. Neurocomputing. 38, 1409–1421.
T. Delbrück, S.-C. Liu, E. Chicca, G. M. Ricci, and S. Bovet (2001): The physiologist’s friend chip. [Online]. Available: http://www.ini.unizh.ch/tobi/friend/chip/index.html
T. Berger, M. Baudry, R. Brinton, J. Liaw, V. Marmarelis, A. Park, B. Sheu, and A. Tanguay (2001): Brain-implantable biomimetic electronics as the next era in neural prosthetics. Proceedings of the IEEE. 89(7), 993–1012.
T. Lande, J. Marienborg, and Y. Berg (2000): Neuromorphic cochlea implants. In IEEE International Symposium on Circuits and Sys. (ISCAS 2000), pp. 401–404.
E. Maynard (2001): Visual prostheses. Annual Review of Biomedical Engineering. 3, 145–168.
R. Hahnloser, R. Sarpeshkar, M. Mahowald, R. Douglas, and H. Seung (2000): Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature. 405, 947–951.
T. DeMarse, D. Wagenaar, A. Blau, and S. Potter (2001): The neurally controlled animat: Biological brains acting with simulated bodies. Autonomous Robots. 11, 305–310.
J. Anderson and E. Rosenfeld (eds) (1988): Neurocomputing: Foundations of Research. MIT Press, Cambridge.
Copyright information
© 2006 Springer Science+Business Media, Inc.
Cite this chapter
Smith, L.S. (2006). Implementing Neural Models in Silicon. In: Zomaya, A.Y. (eds) Handbook of Nature-Inspired and Innovative Computing. Springer, Boston, MA. https://doi.org/10.1007/0-387-27705-6_13
DOI: https://doi.org/10.1007/0-387-27705-6_13
Publisher Name: Springer, Boston, MA
Print ISBN: 978-0-387-40532-2
Online ISBN: 978-0-387-27705-9
eBook Packages: Computer Science (R0)