Abstract
This chapter is divided into three sections: machine learning, neuroscience, and technology. These correspond to the main driving forces behind the new AI revolution: algorithms and data, knowledge of the brain's structure, and greater computational power. The goal of the chapter is to give an overview of the state of the art in these three areas in order to understand where AI is heading.