Advancements in the Field

An Introduction to Data

Part of the book series: Studies in Big Data (SBD, volume 50)

Abstract

This chapter is divided into three sections: machine learning, neuroscience, and technology. This division corresponds to the main driving factors of the new AI revolution, namely algorithms and data, knowledge of brain structure, and greater computational power. The goal of the chapter is to give an overview of the state of the art in each of these three areas in order to understand where AI is heading.

Author information

Corresponding author

Correspondence to Francesco Corea.

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Corea, F. (2019). Advancements in the Field. In: An Introduction to Data. Studies in Big Data, vol 50. Springer, Cham. https://doi.org/10.1007/978-3-030-04468-8_5
