
Improving Generalization Ability of Self-Generating Neural Networks Through Ensemble Averaging

Conference paper in: Knowledge Discovery and Data Mining. Current Issues and New Applications (PAKDD 2000)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1805)

Abstract

We present an ensemble averaging method for improving the generalization capability of self-generating neural networks (SGNNs) applied to classification problems. Our computational experiments show that ensemble averaging improves classification accuracy by 1–7% over a single SGNN on three benchmark problems.
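To make the idea concrete, below is a minimal sketch of ensemble averaging for classification. The SGNN training algorithm itself is not reproduced on this page, so the sketch substitutes a simple 1-nearest-neighbour classifier (the OneNN class, a hypothetical stand-in, not the paper's SGNN); ensemble members are trained on bootstrap resamples and their one-hot votes are averaged. All data and parameter values here are assumptions chosen for illustration.

    import numpy as np

    class OneNN:
        """Stand-in base classifier (hypothetical; not the paper's SGNN)."""
        def fit(self, X, y):
            self.X, self.y = X, y
            return self

        def predict(self, X):
            # Label of the closest training point for each query point.
            d = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=2)
            return self.y[d.argmin(axis=1)]

    def ensemble_predict(members, X, n_classes):
        # Average one-hot votes over all members, then take the arg max
        # (equivalent to a plurality vote).
        votes = np.zeros((len(X), n_classes))
        for m in members:
            votes[np.arange(len(X)), m.predict(X)] += 1.0
        return (votes / len(members)).argmax(axis=1)

    rng = np.random.default_rng(0)

    # Toy two-class data (assumed for illustration only).
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    X_train, y_train, X_test, y_test = X[:150], y[:150], X[150:], y[150:]

    # Flip 10% of training labels so the task is noisy (illustrative only).
    flip = rng.random(len(y_train)) < 0.10
    y_train = np.where(flip, 1 - y_train, y_train)

    # Train K members on independent bootstrap resamples.
    K = 9
    members = []
    for _ in range(K):
        idx = rng.integers(0, len(X_train), size=len(X_train))
        members.append(OneNN().fit(X_train[idx], y_train[idx]))

    single_acc = (OneNN().fit(X_train, y_train).predict(X_test) == y_test).mean()
    ensemble_acc = (ensemble_predict(members, X_test, 2) == y_test).mean()
    print(f"single: {single_acc:.3f}  ensemble of {K}: {ensemble_acc:.3f}")

Averaging the outputs of independently trained members reduces the variance component of the error, which is the general mechanism behind accuracy gains of the kind the abstract reports.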



Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Inoue, H., Narihisa, H. (2000). Improving Generalization Ability of Self-Generating Neural Networks Through Ensemble Averaging. In: Terano, T., Liu, H., Chen, A.L.P. (eds) Knowledge Discovery and Data Mining. Current Issues and New Applications. PAKDD 2000. Lecture Notes in Computer Science, vol 1805. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45571-X_22

  • DOI: https://doi.org/10.1007/3-540-45571-X_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67382-8

  • Online ISBN: 978-3-540-45571-4
