
Klassifikation

  • Chapter
Data Mining

Part of the book series: Computational Intelligence ((CI))


Abstract

Classification is a supervised learning method that uses labeled data to assign objects to classes. A distinction is made between false positive and false negative errors, and on this basis numerous classification criteria are defined. Pairs of such criteria are often used to evaluate classifiers, for example in a ROC (receiver operating characteristic) or PR (precision–recall) diagram. Different classifiers with specific advantages and disadvantages are presented: the naive Bayes classifier, linear discriminant analysis, the support vector machine based on the kernel trick, nearest-neighbor classifiers, learning vector quantization, and hierarchical classification with regression trees.
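
The error counts and criterion pairs mentioned above can be sketched in a few lines; the following minimal example (with invented binary labels, purely for illustration) computes true/false positives and negatives and the derived quantities that would form one point in a PR diagram (precision vs. recall) and one point in a ROC diagram (true positive rate vs. false positive rate):

```python
def confusion_counts(y_true, y_pred):
    """Count TP, FP, FN, TN for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

# Invented example labels, for illustration only.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

tp, fp, fn, tn = confusion_counts(y_true, y_pred)
precision = tp / (tp + fp)  # y-axis of a PR diagram
recall = tp / (tp + fn)     # x-axis of a PR diagram; equals TPR, the y-axis of a ROC diagram
fpr = fp / (fp + tn)        # x-axis of a ROC diagram
print(precision, recall, fpr)  # → 0.75 0.75 0.25
```

Sweeping a decision threshold of a classifier and plotting such (recall, precision) or (FPR, TPR) pairs yields the full PR or ROC curve.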

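Among the classifiers listed, the nearest-neighbor classifier is the simplest to illustrate: it stores the labeled data and assigns each new object the class of its closest stored object. A minimal 1-nearest-neighbor sketch with Euclidean distance (the feature vectors and class labels are invented for illustration):

```python
import math

def nearest_neighbor(train, query):
    """train: list of (feature_vector, class_label) pairs.
    Returns the label of the training vector closest to query."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(((dist(x, query), c) for x, c in train), key=lambda t: t[0])
    return label

train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((5.0, 5.0), "B")]
print(nearest_neighbor(train, (0.9, 1.1)))  # → A
```

No model is fitted in advance; all work happens at query time, which is why such methods are called lazy learners.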


Author information

Correspondence to Prof. Dr.-Ing. Thomas A. Runkler.


Copyright information

© 2015 Springer Fachmedien Wiesbaden

Cite this chapter

Runkler, T. (2015). Klassifikation. In: Data Mining. Computational Intelligence. Springer Vieweg, Wiesbaden. https://doi.org/10.1007/978-3-8348-2171-3_8
