Unlearning Helps

  • Conference paper
  • Automata, Languages and Programming (ICALP 2000)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1853)

Abstract

Overregularization in child language learning, for example with verb tense constructs, involves abandoning correct behaviors for incorrect ones and later reverting to the correct behaviors. Quite a number of other child development phenomena also follow this U-shaped pattern of learning, unlearning, and relearning. A decisive learner does not do this: in general, it never abandons a hypothesis H for an inequivalent one and then later conjectures a hypothesis equivalent to H. The present paper shows that decisiveness is a genuine restriction on Gold's model of learning grammars for languages in the limit from positive data. This suggests that natural U-shaped learning curves may be not a mere accident in the evolution of human learning, but necessary for learning. The result also solves an open problem.
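The decisiveness constraint can be made concrete with a small, hedged sketch (not from the paper): a toy Gold-style learner maps ever-longer finite sequences of positive data to grammar conjectures, and decisiveness forbids returning to a semantically abandoned conjecture. All names here (`HYPOTHESES`, `learner`, `is_decisive`) are illustrative assumptions; the real setting uses recursively enumerable languages, not finite toy ones.

```python
# Illustrative sketch (not from the paper): Gold-style learning in the limit
# over a toy hypothesis space, with a check for decisiveness.
# Hypotheses are named finite languages here; Gold's model uses r.e. languages.

HYPOTHESES = {
    "H_even": frozenset({0, 2, 4, 6, 8}),
    "H_small": frozenset({0, 1, 2}),
    "H_all": frozenset(range(10)),
}

def learner(data):
    """Conjecture the smallest hypothesis consistent with the positive data."""
    seen = set(data)
    consistent = [h for h, lang in HYPOTHESES.items() if seen <= lang]
    return min(consistent, key=lambda h: len(HYPOTHESES[h])) if consistent else None

def is_decisive(conjectures):
    """Decisiveness: no later conjecture is equivalent to an earlier one
    once an inequivalent conjecture has occurred in between."""
    langs = [HYPOTHESES[c] for c in conjectures if c is not None]
    for i, li in enumerate(langs):
        for j in range(i + 1, len(langs)):
            if langs[j] == li and any(langs[k] != li for k in range(i, j)):
                return False
    return True

text = [0, 2, 1, 4]  # a positive-data presentation, revealed item by item
conjectures = [learner(text[: n + 1]) for n in range(len(text))]
# The learner moves monotonically from "H_small" to "H_all": decisive.
```

A U-shaped conjecture sequence such as `["H_all", "H_small", "H_all"]` is exactly what `is_decisive` rejects; the paper's result is that forbidding such returns genuinely shrinks the class of learnable language families.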

Second-time decisive learners conjecture each of their hypotheses for a language at most twice. Such learners are shown not to restrict Gold's model of learning; correspondingly, there is an apparent lack of reports in child development of the opposite, W-shaped learning curves.
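Second-time decisiveness admits a similarly small, hedged sketch (again not from the paper): collapse the conjecture sequence into maximal runs of semantically equal languages, and require that each language appear in at most two runs, so a U shape passes but a W shape fails. The labels and helper name are illustrative.

```python
from itertools import groupby

def is_second_time_decisive(languages):
    """Second-time decisiveness: each language is conjectured in at most two
    maximal runs of the conjecture sequence. A single abandon-and-return
    (U shape) is allowed; a second return (W shape) is not."""
    runs = [lang for lang, _ in groupby(languages)]  # collapse adjacent repeats
    return all(runs.count(lang) <= 2 for lang in set(runs))

# A U-shaped sequence (learn A, unlearn, relearn A) passes ...
assert is_second_time_decisive(["A", "B", "A"])
# ... but a W-shaped sequence (a third run of "A") does not.
assert not is_second_time_decisive(["A", "B", "A", "B", "A"])
```

The `groupby` collapse matters: repeating the same conjecture on consecutive inputs is not a "return", only re-adopting a language after an inequivalent interlude counts against the budget of two.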

References

  1. John Case and Chris Lynes. Machine inductive inference and language identification. In M. Nielsen and E. M. Schmidt (editors), Proceedings of the 9th International Colloquium on Automata, Languages and Programming, Lecture Notes in Computer Science 140, pages 107–115. Springer-Verlag, 1982.

  2. Mark A. Fulk. Prudence and other conditions on formal language learning. Information and Computation, 85:1–11, 1990.

  3. Mark A. Fulk, Sanjay Jain, and Daniel N. Osherson. Open problems in “Systems that learn”. Journal of Computer and System Sciences, 49:589–604, 1994.

  4. Rusins Freivalds, Efim B. Kinber, and Rolf Wiehagen. On the Power of Inductive Inference from Good Examples. Theoretical Computer Science, 110:131–144, 1993.

  5. E. Mark Gold. Language identification in the limit. Information and Control, 10:447–474, 1967.

  6. Sanjay Jain, Daniel N. Osherson, James S. Royer, and Arun Sharma. Systems that Learn: An Introduction to Learning Theory. MIT Press, Cambridge, Massachusetts, 1999. Second Edition of Reference [8].

  7. Gary Marcus, Steven Pinker, Michael Ullman, Michelle Hollander, T. John Rosen, and Fei Xu. Overregularization in Language Acquisition. Monographs of the Society for Research in Child Development, vol. 57, no. 4. University of Chicago Press, 1992. Includes commentary by Harold Clahsen.

  8. Daniel N. Osherson, Michael Stob, and Scott Weinstein. Systems that Learn: An Introduction to Learning Theory for Cognitive and Computer Scientists. MIT Press, Cambridge, Massachusetts, 1986.

  9. Daniel Osherson and Scott Weinstein. Criteria of language learning. Information and Control, 52:123–138, 1982.

  10. Steven Pinker. Formal models of language learning. Cognition, 7:217–283, 1979.

  11. Hartley Rogers. Theory of Recursive Functions and Effective Computability. McGraw-Hill, New York, 1967. Reprinted, MIT Press 1987.

  12. Gisela Schäfer-Richter. Über Eingabeabhängigkeit und Komplexität von Inferenzstrategien [On input dependence and complexity of inference strategies]. Ph.D. Thesis, Rheinisch-Westfälische Technische Hochschule Aachen, Germany, 1984.

  13. Sidney Strauss and Ruth Stavy (editors). U-Shaped Behavioral Growth. Developmental Psychology Series. Academic Press, NY, 1982.

Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Baliga, G., Case, J., Merkle, W., Stephan, F. (2000). Unlearning Helps. In: Montanari, U., Rolim, J.D.P., Welzl, E. (eds) Automata, Languages and Programming. ICALP 2000. Lecture Notes in Computer Science, vol 1853. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45022-X_71

  • DOI: https://doi.org/10.1007/3-540-45022-X_71

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67715-4

  • Online ISBN: 978-3-540-45022-1

  • eBook Packages: Springer Book Archive
