Abstract
The present work introduces and justifies the notion of hyperrobust learning, where one fixed learner has to learn all functions in a given class plus their images under primitive recursive operators. The following is shown: This notion of learnability does not change if the class of primitive recursive operators is replaced by a larger enumerable class of operators. A class is hyperrobustly Ex-learnable iff it is a subclass of a recursively enumerable family of total functions. The notion of hyperrobust learning thus overcomes a problem of the traditional definitions of robustness, which either fail to preserve learning by enumeration or still permit topological coding tricks for the learning criterion Ex. Hyperrobust BC-learning, as well as the hyperrobust version of Ex-learning by teams, is more powerful than hyperrobust Ex-learning. The notion of bounded totally reliable BC-learning lies properly between hyperrobust Ex-learning and hyperrobust BC-learning. Furthermore, the bounded totally reliably BC-learnable classes are characterized in terms of infinite branches of certain enumerable families of bounded recursive trees. A class of infinite branches of a further family of trees separates hyperrobust BC-learning from totally reliable BC-learning.
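The characterization above connects hyperrobust Ex-learning to learning by enumeration: whenever a class is contained in a recursively enumerable family of total functions, the classic enumeration strategy succeeds. The following sketch (illustrative only, not taken from the paper; the names `learn_by_enumeration` and `family` are our own) shows that strategy over a finite family, conjecturing the least index consistent with the data seen so far.

```python
# Illustrative sketch: identification by enumeration over a family of
# total functions. On any function in the family, the conjectured index
# converges, which is the Ex-learning behaviour discussed in the abstract.

from typing import Callable, List


def learn_by_enumeration(family: List[Callable[[int], int]],
                         data: List[int]) -> int:
    """Return the least index i such that family[i] agrees with the
    data seen so far, i.e. family[i](x) == data[x] for all x."""
    for i, f in enumerate(family):
        if all(f(x) == y for x, y in enumerate(data)):
            return i
    raise ValueError("no hypothesis in the family is consistent with the data")


# Hypothetical example family: the constant functions f_i(x) = i.
family = [lambda x, i=i: i for i in range(10)]

# Feeding ever-longer prefixes of the constant-3 function, the
# conjecture stabilizes on index 3.
print(learn_by_enumeration(family, [3]))        # -> 3
print(learn_by_enumeration(family, [3, 3, 3]))  # -> 3
```

The point of hyperrobustness is that this simple strategy is essentially all there is: no coding trick can push a class outside such an enumerable family without losing hyperrobust Ex-learnability.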
© 1999 Springer-Verlag Berlin Heidelberg
Cite this paper
Ott, M., Stephan, F. (1999). Avoiding Coding Tricks by Hyperrobust Learning. In: Fischer, P., Simon, H.U. (eds) Computational Learning Theory. EuroCOLT 1999. Lecture Notes in Computer Science, vol 1572. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49097-3_15
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-65701-9
Online ISBN: 978-3-540-49097-5