Abstract
Testing a black-box system without recourse to a specification is difficult, because there is no basis for estimating how many tests will be required, or for assessing how complete a given test set is. Several researchers have noted a duality between these testing problems and the problem of inductive inference (learning a model of a hidden system from a given set of examples). It is impossible to tell how many examples will be required to infer an accurate model, and there is no basis for telling how complete a given set of examples is. In the domain of inductive inference, these issues have been addressed by statistical techniques that guarantee the accuracy of an inferred model up to a tolerable degree of error. This paper explores the application of these techniques to the assessment of test sets for black-box systems. It shows how they can be used to reason in a statistically justified manner about the number of tests required to fully exercise a system without a specification, and how to provide a valid adequacy measure for black-box test sets in an applied context.
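To make the statistical framing concrete: the Probably Approximately Correct (PAC) model referenced in the keywords gives a worst-case sample-size bound of the form m ≥ (1/ε)(ln|H| + ln(1/δ)) for a finite hypothesis space H, error tolerance ε, and failure probability δ. The sketch below is an illustration of this standard PAC bound, not the paper's own formula; the function name and parameters are chosen here for clarity.

```python
import math

def pac_sample_bound(epsilon: float, delta: float, hypothesis_space_size: int) -> int:
    """Worst-case number of random examples (here: tests) sufficient for a
    consistent learner over a finite hypothesis space to be 'probably
    approximately correct'.

    epsilon -- tolerated error of the inferred model (0 < epsilon < 1)
    delta   -- tolerated probability that the guarantee fails (0 < delta < 1)
    hypothesis_space_size -- |H|, the number of candidate models
    """
    return math.ceil((1.0 / epsilon)
                     * (math.log(hypothesis_space_size) + math.log(1.0 / delta)))

# Example: tolerate 10% model error with 95% confidence over 2^16 candidate
# models (e.g. a bounded class of finite state machines).
print(pac_sample_bound(0.1, 0.05, 2 ** 16))  # → 141
```

Note how the bound grows only logarithmically in the size of the hypothesis space but linearly in 1/ε, which is what makes a statistically justified estimate of test-set size feasible even without a specification.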
Keywords
- Finite State Machine
- Software Testing
- Inductive Inference
- Testing Context
- Probably Approximately Correct
© 2011 IFIP International Federation for Information Processing
Cite this paper
Walkinshaw, N. (2011). Assessing Test Adequacy for Black-Box Systems without Specifications. In: Wolff, B., Zaïdi, F. (eds) Testing Software and Systems. ICTSS 2011. Lecture Notes in Computer Science, vol 7019. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24580-0_15
DOI: https://doi.org/10.1007/978-3-642-24580-0_15
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-24579-4
Online ISBN: 978-3-642-24580-0
eBook Packages: Computer Science (R0)