Abstract
This paper presents a technique that allows code to automatically generate its own test cases at run time by using a combination of symbolic and concrete (i.e., regular) execution. The input values to a program (or software component) provide the standard interface between any testing framework and the program it tests, and generating input values that explore all the "interesting" behavior of the tested program remains an important open problem in software testing research. Our approach turns the problem on its head: we lazily generate, from within the program itself, the input values to the program (and values derived from them) as needed. We applied the technique to real code and found numerous corner-case errors, ranging from simple memory overflows and infinite loops to subtle issues in the interpretation of language standards.
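The core idea in the abstract, mixing concrete execution with symbolic tracking of input-derived values so the program generates its own test inputs, can be sketched in miniature. The following is a toy illustration written from the abstract alone, not the paper's implementation: a wrapper records each branch taken on an input-derived value, and negating a recorded branch condition yields a new concrete input that covers the untaken side. All names (`Sym`, `egt`, the brute-force `solve`) are illustrative, and the exhaustive search stands in for a real decision procedure.

```python
class Sym:
    """Wraps a concrete int, recording the branch conditions evaluated on it."""
    def __init__(self, value, trace):
        self.value = value
        self.trace = trace            # list of (predicate, outcome-taken) pairs
    def _branch(self, pred, outcome):
        self.trace.append((pred, outcome))
        return outcome
    def __gt__(self, other):
        return self._branch(lambda v, o=other: v > o, self.value > other)
    def __eq__(self, other):
        return self._branch(lambda v, o=other: v == o, self.value == other)

def program(x):
    # Toy program under test: hides a corner-case crash at x == 77.
    if x > 10:
        if x == 77:
            raise ZeroDivisionError("corner case reached")
        return "big"
    return "small"

def solve(constraints, lo=-1000, hi=1000):
    """Brute-force stand-in for a constraint solver: find an int input
    satisfying every (predicate, desired-outcome) pair."""
    for cand in range(lo, hi + 1):
        if all(pred(cand) == want for pred, want in constraints):
            return cand
    return None

def egt(program, seed=0, max_paths=10):
    """Exploration loop: run concretely, then negate each recorded branch
    in turn to generate inputs driving execution down untaken paths."""
    worklist, seen, found = [seed], {seed}, []
    while worklist and len(found) < max_paths:
        x = worklist.pop()
        trace = []
        try:
            program(Sym(x, trace))
            crashed = False
        except ZeroDivisionError:
            crashed = True
        found.append((x, crashed))
        for i in range(len(trace)):
            # Keep the branch prefix, flip the i-th decision, and solve.
            prefix = trace[:i] + [(trace[i][0], not trace[i][1])]
            new = solve(prefix)
            if new is not None and new not in seen:
                seen.add(new)
                worklist.append(new)
    return found
```

Starting from the single seed `0`, the loop generates an input above 10 and then an input hitting the equality branch, reaching the hidden crash without ever enumerating the input space, which is the lazy, self-generating flavor the abstract describes.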
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Cadar, C., Engler, D. (2005). Execution Generated Test Cases: How to Make Systems Code Crash Itself. In: Godefroid, P. (eds) Model Checking Software. SPIN 2005. Lecture Notes in Computer Science, vol 3639. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11537328_2
Print ISBN: 978-3-540-28195-5
Online ISBN: 978-3-540-31899-6