Abstract
At the very beginning of system development, typically only natural-language requirements are documented. As an informal source of information, however, natural-language specifications may be ambiguous and incomplete; this can be hard to detect by means of manual inspection. In this work, we present a formal model, named data-flow reactive system (DFRS), which can be automatically obtained from natural-language requirements that describe functional, reactive and temporal properties. A DFRS can also be used to assess whether the requirements are consistent and complete. We define two variants of DFRSs: a symbolic and an expanded version. A symbolic DFRS (s-DFRS) is a concise representation that inherently avoids an explicit representation of (possibly infinite) sets of states and, thus, the state-space explosion problem. We use s-DFRSs as part of a technique for test-case generation from natural-language requirements. In our approach, an expanded DFRS (e-DFRS) is built dynamically from a symbolic one, possibly limited to some bound; in this way, bounded analyses (e.g., reachability, determinism, completeness) can be performed. We adopt the s-DFRS as an intermediate representation from which models (for instance, SCR and CSP specifications) are obtained for the purpose of test generation. An e-DFRS can also be viewed as the semantics of the s-DFRS from which it is generated. In order to connect such a semantic representation to established ones in the literature, we show that an e-DFRS can be encoded as a TIOTS: an alternative timed model based on the widely used IOLTS and ioco. To validate our overall approach, we consider two toy examples and two examples from the aerospace and automotive industries. Test cases are independently created, and we verify that they are all compatible with the corresponding e-DFRS models generated from symbolic ones. This verification is performed mechanically with the aid of the NAT2TEST tool, which supports the manipulation of such models.
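The core idea of building an e-DFRS dynamically from an s-DFRS, limited to some bound, can be illustrated with a small sketch. The miniature below is a hypothetical toy model (a thermostat with guarded update functions), not the authors' formalization: a symbolic description is expanded breadth-first into an explicit state graph up to a depth bound, which is the kind of structure over which bounded analyses such as reachability can then be run.

```python
from collections import deque

# Hypothetical miniature of a symbolic model: a state over system variables
# plus guarded functions (guard(state) -> bool, update(state) -> state).
# Toy thermostat: the heater turns on below 18 degrees, off above 22.
initial_state = {"temp": 17, "heater": False}

functions = [
    (lambda s: s["temp"] < 18, lambda s: {**s, "heater": True}),
    (lambda s: s["temp"] > 22, lambda s: {**s, "heater": False}),
]

def inputs(state):
    # Environment stimulus: the temperature may rise or fall by one degree.
    yield {**state, "temp": state["temp"] + 1}
    yield {**state, "temp": state["temp"] - 1}

def expand(initial, bound):
    """Breadth-first expansion of the symbolic description into an explicit
    (e-DFRS-like) set of states, limited to `bound` steps."""
    seen = {tuple(sorted(initial.items()))}
    states = [initial]
    frontier = deque([(initial, 0)])
    while frontier:
        state, depth = frontier.popleft()
        if depth >= bound:
            continue
        for nxt in inputs(state):
            # Apply every enabled guarded function to the stimulated state.
            for guard, update in functions:
                if guard(nxt):
                    nxt = update(nxt)
            key = tuple(sorted(nxt.items()))
            if key not in seen:
                seen.add(key)
                states.append(nxt)
                frontier.append((nxt, depth + 1))
    return states

states = expand(initial_state, bound=3)
```

Because expansion is bounded, the explicit state set stays finite even when the symbolic model admits unboundedly many states; a larger bound trades memory for deeper analysis.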
Cite this article
Carvalho, G., Cavalcanti, A. & Sampaio, A. Modelling timed reactive systems from natural-language requirements. Form Asp Comp 28, 725–765 (2016). https://doi.org/10.1007/s00165-016-0387-x