Abstract
Software requirements are documented in natural language so that users can easily understand them. This flexibility, however, carries the risk of introducing unwanted ambiguities into the requirements, leading to poor quality. In this paper, we propose and evaluate a framework that automatically analyses ambiguities in a requirements document, summarizes the document, and assesses its quality. We analyse the ambiguities that can occur in natural language and present a method to automate ambiguity analysis and the verification of consistency and completeness, tasks usually carried out by human reviewers in a process that is time-consuming and often ineffective. A system based on the Open Text Summarizer produces an extractive summary of the document, and a decision-tree-based quality evaluator identifies quality indicators in the requirements document and evaluates it.
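The kind of lexical ambiguity analysis the abstract describes can be sketched as a dictionary-based detector that flags vague terms in requirement sentences, in the style of tools such as QuARS. This is a minimal illustration, not the paper's implementation; the `VAGUE_TERMS` list and the function name are assumptions for the example.

```python
import re

# Illustrative list of vague or lexically ambiguous terms often flagged
# by requirements-quality tools; the dictionary actually used by the
# paper's framework is not given in the abstract and is assumed here.
VAGUE_TERMS = {
    "appropriate", "adequate", "fast", "user-friendly",
    "easy", "flexible", "several", "as much as possible",
}

def flag_ambiguities(requirement: str) -> list:
    """Return the vague terms found in a single requirement sentence."""
    text = requirement.lower()
    return sorted(
        term for term in VAGUE_TERMS
        if re.search(r"\b" + re.escape(term) + r"\b", text)
    )

req = "The system shall respond as fast as possible with an appropriate message."
print(flag_ambiguities(req))  # -> ['appropriate', 'fast']
```

Each flagged term would then count as a quality indicator feeding the decision-tree evaluator, which classifies the document's overall quality.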
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Subha, R., Palaniswami, S. (2013). Quality Factor Assessment and Text Summarization of Unambiguous Natural Language Requirements. In: Unnikrishnan, S., Surve, S., Bhoir, D. (eds) Advances in Computing, Communication, and Control. ICAC3 2013. Communications in Computer and Information Science, vol 361. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36321-4_12
Print ISBN: 978-3-642-36320-7
Online ISBN: 978-3-642-36321-4