Abstract
Improving software quality is one of the central goals of software development teams. This is especially true for financial companies, which must ensure the quality of their software products to guarantee the reliability of financial transactions. For this reason, this study, based on the Design Science Research (DSR) approach, poses the research question: Does implementing a test management framework in the development process improve the quality of the software product? To answer this question, we propose a software testing framework based on the processes described in the ISO/IEC/IEEE 29119–2 standard and the documentation templates of the ISO/IEC/IEEE 29119–3 standard, which became the technical guide for evaluating the developed software. We evaluated the framework through a case study applied in a financial company in Ecuador, and we published the testing framework artifacts in a Zenodo open data repository. The principal results show an increase in test maturity from 77% to 100% across test cycles, a reduction in defect density from 95% to 0%, and a reduction of software failures reported in production environments to 12.5% of the level observed without the framework.
1 Introduction
Software engineering is a branch of computer science that studies the software life cycle, from requirements gathering to the maintenance phase; it aims to build quality software that meets the user requirements, budget, and schedule established for its development [1]. The life cycle comprises several phases, each containing several activities [2], which usually require a series of support tools to complete these tasks with the best possible quality [3]. There are several software life cycle models, each with its own characteristics, advantages, and disadvantages. These models agree on the main phases of software development: analysis, design, coding, testing, and maintenance, and all of them include test management as a fundamental phase of the development process. For example, in the Test-Driven Development methodology, software development is driven entirely by the execution of tests [4].
The software testing process is essential because it enables developers to deliver software that meets high quality standards and to minimize risks [5]. Models such as Capability Maturity Model Integration (CMMI) or the Quality Improvement Paradigm (QIP) manage software testing in traditional methodologies, but they do not frame testing in agile development because they do not measure the agility of the process [6]. Although software testing is an integral part of development, developers usually do not use an adequate framework; as a result, test management is informal, insufficient test cases are performed for code validation, and documentation is non-existent or incomplete [7]. Failure to manage software testing correctly generates problems such as the deterioration of software quality, which negatively affects the software production process and makes the maintenance phase complex. It can also lead to incomplete software functionality and delays in developing new software projects due to lack of time. Other associated problems are increased costs and time allocated for software development, but the most severe is the inconvenience caused to end-users [8].
In this context, we base this study on the Design Science Research (DSR) approach [9] to evaluate the value and usefulness of the software product, so we pose the following research question: Does the application of a test management framework in the development process improve the quality of the software product? To answer it, we develop and implement a software testing framework based on the processes described in ISO/IEC/IEEE 29119–2 and the documentation templates of ISO/IEC/IEEE 29119–3 [7]. We evaluate this proposal through a case study applied in the “Cooperativa de Ahorro y Crédito Atuntaqui” in Ecuador [10]. The rest of the paper is structured as follows: Sect. 2 presents the research design, where we establish the research activities based on DSR, the theoretical foundation, and the design and implementation of the framework (artifact). Section 3 presents the evaluation of the quality-in-use results of the framework artifact. Section 4 discusses the research. Section 5 presents conclusions and future work.
2 Research Design
We followed the Design Science Research (DSR) guidelines for the research methodology, see Table 1.
2.1 Population and Sample
We evaluate this proposal through a case study applied in the “Cooperativa de Ahorro y Crédito Atuntaqui” in Ecuador [10]. The population of the study was eight people from the Technology Department: a Director, a Development Administrator, three Programmer Analysts, and three Technical Support staff.
2.2 Theoretical Foundation
V-model.
Different software development models include the waterfall model, the general V-model [11], and agile programming models. Figure 1 shows the V-model, which contains two branches: the first corresponds to the project development phases and the second to the project testing phases. Phases at the same level can run in parallel, and for each development level there is a corresponding test level. The tester must ensure that the results comply with software verification and validation [12].
Software Testing.
Testing is a software engineering discipline that analyzes software or its components to detect differences between the requirements and the existing functionality, or to detect software failures [14]. It is difficult due to the exponential growth of test sequences; fortunately, testing techniques help perform this task [15]. The International Software Testing Qualifications Board defines software testing as the process, within the static and dynamic life cycle activities, of planning, preparing, and evaluating software products to determine whether they satisfy the specified requirements, to demonstrate that they are fit for purpose, and to detect defects [16].
A test is a set of activities that need to be planned and performed systematically [17]. For this reason, during the software development process, a template for elaborating and executing tests is defined; it consists of a set of steps that includes test methods and test case design techniques [18]. Software testing is part of a broader topic, usually referred to as software verification and validation (V&V) [19]. Validation is a set of tasks that ensure that the built software follows the requirements requested at the beginning of the process. Verification is the set of tasks that ensure that the software correctly implements a function [20].
Testing Techniques.
Testing aims to detect as many faults as possible; therefore, many techniques exist to accomplish this purpose. These techniques verify a program as systematically as possible by identifying the inputs that will produce the expected behavior of the program [21].
Unit Testing.
It is the basis for verifying the smallest unit of software design: the software component or module. These tests can be performed simultaneously on multiple components and are considered adjuncts to the coding process. Typically, unit tests have access to the source code and are executed with the support of debugging tools [22].
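As a minimal sketch of the idea, assume a hypothetical component `validate_amount` that must accept only positive transfer amounts; the function and its rule are illustrative, not taken from the case study:

```python
# Hypothetical component under test: accept only positive numeric amounts.
def validate_amount(amount):
    return isinstance(amount, (int, float)) and amount > 0

# Unit tests exercise the component in isolation, with direct access
# to its source code (pytest-style test functions).
def test_accepts_positive_amount():
    assert validate_amount(100.0)

def test_rejects_zero_and_negative():
    assert not validate_amount(0)
    assert not validate_amount(-5)
```

Each test targets one behavior of the unit, so a failure points directly at the offending component.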
Integration Tests.
Their purpose is to detect defects not found during unit tests. They focus on integrating and testing two or more components. Once the tests no longer detect new defects, additional components are added [23].
System Testing.
Unit and integration tests find defects in individual components and the interfaces between components. System tests demonstrate that components are compatible, interact correctly, and transfer the correct data at the right time through their interfaces [13].
Acceptance Testing.
This type of testing is performed when the product is ready to be deployed in the customer’s environment. Then, they focus on testing the user requirements, i.e., to demonstrate compliance with the acceptance criteria of the requirements. Once these tests are passed, the customer must accept the product [12].
White Box Testing.
Focus on designing test cases to validate the internal behavior and structure of the program. The design of these tests aims to execute at least once all program statements and all conditions to check both true and false values. Thus, they examine the internal logic of the program without considering performance aspects [24].
Black Box Testing.
Black box testing selects conditions, data, and test cases from the system requirements documentation. It tests only the inputs and outputs of the system, i.e., it ignores the internal mechanism of the software and instead considers its behavior from the point of view of an external observer [24].
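A black-box test derives its cases from the requirement alone. A small sketch, assuming a hypothetical requirement that a withdrawal must be between 1 and 500 USD (the rule and the function are illustrative):

```python
# Black-box view: only inputs and outputs matter; the implementation
# below is opaque to the tester.
def withdrawal_allowed(amount):
    return 1 <= amount <= 500

# Equivalence classes and boundary values chosen from the requirement alone:
cases = {
    "below lower bound": (0, False),
    "lower bound":       (1, True),
    "typical value":     (250, True),
    "upper bound":       (500, True),
    "above upper bound": (501, False),
}

for name, (amount, expected) in cases.items():
    assert withdrawal_allowed(amount) == expected, name
```

Note that the cases cover each equivalence class and both boundaries without ever inspecting the function body.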
Reviews.
The verification and validation process requires software inspections and reviews, which analyze and check system requirements, design models, program source code, and even proposed system tests. They are also called “static” techniques because they verify the software without executing it [25].
ISO/IEC/IEEE 29119.
The ISO/IEC/IEEE 29119 standards provide a set of internationally agreed standards for managing software testing in any software development life cycle or organization [26]. They are the only internationally recognized standards for software testing and provide a high-quality approach that can be communicated worldwide [27]. The standard aims to cover the software life cycle, including aspects related to testing organization, management, design, and execution [28]. Figure 2 shows the structure of the ISO/IEC/IEEE 29119 standard.
Part 1 - Concepts and Definitions.
This section introduces the set of standards, includes common definitions of all its parts, and describes elementary testing concepts. It explains the scope of the components and describes how to use standards for different lifecycle models [26].
Part 2 - Test Processes.
Defines testing processes using a three-layer model. The top layer corresponds to the organizational test process, which generates and maintains organizational policies and strategies for testing. The middle layer includes test management processes for test planning, monitoring, control, and completion. Finally, the bottom layer corresponds to dynamic testing processes; the overall model does not include static testing processes such as static analysis and reviews [26] (see Fig. 3).
Part 3 - Test Documentation.
This section provides templates with content descriptions for the main types of test documents. There is a strong link between the standards in Part 2 (Processes) and Part 3 (Documentation), as the results of the processes defined in Part 2 correspond to the documentation specified in Part 3 [26].
Part 4 - Testing Techniques.
These techniques support users following Part 2 in developing test plans that specify test case design techniques and criteria for achieving test completion. Part 4 defines a wide range of test techniques and corresponding coverage measures [26].
Part 5 - Keyword Driven Testing.
This section defines requirements for keyword-driven testing and minimum requirements for the supporting tools needed to utilize the keyword-driven testing approach fully [26].
2.3 Methodological Proposal
The proposed testing framework covers only functional testing. We intend to iteratively incorporate other types of tests into the framework in future work as it is implemented in new software projects. The proposed methodology comprises the following activities:
- Elaborate a plan for software test management.
- Establish the team structure for software testing.
- Define the testing execution process.
- Establish the documentation and deliverables of the testing process.
Roles.
The work team involved in the software testing process has the following roles and responsibilities, see Table 2.
Process Testing Flow.
Contains the activities and their order of execution in the framework, see Fig. 4.
Planning.
The structure of this phase contains five activities:
Identify Test Requirements.
This activity determines the software requirements or features included and excluded for testing and the scope of testing.
Prioritization.
This activity prioritizes the list of test requirements according to a test priority. Two factors must be evaluated for each requirement: failure priority and frequency of use (see Table 3).
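The prioritization step could be sketched as follows; the 1-to-3 scales and the product rule are assumptions made for illustration, since Table 3 is not reproduced here:

```python
# Assumed scales: failure priority and frequency of use each rated
# 1 (low) to 3 (high); test priority = product of the two factors.
requirements = [
    {"id": "REQ-01", "failure_priority": 3, "frequency_of_use": 3},
    {"id": "REQ-02", "failure_priority": 2, "frequency_of_use": 1},
    {"id": "REQ-03", "failure_priority": 1, "frequency_of_use": 3},
]

for req in requirements:
    req["test_priority"] = req["failure_priority"] * req["frequency_of_use"]

# Execute the highest-priority requirements first.
ordered = sorted(requirements, key=lambda r: r["test_priority"], reverse=True)
print([r["id"] for r in ordered])  # ['REQ-01', 'REQ-03', 'REQ-02']
```

Any monotonic combination of the two factors would serve; the product simply makes a requirement that fails badly *and* is used often rise to the top.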
Identify Resources.
Estimate the number of resources needed to design, build and execute software testing. The types of resources are data, environment, tools, and human resources.
Create the Schedule.
This activity includes estimating time to design, build and execute the tests based on previous experiences and metrics.
Generate the Test Plan.
This activity identifies and defines the deliverables to be created, maintained, and made available during test execution. Next, it documents the delivery schedule for those deliverables. Finally, it combines the data from the previous steps to create a Software Test Plan.
Design.
This phase contains the following activities:
Identify Test Cases.
A test case combines conditions and inputs for a specific test requirement. Generally, each test requirement should have more than one test case.
Identify Test Data.
This activity identifies the data required for the test cases defined in the previous activity.
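The design-phase artifacts above can be sketched as a simple data structure; the field names are illustrative, not prescribed by the framework:

```python
from dataclasses import dataclass, field

# A test case combines preconditions and inputs for one test requirement,
# and names the data it needs (identified in the "Identify Test Data" step).
@dataclass
class TestCase:
    case_id: str
    requirement_id: str
    preconditions: list
    inputs: dict
    expected_result: str
    test_data: list = field(default_factory=list)

cases = [
    TestCase("TC-01", "REQ-01", ["user logged in"],
             {"account": "ACC-1", "amount": 100}, "transfer accepted",
             test_data=["test account ACC-1 with balance >= 100"]),
    TestCase("TC-02", "REQ-01", ["user logged in"],
             {"account": "ACC-1", "amount": -1}, "transfer rejected"),
]

# Generally, each requirement should have more than one test case:
by_req = {}
for case in cases:
    by_req.setdefault(case.requirement_id, []).append(case)
assert len(by_req["REQ-01"]) == 2
```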
Construction.
This phase primarily develops the procedures and data needed for testing.
Test Procedure Creation.
A test procedure is a set of detailed instructions for preparing, executing, and evaluating the outcome of a test case or set of test cases.
Create Test Data.
This activity creates the data needed to execute the test procedures.
Execution.
This phase consists of executing the steps established in the test procedures and verifying that the result obtained from each test case matches the expected result; if it does not, the test engineer must record and report the defect found.
Defect Registration.
The Test Engineer records and reports to the Development Area the defects found in the test execution. After the development team corrects the reported defect, the test engineer will validate the defect again and report the result. If, when validating the correction of a defect, another defect is found, the test engineer must register it as a new defect.
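A minimal sketch of the defect-registration rule described above, where every defect found while re-validating a fix is recorded as a new defect; the states and identifiers are illustrative:

```python
defects = []

def register_defect(description):
    # Each defect gets its own record and identifier.
    defect = {"id": f"DEF-{len(defects) + 1:03d}",
              "description": description, "status": "reported"}
    defects.append(defect)
    return defect

def validate_fix(defect, retest_passed, new_issue=None):
    # The test engineer re-validates the corrected defect and reports the result.
    defect["status"] = "closed" if retest_passed else "reopened"
    if new_issue:  # a different defect surfaced while re-validating
        register_defect(new_issue)  # registered as a NEW defect, never an edit

d1 = register_defect("Transfer accepted with negative amount")
validate_fix(d1, retest_passed=True,
             new_issue="Balance not refreshed after transfer")
print([d["id"] for d in defects])  # ['DEF-001', 'DEF-002']
```

Keeping the new finding as a separate record preserves the defect history that the monitoring metrics below depend on.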
Monitoring and Control.
At the end of the testing period or even during the execution phase, the test engineer evaluates the test results using previously established metrics.
Test Coverage.
Provides an indicator of the proportion of executed test cases out of the total designed, calculated with the following formula:

Test Coverage = (ETC / TCD) × 100

Where:

ETC is the number of executed test cases.

TCD is the total number of test cases designed.
Test Maturity.
This indicator measures the proportion of satisfactory test case results, calculated with the following formula:

Test Maturity = (TCS / TCD) × 100

Where:

TCS is the number of test cases with a satisfactory result.

TCD is the number of test cases designed for all requirements.
Defect Density.
Provides a measure of the ratio of defects to the number of specification items, calculated with the following formula:

Defect Density = (DF / SIR) × 100

Where:

DF is the total number of defects found.

SIR is the number of specification items reviewed.
Defect Trend.
It is the number of defects as a function of time in an established classification.
Defect Percentage by Type.
This metric identifies, categorizes, and prioritizes defect types, calculated with the following formula:

Defect Percentage by Type = (NDT / TDI) × 100

Where:

NDT is the number of defects by type.

TDI is the total number of defects identified.
Basic Path.
Corresponds to the percentage of independent basis paths tested relative to the total number of paths, given by the sum of the cyclomatic complexities of the program modules, calculated with the following formula:

Basic Path = (NDT / C(G)) × 100

Where:

NDT is the number of designed tests.

C(G) is the sum of the calculated cyclomatic complexities.
Finalization.
The status of the tests can be evaluated at any point in the execution using the metrics established in the previous sections to decide if the software release is possible.
2.4 Test Documentation
Documentation corresponds to the software testing forms and templates available for organizations, specific projects, or individual testing activities.
Test Plan.
The formal document, prepared by the Test Leader, that plans the detail of software testing activities, times, and responsible persons.
Test Design Specification.
This document contains the set of software features to be evaluated, with their respective execution priority. The Test Leader elaborates it.
Test Case Specification.
This document details the test cases to be executed for each specified software feature to be evaluated in the Test Design.
Test Procedure.
It specifies the order and sequence of execution of the test cases, restrictions, and previous actions for each test case.
Defect Report.
The record of executed test cases whose obtained result does not coincide with the expected result; therefore, the test is not satisfactory.
Results Report.
It is a management document that summarizes the results obtained in the execution of the test cases to determine if the software is suitable to be put into production or, on the contrary, some errors still need to be corrected.
3 Results
In this section, we validate the proposed software testing framework through three instruments applied in the “Cooperativa de Ahorro y Crédito Atuntaqui” in Ecuador in 2019: 1) the implementation of the testing framework in a real-world software project; 2) a comparison of the number of failures reported in the maintenance phase of six software projects implemented in a production environment; and 3) a satisfaction survey on the use of the proposed testing framework.
3.1 Implementing the Testing Framework in a Software Project
In this section, we validate the study proposal by implementing the proposed testing framework in a software development project; after its deployment in production, we measured the impact through three cycles of test plan execution in the maintenance phase. Table 4 shows the result of implementing the framework in a software project.
In the results obtained (Table 4), we note that the Test Coverage in the three executed cycles is one hundred percent, i.e., we executed all test cases designed in the test plan. The Test Maturity metric starts at 77% in the first cycle, increases to 99% in the second, and ends at 100% in the third; this means that as the test cycles were executed, software errors were detected and corrected, and by the end all designed test cases were satisfactory. The Defect Density was 95% in the first cycle; with the correction of the detected errors, it decreased to 5% in the second cycle and ended at 0% in the third, which means that all the software errors found were corrected.
The testing framework artifacts are available in the following Zenodo open data repository [31].
3.2 Comparison of the Number of Failures of Projects in the Maintenance Phase
In this validation, we analyzed the number of software failures reported in production environments of three projects implemented without using the framework and three projects using the proposed framework. The selection of the software projects was made with the advice of the Development Leader, considering the number of functional requirements and the development time of each project, see Table 5.
Table 6 shows the number of failure support cases reported for the production environment projects in the study period.
We analyzed the number of software failures of the projects that used the proposed testing framework against the projects that did not use it. The proposal’s impact corresponds to the percentage of software failures of the first group relative to the second, measured by the following formula:

Impact = (failures of projects using the framework / failures of projects not using the framework) × 100
The result of the calculation shows that the projects using the software testing framework reported only 12.5% of the software failures reported by the projects that did not use the proposed framework. In addition, none of the reported incidents corresponds to errors in the data, avoiding direct impact on the database.
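The impact calculation can be sketched as follows; the failure counts are illustrative, since Table 6 is not reproduced here, but they are consistent with the reported 12.5%:

```python
def impact(failures_with_framework, failures_without_framework):
    # Failures of framework projects as a percentage of failures
    # of non-framework projects.
    return 100.0 * failures_with_framework / failures_without_framework

# Illustrative counts producing the reported ratio:
assert impact(1, 8) == 12.5
```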
3.3 Satisfaction Survey
To measure the perception of the technical staff of the Development Area about the use of the proposed testing framework, we conducted a diagnostic survey before, and a satisfaction survey after, implementing the framework in the study company. We designed and rated the surveys using a three-point Likert scale (Very Good, Good, Fair). Table 7 shows the ratings of the surveys and the impact of the implementation of the proposed framework.
4 Discussion
We consider it necessary to clarify that the development environment and the framework artifacts and templates are in Spanish, since we believe that replications of this study should be carried out in the same language. We do not recommend replicating it in a different language because we do not know whether the translation of the artifacts would affect the validity of the research.
During the study, we also observed that the expertise of the development team members influences the quality of the software product; for this reason, we consider it convenient to use a tool that manages software testing and provides the templates needed to document the software evaluation process.
5 Conclusions and Future Work
In this study, based on the DSR research approach, we posed the research question: Does implementing a test management framework in the development process improve the quality of the software product? We answered this question by developing a software testing framework based on the processes described in ISO/IEC/IEEE 29119–2 and the documentation templates of ISO/IEC/IEEE 29119–3. We then validated this instrument using three artifacts to measure the influence of the proposed framework on the quality of the developed software. The principal validation results show an increase in test maturity from 77% to 100%, a reduction in defect density from 95% to 0%, and a reduction of the software failures reported in production environments to 12.5% of the level observed without the framework. Therefore, we conclude that implementing a test management framework does improve the quality of the developed software product.
As future work, we propose to study the impact of the use of this proposal in software development teams with different levels of experience to check which segment has a more significant impact and is more necessary.
References
Guevara-Vega, C.P., Guzmán-Chamorro, E.D., Guevara-Vega, V.A., Andrade, A.V.B., Quiña-Mera, J.A.: Functional requirement management automation and the impact on software projects: case study in ecuador. In: Rocha, Á., Ferrás, C., Paredes, M. (eds.) ICITS 2019. AISC, vol. 918, pp. 317–324. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-11890-7_31
Sabri, O., Alfifi, F.: Integrating knowledge life cycle within software development process to produce a quality software product. In: Proceedings of the 2017 International Conference on Engineering and Technology (ICET 2017), vol. 2018, pp. 1–7 (2018)
Tüzün, E., Tekinerdogan, B., Macit, Y., İnce, K.: Adopting integrated application lifecycle management within a large-scale software company: an action research approach. J. Syst. Softw. 149, 63–82 (2019)
Tosun, A., Ahmed, M., Turhan, B., Juristo, N.: On the effectiveness of unit tests in test-driven development. In: ACM International Conference Proceeding Series, pp. 113–122 (2018)
Spadini, D., Aniche, M., Storey, A., Bruntink, M., Bacchelli, A.: When testing meets code review: why and how developers review tests. In: Proceedings of the International Conference on Software Engineering (ICSE), pp. 677–687 (2018)
Kayes, I., Sarker, M., Chakareski, J.: Product backlog rating: a case study on measuring test quality in scrum. Innovations Syst. Softw. Eng. 12(4), 303–317 (2016). https://doi.org/10.1007/s11334-016-0271-0
Afzal, W., Alone, S., Glocksien, K., Torkar, R.: Software test process improvement approaches: a systematic literature review and an industrial case study. J. Syst. Softw. 111, 1–33 (2016)
Sawant, A.A., Bari, P.H., Chawan, P.: Software testing techniques and strategies. J. Eng. Res. Appl. 2(3), 980–986 (2012)
Hevner, A.R., March, S.T., Park, J., Ram, S.: Design science in is research. Manag. Inf. Syst. 28(1), 75–105 (2004)
Coop. Atuntaqui. Cooperativa de ahorro y crédito (2021). Atuntaqui. https://www.atuntaqui.fin.ec/
Qian, H.M., Zheng, C.: An embedded software testing process model. In: Proceedings of the 2009 International Conference on Computational Intelligence and Software Engineering (CiSE 2009) (2009)
El-Attar, M., Miller, J.: Developing comprehensive acceptance tests from use cases and robustness diagrams. Requir. Eng. 15(3), 285–306 (2010)
Malaek, S.M.B., Mollajan, A., Ghorbani, A., Sharahi, A.: A new systems engineering model based on the principles of axiomatic design. J. Ind. Intell. Inf. 3(2) (2014)
Vasanthapriyan, S., Tian, J., X.B, J.: An ontology-based knowledge framework. 2, 212–226 (2017)
Melo, S.M., Carver, J.C., Souza, P.S.L., Souza, S.R.S.: Empirical research on concurrent software testing: a systematic mapping study. Inf. Softw. Technol. 105, 226–251 (2019)
Kramer, A., Legeard, B.: Model-Based Testing Essentials. Wiley (2016)
Bertolino, A.: Software testing research: achievements, challenges, dreams, September 2007 (2007)
Kitchenham, B.: Evidence-based software engineering and systematic literature reviews. In: Münch, J., Vierimaa, M. (eds.) PROFES 2006. LNCS, vol. 4034, pp. 3–3. Springer, Heidelberg (2006). https://doi.org/10.1007/11767718_3
Monteiro, P., Machado, R.J., Kazman, R.: Inception of software validation and verification practices within CMMI level 2. In: Fourth International Conference on Software Engineering Advances (ICSEA 2009), incl. SEDES 2009, pp. 536–541 (2009)
Tamura, G., et al.: Towards practical runtime verification and validation of self-adaptive software systems. In: de Lemos, R., Giese, H., Müller, H.A., Shaw, M. (eds.) Software Engineering for Self-Adaptive Systems II. LNCS, vol. 7475, pp. 108–132. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-35813-5_5
Vegas, S., Basili, V.: A characterisation schema for software testing techniques. Empir. Softw. Eng. 10(4), 437–466 (2005)
Daka, E., Campos, J., Fraser, G., Dorn, J., Weimer, W.: Modeling readability to improve unit tests. In: Proceedings of the 2015 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE 2015), pp. 107–118 (2015)
Delamaro, M.E., Maldonado, J.C., Mathur, A.P.: Interface mutation: an approach for integration testing. IEEE Trans. Softw. Eng. 27(3), 228–247 (2001)
White, L.J.: Software testing and verification. Advances in Computers, vol. 26, pp. 335–391 (1987)
Itkonen, J., Mäntylä, M.V.: Are test cases needed? replicated comparison between exploratory and test-case-based software testing. Empir. Softw. Eng. 19(2), 303–342 (2014)
ISO/IEC/IEEE 29119–1:2013 - Software and systems engineering — Software testing — Part 1: Concepts and definitions. ISO/IEC/IEEE (2013)
Eckhart, M., Meixner, K., Winkler, D., Ekelhart, A.: Securing the testing process for industrial automation software. Comput. Secur. 85, 156–180 (2019)
Matalonga, S., Rodrigues, F., Travassos, G.H.: Matching context aware software testing design techniques to ISO/IEC/IEEE 29119. In: Rout, T., O’Connor, R.V., Dorling, A. (eds.) SPICE 2015. CCIS, vol. 526, pp. 33–44. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-19860-6_4
Reid, S.: Achieving systems safety, pp. 7–9 (2012)
Reuys, A., Kamsties, E., Pohl, K., Reis, S.: Model-based system testing of software product families. In: Pastor, O., Falcão e Cunha, J. (eds.) CAiSE 2005. LNCS, vol. 3520, pp. 519–534. Springer, Heidelberg (2005). https://doi.org/10.1007/11431855_36
Guevara-Vega, C., Cárdenas, W., Landeta, P., Rea, M., Quiña-Mera, A.: Supplemental Material: Software Test Management to Improve Software Product Quality Zenodo (2021). https://doi.org/10.5281/zenodo.5150822
© 2022 Springer Nature Switzerland AG
Guevara-Vega, C.P., Cárdenas-Hernández, W.A., Landeta, P.A., Rea-Peñafiel, X.M., Quiña-Mera, J.A. (2022). Software Test Management to Improve Software Product Quality. In: Botto-Tobar, M., Montes León, S., Torres-Carrión, P., Zambrano Vizuete, M., Durakovic, B. (eds) Applied Technologies. ICAT 2021. Communications in Computer and Information Science, vol 1535. Springer, Cham. https://doi.org/10.1007/978-3-031-03884-6_31
Print ISBN: 978-3-031-03883-9
Online ISBN: 978-3-031-03884-6