1 Introduction

In 2007, a report by ITCs-Liteary stated that the IT industry was responsible for more than 2% of global carbon emissions. Another report by 1E in 2007 estimated that a company using 10,000 PCs wastes approximately $165,000 in electricity bills every year because computer systems are left on overnight. The report concluded that the company could reduce atmospheric Carbon Dioxide (CO\(_{2}\)) emissions by approximately 1381 tons by switching off its computer systems. The power and energy consumption of PCs is therefore an important issue that should be taken seriously. Green IT tries to minimize the energy consumption of Information and Communication Technologies (ICT), including the energy induced during the software development life cycle. Nowadays, current practices mainly emphasize data centers. The idea of Green Information Technology is to decrease environmental impacts through Information Technology solutions. Most ICT solutions are based on software, so the power and energy consumption of software is crucial to analyze.

To improve software quality, software testing aims to detect and rectify possible bugs in software. Traditionally, software test engineers could not detect all possible errors in a program through testing techniques alone. Producing test data that cover all execution paths in an automated testing approach is a technically big challenge, due to the presence of many complex conditional expressions and loops. To address these issues, Dynamic Symbolic Execution, also known as CONCOLIC (Concrete and Symbolic) testing, was introduced to produce test data that explore the maximum possible paths of a program. In accordance with the RTCA DO-178B standard, code coverage testing is one of the acceptable types of testing. Coverage-based testing is a white-box testing technique. There are several coverage-based testing techniques, such as Statement Coverage (SC), Condition Coverage (CC), Branch Coverage (BC), Modified Condition/Decision Coverage (MC/DC), and Multiple Condition Coverage (MCC) testing. MC/DC is mandatory for aerospace and nuclear safety-critical systems. Energy consumption is a core area of concern for Green IT, Green Software Engineering, and Green Software Testing, yet the energy consumption of concolic and MC/DC testing has not been studied so far.

In this paper, we propose a framework to measure modified condition/decision coverage along with the amount of energy consumed in the process. We have named our tool the Green JPCT JCUTE JCA Model (Green-J\(^{3}\) Model). The Green-J\(^{3}\) Model mainly consists of five modules: (1) Java Program Code Transformer (JPCT), (2) Java Concolic Tester (JCUTE), (3) Java Coverage Analyzer (JCA), (4) JouleMeter, and (5) Energy Calculator. In the proposed approach, we develop JPCT to improve the MC/DC of Java programs by converting the original Java program into a transformed version. Our Green-J\(^{3}\) Model uses JCUTE to generate test data. We have developed JCA to compute the MC/DC percentage of the original and transformed Java programs. To measure the energy consumed in this process, the Green-J\(^{3}\) Model uses the tool JouleMeter. The abbreviations used in our work are listed in Table 1 for easy reference.

Table 1 Abbreviations used in our approach

The rest of the paper is organized as follows: Sect. 2 deals with some of the fundamental concepts. Section 3 explains the proposed Green-J\(^{3}\) Model and discusses the proposed algorithms. Section 4 explains the experimental study of our proposed work. In Sect. 5, we present some important threats to validity. In Sect. 6, we compare our work with some of the existing related work. Section 7 concludes the paper with some insights into our future work.

2 Fundamental concepts

In this section, we discuss some important fundamental concepts which are required to understand our work.

2.1 Green Software Engineering

To classify and organize the concerns of green and sustainable software and their engineering, Kern et al. [1] developed the GREENSOFT Model shown in Fig. 1. The GREENSOFT Model consists of the following four parts: the life cycle of software products, sustainability criteria for software products, procedure models, and recommendations for action and tools. Green Software Engineering is the attempt to apply the "green" principles known from hardware products to software products, software development processes, and their underlying software process models. For more detailed information on Green Software Engineering, the reader is referred to [1, 2].

Fig. 1 GREENSOFT Model [1, 2]

2.2 Energy consumption

The System Under Test (SUT) is the computer hosting the application whose induced energy consumption will be computed. The power readings define the energy consumption recorded with respect to timestamps. The total energy consumption for the entire run may be simply obtained by summing up the power values measured in Watts for the duration of interest. Since each value represents the power used in one second, the sum gives the total energy consumption in Joules.
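For illustration, a minimal sketch of this summation in Java might look as follows; the class and method names are ours and purely illustrative, and a one-second sampling interval is assumed as described above.

// Minimal sketch: with one power reading per second (in Watts), each sample
// equals the energy used in that second, so the sum over the run is in Joules.
public class EnergyCalculatorSketch {
    public static double totalEnergyJoules(double[] wattsPerSecond) {
        double joules = 0.0;
        for (double w : wattsPerSecond) {
            joules += w * 1.0;   // power (W) times the 1-second sampling interval
        }
        return joules;
    }

    public static void main(String[] args) {
        double[] samples = {42.5, 43.1, 41.8};   // hypothetical power readings
        System.out.println(totalEnergyJoules(samples) + " J");
    }
}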

2.3 Branch coverage

To achieve branch coverage, each decision in a program should take all possible outcomes, i.e. both true and false, at least once.

2.4 MC/DC

DO-178B is a standard published by RTCA (Radio Technical Commission for Aeronautics) [3]. Achieving Modified Condition/Decision Coverage (MC/DC) is mandatory for Level A certification of safety-critical applications [4]. MC/DC requires satisfaction of the following:

  • All the entry and exit points of the input program must be invoked at least once.

  • Each condition in a decision must be shown to independently affect the outcome of that decision.

  • Every decision must take every possible outcome at least once.

  • Every condition in a decision must take every possible outcome at least once.

Let us discuss MC/DC with an example. Consider a program containing only one complex condition, “if(\(A \bigwedge B\))”. Table 2 shows the truth table for “if(\(A \bigwedge B\))” and Table 3 shows the extended truth table for this condition, which represents the core idea of MC/DC. From Table 3, we can observe that the test cases (TC) numbered {1,2,3} are the unique test cases that satisfy the MC/DC criterion for the given condition. Thus, the resultant test cases are {(TT), (TF), (FT)}. Here, both conditions “A” and “B” are shown to independently affect the decision, by changing the value of each condition and observing that the final outcome of the whole decision changes. Therefore, with two independently affected conditions out of two simple conditions, 100% MC/DC is achieved. It may be noted that the test suite achieving MC/DC for a given program is in general not unique. The test case (TT) is needed because it is the only one that returns a true output value. The test case (FT) is needed because it is the only test case that modifies the value of A and also affects the outcome of the complex condition, thereby establishing the independent influence of A. In a similar way, the test cases (TT) and (TF) are needed to show the independent influence of B.

Table 2 Truth table for “if(\(A \bigwedge B\))”
Table 3 Extended truth table for “if(\(A \bigwedge B\))”
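To make the example concrete, a minimal Java sketch of the decision and of the three MC/DC test cases {(TT), (TF), (FT)} could look like the following; the class and method names are ours, for illustration only.

// Decision under test: if (A && B)
public class McdcExample {
    static boolean decision(boolean a, boolean b) {
        return a && b;
    }

    public static void main(String[] args) {
        // MC/DC test set {(T,T), (T,F), (F,T)}:
        System.out.println(decision(true, true));   // true  -- baseline outcome
        System.out.println(decision(true, false));  // false -- only B changed, so B independently affects the decision
        System.out.println(decision(false, true));  // false -- only A changed, so A independently affects the decision
    }
}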

2.5 Concolic testing

Concolic testing is the combination of concrete and symbolic execution, and it performs exhaustive testing. Concolic testing was proposed to automatically generate test data that can execute all the reachable branches of a program [5]. It first generates a random input value and then executes the program with this input. Simultaneously, the symbolic constraints at every branch point along the execution path are collected. At the end of the execution, a branch constraint is available for each branch point on the path. The path constraint is the conjunction of all these constraints. One constraint from the path constraint is chosen and negated to explore a new path. The last branch constraint of a path is generally chosen for negation, but the selection can also be done randomly. A constraint solver is then used to solve the new path constraint and generate concrete input values that exercise the new path. For more information on this approach, the reader is referred to [5].

We explain the concolic testing approach using an illustrative example. Consider the function weightCategory shown in Fig. 2. The process starts by executing the program with randomly generated inputs. Suppose the parameters mass and length are randomly set to 22.0 and −5.0. During execution, both the concrete and symbolic values (22.0 and \(mass_0\)) are recorded for the specific executed path. The first branch instruction is encountered at Line 2. Here, the predicate evaluates to true because mass is set to 22.0. For an input value to take the same branch, the branch constraint \((mass>0.0)\) must hold.

Fig. 2 A sample program
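Since the figure is not reproduced here, the following is a plausible reconstruction of weightCategory based on the branch constraints discussed in this section; the exact line numbers, labels, and return values of the paper's Fig. 2 may differ.

// Plausible reconstruction of the weightCategory example (Fig. 2); the
// UNDERWEIGHT/NORMAL/INVALID labels are assumptions based on the text.
public class Sample {
    enum Category { INVALID, UNDERWEIGHT, NORMAL, OVERWEIGHT }

    static Category weightCategory(double mass, double length) {
        if (mass > 0.0) {                      // first branch point (Line 2 in the paper)
            if (length > 0.0) {                // second branch point (Line 6)
                double bmi = mass / (length * length);
                if (bmi < 18.5) return Category.UNDERWEIGHT;
                if (bmi < 25.0) return Category.NORMAL;
                return Category.OVERWEIGHT;    // path taken for mass = 50.0, length = 1.0
            } else {
                return Category.INVALID;       // path taken for the random inputs (22.0, -5.0)
            }
        }
        return Category.INVALID;
    }
}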

The next branch point is encountered when executing the if expression in Line 6. Here the predicate evaluates to false, because length is set to a negative value. The branch constraint associated with this branch is:

$$\begin{aligned} \lnot (length>0.0) \end{aligned}$$
(1)

This branch constraint value is merged with the preceding branch constraint value to form the following new path constraint:

$$\begin{aligned} (mass>0.0)\wedge \lnot (length>0.0) \end{aligned}$$
(2)

After Line 8 is executed, the function weightCategory exits through its else branch. A new path constraint is obtained by negating one of the branch constraints. When the last branch constraint is negated, the resulting path constraint becomes:

$$\begin{aligned} (mass>0.0)\wedge (length>0.0) \end{aligned}$$
(3)

This new path constraint is then passed to a constraint solver to check whether there exists a set of input values that makes the constraint true. One solution that satisfies this constraint is length = 1.0 and mass = 50.0. In the next iteration, the function is exercised with this set of input values and the path constraint is collected again. This execution causes the function to return the value OVERWEIGHT. The execution path corresponds to satisfaction of the following constraint:

$$\begin{aligned} (mass>0.0)\wedge (length>0.0)\wedge \lnot (bmi<18.5)\wedge \lnot (bmi<25.0) \end{aligned}$$
(4)

This step of manipulating the symbolic path constraints, solving them to generate concrete inputs, and executing the resulting test inputs is repeated until a predefined stopping criterion is met. The stopping criterion is usually either that the number of iterations exceeds a pre-specified threshold or that all the reachable branches are covered.

When the constraint solver fails to compute a set of test inputs satisfying the modified path constraint, it is invoked with another modified path constraint, obtained by negating some other branch constraint of the previous path constraint. If the constraint solver fails to produce any test input for a path, the path is considered infeasible, i.e. no input exists that can satisfy its constraint.

Algorithm 1

3 Proposed approach: Green-J\(^{3}\) Model

In this section, we discuss the proposed Green-J\(^{3}\) Model in detail. First, we present the block diagram of the Green-J\(^{3}\) Model. Then, we present its algorithmic description in detail.

3.1 Block diagram of Green-J\(^{3}\) Model

Figure 3 shows a schematic representation of the Green-J\(^{3}\) Model. The block diagram mainly consists of five modules: JPCT, JCUTE, JCA, JouleMeter, and Energy Calculator. The flow starts when JouleMeter begins recording the power consumption of JPCT, JCUTE, and JCA at one-second intervals. JPCT takes a Java program as input and converts it into its transformed version. The transformed program is then supplied to JCUTE to generate test cases automatically. Next, the original program and the generated test cases are fed into JCA to compute the MC/DC. After these steps are performed, we stop recording the power consumption. Finally, using the total time taken by the above process and the total power consumed, our tool measures the total energy consumption in Joules through the Energy Calculator. Therefore, through the Green-J\(^{3}\) Model, we measure the time taken, power consumed, and energy consumed by our proposed software testing technique.

Fig. 3 Schematic representation of Green-J\(^{3}\) Model

3.2 Algorithmic description of our proposed approach

In this section, we describe the algorithms of the Green-J\(^{3}\) Model and the J\(^{3}\) Model in more detail.

3.2.1 Green-J\(^{3}\) Model

The overall pseudo-code for the proposed approach is given in Algorithm 1. We can observe from Algorithm 1 that the proposed process requires the following: (1) a Java program, (2) JPCT to transform the Java program, (3) the JCUTE concolic tester to generate test cases, (4) JCA to measure the MC/DC percentage, (5) JouleMeter to measure energy consumption, and (6) the Energy Calculator. Finally, Algorithm 1 outputs the total Energy Consumption (EC).

The first two steps of Algorithm 1 are performed by JouleMeter to start the process. Step 3 generates Test_Suite_1 (TS_1) using JCUTE and measures MC/DC_1% using JCA; this first experiment executes the Java program without JPCT. Step 4 runs the Java program through JPCT to convert it into its transformed version. Step 5 generates Test_Suite_2 (TS_2) using JCUTE and measures MC/DC_2% using JCA. Note that in Step 5 the test suite is generated for the transformed Java program, while MC/DC_2% is computed for the original Java program and TS_2. Step 6 determines the time taken to compute MC/DC, the speed of test case generation, and the difference between the two MC/DC values. Steps 7 and 8 stop saving the energy consumption values and terminate JouleMeter. Step 9 determines the total time taken and power consumption. Step 10 finally computes the total energy consumption in Joules.
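Since the pseudo-code itself is rendered as an image (Algorithm 1), the following Java-style sketch only summarizes the steps described above; the interfaces and method names are hypothetical stand-ins for the modules, not the real APIs of JouleMeter, JPCT, JCUTE, or JCA.

import java.io.File;
import java.util.List;

// High-level sketch of Algorithm 1 (Green-J^3 Model); all interfaces are hypothetical.
public class GreenJ3ModelSketch {

    interface PowerLogger { void start(); void stop(); List<Double> wattsPerSecond(); }
    interface Transformer { File transform(File program); }                             // JPCT
    interface ConcolicTester { List<String> generateTests(File program); }              // JCUTE
    interface CoverageAnalyzer { double mcdcPercent(File program, List<String> tests); } // JCA

    public static double totalEnergyJoules(File javaProgram, PowerLogger meter,
                                           Transformer jpct, ConcolicTester jcute,
                                           CoverageAnalyzer jca) {
        meter.start();                                           // Steps 1-2: JouleMeter logs power every second

        List<String> ts1 = jcute.generateTests(javaProgram);     // Step 3: Scenario 1
        double mcdc1 = jca.mcdcPercent(javaProgram, ts1);

        File transformed = jpct.transform(javaProgram);          // Step 4: insert dummy branches
        List<String> ts2 = jcute.generateTests(transformed);     // Step 5: tests for the transformed program,
        double mcdc2 = jca.mcdcPercent(javaProgram, ts2);        // but MC/DC measured on the original program

        double gain = mcdc2 - mcdc1;                             // Step 6: coverage enhancement

        meter.stop();                                            // Steps 7-8: stop logging

        // Steps 9-10: each one-second sample (W) equals Joules used in that second
        return meter.wattsPerSecond().stream().mapToDouble(Double::doubleValue).sum();
    }
}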

Here, we discuss JouleMeter. JouleMeter [6] is a tool developed by Microsoft and is easily available on the Internet (Footnote 1). It is used to measure the power consumption of a software application running on a computer system. JouleMeter computes the power usage of an application through a power model, which relates the hardware power state and the computer resource usage in the power consumption calculation. The hardware power state includes the processor frequency, monitor on/off state, screen brightness, processor utilization, and disk utilization. In our proposed work, however, we consider only the power usage of a specific software application. JouleMeter requires some power parameters to be set for the power model to run properly: Processor Peak Power (PPP) at high frequency, Base Power, Processor Peak Power (PPP) at low frequency, and monitor power. The reported power consumption of an application program covers only that specific application. The energy consumption of a software application is computed by adding the power usage per timestamp from the start to the end of its execution; the total power usage is multiplied by the sampling interval to obtain the energy consumption, as defined in the Energy Calculator.

3.2.2 J\(^{3}\) Model

Algorithm 2 presents the concept of the J\(^{3}\) Model, which accepts a Java program as input and produces the MC/DC percentage as output. Steps 1–7 of Algorithm 2 perform JPCT. Initially, program J is fed into JPCT to produce the transformed program. Step 2 identifies the predicates in the Java program under test. Steps 3 and 4 form a sum of products (SOP) for each identified predicate. This SOP may be complex, so we apply the Quine–McCluskey method to minimize it. Then, to test the parts of the predicate more exhaustively and rigorously, we use the Boolean Derivative Method. Using this method, we insert empty nested if-else conditional expressions into the original Java program. These extra conditional expressions help to enhance MC/DC, since they lead to the generation of more test cases that cover more branches and explore more paths of the execution tree when the program is compiled and executed through the concolic tester. Step 7 gathers all the inserted statements and the original program to form the transformed version of the original Java program. Step 8 executes the transformed program through JCUTE and generates test cases. Steps 9–14 use JCA to measure MC/DC. In Step 9, we supply the generated test cases and the original Java program to JCA. In Step 10, the test case reader reads each test case, and subsequently the predicate identifier identifies all predicates in the input Java program in Step 11. In Step 12, we apply the concept of MC/DC to identify the independently affected conditions (I) and the number of simple conditions (C). The Coverage Calculator uses these two values, I and C, to compute the MC/DC percentage in Step 13. In Step 14, the algorithm exits.
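Consistent with the example in Sect. 2.4 (two independently affected conditions out of two simple conditions give 100%), the Coverage Calculator of Step 13 presumably computes the MC/DC percentage as the ratio of the independently affected conditions (I) to the number of simple conditions (C):

$$\begin{aligned} \text{MC/DC}\,\% = \frac{I}{C} \times 100 \end{aligned}$$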

Let us consider the function testLogical shown in Fig. 4. The function accepts three Boolean values as parameters and, based on the satisfaction of the decision (a && (b || c)) at Line 2, the true or false branch is executed. Our technique transforms the function testLogical into code with dummy branches, as shown in Fig. 5, to achieve MC/DC. The expressions shown in Lines 2–13 ensure that each branch is reached, due to which we achieve high MC/DC.

Fig. 4 Example function with a boolean expression
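As the figure is not reproduced here, the following is a plausible reconstruction of the function in Fig. 4; only the decision a && (b || c) is taken from the text, and the branch bodies are ours for illustration.

// Plausible reconstruction of testLogical (Fig. 4); the return values are assumptions.
public class Example {
    static int testLogical(boolean a, boolean b, boolean c) {
        if (a && (b || c)) {     // decision at Line 2
            return 1;            // true branch
        } else {
            return 0;            // false branch
        }
    }
}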

Fig. 5 Transformed code of the example function given in Fig. 4 to achieve MC/DC
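Since Fig. 5 is also not reproduced, the following sketch only illustrates the idea of the JPCT transformation: empty nested if-else expressions derived from the Boolean derivatives of the decision are inserted ahead of the original decision, so that the concolic tester is forced to generate inputs exercising each condition independently. The particular dummy branches below are an assumption; the exact code emitted by JPCT in Fig. 5 may differ.

// Illustrative sketch of a JPCT-style transformation of testLogical; the
// specific dummy branches are assumptions, not the tool's actual output.
public class ExampleTransformed {
    static int testLogical(boolean a, boolean b, boolean c) {
        // Empty nested if-else expressions inserted by JPCT (cf. Lines 2-13 of Fig. 5):
        if (a) { if (b || c) { } else { } } else { }    // Boolean derivative w.r.t. a: (b || c)
        if (b) { if (a && !c) { } else { } } else { }   // Boolean derivative w.r.t. b: a && !c
        if (c) { if (a && !b) { } else { } } else { }   // Boolean derivative w.r.t. c: a && !b
        // Original decision, unchanged:
        if (a && (b || c)) {
            return 1;
        } else {
            return 0;
        }
    }
}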

Algorithm 2

4 Experimental studies

We carried out our experiments on forty-five benchmark Java programs taken from the OSL Repository (Footnote 2) and some student assignments. Our experimental setup is as follows: we ran the programs on the Windows 7 operating system with 4 GB RAM and an Intel(R) Core(TM) i5 CPU with a processing speed of 2.40 GHz. We performed our experiments in the Java environments JDK 1.6.0 and JRE 1.6.0. Table 4 describes the different properties of the Java programs we considered. Column 3 shows the LOC of the original Java program and Column 4 shows the LOC of the transformed Java program. We considered programs of up to approximately 3000 LOC. Columns 5 and 6 give the number of functions invoked and the number of classes present in a Java program, respectively. Our work is based on concolic and MC/DC testing, which strictly require predicates and branches to be covered; Columns 7 and 8 therefore present the number of identified predicates and the number of branches of the execution tree, respectively. Other properties such as the number of variables, DFS (Depth First Search) information, and errors are reported in Columns 9, 10, and 11, respectively.

Table 4 Characteristics of different experimental programs

For our experimentation, we have considered two scenarios. These two scenarios are defined below:

Scenario 1 It is the process of test case generation and measuring MC/DC percentage through JCUTE and JCA respectively.

Scenario 2 It is the process of transforming a Java program into its transformed version, generating the test cases, and measuring the MC/DC percentage through JPCT, JCUTE, and JCA respectively.

Table 5 presents the important parameters of our experiments. We use JCUTE to test a Java program in a concolic manner. JCUTE automatically selects input values for the different constraints to explore the possible paths in an execution tree. These different input files are the test cases generated automatically by the JCUTE tool. The total number of test cases generated for Scenarios 1 and 2 is reported in Column 3. It may be observed that, for thirty-seven programs, Scenario 2 produces more test cases than Scenario 1 due to the transformation of the Java programs. In addition to the MC/DC percentage metric used to assess software quality, we measure the total computational time to compute MC/DC and the speed of test case generation. Column 4 presents the computational time for Scenarios 1 and 2. In Scenario 1 we have two modules, JCUTE and JCA; we record their time values individually and then sum them to obtain the total time. In Scenario 2 we have three modules, JPCT, JCUTE, and JCA; again, we record the time values individually and then sum them to obtain the total time. It may be observed that, for about thirty-two programs, Scenario 2 takes slightly more time than Scenario 1. The reason is the additional module used in Scenario 2 to obtain the transformed version of the program. Note that, since we have two scenarios, we executed our experiments twice, and different executions of the modules take different times.

Table 5 Result analysis for different programs

We have already mentioned that we record the time individually for each module. Using the time values of JCUTE and JPCT+JCUTE, we compute the speed of test case generation for Scenarios 1 and 2, respectively. Column 5 reports the speed of test case generation for both scenarios. We observed that for thirty programs Scenario 2 is slower than Scenario 1, due to the extra module JPCT added in Scenario 2.

Column 6 deals with the MC/DC percentage analysis. We compute MC/DC_1% and MC/DC_2% for Scenarios 1 and 2, respectively. To show the enhancement in MC/DC, we take the difference of the two MC/DC values, shown in the third sub-column of Column 6. We may observe that for thirty-one programs we successfully achieved an increased MC/DC. Among forty Java programs, on average we achieved a 24.03% higher MC/DC. Figure 6 shows the MC/DC percentage analysis of the forty programs: the programs are plotted along the X-axis and the MC/DC percentage along the Y-axis. Figure 7 presents the comparison of the two MC/DC values, i.e. MC/DC_1 and MC/DC_2. Figure 7 clearly reflects a significant increase in MC/DC.

Fig. 6 Comparison of MC/DC for different programs

Fig. 7 Difference between the two MC/DC percentages

Table 6 discusses the total energy consumption analysis of both scenarios for the forty-five programs, as described in Algorithm 1. Columns 3 to 5 present the start timestamp, the end timestamp, and the total time in milliseconds, respectively. Column 6 presents the power consumption of an application, in Watts, over the time taken. Figure 8 shows the relationship between power consumption and time: the X-axis shows the input programs and the Y-axis shows the total power consumed by each program. We may observe that three programs consume more than 100 W, as reported in Column 6. Column 7 presents the energy consumption of all forty-five programs, ranging from 379.8578 to 162100.0568 J. Figure 9 shows the comparison of the energy consumption of all forty-five programs. We have scaled down the energy consumption values to show the comparison across all Java programs; in this graph we have taken 1000 J as a threshold value. In Fig. 9, the programs are plotted along the X-axis and the energy consumption in Joules along the Y-axis. We conclude that the total energy consumption for the forty-five programs is 747.51 kJ.

Table 6 Power consumption and energy consumption of different programs
Fig. 8 Recorded power consumption over time of interest

Fig. 9 Comparison of computed energy consumption for different programs

5 Threats to validity

Here, we present some important threats to the validity of Green-J\(^{3}\) Model.

  1. The programs taken for our experimental studies are amenable to concolic testing; this is the first threat to the proposed work.

  2. The second threat to validity relates to shortcomings of the symbolic execution engine used in JCUTE. Some possible shortcomings of a concolic tester are given below:

    • It may not scale up if the input domain is large.

    • If the program exhibits non-deterministic behavior, it may follow a different path than the intended one. This can lead to non-determinism in the search and poor coverage.

    • Even in a deterministic program, a number of factors may lead to poor coverage, including imprecise symbolic representations, incomplete theorem proving, and failure to search the most fruitful portion of a large or infinite path tree.

  3. The third threat to validity is that JCUTE and JouleMeter are available for the Windows 7 operating system only.

  4. The proposed program transformation using the Quine–McCluskey (QM) technique works well for small and moderate sized programs, but for very large programs with many alternative paths the time and space complexity increase significantly. Compared with the Karnaugh map (K-Map), QM gives better results because a K-Map can handle only up to four or five variables in a decision, whereas QM can handle any number of variables present in a decision. For very large programs, however, we have to bear this overhead with respect to space and time.

6 Comparison with related work

In this section, we compare our proposed work with some existing related work.

Li et al. [7] proposed an approach for Energy-Directed Test Suite Optimization (EDTSO). EDTSO is a test suite minimization approach that allows software testers to generate energy-efficient, minimized test suites. Their technique encodes the minimization problem as an integer linear programming problem. In our proposed work, we achieve higher code coverage and compute the energy consumption of the whole process.

Amsel and Tomlinson [8] have proposed a tool called Green Tracker. Green Tracker estimates the energy consumption of software in order to help the concerned users in taking suitable decisions about the software they use. Amsel and Tomlinson [8] aimed at creating awareness about the potential environmental hazards associated with software and improving software engineering techniques to reduce the energy consumption of software. Like Amsel and Tomlinson [8], we also intend to spread the awareness on Green Software Testing. In our work, we have discussed the energy consumption analysis of a testing tool.

Dick et al. [9] presented a method to compute and rate software-induced energy consumption of stand-alone software applications on desktop computers as well as interactive, transaction-based software applications on servers. They intended to support software developers, administrators, purchasers, and users in making informed decisions on software architecture and implementation as well as on the software products they use or plan to use. In our proposed approach, we measure the performance of a software testing tool by computing several metrics. Like Dick et al. [9], we also intend to advise software testers to choose energy-efficient software testing tools, i.e. tools that achieve the same objectives as others but with less power and less energy.

Chen et al. [10] proposed and developed a tool called StressCloud to measure the performance and energy consumption for cloud based applications. In our proposed work, we also measure the energy consumption, but for a software testing tool J\(^3\)-Model through Green J\(^3\)-Model.

Capra et al. [11] developed a hardware kit consisting of ammeter clamps and a workload simulator tool. The ammeter clamps were used to measure the energy absorbed by the server machine, and the workload simulator generated benchmark workloads for different categories of applications. Capra et al. [11] measured the energy efficiency of software applications. In our proposed work, we have developed the Green-J\(^3\) Model, which takes a software testing tool and measures its energy consumption, much as Capra et al. [11] did for applications.

Brown and Reams [12] discussed several approaches to achieve energy efficiency in computing systems. Their work is essentially a literature survey on power and energy consumption. In our work, we present a detailed experimental study of the energy consumption of a software testing tool.

Saxe [13] proposed and developed a tool called PowerTOP. Their work explains e-waste and identifies which application is responsible for it. In our proposed approach, we compute the energy consumption of a testing tool. Once our tool can compute the energy consumption of different testing tools, it can suggest the most energy-efficient one.

Das et al. [14] and Godboley et al. [15-18] have proposed several code transformation techniques that help achieve higher code coverage. They used the CREST tool as the concolic tester to generate test cases and developed a coverage analyzer that accepts a program along with test cases as input and produces the MC/DC% as output. In our work, we follow the same core idea but use a different technique, i.e. the Java Program Code Transformer (JPCT). In addition, we spread awareness of green analysis.

Bokil et al. [19] proposed and developed the AutoGen tool, which produces test cases and computes code coverage. Comparing the execution time of automated testing tools with a manual testing strategy, they found that automated testing tools save one third of the time. In our work, we also compute the execution time of the process.

Burnim et al. [20], Kim et al. [21, 22], Majumdar et al. [23], Kim et al. [24], Sen et al. [5], and Kim et al. [25] have proposed and developed several concolic testers. Some of them have been implemented in C language and some of them have used Java language to implement the tool. Some of the works were implemented in distributed environment. In our proposed work, we have used jCUTE as the concolic tester to produce test cases. It may be noted that jCUTE is compatible with Java.

Hoing et al. [26] and Godboley et al. [27] have proposed and developed some tools that spread the awareness on energy consumption analysis of software applications. Our proposed work also targets the same objective to spread the awareness regarding the energy consumption.

Table 7 summarizes the comparison with related work. The third column of Table 7 presents the frameworks developed and used by the various authors, and the fourth column provides a brief description of each research work. We can observe from Table 7 that the authors listed in sl. no. 1 to 7 based their research on power consumption and energy consumption; these works help spread awareness of Green IT and Green Software Engineering. The authors listed in sl. no. 8 to 19 address concolic testing and coverage-based testing. The last row of Table 7 shows our proposed work. Here, we present the Green-J\(^{3}\) Model, which is based on concolic testing, branch coverage, and Green Software Engineering. The Green-J\(^{3}\) Model helps raise awareness about the importance of energy consumption in software testing.

Table 7 Comparison of different works on concolic and coverage based testing

Table 8 presents the comparison of different characteristics of the existing approaches. These characteristics are test cases, coverage percentage, time constraints, speed, power consumption, and energy consumption. Among all the existing works, only Li et al. [7] analyzed energy consumption for software testing. Please note that the authors listed in sl. no. 2–7 proposed approaches only for energy and power consumption and did not focus on software testing techniques, whereas the authors listed in sl. no. 8–19 focused only on software testing and did not consider Green IT and Green Software Engineering. The last row in Table 8 shows our proposed work, which covers all the characteristics mentioned in Table 8: software testing as well as Green IT and Green Software Engineering.

Table 8 Characteristics of different approaches on concolic and coverage based testing

7 Conclusion and future work

We proposed a tool named the Green-J\(^{3}\) Model to measure the energy consumption of modified condition/decision coverage testing using concolic testing. We discussed the Green-J\(^{3}\) Model along with its overview, block diagram, and algorithmic description in detail. The experimental results show that the proposed test case generation approach achieved better MC/DC in comparison to the existing methods. The Green-J\(^{3}\) Model achieved a 21.32% average enhancement in MC/DC for forty-five programs. The total energy consumption of the whole experimental process is 747.51 kJ.

In the future, we will address some of the significant threats to validity identified in our work. We will develop other code transformers and experiment with Java programs to achieve higher MC/DC compared to existing approaches. It is also important to compute the energy consumption of each module of a software testing tool; therefore, in future work we will extend our work by computing individual energy consumptions. We will also compare the energy consumption of different software testing techniques in concolic and MC/DC testing.