1 Introduction

As the fundamental means by which parallelizing compilers detect parallelism and optimize programs, dependence testing techniques have attracted attention since the very beginning of parallelizing compilation. The semantics of the programs generated by a parallelizing compiler are preserved only when the Fundamental Theorem of Dependence [1] holds, meaning the generated code computes the same result as the original program. The power of data dependence testing techniques therefore heavily impacts the ability of a parallelizing compiler to exploit the parallelism of programs.

To transform a serial program into a parallel one, compiler developers first proposed static parallelizing compilers. A static parallelizing compiler is usually composed of three parts: (1) it uses data dependence testing techniques to determine the dependence relationships of the input program, (2) it applies high-level program transformations and optimizations according to those dependences, and (3) it generates parallelized code accordingly. Throughout this process, dependence testing serves as the underlying principle of the compiler.

As both the dependence analysis and the program transformations happen at compile time, a static parallelizing compiler determines dependences by answering whether they exist between two subscripted references to the same array in a loop nest. Methods of this kind are referred to as data dependence testing techniques. In full generality, dependence testing is an undecidable problem, since an array subscript can be an arbitrary expression of the index variables of its enclosing loops. As a result, a data dependence testing technique has to be conservative, meaning that it must assume a dependence exists whenever it cannot prove the absence of one. Earlier works mainly studied data dependence testing methods for linear subscripts; since the mid-1990s, researchers have also focused on testing techniques for nonlinear subscripts. The former are referred to as linear data dependence testing techniques, the latter as nonlinear ones.

With the further development of computer architectures, static parallelizing compilers could no longer meet programmers' performance requirements for parallel programs. As a result, dynamic speculative parallelizing compilers and static–dynamic hybrid compilers were introduced. Since they can obtain runtime information that reflects the relationship between two array references more precisely, dynamic compilers determine dependences by answering whether those dependences exist between two subscript values under representative inputs.

We can conclude that linear dependence testing algorithms are better developed than nonlinear ones, for two reasons. First, as the empirical study in [2] describes, linear subscripts account for more than 90% of the subscripts handled by the dependence analysis system of PFC (Parallel Fortran Converter) [3], illustrating that solving the linear problem matters more than the nonlinear one; this is also supported by much of the existing literature. Second, static parallelizing compilers have been developed for decades, yet compiler developers have recently paid more attention to speculative parallelizing compilers, so we argue that static parallelizing compilation has encountered a bottleneck.

As a result, how to evaluate the various existing data dependence testing techniques, so that compiler developers can better compare their relative power, has become a challenging problem. Compiler designers would not only like to know their power and availability, but also desire to make their compilers disprove as many dependences as possible. In the early 1990s, some researchers studied and examined the impact of data dependence analysis in practice. Shen et al. [4] performed a preliminary empirical study on some numerical packages, including Linpack and Eispack, comparing the relative power of several dependence testing techniques. Based on those packages and the Perfect Benchmarks, Petersen and Padua [5] performed an experimental evaluation of a proposed sequence of dependence testing methods. Psarris and Kiriakopoulos [6] showed the tradeoffs between power and efficiency, and the time complexities of three representative techniques. All of these studies gave empirical or experimental evaluation results, but none of them offered a theoretical comparison of the power of data dependence testing techniques.

To enhance the ability of parallelizing compilers to determine dependences in programs, some later works combined a suite of testing methods to improve the power of dependence testing. Goff et al. [2] divided linear subscripts into zero index variable (ZIV), single index variable (SIV), and multiple index variable (MIV) classes, and proposed a sequence of dependence testing methods accordingly, improving their compiler's ability to detect dependences by combining these tests. Maydan et al. [7] proposed a subscript-pattern-based dependence testing suite; if all of its tests fail, Fourier–Motzkin elimination [8] is used as a time-consuming backup strategy. In fact, all these studies were implicitly seeking a complete set of dependence testing techniques, although none of them stated this purpose explicitly. However, they did not construct such a complete set, and only showed experimental results instead.

To theoretically evaluate the power of existing testing techniques, as well as to find their upper bounds and complete sets, we propose in this paper a formal system, obtained by extending the first-order predicate logic system with domain-specific axioms, to evaluate the relative power of different linear data dependence testing techniques. We not only give a theoretical evaluation of data dependence testing techniques, but also identify upper bounds and minimum complete sets of these techniques. More specifically, the contributions of this work are as follows:

(1) We propose a formal system K-DT, designed to evaluate the relative power of existing linear data dependence testing techniques by adding the necessary axioms to the first-order predicate logic formal system. The soundness, adequacy, and consistency of K-DT are proved, respectively, which lays the foundation for proving theorems in K-DT.

(2) We evaluate all the linear data dependence testing techniques by proving their corresponding theorems in K-DT. The predicates of the theorems are domain-specific and bear the names of these methods.

(3) We show some upper bounds and minimum complete sets of linear data dependence testing techniques in different cases, and prove the completeness of these sets.

The paper is organized as follows. Section 2 introduces some basic definitions in data dependence testing area and illustrates our motivation. Section 3 presents the formal system K-DT, and proves its soundness, adequacy, and consistency, followed by some additional deductive rules. Section 4 shows the proofs of the theorems corresponding to the data dependence testing techniques. The upper bounds and minimum complete sets in different cases are presented in Sect. 5. Section 6 reviews related work, followed by the conclusion in Sect. 7.

2 Motivation

Dependence testing determines whether dependences exist between two subscripted references to the same array in a loop nest [1]. It is a fundamental component of a parallelizing compiler. In most cases, the dependence testing problem can be illustrated by the code shown below, where a dependence testing technique is used to determine whether statement \(\varvec{S}_2\) depends on statement \(\varvec{S}_1\), or vice versa.

    DO i1 = L1, U1
      ...
      DO in = Ln, Un
        S1:  A(h1(i1,...,in), ..., hm(i1,...,in)) = ...
        S2:  ...  = A(g1(i1,...,in), ..., gm(i1,...,in))
      ENDDO
      ...
    ENDDO

To answer this question, a compiler must figure out whether these two statements access the same memory location. Since \(\varvec{S}_1\) writes to array A while \(\varvec{S}_2\) reads a value from the same array, the problem reduces to determining whether the subscripts in these statements can be equal to each other. In the absence of control dependences, there exist dependences between these two statements if and only if the following system of dependence equations is satisfied:

$$\begin{aligned} \left\{ \begin{array}{c} f_{1}(i_{1}, \ldots , i_{n}, j_{1}, \ldots , j_{n})\equiv h_{1}(i_{1}, \ldots , i_{n})-g_{1}(j_{1}, \ldots , j_{n}) = 0 \\ f_{2}(i_{1}, \ldots , i_{n}, j_{1}, \ldots , j_{n})\equiv h_{2}(i_{1}, \ldots , i_{n})-g_{2}(j_{1}, \ldots , j_{n}) = 0 \\ \vdots \\ f_{m}(i_{1}, \ldots , i_{n}, j_{1}, \ldots , j_{n})\equiv h_{m}(i_{1}, \ldots , i_{n})-g_{m}(j_{1}, \ldots , j_{n}) = 0 \\ \end{array}\right. \end{aligned}$$
(1)

Therefore, for a pair of subscripted references, the essence of dependence testing is to prove or disprove the existence of solutions to the equation system (1) in its feasible region, where the feasible region is defined by the loop bounds

$$\begin{aligned} {{\varvec{R}}}=\{L_k\le i_k, j_k\le U_k\vert 1\le k\le n, i_k, j_k\in \mathbb Z\} \end{aligned}$$
(2)

in which \({L}_k\) and \({U}_k\) can depend on \({i}_1,\ldots ,{i}_{k-1}\).

If a testing technique can prove the absence of solutions, then no dependences exist between these two statements. This is in fact the most desired result of a dependence testing algorithm, since in this case the program can be parallelized by the compiler. Otherwise, the dependence testing algorithm will try to characterize the possible dependences in some manner, usually as a minimal complete set of distance [9] and direction vectors [10, 11].

A data dependence testing algorithm has to be conservative: if it is not able to disprove the dependences, it must assume they exist. In some cases, there is an additional restriction due to the direction vector

$$\begin{aligned} i_kD_kj_k(1\le k\le n) \end{aligned}$$
(3)

where \({D}_k\) represents the kth element of the direction vector.

So a dependence testing technique analyzes whether the system of Eq. (1) has solutions under the restrictions (2) and (3), which form the feasible region, i.e., the region of interest. Existing testing methods typically determine whether the system of Eq. (1) has feasible solutions by relaxing restriction (2), thereby extending the feasible region. As a result, if a testing algorithm can prove the absence of feasible solutions in the extended region, then the original problem is definitely unsolvable in the region of interest. However, for parallelizing compiler designers, how to compare the power of data dependence testing methods, so that compilers can prove or disprove more dependences, is a challenging problem. Existing studies pursued this goal through empirical studies or experimental evaluations; to the best of our knowledge, no theoretical research on this issue has been published so far. Therefore, comparing their power theoretically and finding a minimum complete set of existing dependence tests is a problem demanding a prompt solution.

As a matter of fact, restriction (2) constructs the feasible region R of the system of Eq. (1), while restriction (3) only defines the lexicographical order of the variables of the system within R. As described in the last paragraph, existing dependence testing methods determine whether the system of Eq. (1) has a feasible solution in an extended region R'. In the absence of restriction (2), the feasible region R extends to the whole real number field; dropping only the integer constraint or only the bound constraint yields the bounded real number field or the unbounded integer number field, respectively, where the bounds are given by the upper and lower bounds of each loop index. We therefore classify the region R into four categories: the unbounded real number field UR, the unbounded integer number field UI, the bounded real number field BR, and the bounded integer number field BI. Their relationship is shown in Fig. 1.

Since restriction (2) represents a set of integers bounded by the loop index bounds, the region of interest R is equal to BI. Our purpose is to compare the power of different dependence testing techniques, and each testing algorithm corresponds to one kind of region in Fig. 1, so we can compare their relative power through the inclusion relationships among their corresponding regions.

Fig. 1 The relationship between the four feasible regions (\(BI\subset BR\subset UR\) and \(BI\subset UI\subset UR\))

At this point, for any two dependence tests D-test1 and D-test2, we try to answer the following question: can D-test2 assure there is no dependence whenever D-test1 disproves the dependences? If we view D-test1 and D-test2 as two predicates in the first-order formal system K\(_\mathcal{L}\) [12], then we only need to prove whether D-test1\(\rightarrow \)D-test2 is a theorem in K\(_\mathcal{L}\). Our intention is to construct such a formal system and prove each theorem of the form D-test1\(\rightarrow \)D-test2 in it.

3 The K-DT system

A data dependence testing technique usually returns one of three results: yes, no, or maybe. Accordingly, one can easily map these results to intuitionistic logic. If a technique returns no, saying there are no dependences, the truth value of its corresponding proposition is true. We define it this way because the testing result is always hoped to be no, so that the compiler can reorder the program safely and more compile-time optimizations can be applied. If it returns yes, saying there are dependences, the truth value of its corresponding proposition is false. If it returns maybe, saying it cannot determine whether there are dependences, the truth value of its corresponding proposition is unknown.

When parallelizing programs, safety always comes before performance. More specifically, when a parallelizing compiler translates serial programs into parallel code, preserving the semantics of the generated code is more important than boosting the speedup. Thus all data dependence testing techniques have to be conservative: when returning maybe, they must conservatively assume the existence of dependences, lest the compiler perform optimizations that would violate potential dependences. Therefore, when a testing algorithm cannot confirm whether there are dependences, we treat it as if it had returned yes, and the truth value of its corresponding proposition is also false. By this means, we transform the problem from intuitionistic logic into classical logic, as sketched below.
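This two-valued reading can be made concrete with a small sketch (ours; the type and function names are purely illustrative, not from any compiler):

    from enum import Enum

    class TestResult(Enum):
        NO = "no dependence"       # independence proved
        YES = "dependence"         # dependence proved
        MAYBE = "cannot determine"

    def truth_value(result: TestResult) -> bool:
        # Classical truth value of a test's proposition in K-DT:
        # only a definite "no" is true; "yes" and "maybe" are both
        # conservatively mapped to false.
        return result is TestResult.NO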

We base this transformation into classical logic on the following considerations. First, a result of no from a dependence testing algorithm is reliable, while some methods do not specifically distinguish the remaining two cases; if the problem were viewed as intuitionistic, it would be difficult to formalize the remaining truth values, i.e., false and unknown. Second, if we solve the problem in classical logic, the truth value of a proposition is either true or false, allowing us to use reductio ad absurdum and simplifying the deduction of theorems. Finally, a predicate calculus based on classical logic is more concise than one based on intuitionistic logic.

3.1 Constructing the K-DT system

We attempt to solve the problem based on the first-order formal system K\(_\mathcal{L}\), because its soundness, adequacy, and consistency have been proved. However, there are only six basic axioms in K\(_\mathcal{L}\), none of which has a definite relationship with the theorems we would like to prove, so K\(_\mathcal{L}\) alone is not suitable for our purpose. It is necessary to add some axioms to form a new formal system. Since this system is based on K\(_\mathcal{L}\) and designed to evaluate data dependence testing techniques, we name it K-DT.

Before adding the necessary axioms to K\(_\mathcal{L}\), we need to append some domain-specific predicate letters to the system, including GCD, Banerjee, I, EGCD, \(\lambda \), Power, Omega, Delta, \(suff\_test\), \(nec\_test\), Solvable, Include, \(F\_M\), \(complete\_test\), \(single\_test\), and \(DT\_test\). \(Suff\_test\), representing a dependence testing algorithm that is a sufficient condition for deciding the system of Eq. (1), and \(nec\_test\), representing one that is a necessary condition, are unary predicates. Solvable and Include are binary predicates: the former states that the tested system of equations is solvable in some feasible region, while the latter states that one feasible region includes another. \(Complete\_test\), \(single\_test\), and \(DT\_test\) are unary predicates that will be explained later, while the remaining letters represent the individual dependence testing techniques.

Let \(\mathcal L_{DT}\) be a first-order language; we define the formal deductive system K-DT by the following axioms.

(D1) \(A\rightarrow (B\rightarrow A)\)

(D2) \((A\rightarrow (B\rightarrow C))\rightarrow ((A\rightarrow B)\rightarrow (A\rightarrow C))\)

(D3) \((\sim B\rightarrow \sim A)\rightarrow (A\rightarrow B)\)

(D4) \((\forall e_i)A\rightarrow A\)   (if \(e_i\) does not occur free in A)

(D5) \((\forall e_i)A\rightarrow A(t)\)   (if \(A(e_i)\) is a wf. of \(\mathcal L_{DT}\) and t is a term in \(\mathcal L_{DT}\) which is free for \(e_i\) in \(A(e_i)\))

(D6) \((\forall e_i)(A\rightarrow B)\rightarrow (A\rightarrow (\forall e_i)B)\)   (if A contains no free occurrence of the variable \(e_i\))

(D7) \(Suff\_test(e)\rightarrow \sim Solvable(e, r)\)

(D8) \(\sim Solvable(e, r)\rightarrow Nec\_test(e)\)

(D9) \(Solvable(u(e_1, e_2), r)\rightarrow Solvable(e_1, r)\)

(D10) \(Include(r_2, r_1)\rightarrow (Solvable(e, r_1)\rightarrow Solvable(e, r_2))\)

Axioms (D1)–(D6) correspond to Axioms (K1)–(K6) of K\(_\mathcal{L}\). Axiom (D7) states that the system of equations is unsolvable in a feasible region r provided a sufficient-condition dependence testing algorithm returns no. Axiom (D8) says that if the system of equations is unsolvable in a feasible region r, a necessary-condition dependence test will certainly return a result saying there is no dependence. Axiom (D9) states that if a system of equations composed of \(e_1\) and \(e_2\) is solvable in a feasible region r, then \(e_1\) is also solvable in r. Axiom (D10) shows that if a feasible region \(r_2\) contains \(r_1\) and the system of equations is solvable in \(r_1\), it is also solvable in \(r_2\).
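As a mechanized reading of the domain-specific axioms, the following Lean sketch is ours and purely illustrative: the types Eqn and Region are hypothetical, and r is universally quantified here for simplicity, whereas in K-DT it stands for the feasible region attached to the particular test.

    -- Illustrative transcription of axioms (D7)–(D10); not from the paper.
    axiom Eqn : Type        -- systems of dependence equations
    axiom Region : Type     -- feasible regions (UR, UI, BR, BI, ...)

    axiom SuffTest : Eqn → Prop
    axiom NecTest  : Eqn → Prop
    axiom Solvable : Eqn → Region → Prop
    axiom Include  : Region → Region → Prop
    axiom union    : Eqn → Eqn → Eqn      -- the u(e1, e2) of (D9)

    axiom D7  : ∀ (e : Eqn) (r : Region), SuffTest e → ¬ Solvable e r
    axiom D8  : ∀ (e : Eqn) (r : Region), ¬ Solvable e r → NecTest e
    axiom D9  : ∀ (e₁ e₂ : Eqn) (r : Region),
                  Solvable (union e₁ e₂) r → Solvable e₁ r
    axiom D10 : ∀ (r₁ r₂ : Region) (e : Eqn),
                  Include r₂ r₁ → Solvable e r₁ → Solvable e r₂

    -- Chaining (D7) with (D8): a sufficient-condition test implies a
    -- necessary-condition test over the same region.
    theorem suff_imp_nec (e : Eqn) (r : Region) : SuffTest e → NecTest e :=
      fun h => D8 e r (D7 e r h)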

Notice that K-DT also has the two deduction rules known as modus ponens and generalization, the same as in K\(_\mathcal{L}\).

3.2 Properties of the K-DT system

Before carrying out deductions of its theorems, we need to prove some properties of the K-DT system. The following questions should be considered: (1) whether K-DT is sound; (2) whether K-DT is adequate; (3) whether K-DT is consistent. Only when these properties are proved can theorem proofs based on K-DT be convincing. We use \(\vdash A\) to denote that A is a theorem in K-DT, and \(\models A\) to denote that A is a logically valid well-formed formula (wf.) of \(\mathcal L_{DT}\).

Theorem 1

(The Soundness Theorem for K-DT) For any wf. A of \(\mathcal L_{DT}\), if \(\vdash A\) then A is logically valid.

Proof

By induction on the number s of steps in a proof of A. If s = 1, meaning A has a one-step proof, then A is an axiom of K-DT, and each axiom of K-DT is logically valid. Suppose that \(s = n (n>1)\) and that all theorems of K-DT with proofs of fewer than n steps are logically valid. Since A appears in a proof, only two cases can occur: (1) A is an axiom; (2) A follows from previous wfs. In case (1), A is logically valid, as above. In case (2), A is also logically valid, as shown in [12]. This completes the proof by induction.

Theorem 2

(The Adequacy Theorem for K-DT) For any wf. A of \(\mathcal L_{DT}\), if \(\models A\) then A is a theorem of K-DT.

Proof

Only the predicate letters of K-DT differ from those of K\(_\mathcal{L}\); all the wfs. of K-DT are defined in the same way as those of K\(_\mathcal{L}\). As K\(_\mathcal{L}\) is adequate, all the wfs. without domain-specific predicates are theorems of K-DT. For the wfs. with domain-specific predicates, the proof is similar to that of the Adequacy Theorem for K\(_\mathcal{L}\) in [12]. So in each case, if \(\models A\) then A is a theorem of K-DT.

Theorem 3

(The Consistency Theorem for K-DT) For no wf. A are both A and \(\sim A\) theorems of K-DT.

Proof

Suppose that \(\vdash A\) and \(\vdash \sim A\) for some wf. A of \(\mathcal L_{DT}\). Then both A and \(\sim A\) are logically valid wfs. by Theorem 1, i.e., A and \(\sim A\) are both tautologies, which violates the law of the excluded middle of classical logic. So K-DT must be consistent.

Since soundness and adequacy have been proved, we can carry out theorem proofs in K-DT. However, the proofs would become very tedious with only the modus ponens and generalization rules, so we introduce the following rules to simplify the deductive process.

Theorem 4

(The Deduction Theorem for K-DT) Let A and B be wfs. of \(\mathcal L_{DT}\) and let \(\mathcal T\) be a set (possibly empty) of wfs. of \(\mathcal L_{DT}\). If \(\mathcal {T}\cup \{A\}\vdash B\), and the deduction contains no application of Generalization involving a variable which occurs free in A, then \(\mathcal {T}\vdash (A\rightarrow B)\).

Proof

As noted in the proof of Theorem 2, only the predicate letters of K-DT differ from those of K\(_\mathcal{L}\), and all the wfs. of K-DT are defined as in K\(_\mathcal{L}\). So the Deduction Theorem for K-DT is the same as that for K\(_\mathcal{L}\), which has been proved in [12].

Theorem 5

(The Hypothetical Syllogism Theorem for K-DT) Let A, B and C be wfs. of \(\mathcal L_{DT}\). If \(\vdash (A\rightarrow B)\) and \(\vdash (B\rightarrow C)\) then \(\vdash (A\rightarrow C)\).

Proof

1. \(A\rightarrow B\qquad (Assumption)\)
2. \(B\rightarrow C\qquad (Assumption)\)
3. \((B\rightarrow C)\rightarrow (A\rightarrow (B\rightarrow C))\qquad (D1)\)
4. \(A\rightarrow (B\rightarrow C)\qquad (2, 3, MP)\)
5. \((A\rightarrow (B\rightarrow C))\rightarrow ((A\rightarrow B)\rightarrow (A\rightarrow C))\qquad (D2)\)
6. \((A\rightarrow B)\rightarrow (A\rightarrow C)\qquad (4, 5, MP)\)
7. \(A\rightarrow C\qquad (1, 6, MP)\)

4 Theorem proofs based on the K-DT system

When determining dependences of multi-dimensional subscripted references, a subscript position is said to be separable if its indexes do not occur in the other subscripts [13, 14]. If two subscript positions contain the same index, they are coupled [15]. Which dependence testing techniques a compiler uses is decided by whether an index of one subscript occurs in the others.

Mathematically speaking, if the subscripts are separable, the system of Eq. (1) is unsolvable if and only if at least one of its equations is unsolvable. If the subscripts are coupled, the unsolvability of the system of Eq. (1) is only a necessary condition for the unsolvability of each of its equations: an unsolvable equation makes the whole system unsolvable, but the system may be unsolvable even though every equation is solvable individually. In view of this, we divide existing linear dependence testing techniques into those for separated and those for coupled subscripts, as the example below illustrates.
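For instance (an illustrative pair of references, not drawn from the cited benchmarks), \(A(i_1+1,\, i_2)\) and \(A(i_1,\, i_2+1)\) are separable, since each index occurs in only one subscript position, so each dependence equation can be tested on its own. By contrast, in \(A(i_1+1,\, i_1+i_2)\) and \(A(i_1,\, i_1+i_2+1)\) the index \(i_1\) occurs in both positions, so the positions are coupled and the equations must be considered together: solving \(i_1' = i_1+1\) from the first position constrains the second as well.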

4.1 Theorems of dependence tests for separated subscripts

For separable subscripts, the system of Eq. (1) is unsolvable if and only if at least one of its equations is unsolvable, as mentioned above. A dependence testing method for separated subscripts therefore seeks solutions of a single equation rather than of the whole system. Any equation of the system (1) can be written as

$$\begin{aligned} f(i_1, \ldots , i_n, j_1, \ldots , j_n)\equiv h(i_1, \ldots , i_n)-g(j_1, \ldots , j_n) = 0 . \end{aligned}$$
(4)

Without loss of generality, we can assume h and g have the following forms

$$\begin{aligned} h(i_{1}, \ldots , i_{n})= & {} a_{1}i_{1}+a_{2}i_{2}+\cdots +a_{n}i_{n}+a_{0} \end{aligned}$$
(5)
$$\begin{aligned} g(j_{1}, \ldots , j_{n})= & {} b_{1}j_{1}+b_{2}j_{2}+\cdots +b_{n}j_{n}+b_{0} . \end{aligned}$$
(6)

Hence

$$\begin{aligned} f(i_{1}, \ldots , i_{n}, j_{1}, \ldots , j_{n})\equiv a_{0}-b_{0}+a_{1}i_{1}-b_{1}j_{1}+\cdots +a_{n}i_{n}-b_{n}j_{n}=0. \end{aligned}$$
(7)

We first analyze the GCD (greatest common divisor) test [16]. The GCD test uses a fundamental theorem about linear Diophantine equations [17] to determine whether an equation admits dependences. Rearranging Eq. (7) yields

$$\begin{aligned} a_{1}i_{1}-b_{1}j_{1}+\cdots +a_{n}i_{n}-b_{n}j_{n}=b_{0}-a_{0}. \end{aligned}$$
(8)

The fundamental theorem is the following.

Lemma 1

(GCD test) Equation (8) has a solution if and only if \(gcd(a_1,\ldots ,a_n,b_1,\ldots ,b_n)\) divides \(b_0-a_0\).

That is to say, if the greatest common divisor of all the coefficients of the variables does not divide the right-hand side of Eq. (8), no solution exists anywhere, and hence there are no dependences; otherwise the equation has solutions somewhere. The GCD test is a necessary and sufficient condition, and its feasible region is UI. A sketch of the test appears below.
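The following Python sketch (ours; the function name and conventions are illustrative) checks the GCD condition of Lemma 1 for one equation in the form of Eq. (8):

    from math import gcd
    from functools import reduce

    def gcd_test(a, b, a0, b0):
        # GCD test for  a1*i1 - b1*j1 + ... + an*in - bn*jn = b0 - a0  (Eq. (8)).
        # a and b hold the coefficients (a1..an) and (b1..bn).
        # Returns False when independence is proved, True when an integer
        # solution exists somewhere in UI (a dependence must be assumed).
        g = reduce(gcd, (abs(x) for x in list(a) + list(b)), 0)
        rhs = b0 - a0
        if g == 0:                    # all coefficients are zero
            return rhs == 0
        return rhs % g == 0

    # Example: A(2*i + 1) written, A(2*j) read gives 2i - 2j = -1; since
    # gcd(2, 2) = 2 does not divide -1, gcd_test([2], [2], 1, 0) is False.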

The Banerjee test [18] is a dependence testing algorithm whose feasible region is BR. Its main idea is to calculate the extreme values of the left-hand side of Eq. (8) and determine whether the right-hand side lies within the range bounded by these extreme values. When determining a dependence with direction vector \(D = (D_1, \ldots , D_n)\), the Banerjee test works based on the following.

Lemma 2

(Banerjee Inequality) There exists a real solution to Eq. (8) for direction vector \(D = (D_1, \ldots , D_n)\) if and only if the following two-sided inequality holds:

$$\begin{aligned} \sum \limits _{k=1}^{n}H_{k}^{-}D_k\le b_0-a_0\le \sum \limits _{k=1}^{n}H_{k}^{+}D_k \end{aligned}$$

where \(H_{k}^{+}\) and \(H_{k}^{-}\) denote the maximum and minimum values of the kth left-hand term, computed according to \(D_k\). A component \(D_k\) can be ‘<’, ‘=’, ‘>’, or ‘*’, with ‘*’ meaning that this component is unconstrained; the definitions for each case can be found in [1, 18]. The Banerjee test is a necessary and sufficient condition and its feasible region is BR.
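As an illustration, the following sketch (ours) evaluates the Banerjee inequality in the special case where every direction component is ‘*’ and each \(i_k, j_k\) ranges over constant bounds \([L_k, U_k]\); the constrained directions need the \(H_k^{\pm}\) definitions of [1, 18]:

    def pos(x): return max(x, 0)     # positive part: x == pos(x) - neg(x)
    def neg(x): return max(-x, 0)    # negative part

    def banerjee_star(a, b, a0, b0, L, U):
        # Banerjee test for Eq. (8) when every direction component is '*',
        # i.e., i_k and j_k vary independently over [L[k], U[k]].  Returns
        # True if b0 - a0 lies between the extreme values of the left-hand
        # side (a real solution may exist), False if independence is proved.
        lo = hi = 0
        for ak, bk, Lk, Uk in zip(a, b, L, U):
            hi += pos(ak) * Uk - neg(ak) * Lk + neg(bk) * Uk - pos(bk) * Lk
            lo += pos(ak) * Lk - neg(ak) * Uk + neg(bk) * Lk - pos(bk) * Uk
        return lo <= b0 - a0 <= hi

For the same pair as above, banerjee_star([2], [2], 1, 0, [1], [100]) checks whether \(-1\in [-198, 198]\) and returns True, so the Banerjee test alone cannot disprove this dependence even though the GCD test can, illustrating that the two tests are incomparable.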

Neither the feasible region of the GCD test nor that of the Banerjee test is the region of interest BI. Considering this problem, Kong et al. [19] proposed an improved dependence testing algorithm called the I test. The I test abstracts Eq. (8) as an interval equation

$$\begin{aligned} a_{1}I_{1}+a_{2}I_{2}+\cdots +a_{s}I_{s}=[L, U] \end{aligned}$$
(9)

and supposes there are \(s (s=2n)\) variables in the equation, with coefficients \(a_k (1\le k\le s)\); L and U are integers. The decision process can be described as follows. If \(\vert a_s \vert >1\), reduce Eq. (9) with Lemma 3. Then repeatedly apply Lemma 4 to eliminate variables until only one variable is left on the left-hand side; at that point, determine whether dependences exist according to the bounds of the loop indexes and the GCD test. While applying Lemma 4, the algorithm also reports whether the Banerjee test can prove there are no dependences.

Lemma 3

Let \(d=gcd(a_1, a_2, \ldots , a_s)\). Eq. (9) is \((M_1, N_1; M_2, N_2; \cdots ; M_s, N_s)\)-integer solvable iff the interval equation \((a_1/d)I_1+(a_2/d)I_2+\cdots +(a_s/d)I_s=[ \lceil L/d\rceil , \lfloor U/d\rfloor ]\) is \((M_1, N_1; M_2, N_2; \cdots ; M_s, N_s)\)-integer solvable, where each of \(M_k\) and \(N_k\) is either an integer or the distinguished symbol “*”, and \(M_k\le N_k(1\le k\le s)\) when both are integers.

Lemma 4

If \(\vert a_s \vert \le U-L+1\), then Eq. (9) is \((M_1, N_1; M_2, N_2; \cdots ; M_s, N_s)\)-integer solvable iff the interval equation \(a_1I_1+a_2I_2+\cdots +a_{s-1}I_{s-1}=[L-a^{+}_{s}N_s+a^{-}_{s}M_s, U-a^{+}_{s}M_s+a^{-}_{s}N_s]\) is \((M_1, N_1; M_2, N_2; \cdots ; M_{s-1}, N_{s-1})\)-integer solvable, where \(a^{+}_{s}\) and \(a^{-}_{s}\) represent the positive and negative parts of \(a_s\), respectively.

Lemma 4 reduces the number of variables of Eq. (9) and defines new bounds for the resulting interval equations via the positive and negative parts. The I test is also a necessary and sufficient condition, with feasible region BI. The sketch below illustrates the reduction loop.
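The following sketch (ours, and deliberately simplified: variables are eliminated right to left, unbounded ‘*’ variables cause a conservative answer, and the Banerjee fallback is omitted) shows how Lemmas 3 and 4 drive the reduction of Eq. (9):

    from math import gcd
    from functools import reduce

    def i_test(coeffs, interval, var_bounds):
        # I test sketch for  a1*I1 + ... + as*Is = [L, U]   (Eq. (9)).
        # var_bounds[k] = (Mk, Nk) are the integer bounds of variable Ik
        # (None encodes the unbounded symbol '*').  Returns False when
        # independence is proved, True when a dependence must be assumed.
        coeffs, bounds = list(coeffs), list(var_bounds)
        L, U = interval
        while coeffs:
            d = reduce(gcd, (abs(c) for c in coeffs))
            if d == 0:                    # all coefficients are zero
                return L <= 0 <= U
            # Lemma 3: divide through by d, shrinking the interval
            coeffs = [c // d for c in coeffs]
            L, U = -((-L) // d), U // d   # integer ceil(L/d), floor(U/d)
            if L > U:
                return False              # empty interval: no dependence
            a = coeffs[-1]
            M, N = bounds[-1]
            if M is None or N is None or abs(a) > U - L + 1:
                return True               # Lemma 4 inapplicable: be conservative
            # Lemma 4: absorb variable Is into the interval
            ap, an = max(a, 0), max(-a, 0)
            L, U = L - ap * N + an * M, U - ap * M + an * N
            coeffs.pop(); bounds.pop()
        return L <= 0 <= U

    # Example: 2*i - 2*j = -1 with 1 <= i, j <= 100 becomes
    # i_test([2, -2], (-1, -1), [(1, 100), (1, 100)]) -> False (independent).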

To sum up, the properties and feasible regions of the GCD, Banerjee, and I tests are shown in Table 1. We can now prove the theorems whose predicates bear the names of these testing techniques in K-DT. As numerous theorems of K\(_\mathcal{L}\) have been proved in [12] and they are also theorems of K-DT, we label them with TK (for a Theorem of K\(_\mathcal{L}\)) in the following proofs.

Table 1 The properties and feasible regions of dependence tests for separated subscripts

Theorem 6

\((\forall e)(GCD(e)\rightarrow I(e)).\)

Proof

1. \(Include(UI, BI)\qquad (Tautology)\)
2. \(Include(UI,BI)\rightarrow (Solvable(e,BI)\rightarrow Solvable(e,UI))\qquad (D10)\)
3. \(Solvable(e,BI)\rightarrow Solvable(e,UI)\qquad (1, 2, MP)\)
4. \((Solvable(e,BI)\rightarrow Solvable(e,UI))\rightarrow (\sim Solvable(e,UI)\rightarrow \sim Solvable(e,BI))\qquad (TK)\)
5. \(\sim Solvable(e,UI)\rightarrow \sim Solvable(e,BI)\qquad (3, 4, MP)\)
6. \(GCD(e)\rightarrow \sim Solvable(e,UI)\qquad (D7)\)
7. \(GCD(e)\rightarrow \sim Solvable(e,BI)\qquad (6, 5, HS)\)
8. \(\sim Solvable(e,BI)\rightarrow I(e)\qquad (D8)\)
9. \((GCD(e)\rightarrow I(e))\qquad (7, 8, HS)\)
10. \((\forall e)(GCD(e)\rightarrow I(e))\qquad (Generalization)\)

Theorem 7

\((\forall e)(Banerjee(e)\rightarrow I(e)).\)

Proof

1. \(Include(BR, BI)\qquad (Tautology)\)
2. \(Include(BR,BI)\rightarrow (Solvable(e,BI)\rightarrow Solvable(e,BR))\qquad (D10)\)
3. \(Solvable(e,BI)\rightarrow Solvable(e,BR)\qquad (1, 2, MP)\)
4. \((Solvable(e,BI)\rightarrow Solvable(e,BR))\rightarrow (\sim Solvable(e,BR)\rightarrow \sim Solvable(e, BI))\qquad (TK)\)
5. \(\sim Solvable(e,BR)\rightarrow \sim Solvable(e,BI)\qquad (3, 4, MP)\)
6. \(Banerjee(e)\rightarrow \sim Solvable(e,BR)\qquad (D7)\)
7. \(Banerjee(e)\rightarrow \sim Solvable(e,BI)\qquad (6, 5, HS)\)
8. \(\sim Solvable(e,BI)\rightarrow I(e)\qquad (D8)\)
9. \((Banerjee(e)\rightarrow I(e))\qquad (7, 8, HS)\)
10. \((\forall e)(Banerjee(e)\rightarrow I(e))\qquad (Generalization)\)

We can prove neither \((\forall e)(GCD(e)\rightarrow Banerjee(e))\) nor \((\forall e)(Banerjee(e)\rightarrow GCD(e))\), confirming that neither testing technique is more accurate than the other in theory, as explained by Kong et al. [19]. However, we have proved formally that both the GCD and Banerjee tests can be reduced to the I test: for any system of equations, as long as the GCD test or the Banerjee test can disprove the dependences, the I test is certainly able to assure their absence.

4.2 Theorems of dependence tests for coupled subscripts

As described above, for coupled subscripts the unsolvability of the system of Eq. (1) is only a necessary condition for the unsolvability of each of its equations. In other words, if the system of Eq. (1) is solvable, then each of its m equations is certainly solvable individually, which is formalized as axiom (D9); the converse does not hold. Consequently, if a parallelizing compiler applies a testing algorithm for separated subscripts to coupled subscripts (subscript-by-subscript checking), there may still be no dependences even if the result is yes. More recently introduced testing algorithms are usually proposed to handle coupled subscripts.

Earlier dependence testing techniques for coupled subscripts were designed based on the GCD test and the Banerjee inequality; researchers extended these methods to give more accurate results for coupled cases. Since the GCD test can only analyze separated subscripts, Knuth [20] extended its algorithm to find whether a set of linear equations with integer coefficients has any integer solutions, and Banerjee provided a way to enumerate all those solutions [21]. We call this the EGCD test, as it derives from the GCD algorithm.

Suppose A is the \(2n\times m\) coefficient matrix filled with the coefficients of the system of Eq. (1); if the system contains linearly dependent equations, the redundant ones are eliminated with the GCD algorithm. The goal of the EGCD test is to determine whether the system of Eq. (1) has an integer solution x, by checking whether the set of equations

$$\begin{aligned} \mathbf {xA}=c(c\text { is a vector with}\,m\,\text {elements}) \end{aligned}$$
(10)

has an integer solution.

The checking process is as follows. Initialize a \(2n\times m\) matrix D with the elements of A, and a \(2n\times 2n\) matrix U with the identity matrix, storing the two side by side in one \(2n\times (2n+m)\) matrix (U\(\vert \)D). Reduce D to upper triangular form by a series of elementary integer row operations, so that in the kth \((1\le k<2n)\) column of D all the elements in rows k+1 through 2n are zeros. Apply each elementary operation to U as well, so that U satisfies UA = D throughout; the resulting U is unimodular (det(U) = ±1).

Finally, if there is an integer solution t of tD = c, then x = tU is an integer solution of the set of Eq. (10). The EGCD test is a necessary condition and its feasible region is UI.
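The matrix-reduction phase can be sketched as follows (our illustrative code, not the authors' implementation). It maintains the invariant U·A = D while driving D toward triangular form with Euclid-style integer row operations:

    def egcd_reduce(A):
        # Reduce a copy D of the integer matrix A to (row-echelon) upper
        # triangular form by elementary integer row operations, mirroring
        # every operation in U so that U @ A == D holds throughout
        # (U stays unimodular).
        rows, cols = len(A), len(A[0])
        D = [row[:] for row in A]
        U = [[int(i == j) for j in range(rows)] for i in range(rows)]

        def row_op(i, j, q):   # row_i <- row_i - q*row_j, in both U and D
            for M in (U, D):
                M[i] = [x - q * y for x, y in zip(M[i], M[j])]

        def swap(i, j):
            for M in (U, D):
                M[i], M[j] = M[j], M[i]

        r = 0
        for c in range(cols):
            for i in range(r + 1, rows):
                while D[i][c] != 0:       # Euclid's algorithm on rows r and i
                    row_op(r, i, D[r][c] // D[i][c])
                    swap(r, i)
            if D[r][c] != 0:
                r += 1
                if r == rows:
                    break
        return U, D

    # Any integer solution t of t*D = c (found by back substitution on the
    # triangular D) then yields x = t*U with x*A = c.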

The \(\lambda \) test [15, 22] was the first work to consider all the subscript positions of coupled subscripts together. It is a multi-dimensional version of the Banerjee inequalities. For the system of Eq. (1), the \(\lambda \) test supposes there are \(s=2n\) variables and transforms the system into

$$\begin{aligned} \left\{ \begin{array}{c} a^{(1)}_{1}v^{(1)}+a^{(2)}_{1}v^{(2)}+\cdots +a^{(s)}_{1}v^{(s)}+c_1 = 0 \\ a^{(1)}_{2}v^{(1)}+a^{(2)}_{2}v^{(2)}+\cdots +a^{(s)}_{2}v^{(s)}+c_2 = 0 \\ \vdots \\ a^{(1)}_{m}v^{(1)}+a^{(2)}_{m}v^{(2)}+\cdots +a^{(s)}_{m}v^{(s)}+c_m = 0 \\ \end{array}\right. \end{aligned}$$
(11)

The equations of (11) are then linearly combined as

$$\begin{aligned} <\sum _{i=1}^{m}\lambda _i\mathbf {a_i},\mathbf {v}>+\sum _{i=1}^{m}\lambda _i c_i=0 \end{aligned}$$
(12)

where \(\mathbf {a_i}=(a^{(1)}_{i},a^{(2)}_{i}, \ldots , a^{(s)}_{i}), \mathbf {v}=(v^{(1)},v^{(2)},\ldots ,v^{(s)})\).

After such a linear combination, the \(\lambda \) test eliminates one or more of the newly introduced variables \(\lambda _i\) using the canonical solutions defined in [15, 22]. If all the variables \(\lambda _i\) are eliminated, it then applies the Banerjee test to solve Eq. (12). However, when the variables \(\lambda _i\) cannot all be eliminated, which usually happens when \(m > 2\), the \(\lambda \) test cannot give an accurate result. Therefore, when all the introduced variables \(\lambda _i\) can be eliminated, the test is a necessary and sufficient condition with feasible region BR; otherwise, it does not work.

The power test [23] is a technique combining the EGCD test and Fourier–Motzkin elimination. It first tests the subscripts with the EGCD test. If the EGCD test cannot prove that the system of Eq. (1) has no solution, the power test attempts to produce a feasible solution. However, whether this feasible solution lies in the region of interest R is not yet decided, so the power test then checks it against the restrictions (2) and (3).

In the EGCD algorithm, the matrix D ends up in upper triangular form after a series of elementary operations, with its first m rows holding the nonzero elements. The EGCD test is able to fix the first m elements of the integer solution t, so \(t_{m+1}, t_{m+2}, \ldots , t_{2n}\) remain free. Next, the power test tries to compute the dependence distance. If only the coefficients of \(t_1\) through \(t_m\) are nonzero, the dependence distance is fixed, and the power test is a necessary and sufficient condition with feasible region BI. If there are nonzero coefficients on any \(t_v (v>m)\), the dependence distance is not constant, and the power test applies Fourier–Motzkin elimination, which guarantees that the system of inequalities obtained after eliminating a variable has solutions if and only if the original system has solutions. In this case the power test is a sufficient condition with feasible region BR; it is not a necessary condition, since it may fail to report independence even when the system of Eq. (1) has no solution.

The omega test [24] is also a Fourier–Motzkin-elimination-based dependence testing technique. It considers the system of Eq. (1) and the restrictions (2) and (3) together, transforming them into a set of inequalities and eliminating redundant elements, so that the set of inequalities can be treated as a polyhedron. Intuitively, the test finds the \((n-1)\)-dimensional shadow cast by an n-dimensional object, computing both the “real shadow” and the “dark shadow” of the object. When the real and dark shadows are identical, there are integer solutions to the set of inequalities if and only if there are integer solutions to the shadow. Otherwise: (a) there is no integer solution to the set of inequalities if there is no integer solution to the real shadow; (b) there are integer solutions to the set of inequalities if there are integer solutions to the dark shadow; (c) otherwise, it examines a set of planes parallel and close to a lower bound, analyzing the problem in expensive and complicated steps. In practice this last case rarely arises, and hence we say the omega test is only a sufficient condition, not a necessary one. Although it resembles the power test to a certain extent, its feasible region is always BI.
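The shadow computation for a single eliminated variable can be sketched as follows (our illustrative code; real implementations combine every lower/upper bound pair and handle the splintering of case (c)):

    def shadows(lower, upper):
        # Combine one lower bound  b <= a*z  (a > 0) and one upper bound
        # c*z <= d  (c > 0) when eliminating variable z, omega-test style.
        # Returns (real, dark): 'real' holds iff some real value of z fits
        # between the bounds; 'dark' guarantees an integer value of z.
        a, b = lower
        c, d = upper
        real = c * b <= a * d
        dark = a * d - c * b >= (a - 1) * (c - 1)
        return real, dark

    # If real is False, case (a) applies: no integer solution exists.
    # If dark is True, case (b) applies: an integer solution exists.
    # When a == 1 or c == 1 the two shadows coincide (exact projection).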

Since most subscripts found in practice are SIV (single index variable) and their tests are usually simple and accurate, the main idea behind the delta test [2] is to propagate the constraints produced by SIV subscripts to the other subscripts in the same group without losing accuracy. In most cases, such propagation simplifies the testing of the other subscripts and produces a precise set of direction vectors. The delta test proceeds as follows. It first tests the SIV parts of the coupled subscripts; if they are independent, the test returns no. Otherwise, it converts the information gleaned from these SIV subscripts into constraints and propagates them into all applicable MIV (multiple index variable) subscripts, repeating this phase until no new constraint can be found. It then propagates all the results to coupled RDIV (restricted double-index variable, of the form \(< a_1i_1 + c_1, a_2i_2 + c_2>\)) subscripts. Finally, it tests all the remaining MIV subscripts and intersects the results with the current constraints: if the intersection is empty, it reports that there is no dependence; otherwise, it returns the derived information about the dependences. A small example follows.
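For instance (a standard illustrative pair), consider the references \(A(i+1,\, i+j)\) and \(A(i,\, i+j)\) in a loop nest over i and j. The first subscript position is SIV and yields the constraint \(\Delta i = 1\); propagating it into the MIV position gives \((i+1) + j' = i + j\), hence \(\Delta j = -1\), so the delta test returns the single precise direction vector \((<, >)\).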

As a matter of fact, the delta test may be viewed as a restricted form of the \(\lambda \) test that trades generality for greater efficiency and precision [1]; applying it in practice, one finds that it is essentially an application of the \(\lambda \) test to two-dimensional cases. Therefore, the delta test is a necessary and sufficient condition, with feasible region BI.

Summarizing the above discussion, the properties and feasible regions of the dependence testing techniques for coupled subscripts are shown in Table 2. We can now prove the theorems in K-DT corresponding to these tests. First, as the delta test is a restricted application of the \(\lambda \) test, and the power test always invokes the EGCD test, we have the following.

Table 2 The properties and feasible regions of dependence tests for coupled subscripts

Proposition 1

\((\forall e)(Delta(e)\rightarrow \lambda (e))\).

Proposition 2

\((\forall e)(EGCD(e)\rightarrow Power(e))\).

The system of Eq. (1) has 2n variables and m equations. The power test is a necessary and sufficient condition with feasible region BI when Fourier–Motzkin elimination is not invoked. As a consequence, we have

Theorem 8

When the power test does not invoke Fourier–Motzkin elimination, \((\forall e)(suff\_test(e)\rightarrow Power(e))\).

Proof

1. \(Include(r, BI)\qquad (Tautology)\)
2. \(Include(r, BI)\rightarrow (Solvable(e, BI)\rightarrow Solvable(e, r))\qquad (D10)\)
3. \(Solvable(e, BI)\rightarrow Solvable(e, r)\qquad (1, 2, MP)\)
4. \((Solvable(e, BI)\rightarrow Solvable(e, r))\rightarrow (\sim Solvable(e, r)\rightarrow \sim Solvable (e, BI))\qquad (TK)\)
5. \(\sim Solvable(e, r)\rightarrow \sim Solvable(e, BI)\qquad (3, 4, MP)\)
6. \(Suff\_test(e)\rightarrow \sim Solvable(e, r)\qquad (D7)\)
7. \(Suff\_test(e)\rightarrow \sim Solvable(e, BI)\qquad (5, 6, HS)\)
8. \(\sim Solvable(e, BI)\rightarrow Power(e)\qquad (D8)\)
9. \(Suff\_test(e)\rightarrow Power(e)\qquad (7, 8, HS)\)
10. \((\forall e)(Suff\_test(e)\rightarrow Power(e))\qquad (Generalization)\)

Theorem 9

When the power test invokes Fourier–Motzkin elimination, \((\forall e) (Power(e)\rightarrow Omega(e))\).

Proof

Before showing the proof, we should emphasize that when the power test invokes Fourier–Motzkin elimination, that method guarantees that the system of inequalities obtained after eliminating a variable has solutions if and only if the original system has solutions. Hence

$$\begin{aligned} \sim Solvable(e, BR)\leftrightarrow \sim Solvable(e, BR') \end{aligned}$$

is maintained by the power test, where BR' denotes the region after elimination. As for the omega test, case (a) can be formalized as

$$\begin{aligned} \sim Solvable(e, BR')\leftrightarrow Omega(e) \end{aligned}$$

Hence we have the following proof.

1. \(Power(e)\rightarrow \sim Solvable(e, BR)\qquad (D7)\)
2. \(\sim Solvable(e, BR)\rightarrow \sim Solvable(e, BR')\qquad (Tautology)\)
3. \(Power(e)\rightarrow \sim Solvable(e, BR')\qquad (1, 2, HS)\)
4. \(\sim Solvable(e, BR')\rightarrow Omega(e)\qquad (Tautology)\)
5. \((Power(e)\rightarrow Omega(e))\qquad (3, 4, HS)\)
6. \((\forall e)(Power(e)\rightarrow Omega(e))\qquad (Generalization)\)

A remark is in order here. The omega test applies Fourier–Motzkin elimination as well, but its feasible region is always BI rather than BR. As a result, the tautology in step 2 of the above proof is not logically valid for the omega test.

From the previous two theorems we find that, when the power test does not invoke Fourier–Motzkin elimination, the power test assures the absence of dependences whenever any dependence testing algorithm discussed in this study returns independence. When the power test does invoke Fourier–Motzkin elimination, the EGCD test and the power test can be reduced to the omega test. Since the delta test can be reduced to the \(\lambda \) test, it only remains to discuss the relationship between the omega test and the \(\lambda \) test.

Theorem 10

\((\forall e)(\lambda (e)\rightarrow Omega(e))\).

Proof

For the omega test, if we disregard the integer constraint, we obtain

Proposition 3

\(F\_M(e)\leftrightarrow \sim Solvable(e, BR).\)

Here \(F\_M(e)\) denotes that Fourier–Motzkin elimination proves there is no real solution to e. Now consider the integer constraint again: the omega test searches for an integer solution only where the system of Eq. (1) has a real solution. Hence

Proposition 4

\(F\_M(e)\rightarrow Omega(e).\)

Hence we have the following proof.

1. \(\lambda (e)\rightarrow \sim Solvable(e, BR)\qquad (D7)\)
2. \(\sim Solvable(e, BR)\rightarrow F\_M(e)\qquad \) (Proposition 3)
3. \(\lambda (e)\rightarrow F\_M(e)\qquad (1, 2, HS)\)
4. \(F\_M(e)\rightarrow Omega(e)\qquad \) (Proposition 4)
5. \((\lambda (e)\rightarrow Omega(e))\qquad (3, 4, HS)\)
6. \((\forall e)(\lambda (e)\rightarrow Omega(e))\qquad (Generalization)\)

Therefore, when the power test invokes Fourier–Motzkin elimination, the omega test assures there are no dependences whenever any dependence testing algorithm for coupled subscripts returns an independence result.

5 Upper bounds and minimum complete sets

While proving the above theorems, we considered whether there is a dependence testing algorithm to which all the remaining ones can be reduced. More specifically, we say that a dependence testing technique D-test2 is more powerful than D-test1 when the theorem D-test1\(\rightarrow \)D-test2 can be proved. A dependence testing technique is an upper bound if every dependence testing technique can be reduced to it, and a set of testing techniques is a minimum complete set if the techniques are most powerful when working together and none of them is redundant. Obviously, none of the dependence testing methods discussed in this paper satisfies the requirement of an upper bound. However, can we give an equivalent condition for such a dependence testing algorithm, or are there upper bounds in particular cases? Beyond that, what are the minimum complete sets of these dependence testing techniques in the different cases?

5.1 Upper bounds of existing linear dependence tests

Evidently, if such a dependence testing technique exists, it must be one for coupled subscripts, for the reason explained in the previous section. Hence we state

Theorem 11

If there exists a dependence test \(complete\_test\) such that \(complete\_test(e)\leftrightarrow \sim Solvable(e, BI)\) is true in each case, then \((\forall e)(suff\_test(e)\rightarrow complete\_test(e))\).

Proof

As in the proof of Theorem 8, substituting \(complete\_test\) for Power everywhere yields the proof of this theorem.

Although none of the dependence testing algorithms mentioned satisfies the hypothesis of Theorem 11, we can draw the following conclusion from Theorem 8: in the case that Fourier–Motzkin elimination is not invoked, the power test is an upper bound of all the dependence testing methods.

In a similar way, for all the dependence testing algorithms for separated subscripts discussed in this study, the following always holds.

Theorem 12

If \(single\_test\) is a dependence test for separated subscripts, then \((\forall e)(single\_test(e)\rightarrow I(e))\).

Proof

As in the proof of Theorem 8, substituting \(single\_test\) for Power everywhere yields the proof of this theorem.

From Theorems 6, 7 and 12, we can conclude that the I test is the upper bound of the discussed dependence tests for separated subscripts.

5.2 Minimum complete sets of existing dependence tests

Since the power test is the upper bound of the discussed dependence tests when Fourier–Motzkin elimination is not invoked, the minimum complete set in this case is {power test}; the proof follows from Theorems 8 and 10. However, it is not easy to determine in advance whether the power test will invoke Fourier–Motzkin elimination. By further analyzing the process of the power test, we find that when \(2n\le m\), i.e., the number of variables of the system of Eq. (1) is no larger than the number of its equations, the power test certainly does not invoke it.

Otherwise, if the power test invokes Fourier–Motzkin elimination, we know from Theorem 9 that all the dependence tests for coupled subscripts can be reduced to the omega test, while each dependence testing algorithm for separated subscripts reduces to the I test. Neither the omega test nor the I test can be reduced to the other, so the minimum complete set in this case is {I test, omega test}. It remains to prove the following.

Theorem 13

For any \(DT\_test\in Predicate\_DT\) (the set of dependence test predicates), the following always holds: \((\forall e)(DT\_test(e)\rightarrow (\sim I(e)\rightarrow Omega(e)))\).

Proof

We consider two cases. First, if \(DT\_test\) is a dependence test for separated subscripts, then

1. \((\forall e)(DT\_test(e)\rightarrow I(e))\qquad \) (Theorem 12)
2. \((\forall e)(DT\_test(e)\rightarrow I(e))\rightarrow (DT\_test(e)\rightarrow I(e))\qquad (D5)\)
3. \(DT\_test(e)\rightarrow I(e)\qquad (1, 2, MP)\)
4. \(I(e)\rightarrow (\sim I(e)\rightarrow Omega(e))\qquad (TK)\)
5. \(DT\_test(e)\rightarrow (\sim I(e)\rightarrow Omega(e))\qquad (3, 4, HS)\)
6. \((\forall e)(DT\_test(e)\rightarrow (\sim I(e)\rightarrow Omega(e)))\qquad (Generalization)\)

Second, if \(DT\_test\) is a dependence test for coupled subscripts, then

1. \((\forall e)(DT\_test(e)\rightarrow Omega(e))\qquad \) (Theorems 9, 10)
2. \((\forall e)(DT\_test(e)\rightarrow Omega(e))\rightarrow (DT\_test(e)\rightarrow Omega(e))\qquad (D5)\)
3. \(DT\_test(e)\rightarrow Omega(e)\qquad (1, 2, MP)\)
4. \(Omega(e)\rightarrow (\sim I(e)\rightarrow Omega(e))\qquad (D1)\)
5. \(DT\_test(e)\rightarrow (\sim I(e)\rightarrow Omega(e))\qquad (3, 4, HS)\)
6. \((\forall e)(DT\_test(e)\rightarrow (\sim I(e)\rightarrow Omega(e)))\qquad (Generalization)\)

Therefore, for all the discussed dependence tests, we reach the conclusions shown in Table 3.

Table 3 The minimum complete sets in different cases

As a matter of fact, the former two cases rarely happen in practical applications, while the last case is relatively common. Therefore, we recommend {I test, omega test} as the minimum complete set for most cases; for the remaining cases, {power test} is the minimum complete set of all the dependence testing techniques.

6 Related work

Since parallelizing compilation was proposed, studies on data dependence analysis have been widely developed [2, 4,5,6,7,8, 16, 18,19,20,21,22,23,24]. To evaluate the power of the proposed data dependence testing techniques, the literature [4,5,6] gave empirically based evaluations analyzing accuracy, efficiency, time complexity, and the tradeoffs among these properties, but none of them gave a theoretical result. Goff et al. [2] attempted to find a combination of different dependence tests based on their classification of subscript patterns; the delta test is used in the most complicated case, but we have proved that it reduces to the omega test. Maydan et al. [7] proposed a subscript-pattern-based dependence test suite; if all of its tests fail, Fourier–Motzkin elimination [8] is used as a time-consuming backup test, although each test in the suite can be viewed as a restricted, cheaper form of Fourier–Motzkin elimination. They proposed applying the branch-and-bound method to seek integer solutions when all the simple tests fail and Fourier–Motzkin elimination proves there are real solutions somewhere. Ideally, this technique could be turned into a necessary and sufficient condition with feasible region BI, but it is a nightmare in practice due to its time complexity, as is the last case of the omega test [24]; therefore, each is only a sufficient condition in practice. The classical polyhedral compilation framework has also investigated data dependence analysis [25,26,27]; there, the dependence test is not only about finding solutions to the equality system, but also about satisfying constraints including the iteration-space bounds and the dependence polyhedron.

Besides the linear dependence tests, developers have proposed nonlinear dependence tests for static parallelizing compilers in recent years. In 1995, Mohammad [28] presented a nonlinear dependence test, making symbolic analysis a fundamental step of new dependence testing techniques. The range test, proposed by Blume and Eigenmann [29], can handle nonlinear expressions. Engelen et al. [30] described a unified approach to nonlinear dependence testing composed of a nonlinear GCD test, a nonlinear value range test, and a nonlinear extreme value test. The quadratic test and the QP (quadratic programming) test were introduced by Wu and Chu [31] and by our research group [32], respectively, to handle quadratic subscripts; the former is exact but can only handle array subscripts with one-dimensional quadratic expressions, while the latter can determine multi-dimensional cases but has to be conservative when it cannot assure the absence of dependences. Zhou and Zeng [33] proposed the integer-interval-theory-based nonlinear PVI (polynomial variable interval) test, which can be viewed as an extension of the I test; we adopted the quadratic case of the PVI test to cooperate with the QP test [34]. The general dependence test presented by Hummel et al. [35] for dynamic, pointer-based data structures is also a nonlinear dependence testing algorithm.

7 Conclusions

We proposed a formal system K-DT, designed to evaluate the relative power of existing linear data dependence testing techniques, and proved its soundness, adequacy, and consistency. We theoretically evaluated the power of different linear dependence testing techniques, and derived upper bounds as well as minimum complete sets in different cases from the theorems of K-DT. As K-DT is a classical-logic-based formal system that merges the results yes and maybe into one case, it cannot distinguish between a dependence test returning yes and one returning maybe; consequently, it can only analyze the ability of a dependence test to disprove dependences.

There are still many aspects of the K-DT system to improve. First, we analyzed only the linear dependence tests and their minimum complete sets; since a great deal of work on nonlinear dependence tests has been proposed [28,29,30,31,32,33,34,35], we should turn our attention to these techniques in future work. Second, K-DT is designed for dependence tests based on systems of equations; for the few approaches [25,26,27, 29, 35] that are not based on the system of Eq. (1), it cannot analyze their power, although the number of such methods is very limited.