
1 Introduction

In the literature, there are many methods (heuristic as well as concatenation-based) to construct optimized Boolean functions. Heuristic techniques are used most often because their complexity is comparatively low and they can generate Boolean functions on a larger number of variables. In [1], Aguirre et al. gave a very good multiobjective approach: they took two and three objectives and compared the results with two-stage optimization. In [2], Camion et al. presented a new approach based on orthogonal arrays and constructed Boolean functions with good correlation immunity. In [3], Clark et al. gave a new two-stage method based on simulated annealing; the results listed in that paper were better than previous ones, but the method could obtain optimum Boolean functions only for a limited number of variables. In [10, 13, 14], Maitra et al. constructed correlation-immune functions while keeping their nonlinearity optimal; they were the first to construct a 1-resilient Boolean function with maximum nonlinearity on 8 variables, using a concatenation-based method. In [6, 8, 9], Clark et al. found functions with the best trade-off among Boolean function properties. In [11, 12, 15], there are further construction methods, but these are not applicable to multiple objectives and their complexity is also a concern. In [7], we gave a method based on multiobjective optimization (using genetic algorithms), but it could obtain functions only for 4, 5, 6 and 7 variables, and the complexity of the method was high.

Many other heuristic and non-heuristic techniques are available in the literature, but a heuristic technique alone is not sufficient to find a good trade-off among the properties. We want a technique that has low complexity and works for a large number of variables; if many properties are to be optimized simultaneously, the technique should also be multiobjective. So, by introducing the concept of biasedness [4] into a heuristic technique, we have tried to find good Boolean functions with lower complexity. In the present paper, we present this new concept (biasedness) and obtain some optimum results.

2 Some Definitions [1, 7]

2.1 Boolean Function

Any function \(g:{\mathbb {K}}_2^n\rightarrow {\mathbb {K}}_2\) is called a Boolean function of n variables, where \({\mathbb {K}}_2^n\) is the n-dimensional vector space over \({\mathbb {K}}_2\), the field of two elements. The set of all Boolean functions on n variables is denoted by \(\mathcal {B}_n\).

2.2 Balancedness

If the number of 0’s in the truth table representation of a function is the same as the number of 1’s, then the function is called balanced, and this property is known as balancedness.
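As a minimal illustration (ours, not part of the original construction), balancedness is a direct count over the truth table, stored here as a list indexed by the integer encoding of x:

```python
def is_balanced(tt):
    """A function is balanced iff its truth table has as many 0's as 1's."""
    return 2 * sum(tt) == len(tt)

print(is_balanced([0, 1, 1, 0]))  # x1 XOR x2 is balanced → True
print(is_balanced([0, 0, 0, 1]))  # x1 AND x2 is not → False
```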

2.3 Walsh Hadamard Transform

A Boolean function can also be represented in terms of its Walsh Hadamard Transform (WHT). If \(L_{\lambda }\) is the linear function specified by \({{\lambda }\in {\mathbb {K}}_2^n}\), then the WHT of g, denoted \(H_g(\lambda )\), is defined as

$$\begin{aligned} H_g(\lambda ) = \sum _{x\in {\mathbb {K}}_2^n} (-1)^{g(x) \oplus \lambda .x}. \end{aligned}$$
(1)
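As a concrete sketch (our illustration; function names are not from the paper), Eq. (1) can be evaluated directly from the truth table, computing the inner product \(\lambda \cdot x\) as the parity of the bitwise AND:

```python
def walsh_hadamard(tt):
    """Walsh Hadamard Transform of Eq. (1).

    tt[x] holds g(x) for x = 0 .. 2^n - 1; returns [H_g(lam) for all lam].
    The inner product lam.x is the parity of the bitwise AND of lam and x."""
    parity = lambda v: bin(v).count("1") & 1
    N = len(tt)
    return [sum((-1) ** (tt[x] ^ parity(lam & x)) for x in range(N))
            for lam in range(N)]

# g(x1, x2) = x1 AND x2, truth table indexed by the integer x
print(walsh_hadamard([0, 0, 0, 1]))  # → [2, 2, 2, -2]
```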

2.4 Non-linearity

The nonlinearity of a Boolean function is the minimum Hamming distance of that function from the set of all affine functions. It is given by

$$\begin{aligned} nl(g)=(2^n-\max _{\lambda \in {\mathbb {K}}_2^n}|H_{g}(\lambda )|)/2. \end{aligned}$$
(2)
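Eq. (2) then takes only a few lines; the following self-contained sketch (our illustration, with the WHT recomputed so the block runs on its own) returns nl(g) from a truth table:

```python
def walsh_hadamard(tt):
    # H_g(lam) = sum_x (-1)^(g(x) XOR lam.x); lam.x is parity of lam AND x
    parity = lambda v: bin(v).count("1") & 1
    return [sum((-1) ** (tt[x] ^ parity(lam & x)) for x in range(len(tt)))
            for lam in range(len(tt))]

def nonlinearity(tt):
    # Eq. (2): nl(g) = (2^n - max_lam |H_g(lam)|) / 2
    return (len(tt) - max(abs(v) for v in walsh_hadamard(tt))) // 2

print(nonlinearity([0, 0, 0, 1]))  # 2-variable AND → 1
```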

2.5 Autocorrelation

The derivative of a Boolean function g(x) with respect to a vector s is defined as \(g(x)\oplus g(x+s)\), where x and \(s\in \mathbb {K}_2^n\). In polar form, the derivative is \(\widehat{g}(x)\widehat{g}(x+s)\). The autocorrelation of a function g, denoted \(A_g(s)\), is defined by

$$A_g(s)=\sum _{x\in \mathbb {K}_2^n} \hat{g} (x)\hat{g}(x+s),$$

where \(\hat{g} (x) = (-1)^{g(x)}\).

For a good Boolean function g, the value of \(|A_g(s)|\) should be small for all \(s\ne 0\).
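In \(\mathbb {K}_2^n\) the vector sum x + s is the bitwise XOR, so the maximum absolute autocorrelation over nonzero shifts can be sketched as follows (our illustration):

```python
def max_abs_autocorrelation(tt):
    """max over nonzero s of |A_g(s)|; vector addition x + s is bitwise XOR."""
    N = len(tt)
    return max(abs(sum((-1) ** (tt[x] ^ tt[x ^ s]) for x in range(N)))
               for s in range(1, N))

# 2-variable AND is bent, so A_g(s) = 0 for every nonzero s
print(max_abs_autocorrelation([0, 0, 0, 1]))  # → 0
```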

2.6 Correlation Immunity

A Boolean function \(g\in \mathcal {B}_n\) is said to be correlation immune of order m if \({H_g{(\alpha )}=0}\) for all \(\alpha \in \mathbb {K}_2^n\) such that \(1\le w_H(\alpha )\le m\), where \(w_H(\alpha )\) denotes the Hamming weight of \(\alpha \). Moreover, if g is also balanced, then it is called m-resilient.
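Combining this definition with balancedness, the resiliency order can be read off the Walsh spectrum; the sketch below (our code, not the paper's) returns the largest m for which a function is m-resilient:

```python
def resiliency_order(tt):
    """Largest m such that tt is m-resilient; -1 if the function is unbalanced."""
    parity = lambda v: bin(v).count("1") & 1
    N = len(tt)
    spec = [sum((-1) ** (tt[x] ^ parity(l & x)) for x in range(N))
            for l in range(N)]
    if spec[0] != 0:              # H_g(0) = 0 exactly when g is balanced
        return -1
    n = N.bit_length() - 1
    # smallest Hamming weight at which the spectrum is nonzero bounds m
    weights = [bin(l).count("1") for l in range(1, N) if spec[l] != 0]
    return min(weights) - 1 if weights else n

# x1 XOR x2 XOR x3 is 2-resilient: its only nonzero WHT value sits at weight 3
print(resiliency_order([0, 1, 1, 0, 1, 0, 0, 1]))  # → 2
```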

3 Non-dominated Sorting Genetic Algorithm II (NSGA-II) with Biasedness

Deb et al. [5] developed NSGA-II, a generational Multiobjective Optimization Evolutionary Algorithm (MOEA). It is based on three modules, and we have explained the method in [7]. We applied the algorithm to our developed method and obtained some good Boolean functions [7], but this technique alone was not sufficient to get the desired Boolean functions, as the complexity of the method was comparatively high. Deb [4] discussed a sharing approach that uses a biased distance metric. Introducing biasedness means giving extra weightage to some specific objective function by adding a constraint (identical to that objective function) into the MOOP. In the present paper, we introduce this concept of biasedness into NSGA-II to reduce the complexity. In our MOOP, the first objective is to optimize nonlinearity, the most important property to optimize here; accordingly, the first objective and the second constraint of our MOOP are the same.

4 Formulation of MOOP

It consists of (i) the formulation of the MOOP with biasedness and (ii) the application of NSGA-II.

(i) Formulation of MOOP with biasedness: Our main task is to form the objective functions. Our aim is to obtain optimum values of nonlinearity, balancedness, autocorrelation and resiliency. We have formed the first objective to optimize nonlinearity, the second to optimize autocorrelation, and the third to optimize resiliency. To get balanced functions, we have introduced a constraint. Since nonlinearity is a very important property, to give extra weightage to the first objective we have introduced the concept of biasedness and added another constraint that is the same as the first objective.

First objective function: Based on the definition of nonlinearity [1, 7]

$$nl= 2^{n-1}-\frac{1}{2} \max _{\lambda \in \mathbb {K}_2^n}|H_g(\lambda )|.$$

The maximum nonlinearity achievable by a balanced function is 26 for 6 variables and 56 for 7 variables. So, to form the first objective function, we introduce a new constant, say T, and we want nl to take the value T. The first objective can then be formed as follows:

$$\begin{aligned} g^1 =|nl-T|, \end{aligned}$$
(3)

where \(g^1\) is our first objective function and T is a constant for a fixed number of variables (here we take T = 26 for 6 variables and T = 56 for 7 variables).

Second objective function: The second objective is to optimize autocorrelation, so we directly assign the value of the autocorrelation to the second objective.

To formulate the second objective, we use the definition of autocorrelation (Definition 2.5), according to which

$$A_g(\lambda )={\sum _{x\in \mathbb {K}_2^n} (-1)^{g(x)\oplus g(x+\lambda )}},$$

Note that \(A_g(0)=2^n\) always attains the maximum, so \(\lambda =0\) is excluded. Thus,

$$\begin{aligned} g^2= \max _\lambda |A_g(\lambda )| \end{aligned}$$
(4)

is our second objective function, where \({\lambda \in \mathbb {K}_2^n}\) and \(\lambda \ne 0\).

Now, substituting the definitions,

$$ g^1 = |nl-T|,$$
$$\begin{aligned} g^1=\Big |2^{n-1}-\frac{1}{2} \max _{\lambda \in \mathbb {K}_2^n}\Big |{\sum _{x\in \mathbb {K}_2^n} (-1)^{g(x)\oplus \lambda .x}}\Big |-T\Big |, \end{aligned}$$
(5)

Similarly, for nonzero \(\lambda \in \mathbb {K}_2^n\),

$$\begin{aligned} g^2= \max _{\lambda \ne 0} \Big |{\sum _{x\in \mathbb {K}_2^n} (-1)^{g(x)\oplus g(x+\lambda )}}\Big |. \end{aligned}$$
(6)


Third objective function: According to Definition 2.6, for a Boolean function to be m-resilient, the value of the Walsh Hadamard Transform should be zero for all \(\lambda \in \mathbb {K}_2^n\) with \(1\le w_H(\lambda )\le m\). So, to form our third objective, we take the WHT values for all such \(\lambda \), add their absolute values, and assign the sum to the third objective. Our purpose is now to minimize this third objective (down to zero), because with a zero value of the third objective we obtain m-resilient functions. So, our third objective is

$$\begin{aligned} g^3 =\sum _\lambda |{H_g(\lambda )}| \end{aligned}$$
(7)

where the sum runs over \(\lambda \in \mathbb {K}_2^n\) with \(1\le w_H(\lambda )\le m\).

So, we design the MOOP as follows:

$$\begin{aligned} \left. \begin{array}{lll} \min F =(g^1, g^2, g^3) \\ subject\quad to\\ \sum \nolimits _{x\in \mathbb {K}_2^n}g(x)=2^{n-1},\\ nl=T. \end{array} \right\} \end{aligned}$$
(8)

The first constraint, \(\sum _{x\in \mathbb {K}_2^n} {g(x)}=2^{n-1}\), ensures that the function is balanced. To use the biased sharing technique, the second constraint \(nl=T\) is taken to give more weightage to the first objective.
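To make the formulation concrete, the sketch below (our illustration; the NSGA-II machinery of selection, crossover and biased sharing is not shown) evaluates the three objectives of Eq. (8) together with its two constraints for a candidate truth table:

```python
def moop_objectives(tt, T, m):
    """Evaluate (g1, g2, g3) and the two constraints of Eq. (8)."""
    N = len(tt)
    parity = lambda v: bin(v).count("1") & 1
    # Walsh spectrum of Eq. (1)
    spec = [sum((-1) ** (tt[x] ^ parity(l & x)) for x in range(N))
            for l in range(N)]
    nl = (N - max(abs(v) for v in spec)) // 2
    g1 = abs(nl - T)                                              # Eq. (3)
    g2 = max(abs(sum((-1) ** (tt[x] ^ tt[x ^ s]) for x in range(N)))
             for s in range(1, N))                                # Eq. (4)
    g3 = sum(abs(spec[l]) for l in range(1, N)
             if bin(l).count("1") <= m)                           # Eq. (7)
    balanced = (sum(tt) == N // 2)      # first constraint: balancedness
    biased = (nl == T)                  # second (biasedness) constraint
    return (g1, g2, g3), balanced, biased

# 3-variable parity function with T = 0, m = 1 (illustration only)
print(moop_objectives([0, 1, 1, 0, 1, 0, 0, 1], 0, 1))
# → ((0, 8, 0), True, True)
```

In an actual run, NSGA-II would minimize this objective vector over a population of candidate truth tables while rejecting candidates that violate the constraints.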

(ii) Application of NSGA-II: After applying the above method (with the biasedness concept) to the MOOP, we get the desired results, which are given in Sect. 5. The parameters are listed in Table 2 (for 6 variables) and Table 3 (for 7 variables).

5 Result and Discussion

By applying our method (Sect. 4) to the MOOP, we obtained some Boolean functions that are good from a cryptographic point of view. These balanced functions have the best trade-off among nonlinearity, autocorrelation and resiliency. In Table 1, we list these functions for 6 and 7 variables; the corresponding parameters are given in Table 2. We have compared our results with the literature [1, 3] and can conclude that our results are at least as good.

Table 1. Obtained results
Table 2. Parameters

6 Conclusion

In the present paper, we have developed a new method to design Boolean functions that are good from a cryptographic point of view. We obtained Boolean functions for 6 and 7 variables that are better than or at least comparable with those of [1, 3]. So, we can conclude that our method is at least as good as the methods available in the literature.