1 Introduction

The Unconstrained Binary Quadratic Programming (UBQP) problem is one of the most studied NP-hard problems and has numerous practical applications. The multi-objective UBQP problem can be mathematically formulated as follows [15]:

$$\begin{aligned} f_k(x) = x'Q^kx=\sum _{i=1}^{n}\sum _{j=1}^{n}q^k_{ij}x_{i}x_{j} \end{aligned}$$
(1)

where \(f_k(x)\) \((k \in \{1, \dots , m\})\) is the \(k^{th}\) objective to be maximized, \(Q^k=(q^k_{ij})\) is an \(n\times n\) matrix of constants and x is an n-vector of binary (zero-one) variables, i.e., \(x_{i} \in \{0, 1\} \ (i=1, \dots , n)\).
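To make the formulation concrete, the following sketch evaluates a single objective of Eq. (1) for a binary vector x; the function name and data layout are our own illustrative choices, not part of the original formulation.

```cpp
#include <cstddef>
#include <vector>

// Evaluate f_k(x) = sum_i sum_j q[i][j] * x[i] * x[j] for one objective matrix Q^k.
// Since x is binary, only the entries with x[i] = x[j] = 1 contribute.
double evaluateObjective(const std::vector<std::vector<double>>& q,
                         const std::vector<int>& x) {
    const std::size_t n = x.size();
    double value = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        if (x[i] == 0) continue;            // x_i = 0 contributes nothing
        for (std::size_t j = 0; j < n; ++j) {
            if (x[j] == 1) value += q[i][j];
        }
    }
    return value;
}
```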

The formulation of UBQP is notable for its ability to represent a wide range of important combinatorial optimization problems, including traffic management [9], financial analysis [20], molecular conformation [25], cellular radio channel allocation [26], and so on. The literature reports a large number of heuristic and metaheuristic algorithms to deal with the UBQP problem [14], which include scatter search [2], directed local search [7], simulated annealing [1, 13], evolutionary algorithms [6, 17, 22], tabu search [10, 23, 24], etc.

Moreover, Liefooghe et al. [15] first extended the single-objective UBQP problem to the multi-objective case and proposed a hybrid metaheuristic algorithm to solve the multi-objective UBQP problem. In [16], they further proposed three versions of multi-objective local search algorithms with different search strategies to solve the bi-objective UBQP problem.

In the current paper, we study a multi-parent crossover based genetic algorithm for the bi-objective UBQP problem, which integrates a multi-parent crossover within the framework of a hypervolume-based multi-objective optimization algorithm. The proposed algorithm consists of two main procedures: a hypervolume contribution selection procedure and a genetic algorithm with multi-parent crossover. The hypervolume contribution selection procedure iteratively improves the Pareto approximation set until it cannot be improved any further. Then, the multi-parent crossover is used to further improve the overall quality of the Pareto approximation set.

The remaining part of the paper is organized as follows. In the next section, we introduce the basic notations and definitions of multi-objective optimization. In Sect. 3, we briefly review the previous work related to the uniform crossover and the multi-parent crossover. Afterwards, we describe our proposed multi-objective genetic algorithm with multi-parent crossover in Sect. 4. Section 5 is dedicated to the computational results and concluding remarks are given in the last section.

2 Multi-objective Optimization

In this section, we present the basic notations and definitions of multi-objective optimization. Let X denote the search space of the optimization problem under consideration and Z the corresponding objective space. Without loss of generality, we assume that \(Z = \mathfrak {R}^m\) and that all m objectives are to be maximized. Each \(x \in X\) is assigned exactly one objective vector \(z \in Z\) on the basis of a vector function \(f : X \rightarrow Z\) with \(z = f(x)\), and the mapping f defines the evaluation of a solution \(x \in X\) [8].

Actually, we are often interested in those solutions that are Pareto optimal with respect to f. The relation \(x_1 \succ x_2\) means that the solution \(x_1\) is preferable to \(x_2\). The dominance relation between two solutions \(x_1\) and \(x_2\) is often defined as follows [8]:

Definition 1

(Pareto Dominance). A decision vector \(x_1\) is said to dominate another decision vector \(x_2\) (written as \(x_1 \succ x_2\)), if \(f_i(x_1) \ge f_i(x_2)\) for all \(i \in \{{1,\dots ,m}\}\) and \(f_j(x_1)>f_j(x_2)\) for at least one \(j \in \{1,\dots ,m\}\).

Definition 2

(Pareto Optimal Solution). \(x \in X\) is said to be Pareto optimal if and only if there does not exist another solution \(x' \in X\) such that \(x' \succ x\).

Definition 3

(Pareto Optimal Set). S is said to be a Pareto optimal set if and only if S is composed of all the Pareto optimal solutions.

Definition 4

(Non-dominated Solution). \(x \in S\) \((S \subset X)\) is said to be non-dominated if and only if there does not exist another solution \(x' \in S\) such that \(x' \succ x\).

Definition 5

(Non-dominated Set). S is said to be a non-dominated set if and only if, for any two solutions \(x_1 \in S\) and \(x_2 \in S\), \(x_1 \nsucc x_2\) and \(x_2 \nsucc x_1\).
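These definitions translate directly into code. The following sketch (identifiers are our own) tests the Pareto dominance relation of Definition 1 between two objective vectors under maximization:

```cpp
#include <cstddef>
#include <vector>

// Returns true if objective vector a dominates b (Definition 1, maximization):
// a is no worse than b in every objective and strictly better in at least one.
bool dominates(const std::vector<double>& a, const std::vector<double>& b) {
    bool strictlyBetter = false;
    for (std::size_t i = 0; i < a.size(); ++i) {
        if (a[i] < b[i]) return false;       // worse in one objective: no dominance
        if (a[i] > b[i]) strictlyBetter = true;
    }
    return strictlyBetter;
}
```

A non-dominated set (Definition 5) is then simply a set in which this test fails for every ordered pair of distinct members.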

In fact, there is no total order relation among the solutions in multi-objective optimization. Thus, the aim is to generate the Pareto optimal set, which represents the best compromises among all the objectives.

Nevertheless, in most cases, it is impossible to generate the Pareto optimal set in a reasonable time. Therefore, we are interested in finding a non-dominated set which is as close to the Pareto optimal set as possible, and the overall goal is often to identify a good Pareto approximation set.

3 Related Work

The uniform crossover and its variants are widely used within hybrid metaheuristics as an important component for further improvement when solving many combinatorial optimization problems, such as the quadratic assignment problem [5], the gate assignment problem [12] and the single-objective UBQP problem [14]. In this section, we briefly review the literature on solving the UBQP problem with the uniform crossover and the multi-parent crossover.

Merz and Freisleben [21] proposed a hybrid genetic algorithm, which incorporates a simple local search into the traditional genetic algorithm. A variant of the uniform crossover is used to generate offspring solutions based on the Hamming distance from the parents. Computational results on the UBQP problem show that the proposed algorithm is able to find the best known results for problem instances with fewer than 200 variables, but is less effective on larger instances.

Lodi et al. [17] presented an effective evolutionary method for solving the UBQP problem. In this algorithm, a uniform crossover operator is used to produce the offspring solutions, where the variables with common values in parental solutions are temporarily fixed in the current round of local search. Computational results on the problem instances with up to 500 variables show the attractiveness and the effectiveness of the proposed method, especially on the small problem instances.

Lü et al. [18] proposed a hybrid metaheuristic approach for solving the UBQP problem, which incorporates a tabu search procedure into the framework of evolutionary algorithms. In this algorithm, a uniform crossover operator and a diversification-guided combination operator are used to generate offspring solutions in order to further reinforce the search capability of the proposed algorithm. Extensive computational studies on problem instances with up to 7000 variables reveal that their proposed algorithm is very competitive.

Wang et al. [27] integrated four multi-parent crossover operators (called MSX, Diagonal, U-Scan and OB-Scan) within a memetic algorithm framework to deal with the unconstrained binary quadratic programming problem. Their proposed algorithms apply these crossover operators to further improve the results generated by the tabu search procedure. The experimental results and the analysis of the algorithm's behavior provide evidence of, and insight into, the key role of the crossover operators.

4 Multi-parent Crossover Based Genetic Algorithm

The Multi-Parent Crossover based Genetic Algorithm (MPCGA) is proposed to solve the bi-objective UBQP problem. It consists of two main procedures: hypervolume contribution selection and a genetic algorithm with the multi-parent crossover. The general architecture of the MPCGA algorithm is described in Algorithm 1.

[Algorithm 1: General architecture of the MPCGA algorithm]

In MPCGA, all the individuals in the initial population are randomly generated, i.e., each variable of an individual is randomly assigned a value of 0 or 1 (Step 1). Then, each individual is assigned a fitness value computed with the Hypervolume Contribution (HC) indicator defined in [4] (Step 3) and optimized by the hypervolume contribution selection procedure. Afterwards, we employ the multi-parent crossover operator proposed in [19] to produce offspring solutions, in order to further improve the quality of the Pareto approximation set.

4.1 Hypervolume Contribution Selection

After the fitness assignment for each individual, we apply the Hypervolume Contribution Selection (HCS) procedure [4] presented in Algorithm 2 to the initial population, in order to generate a set of efficient individuals.

[Algorithm 2: The hypervolume contribution selection (HCS) procedure]

In the HCS procedure, an individual \(x^*\), which is one of the unexplored neighbors of x in the population P, is assigned a fitness value by the HC indicator. If \(x^*\) is dominated, the fitness values of all the individuals in P remain unchanged. If \(x^*\) is non-dominated, the fitness values of the non-dominated neighbors of \(x^*\) need to be updated.

Actually, the neighborhood of UBQP is usually defined by the simple one-flip move, which flips the value 0 (or 1) of the \(i^{th}\) variable of a solution \(x \in P\) to 1 (or 0) to obtain a new individual \(x^*\) as a neighbor of x [11]. Then, the objective function values of this new neighbor are obtained from those of x with the fast incremental neighborhood evaluation formula [19] below, where \(\varDelta _{i}\) denotes the change of an objective value when the \(i^{th}\) variable is flipped:

$$\begin{aligned} \varDelta _{i}=(1-2x_{i})(q_{ii}+\sum _{j\in N, j\ne i,x_{j}=1}q_{ij}) \end{aligned}$$
(2)
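As an illustration, a literal implementation of formula (2) for one objective matrix might look as follows; in the bi-objective case the same computation is carried out once per objective matrix \(Q^k\). All identifiers are our own assumptions.

```cpp
#include <cstddef>
#include <vector>

// Literal implementation of formula (2): the change of one objective value
// when variable x[i] is flipped (0 -> 1 or 1 -> 0).
double flipDelta(const std::vector<std::vector<double>>& q,
                 const std::vector<int>& x, std::size_t i) {
    double sum = q[i][i];
    for (std::size_t j = 0; j < x.size(); ++j) {
        if (j != i && x[j] == 1) sum += q[i][j];
    }
    return (1 - 2 * x[i]) * sum;             // (1 - 2*x_i) gives +1 or -1
}
```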

Afterwards, the individual \(\omega \) with the worst fitness value is deleted from the population P. If \(\omega \) is dominated, the fitness values of the other individuals remain unchanged. If \(\omega \) is non-dominated, the fitness values of the non-dominated neighbors of \(\omega \) need to be updated. The HCS procedure is repeated until the termination criterion is satisfied.
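For illustration only, the exclusive hypervolume contribution of each member of a bi-objective non-dominated set (the quantity underlying the HC indicator of [4]) can be computed as in the following sketch; the member with the smallest contribution is then the candidate for removal. The exact indicator definition should be taken from [4]; all identifiers are our own.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Point { double f1, f2; };   // one non-dominated objective vector (maximization)

// Exclusive 2-D hypervolume contributions w.r.t. a reference point (r1, r2)
// that is worse than every point. Points are assumed mutually non-dominated.
// The returned values follow the order of the points sorted by f1.
std::vector<double> hvContributions(std::vector<Point> pts, double r1, double r2) {
    std::sort(pts.begin(), pts.end(),
              [](const Point& a, const Point& b) { return a.f1 < b.f1; });
    std::vector<double> contrib(pts.size());
    for (std::size_t i = 0; i < pts.size(); ++i) {
        double width  = pts[i].f1 - (i == 0 ? r1 : pts[i - 1].f1);
        double height = pts[i].f2 - (i + 1 == pts.size() ? r2 : pts[i + 1].f2);
        contrib[i] = width * height;   // rectangle covered only by pts[i]
    }
    return contrib;
}
```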

4.2 Genetic Algorithm

The main idea of the uniform crossover is to assign to the variables of the offspring the values that both parents have in common, and to randomly assign values to the remaining variables of the offspring solution [18]. Based on this idea, a multi-parent crossover operator called MSX was proposed to solve the UBQP problem [19]. In this work, we employ the MSX crossover operator to improve the Pareto approximation set A generated by the HCS procedure. The exact steps are presented in Algorithm 3.

[Algorithm 3: The genetic algorithm with the MSX multi-parent crossover]

In our algorithm, we randomly select a set E (\(|E| = s\)) of non-dominated individuals from the Pareto approximation set A. Let \(E = \{ x^{(1)}, x^{(2)}, \dots , x^{(s)} \}\), where \(x^{(i)} = \{ x_1^{(i)}, x_2^{(i)}, \dots , x_n^{(i)} \}\) and the individuals in E are ordered in terms of their fitness values, i.e., \(x^{(1)}\) is the best individual in E and \(x^{(s)}\) is the worst individual in E. As suggested in [19], we set s to be a random number between 4 and 8. Then, the MSX crossover operator is defined below [27]:

MSX Crossover Operator: we define a weight w(i) for the individual \(x^{(i)}\) and a strength value Strength(j) for variable \(x_j\) as: \(w(i) = 1 / sum(i) = 1 / \sum _{j=1}^n x_j^{(i)}\) and \(Strength(j) = \sum _{i=1}^s w(i)x_j^{(i)}\).

The value Strength(j) gives a relative indication of the tendency of the individuals in E to favor \(x_j = 1\) or \(x_j = 0\). Furthermore, we take advantage of the sum(i) values over E to obtain the number of components \(x_j\) that should be 1 in an average individual, denoted by \(Avg = \sum _{i=1}^s sum(i) / s\) [19]. Then, the Avg variables with the largest Strength values are assigned the value 1 and the other variables are assigned the value 0. Afterwards, a new offspring is generated and inserted into the Pareto approximation set A according to the HC indicator for further improvement.
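A minimal sketch of the MSX operator as described above is given below; the truncation used to turn Avg into an integer and all identifiers are our own assumptions.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Hedged sketch of the MSX multi-parent crossover described above.
// parents[i] is the binary vector x^(i).
std::vector<int> msxCrossover(const std::vector<std::vector<int>>& parents) {
    const std::size_t s = parents.size();
    const std::size_t n = parents[0].size();

    // sum(i): number of variables set to 1 in parent i; weight w(i) = 1 / sum(i)
    std::vector<double> w(s);
    double totalOnes = 0.0;
    for (std::size_t i = 0; i < s; ++i) {
        double sum_i = std::accumulate(parents[i].begin(), parents[i].end(), 0.0);
        w[i] = 1.0 / sum_i;                  // assumes at least one variable set to 1
        totalOnes += sum_i;
    }
    // Avg = sum over parents / s, truncated to an integer (our assumption)
    std::size_t numOnes = static_cast<std::size_t>(totalOnes / s);

    // Strength(j) = sum_i w(i) * x_j^(i)
    std::vector<double> strength(n, 0.0);
    for (std::size_t i = 0; i < s; ++i)
        for (std::size_t j = 0; j < n; ++j)
            strength[j] += w[i] * parents[i][j];

    // Assign 1 to the numOnes variables with the largest Strength values.
    std::vector<std::size_t> order(n);
    std::iota(order.begin(), order.end(), 0);
    std::partial_sort(order.begin(), order.begin() + numOnes, order.end(),
                      [&](std::size_t a, std::size_t b) { return strength[a] > strength[b]; });

    std::vector<int> child(n, 0);
    for (std::size_t j = 0; j < numOnes; ++j) child[order[j]] = 1;
    return child;
}
```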

5 Experiments

In order to evaluate the efficiency of our proposed algorithm, we carry out experiments on 10 benchmark instances of the bi-objective UBQP problem, which are generated with the tools provided in [15]. The MPCGA algorithm is programmed in C++ and compiled with the Dev-C++ 5.0 compiler on a PC running Windows 7 with a Core 2.50 GHz CPU and 4 GB RAM.

5.1 Parameters Settings

The MPCGA algorithm requires a few parameters to be set; we mainly discuss two important ones: the running time and the population size. The exact information about the instances and the parameter settings is presented in Table 1.

Table 1. Parameter settings used for bi-objective UBQP instances: instance dimension (D), population size (P) and running time (T).

5.2 Performance Assessment Protocol

In this paper, we evaluate the efficiency of multi-objective optimization algorithms with a test procedure based on the performance assessment package provided by Zitzler et al.\(^{1}\) The quality assessment protocol works as follows: we first create a set of 20 runs with different initial populations for each algorithm and each benchmark instance. Afterwards, we compute the reference set \(PO^*\) in order to determine the quality of the k different sets \(A_0,\dots ,A_{k-1}\) of non-dominated solutions. Furthermore, we define a reference point \(z=[w_1,w_2]\), where \(w_1\) and \(w_2\) represent the worst values for each objective function in \(A_0\cup \dots \cup A_{k-1}\). Then, the quality of a set \(A_i\) of solutions is determined by computing the hypervolume difference between \(A_i\) and \(PO^*\) [28]; this hypervolume difference should be as close to zero as possible.
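As a simplified, hedged illustration of this protocol in the bi-objective case (not the actual implementation of the assessment package; all identifiers are our own), the hypervolume of a non-dominated set and the resulting hypervolume difference can be computed as follows:

```cpp
#include <algorithm>
#include <vector>

struct Obj { double f1, f2; };   // one objective vector (maximization)

// 2-D hypervolume of a mutually non-dominated set w.r.t. a reference point (r1, r2)
// that is worse than every point in both objectives.
double hypervolume(std::vector<Obj> set, double r1, double r2) {
    std::sort(set.begin(), set.end(),
              [](const Obj& a, const Obj& b) { return a.f1 < b.f1; });
    double hv = 0.0, prevF1 = r1;
    for (const Obj& p : set) {               // f2 decreases as f1 increases
        hv += (p.f1 - prevF1) * (p.f2 - r2);
        prevF1 = p.f1;
    }
    return hv;
}

// Hypervolume difference indicator: the closer to zero, the better the approximation.
double hvDifference(const std::vector<Obj>& approx, const std::vector<Obj>& referenceSet,
                    double r1, double r2) {
    return hypervolume(referenceSet, r1, r2) - hypervolume(approx, r1, r2);
}
```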

5.3 Computational Results

In this subsection, we present the computational results obtained by our proposed MPCGA algorithm, the indicator-based multi-objective local search algorithm (IBMOLS) proposed in [3] and the hypervolume-based multi-objective local search algorithm (HBMOLS) proposed in [4].

The computational results are summarized in Table 2. Each line in this table contains one value marked both in bold and in a grey box, which is the best result obtained on the considered instance. The values in both italic and bold indicate that the corresponding algorithms are not statistically outperformed by the algorithm obtaining the best result (with a confidence level greater than 95%).

Table 2. The computational results on bi-objective UBQP problem obtained by the algorithms: IBMOLS, HBMOLS and MPCGA

From Table 2, we can observe that all the best results are obtained by MPCGA, and the most significant result is achieved on the instance bubqp_5000_01, where the average hypervolume difference value obtained by MPCGA is much smaller than the values obtained by IBMOLS and HBMOLS.

However, the values obtained by HBMOLS on the instances bubqp_1000_01, bubqp_3000_02, bubqp_4000_01 and bubqp_5000_02 are not statistically outperformed by MPCGA. Nevertheless, the new offspring solutions generated by the MSX crossover operator clearly improve the overall quality of the Pareto approximation set, which gives the MPCGA algorithm the opportunity to reach high-quality individuals in the objective space. Thus, MPCGA achieves the best results on all the instances.

6 Conclusion

In this paper, we proposed the MPCGA algorithm, which integrates a multi-parent crossover within a hypervolume-based multi-objective optimization algorithm, to deal with the bi-objective unconstrained binary quadratic programming problem and to further improve the overall quality of the Pareto approximation set. The computational results of MPCGA on 10 benchmark instances have shown the feasibility of the improvements and the effectiveness of MPCGA for the bi-objective UBQP problem.