
1 Introduction

In many real-world applications from engineering or finance, one has to face the issue that several objectives have to be optimized concurrently. If more than three objectives are involved, the resulting problem is often termed a many objective optimization problem (MaOP) in the literature. Though the treatment of MaOPs is a relatively young research field (so far, mainly problems with two or three objectives have been studied), it is a very important one, as decision making processes are becoming increasingly important nowadays. One important characteristic of MaOPs is that their solution set, the so-called Pareto set, typically does not consist of a single solution, as for ‘classical’ scalar optimization problems (SOPs, i.e., where one objective is considered). Instead, the Pareto set of a continuous MaOP typically forms a (\(k-1\))-dimensional object, where k is the number of objectives involved in the problem. For discrete MaOPs, as we consider here, the magnitude of the solution set typically grows exponentially with k [16]. Specialized evolutionary algorithms have caught the interest of many researchers over the last decades; see, e.g., [11, 12, 32] and references therein. Reasons for this include that these algorithms are applicable to a wide range of problems and are of a global nature, hence in principle not depending on the initial candidate set (i.e., the initial population). Further, due to their set-based approach, they compute a limited-size representation of the entire Pareto set in a single run of the algorithm. Most of these specialized algorithms, called MOEAs (multi-objective evolutionary algorithms), are designed for the treatment of problems with just a few objectives (say, 2 to 4). However, more and more algorithms are being proposed that deal with many objectives, for instance MOEAs with large population sizes (e.g., 10,000 individuals) [20], the Dynamical Multi-objective Evolutionary Algorithm (DMOEA) [34], and the Grid-Based Evolutionary Algorithm (GrEA) [31]. However, due to their huge magnitude, the Pareto sets of MaOPs typically cannot be computed efficiently, and furthermore, these sets cannot be visualized adequately. Thus, even with the aid of evolutionary algorithms, if the number of objectives is high, one cannot expect that the solution selected by the decision maker (DM) is in fact the preferred solution of the given MaOP with respect to the given setting.

In this paper, we argue that a fine tuning of a selected optimal solution makes sense. More precisely, we propose an approach where, starting from a given initial solution \(x_0\), further solutions \(x_i\), \(i=1,\ldots ,N\), are generated such that the sequence of candidate solutions performs a movement in user-specified directions. Numerical results on different scenarios will show the benefit of the novel approach. We stress that the idea of steering the search along the Pareto set/front in a given user-specified direction is not new. The Directed Search Descent Method [28] and the Pareto Tracer [23] are capable of performing such a search. However, they are both restricted to continuous optimization problems and cannot be extended to discrete domains, as gradient information is required. Moreover, several specialized MOEAs for solving MaOPs have emerged in recent years; for instance, a method that employs the hypervolume for this purpose is proposed in [3], and an a posteriori method for dealing with MaOPs is considered in [7].

The remainder of this paper is organized as follows: in Sect. 2, we briefly present the required background and discuss related work. In Sect. 3, we propose a method for the fine tuning of a given solution of the considered MaOP, and we compare possible realizations of this framework. In Sect. 4, we present some numerical results with a selected approach. Finally, we conclude and outline paths for future research in Sect. 5.

2 Background

2.1 Definitions

From a mathematical point-of-view, a multi-objective optimization problem (MOP) can be defined as follows:

$$\begin{aligned} \max _{x\in \varOmega \subset \mathbb {R}^n } \; F(x), \text { s.t. } g(x) \le 0 \text { and } h(x) = 0, \end{aligned}$$
(1)

where \(F:D \rightarrow \mathbb {R}^k\) is a vector function mapping the feasible set \(D := \{ x~\in ~\varOmega ~|\) \( g(x)\le 0 \text { and } h(x)=0\}\) to its image \(Z := \{ F(x) = (f_1(x),\ldots ,f_k(x)) ~|~ x \in D\} \subseteq \mathbb {R}^k\), where \(f_i :\varOmega \rightarrow \mathbb {R}\) stands for the i-th objective. In the combinatorial case, the feasible solutions form a discrete set D. In a maximization context, an objective vector \(z \in Z\) is dominated by an objective vector \(z^\prime \in Z\), denoted by \(z \prec z^\prime \), iff \(\forall i \in \{1,2,\dots ,k\}\), \(z_i \leqslant z_i^\prime \) and \(\exists i \in \{1,2,\dots ,k\} \) such that \(z_i < z_i^\prime \). Similarly, a solution \(x \in D\) is dominated by a solution \(x^\prime \in D\), denoted by \(x \prec x^\prime \), iff \(F(x) \prec F(x^\prime )\). A solution \(x^\star \in D\) is said to be Pareto optimal if there does not exist any other solution \(x \in D\) such that \(x^\star \prec x\). The set of all Pareto optimal solutions is called the Pareto set (PS), and its mapping in the objective space is called the Pareto front (PF). One of the most challenging tasks in multi-objective optimization is to identify a minimal complete Pareto set, i.e., one Pareto optimal solution for each point of the Pareto front. In the combinatorial case, generating a complete Pareto set is often infeasible for two main reasons [16]: (i) the number of Pareto optimal solutions is typically exponential in the size of the problem instance, and (ii) deciding whether a feasible solution belongs to the Pareto set may be NP-complete. Therefore, the overall goal is often to identify a good Pareto set approximation. For this purpose, heuristics in general, and evolutionary algorithms in particular, have attracted a lot of attention from the optimization community since the late eighties [10, 12].
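To make the above definitions concrete, the following minimal sketch (in Python, not part of the original formulation) implements the dominance test in a maximization context exactly as defined above.

```python
def dominated_by(z, z_prime):
    """Return True iff objective vector z is dominated by z_prime
    in a maximization context: z_prime is at least as good in every
    component and strictly better in at least one."""
    at_least_as_good = all(a <= b for a, b in zip(z, z_prime))
    strictly_better = any(a < b for a, b in zip(z, z_prime))
    return at_least_as_good and strictly_better

# Example: (3, 5) is dominated by (4, 5), while (3, 5) and (4, 4)
# are mutually non-dominated.
assert dominated_by((3, 5), (4, 5))
assert not dominated_by((3, 5), (4, 4))
assert not dominated_by((4, 4), (3, 5))
```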

One of the most studied NP-hard problems from combinatorial optimization is the multi-objective (multi-dimensional) 0–1 knapsack problem (MOKP). Given a collection of n items and a set of k knapsacks, the MOKP seeks a subset of items subject to capacity constraints defined by a weight matrix \(w \in \mathbb {N}^{k \times n}\), while maximizing the profits defined by a profit matrix \(p \in \mathbb {N}^{k \times n}\). More formally, it can be stated as:

$$\begin{aligned} \begin{array}{rll} \max \; &amp; \sum \nolimits _{j=1}^{n} p_{ij} \cdot x_{j} &amp; \quad i \in \{1,\ldots ,k\}\\ \text {s.t. } &amp; \sum \nolimits _{j=1}^{n} w_{ij} \cdot x_{j} \leqslant c_i &amp; \quad i \in \{1,\ldots ,k\}\\ &amp; x_j \in \{0,1\} &amp; \quad j \in \{1,\ldots ,n\} \end{array} \end{aligned}$$

where \(p_{ij}\) is the profit of item j on knapsack i, \(w_{ij}\) is the weight of item j on knapsack i, and \(c_i\) is the capacity of knapsack i. For the MOKP, the cardinality of the Pareto set can grow exponentially with the problem size [16], which makes this problem very appealing to investigate. Notice however, that the proposed approach is not specific to the MOKP and could be applied to other combinatorial problems as well.
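For illustration, the following sketch shows how a MOKP instance and the evaluation of a candidate solution can be encoded; the toy profits, weights, and capacities are made-up values, not a benchmark instance.

```python
import random

k, n = 2, 5                                 # knapsacks (objectives) and items
p = [[random.randint(1, 10) for _ in range(n)] for _ in range(k)]  # p[i][j]
w = [[random.randint(1, 10) for _ in range(n)] for _ in range(k)]  # w[i][j]
c = [sum(row) // 2 for row in w]            # capacities c[i]

def feasible(x):
    """Check all k capacity constraints for a binary vector x."""
    return all(sum(w[i][j] * x[j] for j in range(n)) <= c[i] for i in range(k))

def F(x):
    """Image of x in the objective space: the k profit sums."""
    return tuple(sum(p[i][j] * x[j] for j in range(n)) for i in range(k))
```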

2.2 Literature Overview

The success of MOEAs can be essentially attributed to the fact that they do not require special features or properties from the objective functions, relying instead on stochastic search procedures that are able to deal with complex MOPs and a large variety of application domains. MOEAs are in fact inspired by the basic principles of the evolutionary process on a population (a set of individuals or solutions), by means of the so-called evolutionary operators. Considering the wide variety of MOEAs, and the different principles guiding their design, e.g., Pareto dominance [14], indicator-based [5], and aggregation-based [32] methods, a lot of progress has been made to better harness the complexity of solving MOPs using an evolutionary process and to better understand the main challenges one has to face. In particular, the dimensionality of the objective space is believed to be one of the most difficult challenges one has to address. In fact, several researchers have pointed out different issues with the use of MOEAs to solve problems having 4 or more objectives [17, 21, 27]. These issues are essentially related to the fact that, as the number of objectives increases, the proportion of non-dominated elements in the population grows. An expression for the proportion e of a k-dimensional criterion domain that the dominance relation classifies as equivalent (mutually non-dominated) solutions is given by \(e=\frac{2^k-2}{2^k}\) [17].
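To illustrate how quickly dominance loses its discriminative power, one can evaluate this expression for growing k (a small numerical check of the formula above):

```python
# Proportion of a k-dimensional criterion domain that dominance
# classifies as equivalent, e = (2^k - 2) / 2^k [17].
for k in (2, 3, 4, 5, 10, 15):
    e = (2**k - 2) / 2**k
    print(f"k = {k:2d}: e = {e:.4f}")
# k = 2 yields e = 0.5, while k = 10 already yields e ~ 0.998,
# i.e., almost all pairs of solutions are mutually non-dominated.
```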

In recent years, approaches for improving the behavior and performance of EMO algorithms in order to deal with so-called MaOPs have received growing interest. We can classify them into two groups: (i) methods using alternative preference relations, and (ii) methods transforming the original MaOP into a SOP. In the first group, different preference relations have been reported, such as, to cite a few, the so-called Preference Ordering [15], a generalization of Pareto optimality which uses two more stringent definitions of optimality, or fuzzy relations [17] based on the number of components which are bigger, smaller, or equal between two objective vectors. In the second group, different algorithmic approaches can be found, such as those based on indicators (e.g., hypervolume [4]), those based on dimensionality reduction techniques (e.g., the so-called Pareto Corner Search Evolutionary Algorithm [29]), or those based on space partitioning (e.g., the so-called \(\epsilon \) Ranking-Evolutionary Multi-objective Optimizer [1]).

It is important to remark that the rationale behind any MOEA is the computation of a representative set of solutions from which the DM can eventually pick one or some. However, apart from difficulties inherent to the optimization/solving process itself, taking the DM requirements into account is a challenging issue when tackling a MaOP. This is essentially because a MOEA might actually fail in providing high-quality solutions in the regions of interest for the DM, or in entirely covering the Pareto front, due to the high dimensionality of MaOPs. For this reason, taking the DM preferences into account is an active issue that is being increasingly addressed in the evolutionary multi-objective optimization and multi-criteria decision making communities; see, e.g., [6, 9]. Although reviewing all the literature on the subject is beyond the scope of this paper, we can still comment on three classes that one might encounter [9, 24], namely, (i) a priori approaches that aim at guiding the evolutionary process through pre-defined regions of interest provided by the decision maker, e.g., [22], (ii) interactive approaches [26], where the decision maker preferences are iteratively and progressively refined during the evolutionary search procedure until the decision maker is satisfied, and (iii) a posteriori approaches, e.g., [18], taking the preferences of the decision maker into account only after the evolutionary process has produced a reasonable approximation set.

The work presented in this paper falls at the crossroads of interactive and a priori approaches. In fact, on the one hand, the proposed approach allows the DM to refine a given solution iteratively by providing a direction in which to steer the evolutionary search process. On the other hand, we use a reference point approach to transform the direction provided by the decision maker into a reference-point SOP, which is solved using a multi-objective search process. Notice that there exists some work in line with our proposal. For instance, Cheng et al. [8], and Deb and Jain [13] proposed different methods that allow the decision maker to attain preferred solutions using reference vectors. However, those methods depart from our proposal in several aspects. First, they fall into the a posteriori class of approaches. More importantly, they aim at providing the decision maker with additional preferred solutions that map different preferred objective regions, whereas we aim at allowing the decision maker to navigate through the Pareto front and to locally explore nearby solutions, hence refining his/her preferences and eventually discovering new preferred solutions. This is similar to the so-called NIMBUS system [25], but our approach allows the preferences of the DM, modeled as a direction in the objective space, to be included more explicitly.

3 Fine Tuning Method and Application to Knapsack

3.1 Basic Idea and Motivations

In the following, we introduce the design principles of the proposed method. Basically, the motivation of our proposal is to allow the DM to navigate along the Pareto front by starting from a given initial solution \(x_0\), possibly coming from the output of any available optimization technique. This initial solution \(x_0\) is to be viewed as a departure point in the objective space from where the DM can refine his/her preferences by discovering on-line the vicinity of \(x_0\), and eventually finding new preferred points in the objective space. Consequently, we shall provide the DM with the necessary tools to explore a whole path of solutions being as near as possible to the Pareto front and to locally explore the landscape of Pareto optimal solutions in an iterative manner, i.e., from one solution to a nearby one. For this purpose, the DM is required to provide a direction in the objective space in which the search process shall be steered. Informally speaking, steering the search along the given direction means providing the DM with a sequence of candidate solutions \(x_i\), \(i=1,\ldots ,N\), that can be viewed as forming a path in the objective space and such that every solution \(x_i\) improves the previous solution \(x_{i-1}\) with respect to the direction given by the DM in the objective space.

In order to effectively set up this idea for combinatorial MOPs, we need to precisely define the role of the direction provided by the DM and the meaning of improving a given solution according to this direction. Before going further, let us comment that specifying a direction with respect to a starting solution can be conceived by the DM in many different ways and for different purposes. For the sake of illustration, let us consider a MOP with three objectives \((f_1, f_2,f_3)\) and the following scenario: the DM has an optimal solution for this problem that he/she is not fully satisfied with, e.g., he/she would like to minimize the value of \(f_2\) as much as possible. A number of options can be considered for this example. For instance, the vector \(d_1 = (1,-1,0)\) can refer to a direction (in the objective space) aiming at reducing the second objective while increasing the first one. Similarly, the direction \(d_2 = (0,-1,1)\) would imply a reduction of the second objective together with a growth of the third objective. In both cases, we could obtain the minimum value for \(f_2\), but following the direction \(d_1\) or \(d_2\) typically produces different paths that can be associated with different DM preferences. By defining a direction, the DM can actually decide which objectives to improve and which ones to “sacrifice” in order to refine his/her preferences. Performing such local movements in the objective space while following a whole path of Pareto optimal solutions with respect to the preferred direction of the DM is the goal of the proposed framework.

For the target framework to work, it is important to keep in mind the notion of Pareto optimality when performing a movement in the objective space. For instance, assuming that the starting solution is Pareto optimal, it is obviously not possible to improve all objectives simultaneously. Nevertheless, the DM can still define a direction that does not correspond to an optimal movement, because no prior knowledge of the shape of the Pareto front is assumed. In the rest of this section, we provide a step-by-step description of the proposed framework and the necessary algorithmic components for its proper realization. This shall also allow us to better highlight the different issues one has to address in such a context.

3.2 Framework for the Fine Tuning Method

As mentioned above, the fine tuning method proposed in this paper is based on the assumption that the DM has a preferred direction d in the objective space. More formally, given an initial solution \(x_0\), we assume that the DM is interested in a solution \(x_1\) which is in the vicinity of \(x_0\) such that the following holds:

$$\begin{aligned} F(x_1) \approx F(x_0) + t d, \end{aligned}$$
(2)

where \(t>0\) is a given (typically small) step size.

As it is very unlikely that a point \(x_1\) exists for which equality holds exactly in Eq. (2) (note also that the Pareto front around \(F(x_0)\) is not known), we propose to consider a ‘best approximation’ using an ‘approximated’ reference point \(Z_1\) as follows:

$$\begin{aligned} Z_1 := F(x_0) + \bar{t}d, \end{aligned}$$
(3)

where \(\bar{t}>0\) is a given, fixed (problem dependent) step size. Then, we propose to compute the ‘closest’ Pareto optimal solution to the reference point \(Z_1\) in the objective space, which will hence constitute the next point \(x_1\) to be presented to the DM. Notice that we still have to define a metric specifying the closeness of optimal points with respect to the reference point – this will be addressed later. Once \(x_1\) is computed, the DM can consequently change his/her mind or not, by providing a new direction or by keeping the old one. The framework then keeps updating the sequence of reference points and providing the DM with the corresponding closest optimal solutions in an interactive manner. The proposed framework is hence able to provide the DM with a sequence of candidate solutions such that the respective sequence of objective vectors (ideally) performs a movement in the specified direction d.

Algorithm 1. High-level pseudo code of the fine tuning method.

In Algorithm 1, we summarize the high-level pseudo code of the proposed method. We first remark that the procedure RefPoint implements the idea of transforming the direction d provided by the DM into a reference point, which we simply define as follows:

$$\begin{aligned} Z_{i} = F(x_{i-1}) + \delta d, \end{aligned}$$
(4)

where \(x_{i-1}\) is the previous (starting) solution and \(\delta \) a parameter specifying the magnitude of the movement in the objective space. Actually, \(\delta \) can be viewed as the preferred Euclidean distance \(\Vert F(x_{i-1})-F(x_i)\Vert \) between two consecutive solutions in the objective space. It hence defines the preferred step size of the required movement, which is kept at the discretion of the DM. Notice also that both the direction d and the step size \(\delta \) can be changed interactively by the DM, which we do not include explicitly in the framework of Algorithm 1 for the sake of simplicity.

Given this reference point, one has to specify more concretely which solution should be sought for the decision maker. This is modeled by the function G, which takes the current reference point into account and outputs a (single-objective) scalar optimization problem (SOP) to be solved. At last, the procedure Emo_Optimizer refers to the (evolutionary) algorithm that effectively computes the next solution to be presented to the DM. At this stage of the presentation, it is still not fully clear how to define the function G and how to effectively implement the evolutionary solving procedure, which is at the core of this paper. Before going into the technical details of these crucially important issues, let us comment on Fig. 1, showing two hypothetical scenarios in the two-objective case, chosen for the sake of better visualization. For \(F(\overline{x}_0)\), the reference point \(\overline{Z}_1\) is feasible when choosing the direction \(d=(1,-1)^T\). That is, there exists a point \(\overline{x}\) such that \(F(\overline{x}) = \overline{Z}_1\). We want a function \(G(\overline{Z}_1)\) that prevents \(\overline{x}\) from actually being chosen. Instead, the solution \(\overline{x}_1\) should be a natural candidate, since it is a Pareto optimal solution where \(F(\overline{x}_1)\) is the closest element to \(\overline{Z}_1\) on the Pareto front. The second scenario is for a given point \(x_0\) such that \(Z_1\) is infeasible. Here it is clear that the solution of \(G(Z_1)\) must be a Pareto optimal solution whose image \(F(x_1)\) is the closest to the given reference point. Notice that in both cases, the Pareto front is not known when defining the function G(Z).

Fig. 1. Illustrative example

In the following, we propose a possible answer for the definition of G, as well as some alternative (single- and multi-objective) evolutionary procedures for solving the corresponding SOP.
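Before detailing this instantiation, the following minimal sketch summarizes the loop of Algorithm 1; the names (fine_tuning, G, emo_optimizer) are placeholders for the components described above, not the authors' implementation.

```python
def fine_tuning(x0, F, d, delta, N, G, emo_optimizer):
    """Sketch of Algorithm 1: returns the path of solutions x_0, ..., x_N."""
    path = [x0]
    for _ in range(N):
        # Eq. (4): shift the reference point from the previous image
        # along the DM-preferred direction d, scaled by delta.
        Z = [fi + delta * di for fi, di in zip(F(path[-1]), d)]
        # G turns the reference point into a scalar problem, which the
        # evolutionary solver approximately optimizes, warm-started
        # from the previous solution.
        path.append(emo_optimizer(G(Z), seed=path[-1]))
        # Interactively, the DM may revise d and delta at this point.
    return path
```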

3.3 Framework Instantiation

Defining the Next Single-Objective Problem to Be Solved. Since the direction provided by the DM can be arbitrary, and given that we assume no prior knowledge about the Pareto front or about the initial solution from which the search is steered, we propose the following modeling of the function G(x|Z), which defines the next point to be computed by our framework. We rely on the so-called Wierzbicki achievement scalarizing function (WASF) [30]. More precisely, let \(\lambda \in \mathbb {R}^k\) be a weighting coefficient vector (different from the direction d provided by the DM). Given a reference point Z, the WASF is defined as follows:

$$\begin{aligned} g(x \vert Z, \lambda ) := \max _{i = 1, \ldots , k} { \lbrace \lambda _i (f_i(x)-Z_i) \rbrace } + \rho \sum _{i=1}^{k}{ \lambda _i (f_i(x)-Z_i) }, \end{aligned}$$
(5)

where the parameter \(\rho \) is the so-called augmentation coefficient, which must be set to a small positive value. The motivation for using such a function is that the optimal solution to Problem (5) is a Pareto optimal solution [24], independently of the choice of the reference point. This is an interesting property of the WASF that allows us to deal with reference points that may lie in either the feasible or the infeasible region of the objective space. Notice that this contrasts with other scalarizing functions, such as the widely used Chebyshev function, which constrain the reference point to be defined beyond the Pareto front.

In our framework, the WASF is intended to capture the DM preferences, expressed by the reference point computed with respect to the DM preferred direction. However, the weighting coefficient vector still has to be specified. It is known that, for a given reference point, solutions generated using different weight vectors are intended to produce a diverse set of solutions in the objective space. We here choose to set the weight vector \(\lambda \) to \((1/k,1/k, \ldots ,1/k)\), which can be viewed as an empirical choice implying relative fairness among the objectives while approaching the reference point.
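As a concrete sketch, the WASF of Eq. (5) with this equal-weight choice can be transcribed as follows (f is assumed to return the k objective values of a solution; the value of rho is illustrative):

```python
def wasf(x, Z, f, lam=None, rho=1e-4):
    """Achievement scalarizing function of Eq. (5); rho is a small
    positive augmentation coefficient."""
    fx = f(x)
    k = len(fx)
    lam = lam if lam is not None else [1.0 / k] * k   # equal weights
    terms = [l * (fi - zi) for l, fi, zi in zip(lam, fx, Z)]
    return max(terms) + rho * sum(terms)
```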

The Evolutionary Solving Process. In order to solve the previously defined SOP, we investigate two alternative evolutionary approaches.

The first one consists in using a standard Genetic Algorithm (GA). More precisely, and with respect to the experimented knapsack problem, we use the same evolutionary mechanisms and parameters as [2], i.e., parent selection via a random binary tournament with probability 0.7, an elitist replacement strategy that keeps the best individual, a binary crossover operator with probability 0.5, a single-point mutation, and an improve-and-repair procedure [2] for handling the capacity constraints. However, the initial population of the GA is adapted with respect to the iterations of our proposed framework as follows. Each time the SOP defined by the WASF and the corresponding reference point is updated, we initialize the population with 1/4 of the best individuals from the previous iteration, which we complement with randomly generated individuals. In the first iteration of our framework, the initial population is generated randomly. In our preliminary experiments, this was important in order to obtain a good trade-off between quality and diversity within the evolutionary process. Actually, this observation led us to consider the following alternative evolutionary algorithm, where population diversity is maintained in a more explicit manner, by using an MOEA for solving the target SOP.
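A minimal sketch of this warm-start initialization is given below; all names are placeholders, and we assume the scalarized objective g is to be minimized.

```python
def init_population(prev_pop, pop_size, g, random_solution):
    """1/4 of the best previous individuals plus random fillers;
    fully random at the first iteration of the framework."""
    if prev_pop is None:
        return [random_solution() for _ in range(pop_size)]
    elite = sorted(prev_pop, key=g)[: pop_size // 4]
    return elite + [random_solution() for _ in range(pop_size - len(elite))]
```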

More precisely, our second alternative solving procedure is based on an adaptation of the MOEA/D framework [32]. We recall that MOEA/D is based on the decomposition of a given MOP into multiple subproblems using different weight vectors, which are then solved cooperatively. In contrast to the original algorithm, where the entire Pareto front is approximated using an ideal reference point and a diverse set of weight vectors, typically generated in a uniform way in the objective space, we are here interested in a single solution with respect to the target reference point. Hence, we still consider a set of uniformly distributed weight vectors, but we use the WASF with a fixed reference point in order to focus the search process on the region of interest for the DM. At the end of the MOEA/D search process, we output the best-found solution for the weight vector \(\lambda =(1/k,1/k, \ldots ,1/k)\), which precisely corresponds to the target SOP defined with respect to the DM preferred direction. Similarly to the GA, our preliminary experiments revealed that the choice of the initial population for MOEA/D has an important impact on the quality of the target solution. Accordingly, apart from the first iteration, where the initial population is generated at random, we choose to systematically initialize MOEA/D with the population obtained with respect to the previous reference point. Due to the explicit diversity of the MOEA/D population, this initialization strategy proved to be a reasonable choice in our initial experiments.
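Schematically, and reusing the hypothetical wasf above, this adaptation amounts to building one scalar subproblem per weight vector, all sharing the fixed reference point Z, and reporting the best solution found for the equal-weight subproblem:

```python
def make_subproblems(weights, Z, f):
    """One WASF subproblem per weight vector, with a fixed reference
    point Z (in contrast to the moving ideal point of plain MOEA/D)."""
    return [lambda x, w=w: wasf(x, Z, f, lam=w) for w in weights]
# The solution presented to the DM is the best one found for the
# subproblem with lam = (1/k, ..., 1/k).
```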

Illustrative Scenarios. To exemplify the possible scenarios, we experiment in the following with the tuning method on the MOKP under different assumptions. This is in order to highlight the behavior of the framework under some representative scenarios and to identify the main issues raised.

We define the exemplary scenarios by changing the input values of the fine tuning method, i.e., the initial solution \(x_0\) (through its image \(F(x_0)\)), the direction d in the objective space, and the step size \(\delta \). For each investigated scenario, we provide plots rendering the computed reference points, the projection of the selected solutions (\(x_i\)) in the objective space, and the final population of each of the two considered evolutionary algorithms, together with the best-known PF approximation. This is reported in Fig. 2 for a bi-objective MOKP instance from [33]. Notice that, since we are interested in the impact of the input parameters, we assume that the initial solution \(x_0\) may or may not be optimal, which implies that the first-obtained reference point can also be optimal or not. Thus, for simplicity we omit the first update of the reference point, and we consider that \(Z_1 = F(x_0)\). At last, we consider 10 iterations of the proposed method, a population size of 150, and 7,500 function evaluations when running the solving procedure in each iteration.

Fig. 2. Illustrative scenarios on a bi-objective MOKP instance. The best-known PF approximation is shown as thin black points, the reference points as red squares, and the output solutions as circled black points. The population is depicted using a variable color scale. \(\delta = 500\). (Color figure online)

In Fig. 2 (left), we consider a Pareto optimal solution as a starting point and a fixed direction vector (provided by the DM) corresponding to the scenario where the second objective is to be refined repeatedly. We can clearly see that the proposed framework is able to gradually improve the output solutions and to effectively steer the search along the desired input direction. However, the output solutions are not necessarily optimal, which we attribute to the relatively small amount of computational effort used when running the evolutionary solving procedure. Interestingly, using the MOEA/D algorithm as a solving procedure (bottom) for the single-objective reference point target problem appears to work much better than the single-objective GA (top). We attribute this to the diversity issues that the evolutionary process faces when trying to find Pareto optimal solutions. This is confirmed in our second scenario, depicted in Fig. 2 (middle), where the initial solution is chosen to be a non-optimal one. This second scenario also demonstrates that the proposed approach behaves in a coherent manner even if the solution considered in each iteration is not optimal. Notice that these two scenarios consider the same preferred direction, which is actually pointing to regions where some non-dominated points exist. In Fig. 2 (right), we instead consider the scenario where a non-optimal direction \(d = (-1,0)\) is provided, that is, a direction that points towards a dominated region of the objective space. This leads to the critical situation where the computed reference point might be dominated. Again, we notice that the proposed approach handles this situation properly and that the MOEA/D-based solving procedure performs better than the GA.

4 Numerical Results

In this section, we present some numerical results of our approach using the modified (fine tuning) MOEA/D as a solver, since it was shown to provide better performance than the fine tuning GA. In order to appreciate the behavior of the proposed method, we compare it against the original MOEA/D algorithm. However, since the original MOEA/D is intended to compute an approximation of the whole PF, special care has to be taken. First, the proposed method only outputs a path of solutions based on the computed reference points. Hence, we consider a modification of the Inverted Generational Distance (\(IGD_Z\)) [19], which allows us to work with a set of reference points. More specifically, given the set Z of reference points and an archive A, the distance of Z toward F(A) is measured as follows:

$$\begin{aligned} IGD_Z(F(A),Z) := \frac{1}{\vert Z \vert } \sum \nolimits _{i=1}^{\vert Z \vert }{\min _{j=1,\ldots ,\vert A \vert }{dist(Z^*_i,F(a_j))}}, \end{aligned}$$
(6)

where \(Z_i^*\) denotes the point from the PF which is closest to \(Z_i\), i.e., \(\Vert Z_i - Z^*_i \Vert = dist(Z_i, PF)\). Hereby, dist measures the distance between a point and a set, and between two sets, as \(dist(u,A) = \inf _{v \in A} \Vert u-v \Vert \) and \(dist(B,A) = \sup _{u \in B} dist(u,A)\), where u and v denote points from the sets B and A, respectively. This way, the optimal \(IGD_Z\) value is always zero. Notice, however, that the evaluation of the \(IGD_Z\) value requires knowledge of the exact PF. Instead, we use the best-available PF approximations.
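A direct transcription of Eq. (6) reads as follows, where Z_star contains, for each reference point, the closest point of the (best-known) PF approximation, and FA contains the images of the archive solutions (a sketch with hypothetical names):

```python
import math

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def igd_z(FA, Z_star):
    """Average distance from each projected reference point to its
    nearest archive image, cf. Eq. (6)."""
    return sum(min(dist(z, fa) for fa in FA) for z in Z_star) / len(Z_star)
```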

The value of \(IGD_Z\) can be straightforwardly computed for our approach using the set of reference points computed at each iteration. For the original MOEA/D, we first extract from the archive maintained by MOEA/D the solutions nearest (in the objective space) to the same reference points computed by our approach. These solutions are then used to compute an \(IGD_Z\) value for the original MOEA/D. By comparing the \(IGD_Z\) values for our method and for the original MOEA/D, our intent is to highlight the benefits that can be expected when locally steering the search along a preferred direction in an interactive way, against computing a global approximation set from which the search is steered a posteriori. It is however worth noticing that such a comparison is conducted only for the sake of illustrating the accuracy of our approach and its effective implementation; it should not be considered as an alternative to existing (global) multi-objective optimization algorithms.

In the following, we consider some benchmark instances of the considered MOKP, as specified in Table 1, which also summarizes the different parameter settings used for the proposed method. MOEA/D was experimented with using the same setting as in the original paper [32]. Notice that, overall, the same number of function evaluations is used for both the original MOEA/D and the proposed method. Table 2 shows the obtained results for the considered scenarios over 20 independent runs for each instance and algorithm.

Table 1. Parameter setting of the fine tuning method for the numerical results: number of knapsacks (KS), items, population size (P), maximal number of function evaluations for each reference point (ZEvs), number of considered reference points (|Z|), step size \(\delta \), initial reference point \(Z_0\), and desirable direction \(d_k\).
Table 2. Numerical results: number of knapsacks (KS), items, population size (P), maximal number of function evaluations (Ev), and \(IGD_Z\); the minimum, average, standard deviation (in small font), and maximum over 20 independent runs are presented for \(IGD_Z\).

We notice that, for \(k=2\), the original MOEA/D is able to obtain better results than the proposed Fine Tuning method. This is because MOEA/D can generate points close to the entire PF when the number of objectives is limited. However, we can observe that the larger the number of objectives, the better the Fine Tuning approach performs. This is because MOEA/D requires more approximation points and function evaluations to cover the entire PF as the dimensionality grows, whereas the Fine Tuning approach naturally focuses on certain regions of the PF. We remark that this also reduces the execution time, since the Fine Tuning approach does not require any external archive, whereas MOEA/D actually uses an archive for the MOKP in order to output a high-quality global PF approximation.

5 Conclusions and Future Work

In this work, we addressed a decision making tool for discrete many objective optimization problems, using the multi-dimensional multi-objective 0–1 knapsack problem as a demonstrator. Since the number of non-dominated solutions grows (even exponentially) with the number of objectives k, it becomes difficult or even intractable to compute an approximation of the entire Pareto set for \(k\geqslant 4\). Consequently, it is likely that a solution chosen by a solver does not represent the most preferred one from the set of optimal alternatives. In order to overcome this issue, we proposed a local fine tuning method that allows the search process to be steered from a given solution along the Pareto front in a user-specified direction. More precisely, we presented a framework and two possible realizations of it: one by means of a GA that directly solves the dynamic reference point problem, and another one based on MOEA/D that focuses on a region of the Pareto front delimited by the reference point. Given that only a particular segment of the Pareto front is computed, one obtains a much better search efficiency compared to the classical approach (i.e., aiming to compute all Pareto optimal solutions), which we have demonstrated on several benchmark problems. We think that this method can be used as a post-processing step for all existing many objective optimization solvers, and that this will actually help the decision maker to identify his/her most preferred solution.

Though this work demonstrates, as a proof of principle, the application of the novel approach, there is still much to be done. First of all, the tuning of the method deserves further attention in order to obtain better solutions in less time, and other solvers might be interesting to consider. Next, we think that the fine tuning method can be extended with other approaches to steer the search along the Pareto front. Finally, it would be interesting to apply the novel method to many objective optimization problems derived from real-world applications in order to appreciate its impact on the decision making process.