1 Introduction

In this paper, we survey six commonly used scalarization methods in multiobjective optimization: the weighted sum, the \(\varepsilon \)-constraint, the Benson, the weighted Chebyshev, the Pascoletti–Serafini and the conic scalarization methods. These methods are compared on the basis of common characteristics, and for some of them, new properties are formulated and proved. A short version of this paper was published as a conference proceeding in [1], where only the relations between the conic scalarization method and the other methods are presented.

The properties of these methods are investigated with respect to basic characteristics: the admissible ordering cone; convexity and boundedness requirements; the ability to generate properly efficient solutions; the ability to take into account a reference point chosen by the decision maker and the decision maker's weighting preferences; and the number of additional constraints and decision variables. The paper also presents new characteristics of these methods and relations between them.

We would like to emphasize that these six scalarization methods are not the only ones that can be compared with respect to the characteristics mentioned above; the literature contains many other interesting methods. As a starting point, however, we have chosen these six methods, and our aim is to prepare a source where the main characteristics of scalarization methods can be found.

For example, the elastic constraint method (see [2, 3]), a modification of the \(\varepsilon \)-constraint method, gives conditions characterizing efficient and properly efficient solutions. Note that another special case of the \(\varepsilon \)-constraint method is given in [4], where epsilon is replaced by a (more informed) quantity related to the structure of the problem. The augmented weighted Chebyshev scalar problem, suggested by Steuer and Choo (see [5]) and formulated by adding an augmented \(l_1\)-norm term to the objective function of the weighted Chebyshev scalarization method, is shown to generate properly efficient solutions for appropriately selected values of the weights and the augmentation parameter. Miettinen gave an explanatory analysis of many scalarization methods in [6] (see also [7]). Wierzbicki proposed several achievement scalarizing functions that use a reference point as preference information [8]. Flores-Bazan and Hernandez introduced a unified vector optimization problem in [9] and, by using a well-known scalarizing function earlier used in the literature (see, for example, [10,11,12]), studied many useful properties of efficient solutions of this problem.

It should be noted that the comparison of different methods is a delicate problem, because not all methods have the same comparable features. Although all these methods are widely discussed in the literature, and comparisons of multiobjective optimization methods have been carried out in different ways (see, for example, [1, 2, 6,7,8, 13,14,15,16,17]), we aim to give an original and detailed analysis of these methods by presenting new features and relations, and to collect the main characteristics of these methods in a single work.

Earlier in the literature (see [15]), it was proven that the Pascoletti–Serafini scalarization method is a generalization of the weighted sum, the \(\varepsilon \)-constraint and the weighted Chebyshev scalarization methods. In this paper, we establish new relations between the conic scalarization method and the Benson and Pascoletti–Serafini scalarization methods. It is emphasized that only the weighted sum and the conic scalarization methods guarantee the generation of properly efficient solutions simply by selecting the preference weights in an appropriate way. However, the weighted sum scalarization method guarantees the generation of all properly efficient solutions only under a convexity assumption, while the conic scalarization method does not require any convexity or boundedness conditions and still guarantees the generation of all properly efficient solutions. On the other hand, since every useful property comes with some trade-off, it should be taken into account that the conic scalarization method, despite the above-mentioned properties, uses a nonsmooth scalarizing function.

The rest of the paper is organized as follows. Section 2 gives some preliminaries. The main characteristics of the six scalarization methods are given in six subsections of Sect. 3. In Sect. 3.5, besides the existing properties of the Pascoletti–Serafini method, a new feature of this method is presented (see Theorem 10). In Sect. 3.6, the conic scalarization method is analyzed, and the main characteristics of the method are comprehensively explained. Two new relations between this method and the Pascoletti–Serafini and Benson scalarization methods are established, and the related theorems are given in two subsections (see Sects. 3.6.1 and 3.6.2). In Sect. 4, an illustrative example explaining the analyzed features of all six methods is presented; all the properties are illustrated on the same example for every scalarization method separately. Finally, Sect. 5 draws some conclusions.

2 Preliminaries

We begin this section with standard definitions from multiobjective optimization. Let \({\mathbb {R}}^n_+ := \{y=(y_1,\ldots ,y_n): y_i \ge 0, i=1, \ldots ,n\}\), and let \({\mathbb {Y}} \subset {\mathbb {R}}^n\) be a nonempty set. Throughout the paper, \({\mathbb {R}}_+\) denotes the set of nonnegative real numbers. \(\textsf {cl}({\mathbb {Y}})\), \( \textsf {bd}({\mathbb {Y}})\), \(\textsf {int} ({\mathbb {Y}})\), and \(\textsf {co}({\mathbb {Y}})\) denote the closure, the boundary, the interior, and the convex hull of a set \({\mathbb {Y}}\), respectively. A nonempty subset \({\mathbb {C}}\) of \({\mathbb {R}}^n\) is called a cone if \( y \in {\mathbb {C}}, \lambda \ge 0 \Rightarrow \lambda y \in {\mathbb {C}}.\) Pointedness of \({\mathbb {C}}\) means that \( {\mathbb {C}} \cap (-{\mathbb {C}}) =\{0_{{\mathbb {R}}^n} \}.\) We will assume that \({\mathbb {R}}^n\) is partially ordered by a closed convex pointed cone \({\mathbb {C}} \subset {\mathbb {R}}^n\). A set \({\mathbb {Y}} \subset {\mathbb {R}}^n\) is called bounded below (with respect to the ordering cone \({\mathbb {C}}\)) if there exists a point \(y_0 \in {\mathbb {R}}^n\) such that \({\mathbb {Y}} \subset \{y_0\} + {\mathbb {C}}.\)

Definition 1

  1.

    An element \(y \in {\mathbb {Y}}\) is called a minimal element of \({\mathbb {Y}}\) (with respect to the ordering cone \({\mathbb {C}}\)) if \((\{y\}-{\mathbb {C}})\cap {\mathbb {Y}} =\{y\}\).

  2.

    An element \(y \in {\mathbb {Y}}\) is called a weakly minimal element of \({\mathbb {Y}}\) if \((\{y\}- \textsf {int}({\mathbb {C}})) \cap {\mathbb {Y}}=\emptyset , \) provided that \( \textsf {int}({\mathbb {C}}) \ne \emptyset .\)

  3.

    An element \(y \in {\mathbb {Y}}\) is called a properly minimal element of \({\mathbb {Y}}\) in the sense of Benson [18] if y is a minimal element of \({\mathbb {Y}}\) and the zero element of \({\mathbb {R}}^n\) is a minimal element of \(\textsf {cl}(\textsf {cone}({\mathbb {Y}}+{\mathbb {C}} - \{y\}))\), where \(\textsf {cone}({\mathbb {Y}}) := \{\lambda y: \lambda \ge 0, y \in {\mathbb {Y}}\}\).

  4.

    An element \(\overline{y}\in {\mathbb {Y}} \) is called a properly minimal element of \({\mathbb {Y}}\) in the sense of Henig [19] if it is a minimal element of \({\mathbb {Y}}\) with respect to some closed convex pointed cone \({\mathbb {K}}\) with \({\mathbb {C}}{\setminus } \{0_{{\mathbb {R}}^n}\} \subset \textsf {int}({\mathbb {K}})\).

Henig proved that when the vector space is partially ordered by a closed convex pointed cone, the two proper efficiency notions given in Definition 1 are equivalent (see [19, Theorem 2.1]). Therefore, in the sequel we will simply use the term proper efficiency.

Consider a multiobjective optimization problem (in short MOP):

$$\begin{aligned} \mathrm{min}_{x \in {\mathbb {X}}} [f_1(x), \ldots , f_n(x)], \end{aligned}$$
(1)

where \({\mathbb {X}} \subset {\mathbb {R}}^m\) is a nonempty set of feasible solutions and \(f_i : {\mathbb {X}} \rightarrow {\mathbb {R}}, i=1,\ldots ,n\) are real-valued functions. Let \(f(x)=(f_1(x),\ldots , f_n(x))\) for every \(x \in {\mathbb {X}}\) and let \({\mathbb {Y}}:=f({\mathbb {X}})\).

Definition 2

A feasible solution \(x \in {\mathbb {X}}\) is called an efficient, weakly efficient, or properly efficient solution of the multiobjective optimization problem (1) if \(y=f(x)\) is a minimal, weakly minimal, or properly minimal element of \({\mathbb {Y}}\), respectively.

Let \(y=(y_{1},\ldots ,y_{n})\in {\mathbb {R}}^{n}\). The notations \(\Vert y \Vert _{1}=\sum _{i=1}^{n} |y_{i}|\), \(\Vert y\Vert _2 = (y_1^2+\cdots +y_n^2)^{1/2}\), and \(\Vert y\Vert _\infty = \max \{|y_1|,\ldots ,|y_n|\}\) denote the \(l_{1}\), \(l_2\) (Euclidean), and \(l_\infty \) norms of y, respectively.

Let \({\mathbb {C}}\) be a given cone in \({\mathbb {R}}^n.\) Recall that the dual cone \({\mathbb {C}}^{*}\) of \({\mathbb {C}}\) and its quasi-interior \({\mathbb {C}}^{\#}\) are defined by

$$\begin{aligned} {\mathbb {C}}^*=\{w \in {\mathbb {R}}^{n}: w^Ty \ge 0 \text{ for } \text{ all } y \in {\mathbb {C}}\} \end{aligned}$$
(2)

and

$$\begin{aligned} {\mathbb {C}}^{\#}=\{w \in {\mathbb {R}}^{n}: w^Ty > 0 \text{ for } \text{ all } y \in {\mathbb {C}} {\setminus } \{0\} \}, \end{aligned}$$
(3)

respectively, where \(w^T\) denotes the transpose of the vector w, and \(w^Ty = \sum _{i=1}^n w_iy_i\) is the scalar product of the vectors \(w=(w_1,\ldots ,w_n)\) and \(y=(y_1, \ldots , y_n).\) The elements of these cones define monotone and strongly monotone linear functionals whose level sets (hyperplanes) are used to characterize support points of convex sets.
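For concreteness, the defining inequalities (2) and (3) can be checked numerically on sampled cone points. The following Python sketch (our own illustration, with \({\mathbb {C}} = {\mathbb {R}}^2_+\)) suggests, without proving, that \(w=(1,2)\) belongs to \({\mathbb {C}}^*\) (indeed to \({\mathbb {C}}^{\#}\)), while \(w=(1,-1)\) does not.

```python
# A hedged numerical illustration of (2)-(3) for C = R^2_+: sampling
# suggests w = (1, 2) satisfies w^T y >= 0 on C, while w = (1, -1)
# violates it. A sanity check on random samples, not a proof.
import numpy as np

rng = np.random.default_rng(0)
C_samples = rng.uniform(0.0, 10.0, size=(1000, 2))  # random points of R^2_+

for w in (np.array([1.0, 2.0]), np.array([1.0, -1.0])):
    print(w, bool(np.all(C_samples @ w >= 0)))       # True, then False
```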

3 Main Characteristics of the Six Scalarization Methods

In this section, we give a brief review of the six scalarization methods for the MOP defined by (1), assuming that the objective space \({\mathbb {R}}^n\) is partially ordered by a closed convex pointed cone \({\mathbb {C}}\). For every method considered, the main characteristics and a geometrical interpretation emphasizing important properties will be given.

3.1 Weighted Sum Scalarization (WSS) Method

The WSS method was suggested by Gass and Saaty [20] in 1955 and is probably the most commonly used scalarization technique for MOPs.

Consider problem (1). The WSS method associates each objective function with a weighting coefficient determined by the decision maker and minimizes the weighted sum of the objectives. In general, the weight vector \(w=(w_1,\ldots ,w_n)\) is chosen from the dual cone \({\mathbb {C}}^*{\setminus }\{0\}\), where \({\mathbb {C}}\) denotes the ordering cone, and every \(w_i\) is associated with the objective function \(f_i(x)\) for \(i=1,\ldots ,n.\) The scalar problem for the given weight vector w is written as follows:

$$\begin{aligned} \mathrm{min}_{x \in {\mathbb {X}}} \sum _{i=1}^nw_if_i(x) \quad (\mathrm{WSS}(w)) \end{aligned}$$

The set of solutions of scalar problem \((\mathrm{WSS}(w))\) will be denoted by \(\mathrm{Sol}(\mathrm{WSS}(w)).\) The geometrical illustrations for solutions of \((\mathrm{WSS}(w))\) are given in Fig. 1a, b.

Fig. 1 Illustration of WSS method. a Convex objective space, b nonconvex objective space
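For concreteness, \((\mathrm{WSS}(w))\) can be solved numerically as sketched below. The toy bi-objective problem \(f(x)=(x_1,x_2)\) over the disc \((x_1-1)^2+(x_2-1)^2 \le 1\) and the SciPy-based solver are our own illustrative assumptions, not part of the method's formal statement; the same toy problem is reused in the sketches for the other methods.

```python
# A minimal sketch of the WSS method on an assumed toy problem:
# minimize f(x) = (x1, x2) over the disc (x1-1)^2 + (x2-1)^2 <= 1.
import numpy as np
from scipy.optimize import minimize

f = lambda x: np.array([x[0], x[1]])              # objective vector
disc = {'type': 'ineq',                           # feasible set X (fun >= 0)
        'fun': lambda x: 1 - (x[0] - 1)**2 - (x[1] - 1)**2}

def solve_wss(w):
    """Solve WSS(w): min_{x in X} sum_i w_i f_i(x)."""
    return minimize(lambda x: w @ f(x), x0=[1.0, 1.0], constraints=[disc]).x

# w in C^# = int(R^2_+) yields a properly efficient solution,
# cf. Theorem 1(iii) below.
print(solve_wss(np.array([1.0, 1.0])))            # approx. (0.293, 0.293)
```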

The well-known characteristics of the WSS method are given in the following two theorems, which can be found in [14, 15, 21].

Theorem 1

Let \(w \in {\mathbb {C}}^*{\setminus }\{0\}\) be a given weight vector, and let \(\mathrm{Sol}(\mathrm{WSS}(w))\ne \emptyset .\) Then, the following hold.

  • (i) [14, Theorem 3.4] Every element of \(\mathrm{Sol}(\mathrm{WSS}(w))\) is a weakly efficient solution of MOP.

  • (ii) [14, Proposition 3.8] If \(\mathrm{Sol}(\mathrm{WSS}(w))\) consists of a single element, then this element is an efficient solution of MOP.

  • (iii) [14, Theorem 3.15] If \(w\in {\mathbb {C}}^{\#}\), then every element of \(\mathrm{Sol}(\mathrm{WSS}(w))\) is a properly efficient solution of MOP.

Theorem 2

Assume that (1) is a convex problem.

  • (i) [21, Theorem 3.1] If \(\bar{x}\) is a weakly efficient solution of (1), then there exists \(w \in {\mathbb {C}}^{*},\) such that \(\bar{x}\) is optimal for \((\mathrm{WSS}(w))\).

  • (ii) [21, Theorem 3.2] If \(\bar{x}\) is a properly efficient solution of (1), then there exists \(w \in {\mathbb {C}}^{\#},\) such that \(\bar{x}\) is optimal for \((\mathrm{WSS}(w))\).

The main properties of WSS method are:

  • The WSS method can be applied for any closed convex pointed cone \({\mathbb {C}}\) serving as an ordering cone.

  • Boundedness below of the objective space is not an essential condition for applying the WSS method. Nevertheless, for some weights w, the problem \((\mathrm{WSS}(w))\) may not have a finite optimal solution.

  • The convexity condition is essential. The WSS method gives a complete characterization of weakly efficient and properly efficient solutions if the problem (1) is convex.

  • The method takes into account the weights of the objectives, but reference point information is not considered (for an illustration see Fig. 1).

  • The WSS method does not use additional constraints.

3.2 \(\varepsilon \)-Constraint (EC) Method

In this section, we discuss the EC method, which was introduced by Haimes et al. in 1971 [22] (see also [23]).

Consider problem (1). In the most common type of \(\varepsilon \)-constraint problem, one primary objective \(f_k\) is selected to be optimized, and the other objectives are converted into inequality constraints.

Let \(\varepsilon =(\varepsilon _1, \ldots ,\varepsilon _n) \in {\mathbb {R}}^n \), where every \(\varepsilon _i\) is associated with an objective function \(f_i(x)\) for \(i=1, \ldots , n.\) The scalar problem for the given vector \(\varepsilon \) and some \(k = 1,\ldots ,n\) is written as:

$$\begin{aligned}&\mathrm{min}_{x \in {\mathbb {X}}} f_k(x)\quad (\mathrm{EC}(\varepsilon ,k)) \\&\text{ s.t. } \quad f_i(x) \le \varepsilon _i, \quad i=1, \ldots , n, \ i\ne k. \end{aligned}$$

A geometrical illustration of the EC method for \(n=2\), with \(\mathrm{min} f_2(x)\) subject to \(f_1(x) \le \varepsilon _1\), is given in Fig. 2.

Fig. 2 Geometrical illustration of the EC method
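A minimal numerical sketch of \((\mathrm{EC}(\varepsilon ,k))\) on the same assumed toy problem as in Sect. 3.1 is given below; the helper names and the SciPy solver are our own illustrative choices.

```python
# A hedged sketch of EC(eps, k) on the assumed toy problem of Sect. 3.1:
# minimize f_k subject to f_i(x) <= eps_i for i != k (eps_k is ignored).
import numpy as np
from scipy.optimize import minimize

f = [lambda x: x[0], lambda x: x[1]]              # f_1, f_2
disc = lambda x: 1 - (x[0] - 1)**2 - (x[1] - 1)**2

def solve_ec(eps, k, x0=(0.5, 1.0)):
    cons = [{'type': 'ineq', 'fun': disc}]
    for i, fi in enumerate(f):
        if i != k:                                # add f_i(x) <= eps_i
            cons.append({'type': 'ineq',
                         'fun': lambda x, fi=fi, e=eps[i]: e - fi(x)})
    return minimize(lambda x: f[k](x), x0, constraints=cons).x

print(solve_ec(eps=[0.6, None], k=1))   # min f_2 s.t. f_1 <= 0.6
```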

The following two theorems quoted from [21, 23] characterize efficient solutions of (1) in terms of the solutions of the scalar problem \((\mathrm{EC}(\varepsilon ,k)).\)

Theorem 3

  • (i) [21, Theorem 2] Every optimal solution of \((\mathrm{EC}(\varepsilon ,k))\) is a weakly efficient solution of (1), and the set of all optimal solutions of \((\mathrm{EC}(\varepsilon ,k))\) contains at least one efficient solution of (1).

  • (ii) [21, Theorem 2] If \(\bar{x}\) is an efficient solution of (1), then there exist an index \(k \in \{1,\ldots ,n\}\) and a vector \(\varepsilon \) such that \(\bar{x}\) is an optimal solution of \((\mathrm{EC}(\varepsilon ,k)).\)

  • (iii) [23, Theorem 4.1] \(\bar{x}\) is an efficient solution of (1) if and only if \(\bar{x}\) solves \((\mathrm{EC}(\varepsilon ,k))\) for every \(k \in \{1, \ldots , n\}\) and \(\varepsilon =f(\bar{x}).\)

  • (iv) [23, Theorem 4.2] If \(\bar{x}\) solves \((\mathrm{EC}(\varepsilon ,k))\) for \(\varepsilon =f(\bar{x})\) and some \(k \in \{1, \ldots , n\}\), and if the solution is unique, then \(\bar{x}\) is an efficient solution of (1).

Remark 1

Note that the part “...and the set of all optimal solutions of \((\mathrm{EC}(\varepsilon ,k))\) contains at least one efficient solution of (1)...” of Theorem 3 (i) is not true in general. This part becomes true if suitable boundedness and closedness conditions are satisfied (see the illustrative example given in Sect. 4.2).

The following theorem establishes a relationship between the WSS and the EC methods.

Theorem 4

Assume that the ordering cone \({\mathbb {C}}\) equals \({\mathbb {R}}^n_+.\)

  • (i) [23, Lemma 4.1] Assume that (1) is a convex problem. If, for a given k, \(\bar{x}\) solves \((\mathrm{EC}(\varepsilon ,k))\) with \(\varepsilon = f(\bar{x}),\) then there exists \(w \in {\mathbb {R}}^n_+ \) such that \(\bar{x}\) also solves \((\mathrm{WSS}(w)).\)

  • (ii) [23, Lemma 4.2] If there exists \(w \in {\mathbb {R}}^n_+ \) such that \( w_k > 0 \) and \(\bar{x}\) solves \((\mathrm{WSS}(w))\), then \(\bar{x}\) also solves \((\mathrm{EC}(\varepsilon ,k))\) with \(\varepsilon =f(\bar{x}).\)

  • (iii) [23, Lemma 4.2] If \(\bar{x}\) is a unique solution of \((\mathrm{WSS}(w))\), then \(\bar{x}\) solves \((\mathrm{EC}(\varepsilon ,k))\) for all \(k \in \{1,\ldots ,n\}\) with \(\varepsilon =f(\bar{x}).\)

The main properties of this method are:

  • The EC method can be applied only in the case when the ordering cone equals \({\mathbb {R}}_+^n.\)

  • Boundedness from below is not an essential condition for the EC method. Nevertheless, the parameter \(\varepsilon \) should be chosen carefully: for some \(\varepsilon \), the problem \((\mathrm{EC}(\varepsilon ,k))\) may not have a finite optimal solution if the problem is not bounded below. On the other hand, the scalar problem \((\mathrm{EC}(\varepsilon ,k))\) becomes infeasible if \(\{x:f_i(x)\le \varepsilon _i, i=1,\ldots ,n, i\ne k \}\cap {\mathbb {X}} = \emptyset \) for the chosen \(\varepsilon .\)

  • The EC method does not require a convexity condition on the problem under consideration.

  • The EC method generates weakly efficient solutions and does not provide conditions for generating properly efficient solutions.

  • Decision maker’s preferences such as weights of objectives and reference point information are not taken into account in this method.

  • The problem size increases due to adding the constraints \(f_i(x)\le \varepsilon _i, i=1, \ldots , n, i\ne k.\)

3.3 Benson’s Scalarization (BS) Method

In this section, we discuss the scalarization method suggested by Benson [24] (see also [2]). The idea of this method is as follows: choose some initial feasible solution \(x^{0}\in {\mathbb {X}}\) and, if it is not itself efficient, produce a dominating solution that is. To this end, nonnegative deviation variables \(l_{i}=f_{i}(x^{0})-f_{i}(x)\) are introduced, and their sum is maximized. This yields a solution x dominating \(x^{0}\), if one exists, and maximizing the total deviation pushes f(x) as far from \(f(x^{0})\) as possible, which ensures that x is efficient. The corresponding scalar problem for given \(x^{0}\) is:

$$\begin{aligned}&\max \sum \nolimits _{i=1}^n l_i \qquad \left( \mathrm{BS}\left( x^0\right) \right) \\&\text{ s.t. } \quad f_{i}(x^{0})-l_{i}-f_{i}(x)=0, \quad i=1,\ldots , n \\&l\geqq 0, \ x \in {\mathbb {X}}. \end{aligned}$$

An illustration in the objective space demonstrates the idea (see Fig. 3). The initial feasible, but dominated, point \(f(x^{0})\) has coordinates greater than those of the efficient point \(f(\bar{x})\). By maximizing the total deviation \(l_1+l_2\), one finds a dominating solution, which is efficient.

Fig. 3 Geometrical illustration of the BS method
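For illustration, a hedged numerical sketch of \((\mathrm{BS}(x^0))\) on the same assumed toy problem as before follows; the stacked variable vector and the solver choice are our own.

```python
# A minimal sketch of BS(x0) on the assumed toy problem: the decision
# vector is z = (x1, x2, l1, l2), and l1 + l2 is maximized by
# minimizing its negative.
import numpy as np
from scipy.optimize import minimize

f = lambda x: np.array([x[0], x[1]])
x0 = np.array([1.0, 1.0])                         # initial feasible point

cons = [{'type': 'ineq', 'fun': lambda z: 1 - (z[0]-1)**2 - (z[1]-1)**2},
        {'type': 'eq',                            # f(x0) - l - f(x) = 0
         'fun': lambda z: f(x0) - z[2:] - f(z[:2])}]
bounds = [(None, None)] * 2 + [(0, None)] * 2     # l >= 0

res = minimize(lambda z: -(z[2] + z[3]), np.r_[x0, 0.0, 0.0],
               bounds=bounds, constraints=cons)
print(res.x[:2], res.x[2:])  # efficient x dominating x0, deviations l
```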

The following two theorems, which characterize the BS method, are quoted from [2].

Theorem 5

[2, Theorem 4.14] Assume that the ordering cone \({\mathbb {C}}\) equals \({\mathbb {R}}^n_+.\) The point \(x^{0}\in {\mathbb {X}}\) is an efficient solution of problem (1) if and only if the optimal objective value of \((\mathrm{BS}(x^0))\) is 0.

The strength of the method lies in the fact that whenever problem \((\mathrm{BS}(x^0))\) has a finite optimal objective value, the optimal solution is efficient. The following theorem states this assertion precisely.

Theorem 6

[2, Proposition 4.15] Assume that the ordering cone \({\mathbb {C}}\) equals \({\mathbb {R}}^n_+.\) If problem \((\mathrm{BS}(x^0))\) has an optimal solution \((\bar{x},\bar{l})\), then \(\bar{x}\) is an efficient solution of MOP.

The main properties of this method are:

  • The BS method can be applied only in the case when the ordering cone \({\mathbb {C}}\) equals \({\mathbb {R}}_+^n.\)

  • Although boundedness is not an essential condition for the BS method, in applications it may not be easy to find a “successful” initial solution \(x^0\) such that problem \((\mathrm{BS}(x^0))\) has a finite solution. The reason is that the scalar problem \((\mathrm{BS}(x^0))\) may have no finite solution if the set \((f(x^0)-{\mathbb {R}}_+^{n})\cap f({\mathbb {X}})\) is unbounded (see the illustrative example given in Sect. 4.3). Soleimani-damaneh and Zamani proved that the set of properly efficient solutions is empty when the BS problem is unbounded (see [25, Theorem 1]).

  • The BS method does not require convexity assumptions.

  • The method provides necessary and sufficient conditions for efficient solutions, but does not provide conditions guaranteeing the generation of properly efficient solutions.

  • The preferences of decision maker such as weights of objectives and reference points are not taken into account in this method.

  • The problem size increases due to the addition of the n new decision variables \(l_{i}\), the n functional constraints \(f_{i}(x^{0})-l_{i}-f_{i}(x)=0, i=1,\ldots , n,\) and the n nonnegativity constraints \(l\geqq 0\) on the new decision variables.

3.4 Weighted Chebyshev Scalarization (WCS) Method

The WCS method, originally suggested in [26] (see also [21, 27]), uses preference information received from the decision maker to find a set of efficient solutions. The preference information for (1) consists of a weight vector \(w=(w_1,\ldots ,w_n) \in {\mathbb {R}}_+^n\) and an ideal point \(z^{*}=(z^*_1,\ldots ,z^*_n),\) which is defined as follows:

$$\begin{aligned} z^*_i=\text {min}_{x \in {\mathbb {X}}}f_i(x), \quad i=1,\ldots ,n. \end{aligned}$$

Then, the weighted Chebyshev scalar problem corresponding to (1) can be written as follows:

$$\begin{aligned} \mathrm{min}_{x \in {\mathbb {X}}} \Vert f(x)-z^{*}\Vert _\infty ^{w}\ \end{aligned}$$
(4)

where \(\Vert f(x)-z^{*}\Vert _\infty ^{w}= \text {max}_{i=1,\ldots , n}\{w_i(f_i(x)-z_i^*)\}\) is the weighted Chebyshev distance between the ideal point \(z^*\) and the point \(f(x)\in {\mathbb {Y}}\). Note that due to the definition of the ideal point we have \(f_i(x)-z_i^* \ge 0\) for all \(x \in {\mathbb {X}}\), \(i=1,\ldots ,n\).

By linearizing the “max” term in the objective function, the problem (4) can be reformulated in the following form:

$$\begin{aligned}&\mathrm{min} \quad t \qquad (\mathrm{WCS}(w,z^*)) \\&\text{ s.t. } \quad w_i(f_i (x)-z_i^*) \le t, \quad i=1,\ldots ,n \\&x \in {\mathbb {X}}. \end{aligned}$$

The geometrical illustration of the method is given in Fig. 4.

Fig. 4 Geometrical illustration of the WCS method
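A small numerical sketch of the linearized problem \((\mathrm{WCS}(w,z^*))\) on the assumed toy problem follows; computing the ideal point by n single-objective solves, and the solver itself, are our own illustrative choices.

```python
# A hedged sketch of WCS(w, z*): first compute the ideal point z*
# componentwise, then solve the linearized min-max problem in (x, t).
import numpy as np
from scipy.optimize import minimize

f = lambda x: np.array([x[0], x[1]])
disc = lambda x: 1 - (x[0] - 1)**2 - (x[1] - 1)**2

# ideal point: z*_i = min_{x in X} f_i(x)
z_star = np.array([minimize(lambda x, i=i: f(x)[i], [1.0, 1.0],
                            constraints=[{'type': 'ineq', 'fun': disc}]).fun
                   for i in range(2)])

def solve_wcs(w):
    cons = [{'type': 'ineq', 'fun': lambda z: disc(z[:2])},
            {'type': 'ineq',                  # w_i (f_i(x) - z*_i) <= t
             'fun': lambda z: z[2] - w * (f(z[:2]) - z_star)}]
    return minimize(lambda z: z[2], [1.0, 1.0, 1.0], constraints=cons).x[:2]

print(solve_wcs(np.array([1.0, 1.0])))        # approx. (0.293, 0.293)
```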

The following results on characteristics of the WCS method are well known (see, for example, [21]).

Theorem 7

Assume that the ordering cone \({\mathbb {C}}\) equals \({\mathbb {R}}^n_+.\)

(i) [21, Theorem 5.1] Every optimal solution of \((\mathrm{WCS}(w,z^*))\) is weakly efficient for (1), and the set of all optimal solutions of \((\mathrm{WCS}(w,z^*))\) contains at least one efficient solution of (1). If the optimal solution of \((\mathrm{WCS}(w,z^*))\) is unique, then it is efficient for (1).

(ii) [21, Theorem 5.2] If \(\bar{x}\) is an efficient solution of (1), then there exists \( w>0\) such that \(\bar{x}\) is optimal for \((\mathrm{WCS}(w,z^*))\).

The main characteristics of this method are listed below. For illustration of these characteristics, see Sect. 4.4.

  • The WCS method can be applied only in the case when the ordering cone \({\mathbb {C}}\) equals \({\mathbb {R}}_+^n.\)

  • To apply the WCS method, the objective space of the problem is required to be bounded from below; this condition guarantees the existence of the ideal point. In Theorem 7(i), lower boundedness, or the existence of the ideal point, is essential: otherwise, the set of solutions of \((\mathrm{WCS}(w,z^*))\) may not contain an efficient solution (for illustrations see Sect. 4.4). In applications, it may not be easy to find the ideal point.

  • No convexity assumption is needed in this method.

  • The WCS method generates weakly efficient solutions (besides efficient ones), but does not provide conditions guaranteeing the generation of properly efficient solutions.

  • The method takes into consideration decision maker’s preferences on weights of objectives.

  • The ideal point used in this method can be considered as a reference point in some sense. The ideal point and the weight vector together may lead to an efficient (or weakly efficient) solution which is close (in some sense) to the ideal point, but this is not guaranteed in general. The method computes support points of the objective space with respect to (n-dimensional) rectangles whose side lengths are proportional to the components of the weight vector; the ideal point plays the role of the center of this rectangle. Therefore, depending on the sides of the rectangle obtained, the scalar problem \((\mathrm{WCS}(w,z^*))\) may lead to support points which are not particularly close to the center of this rectangle.

  • The method uses n additional functional constraints \(w_i(f_i (x)-z_i^*) \le t, i=1,\ldots ,n\) which may complicate the solution process.

3.5 Pascoletti–Serafini Scalarization (PSS) Method

The method known as the Pascoletti–Serafini scalarization method is related to the scalar problem introduced by Gerstewitz in [28], which was also studied in [10, 11, 29] by Tammer, Weidner, Winkler, Pascoletti, and Serafini.

The scalar problem of the PSS method is defined as follows:

$$\begin{aligned}&\mathrm{min} \quad t \qquad (\mathrm{PSS}(a,r)) \\&\text{ s.t. } \quad a+tr-f(x) \in {\mathbb {C}} \\&x \in {\mathbb {X}}, \ t \in {\mathbb {R}}, \end{aligned}$$

where \(a \in {\mathbb {R}}^n \) and \(r \in {\mathbb {C}} \) are parameters of \((\mathrm{PSS}(a,r))\). The problem \((\mathrm{PSS}(a,r))\) can also be written in the form (see [15])

$$\begin{aligned}&\mathrm{min} \quad t \nonumber \\&\text{ s.t. } \quad (a+tr-{\mathbb {C}}) \cap f(X) \ne \emptyset , \qquad t \in {\mathbb {R}}. \end{aligned}$$
(5)

This problem can be interpreted as follows. The ordering cone \({\mathbb {C}}\) is moved in direction \(-r\) along the line \(a+tr\) until the set \((a+tr- {\mathbb {C}}) \cap f(X)\) is reduced to the empty set. The smallest value \(\bar{t}\) for which \((a+ \bar{t} r- {\mathbb {C}}) \cap f(X) \ne \emptyset \) is the optimal value of (5). If the pair \((\bar{t},\bar{x})\) is a solution of \((\mathrm{PSS}(a,r))\), then the element \(\bar{y} = f(\bar{x})\) with \(\bar{y} \in (a+ \bar{t} r- {\mathbb {C}}) \cap f(X) \) is characterized as a weakly minimal solution of (1) (see Fig. 5).

Fig. 5 Graphical illustration of PSS method
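A hedged numerical sketch of \((\mathrm{PSS}(a,r))\) with \({\mathbb {C}}={\mathbb {R}}^2_+\) on the assumed toy problem follows; with this cone, the constraint \(a+tr-f(x) \in {\mathbb {C}}\) reduces to componentwise inequalities.

```python
# A minimal sketch of PSS(a, r) with C = R^2_+ on the assumed toy
# problem; the variable vector is z = (x1, x2, t).
import numpy as np
from scipy.optimize import minimize

f = lambda x: np.array([x[0], x[1]])
disc = lambda x: 1 - (x[0] - 1)**2 - (x[1] - 1)**2

def solve_pss(a, r):
    cons = [{'type': 'ineq', 'fun': lambda z: disc(z[:2])},
            {'type': 'ineq',              # a + t r - f(x) >= 0 componentwise
             'fun': lambda z: a + z[2] * r - f(z[:2])}]
    res = minimize(lambda z: z[2], [1.0, 1.0, 1.5], constraints=cons)
    return res.x[:2], res.x[2]

x_bar, t_bar = solve_pss(a=np.array([0.0, 0.0]), r=np.array([1.0, 1.0]))
print(x_bar, t_bar)   # x_bar is (at least) weakly efficient, cf. Theorem 8(iii)
```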

The following two theorems, quoted from [15] and [11], explain the main properties of the PSS method.

Theorem 8

[15, Theorem 2.1] Assume that \(\mathrm{int}( {\mathbb {C}}) \ne \emptyset .\)

(i) If \(\bar{x}\) is a weakly efficient solution of (1), then \((0,\bar{x})\) is an optimal solution of \((\mathrm{PSS}(a,r))\) for the parameter \(a=f(\bar{x})\) and arbitrary \(r \in \mathrm{int}({\mathbb {C}}).\)

(ii) If \(\bar{x}\) is an efficient solution of (1), then \((0,\bar{x})\) is an optimal solution of \((\mathrm{PSS}(a,r))\) for the parameter \(a=f(\bar{x})\) and arbitrary \( r \in {\mathbb {C}} {\setminus } \{0_{{\mathbb {R}}^n}\}.\)

(iii) If \((\bar{t},\bar{x})\) is an optimal solution of \((\mathrm{PSS}(a,r))\), then \(\bar{x}\) is a weakly efficient solution of (1) and \(a+\bar{t}r-f(\bar{x}) \in \partial {\mathbb {C}} \), with \(\partial {\mathbb {C}}\) the boundary of \({\mathbb {C}}.\)

If \((\bar{t},\bar{x})\) is an optimal solution of \((\mathrm{PSS}(a,r))\) with \(\bar{x}\) a weakly efficient, but not efficient, solution of (1), we have the following property for the points dominating the point \(f(\bar{x}).\)

Theorem 9

[11, Theorem 3.3] [15, Theorem 2.8] If the point \((\bar{t},\bar{x})\) is an optimal solution of \((\mathrm{PSS}(a,r))\) with \(\bar{k}:=a+\bar{t}r-f(\bar{x})\), and if there is a point \(y=f(x)\in f(X)\) dominating \(f(\bar{x})\) w.r.t. the cone \( {\mathbb {C}},\) then \((\bar{t},x)\) is also an optimal solution of \((\mathrm{PSS}(a,r))\) and there exists \(k \in \partial {\mathbb {C}}, k \ne 0_{{\mathbb {R}}^n}\), with \(a+\bar{t}r-f(x)=\bar{k}+k.\)

From this theorem the following two conclusions can be made.

Corollary 1

[15, Corollary 2.9] If the point \((\bar{t},\bar{x})\) is an image-unique optimal solution of the scalar problem \((\mathrm{PSS}(a,r))\) w.r.t. f, i.e., there is no other optimal solution \((t,x)\) with \(f(x)=f(\bar{x})\), then \(\bar{x}\) is an efficient solution of (1).

Corollary 2

[11, Theorem 3.7] [15, Corollary 2.10] A point \(\bar{x}\) is an efficient solution of (1) if

(i) there is some \(\bar{t}\in {\mathbb {R}}\) such that \((\bar{t},\bar{x})\) is an optimal solution of \((\mathrm{PSS}(a,r))\) for some parameters \(a\in {\mathbb {R}}^n\) and \(r\in \mathrm{int}({\mathbb {C}}),\) and

(ii) \(((a+\bar{t}r)-\partial {\mathbb {C}})\cap (f(\bar{x})-\partial {\mathbb {C}})\cap f(X)=\{f(\bar{x})\}\).

Remark 2

It follows from Theorems 8 and 9 and Corollaries 1 and 2 that it is very difficult to identify the cases in which the weakly efficient solutions generated by the PSS method are efficient. For example, checking the relation given in Corollary 2 (ii) is equivalent to checking the definition of minimality. If \((\bar{t},\bar{x})\) is an optimal solution of \((\mathrm{PSS}(a,r))\), then the point \(\bar{x}\) is a weakly efficient solution of (1), and by varying the parameters \((a,r) \in {\mathbb {R}}^n \times \mathrm{int}({\mathbb {C}}),\) different weakly efficient solutions of (1) can be obtained by solving \((\mathrm{PSS}(a,r)).\) However, the set of weakly efficient solutions obtained by solving \((\mathrm{PSS}(a,r))\) strongly depends on the parameter a. Theorem 10 below clarifies this situation.

Theorem 10

Let \(\mathrm{int}( {\mathbb {C}}) \ne \emptyset .\) Assume that \(a \in {\mathbb {R}}^n\) satisfies the following property

$$\begin{aligned} \{a+tr: t \in {\mathbb {R}}\} \cap f(X) = \emptyset \end{aligned}$$
(6)

for all \(r \in \mathrm{int}({\mathbb {C}}).\) Then, the set of weakly efficient solutions obtained by solving \((\mathrm{PSS}(a,r))\) is the same for all \(r \in \mathrm{int}({\mathbb {C}}).\)

Proof

Let \(r_1\in \mathrm{int}({\mathbb {C}})\) be arbitrary, and let \(\{t_1\}\times X_1\) be the set of optimal solutions of \((\mathrm{PSS}(a,r_1)).\) Then, it is clear that f(x) is a weakly minimal point of f(X) for every \(x \in X_1.\) Let \(F(a,r_1) = \{f(x) : x \in X_1\}.\) Then, \(a+r_1t_1 \notin f(X)\) by the hypothesis, and the following relations are satisfied.

$$\begin{aligned}&(a+r_1\tilde{t} - {\mathbb {C}}) \cap f(X) = \emptyset \text{ for } \text{ every } \tilde{t}<t_1,\\&(a+r_1t_1 - {\mathbb {C}}) \cap f(X) = F(a,r_1),\\&(a+r_1t_1 - \mathrm{int}({\mathbb {C}})) \cap f(X) = \emptyset . \end{aligned}$$

We also have \((f(x) + {\mathbb {C}}) \cap \{a+r_1t : t \in {\mathbb {R}}\} = \{a+r_1t: t \ge t_1\}\) for every \(f(x) \in F(a,r_1).\)

Let \(r_2\in \mathrm{int}({\mathbb {C}})\) be another vector with \(r_2 \ne r_1.\) Then, since \(\mathrm{int}({\mathbb {C}}) \ne \emptyset \) and \({\mathbb {C}}\) is convex, \({\mathbb {C}}\) is reproducing and thus \({\mathbb {R}}^n = {\mathbb {C}} - {\mathbb {C}}.\) Therefore, there exists \(t_2 \in {\mathbb {R}}\) such that

$$\begin{aligned} (F(a,r_1) + {\mathbb {C}}) \cap \{a+r_2t : t \in {\mathbb {R}}\} = \{a+r_2t : t\ge t_2\}, \quad a+r_2t_2 \notin f(X). \end{aligned}$$

Clearly,

$$\begin{aligned} (\{a+r_2t_2\} - {\mathbb {C}}) \cap f(X) = F(a,r_1). \end{aligned}$$

We now show that \(\{t_2\}\times X_1\) is the set of optimal solutions of \((\mathrm{PSS}(a,r_2)).\) Assume to the contrary that \((\mathrm{PSS}(a,r_2))\) has an optimal solution \((\tilde{t}_2,\tilde{x}_2)\) with \(\tilde{t}_2 < t_2\) and \(\tilde{x}_2 \notin X_1.\) Then, we obtain \(a+\tilde{t}_2r_2 < a+r_2t_2, \) and hence \(f(\tilde{x}_2) \in \{a+r_2t_2\} - \mathrm{int}({\mathbb {C}}).\) Thus, \(f(\tilde{x}_2) \in F(a,r_1) - \mathrm{int}({\mathbb {C}}),\) which implies that \((f(\tilde{x}_2) + \mathrm{int}({\mathbb {C}})) \cap F(a,r_1) \ne \emptyset . \) This means that there exists \(f(x_1) \in F(a,r_1)\) with \(f(\tilde{x}_2) < f(x_1).\) Since \(f(x_1)\) is a weakly minimal point of f(X), this is a contradiction, and the theorem is proved. \(\square \)

Remark 3

Assume that

$$\begin{aligned} \{a+tr: t \in {\mathbb {R}} \} \cap f(X) = \emptyset \end{aligned}$$
(7)

for some \(a \in {\mathbb {R}}^n\) and some r from the boundary of \( {\mathbb {C}}.\) Then, problem \((\mathrm{PSS}(a,r))\) may not have an optimal solution (for an illustration see Fig. 6). Moreover, for such a vector a, the same set of optimal solutions may be obtained for all \(r \in {\mathbb {C}}.\) This is illustrated in Fig. 6, where for the vector a, the set of solutions placed on the line segment joining the points \((-2,2)\) and \((-2,3)\) will be obtained for all \(r \in \mathrm{int}({\mathbb {R}}^n_+).\) This situation also demonstrates that the vector r cannot be regarded as a reference point in the PSS method.

Fig. 6 Graphical illustration of Remark 3

The following theorem establishes a relationship between the PSS and the WSS methods.

Theorem 11

[15, Theorem 2.39] A point \(\bar{x}\) is an optimal solution of \((\mathrm{WSS}(w))\) for the parameter \(w \in {\mathbb {C}}^* {\setminus }\{0_{{\mathbb {R}}^n}\} \) if and only if there is some \(\bar{t}\) such that \((\bar{t},\bar{x})\) is an optimal solution of \((\mathrm{PSS}(a,r))\) with \(a \in {\mathbb {R}}^n\) arbitrarily chosen, the cone \({\mathbb {C}}_w := \{y \in {\mathbb {R}}^n : w^{T}y \ge 0 \} \) and \(r \in \mathrm{int}({\mathbb {C}}_w).\)

Remark 4

It has been shown, under certain conditions, that solutions obtained by solving the \(\varepsilon \)-constraint scalar problem \((\mathrm{EC}(\varepsilon , k))\) can be obtained by solving the Pascoletti–Serafini scalar problem \((\mathrm{PSS}(a,r))\) and vice versa, with \(a_{i} = \varepsilon _{i}\) for all \(i \in \{1, \ldots , n\} {\setminus } \{k\},\) \(a_{k} = 0,\) \(r = e_{k}\) with \(e_k\) the kth unit vector in \({\mathbb {R}}^n,\) and the ordering cone \({\mathbb {C}} = {\mathbb {R}}^n_+;\) see, for example, [15, Theorem 2.27]. It follows from this theorem that the \(\varepsilon \)-constraint method is a special case of the PSS method with the parameter a chosen only from the hyperplane \(H=\{y \in {\mathbb {R}}^n : y_k = 0\}\) and the constant parameter \(r=e_k.\)

The following theorem establishes a relationship between the PSS and the WCS methods.

Theorem 12

[15, Theorem 2.35] A point \((\bar{t}, \bar{x}) \in {\mathbb {R}} \times {\mathbb {X}}\) is a minimal solution of \((\mathrm{PSS}(a,r))\) with \({\mathbb {C}} = {\mathbb {R}}^n_+\) and parameters \(a \in {\mathbb {R}}^n, a_i < \mathrm{min}_{x \in {\mathbb {X}}} f_i(x), i=1, \ldots , n\) and \(r \in \mathrm{int}({\mathbb {R}}^n_+)\) if and only if \(\bar{x}\) is a solution of \((\mathrm{WCS}(w,a))\) with weights \(w_i = 1/r_i > 0, i=1, \ldots , n.\)

Thus, a variation of the weights in the norm corresponds to a variation of the direction r, and a variation of the reference point is like a variation of the parameter a in the PSS method.

The main characteristics of this method can be listed as follows.

  • The PSS method can be applied for an arbitrary ordering cone \({\mathbb {C}}\).

  • The boundedness from below is not an essential condition for the PSS method.

  • The PSS method does not require convexity conditions on the problem.

  • Any solution of a Pascoletti–Serafini scalar problem is at least weakly efficient. Moreover, any weakly efficient solution can be found as a solution of a Pascoletti–Serafini scalar problem under the lower boundedness condition, that is, if \(f(X) \subset \{a\} + {\mathbb {C}}\); see [30, Example 7.7.1]. The method does not provide conditions for obtaining (solely) minimal and properly minimal points.

  • The method does not take into account the information on the weights.

  • The method uses additional functional constraints of the form \(a+tr-f(x) \in {\mathbb {C}}\), which may complicate the solution process. In the particular case \({\mathbb {C}}={\mathbb {R}}_+^n\), the number of additional constraints becomes n.

  • If for the given parameter \(a \in {\mathbb {R}}^n,\) the straight line \(\{a+tr: t\in {\mathbb {R}} \}\) does not intersect the objective space f(X) for some \(r \in {\mathbb {C}},\) then the same set of weakly minimal points will be obtained for all \(r \in \mathrm{int}({\mathbb {C}}).\)

  • Under certain conditions, solutions obtained by solving the Pascoletti–Serafini scalar problem can be obtained by solving the WSS, EC, and WCS problems and vice versa.

  • The BS method cannot be subsumed under the PSS method.

  • It is important to select the parameter r from the interior of the ordering cone, to guarantee existence of an optimal solution to \((\mathrm{PSS}(a,r)).\)

3.6 Conic Scalarization (CS) Method

The history of the CS method goes back to the paper [31], where Gasimov introduced a class of monotonically increasing sublinear functions on partially ordered real normed spaces and showed, without convexity and boundedness assumptions, that support points of a set obtained by using these functions are properly minimal in the sense of Benson [18]. The question of whether every properly minimal point of a set can be calculated in a similar way was answered only in the case when the objective space is partially ordered by a certain Bishop–Phelps cone. Since then, different theoretical and practical applications using the suggested class of sublinear functions have been realized (see, for example, [12, 32,33,34,35,36,37]). The theoretical fundamentals of the conic scalarization method in general form were first explained in [36]. The full description of the method is given in [38].

Recall the following definitions of augmented dual cones introduced in [36].

$$\begin{aligned}&{\mathbb {C}}^{a*}=\{(y^{*},\alpha ) \in {\mathbb {C}}^{\#} \times {\mathbb {R}}_{+}: y^{*T}y- \alpha \Vert y\Vert \ge 0 \text{ for } \text{ all } y \in {\mathbb {C}} \}, \end{aligned}$$
(8)
$$\begin{aligned}&{\mathbb {C}}^{a\circ }=\{(y^{*},\alpha ) \in {\mathbb {C}}^{\#} \times {\mathbb {R}}_{+}: y^{*T}y- \alpha \Vert y\Vert > 0 \text{ for } \text{ all } y \in {\mathrm{int}} ({\mathbb {C}}) \}, \end{aligned}$$
(9)

and

$$\begin{aligned} {\mathbb {C}}^{a\#}=\{(y^{*},\alpha ) \in {\mathbb {C}}^{\#} \times {\mathbb {R}}_{+}: y^{*T}y - \alpha \Vert y\Vert > 0 \text{ for } \text{ all } y \in {\mathbb {C}} {\setminus } \{0\} \}, \end{aligned}$$
(10)

where \({\mathbb {C}}\) is assumed to have a nonempty interior in the definition of \({\mathbb {C}}^{a\circ }\).

The idea of the CS method is simple: choose preference parameters consisting of a weight vector \(w \in {\mathbb {C}}^{\#}\) and a reference point \(a\in {\mathbb {R}}^n,\) determine an augmentation parameter \(\alpha \in {\mathbb {R}}_+\) such that \((w,\alpha ) \in {\mathbb {C}}^{a*}\) (or \((w,\alpha ) \in {\mathbb {C}}^{a\circ },\) or \((w,\alpha )\in {\mathbb {C}}^{a\#}\)), where for convenience the \(l_1\)-norm is used, and solve the scalar optimization problem:

$$\begin{aligned} \mathrm{min}_{x \in {\mathbb {X}}} \sum _{i=1}^nw_i(f_i(x) - a_i) + \alpha \sum _{i=1}^n|f_i(x) - a_i| \qquad \qquad (\mathrm{CS}(w,\alpha ,a)) \end{aligned}$$

The set of optimal solutions of this problem will be denoted by \(\mathrm{Sol}(\mathrm{CS}(w,\alpha ,a)).\) The reference point \(a=(a_1, \ldots , a_n)\) may be specified by the decision maker when he/she wishes to compute minimal elements that are close to some point. The CS method does not impose any restrictions on the way reference points are determined; the reference point can be chosen arbitrarily.
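For concreteness, a hedged numerical sketch of \((\mathrm{CS}(w,\alpha ,a))\) on the assumed toy problem of Sect. 3.1 follows. Since the scalarizing function is nonsmooth, a subgradient-capable solver or an epigraph reformulation may be preferable in general; the direct SciPy call below is our own illustrative choice and is adequate for this particular toy problem.

```python
# A minimal sketch of CS(w, alpha, a) with the l1 augmentation term on
# the assumed toy problem; the nonsmooth |.| terms are handed directly
# to the solver here for brevity.
import numpy as np
from scipy.optimize import minimize

f = lambda x: np.array([x[0], x[1]])
disc = {'type': 'ineq', 'fun': lambda x: 1 - (x[0]-1)**2 - (x[1]-1)**2}

def solve_cs(w, alpha, a):
    obj = lambda x: w @ (f(x) - a) + alpha * np.sum(np.abs(f(x) - a))
    return minimize(obj, [1.0, 1.0], constraints=[disc]).x

# (w, alpha) with 0 <= alpha < min_i w_i lies in C^{a#} (see Theorem 13
# below), so the computed solution is properly efficient.
print(solve_cs(w=np.array([1.0, 1.0]), alpha=0.5, a=np.array([0.0, 0.0])))
```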

The following theorem, quoted from [38], explains the main properties of solutions obtained by the conic scalarization method in the case when \({\mathbb {C}} = {\mathbb {R}}_+^n.\) This special case of the cone determining the partial order allows one to explicitly describe the augmented dual cones used for choosing the scalarizing parameters \((w,\alpha ).\) For the general case of this theorem, see [36, Theorem 5.4].

Theorem 13

[38, Theorem 6] Let \({\mathbb {Y}} \subset {\mathbb {R}}^n\) be a given nonempty set, let \(a \in {\mathbb {R}}^n\) be a given reference point, and let \({\mathbb {C}} = {\mathbb {R}}_+^n.\) Assume that \(\mathrm{Sol}(\mathrm{CS}(w,\alpha ,a))\ne \emptyset \) for a given pair \((w,\alpha )\in {\mathbb {C}}^{a*}\). Then, the following hold.

  • (i) If

    $$\begin{aligned} (w,\alpha )\in {\mathbb {C}}^{a\circ } = \{((w_1,\ldots ,w_n),\alpha ): 0 \le \alpha \le w_i, w_i>0, i=1, \ldots , n \\ \qquad \qquad \text{ and } \text{ there } \text{ exists } k \in \{1, \cdots , n\} \text{ such } \text{ that } w_k > \alpha \}, \end{aligned}$$

    then every element of \(\mathrm{Sol}(\mathrm{CS}(w,\alpha ,a))\) is a weakly efficient solution of (1).

  • (ii) If \(\mathrm{Sol}(\mathrm{CS}(w,\alpha ,a))\) consists of a single element, then it is an efficient solution of (1).

  • (iii) If

    $$\begin{aligned} (w,\alpha )\in {\mathbb {C}}^{a\#} = \{((w_1,\ldots ,w_n),\alpha ): 0 \le \alpha < w_i, i=1, \ldots , n\}, \end{aligned}$$

    then every element of \(\mathrm{Sol}(\mathrm{CS}(w,\alpha ,a))\) is a properly efficient solution of (1); conversely, if \(\overline{x}\) is a properly efficient solution of (1), then there exist \((w,\alpha )\in {\mathbb {C}}^{a\#}\) and a reference point \(a \in {\mathbb {R}}^n\) such that \(\overline{x}\) is an element of \(\mathrm{Sol}(\mathrm{CS}(w,\alpha ,a)).\)

The following theorem gives a simple characterization of minimal elements.

Theorem 14

[38, Theorem 7] Let \({\mathbb {Y}} \subset {\mathbb {R}}^n\) be a given nonempty set and let \({\mathbb {C}} = {\mathbb {R}}^n_+.\) If \(\overline{y}\) is a minimal element of \({\mathbb {Y}},\) then \(\overline{y}\) is an optimal solution of the following scalar optimization problem:

$$\begin{aligned} \mathrm{min}_{y \in {\mathbb {Y}}} \left\{ \sum _{i=1}^n(y_i- \overline{y}_i) + \sum _{i=1}^n |y_i- \overline{y}_i| \right\} . \end{aligned}$$
(11)

By using the assertions of Theorems 13 and 14, we arrive at the following conclusion: by solving the problem \((\mathrm{CS}(w,\alpha ,a))\) for “all” possible values of the augmentation parameter \(\alpha \) between 0 and \(\mathrm{min} \{w_1,\ldots ,w_n\}\), one can calculate all the efficient solutions corresponding to the decision maker’s preferences (the weighting vector \(w=(w_1,\ldots ,w_n)\) and the reference point a).

The following two remarks illustrate the geometry of the CS method.

Remark 5

It is clear that in the case \(\alpha =0\) (or if \(f({\mathbb {X}}) \subseteq \{a\} \pm {\mathbb {C}}\)), the objective function of the scalar optimization problem \((\mathrm{CS}(w,\alpha ,a))\) becomes the objective function of the weighted sum scalarization method. The minimization of such an objective function over a feasible set enables one to obtain only those efficient solutions x (if the corresponding scalar problem has a solution) for which the minimal vector f(x) is a support point of the objective space with respect to some hyperplane

$$\begin{aligned} H(w) = \left\{ y : w^Ty=\beta \right\} , \end{aligned}$$

where \(\beta = w^Tf(x).\) Obviously, minimal points which are not support points of the objective space with respect to some hyperplane cannot be detected in this way. By augmenting the linear part in \((\mathrm{CS}(w,\alpha ,a))\) with the norm term (using a positive augmentation parameter \(\alpha \)), the hyperplane H(w) becomes a conic surface defined by the cone

$$\begin{aligned} S(w,\alpha )=\{y \in {\mathbb {R}}^n : w^Ty + \alpha \Vert y\Vert \le 0\}, \end{aligned}$$
(12)

and therefore the corresponding scalar problem \((\mathrm{CS}(w,\alpha ,a))\) computes a solution x for which the corresponding vector f(x) is a support point of the objective space with respect to this cone. A change in \(\alpha \) leads to a different supporting cone. The supporting cone corresponding to some weight vector w becomes narrower as \(\alpha \) increases, and the smallest cone (which always contains the ordering cone) is obtained when \(\alpha \) equals its maximum allowable value (for example, \(\mathrm{min} \{w_1,\ldots ,w_n\},\) if \((w,\alpha ) \in {\mathbb {C}}^{a\#}\)). This analysis shows that by changing the parameter \(\alpha \), one can compute different minimal points of the problem corresponding to the same weight vector. And since the method computes support points of the objective space with respect to cones (if \(\alpha \ne 0\)), it becomes clear why this method does not require convexity and boundedness conditions and why it is able to find optimal points which cannot be detected by hyperplanes. Since the cases \(\alpha =0\) and \(f({\mathbb {X}}) \subseteq \{a\} \pm {\mathbb {C}}\) lead to the objective function of the weighted sum scalarization method, we can say that the CS method is a generalization of the weighted sum scalarization method. The geometrical illustration of the CS method is presented in Fig. 7, where two solutions corresponding to the same weight vector w and the same reference point a, but different values of the augmentation parameter, are depicted. As can be seen from this figure, these solutions are support points of the objective space with respect to the cones corresponding to the objective functions of the scalar problems \((\mathrm{CS}(w,\alpha _i,a)), i=1,2,\) where \(\alpha _2 > \alpha _1.\) The cone corresponding to \(\alpha _2\) is therefore narrower than the one for \(\alpha _1,\) and it is clear from the graphical illustration that the minimal point \(f(x_2)\) cannot be computed using the weighted sum scalarization method.

Fig. 7 Geometrical illustration of the CS method

Remark 6

It follows from the definition of augmented dual cone that \(w^Ty - \alpha \Vert y\Vert \ge 0\) for every \((w,\alpha ) \in {\mathbb {C}}^{a*}\) and all \(y \in {\mathbb {C}}.\) Hence,

$$\begin{aligned} {\mathbb {C}} \subset C(w,\alpha )=\{y\in {\mathbb {R}}^n : w^Ty-\alpha \Vert y\Vert \ge 0 \}, \end{aligned}$$
(13)

where \(C(w,\alpha )\) is known as the Bishop–Phelps cone corresponding to a pair \((w,\alpha ) \in {\mathbb {C}}^{a*}.\) It has been proved that if \({\mathbb {C}}\) is a closed convex pointed cone having a weakly compact base, then

$$\begin{aligned} {\mathbb {C}} = \cap _{(w,\alpha ) \in {\mathbb {C}}^{a*}} C(w,\alpha ), \end{aligned}$$

see [36, Theorems 3.8 and 3.9].

On the other hand, since \(w^Ty - \alpha \Vert y\Vert \ge 0\) for every \((w,\alpha ) \in {\mathbb {C}}^{a*}\) and all \(y \in {\mathbb {C}},\) clearly \(w^Ty + \alpha \Vert y\Vert \le 0\) for every \(y \in -{\mathbb {C}}.\) Thus, we conclude that all the cones \(S(w,\alpha )=\{y \in {\mathbb {R}}^n : w^Ty + \alpha \Vert y\Vert \le 0\}\) (see (12)) with \((w,\alpha ) \in {\mathbb {C}}^{a*}\) contain the cone \(-{\mathbb {C}}.\) Moreover, if \((w,\alpha ) \in {\mathbb {C}}^{a\#}\), then we have [36, Lemma 3.6]

$$\begin{aligned} -{\mathbb {C}} {\setminus } \{0\} \subset \mathrm{int}(S(w,\alpha ))=\{y \in {\mathbb {R}}^n : w^Ty + \alpha \Vert y\Vert < 0\}. \end{aligned}$$

Due to this property, the CS method guarantees the calculation of “all” properly efficient solutions corresponding to the given weights and the given reference point. That is, every solution of the scalar problem \((\mathrm{CS}(w,\alpha ,a))\) is a properly efficient solution of the multiobjective optimization problem (1) if \((w,\alpha ) \in {\mathbb {C}}^{a\#};\) see Theorem 13 (iii).

In some cases for a given cone \({\mathbb {C}}\) and a given norm, it is possible to find a pair \((w,\alpha ) \in {\mathbb {C}}^{a*}\) such that \({\mathbb {C}} = C(w,\alpha ).\) For example, if \({\mathbb {C}} = {\mathbb {R}}_+^n\), then

$$\begin{aligned} {\mathbb {R}}_+^n = C(w^1,\alpha ^1)=\{y\in {\mathbb {R}}^n : (w^1)^Ty-\alpha ^1\Vert y\Vert _1 \ge 0 \}, \end{aligned}$$
(14)

where \(w^1=(1,\ldots ,1) \in {\mathbb {R}}^n,\) \(\alpha ^1 =1,\) and the \(l_1\) norm is used in the definition (see [38, Lemma 4]). Similarly, \({\mathbb {R}}_-^n\) can be represented as the level set \(S(w^1,\alpha ^1)\) (see (12)) of the function

$$\begin{aligned} g_{(w^1,\alpha ^1)}(y)=y_1 + \ldots + y_n +|y_1| + \ldots + |y_n| \end{aligned}$$
(15)

in the form:

$$\begin{aligned} {\mathbb {R}}_-^n = S(w^1,\alpha ^1) = \{(y_1, \ldots , y_n) \in {\mathbb {R}}^n : y_1 + \ldots + y_n +|y_1| + \ldots + |y_n| \le 0 \}. \end{aligned}$$
(16)

Hence, it becomes clear that the presented scalarization method enables one to calculate minimal elements which are “support” elements of \(f({\mathbb {X}})\) with respect to conic surfaces such as \(S(w,\alpha )\) (see (12)). In practice, one can divide the interval between 0 and \(\mathrm{min} \{w_1,\ldots ,w_n\}\) into several parts and, for all these values of the augmentation parameter \(\alpha ,\) solve the scalar problem \((\mathrm{CS}(w,\alpha ,a))\) for the same chosen weights and reference point. This enables the decision maker to compute different efficient solutions (if any) with respect to the same set of weights, as sketched below.
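A small usage sketch of this \(\alpha \)-sweep follows, reusing the illustrative solve_cs helper from the sketch given earlier in this section; the grid size and parameter values are arbitrary assumptions of ours.

```python
# A hedged usage sketch of the alpha-sweep: for fixed weights w and
# reference point a, CS(w, alpha, a) is solved on a grid of augmentation
# parameters in [0, min_i w_i); solve_cs is the helper sketched above.
import numpy as np

w, a = np.array([1.0, 2.0]), np.array([0.0, 0.0])
for alpha in np.linspace(0.0, w.min(), num=5, endpoint=False):
    print(f"alpha={alpha:.2f} -> x={solve_cs(w, alpha, a)}")
```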

3.6.1 Relations Between the Conic Scalarization (CS) and the Pascoletti–Serafini Scalarization (PSS) Methods

In this section, we present a theorem establishing a relation between the CS and the PSS methods. It is shown that the same efficient solution obtained by the PSS method, or a better one, can be computed by the CS method.

Theorem 15

Assume that \({\mathbb {C}}\) is a closed convex pointed cone with nonempty interior, that \(a \in {\mathbb {R}}^n,\) \(r \in \mathrm{int}({\mathbb {C}})\), and that \(({\bar{t}},\bar{x})\) is an optimal solution of \((\mathrm{PSS}(a,r))\). Then, there exist a weight vector \({\bar{w}}=({\bar{w}}_1, \ldots , {\bar{w}}_n) \in {\mathbb {C}}^{\#}\) and an augmentation parameter \({\bar{\alpha }} \ge 0\) with \(({\bar{w}}, {\bar{\alpha }}) \in {\mathbb {C}}^{a\circ }\) such that

$$\begin{aligned} \mathrm{min}_{x \in {\mathbb {X}}} {\bar{w}}^T(f(x) - a) + {\bar{\alpha }} \Vert f(x) - a\Vert \le {\bar{t}}. \end{aligned}$$

Proof

Let \((w,\alpha ) \in {\mathbb {C}}^{a\circ }.\) By the definition of \({\mathbb {C}}^{a\circ }\) (see also (13)), \({\mathbb {C}} \subset C(w,\alpha ).\) Then, problem \((\mathrm{PSS}(a,r))\) can be written in the following form with a possibly broader set of feasible solutions:

$$\begin{aligned}&\mathrm{min} \quad t \qquad (\mathrm{PSS}_{C(w,\alpha )}(a,r)) \\&\text{ s.t. } \quad a+tr-f(x) \in C(w,\alpha ) \\&x \in {\mathbb {X}}. \end{aligned}$$

By definition of \(C(w,\alpha ),\) the inclusion \(a+tr-f(x) \in C(w,\alpha )\) implies

$$\begin{aligned} w^T(a+tr-f(x))-\alpha \Vert a+tr-f(x)\Vert \ge 0, \end{aligned}$$

or

$$\begin{aligned} w^T(f(x)-a-tr)+\alpha \Vert f(x)-a-tr\Vert \le 0. \end{aligned}$$

Obviously,

$$\begin{aligned} \alpha (\Vert f(x)-a\Vert -\Vert tr\Vert ) \le \alpha \Vert f(x)-a-tr\Vert . \end{aligned}$$

Then, if we replace the norm term in the previous inequality by the left-hand side of the above inequality, the set of feasible solutions of \((\mathrm{PSS}_{C(w,\alpha )}(a,r))\) may again be extended:

$$\begin{aligned} w^T(f(x)-a)+\alpha \Vert f(x)-a\Vert \le tw^Tr+|t|\alpha \Vert r\Vert . \end{aligned}$$
(17)

Depending on the sign of \({\bar{t}}\), we can consider only the positive or only the negative range of t in (17). If only negative (or only positive) values of t are considered, then the right-hand side of (17) becomes \(t(w^Tr-\alpha \Vert r\Vert )\) (or \(t(w^Tr+\alpha \Vert r\Vert )\)). Since \(r \in \mathrm{int}({\mathbb {C}})\) and \((w, \alpha ) \in {\mathbb {C}}^{a\circ }\), we have \(w^Tr-\alpha \Vert r\Vert >0\) (or \(w^Tr+\alpha \Vert r\Vert >0\)).

Thus, by dividing both sides of (17) by \(w^Tr-\alpha \Vert r\Vert >0\) (or \(w^Tr+\alpha \Vert r\Vert >0\)) and denoting \({\bar{w}} = w/(w^Tr-\alpha \Vert r\Vert )\) and \({\bar{\alpha }} = \alpha /(w^Tr-\alpha \Vert r\Vert )\) (or \({\bar{w}} = w/(w^Tr+\alpha \Vert r\Vert )\) and \({\bar{\alpha }} = \alpha /(w^Tr+\alpha \Vert r\Vert )\)), we obtain that the problem \((\mathrm{PSS}_{C(w,\alpha )}(a,r))\) can be written (with a possibly broader feasible set) in the form:

$$\begin{aligned}&\mathrm{minimize} \quad t \ \end{aligned}$$
(18)
$$\begin{aligned}&\text{ s.t. } \qquad {\bar{w}}^T(f(x)-a)+{\bar{\alpha }}\Vert f(x)-a\Vert \le t \qquad \ \end{aligned}$$
(19)
$$\begin{aligned}&x \in {\mathbb {X}}. \qquad \end{aligned}$$
(20)

This problem is equivalent to the following problem \((\mathrm{CS}({\bar{w}},{\bar{\alpha }},a))\):

$$\begin{aligned} \mathrm{min}_{x \in {\mathbb {X}}} [{\bar{w}}^T(f(x)-a)+{\bar{\alpha }}\Vert f(x)-a\Vert ]. \end{aligned}$$

Since the set of feasible solutions of problem (18)–(20) is larger than that of \((\mathrm{PSS}(a,r)),\) we obtain

$$\begin{aligned} \mathrm{min}_{x \in {\mathbb {X}}} [{\bar{w}}^T(f(x)-a)+{\bar{\alpha }}\Vert f(x)-a\Vert ] \le {\bar{t}}, \end{aligned}$$

which completes the proof of the theorem. \(\square \)

Next, we prove that the conic scalarization method is a generalization of Benson’s method.

3.6.2 Relationship Between the Conic Scalarization and the Benson’s Methods

In this section, we explain the relationship between the BS and the CS methods. It is shown that efficient and properly efficient solutions calculated by the BS method can be obtained by the CS method, and hence that the CS method can be considered as a generalization of the BS method.

Theorem 16

Assume that \(\bar{x}\) is an optimal solution of the Benson scalar problem \((\mathrm{BS}(x^0))\) for a feasible solution \(x^0 \in X\) and that \(\bar{x}\) is an efficient solution of (1). Then, \(\bar{x}\) is an optimal solution of the conic scalar problem \(\mathrm{CS}(w^1,\alpha ^1,f(x^0)),\) where \(w^1=(1,\ldots ,1) \in {\mathbb {R}}^n,\) \(\alpha ^1 =1,\) and the \(l_1\) norm is used:

$$\begin{aligned} \mathrm{min}_{x\in X} \sum _{i=1}^n\left( f_i(x)-f_i\left( x^0\right) \right) + \sum _{i=1}^n|f_i(x)-f_i(x^0)|. \end{aligned}$$

Proof

Since \(f(x^0) \in f(X),\) \(f(\bar{x}) \in f(x^0) - {\mathbb {R}}^{n}_{+}\) and (see (16))

$$\begin{aligned} -{\mathbb {R}}^{n}_{+}=\left\{ y:(w^1)^Ty + \alpha ^1 \Vert y \Vert _1 \le 0\right\} , \end{aligned}$$

we have:

$$\begin{aligned} \sum _{i=1}^n\left( f_i(x)-f_i\left( x^0\right) \right) + \sum _{i=1}^n|f_i(x)-f_i(x^0)|=0 \end{aligned}$$

for all \(x \in X_0=\{x\in X : f(x) \in f(x^0) - {\mathbb {R}}^{n}_{+} \},\) and in particular for \(x=\bar{x}.\) Obviously,

$$\begin{aligned} \sum _{i=1}^n\left( f_i(x)-f_i\left( x^0\right) \right) + \sum _{i=1}^n|f_i(x)-f_i(x^0)|>0 \end{aligned}$$

for all \(x \in X {\setminus } X_0\), which completes the proof. \(\square \)

Theorem 17

Assume that \(\bar{x}\) is an optimal solution of the Benson scalar problem \((\mathrm{BS}(x^0))\) for a feasible solution \(x^0 \in X\) and that \(\bar{x}\) is a properly efficient solution of (1). Then, there exists \(\bar{\alpha } \in [0,1)\) such that \(\bar{x}\) is an optimal solution of the conic scalar problem \(\mathrm{CS}(w^1,\bar{\alpha },f(\bar{x})),\) where \(w^1=(1,\ldots ,1) \in {\mathbb {R}}^n\) and the \(l_1\) norm is used:

$$\begin{aligned} \mathrm{min}_{x\in X} \sum _{i=1}^n(f_i(x)-f_i(\bar{x})) + \bar{\alpha }\sum _{i=1}^n|f_i(x)-f_i(\bar{x})|. \end{aligned}$$
(21)

Proof

We have:

$$\begin{aligned} -R^{n}_{+}=\left\{ y:\left( w^1\right) ^Ty + \alpha ^1 \Vert y \Vert _1 \le 0\right\} , \end{aligned}$$

and clearly

$$\begin{aligned} -{\mathbb {R}}^n_+ {\setminus } \{0\} \subset \mathrm{int}(\{y \in {\mathbb {R}}^n : (w^1)^Ty + \alpha \Vert y\Vert \le 0\}), \end{aligned}$$

for every \(\alpha \in [0,1).\) Moreover, it is clear that

$$\begin{aligned} \mathrm{int}\left( \left\{ y \in {\mathbb {R}}^n : (w^1)^Ty + \alpha \Vert y\Vert \le 0\right\} \right) = \left\{ y \in {\mathbb {R}}^n : (w^1)^Ty + \alpha \Vert y\Vert < 0\right\} . \end{aligned}$$

Then, since \(\bar{x}\) is a properly efficient solution to (1), there exists \(\bar{\alpha } \in [0,1)\) such that

$$\begin{aligned} \left( \{f(\bar{x})\} + \{y \in {\mathbb {R}}^n : (w^1)^Ty + \bar{\alpha } \Vert y\Vert \le 0\}\right) \cap f(X) = \{f(\bar{x})\}. \end{aligned}$$

This leads to

$$\begin{aligned} \{y \in {\mathbb {R}}^n : (w^1)^T(y-f(\bar{x})) + \bar{\alpha } \Vert y-f(\bar{x})\Vert \le 0\} \cap f(X) = \{f(\bar{x})\}. \end{aligned}$$

The last relation means that

$$\begin{aligned} (w^1)^T(f(x)-f(\bar{x})) + \bar{\alpha } \Vert f(x)-f(\bar{x})\Vert \ge 0 \end{aligned}$$

for every \(x \in X,\) which proves the theorem. \(\square \)

The following main characteristics of the conic scalarization method can be emphasized.

  • The CS method can be applied for an arbitrary ordering cone \({\mathbb {C}}\).

  • Boundedness from below is not an essential condition for the CS method; however, for some parameters, the corresponding scalar problem may be unbounded.

  • The CS method does not require convexity conditions on the problem.

  • The CS method always yields efficient, weakly efficient, or properly efficient solutions if the corresponding scalar problem has a finite solution. By choosing a suitable scalarizing parameter set, consisting of a weighting vector, an augmentation parameter, and a reference point, the decision maker may guarantee a most preferred efficient or properly efficient solution. The method provides conditions for obtaining (solely) properly efficient solutions, and all properly efficient solutions can be detected by this method.

  • The preference and reference point information of the decision maker is taken into consideration by the CS method.

  • The CS method uses neither additional constraints nor additional decision variables.

  • The conic scalarization method is a generalization of the weighted sum, the Benson and the Pascoletti–Serafini scalarization methods.
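As announced above, the following minimal sketch shows how the scalar problem \(\mathrm{CS}(w,\alpha ,a)\) (in the form used in Theorems 16 and 17) can be handed to an off-the-shelf solver. The helper name and the use of scipy's derivative-free Nelder–Mead method, which tolerates the nonsmooth \(l_1\) term, are our assumptions, not part of the method; note that no additional constraints or decision variables are introduced, in line with the list above.

```python
import numpy as np
from scipy.optimize import minimize

def solve_cs(f, w, alpha, a, x0, bounds=None):
    """Sketch of CS(w, alpha, a): min_x w^T(f(x) - a) + alpha*||f(x) - a||_1.

    The feasible set X must be encoded via `bounds` (or a penalty term);
    Nelder-Mead is chosen because the |.| terms make the objective nonsmooth.
    """
    w, a = np.asarray(w, dtype=float), np.asarray(a, dtype=float)

    def objective(x):
        d = np.asarray(f(x), dtype=float) - a
        return w @ d + alpha * np.abs(d).sum()

    return minimize(objective, x0, method="Nelder-Mead", bounds=bounds)
```

Any nonsmooth-capable solver could replace Nelder–Mead here; the point is only that the scalar problem is a plain unconstrained-style minimization over X.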

4 Illustrative Example

In this section we interpret the properties of the scalarization methods discussed in the previous sections on a demonstrative example quoted from [38]. The example is chosen not to showcase the abilities of the considered methods, but to demonstrate the common properties discussed above.

Example 1

Consider the two-objective problem with \(f_1(x_1,x_2) = x_1\) and \(f_2(x_1,x_2) = x_2,\) where the set of feasible solutions is defined as follows. Let

$$\begin{aligned} {\mathbb {X}}_1={\mathbb {X}}_{11}\cup {\mathbb {X}}_{12} \cup {\mathbb {X}}_{13} \cup {\mathbb {X}}_{14}, \end{aligned}$$

where \({\mathbb {X}}_{11}=\{(x_1,x_2) : x_1 \ge -2, x_2 \ge 2\},\) \({\mathbb {X}}_{12}=\{(x_1,x_2) : x_1 \ge 1, x_2 \ge 1\},\) \({\mathbb {X}}_{13}=\{(x_1,x_2) : x_1 \ge 2, x_2 \ge -1\},\) and \({\mathbb {X}}_{14}=\{(x_1,x_2) : x_1 \ge 3\}.\)

Then, the objective space can be defined in an analogous way by setting \({\mathbb {Y}}_1 = {\mathbb {X}}_1\) (see Fig. 8). Note that the objective space \({\mathbb {Y}}_1\) is unbounded from below. Points \((-2,2)\), (1, 1) and \((2,-1)\) are minimal points of \({\mathbb {Y}}_1\) which are not properly minimal, and all the boundary points of \({\mathbb {Y}}_1\) are weakly minimal points.
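For the sketches that follow, \({\mathbb {Y}}_1\) admits a direct transcription as a membership test (illustrative code, names ours); the assertions confirm that the three minimal points are not dominated from within the set:

```python
def in_Y1(y1, y2):
    """Membership test for Y1 of Example 1 (union of the four sub-sets)."""
    return ((y1 >= -2 and y2 >= 2) or   # X11
            (y1 >= 1 and y2 >= 1) or    # X12
            (y1 >= 2 and y2 >= -1) or   # X13
            y1 >= 3)                    # X14

# the minimal points belong to Y1, while slightly dominating points do not
for p in [(-2, 2), (1, 1), (2, -1)]:
    assert in_Y1(*p) and not in_Y1(p[0] - 0.1, p[1] - 0.1)
```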

Fig. 8 Illustration of set \({\mathbb {Y}}_1\) in Example 1

4.1 Illustration of the WSS Method

Solutions obtained with the WSS method for some weight values are presented in Table 1. As can be seen from this table, the objective function of \((\mathrm{WSS}(w))\) goes to \(-\infty \) for any pair of nonzero weights \(w_1>0,\) \(w_2>0,\) because the objective space is unbounded from below. The only finite value of \((\mathrm{WSS}(w))\) is obtained for the weights \(w_1=1,\) \(w_2=0,\) whose solution set contains an infinite number of weakly minimal elements.
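This behavior is easy to replicate: along the ray \(\{(3,-t) : t \ge 0\} \subset {\mathbb {X}}_{14},\) the WSS objective tends to \(-\infty \) for any \(w_2>0\) (the weights below are illustrative):

```python
# WSS objective w1*y1 + w2*y2 along (3, -t) in X14, here with w = (0.5, 0.5)
for t in (1e1, 1e3, 1e6):
    print(0.5 * 3 + 0.5 * (-t))  # -3.5, -498.5, -499998.5, ... -> -infinity
```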

Table 1 Results obtained by the WSS method for Example 1

4.2 Illustration of the EC Method

Solutions obtained with the EC method for different \(\varepsilon \) values are presented in Tables 2 and 3. As can be seen from these tables, the \(\varepsilon \) values for constructing the corresponding scalar problem should be chosen carefully. Depending on these values, the corresponding scalar problem may be infeasible, may be unbounded from below, or may lead to an infinite set of weakly efficient solutions which does not contain any efficient solution (see, for example, line 1 in Table 3).
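All three outcomes can be reproduced with a crude grid search over a clipped window of the unbounded set \({\mathbb {Y}}_1\); a minimum sitting on the window floor signals unboundedness. The window size and step below are arbitrary choices of ours:

```python
import numpy as np

g = np.arange(-20.0, 20.25, 0.25)            # clipped window for Y1
y1g, y2g = np.meshgrid(g, g)
inY = (((y1g >= -2) & (y2g >= 2)) | ((y1g >= 1) & (y2g >= 1))
       | ((y1g >= 2) & (y2g >= -1)) | (y1g >= 3))

def ec2(eps):
    """EC(eps, 2): min y2 subject to y in Y1 and y1 <= eps (grid sketch)."""
    feas = inY & (y1g <= eps)
    return float(y2g[feas].min()) if feas.any() else None

print(ec2(-3.0))  # None: the scalar problem is infeasible
print(ec2(0.0))   # 2.0: attained on a segment of weakly efficient points
print(ec2(3.0))   # -20.0: the window floor, i.e. unbounded from below
```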

Table 2 Results obtained by solving \(\mathrm{EC}(\varepsilon ,2)\) problem (in the case of min \(f_2(x)\)) for Example 1
Table 3 Results obtained by solving \(\mathrm{EC}(\varepsilon ,1)\) problem (in the case of min \(f_1(x)\)) for Example 1

4.3 Illustration of the BS Method

Illustrative results obtained with the BS method for different initial solutions are presented in Table 4. These results demonstrate that the BS method computes an efficient solution which is “farthest” (in some sense) from the initial solution, and that if the initial solution is chosen appropriately, the BS method correctly computes efficient solutions. One more point to emphasize is that the initial solution for the BS method should be chosen carefully, because for some initial solutions the scalar problem may have no finite solution, as was the case with the initial solution \(x^0 = (3,1).\) Clearly, the same outcome is obtained for all initial solutions \(x^0=(x_1^0,x_2^0)\) with \(x_1^0 \ge 3.\)
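Since the objective map is the identity here, the BS problem can be sketched directly in the objective space with the same clipped grid as above: maximize the total slack \(\sum _i(x_i^0-y_i)\) over points \(y \in {\mathbb {Y}}_1\) dominated by \(x^0\) (illustrative code; a maximizer on the window floor again signals unboundedness):

```python
import numpy as np

g = np.arange(-20.0, 20.25, 0.25)            # clipped window for Y1
y1g, y2g = np.meshgrid(g, g)
inY = (((y1g >= -2) & (y2g >= 2)) | ((y1g >= 1) & (y2g >= 1))
       | ((y1g >= 2) & (y2g >= -1)) | (y1g >= 3))

def bs(x0):
    """BS(x0): maximize sum(x0 - y) over y in Y1 with y <= x0 (grid sketch)."""
    feas = inY & (y1g <= x0[0]) & (y2g <= x0[1])
    if not feas.any():
        return None
    slack = (x0[0] - y1g[feas]) + (x0[1] - y2g[feas])
    i = slack.argmax()
    return (float(y1g[feas][i]), float(y2g[feas][i])), float(slack[i])

print(bs((1.0, 2.0)))  # ((-2.0, 2.0), 3.0): the efficient point of Table 4
print(bs((3.0, 1.0)))  # ((3.0, -20.0), 21.0): window floor, no finite solution
```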

Table 4 Results obtained by the BS method for Example 1

4.4 Illustration of the WCS Method

Results obtained with the WCS method for different w and \(z^*\) values are presented in Table 5. The problem is not bounded from below; hence, there is no ideal point. However, we have chosen different (reference) points and weights and computed solutions of the corresponding scalar problems. The solutions obtained demonstrate that the method leads mainly to weakly efficient solutions, and that an infinite number of solutions may be obtained. The method may also lead to a solution set which does not contain any efficient solution (the case \(w=(1,20),\) \(z^*=(1,-2)\)). On the other hand, for the same reference point \(z^*=(0,0),\) two different solutions, considerably far from each other, have been obtained with the weights \(w=(1,20)\) and \(w=(20,1).\) This raises the question of whether the ideal point can serve as a reference point in the WCS method.
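Both observations can be reproduced with the grid sketch, applied now to the WCS objective \(\max _i w_i(y_i-z^*_i)\) (illustrative code; ties along a ray or segment indicate the infinite, weak-only solution sets):

```python
import numpy as np

g = np.arange(-20.0, 20.25, 0.25)            # clipped window for Y1
y1g, y2g = np.meshgrid(g, g)
inY = (((y1g >= -2) & (y2g >= 2)) | ((y1g >= 1) & (y2g >= 1))
       | ((y1g >= 2) & (y2g >= -1)) | (y1g >= 3))

def wcs(w, z):
    """WCS: min over y in Y1 of max_i w_i*(y_i - z_i) (grid sketch)."""
    vals = np.where(inY, np.maximum(w[0]*(y1g - z[0]), w[1]*(y2g - z[1])), np.inf)
    i = np.unravel_index(vals.argmin(), vals.shape)
    return (float(y1g[i]), float(y2g[i])), float(vals[i])

print(wcs((1, 20), (1, -2)))  # ((3.0, -20.0), 2.0): a ray with no efficient point
print(wcs((1, 20), (0, 0)))   # ((2.0, -1.0), 2.0) ...
print(wcs((20, 1), (0, 0)))   # ... versus ((-2.0, 2.0), 2.0): far apart
```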

Table 5 Results obtained by the WCS method for Example 1

4.5 Illustration of the PSS Method

Results obtained with the PSS method for different a and r values are presented in Table 6, where \(r_1>0,\) \(r_2>0.\) The solutions obtained demonstrate that the method leads mainly to weakly efficient solutions, and that an infinite number of solutions may be obtained. The method may also lead to a solution set which does not contain any efficient solution (the case \(r=(4r_1,r_1),\) \(a=(1,-2)\)). The parameter a cannot be considered a reference point, because the computed solution depends not only on a, but mainly on r.
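The PSS problem fits the same grid sketch, since the smallest \(t\) with \(y \le a + tr\) is \(\max _i (y_i-a_i)/r_i\); with \(r=(4,1)\) (that is, \(r_1=1\)) and \(a=(1,-2)\) the minimum is attained along a ray of weak-only solutions:

```python
import numpy as np

g = np.arange(-20.0, 20.25, 0.25)            # clipped window for Y1
y1g, y2g = np.meshgrid(g, g)
inY = (((y1g >= -2) & (y2g >= 2)) | ((y1g >= 1) & (y2g >= 1))
       | ((y1g >= 2) & (y2g >= -1)) | (y1g >= 3))

def pss(a, r):
    """PSS(a, r): min over y in Y1 of max_i (y_i - a_i)/r_i (grid sketch)."""
    t = np.where(inY, np.maximum((y1g - a[0])/r[0], (y2g - a[1])/r[1]), np.inf)
    i = np.unravel_index(t.argmin(), t.shape)
    return (float(y1g[i]), float(y2g[i])), float(t[i])

# t = 0.5 anywhere on {(3, y2) : y2 <= -1.5}; the grid reports one such point
print(pss((1, -2), (4, 1)))  # ((3.0, -20.0), 0.5)
```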

Table 6 Results obtained by the PSS method for Example 1

4.6 Illustration of the CS Method

Results obtained by solving the scalar problem \(\mathrm{CS}(w,\alpha ,a)\) for different values of the preference parameters w and a and the augmentation parameter \(\alpha \) are presented in Table 7. Four reference points \((3,1),\) \((1,2),\) \((0,0),\) and \((1,-2)\) are chosen for illustration. The points \((3,1)\) and \((1,2)\) were considered earlier with the BS method, where no solution was found for \((3,1),\) while the point \((1,2)\) led to the efficient solution \((-2,2)\) (see Table 4). The other two points were considered with the WCS and PSS methods. The computational results illustrate the main property of the CS method: the scalar problem \(\mathrm{CS}(w,\alpha ,a)\) computes efficient solutions of the multiobjective problem (1) which correspond to support points of the objective space with respect to the cone

$$\begin{aligned} S(w,\alpha )=\left\{ y \in {\mathbb {R}}^n : w^Ty + \alpha \Vert y\Vert \le 0\right\} \end{aligned}$$

(see (12)). It can easily be seen from the results depicted in Table 7 that different efficient solutions can be obtained for the same reference point by changing the parameters \((w,\alpha )\) in the scalar problem \(\mathrm{CS}(w,\alpha ,a)\). For example, the third and fourth rows present efficient solutions computed for the same reference point \((3,1)\) and different weighting and augmentation parameters \((w,\alpha ).\) The supporting cone \(S(w,\alpha )\) corresponding to the parameters \(w=(2,1)\) and \(\alpha =1\) is larger than the cone corresponding to the parameters \(w=(3,2)\) and \(\alpha =2.\) Due to this property, the efficient solution \((1,1)\) is found for the second set of parameters (that is, for \(w=(3,2)\) and \(\alpha =2\)). This is because the point \((1,1)\) is a “more inner” point of the objective space than the point \((-2,2)\) (see Fig. 8); a narrower cone is needed to support it, and it also becomes clear why this point cannot be computed by the WSS method.
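Rows 3 and 4 can be reproduced with the grid sketch applied to the CS objective in the \(l_1\) form of Theorems 16 and 17; the switch of minimizers makes the supporting-cone effect visible:

```python
import numpy as np

g = np.arange(-20.0, 20.25, 0.25)            # clipped window for Y1
y1g, y2g = np.meshgrid(g, g)
inY = (((y1g >= -2) & (y2g >= 2)) | ((y1g >= 1) & (y2g >= 1))
       | ((y1g >= 2) & (y2g >= -1)) | (y1g >= 3))

def cs(w, alpha, a):
    """CS(w, alpha, a): min over y in Y1 of w^T(y - a) + alpha*||y - a||_1."""
    d1, d2 = y1g - a[0], y2g - a[1]
    vals = np.where(inY, w[0]*d1 + w[1]*d2 + alpha*(np.abs(d1) + np.abs(d2)), np.inf)
    i = np.unravel_index(vals.argmin(), vals.shape)
    return (float(y1g[i]), float(y2g[i])), float(vals[i])

print(cs((2, 1), 1.0, (3, 1)))  # ((-2.0, 2.0), -3.0): the wider cone
print(cs((3, 2), 2.0, (3, 1)))  # ((1.0, 1.0), -2.0): the narrower cone
```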

Similar explanations can be given for rows 5, 6 and 7, and for rows 8 and 9 of Table 7.

Table 7 Results obtained by the CS method for Example 1
Table 8 Main characteristics of six scalarization methods

Remark 7

Note that all computational results depicted in Table 7 were obtained for pairs \((w,\alpha )\) belonging to \({\mathbb {C}}^{a*} {\setminus } {\mathbb {C}}^{a\#}.\) This is because, for every \((w,\alpha ) \in {\mathbb {C}}^{a\#},\) the objective function of the scalar problem \(\mathrm{CS}(w,\alpha ,a)\) becomes unbounded from below. This nicely reflects the fact that the multiobjective problem considered in Example 1 has no properly efficient solutions, while by Theorem 13 (iii) (see also Remark 6), for every pair \((w,\alpha ) \in {\mathbb {C}}^{a\#},\) any solution of the scalar problem \(\mathrm{CS}(w,\alpha ,a)\) must be properly efficient. All these features can easily be illustrated on further examples (see, for example, [36, 38, 39]); we do not present additional illustrations in order not to lengthen the paper unnecessarily. Note only that, by slightly changing the set \({\mathbb {X}}_{14}\) in Example 1 to the set

$$\begin{aligned} {\mathbb {X}}_{24}=\{(x_1,x_2) : x_1 \ge 3, x_2 \ge -5\}, \end{aligned}$$

all efficient solutions of Example 1 become properly efficient (including the new solution \((3,-5)\)), and all features related to properly efficient solutions can be demonstrated on the new example by using the same parameters for all scalarization methods illustrated above for Example 1.

5 Conclusions

In this paper, six commonly used scalarization methods are analyzed and compared with respect to the following conditions:

  • The method can be applied for an arbitrary ordering cone C, or only for \({\mathbb {R}}_+^n.\)

  • The boundedness from below is an essential condition for the method.

  • The method requires convexity conditions on the problem.

  • The method provides conditions for obtaining (solely) properly efficient solutions.

  • The preference and reference point information of the decision maker is taken into consideration by the method.

  • The method uses additional constraints or additional decision variables.

All these characteristics are given in Table 8.

The paper also discusses relations between different scalarization methods. It was proven earlier in the literature that the Pascoletti–Serafini scalarization method is a generalization of the weighted sum, the \(\varepsilon \)-constraint and the weighted Chebyshev scalarization methods. In this paper, new features of the Pascoletti–Serafini scalarization method are established. Additionally, it has been shown that the conic scalarization method is a generalization of the weighted sum, the Benson and the Pascoletti–Serafini scalarization methods. Together with the generalization properties of the Pascoletti–Serafini method, this means that the conic scalarization method is the most general one: all methods considered in this paper are subsumed under it. All the characteristics discussed in the paper are demonstrated on the same example.

Finally, note that in future studies we plan to prepare a comprehensive paper by similarly comparing other methods from the literature, taking new developments into account.