Abstract
In this paper, a new vector exponential penalty function method for nondifferentiable multiobjective programming problems with inequality constraints is introduced. First, a sequence of vector penalized optimization problems with the vector exponential penalty function, constructed for the original multiobjective programming problem, is considered, and the convergence of this method is established. Further, the exactness property of a vector exact penalty function method is defined and analyzed in the context of the introduced vector exponential penalty function method. Conditions are given guaranteeing the equivalence of the sets of (weak) Pareto solutions of the considered nondifferentiable multiobjective programming problem and the associated vector penalized optimization problem with the vector exact exponential penalty function. This equivalence is established for nondifferentiable vector optimization problems with inequality constraints in which the involved functions are r-invex.
1 Introduction
The field of multiobjective programming, also known as vector programming, has attracted a lot of attention, since many real-world problems in decision theory, economics, engineering, game theory, management sciences, physics, and optimal control can be modeled as nonlinear vector optimization problems. Therefore, many approaches have been developed in the literature to address these problems. The properties of the objective function and the constraints determine the applicable technique. Considerable attention has been given recently to devising new methods which solve the given multiobjective programming problem by means of some associated optimization problem (see, for example, [1–3]).
Exact penalty function methods are important analytic and algorithmic techniques in nonlinear mathematical programming for solving a nonlinear constrained scalar optimization problem. Exact penalty function methods transform the considered optimization problem into a single unconstrained optimization problem or into a finite sequence of unconstrained optimization problems, thus avoiding the infinite sequential process of the classical penalty function methods. Nondifferentiable exact penalty functions were introduced by Zangwill [4] and Pietrzykowski [5]. Much of the literature on nondifferentiable exact penalty functions is devoted to the study of scalar convex optimization problems (see, for example, [6–16], and others). However, some results on exact penalty functions used for solving various classes of nonconvex optimization problems have been proved in the literature recently (see, for example, [17, 18]). Namely, in [17], Antczak introduced a new approach for solving nonconvex differentiable optimization problems involving r-invex functions. He defined a new exact penalty function method, called the exact exponential penalty function method, for solving nonconvex constrained scalar optimization problems. Further, under r-invexity hypotheses, Antczak established the equivalence between the sets of optimal solutions in the original scalar optimization problem with both inequality and equality constraints and its associated penalized optimization problem with the exact exponential penalty function. Furthermore, in [17], a lower bound on the penalty parameter was provided such that this result is satisfied if the penalty parameter is larger than this value.
In [19], Antczak defined a new vector exact \(l_{1}\) penalty function method and used it for solving nondifferentiable convex multiobjective programming problems. He gave conditions guaranteeing the equivalence of the sets of (weak) Pareto solutions of the considered convex nondifferentiable multiobjective programming problem and its associated vector penalized optimization problem with the vector exact \(l_{1}\) penalty function.
An exponential penalty function method was proposed by Murphy [20] for solving nonlinear differentiable scalar optimization problems. Exponential penalty function methods have been used widely in optimization theory by several authors for solving optimization problems of various types (see, for example, [21–29], and others).
The aim of this paper is to show that unconstrained global optimization methods can also be used for solving nondifferentiable constrained multiobjective programming problems, by resorting to an exact penalty approach. Namely, we extend the exact exponential penalty function method introduced by Antczak [17] to the vectorial case. Hence, we introduce a new vector exponential penalty function method, and we use it for solving a class of nondifferentiable multiobjective programming problems involving r-invex functions (with respect to the same function \(\eta \)). This method is based on the construction of an exact penalty function, which is minimized in the exponential penalized optimization problem constructed in this method. This function is the sum of a certain “merit” function (which reflects the objective function of the original problem) and a penalty term which reflects the constraint set. The merit function is chosen as the composition of the exponential function and the original objective function, while the penalty term is obtained by multiplying a suitable function representing the constraints (here, the sum over the constraints of compositions of the exponential function with the functions representing the individual constraints) by a positive parameter c, called the penalty parameter.
This work is organized as follows. In Sect. 2, some preliminary results are given that are useful in proving the main results in the paper. In Sect. 3, a new vector exponential penalty function method is introduced, and its algorithmic aspect is presented. The convergence of the sequence of weak Pareto solutions of the vector subproblems generated in the described method is established. In Sect. 4, the exactness of the penalization is extended to the case of an exact vector penalty function method. Then the results for the vector exterior exponential penalty function algorithm are reviewed, and the relationship between weak Pareto solutions in the original multiobjective programming problem and weak Pareto solutions in the associated penalized optimization subproblems is discussed. Thus, the exactness property is defined for the introduced vector exponential penalty function method. Namely, we prove that there exists a finite lower bound of the penalty parameter c such that, for every penalty parameter c exceeding this threshold, a (weak) Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem is also an unconstrained (weak) Pareto solution in its associated vector penalized optimization problem with the vector exact exponential penalty function. Under nondifferentiable r-invexity, the converse result is also established for every penalty parameter exceeding this finite threshold. Hence, the equivalence between the considered nonconvex nondifferentiable multiobjective programming problem and its associated vector penalized optimization problem is established for sufficiently large penalty parameters under the assumption that all functions constituting the considered nonsmooth multiobjective programming problem are r-invex (with respect to the same function \(\eta \)).
The results established in the paper are illustrated by suitable examples of nonconvex nondifferentiable vector optimization problems which we solve by using the vector exact exponential penalty function method defined in this paper. Finally, in Sect. 5, we discuss the consequences of the extension of the exact exponential penalty function method defined by Antczak [17] for scalar optimization problems to the vectorial case and its significance for vector optimization.
2 Preliminaries
The following convention for equalities and inequalities will be used throughout the paper.
For any \(x=\left( x_{1},x_{2},\ldots ,x_{n}\right) ^{T}, \ y=\left( y_{1},y_{2},\ldots ,y_{n}\right) ^{T}\), we define
(i) \(x=y\) if and only if \(x_{i}=y_{i}\) for all \(i=1,2,\ldots ,n\);

(ii) \(x<y\) if and only if \(x_{i}<y_{i}\) for all \(i=1,2,\ldots ,n\);

(iii) \(x\leqq y\) if and only if \(x_{i}\leqq y_{i}\) for all \(i=1,2,\ldots ,n\);

(iv) \(x\le y\) if and only if \(x\leqq y\) and \(x\ne y\).
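For numerical experiments, these componentwise relations are easy to encode directly; the following Python sketch (the function names are ours, purely illustrative) implements (i)–(iv) for vectors given as equal-length lists or tuples:

```python
def vec_eq(x, y):
    # (i): x = y iff x_i = y_i for all i
    return all(xi == yi for xi, yi in zip(x, y))

def vec_lt(x, y):
    # (ii): x < y iff x_i < y_i for all i (strict componentwise order)
    return all(xi < yi for xi, yi in zip(x, y))

def vec_leq(x, y):
    # (iii): the relation written "leqq" in the paper, componentwise <=
    return all(xi <= yi for xi, yi in zip(x, y))

def vec_le(x, y):
    # (iv): x ≤ y iff x leqq y and x ≠ y
    return vec_leq(x, y) and not vec_eq(x, y)
```

For instance, `vec_le([1, 2], [1, 3])` holds while `vec_lt([1, 2], [1, 3])` does not, which is exactly the distinction between relations (iv) and (ii).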
Definition 1
A function \(f:R^{n}\rightarrow R\) is locally Lipschitz at a point \(x\in R^{n}\) if there exist scalars \(K_{x}>0\) and \(\varepsilon >0\) such that the following inequality
$$\begin{aligned} \left| f\left( y\right) -f\left( z\right) \right| \leqq K_{x}\left\| y-z\right\| \end{aligned}$$
holds for all \(y, z\in x+\varepsilon B\), where B signifies the open unit ball in \(R^{n}\), so that \(x+\varepsilon B\) is the open ball of radius \(\varepsilon \) about x.
Definition 2
[30] The Clarke generalized directional derivative of a locally Lipschitz function \(f:X\rightarrow R\) at \(x\in X\) in the direction \(v\in R^{n}\), denoted \(f^{\,0}\left( x;v\right) \), is given by
$$\begin{aligned} f^{\,0}\left( x;v\right) :=\underset{y\rightarrow x,\,t\downarrow 0}{\lim \sup }\ \frac{f\left( y+tv\right) -f\left( y\right) }{t}. \end{aligned}$$
Definition 3
[30] The Clarke generalized subgradient of a locally Lipschitz function \(f:X\rightarrow R\) at \(x\in X\), denoted \(\partial f\left( x\right) \), is defined as follows:
$$\begin{aligned} \partial f\left( x\right) :=\left\{ \xi \in R^{n}:f^{\,0}\left( x;v\right) \geqq \left\langle \xi ,v\right\rangle \ \text{ for all } v\in R^{n}\right\} . \end{aligned}$$
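As a standard illustration (not taken from the paper), for the absolute value function both objects can be computed explicitly:

```latex
% For f(x) = |x| on R, which is locally Lipschitz but not differentiable at 0:
f^{\,0}(0; v) = \limsup_{y \to 0,\; t \downarrow 0} \frac{|y + t v| - |y|}{t} = |v|,
\qquad
\partial f(0) = \left\{ \xi \in R : |v| \geqq \xi v \ \text{for all } v \in R \right\} = [-1, 1].
```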
Lemma 4
[30] Let \(f:X\rightarrow R\) be a locally Lipschitz function on a nonempty open set \(X\subset R^{n}\), u be an arbitrary point of X and \(\lambda \in R\). Then
$$\begin{aligned} \partial \left( \lambda f\right) \left( u\right) =\lambda \partial f\left( u\right) . \end{aligned}$$
Proposition 5
[30] Let \(f_{i}:X\rightarrow R, i=1,\ldots ,k\), be locally Lipschitz functions on a nonempty set \(X\subset R^{n}\), and u be an arbitrary point of X. Then
$$\begin{aligned} \partial \left( \sum _{i=1}^{k}f_{i}\right) \left( u\right) \subset \sum _{i=1}^{k}\partial f_{i}\left( u\right) . \end{aligned}$$
Equality holds in the above relation if all but at most one of the functions \(f_{i}\) is strictly differentiable at u.
Corollary 6
[30] For any scalars \(\lambda _{i}\), one has
$$\begin{aligned} \partial \left( \sum _{i=1}^{k}\lambda _{i}f_{i}\right) \left( u\right) \subset \sum _{i=1}^{k}\lambda _{i}\partial f_{i}\left( u\right) , \end{aligned}$$
and equality holds if all but at most one of the functions \(f_{i}\) is strictly differentiable at u.
Theorem 7
[30] Let the function \(f:R^{n}\rightarrow R\) be locally Lipschitz at a point \(\overline{x}\in R^{n}\) and attain its (local) minimum at \(\overline{x}\). Then
$$\begin{aligned} 0\in \partial f\left( \overline{x}\right) . \end{aligned}$$
Proposition 8
[30] Let the functions \( f_{i}:R^{n}\rightarrow R, i\in I=\left\{ 1,\ldots ,k\right\} ,\) be locally Lipschitz at a point \(\overline{x}\in R^{n}\). Then the function \( f:R^{n}\rightarrow R\) defined by \(f(x):=\underset{i=1,\ldots ,k}{\max }f_{i}(x)\) is also locally Lipschitz at \(\overline{x}\). In addition,
$$\begin{aligned} \partial f\left( \overline{x}\right) \subset conv\left\{ \partial f_{i}\left( \overline{x}\right) :i\in I(\overline{x})\right\} , \end{aligned}$$
where \(I(\overline{x}):=\left\{ i\in I:f(\overline{x})=f_{i}(\overline{x} )\right\} \).
Now, for the reader’s convenience, we give the definition of a nondifferentiable vector-valued (strictly) r-invex function (see [31] for a scalar case and [32] in the vectorial case).
Definition 9
Let X be a nonempty subset of \(R^{n}\) and \( f:R^{n}\rightarrow R^{k}\) be a vector-valued function such that each of its components is locally Lipschitz at a given point \(\overline{x}\in X\). If there exist a function \(\eta :X\times X\rightarrow R^{n}\) and a real number r such that, for \(i=1,\ldots ,k\), the following inequalities
$$\begin{aligned} \frac{1}{r}\left( e^{rf_{i}\left( x\right) }-e^{rf_{i}\left( \overline{x}\right) }\right)&\geqq e^{rf_{i}\left( \overline{x}\right) }\left\langle \xi _{i},\eta \left( x,\overline{x}\right) \right\rangle \quad \text{ if } r\ne 0, \nonumber \\ f_{i}\left( x\right) -f_{i}\left( \overline{x}\right)&\geqq \left\langle \xi _{i},\eta \left( x,\overline{x}\right) \right\rangle \quad \text{ if } r=0, \end{aligned}$$(1)
hold for each \(\xi _{i}\in \partial f_{i}\left( \overline{x}\right) \) and all \(x\in X\), then f is said to be a nondifferentiable r-invex vector-valued function at \(\overline{x}\) on X (with respect to \(\eta \)). If inequalities (1) are satisfied at any point \(\overline{x}\in X\), then f is said to be a nondifferentiable r-invex function on X (with respect to \(\eta \)).
Each function \(f_{i}, i=1,\ldots ,k\), satisfying (1), is said to be locally Lipschitz r-invex at \(\overline{x}\) on X (with respect to \(\eta \) ).
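For a differentiable function f whose exponential transform \(e^{rf}\) is convex, the r-invexity inequality holds with \(\eta \left( x,\overline{x}\right) =x-\overline{x}\), since it is then just the gradient inequality for \(\frac{1}{r}e^{rf}\). The following sketch (our own sanity check, not from the paper) verifies the inequality \(\frac{1}{r}\left( e^{rf(x)}-e^{rf(\overline{x})}\right) \geqq e^{rf(\overline{x})}f^{\prime }(\overline{x})\left( x-\overline{x}\right) \) numerically for \(f(x)=x^{2}\) and \(r=1\):

```python
import math

def f(x):          # sample objective, f(x) = x^2 (so e^{r f} is convex for r = 1)
    return x * x

def df(x):         # its derivative
    return 2.0 * x

r = 1.0

def rinvex_gap(x, xbar):
    # LHS - RHS of the r-invexity inequality with eta(x, xbar) = x - xbar;
    # a nonnegative gap means the inequality holds at this pair of points
    lhs = (math.exp(r * f(x)) - math.exp(r * f(xbar))) / r
    rhs = math.exp(r * f(xbar)) * df(xbar) * (x - xbar)
    return lhs - rhs

# check the inequality on a grid of point pairs
pairs = [(x / 4.0, xb / 4.0) for x in range(-8, 9) for xb in range(-8, 9)]
assert all(rinvex_gap(x, xb) >= -1e-12 for x, xb in pairs)
```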
Definition 10
Let X be a nonempty subset of \(R^{n}\) and \(f:R^{n}\rightarrow R^{k}\) be a vector-valued function such that each of its components is locally Lipschitz at a given point \(\overline{x}\in X\). If there exist a function \(\eta :X\times X\rightarrow R^{n}\) and a real number r such that, for \(i=1,\ldots ,k\), the following inequalities
$$\begin{aligned} \frac{1}{r}\left( e^{rf_{i}\left( x\right) }-e^{rf_{i}\left( \overline{x}\right) }\right)&>e^{rf_{i}\left( \overline{x}\right) }\left\langle \xi _{i},\eta \left( x,\overline{x}\right) \right\rangle \quad \text{ if } r\ne 0, \nonumber \\ f_{i}\left( x\right) -f_{i}\left( \overline{x}\right)&>\left\langle \xi _{i},\eta \left( x,\overline{x}\right) \right\rangle \quad \text{ if } r=0, \end{aligned}$$(2)
hold for each \(\xi _{i}\in \partial f_{i}\left( \overline{x}\right) \) and all \(x\in X, x\ne \overline{x}\), then f is said to be a nondifferentiable vector-valued strictly r-invex function at \(\overline{x}\) on X (with respect to \(\eta \)). If inequalities (2) are satisfied at any point \( \overline{x}\in X\), then f is said to be a nondifferentiable strictly r-invex function on X (with respect to \(\eta \)).
Remark 11
In order to define an analogous class of vector-valued r-incave functions with respect to \(\eta \), the direction of the inequalities (1) should be changed to the opposite one.
Remark 12
Note that in the case when \(r=0\), the definition of a (strictly) r-invex vector-valued function reduces to the definition of nondifferentiable (strictly) invex vector-valued function (see, for example, [33, 34]).
Remark 13
For more details on the properties of nondifferentiable r-invex functions, we refer, for example, to Antczak [31] for the scalar case and to Antczak [32] for the vectorial case.
Now, we prove a useful result which we will use in proving the main results of the paper.
Theorem 14
Let \(\overline{x}\in X\subset R^{n}\) and \( q:X\rightarrow R\) be a locally Lipschitz r-invex function at \(\overline{x}\in X\) on X with respect to \(\eta :X\times X\rightarrow R^{n}\). Further, let \( \frac{1}{r}\left( e^{rq^{+}(x)}-1\right) :=\max \left\{ 0,\frac{1}{r}\left( e^{rq(x)}-1\right) \right\} \). Then the function \(\frac{1}{r}\left( e^{rq^{+}(\cdot )}-1\right) \) is a locally Lipschitz invex function at \( \overline{x}\in X\) on X with respect to the same function \(\eta \).
Proof
We consider the following cases:
(1) \(q\left( \overline{x}\right) >0\).
Then \(\frac{1}{r}\left( e^{rq^{+}(x)}-1\right) =\frac{1}{r}\left( e^{rq(x)}-1\right) \) on some neighborhood of \(\overline{x}\), and so \(\partial \left( \frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \right) =\partial \left( \frac{1}{r}\left( e^{rq(\overline{x})}-1\right) \right) \). By assumption, q is a locally Lipschitz r-invex function at \(\overline{x}\in X\) on X with respect to \(\eta :X\times X\rightarrow R^{n}\). Therefore, for any \(\zeta ^{+}\in \partial \left( \frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \right) \) and all \(x\in X\), we have
$$\begin{aligned} \left\langle \zeta ^{+},\eta \left( x,\overline{x}\right) \right\rangle \leqq e^{rq(\overline{x})}q^{0}\left( \overline{x};\eta \left( x,\overline{x}\right) \right) \leqq \frac{1}{r}\left( e^{rq\left( x\right) }-e^{rq(\overline{x})}\right) \leqq \frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) -\frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) . \end{aligned}$$(3)

(2) \(q\left( \overline{x}\right) <0\).
Then, by definition, \(\frac{1}{r}\left( e^{rq^{+}(x)}-1\right) =0\) on some neighborhood of \(\overline{x}\), and so \(\partial \left( \frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \right) =\left\{ 0\right\} \). Therefore, for any \(\zeta ^{+}\in \partial \left( \frac{1}{r}\left( e^{rq^{+}(\overline{ x})}-1\right) \right) \) and all \(x\in X\), we have
$$\begin{aligned} 0=\left\langle \zeta ^{+},\eta \left( x,\overline{x}\right) \right\rangle =\frac{1}{r}\left( e^{rq^{+}\left( \overline{x}\right) }-1\right) \leqq \frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) -\frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) . \end{aligned}$$(4)

(3) \(q\left( \overline{x}\right) =0\).
By Proposition 8, it follows that
$$\begin{aligned} \partial \left( \frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \right) \subset conv\left\{ \partial \left( \frac{1}{r}\left( e^{rq(\overline{x})}-1\right) \right) ,0\right\} . \end{aligned}$$(5)
By assumption, q is a locally Lipschitz r-invex function at \(\overline{x}\in X\) on X with respect to the function \(\eta \). Therefore, by definition, for any \(\zeta \in \partial \left( \frac{1}{r}\left( e^{rq(\overline{x})}-1\right) \right) =\partial q\left( \overline{x}\right) \) (note that \(e^{rq(\overline{x})}=1\), since \(q(\overline{x})=0\)) and all \(x\in X\), we have
$$\begin{aligned} \left\langle \zeta ,\eta \left( x,\overline{x}\right) \right\rangle&\leqq e^{rq(\overline{x})}q^{0}\left( \overline{x};\eta \left( x,\overline{x}\right) \right) \leqq \frac{1}{r}\left( e^{rq\left( x\right) }-e^{rq(\overline{x})}\right) \nonumber \\&=\frac{1}{r}\left( e^{rq\left( x\right) }-1\right) \leqq \frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) =\frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) -\frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \end{aligned}$$(6)
or
$$\begin{aligned} 0=\left\langle 0,\eta \left( x,\overline{x}\right) \right\rangle =-\frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \leqq \frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) -\frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) . \end{aligned}$$(7)
Hence, by (6) and (7), the following relations
$$\begin{aligned}&\left\langle \lambda \zeta +\left( 1-\lambda \right) 0,\eta \left( x, \overline{x}\right) \right\rangle \nonumber \\&\quad \leqq \lambda \left( \frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) -\frac{1 }{r}\left( e^{rq^{+}(\overline{x})}-1\right) \right) +\left( 1-\lambda \right) \Bigg (\frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) \nonumber \\&\qquad -\frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \Bigg )=\frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) -\frac{1}{r}\left( e^{rq^{+}(\overline{x} )}-1\right) \end{aligned}$$(8)hold for every \(\lambda \in \left[ 0,1\right] \). Thus, (8) implies that, for any \(\zeta ^{+}\in \partial \left( \frac{1}{r}\left( e^{rq^{+}( \overline{x})}-1\right) \right) \) and all \(x\in X\), the following inequality
$$\begin{aligned} \frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) -\frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \geqq \left\langle \zeta ^{+},\eta \left( x,\overline{x}\right) \right\rangle \end{aligned}$$(9)
holds. Hence, by (3), (4), (9) and Remark 12, we conclude that \(\frac{1}{r}\left( e^{rq^{+}(\cdot )}-1\right) \) is a nondifferentiable invex function at \(\overline{x}\in X\) on X with respect to \(\eta \).
\(\square \)
In general, the unconstrained nonsmooth vectorial optimization problem is represented as follows:
$$\begin{aligned} V\text{-}\min \ f(x):=\left( f_{1}(x),\ldots ,f_{k}(x)\right) ,\quad x\in X, \qquad \mathrm{(UVP)} \end{aligned}$$
where the objective functions \(f_{i}:X\rightarrow R, i\in I=\{1,\ldots ,k\}\), are locally Lipschitz on X, and X is a nonempty open subset of \(R^{n}\).
In general, the concept of an optimal solution defined in scalar optimization problems does not work in multiobjective programming problems. For such multicriterion optimization problems, an optimal solution is defined in terms of a (weak) Pareto solution [(weakly) efficient solution] in the following sense:
Definition 15
A feasible point \(\overline{x}\) is said to be a Pareto solution (efficient solution) for a vector optimization problem if and only if there exists no other feasible solution x such that
$$\begin{aligned} f\left( x\right) \le f\left( \overline{x}\right) . \end{aligned}$$
Definition 16
A feasible point \(\overline{x}\) is said to be a weak Pareto solution (weakly efficient solution, weak minimum) for a vector optimization problem if and only if there exists no other feasible solution x such that
$$\begin{aligned} f\left( x\right) <f\left( \overline{x}\right) . \end{aligned}$$
It is easy to verify that every Pareto solution is a weak Pareto solution.
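Definitions 15 and 16 can be checked mechanically on a finite set of objective vectors. The sketch below (our own illustration; the helper names and sample points are hypothetical) filters Pareto and weak Pareto points of a finite list and exhibits the inclusion just noted:

```python
def dominates(fa, fb):
    # fa ≤ fb in the sense of relation (iv): componentwise <=, not equal
    return all(a <= b for a, b in zip(fa, fb)) and fa != fb

def strictly_dominates(fa, fb):
    # fa < fb componentwise (relation (ii)); used for weak Pareto optimality
    return all(a < b for a, b in zip(fa, fb))

def pareto_points(values):
    # keep points not dominated by any other point (Definition 15)
    return [v for v in values if not any(dominates(w, v) for w in values)]

def weak_pareto_points(values):
    # keep points not strictly dominated by any other point (Definition 16)
    return [v for v in values if not any(strictly_dominates(w, v) for w in values)]

pts = [(1, 3), (2, 2), (3, 1), (2, 3), (3, 3)]
# every Pareto point is also a weak Pareto point
assert set(pareto_points(pts)) <= set(weak_pareto_points(pts))
```

On this sample list, `(2, 3)` is weakly Pareto but not Pareto (it is dominated by `(1, 3)` but not strictly dominated), which illustrates why the inclusion is in general strict.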
The following result gives the necessary optimality condition for the unconstrained vectorial optimization problem (UVP) (see [35]).
Theorem 17
A necessary condition for the point \(\overline{x}\) to be (weak) Pareto optimal in the nondifferentiable vector optimization problem (UVP) is that there exists a multiplier \(\overline{\lambda }\in R^{k}, \overline{\lambda }\ge 0\), such that
$$\begin{aligned} 0\in \sum _{i=1}^{k}\overline{\lambda }_{i}\partial f_{i}\left( \overline{x}\right) . \end{aligned}$$
Often, the feasible set of a multiobjective programming problem can be represented by functional inequalities and, therefore, we consider the nondifferentiable constrained vector optimization problem in the following form:
$$\begin{aligned} V\text{-}\min \ f(x):=\left( f_{1}(x),\ldots ,f_{k}(x)\right) \quad \text{ subject to }\quad g_{j}(x)\leqq 0,\ j\in J, \qquad \mathrm{(VP)} \end{aligned}$$
where \(f_{i}:X\rightarrow R, i\in I=\{1,\ldots ,k\}\) and \(g_{j}:X\rightarrow R , j\in J=\{1,\ldots ,m\}\), are locally Lipschitz functions on a nonempty open set \(X\subset R^{n}\).
Let
$$\begin{aligned} D:=\left\{ x\in X:g_{j}(x)\leqq 0,\ j\in J\right\} \end{aligned}$$
denote the set of all feasible solutions of the constrained multiobjective programming problem (VP).
It is well known (see, for example, [33–38]) that the following conditions, known as the generalized form of the Karush–Kuhn–Tucker conditions, are necessary for a (weak) Pareto solution in the considered nondifferentiable vector optimization problem (VP).
Theorem 18
Let \(\overline{x}\in D\) be a (weak) Pareto solution in problem (VP) and a constraint qualification (see, for example, [30, 36, 39]) be satisfied at \(\overline{x}\). Then, there exist the Lagrange multipliers \(\overline{\lambda }\in R^{k}\) and \(\overline{\mu }\in R^{m}\) such that
$$\begin{aligned} 0\in \sum _{i=1}^{k}\overline{\lambda }_{i}\partial f_{i}\left( \overline{x}\right) +\sum _{j=1}^{m}\overline{\mu }_{j}\partial g_{j}\left( \overline{x}\right) , \end{aligned}$$(10)
$$\begin{aligned} \overline{\mu }_{j}g_{j}\left( \overline{x}\right) =0,\quad j\in J, \end{aligned}$$(11)
$$\begin{aligned} \overline{\lambda }\ge 0,\quad \overline{\mu }\geqq 0. \end{aligned}$$(12)
Definition 19
The point \(\overline{x}\in D\) is said to be a Karush–Kuhn–Tucker point in the considered multiobjective programming problem (VP) if there exist the Lagrange multipliers \(\overline{\lambda }\in R^{k}, \overline{\mu }\in R^{m}\) such that the Karush–Kuhn–Tucker necessary optimality conditions (10)–(12) are satisfied at \(\overline{x}\).
3 Convergence of a New Vector Exponential Penalty Function Method for a Multiobjective Programming Problem
For the considered nonlinear multiobjective programming problem (VP), we introduce a new vector exponential penalty function method as follows:
$$\begin{aligned} V\text{-}\min \ P_{r}\left( x,c\right) :=\frac{1}{r}\left( e^{rf_{1}(x)},\ldots ,e^{rf_{k}(x)}\right) +c\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}(x)}-1\right) e, \qquad (\hbox {VP}_{r}(c)) \end{aligned}$$(13)
where r is a finite real number not equal to 0 and \(e=\left( 1,\ldots ,1\right) \in R^{k}\). Note that, for a given constraint \(g_{j}(x)\leqq 0 \), the function \(\frac{1}{r}\left( e^{rg_{j}^{+}(\cdot )}-1\right) \) defined by
$$\begin{aligned} \frac{1}{r}\left( e^{rg_{j}^{+}(x)}-1\right) :=\max \left\{ 0,\frac{1}{r}\left( e^{rg_{j}(x)}-1\right) \right\} \end{aligned}$$
is equal to zero for all x that satisfy the constraint and that it has a positive value whenever this constraint is violated. Moreover, large violations in the constraint \(g_{j}(x)\leqq 0\) result in large values for \( \frac{1}{r}\left( e^{rg_{j}^{+}(x)}-1\right) \). Thus, the function \(\frac{1}{ r}\left( e^{rg_{j}^{+}(\cdot )}-1\right) \) has the penalty features relative to the single inequality constraint \(g_{j}\). However, observe that at points, where \(g_{j}(x)=0\), the foregoing objective function might not be differentiable, even though \(g_{j}\) is differentiable.
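These penalty features are easy to verify numerically. In the sketch below (our own illustration; the values of r and of the constraint are arbitrary choices), the term vanishes exactly when the constraint value is nonpositive and is positive and increasing in the amount of violation:

```python
import math

def exp_penalty_term(g_val, r):
    # (1/r)(e^{r g^+} - 1) = max{0, (1/r)(e^{r g} - 1)} for r != 0;
    # note the kink at g_val = 0, where the term is not differentiable
    return max(0.0, (math.exp(r * g_val) - 1.0) / r)

r = 1.0
# zero whenever the constraint g_j(x) <= 0 is satisfied ...
assert exp_penalty_term(-2.0, r) == 0.0 and exp_penalty_term(0.0, r) == 0.0
# ... positive and increasing in the amount of violation
assert 0.0 < exp_penalty_term(0.5, r) < exp_penalty_term(1.0, r)
```

The `max` form and the \(g_{j}^{+}\) form agree because \(t\mapsto \frac{1}{r}\left( e^{rt}-1\right) \) is increasing and vanishes at \(t=0\) for every \(r\ne 0\), so the check also passes for negative r.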
As it follows from (13), the vector penalized problem \((\hbox {VP}_{r}(c))\) constructed in the vector exponential penalty function method is such an unconstrained vector optimization problem, in which a vector objective function is the sum of a certain vector “merit” function (which reflects the vector objective function of the given multiobjective programming problem) and the same penalty term which reflects the constraint set being added to each component of the vector “merit” function. The vector merit function is chosen as the composition of the exponential function and the original vector objective function, while the penalty term (the same for each component of merit function) is obtained by multiplying a suitable function, which represents the constraints, by a positive parameter c, called the penalty parameter.
Remark 20
Note that \(P_{r}:R^{n}\times R_{+}\rightarrow R^{k}\) and \( P_{r}(x,c)=(P_{r1}(x,c),\ldots ,P_{rk}(x,c))\), where \(P_{ri}(x,c):=\frac{1}{r} e^{rf_{i}(x)}+c\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}(x)}-1\right) , i=1,\ldots ,k.\)
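Following the componentwise formula in Remark 20, the vector exponential penalty function can be assembled as in the following sketch (a minimal illustration; the sample objectives and constraints are ours, not from the paper):

```python
import math

def P_r(x, c, r, fs, gs):
    # P_ri(x, c) = (1/r) e^{r f_i(x)} + c * sum_j (1/r)(e^{r g_j^+(x)} - 1)
    penalty = sum(max(0.0, (math.exp(r * g(x)) - 1.0) / r) for g in gs)
    return tuple(math.exp(r * f(x)) / r + c * penalty for f in fs)

# two objectives and two inequality constraints, chosen only for illustration
fs = [lambda x: x, lambda x: x * x]
gs = [lambda x: -x, lambda x: x - 1.0]          # feasible set: 0 <= x <= 1

r, c = 1.0, 10.0
inside = P_r(0.5, c, r, fs, gs)                 # feasible: the penalty term is zero
assert inside == (math.exp(0.5), math.exp(0.25))
outside = P_r(2.0, c, r, fs, gs)                # infeasible: every component is shifted up
assert outside[0] > math.exp(2.0)
```

Note that the same penalty term is added to every component, as described above.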
Remark 21
In the case when \(r=0\), the definition of the vector penalized problem \((\hbox {VP}_{r}(c))\) reduces to the following form:
$$\begin{aligned} V\text{-}\min \ f(x)+c\sum _{j=1}^{m}g_{j}^{+}(x)\,e, \end{aligned}$$
where \(g_{j}^{+}(x):=\max \left\{ 0,g_{j}(x)\right\} \). Thus, in the case when \(r=0\), we obtain the definition of the classical vector \(l_{1}\) penalty function method (see Antczak [19]).
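The connection with the vector \(l_{1}\) penalty function can also be seen as a limit: since \(\lim _{r\rightarrow 0}\frac{1}{r}\left( e^{rt}-1\right) =t\), the exponential penalty term approaches the classical term \(g_{j}^{+}(x)\) as \(r\rightarrow 0\). A small numerical check (our own):

```python
import math

def exp_term(g_val, r):
    # (1/r)(e^{r g^+} - 1) with g^+ = max{0, g}, for r != 0
    gp = max(0.0, g_val)
    return (math.exp(r * gp) - 1.0) / r

def l1_term(g_val):
    # classical l1 penalty term g^+(x) = max{0, g(x)}
    return max(0.0, g_val)

# the exponential term approaches the l1 term as r -> 0
for g_val in (-1.0, 0.5, 2.0):
    assert abs(exp_term(g_val, 1e-8) - l1_term(g_val)) < 1e-6
```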
Now, we show that a weak Pareto solution in the considered multiobjective programming problem (VP) can be obtained by solving a sequence of problems (13) with the penalty parameter c selected from an increasing sequence of parameters \(\left( c_{n}\right) \).
Therefore, for the considered multiobjective programming problem (VP), we now construct a sequence of vector penalized optimization problems \((\hbox {VP}_{r}(c_{n})), n=1,2,\ldots \), with the vector exponential penalty function as follows:
$$\begin{aligned} V\text{-}\min \ P_{r}\left( x,c_{n}\right) :=\frac{1}{r}\left( e^{rf_{1}(x)},\ldots ,e^{rf_{k}(x)}\right) +c_{n}\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}(x)}-1\right) e, \qquad (\hbox {VP}_{r}(c_{n})) \end{aligned}$$(14)
where \(c_{n}>0\) and \(\underset{n\rightarrow \infty }{\lim }c_{n}=\infty \). Moreover, we denote by \(\overline{x}_{n}\) an approximate weak Pareto solution in the vector penalized optimization problem \((\hbox {VP}_{r}(c_{n}))\) with the vector exact exponential penalty function.
An algorithmic framework that forms the basis for the introduced vector exponential penalty function method is as follows:
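Such a sequential exterior penalty loop can be sketched as follows (a schematic under our own simplifying assumptions: a one-dimensional hypothetical problem, subproblems solved by crude grid search, and a weak Pareto solution of each subproblem obtained by minimizing a single component of \(P_{r}\), which is sufficient for weak Pareto optimality):

```python
import math

# hypothetical sample problem: f(x) = (x, x^2), constraint g(x) = 1 - x <= 0
fs = [lambda x: x, lambda x: x * x]
g = lambda x: 1.0 - x
r = 1.0

def P_r1(x, c):
    # first component of the vector exponential penalty function
    pen = max(0.0, (math.exp(r * g(x)) - 1.0) / r)
    return math.exp(r * fs[0](x)) / r + c * pen

grid = [i / 1000.0 for i in range(-2000, 4001)]  # crude search on [-2, 4]

def solve_subproblem(c):
    # minimizing one component of P_r yields a weak Pareto solution of (VP_r(c))
    return min(grid, key=lambda x: P_r1(x, c))

c, x_n = 1.0, None
for n in range(30):                 # penalty loop: increase c until feasibility
    x_n = solve_subproblem(c)
    if g(x_n) <= 1e-9:              # iterate feasible => stop
        break
    c *= 2.0                        # c_n -> infinity drives the iterates into D
```

On this sample instance the loop stops as soon as the iterate becomes feasible, after the penalty parameter has been doubled only a few times.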
Now, we prove the convergence theorem for the introduced vector exponential penalty function method. Namely, we show that if \(\left( \overline{x}_{n_{s}}\right) \) is any convergent subsequence of \(\left( \overline{x}_{n}\right) \) and \(\underset{s\rightarrow \infty }{\lim }\overline{x}_{n_{s}}=\overline{x}\in D\), then \(\overline{x}\) is a weak Pareto solution in the considered multiobjective programming problem (VP).
First, we show that any limit point \(\overline{x}\) of the sequence \(\left( \overline{x}_{n}\right) \), that is, the sequence of approximate weak Pareto solutions in vector penalized optimization problems \((\hbox {VP}_{r}(c_{n}))\) with the vector exact exponential penalty function, is feasible in the considered multiobjective programming problem (VP).
Lemma 22
Let \(c_{n}>0\) and \(\underset{n\rightarrow \infty }{ \lim }c_{n}=\infty \). If \(\overline{x}=\underset{n\rightarrow \infty }{\lim } \overline{x}_{n}\), then \(\overline{x}\) is feasible in the considered multiobjective programming problem (VP).
Proof
Let \(\overline{x}=\underset{n\rightarrow \infty }{\lim }\overline{x}_{n}\). Thus, there exists a subsequence \(\{\overline{x}_{n_{s}}\}\) of \(\{\overline{x }_{n}\}\) such that \(\overline{x}_{n_{s}}\) is an approximate weak Pareto solution in the vector penalized problem \((\hbox {VP}_{r}(c_{n_{s}})), s=1,2,\ldots , \) with the vector exponential penalty function and, moreover, \(\underset{ s\rightarrow \infty }{\lim }\overline{x}_{n_{s}}=\overline{x}\). We proceed by contradiction. Suppose, contrary to the result, that \(\overline{x}\notin D \). If we take \(\widetilde{x}\in D\), then, according to the definition of a vector penalized problem \((\hbox {VP}_{r}(c_{n_{s}}))\) and Definition 16, there exists \(i_{n_{s}}\in \left\{ 1,\ldots ,k\right\} \) such that
Since \(\widetilde{x}\in D\), by (14), we have
By assumption, \(\overline{x}\notin D\). This means that there exists \(j\in \{1,2,\ldots ,m\}\) such that \(g_{j}\left( \overline{x}\right) >0\). Then, by (14), it follows that
Hence, (19) implies that
for some \(\varepsilon >0\). By assumption, \(\underset{s\rightarrow \infty }{ \lim }\overline{x}_{n_{s}}=\overline{x}\). Since \(\sum _{j=1}^{m}\frac{1}{r} \left( e^{rg_{j}^{+}\left( \overline{x}_{n_{s}}\right) }-1\right) \rightarrow \sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x} \right) }-1\right) \), for all s sufficiently large, by (20), we have
Therefore, using (18) and (21), for s sufficiently large, we have,
This is a contradiction to (17). This means that \(\overline{x}\in D\), and the proof of this lemma is completed. \(\square \)
The following theorem shows that if a sequence of approximate weak Pareto solutions in the vector penalized problem \((\hbox {VP}_{r}(c_{n}))\) with the vector exponential penalty function converges to \(\overline{x}\), then \( \overline{x}\) is also a weak Pareto solution in the considered multiobjective programming problem (VP).
Theorem 23
Let \(\overline{x}_{n}\) be an approximate weak Pareto optimal solution in the vector penalized optimization problem \((\hbox {VP}_{r}(c_{n}))\) with the vector exponential penalty function, \(n=1,2,\ldots \). If \(\left( \overline{x} _{n_{s}}\right) \) is any convergent subsequence of \(\left( \overline{x} _{n}\right) \) and \(\underset{s\rightarrow \infty }{\lim }\overline{x} _{n_{s}}=\overline{x}\), then \(\overline{x}\) is a weak Pareto solution in the considered multiobjective programming problem (VP).
Proof
Let \(\left( \overline{x}_{n_{s}}\right) \) be any convergent subsequence of \( \left( \overline{x}_{n}\right) \) and \(\underset{s\rightarrow \infty }{\lim } \overline{x}_{n_{s}}=\overline{x}\). By Lemma 22, it follows that \(\overline{x}\) is feasible in the considered multiobjective programming problem (VP). We proceed by contradiction. Suppose, contrary to the result, that \(\overline{x}\) is not a weak Pareto solution in the considered multiobjective programming problem (VP). Hence, by Definition 16, it follows that there exists \(\widetilde{x}\in D\) such that
Thus,
Since \(\overline{x}_{n_{s}}\) is a weak Pareto solution in vector penalized problem \((\hbox {VP}_{r}(c_{n_{s}}))\), there exists \(i_{n_{s}}\in \left\{ 1,\ldots ,k\right\} \) such that
Since \(\widetilde{x}\in D\), by (14), we have
Also by (14), it follows that
By (23), it follows that
Since \(\underset{s\rightarrow \infty }{\lim }\overline{x}_{n_{s}}=\overline{x }\), for sufficiently large s, (28) implies that the following inequality
holds, contradicting (27). This means that any limit point of any convergent subsequence of \(\left( \overline{x}_{n}\right) \) is a weak Pareto solution in the considered multiobjective programming problem (VP). The proof of this theorem is completed. \(\square \)
It turns out that the strategy for choosing the penalty parameter \(c_{n}\) is crucial to the practical success of the algorithm presented above. If the initial choice \(c_{0}\) is too small, many cycles of the algorithm presented above may be needed to determine an appropriate solution.
In order to illustrate the difficulties caused by an inappropriate value of c, we consider the following multiobjective programming problem.
Example 24
Consider the following vector optimization problem:
Note that \(D=\left\{ x\in R:0\leqq x\leqq 1\right\} \) and \(\overline{x}=1\) is a Pareto solution in the considered multiobjective programming problem (VP0). Note that all functions constituting problem (VP0) are 1-invex on R with respect to the same function \(\eta \), where \(\eta \left( x, \overline{x}\right) =x-\overline{x}\). We define the vector exponential penalty function \(P_{1}(\cdot ,c)\) as follows:
Now, we consider various values of the penalty parameter c and draw the graphs of each component of the vector exponential penalty function \(P_{1}(\cdot ,c)\) for the considered multiobjective programming problem (VP0) according to these values of the penalty parameter c in Fig. 1.
Note that each component of the vector exponential penalty function \(P_{1}(\cdot ,c)\) is a monotonically increasing function when c is smaller than 0.15. When c is chosen from the interval (0.15, 1), the vector exponential penalty function has a Pareto solution, but not at the feasible solution \(\overline{x}=1\). Finally, when \(c>1\), the vector exponential penalty function \(P_{1}(\cdot ,c)\) has a Pareto solution at \(\overline{x}=1\).
Therefore, if, for example, the current iterate in the above algorithm is \( \overline{x}_{n}=\frac{1}{4}\) and the penalty parameter \(c_{n}\) is chosen to be less than 1, then almost any implementation of the vector exponential penalty method will give a step that moves away from the Pareto solution \( \overline{x}=1\). This behavior of the algorithm will be repeated, producing increasingly poorer iterates, until the penalty parameter c is increased above the threshold equal to 1.
It turns out in our further considerations that this value of the threshold is not accidental (see Remark 37 below).
4 Exactness of the Introduced Vector Exponential Penalty Function Method
In order to avoid the need for a sequence of unbounded penalty parameters, in other words, an infinite sequence of penalized optimization problems, we now prove that the introduced vector exponential penalty function method is exact in the sense that a (weak) Pareto solution in the original multiobjective programming problem is equivalent to a (weak) Pareto solution in the associated vector penalized problem, for a finite (sufficiently large) value of the penalty parameter. In order to prove this result, we assume that each of the functions constituting the considered multiobjective programming problem (VP) is locally Lipschitz r-invex with respect to the same function \(\eta \).
Now, in a natural way, we extend the well-known definition of exactness property for a scalar exact penalty function to the vectorial case.
Definition 25
If a threshold value \(\overline{c}\geqq 0\) exists such that, for every \(c>\overline{c}\), the set of (weak) Pareto solutions of the vector penalized optimization problem \((\hbox {VP}_{r}(c))\) coincides with the set of (weak) Pareto solutions of the considered multiobjective programming problem (VP), then the function \(P_{r}(x,c)\) is termed a vector exact exponential penalty function.
According to the definition of the function \(P_{r}(x,c)\), we call \((\hbox {VP}_{r}( c))\), defined by (13), the vector penalized problem with the vector exact exponential penalty function.
It is clear that, conceptually, if \(P_{r}\left( x,c\right) \) is a vector exact exponential penalty function, we can find the constrained (weak) Pareto solutions of the considered multiobjective programming problem (VP) by looking for the unconstrained (weak) Pareto solutions of the function \(P_{r}\left( x,c\right) \), for sufficiently large (but finite) values of the penalty parameter c.
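For concreteness, the components of \(P_{r}\left( x,c\right) \) can be evaluated directly; the sketch below assumes, consistently with the terms appearing in the proofs that follow, that definition (13) takes the componentwise form \(P_{r,i}(x,c)=\frac{1}{r}e^{rf_{i}(x)}+c\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}(x)}-1\right) \) with \(g_{j}^{+}=\max (g_{j},0)\). The function names are ours:

```python
import math

def vector_exponential_penalty(f_list, g_list, r, c):
    """Return a callable evaluating the components of P_r(x, c) under the
    assumed componentwise reading of definition (13):
        P_i(x, c) = (1/r) e^{r f_i(x)} + c * sum_j (1/r)(e^{r g_j^+(x)} - 1),
    where g_j^+(x) = max(g_j(x), 0), r != 0 and c > 0."""
    def P(x):
        # common exponential constraint-violation term
        violation = sum((math.exp(r * max(g(x), 0.0)) - 1.0) / r
                        for g in g_list)
        # exponentially transformed objectives plus penalty
        return [math.exp(r * f(x)) / r + c * violation for f in f_list]
    return P
```

At a feasible point every term \(e^{rg_{j}^{+}(x)}-1\) vanishes, so each component reduces to \(\frac{1}{r}e^{rf_{i}(x)}\), which is why unconstrained (weak) Pareto solutions of \(P_{r}\left( \cdot ,c\right) \) can recover the constrained ones.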
Now, for sufficiently large values of the penalty parameter c, we prove the equivalence between the sets of (weak) Pareto solutions of problem (VP) and the vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function.
First, we establish that a Karush–Kuhn–Tucker point in the considered multiobjective programming problem is a weak Pareto solution of the vector exact exponential penalty function in the associated vector penalized problem \((\hbox {VP}_{r}(c))\), for every penalty parameter c not smaller than a certain threshold.
Theorem 26
Let \(\overline{x}\) be a feasible solution in the nonsmooth multiobjective programming problem (VP) and the generalized Karush–Kuhn–Tucker necessary optimality conditions (10)–(12) be satisfied at \(\overline{x}\) with the Lagrange multipliers \( \overline{\lambda }_{i}, i\in I, \overline{\mu }_{j}, j\in J\). Furthermore, assume that the objective function f and the constraint function g are r-invex at \(\overline{x}\) on X with respect to the same function \(\eta \) and \(M=\max \left\{ e^{rf_{i}\left( \overline{x}\right) } , i\in I\right\} \). If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set \(c\geqq M\max \left\{ \overline{ \mu }_{j}, j\in J\right\} \)), then \(\overline{x}\) is also a weak Pareto solution in any associated vector penalized optimization problem \((\hbox {VP} _{r}(c))\) with the vector exact exponential penalty function.
Proof
We proceed by contradiction. Suppose, contrary to the result, that \( \overline{x}\) is not a weak Pareto solution in the associated vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function. Therefore, by Definition 16, there exists \(\widetilde{x}\in X\) such that
By definition of the vector penalized problem \((\hbox {VP}_{r}(c))\) [see (13)], we have
Thus,
Since \(\overline{x}\) is a feasible solution in the nonsmooth multiobjective programming problem (VP), (14) yields
By assumption, \(M=\max \left\{ e^{rf_{i}\left( \overline{x}\right) }, i\in I\right\} \) and, moreover, \(c\geqq M\max \left\{ \overline{\mu }_{j} , j\in J\right\} \). Hence, (32) gives
Thus, (33) yields
By the Karush–Kuhn–Tucker necessary optimality condition (12), it follows that
and, for at least one \(i^{*}\in I\), we have
Adding both sides of (34) and (35), we get
By the Karush–Kuhn–Tucker necessary optimality condition (12), we have that \(\sum _{i=1}^{k}\overline{\lambda }_{i}=1\). Hence, (36) yields
By assumption, the objective function f and the constraint function g are locally Lipschitz r-invex at \(\overline{x}\) on X with respect to the same function \( \eta \). Then, by Definition 9, the following inequalities
hold for all \(x\in X\). Therefore, they are also satisfied for \(x=\widetilde{x }\in X\). Using the Karush–Kuhn–Tucker necessary optimality condition (12), we get, respectively,
Adding both sides of the above inequalities, we get that the following inequality
holds for every \(\xi _{i}\in \partial f_{i}\left( \overline{x}\right) , i=1,\ldots ,k,\) and \(\zeta _{j}\in \partial g_{j}\left( \overline{x}\right) , j=1,\ldots ,m\). By the Karush–Kuhn–Tucker necessary optimality condition (10), the following inequality
holds. Thus,
Hence, using the Karush–Kuhn–Tucker necessary optimality condition (11), we obtain
By (14), it follows that the following inequality
holds, contradicting (37). Hence, the proof of this theorem is completed. \(\square \)
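The lower bound on the penalty parameter in Theorem 26 is directly computable from the data at \(\overline{x}\), namely \(c\geqq M\max \left\{ \overline{\mu }_{j}, j\in J\right\} \) with \(M=\max \left\{ e^{rf_{i}\left( \overline{x}\right) }, i\in I\right\} \). A minimal sketch (the function name is ours):

```python
import math

def exactness_threshold(f_vals_at_xbar, mu_bar, r):
    """Lower bound on the penalty parameter c from Theorem 26:
        c >= M * max_j mu_bar_j,   where   M = max_i e^{r f_i(xbar)}.
    f_vals_at_xbar: the objective values f_i(xbar);
    mu_bar: the KKT multipliers of the inequality constraints."""
    M = max(math.exp(r * fi) for fi in f_vals_at_xbar)
    return M * max(mu_bar)
```

For instance, in Example 34 below the data give \(M=1\) and \(\max \left\{ \overline{\mu }_{j}\right\} =1\), so any \(c\geqq 1\) suffices for exactness.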
The following corollary follows directly from Theorem 26.
Corollary 27
Let \(\overline{x}\) be a weak Pareto solution in the multiobjective programming problem (VP) and all hypotheses of Theorem 26 be fulfilled. If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set \(c\geqq M\max \left\{ \overline{\mu }_{j}, j\in J\right\} \)), then \( \overline{x}\) is also a weak Pareto solution in the associated vector penalized optimization problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function.
Now, under stronger assumptions, we establish the relationship between a Karush–Kuhn–Tucker point in the considered multiobjective programming problem (VP) and a Pareto solution in its associated vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function.
Theorem 28
Let \(\overline{x}\) be a feasible solution in the multiobjective programming problem (VP) and the Karush–Kuhn–Tucker necessary optimality conditions (10)–(12) be satisfied at \( \overline{x}\) with the Lagrange multipliers \(\overline{\lambda }_{i}, i\in I, \overline{\mu }_{j}, j\in J\). Furthermore, assume that one of the following hypotheses is satisfied:
-
(i)
the Lagrange multipliers \(\overline{\lambda }_{i}, i\in I\), associated to the objectives \(f_{i}\), are positive real numbers and, moreover, the objective function f and the constraint function g are r-invex at \(\overline{x}\) on X with respect to \(\eta \),
-
(ii)
the objective function f is strictly r-invex at \(\overline{x}\) on X with respect to \(\eta \) and the constraint function g is r-invex at \(\overline{x}\) on X with respect to \(\eta \).
If \(M=\max \left\{ e^{rf_{i}\left( \overline{x}\right) }, i\in I\right\} \) and the penalty parameter c is assumed to be sufficiently large (it is sufficient to set \(c\geqq M\max \left\{ \overline{\mu }_{j} , j\in J\right\} \)), then \(\overline{x}\) is also a Pareto solution in the associated vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function.
Proof
The proof of this theorem is similar to that of Theorem 26. \(\square \)
Corollary 29
Let \(\overline{x}\) be a Pareto solution in the multiobjective programming problem (VP) and all hypotheses of Theorem 28 be fulfilled. If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set \(c\geqq M\max \left\{ \overline{\mu }_{j}, j\in J\right\} \)), then \(\overline{x}\) is also a Pareto solution in the associated vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function.
Now, under stronger assumptions, we establish converse results to those proved above. Namely, we prove that, for sufficiently large values of the penalty parameter c, if \(\overline{x}\) is a (weak) Pareto solution in the vector penalized problem \((\hbox {VP}_{r}(\overline{c}))\) with the vector exact exponential penalty function, then it is also a (weak) Pareto solution in the original multiobjective programming problem (VP). To prove this result, we assume that both the objective functions and the constraint functions are r-invex at \(\overline{x}\) on X with respect to the same function \(\eta \). We also show that there exists a finite threshold \(\overline{c}\) of the penalty parameter such that, for every penalty parameter c exceeding this threshold, a (weak) Pareto solution in the associated vector penalized optimization problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function is a (weak) Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP).
Theorem 30
Let D be a compact subset of \(R^{n}\) and \( \overline{x}\) be a weak Pareto solution in the vector penalized problem \((\hbox {VP} _{r}(c))\) with the vector exact exponential penalty function, where c is assumed to be sufficiently large. Further, assume that the objective function f and the constraint function g are r-invex at \(\overline{x}\) on X with respect to the same function \(\eta \). Then \(\overline{x}\) is also a weak Pareto solution in the given multiobjective programming problem (VP).
Proof
By assumption, \(\overline{x}\) is a weak Pareto solution in the vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function. We consider two cases. First, assume that \(\overline{x}\in D\). Then, by definition of the vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function, it follows that
Hence, by (14), (38) implies that the relation
holds, by which we conclude that \(\overline{x}\) is a weak Pareto solution in the considered multiobjective programming problem (VP). Then, for any \( c\geqq \overline{c}\) (where \(\overline{c}\) denotes the penalty parameter for which the vector penalized problem \((\hbox {VP}_r (c))\) is defined), a weak Pareto solution \(\overline{x}\) in each vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function is also a weak Pareto solution in the considered multiobjective programming problem (VP). Thus, in the case when \(\overline{x}\in D\), the result follows directly from (39).
Now, suppose that \(\overline{x}\notin D\). Since \(\overline{x}\) is a weak Pareto solution in the vector penalized problem \((\hbox {VP}_{r}(c))\), by Theorem 17, there exists \(\overline{\lambda }\in R^{k}, \overline{\lambda }\ge 0, \sum _{i=1}^{k}\overline{\lambda }_{i}=1\), such that
By definition of the vector exact exponential penalty function, it follows that
By assumption, all functions \(g_{j}, j=1,\ldots ,m\), are locally Lipschitz on X. Then, by definition, the functions \(\frac{1}{r}\left( e^{rg_{j}^{+}\left( \cdot \right) }-1\right) , j=1,\ldots ,m\), are also locally Lipschitz on X. Since all \(\overline{\lambda }_{i}\) are nonnegative, equality holds in Corollary 6. Thus, (41) yields
Thus,
Since \(\sum _{i=1}^{k}\overline{\lambda }_{i}=1\), (42) gives
Then, by Lemma 4, it follows that
Hence, by Proposition 5, we have
Thus, by Theorem 2.3.9 in [30], it follows that
By assumption, the objective function f and the constraint function g are r-invex at \(\overline{x}\) on X with respect to the same function \( \eta \). Since the constraint functions \(g_{j}, j\in J\), are locally Lipschitz on X and r-invex at \(\overline{x}\) on X with respect to the same function \(\eta \), by Theorem 14, the functions \(\frac{1 }{r}\left( e^{rg_{j}^{+}\left( \cdot \right) }-1\right) , j\in J\), are invex at \(\overline{x}\) on X with respect to the same function \(\eta \). Then, the following inequalities
hold for all \(x\in X\). Multiplying (46) by \(c>0\), we get
Adding both sides of the inequalities (47), we obtain
Thus, by (45) and (48), for any \(i=1,\ldots ,k\), the following inequalities
hold for all \(x\in X\), for every \(\xi _{i}\in \partial f_{i}\left( \overline{ x}\right) , i=1,\ldots ,k\), and \(\zeta _{j}^{+}\in \partial \left( \frac{1}{r} \left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) , j=1,\ldots ,m\). Multiplying (49) by \(\overline{\lambda }_{i}\), we get
Adding both sides of the above inequalities, we obtain
Since \(\sum _{i=1}^{k}\overline{\lambda }_{i}=1\), (51) implies that the inequality
holds for every \(\xi _{i}\in \partial f_{i}(\overline{x}), i=1,\ldots ,k,\) and \(\zeta _{j}^{+}\in \partial \left( \frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) , j=1,\ldots ,m\). Hence, by (44), the following inequality
holds for all \(x\in X\). By (14), for each \(x\in D\), it follows that
Combining (53) and (54), we get that the following inequality
holds for all \(x\in D\). Since \(\overline{x}\) is not feasible in the given multiobjective programming problem (VP), by (14), it follows that
By assumption, c is sufficiently large. Let c satisfy
Now, we prove that \(\overline{c}\geqq 0\). Indeed, by assumption, \(\overline{x }\) is a weak Pareto solution in the vector penalized optimization problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function. Thus, by (39), it follows that, for every \(x\in D\), there exists at least one \( i\in I\) such that \(\frac{1}{r}e^{rf_{i}(x)}-\frac{1}{r}e^{rf_{i}(\overline{x} )}\geqq 0\). Hence, (57) indeed implies that \(\overline{c}\geqq 0\).
We now show that
Suppose, contrary to the result, that
Since (57) is fulfilled for all \(c>\overline{c}\), there exists a penalty parameter \(c^{*}>\overline{c}\) such that
Combining (57), (58) and (60), we have
Hence, (61) gives
Since \(\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( x\right) }-1\right) =0\) for all \(x\in D\), (62) implies that the following inequality
holds, which contradicts the weak efficiency of \(\overline{x}\) in the vector penalized optimization problem \((\hbox {VP}_{r}(c^{*}))\). Thus, (58) is satisfied.
By assumption, \(\overline{x}\) is a weak Pareto solution in the vector penalized optimization problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function for sufficiently large c (as follows from (57), for all \(c>\overline{c}\)). Hence, by Definition 16, it follows that
Thus, (13) gives
Since \(D\subset X\), the inequality (64) yields
The above relation is equivalent to
Thus, (66) implies that the following relation
holds, contradicting (57). Therefore, the case \(\overline{x}\notin D\) is impossible. This means that \(\overline{x}\) is feasible in the multiobjective programming problem (VP). Hence, by the feasibility of \( \overline{x}\) in the constrained optimization problem (VP), (66) yields
By Definition 16, the above inequality implies that \(\overline{x}\) is a weak Pareto solution in the multiobjective programming problem (VP). This completes the proof of this theorem. \(\square \)
Theorem 31
Let D be a compact subset of \(R^{n}\) and \(\overline{x} \) be a Pareto solution in the vector penalized problem \((\hbox {VP}_{r}(\overline{c }))\) with the vector exact exponential penalty function. Further, assume that the objective function f is strictly r-invex at \(\overline{x}\) on X and the constraint function g is r-invex at \(\overline{x}\) on X with respect to the same \(\eta \). If the penalty parameter \(\overline{c}\) is sufficiently large, then \(\overline{x}\) is also a Pareto solution in the considered multiobjective programming problem (VP).
Proof
The proof of this theorem is similar to that of Theorem 30. \(\square \)
Corollary 32
Let all hypotheses of Corollary 27 (or Corollary 29) and Theorem 30 (or Theorem 31) be fulfilled. Then the sets of weak Pareto solutions (Pareto solutions) in the given multiobjective programming problem (VP) and in its associated vector penalized optimization problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function coincide.
The importance of this result is that it guarantees the existence of a finite penalty parameter c such that the sets of weak Pareto solutions (Pareto solutions) in the given multiobjective programming problem (VP) and in its associated vector penalized optimization problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function are the same.
Remark 33
Note that, since the lower bound for the penalty parameter is finite, the sequence of vector penalized subproblems (16) generated by the algorithm presented above is also finite, in contrast to inexact penalty function methods.
Now, we illustrate the results established above by means of a nondifferentiable multiobjective programming problem with r-invex functions, which we solve using the introduced vector exact exponential penalty function method.
Example 34
Consider the following nonsmooth multiobjective programming problem:
Note that \(D=\left\{ \left( x_{1},x_{2}\right) \in R^{2}:0\leqq x_{1}\leqq 1\wedge 0\leqq x_{2}\leqq 1\right\} \) and \(\overline{x}=\left( 0,0\right) \) is a Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP1). Further, it is not difficult to prove, by Definition 10, that the objective function f is strictly 1-invex on \(R^{2}\) with respect to the function \( \eta :R^{2}\times R^{2}\rightarrow R^{2}\) and, by Definition 9, the constraints \(g_{1}\) and \(g_{2}\) are 1-invex on \( R^{2}\) with respect to the same function \(\eta \), where
We use the vector exact exponential penalty function method for solving the considered nonconvex nondifferentiable vector optimization problem (VP1). Hence, we construct the associated unconstrained vector penalized problem \((\hbox {VP1}_{1}(c))\) with the vector exact exponential penalty function:
where
Further, the Karush–Kuhn–Tucker necessary optimality conditions (10)–(12) are satisfied at \(\overline{x}=\left( 0,0\right) \) with the Lagrange multipliers \(\overline{\lambda }=\left( \overline{\lambda }_{1}, \overline{\lambda }_{2},\overline{\lambda }_{3}\right) \ge 0\) and \( \overline{\mu }=\left( \overline{\mu }_{1},\overline{\mu }_{2}\right) \) satisfying \(\overline{\lambda }_{1}\xi _{1}-\overline{\mu }_{1}=0\), \(- \overline{\lambda }_{2}+\overline{\lambda }_{3}\xi _{3}-\overline{\mu }_{2}=0\), \(\overline{\lambda }_{1}+\overline{\lambda }_{2}+\overline{\lambda }_{3}=1\), where \(\xi _{1}\in \left[ -1,1\right] \), \(\xi _{3}\in \left[ -1,1\right] \). As follows from these relations, \(\max \left\{ \overline{\mu }_{j}, j=1,2\right\} =1\) and \(M=1\). Therefore, if we set \(c\geqq 1\), then, by Corollary 29, \(\overline{x}=\left( 0,0\right) \), being a Pareto solution in the considered multiobjective programming problem (VP1), is also a Pareto solution in each of its associated vector penalized optimization problems (VP1\(_{1}\)(c)) with the vector exact exponential penalty function. Since the hypotheses of Theorem 31 are also fulfilled, the converse result holds as well. Note that, for the considered constrained multiobjective programming problem (VP1), it is not possible to use the analogous result established under convexity assumptions by Antczak [17]. This follows from the fact that none of the functions constituting the considered nonsmooth vector optimization problem (VP1) is convex on \(R^{2}\). It is also difficult to show that the functions involved in problem (VP1) are invex with respect to the same function \( \widetilde{\eta }:R^{2}\times R^{2}\rightarrow R^{2}\). Therefore, it would be difficult to prove a similar result under the invexity assumption.
However, the results proved in this paper are applicable to the considered nonconvex multiobjective programming problem (VP1), since the functions involved in it are 1-invex on \(R^{2}\) with respect to the same function \(\eta \). Thus, the introduced vector exact exponential penalty function method is applicable to a larger class of nonconvex vector optimization problems than the classical vector exact penalty function method considered in [19].
Now, we consider an example of a nondifferentiable vector optimization problem in which not all of the involved functions are r-invex. We show that, in such a case, there is no equivalence between the sets of Pareto solutions in the considered nondifferentiable vector optimization problem and in its associated vector penalized problem constructed in the introduced vector exponential penalty function method.
Example 35
Consider the following nondifferentiable multiobjective programming problem:
Note that \(D=\left\{ x\in X:0\leqq x\leqq 1\right\} \) and \(\overline{x}=0\) is a Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP2) with the optimal value \(f\left( \overline{x}\right) =\left( \ln 27,\ln 81\right) \). Further, none of the objective functions is r-invex with respect to any function \(\eta :X\times X\rightarrow R\) (see Theorem 12 [31]). Nevertheless, we apply the vector exact exponential penalty function method to the considered nonconvex nondifferentiable vector optimization problem (VP2). Therefore, we construct the associated vector penalized optimization problem \((\hbox {VP2}_{r}(c))\) with the vector exact exponential penalty function:
It is not difficult to see that \(P_{r}(x,c)\) does not have a Pareto solution at \(\overline{x}=0\) for any \(c>0\). This follows from the fact that the downward order of growth of f exceeds the upward order of growth of g when moving from x towards smaller values. Indeed, note that, for any \(r>0\), \(P_{r}(x,c)\rightarrow \left( \frac{1}{r}c\left( 13^{r}-1\right) , \frac{1}{r}c\left( 13^{r}-1\right) \right) <\left( \frac{1}{r}27^{r}, \frac{1}{r}81^{r}\right) = P_{r}(0,c)\) when \(x\rightarrow - 3\) for \(c\in \left( 0,\frac{27^{r}}{13^{r}-1}\right) \), whereas, for any \(r<0\), \(P_{r}(x,c)\rightarrow \left( -\infty ,-\infty \right) \) when \(x\rightarrow - 3\), for any \(c>0\). As follows even from this example, the r-invexity notion is an essential assumption in proving the equivalence between the sets of (weak) Pareto solutions in the original multiobjective programming problem and in its exact penalized vector optimization problem with the vector exact exponential penalty function for any penalty parameter exceeding the given threshold.
In the next example, we compare the presented vector exact exponential penalty function method with the classical vector exact \(l_{1}\) penalty function method.
Example 36
Consider the following nondifferentiable multiobjective programming problem:
Note that \(D=\left\{ x\in R:0\leqq x\leqq 1\right\} \) and \(\overline{x}=0\) is a Pareto solution in the considered nonconvex nondifferentiable vector optimization problem (VP3). It can be shown by definition that the objective function is strictly 1-invex and the constraint function is 1-invex with respect to the same function \(\eta :R\times R\rightarrow R\), where \(\eta (x, \overline{x})=x-\overline{x}\).
If we use the presented exact vector exponential penalty function method for solving (VP3), then we construct the following unconstrained vector optimization problem \((\hbox {VP3}_{1}(c))\):
If we use the classical exact vector \(l_{1}\) penalty function method for solving (VP3), then we have to solve the following unconstrained vector optimization problem \((\hbox {VP3}_{0}(c))\):
Note that, in the first case, we have to solve a convex vector optimization problem, whereas, in the second case, the unconstrained vector optimization problem constructed in the classical exact \(l_{1}\) penalty function method is not convex. Therefore, methods for solving convex unconstrained vector optimization problems cannot be applied to it.
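This convexification effect can be checked numerically on a small illustrative problem of our own devising (not the paper's (VP3), whose data are not reproduced here): take \(f(x)=\ln (1+|x|)\) and \(g(x)=-x\). Then the exponential penalized function (with \(r=1\)) is \(e^{f(x)}+c(e^{g^{+}(x)}-1)=1+|x|+c(e^{\max (-x,0)}-1)\), which is convex, while the \(l_{1}\) penalized function \(f(x)+c\,g^{+}(x)\) is not:

```python
import math

def f(x):
    # illustrative nonconvex objective: f(x) = ln(1 + |x|)
    return math.log1p(abs(x))

def g_plus(x):
    # violation of the illustrative constraint g(x) = -x <= 0
    return max(-x, 0.0)

def exp_penalized(x, c):
    # exponential penalized function (r = 1): e^{f(x)} + c (e^{g^+(x)} - 1)
    return math.exp(f(x)) + c * (math.exp(g_plus(x)) - 1.0)

def l1_penalized(x, c):
    # classical l1 penalized function: f(x) + c g^+(x)
    return f(x) + c * g_plus(x)

def midpoint_convex(h, a, b):
    # necessary condition for convexity, checked at the midpoint of [a, b]
    return h(0.5 * (a + b)) <= 0.5 * (h(a) + h(b)) + 1e-12
```

For instance, with \(c=2\) the midpoint test on \([1,3]\) fails for `l1_penalized` (since \(\ln 3>\frac{1}{2}(\ln 2+\ln 4)\)) but holds for `exp_penalized`, illustrating why the exponentially penalized problem can be handed to convex solvers while the \(l_{1}\) penalized one cannot.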
Remark 37
Now, let us return to Example 24. One strategy to overcome the difficulties associated with a too small value of the penalty parameter in the presented algorithm is simply to set the penalty parameter \( c_{n}\) to be larger than the threshold equal to \( M\max \{\overline{\mu }_{j}, j\in J\}\). For the multiobjective programming problem (VP0) considered in Example 24, \(\overline{\mu }=1\) and \(M=e^{0}=1\) and, therefore, the threshold of the penalty parameter is equal to 1. Thus, it is now clear why the vector exponential penalty function \(P_1(\cdot ,c)\) in Example 24 has a Pareto solution at \(\bar{x}=1\) for all penalty parameters \(c> 1\).
5 Conclusion
In this paper, the vector exact exponential penalty function method has been used for solving nonconvex nondifferentiable multiobjective programming problems with inequality constraints. The convergence of the introduced vector exponential penalty function method has been established. Further, it has been proved that there exists a finite threshold value \(\overline{c}\) of the penalty parameter c such that, for every penalty parameter c exceeding this threshold, any (weak) Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP) is a (weak) Pareto solution in its associated vector penalized problem \((\hbox {VP}_{r}( c))\) with the vector exact exponential penalty function. We have established this result for nondifferentiable multiobjective programming problems involving r-invex functions with respect to the same function \( \eta \). The converse result has also been established for such nonconvex nondifferentiable multiobjective programming problems under the assumption that the penalty parameter c is sufficiently large. Thus, the equivalence between the sets of (weak) Pareto solutions in the considered nonconvex multiobjective programming problem (VP) and in its associated vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function has been proved for sufficiently large penalty parameters c. The vector exact exponential penalty function method analyzed in the paper thus turns out to be useful for solving a class of nonconvex nonsmooth multiobjective programming problems with r-invex functions (with respect to the same function \( \eta \)), that is, a larger class of vector optimization problems than the convex and invex ones. Moreover, in some cases, the vector exact exponential penalty function method turns out to be more useful than the classical vector exact \(l_{1}\) penalty function method.
This is a consequence of the fact that, in some cases, a vector penalized problem with the vector exact exponential penalty function is easier to solve than the vector penalized problem constructed in the classical vector exact \(l_{1}\) penalty function method. This property of the introduced vector exact exponential penalty function method is, of course, important from the practical point of view. In this way, due to the importance of exact \(l_{1}\) penalty function methods in nonlinear scalar programming, similar results have been established in the vectorial case, showing that nonconvex nonsmooth vector optimization problems can be solved by such methods, in the considered case, by the vector exact exponential penalty function method.
References
Antczak, T.: An \(\eta \)-approximation method in nonlinear vector optimization. Nonlinear Anal. Theory Methods Appl. 63, 225–236 (2005)
Antczak, T.: An \(\eta \)-approximation method for nonsmooth multiobjective programming problems. Anziam J. 49, 309–323 (2008)
Jahn, J.: Scalarization in vector optimization. Math. Program. 29, 203–218 (1984)
Zangwill, W.I.: Nonlinear programming via penalty functions. Manag. Sci. 13, 344–358 (1967)
Pietrzykowski, T.: An exact potential method for constrained maxima. SIAM J. Numer. Anal. 6, 299–304 (1969)
Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms. Wiley, New York (1991)
Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods. Academic Press Inc, New York (1982)
Charalambous, C.: On conditions for optimality of the nonlinear \(l_{1}\)-problem. Math. Program. 17, 123–135 (1979)
Charalambous, C.: A lower bound for the controlling parameters of the exact penalty functions. Math. Program. 15, 278–290 (1978)
Di Pillo, G., Grippo, L.: Exact penalty functions in constrained optimization. SIAM J. Control Optim. 27, 1333–1360 (1989)
Evans, J.P., Gould, F.J., Tolle, J.W.: Exact penalty functions in nonlinear programming. Math. Program. 4, 72–97 (1973)
Fletcher, R.: An exact penalty function for nonlinear programming with inequalities. Math. Program. 5, 129–150 (1973)
Han, S.P., Mangasarian, O.L.: Exact penalty functions in nonlinear programming. Math. Program. 17, 251–269 (1979)
Mangasarian, O.L.: Sufficiency of exact penalty minimization. SIAM J. Control Optim. 23, 30–37 (1985)
Peressini, A.L., Sullivan, F.E., Uhl Jr., J.J.: The Mathematics of Nonlinear Programming. Springer, New York (1988)
Rosenberg, E.: Exact penalty functions and stability in locally Lipschitz programming. Math. Program. 30, 340–356 (1984)
Antczak, T.: A new exact exponential penalty function method and nonconvex mathematical programming. Appl. Math. Comput. 217, 6652–6662 (2011)
Antczak, T.: The exact \(l_{1}\) penalty function method for nonsmooth invex optimization problems. In: Hömberg, D., Tröltzsch, F. (eds.) System Modeling and Optimization: 25th IFIP TC 7 Conference, CSMO 2011, Berlin, September 2011, IFIP AICT 391, pp. 461–471. Springer, Berlin (2013)
Antczak, T.: The vector exact \(l_{1}\) penalty method for nondifferentiable convex multiobjective programming problems. Appl. Math. Comput. 218, 9095–9106 (2012)
Murphy, F.: A class of exponential penalty functions. SIAM J. Control 12, 679–687 (1974)
Alvarez, F.: Absolute minimizer in convex programming by exponential penalty. J. Convex Anal. 7, 197–202 (2002)
Alvarez, F., Cominetti, R.: Primal and dual convergence of a proximal point exponential penalty method for linear programming. Math. Program. Ser. A 93, 87–96 (2002)
Bertsekas, D.P., Tseng, P.: On the convergence of the exponential multiplier method for convex programming. Math. Program. 60, 1–19 (1993)
Jayswal, A., Choudhury, S.: An exact l1 exponential penalty function method for multiobjective optimization problems with exponential-type invexity. J. Oper. Res. Soc. China 2, 75–91 (2014)
Jayswal, A., Choudhury, S.: Convergence of exponential penalty function method for multiobjective fractional programming problems. Ain Shams Eng. J. 5, 1371–1376 (2014)
Mandal, P., Giri, B.C., Nahak, C.: Variational problems and l1 exact exponential penalty function with \(\left( p, r\right) \)-\(\rho \)-\( \left( \eta,\theta \right) \)-invexity. AMO 16, 243–259 (2014)
Liu, S., Feng, E.: The exponential penalty function method for multiobjective programming problems. Optim. Methods Softw. 25, 667–675 (2010)
Parwadi, M.: Exponential penalty methods for solving linear programming problems. In: Proceedings of the World Congress on Engineering and Computer Science 2011 Vol II WCECS 2011. San Francisco. 19–21 Oct 2011
Strodiot, J.J., Nguyen, V.H.: An exponential penalty method for nondifferentiable minimax problems with general constraints. J. Optim. Theory Appl. 27, 205–219 (1979)
Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)
Antczak, T.: Lipschitz \(r\)-invex functions and nonsmooth programming. Numer. Funct. Anal. Optim. 23, 265–283 (2002)
Antczak, T.: Optimality and duality for nonsmooth multiobjective programming problems with \(V\)-\(r\)-invexity. J. Glob. Optim. 45, 319–334 (2009)
Giorgi, G., Guerraggio, A.: The notion of invexity in vector optimization: smooth and nonsmooth case. In: Crouzeix, J.P., Martinez-Legaz, J.E., Volle, M. (eds.) Generalized convexity, generalized monotonicity. Proceedings of the fifth symposium on generalized convexity, Luminy. Kluwer Academic Publishers (1997)
Kim, D.S., Schaible, S.: Optimality and duality for invex nonsmooth multiobjective programming problems. Optimization 53, 165–176 (2004)
Craven, B.D.: Nonsmooth multiobjective programming. Numer. Funct. Anal. Optim. 10, 49–64 (1989)
Ishizuka, Y., Shimizu, K.: Necessary and sufficient conditions for the efficient solutions of nondifferentiable multiobjective problems. IEEE Trans. Syst. Man Cybern. 14, 625–629 (1984)
Minami, M.: Weak Pareto-optimal necessary conditions in a nondifferentiable multiobjective program on a Banach space. J. Optim. Theory Appl. 41, 451–461 (1983)
Luc, D.T.: Theory of Vector Optimization. Lecture Notes in Economics and Mathematical Systems, vol. 319. Springer, Berlin (1989)
Craven, B.D.: Invex functions and constrained local minima. Bull. Aust. Math. Soc. 24, 357–366 (1981)
Communicated by Anton Abdulbasah Kamil.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Antczak, T. Vector Exponential Penalty Function Method for Nondifferentiable Multiobjective Programming Problems. Bull. Malays. Math. Sci. Soc. 41, 657–686 (2018). https://doi.org/10.1007/s40840-016-0340-4
Keywords
- Multiobjective programming
- (weak) Pareto solution
- Vector exact exponential penalty function
- Penalized vector optimization problem
- r-Invex function