1 Introduction

The multiobjective bilevel programming problem with equilibrium constraints is a generalization of the multiobjective bilevel programming problem. It is known as a combination of two multiobjective programming problems in which the feasible region of the upper level problem with equilibrium constraints, the so-called induced set, can be formulated as the set of minimal solutions of a multiobjective programming problem; see, for instance, Eichfelder (2010). As far as we know, this problem belongs to the class of NP-hard programming problems, even when it involves only inequality and set constraints or when the objective and constraint functions are linear; see, e.g., Ben-Ayed and Blair (1990). Optimality conditions for this problem are usually obtained by transforming it into a multiobjective single-level programming problem, at whose feasible solutions the nonsmooth Mangasarian–Fromovitz and generalized standard Abadie type constraint qualifications may hold; see, e.g., Dempe (2002), Colson et al. (2007), Gadhi and Dempe (2013), Dempe et al. (2013), and the references therein.

In recent years, Dempe et al. (2013) obtained necessary optimality conditions for efficiency in terms of directional convexificators, as well as duality theorems, for a multiobjective bilevel programming problem with constraints and its Wolfe and Mond–Weir type dual problems. The authors in Dempe et al. (2013) used the tools of Gadhi and Dempe (2013) and a special scalarization function from Hiriart-Urruty (1979) and Hiriart-Urruty and Lemaréchal (1993) in their investigation. Luu and Mai (2018) formulated Wolfe and Mond–Weir type dual problems for a vector equilibrium problem with constraints via directional convexificators and then obtained weak and strong duality theorems for them. Pandey and Mishra (2016, 2018) formulated Wolfe and Mond–Weir type dual problems for a single-level nonsmooth multiobjective programming problem with equilibrium constraints and then established sufficient optimality conditions for GA-stationary solutions of mathematical programming problems with equilibrium constraints. Most recently, Su and Dinh (2020) constructed Wolfe and Mond–Weir type dual problems and provided duality theorems for an interval-valued pseudoconvex optimization problem with equilibrium constraints in terms of contingent epiderivatives.

A fundamental question here is why we should study necessary optimality conditions and duality theorems for a nonsmooth multiobjective bilevel programming problem with equilibrium constraints and its Wolfe and Mond–Weir type dual problems via directional convexificators. We answer this question briefly. Many papers on bilevel programs have been published in the last decade, but only a few of them deal with the multiobjective bilevel programming problem with equilibrium constraints (see Babahadda and Gadhi 2006; Chuong 2018; Dempe 1992, 2002; Dempe and Pilecka 2015; Dempe and Zemkoho 2012; Eichfelder 2010; Gadhi and Dempe 2013; Suneja and Kohli 2011; Ye and Zhu 1995 and the references therein). In particular, Wolfe and Mond–Weir type dual problems are widely used in applications, and this motivates our present work.

Necessary optimality conditions for efficiency, as well as results on duality, for constrained nonsmooth multiobjective single-level and two-level programming problems have been an active research area in recent years (see, e.g., Babahadda and Gadhi 2006; Ben-Ayed and Blair 1990; Clarke 1983; Chuong 2018; Dempe 1992, 2002; Dempe and Pilecka 2015; Dutta and Chandra 2004; Ehrgott 2005; Jahn 2004; Gong 2010; Li and Zhang 2006; Luo et al. 1996; Luc 1989; Luu 2014, 2016; Luu and Mai 2018; Luu and Hang 2015; Mangasarian 1969; Mond and Weir 1981; Movahedian and Nabakhtian 2010; Pandey and Mishra 2016, 2018; Su and Hang 2019; Su and Hien 2021; Su 2019, 2020; Ye 2005; Wolfe 1961 and the references therein). For instance, Babahadda and Gadhi (2006) gave necessary optimality conditions via Lagrange–Kuhn–Tucker multipliers in terms of directional convexificators for a bilevel programming problem; Dempe and Pilecka (2015) established primal and dual optimality conditions by means of directional convexificators for an optimistic bilevel programming problem; Gadhi and Dempe (2013) obtained necessary optimality conditions via Clarke generalized Jacobians for a multiobjective bilevel programming problem; Dempe et al. (2013) derived necessary optimality conditions in terms of Mordukhovich’s subdifferentials for a semivectorial bilevel programming problem; Chuong (2018) formulated a relaxed multiobjective formulation for a nonsmooth multiobjective bilevel programming problem, examined the relationships between their solutions, and then obtained Fritz John and Karush–Kuhn–Tucker necessary optimality conditions for such a problem via its relaxation. More recently, Dempe et al. (2020) provided necessary optimality conditions for a vectorial bilevel programming problem in terms of directional convexificators, using the optimal-value reformulation and a scalarization technique. It should be mentioned that an application of the obtained results to establishing Wolfe and Mond–Weir type dual problems for a nonsmooth multiobjective bilevel programming problem with equilibrium constraints has not been considered. This is the reason why we study optimality and duality for a nonsmooth multiobjective bilevel programming problem with equilibrium constraints ((MBPP) for short) via the convexificators in the present paper.

Motivated by the preceding discussions, our aim in this paper is to study and develop necessary optimality conditions for the local weak efficient solution of a nonsmooth multiobjective bilevel programming problem with equilibrium constraints and to construct the Wolfe and Mond–Weir type dual problems for the original problem (MBPP). In addition, we obtain various duality theorems for problem (MBPP) and its dual problems via the convexificators under suitable assumptions. The paper is organized as follows. In Sect. 2, we recall some preliminaries and the concept of directional convexificators for extended-real-valued functions. Section 3 provides fundamental notation and necessary optimality conditions for the local weak efficient solution of the nonsmooth multiobjective bilevel programming problem with equilibrium constraints via the convexificators. Some applications are also presented in this section. Section 4 is devoted to constructing the Wolfe and Mond–Weir type dual problems and establishing weak and strong duality theorems for the nonsmooth multiobjective bilevel programming problem with equilibrium constraints and its dual problems. Illustrative examples are also provided.

2 Preliminaries and definitions

In this section, we recall some basic concepts and results, which will be needed in what follows. As usual, we write \(\mathbb {R}, \mathbb {N},\) \(\mathbb {R}^n_+\) and \(\mathbb {R}^n_-\) for the set of real numbers, the set of natural numbers, the nonnegative orthant cone, and the nonpositive orthant cone, respectively. The origin of any space \(\mathbb {R}^n\) is denoted by 0. We use the symbol \(\overline{B}_{\mathbb {R}^n}\) to denote the closed unit ball of \(\mathbb {R}^n\) and set \(I_n:=\{1,2,\ldots , n\}.\) Given a nonempty subset \(C\subset \mathbb {R}^n,\) bdC,  intC,  clC,  convC, and coneC stand for the topological boundary of C, the topological interior of C, the topological closure of C, the convex hull of C, and the cone generated by C,  respectively, where \(\text {cone}C:=\{tc\,:\,t\ge 0,\,\,\,c\in C\}.\) We use the symbol \(C^-\) to denote the negative polar cone of C,  i.e., \(C^-=\{v\in \mathbb {R}^n: \left\langle v, x\right\rangle \le 0\,\,\,\,\forall \,x\in C\},\) and write \(t_k\rightarrow 0^+\) for a sequence \((t_k)\) of positive real numbers with limit 0. The contingent cone to the set C at the point \(\overline{x}\in \text {cl}C\) is given as:

$$\begin{aligned} T(C, \overline{x})=\Big \{v\in \mathbb {R}^n\,|\, \exists \,t_k\rightarrow 0^+,\,\,\exists \,v_k\rightarrow v\,\,\text{ such } \text{ that }\,\,\overline{x}+t_kv_k\in C\,\,\forall \,k\ge 1\Big \}.\end{aligned}$$

If C is convex, then \(T(C, \overline{x})=\text {cl cone}(C-\overline{x}),\) and so, \(T(C, \overline{x})\) is a closed and convex cone. In the sequel, we consider on \(\mathbb {R}^n\) the order induced by the cone \(\mathbb {R}^n_+,\) that is:

$$\begin{aligned} \begin{aligned}&x, y\in \mathbb {R}^n,\,\,\,\, x\ge y \Longleftrightarrow x-y\in \mathbb {R}^n_+, \\&x, y\in \mathbb {R}^n,\,\,\,\, x> y \Longleftrightarrow x\ge y\,\,\,\text{ and } \text{ not }\,\,\,y\ge x,\,\,\,\text{ in } \text{ other } \text{ words }\,\,\,x\in y+\mathbb {R}^n_+\setminus \{0\},\\&x, y\in \mathbb {R}^n,\,\,\,\, x\gg y \Longleftrightarrow x-y\in \text {int}\mathbb {R}^n_+\cup \{0\}\,\,\,\text{ and } \text{ not }\,\,\,x-y\in \text {int}\mathbb {R}^n_-\cup \{0\}. \end{aligned} \end{aligned}$$

The signed distance (scalar) function \(\varDelta _C: \mathbb {R}^n\rightarrow \mathbb {R}\) is defined as:

$$ \varDelta _C(y)={\left\{ \begin{array}{ll} d(y, C),\,\,\,&{}\text{ if }\,\,\, y\in \mathbb {R}^n\setminus C,\\ -d(y, \mathbb {R}^n\setminus C),\,\,\,&{}\text{ if }\,\,\, y\in C, \end{array}\right. }$$

where the distance from the vector y to the set C is \(d(y, C):=\inf _{c\in C}\Vert y-c\Vert . \)

We mention that the function \(\varDelta _C\) was first introduced by Hiriart-Urruty (1979) and used later by Hiriart-Urruty and Lemaréchal (1993), Ehrgott (2005) and Dempe et al. (2020). If \(C\ne \mathbb {R}^n\) is a closed and convex cone with nonempty interior, then \(\varDelta _C\) is 1-Lipschitzian, positively homogeneous, convex, and decreasing on \(\mathbb {R}^n.\) Furthermore:

$$\begin{aligned} \begin{aligned}&\text{ bd }C=\{y\in \mathbb {R}^n\,:\,\varDelta _C(y)=0\}, \\&\text{ int }C=\{y\in \mathbb {R}^n\,:\,\varDelta _C(y)<0\}, \\&\mathbb {R}^n\setminus C=\{y\in \mathbb {R}^n\,:\,\varDelta _C(y)>0\}. \end{aligned} \end{aligned}$$
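These sign relations are easy to check numerically for the nonpositive orthant. The following minimal Python sketch (ours, purely for illustration; the helper name delta_C and the choice \(C=\mathbb {R}^2_-\) are assumptions, not part of the cited works) evaluates \(\varDelta _C\) through the closed-form projections onto \(C\) and its complement:

```python
import numpy as np

def delta_C(y):
    """Signed distance Delta_C for C = R^n_- (nonpositive orthant):
    d(y, C) outside C and -d(y, R^n \\ C) inside C."""
    y = np.asarray(y, dtype=float)
    if np.all(y <= 0.0):
        # y lies in C; the nearest outside point is reached along one coordinate axis
        return -float(np.min(-y))
    # y lies outside C; the projection onto C clips the positive coordinates to 0
    return float(np.linalg.norm(np.maximum(y, 0.0)))

print(delta_C([-1.0, -2.0]))   # -1.0 < 0  (interior of C)
print(delta_C([ 0.0, -3.0]))   #  0.0      (boundary of C)
print(delta_C([ 2.0, -1.0]))   #  2.0 > 0  (outside C)
```

The three printed values are negative, zero, and positive, matching the characterizations of intC, bdC, and \(\mathbb {R}^n\setminus C\) displayed above.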

From now on, if not otherwise specified, we always assume that the bifunctions \(F=(F_1, \ldots , F_q): \mathbb {R}^{q_1}\times \mathbb {R}^{q_2}\rightarrow \mathbb {R}^q,\) \(g=(g_1, \ldots , g_m): \mathbb {R}^{q_1}\times \mathbb {R}^{q_2}\rightarrow \mathbb {R}^m,\) \(h=(h_1, \ldots , h_n): \mathbb {R}^{q_1}\times \mathbb {R}^{q_2}\rightarrow \mathbb {R}^n,\) \(G=(G_1, \ldots , G_p): \mathbb {R}^{q_1}\times \mathbb {R}^{q_2}\rightarrow \mathbb {R}^p,\) \(H=(H_1, \ldots , H_p): \mathbb {R}^{q_1}\times \mathbb {R}^{q_2}\rightarrow \mathbb {R}^p,\) \(k=(k_1, \ldots , k_l): \mathbb {R}^{q_1}\times \mathbb {R}^{q_2}\rightarrow \mathbb {R}^l\) and \(f: \mathbb {R}^{q_1}\times \mathbb {R}^{q_2}\rightarrow \mathbb {R},\) where \(F_i\) (\(i\in I_q\)), \(g_j\) \((j\in I_m),\) \(h_k\) \((k\in I_n),\) \(G_l\) and \(H_l\) \((l\in I_p),\) \(k_m\) \((m\in I_l)\), and f are locally Lipschitz real-valued bifunctions on \(\mathbb {R}^{q_1}\times \mathbb {R}^{q_2}.\)

We consider a nonsmooth multiobjective bilevel programming problem with equilibrium constraints (in short, (MBPP)) of the following form:

$$\begin{aligned}&\underset{x, y}{\text {Minimize}}\,\,\,F(x, y)=\Big (F_1(x, y),\ldots , F_q(x, y)\Big ) \\ {}&\text{ subject } \text{ to }\,\,\,\,\, g(x, y)\le 0,\,\, h(x, y)=0,\,\, \\ {}&\qquad \qquad \quad G(x, y)\ge 0,\,\, H(x, y)\ge 0,\,\, \\ {}&\qquad \qquad \quad G(x, y)^TH(x, y)=0, \\ {}&\qquad \qquad \quad y\in S(x), \end{aligned}$$
(MBPP)

where \(^T\) indicates the transpose and for any \(x\in \mathbb {R}^{q_1},\) S(x) is the solution set of the following parametric programming problem (or the lower level problem):

$$\begin{aligned}&\,\,\,\, \underset{y}{\text {Minimize}}\,\,\,f(x, y) \\ {}&\text{ subject } \text{ to }\,\,\,\,\, k(x, y)\le 0. \end{aligned}$$
(MPPP-x)

The sets K and \(K_x\) denote the feasible regions of the multiobjective bilevel programming problem with equilibrium constraints (MBPP) and of the lower level problem (MPPP-x), respectively. If the constraint functions h, G, H are omitted, then problem (MBPP) is said to be a multiobjective bilevel programming problem with constraints, denoted by (MBPP\(^*\)).

Next, we introduce the notion of efficiency for problem (MBPP).

Definition 1

A pair \((\overline{x}, \overline{y})\) is said to be a local efficient (resp., local weak efficient) solution of problem (MBPP) iff there exists a neighborhood V of \((\overline{x}, \overline{y})\), such that:

$$\begin{aligned} \begin{aligned}&F(K\cap V)\subset F(\overline{x}, \overline{y})+\big (\mathbb {R}^{q}\setminus \mathbb {R}^{q}_-\big )\bigcup \big \{0\big \} \\ \Big (\text{ resp., }\,\,&F(K\cap V)\subset F(\overline{x}, \overline{y})+\big (\mathbb {R}^{q}\setminus \text {int}\mathbb {R}^{q}_-\big ) \Big ). \end{aligned} \end{aligned}$$

It should be mentioned that if \((\overline{x}, \overline{y})\) is a local weak efficient solution of problem (MBPP) then one can find some neighborhoods \(U_0\) of \(\overline{x}\) and \(V_0\) of \(\overline{y}\), such that:

$$\begin{aligned} F(x, y)-F(\overline{x}, \overline{y})\not \in -\text {int}\mathbb {R}^q_+\,\,\,\,\forall \,(x, y)\in K\cap (U_0\times V_0). \end{aligned}$$

This means that there exists no \((x_0, y_0)\in K\cap (U_0\times V_0)\) satisfying:

$$\begin{aligned} F_k(x_0, y_0)-F_k(\overline{x}, \overline{y}) < 0\,\,\,\,\forall \,k\in I_q.\end{aligned}$$
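In computational terms, this characterization says that no nearby feasible point strictly decreases every component of F at once. A minimal Python sketch of such a check on a finite sample of feasible points (the map F and the sampled points below are hypothetical toy data, not taken from the paper) reads:

```python
import numpy as np

# Weak efficiency check on a finite sample of feasible points (ours, purely
# illustrative): (xbar, ybar) passes the test iff no sampled feasible point
# strictly improves every objective component simultaneously.
F = lambda x, y: np.array([x + y, x - y])          # a toy bi-objective map (assumption)
F_bar = F(0.0, 0.0)

sampled_points = [(0.1, 0.0), (0.0, 0.2), (-0.1, -0.1), (0.05, -0.05)]
strictly_better = [p for p in sampled_points if np.all(F(*p) < F_bar)]
print("no strictly dominating point found:", len(strictly_better) == 0)
```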

Let X be a real Banach space; we denote by \(X^*\) the topological dual space of X and set \(\overline{\mathbb {R}}:=\mathbb {R}\cup \{+\infty \}\cup \{-\infty \}.\) Given a mapping \(l: X\rightarrow \overline{\mathbb {R}},\) we use the symbols \(l^-_d(\overline{x}; u):=\liminf _{t\rightarrow 0^+}\frac{l(\overline{x}+tu)-l(\overline{x})}{t}\) and \( l^+_d(\overline{x}; u):=\limsup _{t\rightarrow 0^+}\frac{l(\overline{x}+tu)-l(\overline{x})}{t}\) to denote the lower and upper Dini directional derivatives of l at \(\overline{x}\in X\) in the direction \(u\in X,\) respectively. The following concepts related to convexificators can be found in Jeyakumar and Luc (1999).

Definition 2

(Jeyakumar and Luc (1999)) An extended real-valued function l defined on X is said to admit an upper (resp.  lower) convexificator \(\partial ^*l(\overline{x})\) at \(\overline{x}\) iff \(\partial ^*l(\overline{x})\subset X^*\) (resp.  \(\partial _*l(\overline{x})\subset X^*\)) is weakly* closed, and for every \(u\in X,\)

$$\begin{aligned}\begin{aligned}&l^-_d(\overline{x}; u)\le \sup _{\xi \in \partial ^*l(\overline{x})} \left\langle \xi , u\right\rangle \\ \Big (\text{ resp. } \,\,\,&l^+_d(\overline{x}; u)\ge \inf _{\xi \in \partial _*l(\overline{x})}\left\langle \xi , u\right\rangle \Big ) . \end{aligned}\end{aligned}$$

A weakly* closed set \(\partial ^*l(\overline{x})\subset X^*\) is said to be a convexificator of l at \(\overline{x}\) iff it is both an upper and a lower convexificator of l at \(\overline{x}.\)

Definition 3

An extended real-valued function l defined on X is said to admit an upper (resp. lower) semi-regular convexificator \(\partial ^*l(\overline{x})\) (resp.    \(\partial _*l(\overline{x})\)) at \(\overline{x}\) iff \(\partial ^*l(\overline{x})\subset X^*\) (resp.  \(\partial _*l(\overline{x})\subset X^*\)) is weakly* closed, and for every \(u\in X:\)

$$\begin{aligned}&l^+_d(\overline{x}; u)\le \sup _{\xi \in \partial ^*l(\overline{x})} \left\langle \xi , u\right\rangle \end{aligned}$$
(2.1)
$$\begin{aligned} \Big (\text{ resp., }\,\,\,&l^-_d(\overline{x}; u)\ge \inf _{\xi \in \partial _*l(\overline{x})}\left\langle \xi , u\right\rangle \Big ) . \end{aligned}$$
(2.2)

If equality holds in (2.1) (resp.  (2.2)), then \(\partial ^*l(\overline{x})\) (resp.  \(\partial _*l(\overline{x})\)) is called an upper (resp. lower) regular convexificator of l at \(\overline{x}.\)

Definition 4

Let l be an extended real-valued function defined on X that admits an upper semi-regular convexificator \(\partial ^*l(\overline{x})\) at \(\overline{x}\in X.\) Then, l is said to be \(\partial ^*-\) convex at \(\overline{x}\) iff for all \(x\in X,\)

$$\begin{aligned}l(x)\ge l(\overline{x})+\left\langle \xi , x-\overline{x}\right\rangle ,\,\,\,\,\forall \,\xi \in \partial ^*l(\overline{x}).\end{aligned}$$
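As a simple illustration (ours, not taken from the cited works), take \(X=\mathbb {R},\) \(l(x)=|x|\) and \(\overline{x}=0.\) Since \(l^+_d(0; u)=l^-_d(0; u)=|u|=\sup _{\xi \in \{-1, 1\}}\xi u\) for every \(u\in \mathbb {R},\) the set \(\partial ^*l(0)=\{-1, 1\}\) is an upper semi-regular (in fact, regular) convexificator of l at 0, and l is \(\partial ^*-\) convex at 0, because:

$$\begin{aligned} l(x)=|x|\ge \xi x=l(0)+\left\langle \xi , x-0\right\rangle \,\,\,\,\forall \,x\in \mathbb {R},\,\,\,\forall \,\xi \in \partial ^*l(0). \end{aligned}$$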

At the end of this section, we provide two propositions, which play an important role in what follows.

Proposition 1

(Gadhi and Dempe 2013) Let \(Q\subset \mathbb {R}^q\) be a nonempty, closed, and convex cone with \(\text {int}Q\ne \emptyset .\) For every \(x\in \mathbb {R}^q\), we have \(0\not \in \partial _C\varDelta _Q(x),\) where \(\partial _Cf(x)\) denotes the subdifferential of f at x in the sense of convex analysis.

Proposition 2

(Jeyakumar and Luc 1999) Let \(\overline{x}\in \mathbb {R}^p,\) and for each \(i\in I_2,\) let \(f_i: \mathbb {R}^p\rightarrow \mathbb {R}\) be a continuous function admitting an upper convexificator \(\partial ^*f_i(\overline{x})\) at \(\overline{x}\). Define the function h by \(h(x)=\max \{f_1(x),\,\,f_2(x)\}.\) Then, \(\partial ^* h(\overline{x}):=\underset{i\in \{i\in I_2\,:\, h(\overline{x})=f_i(\overline{x})\}}{\bigcup }\partial ^*f_i(\overline{x}) \) is an upper convexificator of h at \(\overline{x}.\)
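Proposition 2 can be checked numerically on toy data. In the Python sketch below (ours; the pair \(f_1(x)=x,\) \(f_2(x)=-x\) is a hypothetical example, so that \(h=\max \{f_1, f_2\}=|\cdot |\)), the union \(\{1\}\cup \{-1\}\) of the individual convexificators at 0 is verified to satisfy the upper convexificator inequality along sampled directions:

```python
import numpy as np

# Proposition 2 (max rule), checked numerically for a toy pair of functions:
# f1(x) = x, f2(x) = -x, so h(x) = max{f1, f2} = |x| and, at xbar = 0, both
# functions are active.  Candidate upper convexificator: {1} U {-1}.
f1 = lambda x: x
f2 = lambda x: -x
h  = lambda x: max(f1(x), f2(x))

xbar = 0.0
candidate = [1.0, -1.0]          # union of the individual convexificators at 0

for u in np.linspace(-2.0, 2.0, 9):
    t = 1e-6
    dini_quotient = (h(xbar + t * u) - h(xbar)) / t   # approximates the lower Dini derivative
    upper_bound = max(xi * u for xi in candidate)
    assert dini_quotient <= upper_bound + 1e-12, (u, dini_quotient, upper_bound)
print("max-rule convexificator inequality verified on sampled directions")
```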

3 Optimality conditions

The goal of this section is to present necessary optimality conditions for a local weak efficient solution of the multiobjective bilevel programming problem with equilibrium constraints (MBPP) via the directional convexificators.

To begin with, we provide some important notations: given a feasible pair \((\overline{x}, \overline{y})\in K\) to the problem (MBPP), the following index sets will be used:

$$\begin{aligned}&I_g:=I_g(\overline{x}, \overline{y})=\{i\in I_m: g_i(\overline{x}, \overline{y})=0\}; \\&I_k:=I_k(\overline{x}, \overline{y})=\{i\in I_l: k_i(\overline{x}, \overline{y})=0\}; \\&\alpha :=\alpha (\overline{x}, \overline{y}):=\{i\in I_p: G_i(\overline{x}, \overline{y})=0,\,\,H_i(\overline{x}, \overline{y})>0\}; \\&\beta :=\beta (\overline{x}, \overline{y}):=\{i\in I_p: G_i(\overline{x}, \overline{y})=0,\,\,H_i(\overline{x}, \overline{y})=0\}; \\&\gamma :=\gamma (\overline{x}, \overline{y}):=\{i\in I_p: G_i(\overline{x}, \overline{y})>0,\,\,H_i(\overline{x}, \overline{y})=0\}; \\&\nu _1:=\alpha \cup \beta ;\,\,\, \nu _2=\beta \cup \gamma .\,\,\,\, \\&\alpha ^+_{\mu }:=\{i\in \alpha : \mu ^G_i>0\};\,\,\,\, \\&\gamma ^+_{\mu }:=\{i\in \gamma : \mu ^H_i>0\}; \\&\beta ^G_{\mu }:=\{i\in \beta : \mu _i^H=0,\,\,\,\mu _i^G>0\}; \\&\beta ^H_{\mu }:=\{i\in \beta : \mu _i^G=0,\,\,\,\mu _i^H>0\}; \\&L_{\mu }:=\alpha ^+_{\mu }\cup \gamma ^+_{\mu }\cup \beta ^G_{\mu }\cup \beta ^H_{\mu }. \end{aligned}$$
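For readers who prefer to see these index sets computed, the short Python sketch below (ours, for illustration only; the helper name mpec_index_sets and the tolerance are assumptions) classifies the components of \(G(\overline{x}, \overline{y})\) and \(H(\overline{x}, \overline{y})\) into \(\alpha \), \(\beta \) and \(\gamma \):

```python
def mpec_index_sets(G_bar, H_bar, tol=1e-10):
    """Classify i in I_p by the values G_i(xbar, ybar), H_i(xbar, ybar):
    alpha: G_i = 0 < H_i,  beta: G_i = H_i = 0 (degenerate),  gamma: H_i = 0 < G_i."""
    alpha, beta, gamma = set(), set(), set()
    for i, (g, h) in enumerate(zip(G_bar, H_bar)):
        if abs(g) <= tol and h > tol:
            alpha.add(i)
        elif abs(g) <= tol and abs(h) <= tol:
            beta.add(i)
        elif g > tol and abs(h) <= tol:
            gamma.add(i)
    return alpha, beta, gamma

# a feasible point has G_i >= 0, H_i >= 0 and G_i * H_i = 0 for every i
print(mpec_index_sets(G_bar=[0.0, 0.0, 3.0], H_bar=[2.0, 0.0, 0.0]))
# -> ({0}, {1}, {2})
```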

The set \(\beta \) is called the degenerate set; if \(\beta \) is empty, then the pair \((\overline{x}, \overline{y})\) is said to satisfy the strict complementarity condition. Furthermore, we put:

$$\begin{aligned}\begin{aligned} \mathcal {M}:=\Big \{({\lambda }, \mu )&=\Big ({\lambda }^g, {\lambda }^k, {\lambda }^h, {\lambda }^{\varPsi }, {\lambda }^G, {\lambda }^H, {\mu }^h, {\mu }^G, {\mu }^H\Big )\in \mathbb {R}^{m+l+2n+2|J(\overline{x}, \overline{y})|+4p}\,\big |\, \lambda ^g_{I_g}\ge 0,\,\,\\ {}&\lambda ^k_{I_k}\ge 0,\,\,\, \lambda ^{\varPsi (z)}_i\ge 0\,\,(i\in I(\overline{x}, \overline{y}),\,\,z\in J(\overline{x}, \overline{y})),\,\,\, \lambda _j^h\ge 0,\,\,\mu _j^h\ge 0\,\,(j\in I_n),\,\,\, \\&\qquad \lambda _i^G\ge 0,\,\lambda _i^H\ge 0,\,\mu _i^G\ge 0,\,\mu _i^H\ge 0\,\,(i\in I_p),\,\,\, \\&\qquad \qquad \lambda _{\gamma }^G=\lambda _{\alpha }^H= \mu _{\gamma }^G=\mu _{\alpha }^H=0,\,\,\, \forall \,i\in \beta ,\,\,\,\mu _i^G=0,\,\,\mu _i^H=0 \Big \}, \end{aligned} \end{aligned}$$
$$\begin{aligned} \varGamma (\overline{x}, \overline{y})=\Big [\underset{i\in I_m}{\bigcup }\text {conv}\partial ^*g_i (\overline{x}, \overline{y}) \Big ]^-&\bigcap&\Big [\underset{i\in I_n}{\bigcup }\text {conv}\partial ^*h_i(\overline{x}, \overline{y})\cup \text {conv}\partial ^*(-h_i)(\overline{x}, \overline{y}) \Big ]^- \\&\bigcap&\Big [\underset{i\in \alpha }{\bigcup }\text {conv}\partial ^*G_i (\overline{x}, \overline{y})\cup \text {conv}\partial ^*(-G_i) (\overline{x}, \overline{y}) \Big ]^-\\&\bigcap&\Big [\underset{i\in \gamma }{\bigcup }\text {conv}\partial ^*H_i(\overline{x}, \overline{y})\cup \text {conv}\partial ^*(-H_i) (\overline{x}, \overline{y}) \Big ]^-\\&\bigcap&\Big [\underset{i\in \beta }{\bigcup }\text {conv}\partial ^*(-G_i) (\overline{x}, \overline{y})\cup \text {conv}\partial ^*(-H_i) (\overline{x}, \overline{y}) \Big ]^-. \end{aligned}$$

We consider the set-valued mapping \(Y: \mathbb {R}^{q_1}\rightrightarrows \mathbb {R}^{q_2}\) defined by \(Y(x)=K_x\) for all \(x\in \mathbb {R}^{q_1}.\) We fix a feasible vector \((\overline{x}, \overline{y})\in K\) that is a local weakly efficient solution of problem (MBPP) and a bounded neighborhood \(U(\overline{x}, \overline{y})\) of \((\overline{x}, \overline{y}).\) From now on, let \(U_0\) and \(V_0\) be given as in Definition 1, put \(U:=\Big \{x\in \mathbb {R}^{q_1}: \exists \, y\in \mathbb {R}^{q_2},\,\,\text{ such } \text{ that }\,\,(x, y)\in U(\overline{x}, \overline{y})\Big \},\) and assume that the set-valued mapping Y is uniformly bounded around \(\overline{x}\) in the sense that the set \(\underset{x\in U}{\bigcup }Y(x)\) is bounded. This yields that the set \(\underset{x\in U_{\overline{x}}}{\bigcup }Y(x)\) is bounded too, where \(U_{\overline{x}}:=U_0\cap U.\) Consequently, the set \(\text {cl}\underset{x\in U_{\overline{x}}}{\bigcup }Y(x)\) is compact. We set:

$$\begin{aligned} \varTheta =\Big (\text {cl}\,\bigcup _{x\in U_{\overline{x}}}Y(x)\Big )+\overline{B}_{\mathbb {R}^{q_2}}. \end{aligned}$$

According to Gadhi and Dempe (2013), the set \(\varTheta \) is a nonempty compact set that contains an open neighborhood of \(\text {cl}\,\bigcup _{x\in U_{\overline{x}}}Y(x).\) This allows us to define the real-valued function:

$$\begin{aligned}\varPsi : \mathbb {R}^{q_1}\times \mathbb {R}^{q_2}\rightarrow \mathbb {R}\end{aligned}$$

by

$$\begin{aligned} \varPsi (x, y)=\max _{z\in \varTheta }\,\psi (x, y, z)\,\,\,\,\,\forall (x, y)\in \mathbb {R}^{q_1}\times \mathbb {R}^{q_2},\end{aligned}$$

where the mapping \(\psi \) from \(\mathbb {R}^{q_1}\times \mathbb {R}^{q_2}\times \varTheta \) into \(\mathbb {R}\) is given as:

$$\begin{aligned} \psi (x, y, z):=\min \Big \{f(x, y)-f(x, z),\,\,\,-\varDelta _{\mathbb {R}^l_-}(k_1(x, z), \ldots , k_l(x, z))\Big \}.\end{aligned}$$

For simplicity, for each \((x, y)\in \mathbb {R}^{q_1}\times \mathbb {R}^{q_2},\) we set:

$$\begin{aligned} J(x, y)=\{z\in \varTheta \,:\, \psi ({x}, {y}, z)=\varPsi (x, y)\}. \end{aligned}$$

Consider the scalar functions \(\psi _1\) and \(\psi _2\) defined on \(\mathbb {R}^{q_1}\times \mathbb {R}^{q_2}\times \varTheta \), respectively, by:

$$\begin{aligned} \psi _1(x, y, z)= & {} f(x, y)-f(x, z),\quad \\ \psi _2(x, y, z)= & {} -\varDelta _{\mathbb {R}^l_-}(k_1(x, z), \ldots , k_l(x, z)).\end{aligned}$$

By the above definitions, it holds that:

$$\begin{aligned} \varPsi (x, y)=\max _{z\in \varTheta }\,\min \big \{\psi _1(x, y, z),\,\,\, \psi _2(x, y, z)\big \}. \end{aligned}$$

In this setting, we put:

$$\begin{aligned} I({x}, {y})=\{i\in \{1, 2\}\,:\, \psi _i({x}, {y}, z)=\psi ({x}, {y}, z)\}. \end{aligned}$$

It is not difficult to check that if \(k_i(x, z)\le 0\) for all \(i\in I_l\), then

$$\begin{aligned}\psi _2(x, y, z)=d(k(x, z),\,\, \mathbb {R}^l\setminus \mathbb {R}^l_-)\ge 0.\end{aligned}$$

If, in addition, \(y\in S(x),\) then we have the following equality:

$$\begin{aligned} \psi (x, y, z)= \psi _1(x, y, z). \end{aligned}$$
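The construction of \(\varPsi \) can be made concrete on toy lower-level data. The Python sketch below (ours; the data f, k and the discretization of \(\varTheta \) are assumptions chosen only for illustration) evaluates \(\varPsi (x, y)=\max _{z\in \varTheta }\min \{\psi _1, \psi _2\}\) on a grid and shows that \(\varPsi (x, y)\le 0\) when \(y\in S(x)\), while \(\varPsi (x, y)>0\) otherwise:

```python
import numpy as np

# Toy lower-level data (ours, purely illustrative): f(x, y) = (y - x)^2 and a
# single constraint k(x, y) = y^2 - 1 <= 0, so K_x = [-1, 1] and S(x) is the
# projection of x onto [-1, 1].  Here Delta_{R_-}(v) = v, hence psi2 = 1 - z^2.
f = lambda x, y: (y - x) ** 2
psi1 = lambda x, y, z: f(x, y) - f(x, z)
psi2 = lambda x, y, z: 1.0 - z ** 2

Theta = np.linspace(-2.0, 2.0, 4001)      # a compact set containing K_x + unit ball

def Psi(x, y):
    # Psi(x, y) = max_{z in Theta} min{psi1(x, y, z), psi2(x, y, z)}
    return max(min(psi1(x, y, z), psi2(x, y, z)) for z in Theta)

x = 2.0
print(Psi(x, 1.0))    # y = 1 in S(x): value <= 0 (and equals psi1 at the maximizer)
print(Psi(x, 0.0))    # y = 0 not in S(x): value > 0, so the constraint Psi <= 0 is violated
```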

The following propositions play a key role in the next section.

Proposition 3

(Gadhi and Dempe (2013)) If \((\overline{x}, \overline{y})\) is a local weak efficient solution for the problem (MBPP), then the solution set of the problem \(\max _{z\in \varTheta }\psi (\overline{x}, \overline{y}, z)\) is given by \(S(\overline{x}).\)

Proposition 4

Let \((\overline{x}, \overline{y})\) be a local weak efficient solution for the problem (MBPP). Assume that, for each \(\overline{z}\in J(\overline{x}, \overline{y})\) and \(i\in I(\overline{x}, \overline{y}),\) \(\psi _i\) is continuous and admits an upper convexificator \(\partial ^*\psi _i(\overline{x}, \overline{y}, \overline{z})\) at \((\overline{x}, \overline{y}, \overline{z}).\) Then, one gets:

$$\begin{aligned} \text {conv}\,\partial ^*\varPsi (\overline{x}, \overline{y})\subset \text {conv}\,\Big (\underset{\overline{z}\in J(\overline{x}, \overline{y})}{\bigcup }\,\,\text {conv}\,\Big ( \underset{i\in I(\overline{x}, \overline{y})}{\bigcup }\partial ^*\psi _i(\overline{x}, \overline{y}, \overline{z})\Big )\Big ). \end{aligned}$$
(3.1)

Proof

Let us see that:

$$\begin{aligned} \text {conv}\,\partial ^*\varPsi (\overline{x}, \overline{y})\subset \text {conv}\,\underset{\overline{z}\in J(\overline{x}, \overline{y})}{\bigcup }\partial ^*\psi (\overline{x}, \overline{y}, \overline{z}).\end{aligned}$$
(3.2)

In fact, in view of Proposition 3, it follows that there exists \(\overline{z}\in S(\overline{x})\), such that:

$$\begin{aligned} \varPsi (\overline{x}, \overline{y})=\min \Big \{\psi _1(\overline{x}, \overline{y}, \overline{z}), \psi _2(\overline{x}, \overline{y}, \overline{z})\Big \}. \end{aligned}$$

By an argument similar to that in the proof of Rule 4.4 in Jeyakumar and Luc (1999), we deduce that (3.2) holds. Taking account of Proposition 2 again, we have:

$$\begin{aligned} \partial ^*\psi (\overline{x}, \overline{y}, \overline{z})=\text {conv}\, \underset{i\in I(\overline{x}, \overline{y})}{\bigcup }\partial ^*\psi _i(\overline{x}, \overline{y}, \overline{z}).\end{aligned}$$

This allows us to conclude that (3.1) holds, which proves the claim. \(\square \)

To derive necessary optimality conditions for the problem (MBPP), we introduce the following two related problems:

$$\begin{aligned} \,\,\,&\underset{x, y}{Minimize}\,\,\,F(x, y):=\Big (F_1(x, y),\ldots , F_q(x, y)\Big ) \\ {}&\text{ subject } \text{ to }\,\,\,\,\, g(x, y)\le 0,\,\, h(x, y)=0,\,\, \nonumber \\ {}&\qquad \qquad \quad G(x, y)\ge 0,\,\, H(x, y)\ge 0,\,\, \\ {}&\qquad \qquad \quad G(x, y)^TH(x, y)=0,\,\,\, \\ {}&\qquad \qquad \quad k(x, y)\le 0,\,\,\,\,\varPsi (x, y)\le 0, \\ {}&\qquad \qquad \quad x\in \mathbb {R}^{q_1},\,\,y\in \mathbb {R}^{q_2}, \nonumber \end{aligned}$$
(MBPP1)
$$\begin{aligned} \text{ and }\quad&\underset{x, y}{\text {Minimize}}\,\,\,\overset{\leftrightarrow }{\varPsi }(x, y):=\Big (F(x, y),\,\,k(x, y),\,\varPsi (x, y)\Big ) \\ {}&\text{ subject } \text{ to }\,\,\,\,\,g(x, y)\le 0,\,\, h(x, y)=0,\,\, \\ {}&\qquad \qquad \quad G(x, y)\ge 0,\,\, H(x, y)\ge 0,\,\, \\ {}&\qquad \qquad \quad G(x, y)^TH(x, y)=0,\,\, \\ {}&\qquad \qquad \quad x\in \mathbb {R}^{q_1},\,\,y\in \mathbb {R}^{q_2}. \end{aligned}$$
(MBPP2)

We also propose the following constraint qualifications of the (CQ) and (GS-ACQ) types [(CQ) and (GS-ACQ) generalize the Mangasarian–Fromovitz constraint qualification and the Abadie constraint qualification, respectively]:

Definition 5

The following constraint qualification of the (CQ) type is considered:

(CQ):  there exist   \(v_0\in \mathbb {R}^{q_1}\times \mathbb {R}^{q_2}\) and the positive real numbers \(a^g_i\) (\(i\in I_g(\overline{x})\)), \(a^k_i\) (\(i\in I_k(\overline{x})\)), \(a_i^G\) (\(i\in \nu _1\)), \(a_i^H\) (\(i\in \nu _2\)), \(a_i^{\varPsi (\overline{z})}\) (\(i\in I(\overline{x}),\,\,\,\overline{z}\in J(\overline{x})\)) satisfying:

  1. (i)

    \(\left\langle \xi _i^g, v_0\right\rangle \le -a^g_i\) \(\Big (\forall \,\xi _i^g\in \partial ^*g_i(\overline{x}),\,\,\forall \,i\in I_g(\overline{x})\Big );\)

  2. (ii)

    \(\left\langle \xi _i^k, v_0\right\rangle \le -a^k_i\) \(\Big (\forall \,\xi _i^k\in \partial ^*k_i(\overline{x}),\,\,\forall \,i\in I_k(\overline{x})\Big );\)

  3. (iii)

    \(\left\langle \xi _i^G, v_0\right\rangle \le -a^G_i\) \(\Big (\forall \,\xi _i^G\in \partial ^*(-G_i)(\overline{x}),\) \(\forall \,i\in \nu _1\Big );\)

  4. (iv)

    \(\left\langle \xi _i^H, v_0\right\rangle \le -a^H_i\) \(\Big (\forall \,\xi _i^H\in \partial ^*(-H_i)(\overline{x}),\,\,\forall \,i\in \nu _2\Big );\)

  5. (v)

    \(\left\langle \xi _j^h, v_0\right\rangle =0\) \(\Big (\forall \,\xi _j^h\in \partial ^*h_j(\overline{x}),\,\,\forall \,j\in I_n\Big );\)

  6. (vi)

    \(\left\langle \eta _j^h, v_0\right\rangle =0\) \(\Big (\forall \,\eta _j^h\in \partial ^*(-h_j)(\overline{x}),\,\,\forall \,j\in I_n\Big );\)

  7. (vii)

    \(\left\langle \xi _i^{\varPsi (\overline{z})}, v_0\right\rangle \le -a_i^{{\varPsi (\overline{z})}}\) \(\Big (\forall \,\xi _i^{\varPsi (\overline{z})}\in \partial ^*\psi _i(\overline{x}, \overline{z}),\,\,\forall \, i\in I(\overline{x}),\,\,\forall \,\overline{z}\in J(\overline{x})\Big ).\)

Definition 6

Let \((\overline{x}, \overline{y})\) be a feasible point of problem (MBPP), and assume that all of the functions have an upper convexificator at \((\overline{x}, \overline{y})\). We say that the generalized standard Abadie constraint qualification (GS-ACQ) holds at \((\overline{x}, \overline{y})\) if at least one of the dual sets used in the definition of \(\varGamma (\overline{x}, \overline{y})\) is nonzero and \(\varGamma (\overline{x}, \overline{y})\subset T\big (K, (\overline{x}, \overline{y})\big ). \)

In what follows, a necessary optimality condition for the local weakly efficient solution of problem (MBPP) will be derived.

Theorem 1

(Necessary optimality condition) Let \((\overline{x}, \overline{y})\in K\) be a local weakly efficient solution to the problem (MBPP). Suppose that:

  1. (i)

    \(F_i\) \((i\in I_q),\) \(g_i\) (\(i\in I_g(\overline{x}, \overline{y})\)), \(k_i\) \((i\in I_k(\overline{x}, \overline{y})),\) \(\pm h_j\) (\(j\in I_n\)), \(\psi _1(\,.\,,\,.\,, z)\) (\(z\in \varTheta \)), \(-G_i\) (\(i\in \nu _1\)) and \(-H_i\) (\(i\in \nu _2\)) admit bounded upper semi-regular convexificators and are \(\partial ^*-\) convex functions at \((\overline{x}, \overline{y});\)

  2. (ii)

    \(F_i\) \((i\in I_q)\), \(k_i\) \((i\in I_k(\overline{x}, \overline{y}))\) and \(\varPsi \) are locally Lipschitz near \((\overline{x}, \overline{y});\)

  3. (iii)

    the constraint qualifications (CQ) and (GS-ACQ) at \((\overline{x}, \overline{y})\) hold;

  4. (iv)

    \(L_{\mu }=\emptyset .\)

Then, there exist \({s}=({s}_i)_{i=1}^q\subset \mathbb {R}_+\) and \(({\lambda }, \mu )=\Big ({\lambda }^g, {\lambda }^k, {\lambda }^h, {\lambda }^{\varPsi }, {\lambda }^G, {\lambda }^H, \mu ^h, {\mu }^G, {\mu }^H\Big )\in \mathcal {M}\) with \(\sum _{i\in I_q}s_i=1\) satisfying:

$$\begin{aligned} 0&\in \sum _{i\in I_q}s_i\text {conv}\,\partial ^*F_i(\overline{x}, \overline{y})+ \sum _{i\in I_g(\overline{x}, \overline{y})}\lambda _i^g \text {conv}\,\partial ^*g_{i}(\overline{x}, \overline{y}) \\ {}&+ \sum _{i\in I_k(\overline{x}, \overline{y})}\lambda _i^k \text {conv}\,\partial ^*k_{i}(\overline{x}, \overline{y}) +\sum _{z\in J(\overline{x}, \overline{y})}\sum _{i\in I(\overline{x}, \overline{y})}\lambda _i^{\varPsi (z)}\text {conv}\,\partial ^*\psi _i(\overline{x}, \overline{y}, z) \\ {}&\qquad +\sum _{j\in I_n}\Big [\lambda _j^h\text {conv}\,\partial ^* h_{j}(\overline{x}, \overline{y})+\mu _j^h\text {conv}\,\partial ^*(-h_j)(\overline{x}, \overline{y}) \Big ] \\ {}&\qquad \qquad +\sum _{i\in I_p}\Big [\lambda _i^G\text {conv}\,\partial ^* (-G_i)(\overline{x}, \overline{y})+\lambda _i^H\text {conv}\,\partial ^* (-H_i)(\overline{x}, \overline{y}) \Big ]. \end{aligned}$$

Proof

We consider the mapping \(\overset{\leftrightarrow }{\varPsi }: \mathbb {R}^{q_1}\times \mathbb {R}^{q_2}\rightarrow \mathbb {R}^{q+l+1}\) defined by:

$$\begin{aligned} \overset{\leftrightarrow }{\varPsi }(x, y)=\Big (F(x, y),\,\,k(x, y),\,\varPsi (x, y)\Big ),\,\,\,\,\forall \, (x, y)\in \mathbb {R}^{q_1}\times \mathbb {R}^{q_2}.\end{aligned}$$

Since \((\overline{x}, \overline{y})\in K\) is a local weakly efficient solution of problem (MBPP), it is also a local weakly efficient solution of problem (MBPP1). According to Lemma 3.4 in Gadhi and Dempe (2013), we deduce that the vector \((\overline{x}, \overline{y})\) is a local weakly efficient solution of problem (MBPP2). Let \(K'\) be the feasible set of problem (MBPP2) and set \(M:=l+1\). Consider a neighborhood V of \((\overline{x}, \overline{y})\), such that:

$$\begin{aligned} \overset{\leftrightarrow }{\varPsi }(x, y)-\overset{\leftrightarrow }{\varPsi }(\overline{x}, \overline{y})\not \in -\text {int}\mathbb {R}^{q+M}_+\,\,\,\,(\forall \,(x, y)\in V\cap K'), \end{aligned}$$

which yields that:

$$\begin{aligned} \varDelta _{\text {int}\mathbb {R}^{q+M}_-} \Big (\overset{\leftrightarrow }{\varPsi }(x, y)-\overset{\leftrightarrow }{\varPsi }(\overline{x}, \overline{y})\Big )\ge 0\,\,\,\,(\forall \,(x, y)\in V\cap K').\end{aligned}$$

We set:

$$\begin{aligned}\overset{\sim }{\varPsi }(x, y):=\overset{\leftrightarrow }{\varPsi }(x, y)-\overset{\leftrightarrow }{\varPsi }(\overline{x}, \overline{y}),\end{aligned}$$

which ensures that \(\overset{\sim }{\varPsi }(\overline{x}, \overline{y})=0.\) By the definitions, we have:

$$\begin{aligned} \partial ^*\overset{\leftrightarrow }{\varPsi }_i(\overline{x}, \overline{y})=\partial ^*\overset{\sim }{\varPsi }_i(\overline{x}, \overline{y})\,\,\,\forall \,i=1,\ldots \, q+|I_k(\overline{x},\overline{y})|+1, \end{aligned}$$

where \(\overset{\leftrightarrow }{\varPsi }_i\) is the ith component of \(\overset{\leftrightarrow }{\varPsi }\) and \(\overset{\sim }{\varPsi }_i\) is the ith component of \(\overset{\sim }{\varPsi }.\) It is plain that:

$$\begin{aligned}\varDelta _{\text {int}\mathbb {R}^{q+M}_-}\big (\overset{\sim }{\varPsi }(\overline{x}, \overline{y})\big )=0,\end{aligned}$$

which yields that \((\overline{x}, \overline{y})\) is a local weak minimum of the following scalar problem (MBPP3):

$$\begin{aligned} \,\,\,&\underset{x, y}{\text {Minimize}}\,\,\,\varDelta _{\text {int}\mathbb {R}^{q+M}_-} \Big (\overset{\sim }{\varPsi }(x, y)\Big ) \\ {}&\text{ subject } \text{ to }\,\,\,\,\, g(x, y)\le 0,\,\, h(x, y)=0,\,\, \nonumber \\ {}&\qquad \qquad \quad G(x, y)\ge 0,\,\, H(x, y)\ge 0,\,\,\nonumber \\ {}&\qquad \qquad \quad G(x, y)^TH(x, y)=0,\,\,\nonumber \\ {}&\qquad \qquad \quad x\in \mathbb {R}^{q_1},\,\,y\in \mathbb {R}^{q_2}.\nonumber \end{aligned}$$
(MBPP3)

It is well known that the real-valued function \(\varDelta _{\text {int}\mathbb {R}^{q+|I_k(\overline{x})|+1}_-}\) is convex and locally Lipschitz; hence, the subdifferential \(\partial _C\varDelta _{\text {int}\mathbb {R}^{q+|I_k(\overline{x})|+1}_-}(0)\) is a bounded convexificator of \(\varDelta _{\text {int}\mathbb {R}^{q+|I_k(\overline{x})|+1}_-}\) at \(\overset{\sim }{\varPsi }(\overline{x}, \overline{y})=0.\) On the other hand, by the initial assumptions, it follows that the mapping \(\overset{\sim }{\varPsi }\) is locally Lipschitz near \((\overline{x}, \overline{y})\), which ensures that \({\varDelta _{\text {int}\mathbb {R}^{q+|I_k(\overline{x})|+1}_-}}\circ \overset{\sim }{\varPsi }\) is locally Lipschitz near \((\overline{x}, \overline{y})\) too. Taking account of Theorem 3.2 in Pandey and Mishra (2016), applied to the scalar problem (MBPP3) with \(u:=(\overline{x}, \overline{y}),\) there exist \(\xi \in \text {conv}\partial ^*\big ({\varDelta _{\text {int}\mathbb {R}^{q+|I_k(\overline{x})|+1}_-}}\circ \overset{\sim }{\varPsi }\big )(u), \) \(\xi _i^g\in \text {conv}\partial ^*g_i(u)\,\, (i\in I_g(\overline{x}, \overline{y})), \) \(\xi ^h_j\in \text {conv}\partial ^*h_j(u)\,\,(j\in I_n), \) \(\eta ^h_j\in \text {conv} \partial ^*(-h_j)(u)\,\,(j\in I_n), \) \(\xi _i^G\in \partial ^*(-G_i)(u)\,\, (i\in I_p) \) and \(\xi _i^H\in \partial ^*(-H_i)(u)\,\, (i\in I_p)\) satisfying \(\lambda ^g_{I_g(\overline{x}, \overline{y})}\ge 0,\) \(\lambda _j^h\ge 0, \,\,\mu _j^h\ge 0\) \((j\in I_n),\) \(\lambda _i^G\ge 0,\,\, \lambda _i^H\ge 0,\,\, \mu _i^G\ge 0,\,\, \mu _i^H\ge 0\) \((i\in I_p),\) \(\lambda ^G_{\gamma }=\lambda ^H_{\alpha }=\mu ^G_{\gamma }=\mu ^H_{\alpha }=0,\) and \(\mu _i^G=\mu _i^H=0\) for all \(i\in \beta ,\) such that:

$$\begin{aligned} \xi +\sum _{i\in I_g(\overline{x}, \overline{y})}\lambda _i^g\xi _i^g+\sum _{j\in I_n}\big (\lambda _j^h\xi _j^h +\mu _j^h \eta _j^h\big )+\sum _{i\in I_p}\big (\lambda _i^G\xi _i^G+ \lambda _i^H\xi _i^H\big )=0. \end{aligned}$$
(3.3)

On one hand, it was pointed out by Luu and Mai (2018) that the set-valued mapping \(\partial ^*\varDelta _{\text {int}\mathbb {R}^{q+|I_k(\overline{x})|+1}_-}\) is upper semicontinuous at \(\overset{\sim }{\varPsi }(\overline{x}, \overline{y}).\) By the initial hypotheses, \(\partial ^*F_i(\overline{x}, \overline{y})\) (\(i\in I_q\)), \(\partial ^*k_i(\overline{x}, \overline{y})\) (\(i\in I_k(\overline{x}, \overline{y})\)) and \(\partial ^*\varPsi (\overline{x}, \overline{y})\) are bounded upper convexificators of \(F_i\) (\(i\in I_q\)), \(k_i\) (\(i\in I_k(\overline{x}, \overline{y})\)) and \(\varPsi \) at \((\overline{x}, \overline{y}), \) respectively. On the other hand, the set-valued mappings \(\partial ^*F_i\) (\(i\in I_q\)), \(\partial ^*k_i\) (\(i\in I_k(\overline{x}, \overline{y})\)) and \(\partial ^*\varPsi \) are upper semicontinuous at \((\overline{x}, \overline{y}).\) In view of the chain rule in Jeyakumar and Luc (1999), the set \(\partial _C\varDelta _{\text {int}\mathbb {R}^{q+|I_k(\overline{x},\overline{y})|+1}_-}\Big (\overset{\sim }{\varPsi }(\overline{x},\overline{y})\Big )\circ \)

$$\begin{aligned} \Big (\partial ^*F_1(\overline{x},\overline{y}), \ldots , \partial ^*F_q(\overline{x},\overline{y}), \partial ^*k_1(\overline{x},\overline{y}), \ldots , \partial ^*k_{|I_k(\overline{x},\overline{y})|}(\overline{x},\overline{y}), \partial ^*\varPsi (\overline{x},\overline{y})\Big ) \end{aligned}$$

is a bounded convexificator of \({\varDelta _{\text {int}\mathbb {R}^{q+|I_k(\overline{x}, \overline{y})|+1}_-}}\circ \overset{\sim }{\varPsi }\) at \((\overline{x}, \overline{y}).\) Therefore, one can find:

$$\begin{aligned} \overset{\sim }{\xi }\in \partial _C\varDelta _{\text {int}\mathbb {R}^{q+|I_k(\overline{x}, \overline{y})|+1}_-}\Big (\overset{\sim }{\varPsi }(\overline{x}, \overline{y})\Big )\end{aligned}$$

such that

$$\begin{aligned} \xi \in \overset{\sim }{\xi }\circ \Big (\partial ^*F_1(\overline{x}, \overline{y}), \ldots , \partial ^*F_q(\overline{x}, \overline{y}), \partial ^*k_1(\overline{x}, \overline{y}), \ldots , \partial ^*k_{|I_k(\overline{x}, \overline{y})|}(\overline{x}, \overline{y}), \partial ^*\varPsi (\overline{x}, \overline{y})\Big ). \end{aligned}$$

By the definition of the subdifferential of \({\varDelta _{\text {int}\mathbb {R}^{q+|I_k(\overline{x}, \overline{y})|+1}_-}}\) at 0 in the sense of convex analysis, for every \(v\in \mathbb {R}^{q+|I_k(\overline{x}, \overline{y})|+1},\) the following inequality holds:

$$\begin{aligned} {\varDelta _{\text {int}\mathbb {R}^{q+|I_k(\overline{x}, \overline{y})|+1}_-}}(v) \ge \left\langle \overset{\sim }{\xi }, v\right\rangle . \end{aligned}$$

Consequently, for any \(v\in -\mathbb {R}^{q+|I_k(\overline{x}, \overline{y})|+1}_+,\) it holds that:

$$\begin{aligned} \left\langle \overset{\sim }{\xi }, v\right\rangle \le {\varDelta _{\text {int}\mathbb {R}^{q+|I_k(\overline{x}, \overline{y})|+1}_-}}(v)=-d\big (v, \mathbb {R}^{q+|I_k(\overline{x}, \overline{y})|+1}\setminus \mathbb {R}^{q+|I_k(\overline{x}, \overline{y})|+1}_-\big )\le 0, \end{aligned}$$

which proves that:

$$\begin{aligned}\overset{\sim }{\xi }\in \mathbb {R}^{q+|I_k(\overline{x}, \overline{y})|+1}_+.\end{aligned}$$

Moreover, it follows from Proposition 1 that \(\overset{\sim }{\xi }\ne 0\), and:

$$\begin{aligned} \xi \in \overset{\sim }{\xi }\circ \Big (\partial ^*F_1(\overline{x}, \overline{y}), \ldots , \partial ^*F_q(\overline{x}, \overline{y}), \partial ^*k_1(\overline{x}, \overline{y}), \ldots , \partial ^*k_{|I_k(\overline{x}, \overline{y})|}(\overline{x}, \overline{y}), \partial ^*\varPsi (\overline{x}, \overline{y})\Big ).\end{aligned}$$

We set

$$\begin{aligned}\overset{\sim }{\xi }=\Big ((s_i)_{i\in I_q}, (\lambda _i^k)_{i\in I_k}, \lambda ^{\varPsi }\Big ).\end{aligned}$$

Making use of Proposition 4, we deduce that:

$$\begin{aligned}\begin{aligned} \xi \in \sum _{i=1}^qs_i\text {conv}\,\partial ^*F_i(\overline{x}, \overline{y})&+\sum _{i\in I_k(\overline{x}, \overline{y})}\lambda _i^k\text {conv}\,\partial ^*k_i(\overline{x}, \overline{y}) \\ {}&+\lambda ^{\varPsi }\text {conv}\,\Big (\underset{\overline{z}\in J(\overline{x}, \overline{y})}{\bigcup }\,\,\text {conv}\,\Big ( \underset{i\in I(\overline{x}, \overline{y})}{\bigcup }\partial ^*\psi _i(\overline{x},\overline{y}, \overline{z})\Big )\Big ). \end{aligned} \end{aligned}$$

This yields the existence of \(\xi _i^F\in \text {conv}\,\partial ^*F_i(\overline{x}, \overline{y})\) \((i\in I_q)\) and \(\xi _i^k\in \text {conv}\,\partial ^*k_i(\overline{x}, \overline{y})\) \((i\in I_k(\overline{x}, \overline{y})),\) and, for every \(\overline{z}\in J(\overline{x}, \overline{y}),\) of \(\lambda _i^{\varPsi (\overline{z})}\ge 0\) and \(\xi _i^{\varPsi (\overline{z})}\in \partial ^*\psi _i(\overline{x}, \overline{y}, \overline{z})\) \((i\in I(\overline{x}, \overline{y}))\) satisfying:

$$\begin{aligned} \xi =\sum _{i=1}^qs_i\xi _i^F+\sum _{i\in I_k(\overline{x}, \overline{y})}\lambda _i^k\xi _i^k+\sum _{\overline{z}\in J(\overline{x}, \overline{y})}\sum _{i\in I(\overline{x}, \overline{y})}\lambda _i^{\varPsi (\overline{z})}\xi _i^{\varPsi (\overline{z})}. \end{aligned}$$
(3.4)

Combining (3.3) with (3.4) yields that:

$$\begin{aligned} \begin{aligned} \sum _{i=1}^qs_i\xi _i^F&+\sum _{i\in I_g(\overline{x}, \overline{y})}\lambda _i^g\xi _i^g+\sum _{i\in I_k(\overline{x}, \overline{y})}\lambda _i^k\xi _i^k+\sum _{\overline{z}\in J(\overline{x}, \overline{y})}\sum _{i\in I(\overline{x}, \overline{y})}\lambda _i^{\varPsi (\overline{z})}\xi _i^{\varPsi (\overline{z})} \\ {}&+\sum _{j\in I_n}\big (\lambda _j^h\xi _j^h +\mu _j^h \eta _j^h\big )+\sum _{i\in I_p}\Big (\lambda _i^G\xi _i^G+ \lambda _i^H\xi _i^H\Big ) =0,\end{aligned}\end{aligned}$$
(3.5)

where \(s_i\ge 0\) \((i\in I_q),\) \(\lambda ^g_{I_g(\overline{x}, \overline{y})}\ge 0,\) \(\lambda ^k_{I_k(\overline{x}, \overline{y})}\ge 0,\) \(\lambda _j^h\ge 0, \,\,\mu _j^h\ge 0\) \((j\in I_n),\) \(\lambda _i^G\ge 0,\,\, \lambda _i^H\ge 0,\,\, \mu _i^G\ge 0,\,\, \mu _i^H\ge 0\) \((i\in I_p),\) \(\lambda _i^{\varPsi (\overline{z})}\ge 0\) \( (i\in I(\overline{x}, \overline{y}),\,\,\, \overline{z}\in J(\overline{x}, \overline{y})),\) \(\lambda ^G_{\gamma }=\lambda ^H_{\alpha }=\mu ^G_{\gamma }=\mu ^H_{\alpha }=0,\) and \(\mu _i^G=\mu _i^H=0\) for all \(i\in \beta .\)

Finally, we need to point out that \(s=(s_i)_{i=1}^q\ne 0.\) In fact, if this fails, then \(s_i=0\) for every \(i\in I_q.\) It follows further from equality (3.5) that:

$$\begin{aligned} \sum _{i\in I_g(\overline{x}, \overline{y})}\lambda _i^g\xi _i^g&+\sum _{i\in I_k(\overline{x}, \overline{y})}\lambda _i^k\xi _i^k+\sum _{j\in I_n}\big (\lambda _j^h\xi _j^h +\mu _j^h \eta _j^h\big ) \\ {}&+\sum _{i\in I_p}\Big (\lambda _i^G\xi _i^G+ \lambda _i^H\xi _i^H\Big ) +\sum _{\overline{z}\in J(\overline{x}, \overline{y})}\sum _{i\in I(\overline{x}, \overline{y})}\lambda _i^{\varPsi (\overline{z})}\xi _i^{\varPsi (\overline{z})} =0. \end{aligned}$$

Let \(v_0\in \mathbb {R}^{q_1}\times \mathbb {R}^{q_2}\) be arbitrary. It is evident that:

$$\begin{aligned} \begin{aligned}&\sum _{i\in I_g(\overline{x}, \overline{y})}\lambda _i^g\left\langle \xi _i^g, v_0\right\rangle +\sum _{i\in I_k(\overline{x}, \overline{y})}\lambda _i^k\left\langle \xi _i^k, v_0\right\rangle +\sum _{j\in I_n}\left\langle \lambda _j^h\xi _j^h+\mu _j^h \eta _j^h, v_0\right\rangle \\&+\sum _{i\in I_p}\left\langle \lambda _i^G\xi _i^G+ \lambda _i^H\xi _i^H, v_0\right\rangle +\sum _{\overline{z}\in J(\overline{x}, \overline{y})}\sum _{i\in I(\overline{x}, \overline{y})}\left\langle \lambda _i^{\varPsi (\overline{z})}\xi _i^{\varPsi (\overline{z})}, v_0\right\rangle =0. \end{aligned}\end{aligned}$$
(3.6)

In other words, under the constraint qualification of the (CQ) type, there exist \(v_0\in \mathbb {R}^{q_1}\times \mathbb {R}^{q_2}\) and positive real numbers \(a^g_i\) (\(i\in I_g(\overline{x}, \overline{y})\)), \(a^k_i\) (\(i\in I_k(\overline{x}, \overline{y})\)), \(a_i^G\) (\(i\in \nu _1\)), \(a_i^H\) (\(i\in \nu _2\)), \(a_i^{\varPsi (\overline{z})}\) (\(i\in I(\overline{x}, \overline{y}),\,\,\,\overline{z}\in J(\overline{x}, \overline{y})\)), such that conditions (i) to (vii) in Definition 5 are fulfilled. Hence, the left-hand side of (3.6) is smaller than or equal to:

$$\begin{aligned}\begin{aligned} -\Big (\sum _{i\in I_g(\overline{x}, \overline{y})}\lambda _i^ga_i^g&+\sum _{i\in I_k(\overline{x}, \overline{y})}\lambda _i^k a_i^k+\sum _{i\in I_p}\Big (\lambda _i^G a_i^G+ \lambda _i^H a_i^H\Big ) \\ {}&+\sum _{\overline{z}\in J(\overline{x}, \overline{y})}\Big (\lambda _1^{\varPsi (\overline{z})} a_1^{\varPsi (\overline{z})}+\lambda _2^{\varPsi (\overline{z})}a_2^{\varPsi (\overline{z})} \Big )\Big )<0, \end{aligned}\end{aligned}$$

which contradicts equality (3.6). We have thus shown that \(s\ne 0\), and without loss of generality, we may assume that \(\sum _{i=1}^qs_i=1,\) which completes the proof. \(\square \)

Theorem 1 is illustrated by the following example.

Example 1

Consider the following problem in \(\mathbb {R}^2\times \mathbb {R}:\)

$$\begin{aligned} (MBPP)\quad&\text{ Minimize}_{x_1, x_2, y}\,\,F(x_1, x_2, y):=\frac{1}{2}|x_1|+|x_2|+\frac{1}{2}|y|\\ {}&\text{ subject } \text{ to }\,\,g(x_1,x_2, y):=|x_2|\le 0,\,\,h(x_1, x_2, y):=x_1x_2=0,\,\,\\ {}&G(x_1, x_2, y):=x_1\ge 0,\,\, H(x_1, x_2, y):=x_2\ge 0,\\ {}&G(x_1, x_2, y)H(x_1, x_2, y)=0,\,\,y\in S(x), \end{aligned}$$

where S(x) is the solution set of the following lower level problem in \(\mathbb {R}:\)

$$\begin{aligned} (MPPP-x)\quad&\text{ Minimize}_{y}\,\,f(x_1, x_2, y):=x_1+x_2\\ {}&\text{ subject } \text{ to }\,\,k(x_1,x_2, y):=-|y-x_1-1|\le 0. \end{aligned}$$

We take \(\overline{x}=(0, 0)\) and \(\overline{y}=0.\) It can be seen that \(I_m=I_n=I_p=I_q=I_l=\{1\},\) \(I_g=\{1\},\) \(I_k=\emptyset ,\) \(\alpha =\gamma =\emptyset ,\) \(\beta =\{1\},\) \(\nu _1=\nu _2=\{1\},\) \(\alpha ^+_{\mu }=\gamma ^+_{\mu }=\emptyset ,\) \(\beta ^G_{\mu }=\{i=1: \mu _1^H=0,\,\mu _1^G>0\},\) \(\beta ^H_{\mu }=\{i=1: \mu _1^H>0,\,\mu _1^G=0\},\) \(L_{\mu }=\emptyset ,\) \(K_x=\{y\in \mathbb {R}: k_1(x, y)\le 0\}=\mathbb {R}\) and \(S(x)=\mathbb {R}\) for all \(x=(x_1, x_2)\in \mathbb {R}^2.\) Therefore, the feasible set of problem (MBPP) is \(K=\mathbb {R}_+\times \{0\}\times \mathbb {R},\) and so, \(T(K, (\overline{x}, \overline{y}))=\mathbb {R}_+\times \{0\}\times \mathbb {R}.\) It is plain that \((\overline{x}, \overline{y})\) is a local weakly efficient solution of problem (MBPP), and we may take \(\varTheta =[-1, 1]+[-1, 1]=[-2, 2]\); thus, \(\varPsi : \mathbb {R}^2\times \mathbb {R}\rightarrow \mathbb {R}\) is defined by:

$$\begin{aligned} \varPsi (x,y)=\max _{z\in \varTheta }\psi (x, y, z)\,\,\,(\forall \,(x, y)\in \mathbb {R}^2\times \mathbb {R}), \end{aligned}$$

where \(\psi (x, y, z)=f(x, y)-f(x,z)=0\) for all \((x, y)\in \mathbb {R}^2\times S(x).\) Consequently, \(\varPsi (x, y)=0\) for all \((x, y)\in \mathbb {R}^2\times \mathbb {R}.\)

Directly calculating, we obtain that: \(\text {conv}\partial ^*F_1(\overline{x}, \overline{y})=[-\frac{1}{2}, \frac{1}{2}]\times [-1, 1]\times [-\frac{1}{2}, \frac{1}{2}],\) \(\text {conv}\partial ^*(\pm h)_1(\overline{x}, \overline{y})=\{(0, 0, 0)\},\) \(\text {conv}\partial ^*g_1(\overline{x}, \overline{y})=\{0\}\times [-1, 1]\times \{0\},\) \(\text {conv}\partial ^*G_1(\overline{x}, \overline{y})=\{(1, 0, 0)\},\) \(\text {conv}\partial ^*\psi _1(\overline{x}, \overline{y}, z)=\{(0, 0, 0)\}\) \((\forall \,z\in \varTheta =[-2, 2]),\) \(\text {conv}\partial ^*(-G)_1(\overline{x}, \overline{y})=\{(-1, 0, 0)\},\) \(\text {conv}\partial ^*H_1(\overline{x}, \overline{y})=\{(0, 1, 0)\}\) and \(\text {conv}\partial ^*(-H)_1(\overline{x}, \overline{y})= \{(0, -1, 0)\}.\) In consequence:

$$\begin{aligned}\begin{aligned} \varGamma (\overline{x}, \overline{y})&=\Big [\text {conv}\partial ^*g_1 (\overline{x}, \overline{y}) \Big ]^-\bigcap \Big [\text {conv}\partial ^*h_1(\overline{x}, \overline{y})\cup \text {conv}\partial ^*(-h_1)(\overline{x}, \overline{y}) \Big ]^- \\&\bigcap \Big [\text {conv}\partial ^*(-G)_1 (\overline{x}, \overline{y})\cup \text {conv}\partial ^*(-H_1) (\overline{x}, \overline{y}) \Big ]^-=\mathbb {R}_+\times \{0\}\times \mathbb {R}. \end{aligned} \end{aligned}$$

By Definition 6, the generalized standard Abadie constraint qualification (GS-ACQ) holds at \((\overline{x}, \overline{y}).\) By virtue of Theorem 1, we conclude that there exist \({s}=({s}_i)_{i=1}^q\subset \mathbb {R}_+\) and \(({\lambda }, \mu )=\Big ({\lambda }^g, {\lambda }^k, {\lambda }^h, {\lambda }^{\varPsi }, {\lambda }^G, {\lambda }^H, \mu ^h, {\mu }^G, {\mu }^H\Big )\in \mathcal {M}\) satisfying:

$$\begin{aligned} 0&\in \sum _{i\in I_q}s_i\text {conv}\,\partial ^*F_i(\overline{x}, \overline{y})+ \sum _{i\in I_g(\overline{x}, \overline{y})}\lambda _i^g \text {conv}\,\partial ^*g_{i}(\overline{x}, \overline{y})+ \sum _{i\in I_k(\overline{x}, \overline{y})}\lambda _i^k \text {conv}\,\partial ^*k_{i}(\overline{x}, \overline{y})\\&+\sum _{z\in J(\overline{x}, \overline{y})}\sum _{i\in I(\overline{x}, \overline{y})}\lambda _i^{\varPsi (z)}\text {conv}\,\partial ^*\psi _i(\overline{x}, \overline{y}, z) \\&\quad +\sum _{j\in I_n}\Big [\lambda _j^h\text {conv}\,\partial ^* h_{j}(\overline{x}, \overline{y})+\mu _j^h\text {conv}\,\partial ^*(-h_j)(\overline{x}, \overline{y}) \Big ]\\&\quad +\sum _{i\in I_p}\Big [\lambda _i^G\text {conv}\,\partial ^* (-G_i)(\overline{x}, \overline{y})+\lambda _i^H\text {conv}\,\partial ^* (-H_i)(\overline{x}, \overline{y}) \Big ]. \end{aligned}$$

In fact, in this setting, we may take \(s=s_1=1\ge 0,\) \(\lambda _1^g=1\ge 0,\) \(\lambda _i^{\varPsi (z)}=1\ge 0,\) where \(z\in J(\overline{x}, \overline{y})=\varTheta \) and \(I(\overline{x}, \overline{y})=\{1\},\) \(\lambda _1^h=\mu _1^h=1\ge 0,\) and \(\lambda _1^G=\lambda _1^H=\frac{1}{2}\ge 0;\) then we have:

$$\begin{aligned} 0&\in s_1\text {conv}\,\partial ^*F_1(\overline{x}, \overline{y})+ \lambda _1^g \text {conv}\,\partial ^*g_{1}(\overline{x}, \overline{y}) +\sum _{z\in \varTheta }\lambda _1^{\varPsi (z)}\text {conv}\,\partial ^*\psi _1(\overline{x}, \overline{y}, z) \\ {}&+\Big [\lambda _1^h\text {conv}\,\partial ^* h_{1}(\overline{x}, \overline{y})+\mu _1^h\text {conv}\,\partial ^*(-h_1)(\overline{x}, \overline{y}) \Big ] \\ {}&+\Big [\lambda _1^G\text {conv}\,\partial ^* (-G_1)(\overline{x}, \overline{y})+\lambda _1^H\text {conv}\,\partial ^* (-H_1)(\overline{x}, \overline{y}) \Big ] \\ {}&=[-1, 0]\times [-\frac{5}{2}, \frac{3}{2}]\times [-\frac{1}{2}, \frac{1}{2}], \end{aligned}$$

as claimed.
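The inclusion above can also be confirmed numerically. The following minimal Python sketch (ours; the helper name box is an assumption) forms the Minkowski sum of the weighted convexificators of Example 1 as coordinatewise intervals and checks that it contains the origin:

```python
import numpy as np

# Interval check of the stationarity condition in Example 1 with the multipliers
# chosen in the text: s1 = 1, lambda1^g = 1, lambda1^h = mu1^h = 1,
# lambda1^G = lambda1^H = 1/2 (the Psi-terms are {(0, 0, 0)} and drop out).
# Each convexificator is a box, stored as (lower, upper) per coordinate.
def box(lo, hi):
    return np.array(lo, dtype=float), np.array(hi, dtype=float)

terms = [
    (1.0, box([-0.5, -1.0, -0.5], [0.5, 1.0, 0.5])),   # conv d*F1(0,0,0)
    (1.0, box([0.0, -1.0, 0.0], [0.0, 1.0, 0.0])),     # conv d*g1(0,0,0)
    (1.0, box([0.0, 0.0, 0.0], [0.0, 0.0, 0.0])),      # conv d*h1 = {(0,0,0)}
    (1.0, box([0.0, 0.0, 0.0], [0.0, 0.0, 0.0])),      # conv d*(-h1) = {(0,0,0)}
    (0.5, box([-1.0, 0.0, 0.0], [-1.0, 0.0, 0.0])),    # conv d*(-G1) = {(-1,0,0)}
    (0.5, box([0.0, -1.0, 0.0], [0.0, -1.0, 0.0])),    # conv d*(-H1) = {(0,-1,0)}
]

lower = sum(w * lo for w, (lo, hi) in terms)
upper = sum(w * hi for w, (lo, hi) in terms)
print(lower, upper)                       # [-1, -2.5, -0.5] and [0, 1.5, 0.5]
print(np.all(lower <= 0) and np.all(upper >= 0))   # True: 0 lies in the Minkowski sum
```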

The next result provides a necessary optimality condition for a local weak efficient solution of problem (MBPP\(^*\)).

Corollary 1

Let \((\overline{x}, \overline{y})\in K\) be a local weak efficient solution to the multiobjective bilevel programming problem with constraints (MBPP\(^*\)). Assume that:

  1. (i)

    the functions \(F_i\) \((i\in I_q),\) \(g_i\) (\(i\in I_g(\overline{x}, \overline{y})\)), \(k_i\) \((i\in I_k(\overline{x}, \overline{y}))\) and \(\psi _1(\,.\,,\,.\,, z)\) (\(z\in \varTheta \)) admit bounded upper semi-regular convexificators and are \(\partial ^*-\) convex functions at \((\overline{x}, \overline{y});\)

  2. (ii)

    the functions \(F_i\) \((i\in I_q)\), \(k_i\) \((i\in I_k(\overline{x}, \overline{y}))\) and \(\varPsi \) are locally Lipschitz near \((\overline{x}, \overline{y});\)

  3. (iii)

    the constraint qualifications (CQ) and (GS-ACQ) at \((\overline{x}, \overline{y})\) hold.

Then, there exist \({s}=({s}_i)_{i=1}^q\subset \mathbb {R}_+\) and \({\lambda }=\Big ({\lambda }^g, {\lambda }^k, {\lambda }^{\varPsi }\Big )\in \mathbb {R}^{m+l+2|J(\overline{x}, \overline{y})|}\) with \(\sum _{i\in I_q}s_i=1,\) \(\lambda ^g_{I_g}\ge 0,\,\,\lambda ^k_{I_k}\ge 0,\,\, \lambda ^{\varPsi (z)}_i\ge 0,\,\,i\in I(\overline{x}, \overline{y})\,\,\text {and}\,\,z\in J(\overline{x}, \overline{y})\) satisfying:

$$\begin{aligned} 0&\in \sum _{i\in I_q}s_i\text {conv}\,\partial ^*F_i(\overline{x}, \overline{y}) + \sum _{i\in I_g(\overline{x}, \overline{y})}\lambda _i^g \text {conv}\,\partial ^*g_{i}(\overline{x}, \overline{y}) \\ {}&+\sum _{i\in I_k(\overline{x}, \overline{y})}\lambda _i^k \text {conv}\,\partial ^*k_{i}(\overline{x}, \overline{y}) +\sum _{z\in J(\overline{x}, \overline{y})}\sum _{i\in I(\overline{x}, \overline{y})}\lambda _i^{\varPsi (z)}\text {conv}\,\partial ^*\psi _i(\overline{x}, \overline{y}, z). \end{aligned}$$

Proof

It is a direct consequence of Theorem 1, which completes the proof. \(\square \)

Remark 1

We mention that if the solution set of problem (MPPP-x) is \(\mathbb {R}^{q_2},\) then problem (MBPP) reduces to the optimization problem with equilibrium constraints, say (P), in Pandey and Mishra (2016). Moreover, the preceding result may fail if assumption (iv) in Theorem 1 is removed (see Theorem 3.1 in Pandey and Mishra (2016) or Theorem 4.1 in Pandey and Mishra (2018), for instance).

Remark 2

The preceding results remain true if the constraint and objective functions are affine. In particular, if \(I_q\) is a singleton (i.e., \(|I_q|=1\)) and \(\varPsi \equiv 0,\) these results reduce to well-known results for single-level multiobjective programming problems with equilibrium constraints, e.g., in Pandey and Mishra (2016, 2018).

4 Applications to duality for multiobjective bilevel programming problem with equilibrium constraints

Our aim here is to construct Wolfe and Mond–Weir type dual problems for the nonsmooth multiobjective bilevel programming problem with equilibrium constraints (MBPP), using the tool of convexificators with generalized \(\partial ^*-\) convex objective and constraint functions. Besides, we establish weak and strong duality theorems for the same.

4.1 The Wolfe duality for (MBPP)

The Wolfe type dual problem for the nonsmooth multiobjective bilevel programming problem (MBPP) is formulated as follows:

$$\begin{aligned} (WMBPP):\,\,&\underset{u, s, \lambda }{\max }\,\,\,\Big [\sum _{i\in I_q}s_i F_i(u)+\sum _{i\in I_g}\lambda _i^g g_{i}(u)+\sum _{i\in I_k}\lambda _i^k k_{i}(u)+\sum _{j\in I_n}(\lambda _j^h-\mu _j^h)h_{j}(u) \\ {}&\qquad \qquad +\sum _{z\in J(u)}\sum _{i\in I(u)}\lambda _i^{\varPsi (z)}\psi _i(u, z) -\sum _{i\in I_p}\big (\lambda _i^GG_i(u)+\lambda _i^HH_i(u)\big )\Big ]\\ \text{ subject } \text{ to }\,\,&0\in \sum _{i\in I_q}s_i\text {conv}\,\partial ^*F_i(u)+ \sum _{i\in I_g}\lambda _i^g \text {conv}\,\partial ^*g_{i}(u)+ \sum _{i\in I_k}\lambda _i^k \text {conv}\,\partial ^*k_{i}(u)\\ {}&+\sum _{j\in I_n}\Big [\lambda _j^h\text {conv}\,\partial ^* h_{j}(u)+\mu _j^h\text {conv}\,\partial ^*(-h_j)(u) \Big ] \\ {}&+\sum _{z\in J(u)}\sum _{i\in I(u)}\lambda _i^{\varPsi (z)}\text {conv}\,\partial ^*\psi _i(u, z) \\ {}&+\sum _{i\in I_p}\Big [\lambda _i^G\text {conv}\,\partial ^* (-G_i)(u)+\lambda _i^H\text {conv}\,\partial ^* (-H_i)(u) \Big ], \\ {}&u\in \mathbb {R}^{q_1}\times \mathbb {R}^{q_2},\,\,\,\,s_i\ge 0,\,\,\,i\in I_q, \,\,\,\, \sum _{i\in I_q}s_i=1, \,\,\,\lambda ^g_{I_g}\ge 0,\,\,\,\, \lambda ^k_{I_k}\ge 0,\,\,\,\, \\ {}&\lambda ^{\varPsi (z)}_i\ge 0,\,\,\,\,i\in I(u),\,\,z\in J(u),\,\,\,\lambda _j^h\ge 0,\,\,\mu _j^h\ge 0,\,\,j\in I_n,\,\, \\ {}&\lambda _i^G\ge 0,\,\,\lambda _i^H\ge 0,\,\,\mu _i^G\ge 0,\,\,\mu _i^H\ge 0,\,\,i\in I_p, \\ {}&\lambda _{\gamma }^G=\lambda _{\alpha }^H= \mu _{\gamma }^G=\mu _{\alpha }^H=0,\,\,\, \forall \,i\in \beta ,\,\mu _i^G=0,\,\,\mu _i^H=0, \\\text {where}\quad&\lambda _{\gamma }^G=(\lambda _i^G)_{i\in \gamma },\,\,\,\, \lambda _{\alpha }^H=(\lambda _i^H)_{i\in \alpha },\,\,\, \mu _{\gamma }^G=(\mu _i^G)_{i\in \gamma },\,\,\,\mu _{\alpha }^H=(\mu _i^H)_{i\in \alpha },\,\, \\ {}&\lambda ^{\varPsi }=\big (\lambda ^{\varPsi (z)}_i\big )_{z\in J(u),\,\, i\in I(u)},\,\,\,\, \mu =(\mu ^h, \mu ^G, \mu ^H)\in \mathbb {R}^{n+2p} \\ \text {and}\quad&(\lambda , \mu )=(s, \lambda ^g, \lambda ^k, \lambda ^h, \lambda ^{\varPsi }, \lambda ^G, \lambda ^H, \mu ^h, \mu ^G, \mu ^H)\in \mathbb {R}^{q+m+l+2n+2|J(u)|+4p}. \end{aligned}$$

We mention that the Wolfe type dual problem for the problem (MBPP\(^*\)) is denoted by (WMBPP\(^*\)) and is defined similarly to the problem (WMBPP).

The following example illustrates the Wolfe type dual problem for the problem (MBPP).

Example 2

Consider the following problem in \(\mathbb {R}\times \mathbb {R}:\)

$$\begin{aligned} (MBPP)\quad&\text{ Minimize}_{x, y}\,\,F(x, y):=\Big (F_1(x, y), F_2(x, y)\Big )=\Big (x+y,\,\,x-y^2\Big )\\ {}&\text{ subject } \text{ to }\,\,g(x, y):=\frac{-|x|-x}{2}\le 0,\,\,h(x, y):=x^2-y=0,\,\,\\ {}&G(x, y):=-x+y^2\ge 0,\,\, H(x, y)=-y+x^2\ge 0,\\ {}&G(x, y)H(x, y)=0,\,\,y\in S(x), \end{aligned}$$

where S(x) is the solution set of the following lower level problem in \(\mathbb {R}:\)

$$\begin{aligned} (MPPP-x)\quad&\text{ Minimize}_{y}\,\,f(x, y):=x^3+x^2-x-1\\ {}&\text{ subject } \text{ to }\,\,k_1(x, y):=-x^2-|y|\le 0,\,\,k_2(x, y)=-\frac{1}{2}|x|-y^2\le 0. \end{aligned}$$

It can be easily seen that \(K_x=\mathbb {R},\) \(S(x)=\mathbb {R}\) and \(K=\Big \{(x, y)\in \mathbb {R}^2: y^2\ge x,\,\,x^2=y\Big \}.\) Thus, the feasible set of problem (MBPP) is of the following form:

$$\begin{aligned} K=\Big \{(x, y)\in \mathbb {R}^2: x\le 0\,\,\,\text{ or }\,\,\,x\ge 1,\,\,x^2=y\Big \}.\end{aligned}$$
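
As an independent numerical sanity check of this description of \(K\) (a small sketch of ours, not part of the original development; the helper name and the sampling grid are our own choices), one may test the constraints of (MBPP) directly along the curve \(y=x^2\):

```python
# Illustrative check (ours): along y = x**2, the constraints of (MBPP) in
# Example 2 should hold exactly when x <= 0 or x >= 1, matching K above.

def feasible(x, y, tol=1e-12):
    g = (-abs(x) - x) / 2            # g(x, y) <= 0
    h = x ** 2 - y                   # h(x, y) =  0
    G = -x + y ** 2                  # G(x, y) >= 0
    H = -y + x ** 2                  # H(x, y) >= 0
    return (g <= tol and abs(h) <= tol and G >= -tol
            and H >= -tol and abs(G * H) <= tol)

for k in range(-200, 201):
    x = k / 100                      # grid on [-2, 2] with step 0.01
    assert feasible(x, x ** 2) == (x <= 0 or x >= 1)
print("K along y = x^2 coincides with {x <= 0 or x >= 1}, as claimed.")
```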

By direct calculation, for every \((u_1, u_2)\in \mathbb {R}^2\), one obtains \(\partial ^*F_1(u_1, u_2)=\{(1, 1)\}, \) \(\partial ^*F_2(u_1, u_2)=\{(1, -2u_2)\}, \) \(\partial ^*g_1(u_1, u_2)=\{(-1, 0),\,\,(0, 0)\}, \) \(\partial ^*h_1(u_1, u_2)=\{(2u_1, - 1)\}, \) \(\partial ^*(- h_1)(u_1, u_2)=\partial ^*(- H_1)(u_1, u_2)=\{(-2u_1, 1)\}, \) \(\partial ^*(- G_1)(u_1, u_2)=\{(1, - 2u_2)\}, \) \(\partial ^*k_1(u_1, u_2)=\{(-2u_1, -1),\,\,(-2u_1, 1)\}, \) \(\partial ^*k_2(u_1, u_2)=\{(-\frac{1}{2}, -2u_2),\,\,\, (\frac{1}{2}, -2u_2)\} \) and \(\partial ^*\psi _i(u_1, u_2, z)=\{(0, 0)\}\,\,\,\forall \, z\in J(u_1, u_2),\,\,\,\,i\in I(u_1, u_2).\)
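
These convexificators can also be probed numerically. The following sketch is ours and purely illustrative; it assumes the upper semi-regular inequality \(f^+(u; d)\le \max _{\xi \in \partial ^*f(u)}\left\langle \xi , d\right\rangle \) for the upper Dini directional derivative and approximates that derivative by a forward difference, which is exact here since \(g_1\) is piecewise linear:

```python
# Illustrative check (ours): forward-difference test of the upper
# semi-regular inequality  g1^+(u; d) <= max_{xi in d*g1(u)} <xi, d>
# for g1(x, y) = (-|x| - x)/2 with d*g1(u) = {(-1, 0), (0, 0)}.

def g1(x, y):
    return (-abs(x) - x) / 2

def dini_upper(fun, u, d, t=1e-6):
    # forward-difference estimate of the upper Dini derivative at u along d
    return (fun(u[0] + t * d[0], u[1] + t * d[1]) - fun(u[0], u[1])) / t

convexificator = [(-1.0, 0.0), (0.0, 0.0)]
points = [(0.0, 0.0), (0.5, 0.25), (-1.0, 1.0)]
directions = [(1, 0), (-1, 0), (0, 1), (1, 1), (-1, -1), (2, -3)]

for u in points:
    for d in directions:
        lhs = dini_upper(g1, u, d)
        rhs = max(xi[0] * d[0] + xi[1] * d[1] for xi in convexificator)
        assert lhs <= rhs + 1e-9, (u, d, lhs, rhs)
print("Upper semi-regular inequality holds for g1 on the sampled data.")
```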

Then, the Wolfe type dual problem (WMBPP) for the multiobjective bilevel programming problem (MBPP) is of the form:

$$\begin{aligned} (WMBPP):\,\,\,&\underset{u, s, \lambda }{\max }\,\,\,\Big [s(u_1+u_2)+(1-s)(u_1-u_2^2)-\frac{\lambda _1^g}{2}(u_1+|u_1|)-\lambda _1^k(u_1^2+|u_2|)\\ {}&-\lambda _2^k(\frac{1}{2} |u_1|+u_2^2)+(\lambda _1^h-\mu _1^h)(u_1^2-u_2)+\lambda _1^G(u_1-u_2^2)+\lambda _1^H(u_2-u_1^2) \Big ] \\ \text{ subject } \text{ to }\quad 0&\in s(1, 1)+(1-s)(1, -2u_2)+\lambda _1^g \text {conv}\{(-1, 0),\,\,(0, 0)\}\\ {}&+\lambda _1^k\text {conv}\{(-2u_1, -1),\,\,(-2u_1, 1)\} + \lambda _2^k \text {conv}\{(-\frac{1}{2}, -2u_2),\,\, (\frac{1}{2}, -2u_2)\}\\ {}&+\lambda _1^h (2u_1, -1)+\mu _1^h(-2u_1, 1) +\lambda _1^G(1, -2u_2)+\lambda _1^H(-2u_1, 1), \\&u=(u_1, u_2)\in \mathbb {R}^2,\,\,\,\, 0\le s\le 1, \,\,\,\, \lambda _1^g\ge 0,\,\, \lambda _1^k\ge 0,\,\,\, \lambda _2^k\ge 0, \,\,\, \\&\lambda _1^h\ge 0,\,\,\, \mu _1^h\ge 0, \,\,\, \lambda _1^G\ge 0,\,\,\,\lambda _1^H \ge 0. \end{aligned}$$
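
To make this formulation concrete, one admissible dual point can be exhibited explicitly. The sketch below is ours; it does not solve (WMBPP) but only evaluates the stationarity sum for one particular choice, namely \(u=(0, 0)\), \(s=0\), \(\lambda _1^g=1\), the element \((-1, 0)\in \text {conv}\{(-1, 0), (0, 0)\}\), and all remaining multipliers equal to zero:

```python
# Illustrative check (ours): one explicit feasible point of the Wolfe dual
# (WMBPP) of Example 2: u = (0, 0), s = 0, lambda_1^g = 1, all other
# multipliers zero, with the element (-1, 0) chosen for g_1.

u1, u2 = 0.0, 0.0
s = 0.0                               # weight of F_1; weight of F_2 is 1 - s
lam_g = 1.0                           # multiplier of g_1
xi_F1 = (1.0, 1.0)                    # convexificator element of F_1 at u
xi_F2 = (1.0, -2.0 * u2)              # convexificator element of F_2 at u
xi_g = (-1.0, 0.0)                    # chosen element of conv{(-1,0),(0,0)}

# Stationarity sum (the terms of the remaining multipliers vanish).
total = (s * xi_F1[0] + (1 - s) * xi_F2[0] + lam_g * xi_g[0],
         s * xi_F1[1] + (1 - s) * xi_F2[1] + lam_g * xi_g[1])
assert total == (0.0, 0.0)

# Dual objective value at this point (the psi-terms vanish since psi = 0).
obj = s * (u1 + u2) + (1 - s) * (u1 - u2 ** 2) - lam_g * (u1 + abs(u1)) / 2
print("stationarity sum:", total, "dual objective:", obj)   # (0.0, 0.0), 0.0
```

The dual objective value at this point equals 0, which coincides with both components of \(F(0, 0)\) at the primal feasible point \((0, 0)\in K\); this is consistent with the strong duality statement in Theorem 3 below.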

For the sake of brevity, we set \(u=(u_1, u_2)\) and \(\overline{x}=(\overline{x}_1, \overline{x}_2)\), and then, a weak duality theorem for the multiobjective bilevel programming problem with equilibrium constraints (MBPP) and its Wolfe type dual problem (WMBPP) can be stated as follows.

Theorem 2

(Weak Duality) Let \(\overline{x}\) and \((u, s, \lambda , \mu )\) be feasible vectors for the primal problem (MBPP) and the dual problem (WMBPP), respectively. Suppose that:

  1. (i)

    \(L_{\mu }=\emptyset ;\)

  2. (ii)

    The functions \(F_i\) \((i\in I_q),\) \(g_i\) (\(i\in I_g(\overline{x})\)), \(k_i\) \((i\in I_k(\overline{x})),\) \(\pm h_j\) (\(j\in I_n\)), \(-G_i\) (\(i\in \nu _1\)), and \(-H_i\) (\(i\in \nu _2\)) admit bounded upper semi-regular convexificators and are \(\partial ^*-\) convex functions at u.

  3. (iii)

    The function \(\psi _1(\,.\,,\,.\,\,,\, z):=f(\,.\,,\,.\,)-f(\,.\,,\,z)\) \((z\in \varTheta )\) admits a bounded upper semi-regular convexificator and is a \(\partial ^*-\) convex function at u.

Then, for any \(x=(x_1, x_2)\) feasible to the primal problem (MBPP), there exists \(k\in I_q\) such that:

$$\begin{aligned}\begin{aligned} F_k(x) \not <F_k(u)&+\sum _{i\in I_g(\overline{x})}\lambda _i^g g_{i}(u)+\sum _{i\in I_k(\overline{x})}\lambda _i^k k_{i}(u)+\sum _{j\in I_n}(\lambda _j^h-\mu _j^h)h_{j}(u) \\ {}&+\sum _{z\in J(u)}\sum _{i\in I(u)}\lambda _i^{\varPsi (z)}\psi _i(u, z) -\sum _{i\in I_p}\big (\lambda _i^GG_i(u)+\lambda _i^HH_i(u)\big ). \end{aligned}\end{aligned}$$

Proof

Let \(x=(x_1, x_2)\) be a feasible vector to the problem (MBPP). We have

$$\begin{aligned}&g_i(x)\le 0\,\,\,\forall \,i\in I_g(\overline{x}), \end{aligned}$$
(4.1)
$$\begin{aligned}&h_j(x)=0\,\,\,\forall \, j\in I_n, \end{aligned}$$
(4.2)
$$\begin{aligned}&(-G_i)(x)\le 0\,\,\,\forall \, i\in I_p,\,\,\, \end{aligned}$$
(4.3)
$$\begin{aligned}&(-H_i)(x)\le 0\,\,\,\forall \, i\in I_p, \end{aligned}$$
(4.4)
$$\begin{aligned}&k_i(x)\le 0\,\,\,\forall \,i\in I_k(\overline{x}), \end{aligned}$$
(4.5)
$$\begin{aligned}&\psi _1(x, z)\le 0\,\,\,\forall \,z\in J(u). \end{aligned}$$
(4.6)

Since \((u, s, \lambda , \mu )\) is a feasible vector of the dual problem (WMBPP), there exist \(\xi _i^F\in \text {conv}\partial ^*F_i(u)\) \((i\in I_q),\) \(\xi _i^g\in \text {conv}\partial ^*g_i(u)\) \((i\in I_g(\overline{x})),\) \(\xi _i^k\in \text {conv}\partial ^*k_i(u)\) \((i\in I_k(\overline{x})),\) \(\xi ^h_j\in \text {conv}\partial ^*h_j(u)\) \((j\in I_n),\) \(\eta ^h_j\in \text {conv} \partial ^*(-h_j)(u)\) \((j\in I_n),\) \(\xi ^{\varPsi (z)}_i\in \text {conv}\,\partial ^*\psi _i(u, z)\) (\(i\in I(u),\,\,\, z\in J(u)\)), \(\xi _i^G\in \text {conv}\,\partial ^*(-G_i)(u)\) (\(i\in I_p\)) and \(\xi _i^H\in \text {conv}\,\partial ^*(-H_i)(u)\) (\(i\in I_p\)), such that:

$$\begin{aligned} \begin{aligned} \sum _{i\in I_q}s_i \xi _i^F&+\sum _{i\in I_g(\overline{x})}\lambda _i^g\xi _i^g+\sum _{i\in I_k(\overline{x})}\lambda _i^k\xi _i^k+ \sum _{j\in I_n}\big (\lambda _j^h\xi _j^h +\mu _j^h \eta _j^h\big ) \\ {}&+\sum _{i\in I_p}\big (\lambda _i^G\xi _i^G+ \lambda _i^H\xi _i^H\big ) +\sum _{z\in J(u)}\sum _{i\in I(u)}\lambda _i^{\varPsi (z)}\xi _i^{\varPsi (z)}=0. \end{aligned}\end{aligned}$$
(4.7)

Using assumption (ii) together with the \(\partial ^*-\) convexity of \(F_i\) \((i\in I_q)\) at u, we obtain:

$$\begin{aligned} F_i(x)-F_i(u)\ge \left\langle \xi _i^F, x-u\right\rangle ,\,\,\,\forall \, i\in I_q. \end{aligned}$$
(4.8)

Indeed, since \(\xi _i^F\in \text {conv}\,\partial ^*F_i(u),\) there exist \(N\in \mathbb {N},\) \((t_p)_{p=1,\ldots , N}\subset [0, 1]\) with \(\sum _{p=1}^Nt_p=1\) and \((\xi _{i, p}^F)_{p=1,\ldots , N}\subset \partial ^*F_i(u)\) such that \(\xi _i^F=\sum _{p=1}^N t_p\xi _{i, p}^F.\) Since \(F_i\) \((i\in I_q)\) is \(\partial ^*-\) convex at u, it holds that \( F_i(x)-F_i(u)\ge \left\langle \xi _{i, p}^F, x-u\right\rangle ,\,\,\,\forall \, p\in I_N. \) Consequently:

$$\begin{aligned} t_pF_i(x)-t_pF_i(u)\ge \left\langle t_p\xi _{i, p}^F, x-u\right\rangle ,\,\,\,\forall \, p\in I_N. \end{aligned}$$

Summing over \(p\in I_N\), the following inequality holds:

$$\begin{aligned} \sum _{p=1}^N t_pF_i(x)-\sum _{p=1}^Nt_pF_i(u)\ge \left\langle \sum _{p=1}^Nt_p\xi _{i, p}^F, x-u\right\rangle .\end{aligned}$$

In consequence, inequality (4.8) holds. In view of the \(\partial ^*-\) convexity of \(g_i\) \((\forall \,i\in I_g(\overline{x}))\) at u,  it follows that:

$$\begin{aligned} g_i(x)-g_i(u)\ge \left\langle \xi _i^g, x-u\right\rangle ,\,\,\,\forall \, i\in I_g(\overline{x}). \end{aligned}$$
(4.9)

Similarly, we also have:

$$\begin{aligned}&k_i(x)-k_i(u)\ge \left\langle \xi _i^k, x-u\right\rangle ,\,\,\,\forall \, i\in I_k(\overline{x}), \end{aligned}$$
(4.10)
$$\begin{aligned}&h_j(x)-h_j(u)\ge \left\langle \xi _j^h, x-u\right\rangle ,\,\,\,\forall \, j\in I_n, \end{aligned}$$
(4.11)
$$\begin{aligned}&(-h_j)(x)+h_j(u)\ge \left\langle \eta _j^h, x-u\right\rangle ,\,\,\,\forall \, j\in I_n, \end{aligned}$$
(4.12)
$$\begin{aligned}&(-G_i)(x)+G_i(u)\ge \left\langle \xi _i^G, x-u\right\rangle ,\,\,\,\forall \, i\in \nu _1, \end{aligned}$$
(4.13)
$$\begin{aligned}&(-H_i)(x)+H_i(u)\ge \left\langle \xi _i^H, x-u\right\rangle ,\,\,\,\forall \, i\in \nu _2, \end{aligned}$$
(4.14)
$$\begin{aligned}&\psi _1(x, z)-\psi _1(u, z)\ge \left\langle \xi _1^{\varPsi (z)}, x-u\right\rangle \,\,\,\forall \,z\in J(u). \end{aligned}$$
(4.15)

Since \(\mathbb {R}^l_-\subset \mathbb {R}^l\) is a closed and convex cone with nonempty interior and \(\mathbb {R}^l_-\ne \mathbb {R}^l,\) the function \(\psi _2(\,.\,,\,z\,)\) is convex, positively homogeneous, 1-Lipschitzian, and decreasing on \(\mathbb {R}^l\) (see Hiriart-Urruty (1979), for instance). Making use of Luu's result in Luu (2016), we deduce that:

$$\begin{aligned} \psi _2(x, z)-\psi _2(u, z)\ge \left\langle \xi _2^{\varPsi (z)}, x-u\right\rangle \,\,\,\forall \,z\in J(u). \end{aligned}$$
(4.16)

Because \(L_\mu =\emptyset ,\) multiplying (4.8)–(4.16) by \(s_i\) \((i\in I_q),\) \(\lambda _i^g\ge 0\) (\(i\in I_g(\overline{x})\)), \(\lambda _i^k\ge 0\) (\(i\in I_k(\overline{x})\)), \(\lambda _j^h>0\) \((j\in I_n),\) \(\mu _j^h>0\) \((j\in I_n),\) \(\lambda _i^G>0\) (\(i\in \nu _1\)), \(\lambda _i^H>0\) (\(i\in \nu _2\)), \(\lambda _i^{\varPsi (z)}\) \((i\in I(u),\,\, z\in J(u)),\) respectively, and adding the resulting inequalities, we obtain:

$$\begin{aligned} \sum _{i\in I_q}\big (s_iF_i(x)&-s_iF_i(u)\big )+\sum _{i\in I_g(\overline{x})}\lambda _i^g g_{i}(x)-\sum _{i\in I_g(\overline{x})}\lambda _i^g g_{i}(u)+\sum _{i\in I_k(\overline{x})}\lambda _i^k k_{i}(x) \\&-\sum _{i\in I_k(\overline{x})}\lambda _i^k k_{i}(u) +\sum _{j\in I_n}\lambda _j^h h_{j}(x)-\sum _{j\in I_n}\lambda _j^h h_{j}(u)-\sum _{j\in I_n}\mu _j^h h_{j}(x)+\sum _{j\in I_n}\mu _j^h h_{j}(u)\\ {}&-\sum _{i\in I_p}\lambda _i^G G_{i}(x)+\sum _{i\in I_p}\lambda _i^G G_{i}(u) -\sum _{i\in I_p}\lambda _i^H H_{i}(x) +\sum _{i\in I_p}\lambda _i^H H_{i}(u) \\&+\sum _{z\in J(u)}\sum _{i\in I(u)}\lambda _i^{\varPsi (z)} \psi _{i}(x, z) -\sum _{z\in J(u)}\sum _{i\in I(u)}\lambda _i^{\varPsi (z)} \psi _{i}(u, z) \\&\ge \left\langle \sum _{i\in I_q}s_i \xi _i^F+\sum _{i\in I_g(\overline{x})}\lambda _i^g\xi _i^g+\sum _{i\in I_k(\overline{x})}\lambda _i^k\xi _i^k +\sum _{j\in I_n}\big (\lambda _j^h\xi _j^h +\mu _j^h \eta _j^h\big ),\,\,\,x-u \right\rangle \\&+\left\langle \sum _{i\in I_p}\big (\lambda _i^G\xi _i^G+ \lambda _i^H\xi _i^H\big ) +\sum _{z\in J(u)}\sum _{i\in I(u)}\lambda _i^{\varPsi (z)}\xi _i^{\varPsi (z)},\,\,\,x-u \right\rangle =0. \end{aligned}$$

Two cases can occur as follows:

Case 1: \(k(x_1, z)\not \in \mathbb {R}^l_-.\) This entails that \(\psi _2(x, z)\le 0,\) which, combined with (4.1)–(4.6), gives:

$$\begin{aligned} \begin{aligned} M(x):&= \sum _{i\in I_g(\overline{x})}\lambda _i^g g_{i}(x)+\sum _{i\in I_k(\overline{x})}\lambda _i^k k_{i}(x)+\sum _{j\in I_n}\lambda _j^h h_{j}(x)-\sum _{j\in I_n}\mu _j^h h_{j}(x)\\ {}&-\sum _{i\in I_p}\lambda _i^G G_{i}(x)-\sum _{i\in I_p}\lambda _i^H H_{i}(x) +\sum _{z\in J(u)}\sum _{i\in I(u)}\lambda _i^{\varPsi (z)} \psi _{i}(x, z) \le 0. \end{aligned}\end{aligned}$$

Case 2: \(k(x_1, z)\in \mathbb {R}^l_-,\) which leads to \(\psi (x, z)=\psi _2(x, z)>0;\) this contradicts the inequality \(\psi (x, z)=\psi _1(x, z)\le 0.\)

In consequence:

$$\begin{aligned} \sum _{i\in I_q}s_iF_i(x)\ge \sum _{i\in I_q}s_iF_i(u)+M(u). \end{aligned}$$
(4.17)

We now point out that there exists \(k\in I_q\) satisfying \(F_k(x)\not < F_k(u)+M(u).\) Indeed, if this were not so, then for every \(k\in I_q,\) \(F_k(x) < F_k(u)+M(u),\) which would ensure that:

$$\begin{aligned} \sum _{k\in I_q}s_kF_k(x) < \sum _{k\in I_q}s_kF_k(u)+\sum _{k\in I_q}s_kM(u),\end{aligned}$$

or equivalently:

$$\begin{aligned} \sum _{k\in I_q}s_kF_k(x) < \sum _{k\in I_q}s_kF_k(u)+M(u),\end{aligned}$$

which conflicts with inequality (4.17). This completes the proof. \(\square \)

Example 3

Consider problem (MBPP) in which the problem data are given as in Example 2. We pick \(\overline{x}=0\) and \(\overline{y}=0.\) Then, \((\overline{x}, \overline{y})\in K,\) where the feasible set of problem (MBPP) is \(K=\{(x, y)\in \mathbb {R}^2: x\le 0\,\,\text{ or }\,\,x\ge 1,\,\,y=x^2\}\) (see Example 2, for instance). Let \((u, s, \lambda , \mu )\) be a feasible vector of the dual problem (WMBPP). Then, it is evident that \(I_q=I_l=\{1, 2\},\) \(I_m=I_n=I_p=\{1\},\) \(I_g=\{1\},\) \(I_k=\{1, 2\},\) \(\alpha =\gamma =\emptyset ,\) \(\beta =\{1\},\) \(\nu _1=\nu _2=\{1\},\) \(\alpha ^+_{\mu }=\gamma ^+_{\mu }=\emptyset ,\) \(\beta ^G_{\mu }=\{i=1: \mu _1^H=0,\,\mu _1^G>0\},\) \(\beta ^H_{\mu }=\{i=1: \mu _1^H>0,\,\mu _1^G=0\},\) \(L_{\mu }=\emptyset ,\) \(K_x=\{y\in \mathbb {R}: k_i(x, y)\le 0,\,\,i=1,2\}=\mathbb {R}\) and \(S(x)=\mathbb {R}\) for all \(x\in \mathbb {R}.\) We may choose \(\varTheta =[-1, 1]+[-1, 1]=[-2, 2]\), and thus, \(\varPsi : \mathbb {R}\times \mathbb {R}\rightarrow \mathbb {R}\) is defined by:

$$\begin{aligned} \varPsi (x,y)=\max _{z\in \varTheta }\psi (x, y, z)\,\,\,(\forall \,(x, y)\in \mathbb {R}\times \mathbb {R}), \end{aligned}$$

where \(\psi (x, y, z)=f(x, y)-f(x,z)=0\) for all \((x, y)\in \mathbb {R}\times S(x)\) and \(z\in \varTheta .\) Consequently, \(\varPsi (x, y)=0\) for all \((x, y)\in \mathbb {R}\times \mathbb {R}.\)

It is easy to verify that the mappings satisfy the assumptions of Theorem 2. According to Theorem 2, for every \((x, y)\in K\) feasible to the primal problem (MBPP), there exists \(k\in I_q=\{1, 2\}\) such that:

$$\begin{aligned}\begin{aligned} F_k(x, y) \not <F_k(u)&+\sum _{i\in I_g(\overline{x})}\lambda _i^g g_{i}(u)+\sum _{i\in I_k(\overline{x})}\lambda _i^k k_{i}(u)+\sum _{j\in I_n}(\lambda _j^h-\mu _j^h)h_{j}(u) \\ {}&+\sum _{z\in J(u)}\sum _{i\in I(u)}\lambda _i^{\varPsi (z)}\psi _i(u, z) -\sum _{i\in I_p}\big (\lambda _i^GG_i(u)+\lambda _i^HH_i(u)\big ), \end{aligned}\end{aligned}$$

where:

$$\begin{aligned}&u=(u_1, u_2)\in \mathbb {R}^2,\,\,\,\, \lambda _i^{\varPsi (z)}\ge 0\,\,(\forall \,z\in J(u),\,i\in I(u)), \,\,\,\, \lambda _1^g\ge 0,\,\, \\ {}&\lambda _1^k\ge 0,\,\,\, \lambda _2^k\ge 0, \,\,\, \lambda _1^h\ge 0,\,\,\, \mu _1^h\ge 0, \,\,\, \lambda _1^G\ge 0,\,\,\,\lambda _1^H \ge 0. \end{aligned}$$

The next result provides a strong duality theorem for the primal problem (MBPP) and its Wolfe type dual problem (WMBPP).

Theorem 3

(Strong Duality) Let \(\overline{x}\in K\) be a local weakly efficient solution for the problem (MBPP). Assume that:

  1. (i)

    the functions \(F_i\) \((i\in I_q),\) \(g_i\) (\(i\in I_g(\overline{x})\)), \(k_i\) \((i\in I_k(\overline{x})),\) \(\pm h_j\) (\(j\in I_n\)), \(\psi _1(\,.\,,\,.\,, z)\) (\(z\in \varTheta \)), \(-G_i\) (\(i\in \nu _1\)) and \(-H_i\) (\(i\in \nu _2\)) admit bounded upper semi-regular convexificators and are \(\partial ^*-\) convex functions at \(\overline{x};\)

  2. (ii)

    the functions \(F_i\) \((i\in I_q)\), \(k_i\) \((i\in I_k(\overline{x}))\) and \(\varPsi \) are locally Lipschitz near \(\overline{x};\)

  3. (iii)

    the constraint qualifications (CQ) and (GS-ACQ) at \(\overline{x}\) hold;

  4. (iv)

    \(L_{\mu }=\emptyset .\)

Then, there exist \(\overline{s}=(\overline{s}_i)_{i=1}^q\subset \mathbb {R}\) and \((\overline{\lambda }, \overline{\mu })=\Big (\overline{\lambda }^g, \overline{\lambda }^k, \overline{\lambda }^h, \overline{\lambda }^{\varPsi }, \overline{\lambda }^G, \overline{\lambda }^H, \overline{\mu }^h, \overline{\mu }^G, \overline{\mu }^H\Big )\in \mathbb {R}^{m+l+2n+2|J(\overline{x})|+4p}\), such that \((\overline{x}, \overline{s}, \overline{\lambda }, \overline{\mu })\) is a local weak efficient solution of the dual (WMBPP) and the respective objective values are equal.

Proof

Since \(\overline{x}\) is a local weakly efficient solution for the problem (MBPP), by Theorem 1 one can find \({s}=({s}_i)_{i=1}^q\subset \mathbb {R}_+\) with \(\sum _{i\in I_q}s_i=1,\) and \(({\lambda },\,\mu )=\Big ({\lambda }^g, {\lambda }^k, {\lambda }^h, {\lambda }^{\varPsi }, {\lambda }^G, {\lambda }^H, \mu ^h, {\mu }^G, {\mu }^H\Big )\in \mathbb {R}^{m+l+2n+2|J(\overline{x})|+4p}\) with \(\lambda ^g_{I_g}\ge 0,\,\,\lambda ^k_{I_k}\ge 0,\) \(\lambda ^{\varPsi (z)}_i\ge 0\) \((i\in I(\overline{x}),\,\,z\in J(\overline{x})),\) \(\lambda _j^h\ge 0,\,\,\mu _j^h\ge 0\,\,(j\in I_n),\) \(\lambda _i^G\ge 0,\,\lambda _i^H\ge 0,\,\mu _i^G\ge 0,\,\mu _i^H\ge 0\) \((i\in I_p),\) \(\lambda _{\gamma }^G=\lambda _{\alpha }^H= \mu _{\gamma }^G=\mu _{\alpha }^H=0,\) \(\forall \,i\in \beta ,\,\,\,\mu _i^G=0,\,\,\mu _i^H=0\), such that:

$$\begin{aligned} 0\in \sum _{i\in I_q}s_i\text {conv}\,\partial ^*F_i(\overline{x})&+ \sum _{i\in I_g(\overline{x})}\lambda _i^g \text {conv}\,\partial ^*g_{i}(\overline{x}) + \sum _{i\in I_k(\overline{x})}\lambda _i^k \text {conv}\,\partial ^*k_{i}(\overline{x}) \\ {}&+\sum _{z\in J(\overline{x})}\sum _{i\in I(\overline{x})}\lambda _i^{\varPsi (z)}\text {conv}\,\partial ^*\psi _i(\overline{x}, z) \\ {}&+\sum _{j\in I_n}\Big [\lambda _j^h\text {conv}\,\partial ^* h_{j}(\overline{x})+\mu _j^h\text {conv}\,\partial ^*(-h_j)(\overline{x}) \Big ] \\ {}&+\sum _{i\in I_p}\Big [\lambda _i^G\text {conv}\,\partial ^* (-G_i)(\overline{x})+\lambda _i^H\text {conv}\,\partial ^* (-H_i)(\overline{x}) \Big ]. \end{aligned}$$

We set \(\overline{s}=s,\) \(\overline{\lambda }={\lambda }\) and \(\overline{\mu }={\mu }.\) Taking account of the construction of the Wolfe type dual problem (WMBPP), it follows that \((\overline{x}, \overline{s}, \overline{\lambda }, \overline{\mu })\) is a feasible vector of problem (WMBPP). According to Theorem 2, with \(\theta _j^h=\lambda _j^h-\mu _j^h\) (\(j\in I_n\)), there exists \(k_0\in I_q\) such that:

$$\begin{aligned}\begin{aligned} F_{k_0}(\overline{x}) \not < F_{k_0}(u)&+\sum _{i\in I_g(\overline{x})}\lambda _i^g g_{i}(u)+\sum _{i\in I_k(\overline{x})}\lambda _i^k k_{i}(u)+\sum _{j\in I_n}\theta _j^hh_{j}(u) \\ {}&+\sum _{z\in J(u)}\sum _{i\in I(u)}\lambda _i^{\varPsi (z)}\psi _i(u, z) -\sum _{i\in I_p}\big (\lambda _i^GG_i(u)+\lambda _i^HH_i(u)\big )\end{aligned}\end{aligned}$$

for all feasible solutions \((u, s, \lambda , \mu )\) of the Wolfe type dual problem (WMBPP). It can be seen that \(g_i(\overline{x})=0\) for \(i\in I_g(\overline{x}),\) \(k_i(\overline{x})=0\) for \(i\in I_k(\overline{x}),\) \(h_j(\overline{x})=0\) for \(j\in I_n,\) \(G_i(\overline{x})=0\) for \(i\in \nu _1,\) and \(H_i(\overline{x})=0\) for \(i\in \nu _2.\) Moreover, by directly applying Lemma 3.3 in Gadhi and Dempe (2013), we conclude that \(\psi _i(\overline{x}, z)=0\) for all \(i\in I(\overline{x})\) and \(z\in S(\overline{x}).\) Therefore, the following equality is fulfilled:

$$\begin{aligned}\begin{aligned} F_{k_0}(\overline{x}) = F_{k_0}(\overline{x})&+\sum _{i\in I_g(\overline{x})}{\overline{\lambda }}_i^g g_{i}(\overline{x})+\sum _{i\in I_k(\overline{x})}{\overline{\lambda }}_i^k k_{i}(\overline{x})+\sum _{j\in I_n}{\overline{\theta }}_j^hh_{j}(\overline{x}) \\ {}&+\sum _{z\in J(\overline{x})}\sum _{i\in I(\overline{x})}{\overline{\lambda }}_i^{\varPsi (z)}\psi _i(\overline{x}, z) -\sum _{i\in I_p}\big ({\overline{\lambda }}_i^GG_i(\overline{x})+{\overline{\lambda }}_i^HH_i(\overline{x})\big )\end{aligned}\end{aligned}$$

Consequently:

$$\begin{aligned}F_{k_0}(\overline{x})&+\sum _{i\in I_g(\overline{x})}{\overline{\lambda }}_i^g g_{i}(\overline{x})+\sum _{i\in I_k(\overline{x})}{\overline{\lambda }}_i^k k_{i}(\overline{x})+\sum _{j\in I_n}{\overline{\theta }}_j^hh_{j}(\overline{x}) \\ {}&+\sum _{z\in J(\overline{x})}\sum _{i\in I(\overline{x})}{\overline{\lambda }}_i^{\varPsi (z)}\psi _i(\overline{x}, z) -\sum _{i\in I_p}\big ({\overline{\lambda }}_i^GG_i(\overline{x})+{\overline{\lambda }}_i^HH_i(\overline{x})\big ) \\ {}&\not < F_{k_0}(u)+\sum _{i\in I_g(\overline{x})}\lambda _i^g g_{i}(u)+\sum _{i\in I_k(\overline{x})}\lambda _i^k k_{i}(u)+\sum _{j\in I_n}\theta _j^hh_{j}(u) \\ {}&+\sum _{z\in J(u)}\sum _{i\in I(u)}\lambda _i^{\varPsi (z)}\psi _i(u, z) -\sum _{i\in I_p}\big (\lambda _i^GG_i(u)+\lambda _i^HH_i(u)\big ), \end{aligned}$$

which means that \((\overline{x}, \overline{s}, \overline{\lambda }, \overline{\mu })\) is a local weak efficient solution of problem (WMBPP) and the respective objective values are equal, as required. \(\square \)

For the next result, considering \(h\equiv 0,\) \(G\equiv 0\) and \(H\equiv 0,\) the following Corollary 2 can be viewed as a direct consequence of Theorem 3, whose easy proof is omitted.

Corollary 2

Let \(\overline{x}\in K\) be a local weakly efficient solution to the multiobjective bilevel programming problem with constraints (MBPP\(^*\)). Suppose that:

  1. (i)

    the functions \(F_i\) \((i\in I_q),\) \(g_i\) (\(i\in I_g(\overline{x})\)), \(k_i\) \((i\in I_k(\overline{x}))\) and \(\psi _1(\,.\,,\,.\,, z)\) (\(z\in \varTheta \)) admit bounded upper semi-regular convexificators and are \(\partial ^*-\) convex functions at \(\overline{x};\)

  2. (ii)

    the functions \(F_i\) \((i\in I_q)\), \(k_i\) \((i\in I_k(\overline{x}))\) and \(\varPsi \) are locally Lipschitz near \(\overline{x};\)

  3. (iii)

    the conditions (i), (ii), and (vii) in the constraint qualification of the (CQ) type are valid and the constraint qualification of the (GS-ACQ) type at \(\overline{x}\) holds.

Then, there exist \(\overline{s}=(\overline{s}_i)_{i=1}^q\subset \mathbb {R}\) and \(\overline{\lambda }=\Big (\overline{\lambda }^g, \overline{\lambda }^k, \overline{\lambda }^{\varPsi }\Big )\in \mathbb {R}^{m+l+2|J(\overline{x})|}\), such that \((\overline{x}, \overline{s}, \overline{\lambda })\) is a local weak efficient solution of the dual (WMBPP\(^*\)), and the respective objective values are equal.

Remark 3

In the case where all the constraint functions \(g_i\) (\(i\in I_g(\overline{x})\)), \(k_i\) \((i\in I_k(\overline{x})),\) \(h_j\) (\(j\in I_n\)), \(\psi _1(\,.\,,\,.\,, z)\) (\(z\in \varTheta \)), \(-G_i\) (\(i\in \nu _1\)), and \(-H_i\) (\(i\in \nu _2\)) are affine, our preceding results remain true. Furthermore, it should be mentioned here that all of our results in Sect. 4 are generalizations of existing ones in the literature (see, e.g., Babahadda and Gadhi 2006; Bot and Grad 2010; Chuong 2018; Dempe 1992, 2002; Dempe and Zemkoho 2012; Gadhi and Dempe 2013; Luu and Mai 2018; Pandey and Mishra 2016, 2018; Suneja and Kohli 2011; Ye and Zhu 1995, and the references therein).

4.2 The Mond–Weir duality for (MBPP)

Based on the Wolfe type dual problem constructed in Sect. 4.1, a Mond–Weir type dual problem for the primal problem (MBPP) is defined analogously as follows.

The Mond–Weir type dual problem for the multiobjective bilevel programming problem with equilibrium constraints (MBPP) is of the form:

$$\begin{aligned} (MWMBPP):\,\,\,\,\,\,&\underset{u, s, \lambda }{\max }\,\,\,\Big [\sum _{i\in I_q}s_i F_i(u) \Big ]\,\,\,\,\, \text{ subject } \text{ to }\\ \,\,&0\in \sum _{i\in I_q}s_i\text {conv}\,\partial ^*F_i(u)+ \sum _{i\in I_g}\lambda _i^g \text {conv}\,\partial ^*g_{i}(u)+ \sum _{i\in I_k}\lambda _i^k \text {conv}\,\partial ^*k_{i}(u)\\ {}&+\sum _{j\in I_n}\Big [\lambda _j^h\text {conv}\,\partial ^* h_{j}(u)+\mu _j^h\text {conv}\,\partial ^*(-h_j)(u) \Big ] +\sum _{z\in J(u)}\sum _{i\in I(u)}\lambda _i^{\varPsi (z)} \\ {}&\quad \text {conv}\,\partial ^*\psi _i(u, z) +\sum _{i\in I_p}\Big [\lambda _i^G\text {conv}\,\partial ^* (-G_i)(u)+\lambda _i^H\text {conv}\,\partial ^* (-H_i)(u) \Big ], \\ {}&g_{I_g(\overline{x})}(u)\ge 0,\,\,k_{I_k(\overline{x})}(u)\ge 0,\,\, h_{I_n}(u)=0,\,\,\, G_{\nu _1}(u)\le 0,\,\,\, \\ {}&H_{\nu _2}(u)\le 0,\,\,\psi _i(u, z)\ge 0,\,\,(z\in J(u),\,i\in I(u)),\,\,u\in \mathbb {R}^{q_1}\times \mathbb {R}^{q_2}, \\ {}&s_i\ge 0,\,\,\,i\in I_q, \,\,\,\, \sum _{i\in I_q}s_i=1, \,\,\,\lambda ^g_{I_g(\overline{x})}\ge 0,\,\,\,\, \lambda ^k_{I_k(\overline{x})}\ge 0,\,\,\,\, \\ {}&\lambda ^{\varPsi (z)}_i\ge 0,\,\,\,\,i\in I(u),\,\,z\in J(u),\,\,\,\lambda _j^h\ge 0,\,\,\mu _j^h\ge 0,\,\,j\in I_n,\,\, \\ {}&\lambda _i^G\ge 0,\,\,\lambda _i^H\ge 0,\,\,\mu _i^G\ge 0,\,\,\mu _i^H\ge 0,\,\,i\in I_p, \\ {}&\lambda _{\gamma }^G=\lambda _{\alpha }^H= \mu _{\gamma }^G=\mu _{\alpha }^H=0,\,\,\, \forall \,i\in \beta ,\,\mu _i^G=0,\,\,\mu _i^H=0, \\\text {where}\quad&\lambda _{\gamma }^G=(\lambda _i^G)_{i\in \gamma },\,\,\,\, \lambda _{\alpha }^H=(\lambda _i^H)_{i\in \alpha },\,\,\, \mu _{\gamma }^G=(\mu _i^G)_{i\in \gamma },\,\,\,\mu _{\alpha }^H=(\mu _i^H)_{i\in \alpha },\,\, \\ {}&\lambda ^{\varPsi }=\big (\lambda ^{\varPsi (z)}_i\big )_{z\in J(u),\,\, i\in I(u)},\,\,\,\, \mu =(\mu ^h, \mu ^G, \mu ^H)\in \mathbb {R}^{n+2p} \\ \text {and}\quad&(\lambda , \mu )=(s, \lambda ^g, \lambda ^k, \lambda ^h, \lambda ^{\varPsi }, \lambda ^G, \lambda ^H, \mu ^h, \mu ^G, \mu ^H)\in \mathbb {R}^{q+m+l+2n+2|J(u)|+4p}. \end{aligned}$$

The following example demonstrates the Mond–Weir type dual problem for the primal problem (MBPP).

Example 4

Consider problem (MBPP) in which the data are given as in Example 2. Then, the Mond–Weir type dual problem (MWMBPP) for the original problem (MBPP) is of the following form:

$$\begin{aligned} (MWMBPP):\,\,\,&\underset{u, s, \lambda }{\max }\,\,\,\Big [s(u_1+u_2)+(1-s)(u_1-u_2^2)\Big ] \\ \text{ subject } \text{ to }\quad&0\in s(1, 1)+(1-s)(1, -2u_2)+ \lambda _1^g \text {conv}\{(-1, 0),\,\,(0, 0)\} \\ {}&+\lambda _1^k\text {conv}\{(-2u_1, -1),\,\,(-2u_1, 1)\}+ \lambda _2^k \text {conv}\{(-\frac{1}{2}, -2u_2),\,\, (\frac{1}{2}, -2u_2)\}\\ {}&+\lambda _1^h (2u_1, -1)+\mu _1^h(-2u_1, 1)+\lambda _1^G(1, -2u_2)+\lambda _1^H(-2u_1, 1),\\&u_1+|u_1|\le 0,\,\,u_1^2+|u_2|\le 0,\,\,\,\frac{1}{2} |u_1|+u_2^2\le 0,\,\,\, u_1^2-u_2=0,\,\,\, \\ {}&u_1-u_2^2\ge 0,\,\,\,u_2-u_1^2\ge 0,\,\,\,u=(u_1, u_2)\in \mathbb {R}^2,\,\,\,\, 0\le s\le 1, \\ {}&\lambda _1^g\ge 0,\,\, \lambda _1^k\ge 0,\,\,\, \lambda _2^k\ge 0, \,\,\, \lambda _1^h\ge 0,\,\,\, \mu _1^h\ge 0, \,\,\, \lambda _1^G\ge 0,\,\,\,\lambda _1^H \ge 0. \end{aligned}$$
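
Observe that the explicit constraint \(u_1^2+|u_2|\le 0\) already forces \(u=(0, 0)\). The following sketch is ours and purely illustrative; it only confirms on a finite grid that \((0, 0)\) is the sole point satisfying the explicit scalar constraints of this dual:

```python
# Illustrative check (ours): on a grid, the explicit scalar constraints of
# the Mond-Weir dual in Example 4 are satisfied only at u = (0, 0).

def explicit_constraints(u1, u2, tol=1e-12):
    return (u1 + abs(u1) <= tol
            and u1 ** 2 + abs(u2) <= tol
            and 0.5 * abs(u1) + u2 ** 2 <= tol
            and abs(u1 ** 2 - u2) <= tol
            and u1 - u2 ** 2 >= -tol
            and u2 - u1 ** 2 >= -tol)

admissible = [(i / 50, j / 50)
              for i in range(-100, 101)
              for j in range(-100, 101)
              if explicit_constraints(i / 50, j / 50)]
print(admissible)   # expected: [(0.0, 0.0)]
```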

Hereafter, we state a weak duality theorem for the Mond–Weir type dual problem (MWMBPP) and the original problem (MBPP).

Theorem 4

(Weak Duality) Let \(\overline{x}\) and \((u, s, \lambda , \mu )\) be feasible vectors for the original problem (MBPP) and the Mond–Weir type dual problem (MWMBPP), respectively. Suppose that:

  1. (i)

    \(L_{\mu }=\emptyset ;\)

  2. (ii)

    The functions \(F_i\) \((i\in I_q),\) \(g_i\) (\(i\in I_g(\overline{x})\)), \(k_i\) \((i\in I_k(\overline{x})),\) \(\pm h_j\) (\(j\in I_n\)), \(\psi _1(\,.\,,\,.\,\,,\, z):=f(\,.\,,\,.\,)-f(\,.\,,\,z)\) \((z\in \varTheta ),\) \(-G_i\) (\(i\in \nu _1\)) and \(-H_i\) (\(i\in \nu _2\)) admit bounded upper semi-regular convexificators and are \(\partial ^*-\) convex functions at u.

Then, for any \(x=(x_1, x_2)\) feasible to the problem (MBPP), there exists \(k\in I_q\) such that:

$$\begin{aligned} F_k(x) \not <F_k(u). \end{aligned}$$

Proof

Arguing similarly to the proof of Theorem 2 and observing that:

$$\begin{aligned}\begin{aligned}&\sum _{i\in I_g(\overline{x})}\lambda _i^g g_{i}(u)+\sum _{i\in I_k(\overline{x})}\lambda _i^k k_{i}(u)+\sum _{j\in I_n}(\lambda _j^h-\mu _j^h)h_{j}(u) \\ {}&+\sum _{z\in J(u)}\sum _{i\in I(u)} \lambda _i^{\varPsi (z)}\psi _i(u, z) -\sum _{i\in I_p}\big (\lambda _i^GG_i(u)+\lambda _i^HH_i(u)\big )\ge 0, \end{aligned}\end{aligned}$$

we arrive at the desired conclusion. \(\square \)

In what follows, we give a strong duality theorem for the original primal (MBPP) and the Mond–Weir type dual problem (MWMBPP).

Theorem 5

(Strong Duality) Let \(\overline{x}\in K\) be a local weak efficient solution to the original problem (MBPP). Suppose that:

  1. (i)

    the functions \(F_i\) \((i\in I_q),\) \(g_i\) (\(i\in I_g(\overline{x})\)), \(k_i\) \((i\in I_k(\overline{x})),\) \(\pm h_j\) (\(j\in I_n\)), \(\psi _1(\,.\,,\,.\,, z)\) (\(z\in \varTheta \)), \(-G_i\) (\(i\in \nu _1\)) and \(-H_i\) (\(i\in \nu _2\)) admit bounded upper semi-regular convexificators and are \(\partial ^*-\) convex functions at \(\overline{x};\)

  2. (ii)

    the functions \(F_i\) \((i\in I_q)\), \(k_i\) \((i\in I_k(\overline{x}))\) and \(\varPsi \) are locally Lipschitz near \(\overline{x};\)

  3. (iii)

    the constraint qualifications (CQ) and (GS-ACQ) at \(\overline{x}\) hold;

  4. (iv)

    \(L_{\mu }=\emptyset .\)

Then, there exist \(\overline{s}=(\overline{s}_i)_{i=1}^q\subset \mathbb {R}\) and \((\overline{\lambda }, \overline{\mu })=\Big (\overline{\lambda }^g, \overline{\lambda }^k, \overline{\lambda }^h, \overline{\lambda }^{\varPsi }, \overline{\lambda }^G, \overline{\lambda }^H, \overline{\mu }^h, \overline{\mu }^G, \overline{\mu }^H\Big )\in \mathbb {R}^{m+l+2n+2|J(\overline{x})|+4p}\), such that \((\overline{x}, \overline{s}, \overline{\lambda }, \overline{\mu })\) is a local weak efficient solution of the Mond–Weir type dual (MWMBPP) and the respective objective values are equal.

Proof

Arguing similarly to the proof of Theorem 3 and taking Theorem 4 into account, the proof is complete. \(\square \)

In the case where the real-valued objective and constraint functions are affine, the following corollary is inspired by Theorem 5 and Remark 3; its easy proof is omitted.

Corollary 3

Let \(\overline{x}\in K\) be a local weak efficient solution to the original problem (MBPP). Suppose that the functions \(F_i\) \((i\in I_q),\) \(g_i\) (\(i\in I_g(\overline{x})\)), \(k_i\) \((i\in I_k(\overline{x})),\) \(\pm h_j\) (\(j\in I_n\)), \(\psi _1(\,.\,,\,.\,, z)\) (\(z\in \varTheta \)), \(-G_i\) (\(i\in \nu _1\)), and \(-H_i\) (\(i\in \nu _2\)) are affine. If \(L_{\mu }=\emptyset \) and the constraint qualifications (CQ) and (GS-ACQ) at \(\overline{x}\) are fulfilled, then there exist \(\overline{s}=(\overline{s}_i)_{i=1}^q\subset \mathbb {R}\) and \((\overline{\lambda }, \overline{\mu })=\Big (\overline{\lambda }^g, \overline{\lambda }^k, \overline{\lambda }^h, \overline{\lambda }^{\varPsi }, \overline{\lambda }^G, \overline{\lambda }^H, \overline{\mu }^h, \overline{\mu }^G, \overline{\mu }^H\Big )\in \mathbb {R}^{m+l+2n+2|J(\overline{x})|+4p}\) such that \((\overline{x}, \overline{s}, \overline{\lambda }, \overline{\mu })\) is a local weak efficient solution of the Mond-Weir type dual (MWMBPP) and the respective objective values are equal.

An example is provided as follows.

Example 5

Consider the following problem in \(\mathbb {R}\times \mathbb {R}:\)

$$\begin{aligned} (MBPP)\quad&\text{ Minimize}_{x, y}\,\,F(x, y):=\Big (F_1(x, y), F_2(x, y)\Big )=\Big (x+y,\,\,|x|+|y|\Big )\\ {}&\text{ subject } \text{ to }\,\,g(x, y):=-y\le 0,\,\,h(x, y):=x-y=0,\,\,\\ {}&G(x, y):=-x+2y\ge 0,\,\, H(x, y)=-y+2x\ge 0,\\ {}&G(x, y)H(x, y)=0,\,\,y\in S(x), \end{aligned}$$

where S(x) is the solution set of the following lower level problem in \(\mathbb {R}:\)

$$\begin{aligned} (MPPP-x)\quad&\text{ Minimize}_{y}\,\,f(x, y):=x+y\varDelta _{\mathbb {R}^2_-}(0, 0)\\ {}&\text{ subject } \text{ to }\,\,k_1(x, y):=-x\le 0,\,\,k_2(x, y)=xy\le 0. \end{aligned}$$

Then, it is evident that \(\overline{x}=(0, 0)\) is a local weak efficient solution of problem (MBPP). By an easy computation, we obtain \(I(\overline{x})=\{1\},\) \(\varTheta =[-2, 2],\) \(I_g(\overline{x})=\beta =\{1\},\) \(I_k(\overline{x})=\{1, 2\},\) \(\alpha =\gamma =\emptyset ,\) \(L_{\mu }=\emptyset \), and moreover, \( \partial ^*F_1(0, 0)=\{(1, 1)\},\,\,\, \) \(\partial ^*F_2(0, 0)=\{(1, 1), (1, -1), (-1, 1), (-1, -1)\}, \) \(\partial ^*g_1(0, 0)=\{(0, -1)\},\,\,\, \) \(\partial ^*(\pm h_1)(0, 0)=\{(\pm 1, \mp 1)\},\,\,\, \) \(\partial ^*(- G_1)(0, 0)=\{(1, - 2)\},\,\,\, \) \(\partial ^*(- H_1)(0, 0)=\{(-2, 1)\},\,\,\, \) \(\partial ^*\psi _i(0, 0, z)=\{(0, \varDelta _{\mathbb {R}^2_-}(0, 0))\}\,\,\, \) \(\forall \, z\in J(\overline{x}),\,\,\,\,i\in I(\overline{x}), \) \(\partial ^*k_1(0, 0)=\{(-1, 0)\},\,\,\, \) \(\partial ^*k_2(0, 0)=\{(0, 0)\}.\,\,\,\)
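
Before invoking Theorem 5, note that the upper-level constraints alone already pin the feasible set down to \(\{(0, 0)\}\): on the line \(y=x\) one has \(G(x, y)H(x, y)=x^2\), so the complementarity condition forces \(x=y=0\). The sketch below is ours and purely illustrative; it scans only the constraints \(g, h, G, H\) and the complementarity condition on a grid, and it assumes that the lower-level constraint \(y\in S(x)\) removes nothing at \(x=0\) (which holds whenever \(\varDelta _{\mathbb {R}^2_-}(0, 0)=0\), so that \(f\) does not depend on \(y\) and \(S(0)=K_0=\mathbb {R}\)):

```python
# Illustrative check (ours): the upper-level constraints of Example 5 admit
# only (0, 0) on a grid; the lower-level constraint is assumed not to remove
# this point (see the discussion above).

def upper_level_feasible(x, y, tol=1e-12):
    g = -y                  # g(x, y) <= 0
    h = x - y               # h(x, y) =  0
    G = -x + 2 * y          # G(x, y) >= 0
    H = -y + 2 * x          # H(x, y) >= 0
    return (g <= tol and abs(h) <= tol and G >= -tol
            and H >= -tol and abs(G * H) <= tol)

survivors = [(i / 50, j / 50)
             for i in range(-100, 101)
             for j in range(-100, 101)
             if upper_level_feasible(i / 50, j / 50)]
print(survivors)   # expected: [(0.0, 0.0)]
```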

It is not difficult to verify that the functions satisfy the assumptions of Theorem 5, and moreover, the constraint qualifications (CQ) and (GS-ACQ) at \(\overline{x}=(0, 0)\) hold. Taking into account Theorem 5, there exist \((\overline{s}, 1-\overline{s})\in [0, 1]\times [0, 1]\) and a pair \((\overline{\lambda }, \overline{\mu })=\Big (\overline{\lambda }^g, \overline{\lambda }^k, \overline{\lambda }^h, \overline{\lambda }^{\varPsi }, \overline{\lambda }^G, \overline{\lambda }^H, \overline{\mu }^h, \overline{\mu }^G, \overline{\mu }^H\Big )\in \mathbb {R}^{9+2|J(\overline{x})|}\), such that the vector \((\overline{x}, \overline{s}, 1-\overline{s}, \overline{\lambda }, \overline{\mu })\) is a local weak efficient solution of the Mond–Weir type dual (MWMBPP) and the respective objective values are equal.

In fact, in this setting, the Mond–Weir type dual problem (MWMBPP) for the original problem (MBPP) is of the form:

$$\begin{aligned} (MW&MBPP):\,\,\,\underset{u, s, \lambda }{\max }\,\,\,\Big [s(u_1+u_2)+(1-s)(|u_1|+|u_2|)\Big ] \\ \text{ s.t. }\,\,0\in&s(1, 1)+\big [-(1-s), 1-s\big ]\times \big [-(1-s), 1-s\big ]+ \lambda _1^g (0, -1)+\lambda _1^k(-1, 0) \\ {}&+\lambda _1^h (1, -1)+\mu _1^h(-1, 1)+\lambda _1^G(1, -2)+\lambda _1^H(-2, 1) \\ {}&+\sum _{z\in J(u)}\sum _{i\in I(u)}\lambda _i^{\varPsi (z)}\text {conv}\,\partial ^*\psi _i(u, z), \\&u=(u_1, u_2)\in \{(0, 0)\},\,\,0\le s\le 1, \lambda _1^g\ge 0,\,\, \lambda _1^k\ge 0,\,\,\lambda _2^k\ge 0, \,\, \\ {}&\lambda _1^h\ge 0,\,\,\mu _1^h\ge 0, \,\, \lambda _1^G\ge 0,\,\,\lambda _1^H \ge 0,\,\,\lambda ^{\varPsi (z)}_i\ge 0,\,\,i\in I(u),\,\,z\in J(u). \end{aligned}$$

Since \(u=(0, 0),\) the conclusion of Theorem 5 can be verified directly, as claimed.

5 Conclusion

To the best of our knowledge, there have been no results on necessary optimality conditions for local weak efficient solutions of the multiobjective bilevel programming problem with equilibrium constraints (MBPP) in terms of convexificators. In this paper, we have established necessary optimality conditions for efficiency for such a problem. An application of the obtained results to the Wolfe and Mond–Weir type dual problems is also presented. Weak and strong duality theorems for the primal problem (MBPP) and its Wolfe and Mond–Weir type dual problems via convexificators are derived. Our results are new and more general than those obtained by Dempe (1992), Babahadda and Gadhi (2006), Suneja and Kohli (2011), and other related works.