1 Introduction

Fuzzy optimization is one of the most important areas of operations research. Many real-world problems are modeled as fuzzy optimization programs because of the imprecise and subjective nature of the decision parameters. These models are more difficult to handle than their deterministic counterparts, in which the parameters are known precisely. Nevertheless, there are numerous works on modeling and solving fuzzy optimization problems in the literature. Linear fuzzy modeling [1,2,3,4,5,6] is easier than nonlinear fuzzy modeling [3, 7,8,9,10,11], but both classes of problems are difficult to solve. Among the many published methods on fuzzy optimization, it is not easy to choose one that can find the optimal solution of every fuzzy linear or fuzzy nonlinear problem while accounting for the preferences of the decision-maker. Therefore, further work is needed to fill the gaps left by existing methods, especially for solving fuzzy nonlinear optimization problems, which arise in many domains: economics, business, engineering, management, and so on.

Many works suggest ways to solve such fuzzy optimization problems. For example, Pathak et al. [12] established necessary and sufficient optimality conditions for nonlinear fuzzy optimization problems. Horng-Ren Tai et al. [7] proposed a fuzzy nonlinear programming approach for optimizing the performance of a four-objective fluctuation smoothing rule in a wafer fabrication factory. Sanjaya Kumar Behera et al. [8] investigated the optimal solution of a fuzzy nonlinear programming problem with linear constraints. Ravi Shankar et al. [9] proposed a technique for solving fuzzy nonlinear optimization problems by a genetic algorithm. Nasseri et al. [10] presented work on fuzzy nonlinear optimization. Jameel et al. [11] focused on solving nonlinear programming problems in a fuzzy environment. Hsien-Chung Wu [13] proposed optimality conditions for optimization problems with fuzzy-valued objective functions; that work uses the continuous differentiability of fuzzy functions to deduce sufficient optimality conditions, and the results apply in particular when the constraint functions are deterministic and convex. Until now, the case of fuzzy constraint functions has not been examined. Furthermore, the null set concept was introduced by Hsien-Chung Wu [14] and applied to fuzzy linear optimization problems with deterministic constraints; there is likewise no work extending the null set concept to problems with fuzzy constraint functions. In another work, Hsien-Chung Wu [15] proposed a solution concept for fuzzy multiobjective programming problems based on convex cones. The convex cone is derived from the operations of fuzzy addition and multiplication by a scalar on fuzzy numbers, and partial orderings on the space of fuzzy numbers are defined from it; the solution concepts then follow naturally. However, establishing partial orderings via the convex cone requires considering the difference of two fuzzy numbers, and an ambiguity arises from the fact that the difference of a fuzzy number with itself is not zero.

In this study, we extend certain results of Hsien-Chung Wu and propose a new method for solving fuzzy nonlinear optimization problems using the null set concept. In this approach, the constraint functions, in addition to the objective function, are fuzzy, and the problems addressed are nonlinear. Using both the standard fuzzy subtraction and the Hukuhara difference, we consider two fuzzy nonlinear single-objective optimization problems, which lead to the concepts of optimal solution and H-optimal solution. With the proposed method, each fuzzy nonlinear optimization problem is transformed into a deterministic bi-objective optimization problem, which can be solved by many deterministic methods [16,17,18,19]. Since the two objective functions are in conflict, we use the Karush–Kuhn–Tucker optimality conditions [20, 21] to look for the optimal solution or the H-optimal solution. Several theorems justify the convergence to an optimal solution at each step of the method. Five examples are treated, among them a production problem in a manufacturing factory, and the results are compared with other methods from the literature using a ranking function. An in-depth analysis of the obtained results shows that the proposed method provides good optimal solutions.

The remainder of the paper is organized as follows. Section 2 is devoted to preliminaries; we present the properties of fuzzy numbers that will be used in the sequel. The main results of this work are presented in Sect. 3, which covers the method and algorithm, the numerical results, and the discussion. Finally, Sect. 4 is dedicated to the conclusion.

2 Preliminaries

2.1 Notation and Fuzzy Number Space

The concept of the null set and some properties derived from it are presented in this part.

Definition 1

[22, 23] Let \({\mathcal {X}}\) be a set. A fuzzy subset \({\tilde{a}}\) of \({\mathcal {X}}\) is characterized by a membership function \(\mu _{{\tilde{a}}}:{\mathcal {X}} \rightarrow [0,1]\) and represented by a set of ordered pairs defined as follows:

$$\begin{aligned} {\tilde{a}}=\{(x,\mu _{{\tilde{a}}}(x))/x\in {\mathcal {X}} \}. \end{aligned}$$

The value \(\mu _{{\tilde{a}}}(x)\in [0,1]\) represents the degree of membership of x to the fuzzy set and is interpreted as the extent to which x belongs to \({\tilde{a}}\).

Definition 2

[22, 23] Let \({\tilde{a}}\) be a fuzzy set on \({\mathcal {X}}\) and \(\alpha \in [0,1]\). The \(\alpha -level\) set of \({\tilde{a}}\) is the classical set noted \({\tilde{a}}_{\alpha }\) and is defined by

$$\begin{aligned} {\tilde{a}}_{\alpha }=\{x\in {\mathcal {X}}, \mu _{{\tilde{a}}}(x)\ge \alpha \}. \end{aligned}$$

In the following, we will identify \({\mathcal {X}}\) to \(\mathbb {R}\).

Definition 3

[24] Let \({\tilde{a}}\) be a fuzzy subset of \(\mathbb {R}\). Then, \({\tilde{a}}\) is called a fuzzy number if the following conditions are satisfied:

  1. (i)

    \({\tilde{a}}\) is normal, i.e., \(\mu _{{\tilde{a}}}(x)=1\) for some \(x\in \mathbb {R}\);

  2. (ii)

    \({\tilde{a}}\) is convex, i.e., the membership function \(\mu _{{\tilde{a}}}(x)\) is quasi-concave;

  3. (iii)

    \(\mu _{{\tilde{a}}}(x)\) is upper semicontinuous, i.e., \({\tilde{a}}_{\alpha }\) is a closed subset of \(\mathbb {R}\) for every \(\alpha \in [0,1]\);

  4. (iv)

    the 0-level set \({\tilde{a}}_{0}\), is a compact subset of \(\mathbb {R}\).

We denote by \({\mathcal {N}}(\mathbb {R})\) the set of all fuzzy numbers. Indeed, if \({\tilde{a}}\in {\mathcal {N}}(\mathbb {R})\), the \(\alpha\)-level set of \({\tilde{a}}\) is a compact and convex subset of \(\mathbb {R}\), i.e., \({\tilde{a}}_{\alpha }\) is a closed interval, denoted \({\tilde{a}}_{\alpha }=[{\tilde{a}}_{\alpha }^{L}, {\tilde{a}}_{\alpha }^{U}]\) for \(\alpha \in [0,1]\).

Remark 1

Let \({\tilde{a}} \in {\mathcal {N}}(\mathbb {R})\), then \({\tilde{a}}_{\alpha }^{L}\) and \({\tilde{a}}_{\alpha }^{U}\) are considered as functions of \(\alpha\).

For convenience, the membership function of \({\tilde{0}}\) is designated by

$$\mu _{{\tilde{0}}} (x) = \left\{ {\begin{array}{ll} {1,\;\;{\text{ if }}\;x = 0,} \hfill \\ {0,\;\;{\text{ else }},} \hfill \\ \end{array} } \right.$$

and \({\tilde{0}}^{L}_{\alpha }=0={\tilde{0}}^{U}_{\alpha }\) for every \(\alpha \in [0,1]\).

Definition 4

[24] Let \({\tilde{a}}\in {\mathcal {N}}(\mathbb {R})\), \({\tilde{a}}\) is a canonical fuzzy number if the functions \({\tilde{a}}_{\alpha }^{L}\) and \({\tilde{a}}_{\alpha }^{U}\) are continuous with respect to \(\alpha\) on [0,1].

Definition 5

[5] A ranking function is a map \({\mathcal {R}}:{\mathcal {N}}(\mathbb {R})\rightarrow \mathbb {R}\) from the set of fuzzy numbers to the set of real numbers such that, for all \({\tilde{a}}, {\tilde{b}}\in {\mathcal {N}}(\mathbb {R})\), the following relations hold:

  1. (i)

    \({\tilde{a}}\approx _{{\mathcal {R}}}{\tilde{b}}\) if and only if \({\mathcal {R}}({\tilde{a}})={\mathcal {R}}({\tilde{b}})\),

  2. (ii)

    \({\tilde{a}}\succeq _{{\mathcal {R}}} {\tilde{b}}\) if and only if \({\mathcal {R}}({\tilde{a}})\ge {\mathcal {R}}({\tilde{b}})\),

  3. (iii)

    \({\tilde{a}}\preceq _{{\mathcal {R}}} {\tilde{b}}\) if and only if \({\mathcal {R}}({\tilde{a}})\le {\mathcal {R}}({\tilde{b}})\),

  4. (iv)

    \({\tilde{a}}\succ _{{\mathcal {R}}} {\tilde{b}}\) if and only if \({\mathcal {R}}({\tilde{a}})> {\mathcal {R}}({\tilde{b}})\).

In this paper, we use the linear ranking function of Amit Kumar et al. [6]: for \({\tilde{a}}=(a^{L},a,a^{U})\in {\mathcal {N}}(\mathbb {R})\), we have

$$\begin{aligned} {\mathcal {R}}({\tilde{a}})=\dfrac{a^{L}+2a+a^{U}}{4}. \end{aligned}$$
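As an illustration, this ranking function can be sketched in a few lines of Python; the sample values below are ours, not from the paper.

```python
# Minimal sketch of the linear ranking function of Amit Kumar et al.:
# R(a) = (a^L + 2a + a^U) / 4 for a triangular fuzzy number (a^L, a, a^U).

def ranking(a_l: float, a: float, a_u: float) -> float:
    """Rank of the triangular fuzzy number (a^L, a, a^U)."""
    return (a_l + 2 * a + a_u) / 4

# Fuzzy numbers are then compared through their real-valued ranks:
print(ranking(1, 2, 4))  # 2.25
print(ranking(0, 3, 4))  # 2.5
```

Since the second rank is larger, \((0,3,4)\succ _{{\mathcal {R}}}(1,2,4)\) under the ordering of Definition 5.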

Let \({\tilde{a}}, {\tilde{b}}\in {\mathcal {N}}(\mathbb {R})\). Using the extension principle of Zadeh [25,26,27] and by referring to Puri and Ralescu [28], the membership function of the addition \({\tilde{a}}\oplus {\tilde{b}}\) is defined by

$$\begin{aligned} \mu _{{\tilde{a}}\oplus {\tilde{b}}}(z)=\displaystyle \sup _{\{(x,y):x+y=z\}} \min \{\mu _{{\tilde{a}}}(x),\mu _{{\tilde{b}}}(y)\}, \end{aligned}$$

and the membership function of the multiplication by a scalar \(\lambda {\tilde{a}}\), is defined by

$$\mu _{{\lambda \tilde{a}}} (z) = \left\{ {\begin{array}{ll} {\mu _{{\tilde{a}}} (z/\lambda ),\;\;{\text{ if }}\;\lambda \ne 0,} \hfill \\ {0,\;\;{\text{ if }}\;\lambda = 0\;{\text{ and }}\;z \ne 0,} \hfill \\ {1\;\;{\text{ if }}\;\lambda = 0 = z.} \hfill \\ \end{array} } \right.$$

Definition 6

[13, 14, 21] Let \({\tilde{a}}, {\tilde{b}}\in {\mathcal {N}}(\mathbb {R})\) with respective \(\alpha\)-levels \({\tilde{a}}_{\alpha }=[{\tilde{a}}_{\alpha }^{L},{\tilde{a}}_{\alpha }^{U}]\) and \({\tilde{b}}_{\alpha }=[{\tilde{b}}_{\alpha }^{L},{\tilde{b}}_{\alpha }^{U}]\) for \(\alpha \in [0,1]\). We have

  1. (i)

    \({\tilde{a}}\oplus {\tilde{b}}\in {\mathcal {N}}(\mathbb {R})\) and

    $$\begin{aligned} ({\tilde{a}}\oplus {\tilde{b}})_{\alpha }&={\tilde{a}}_{\alpha }\oplus {\tilde{b}}_{\alpha }\\&=[{\tilde{a}}_{\alpha }^{L}+{\tilde{b}}_{\alpha }^{L},{\tilde{a}}_{\alpha }^{U}+{\tilde{b}}_{\alpha }^{U}], \end{aligned}$$
  2. (ii)

    \({\tilde{a}}\ominus {\tilde{b}}\in {\mathcal {N}}(\mathbb {R})\) and

    $$\begin{aligned} ({\tilde{a}}\ominus {\tilde{b}})_{\alpha }&=({\tilde{a}}\oplus (-{\tilde{b}}))_{\alpha }\\&= {\tilde{a}}_{\alpha }\oplus (-{\tilde{b}})_{\alpha }\\&=[{\tilde{a}}^{L}_{\alpha }-{\tilde{b}}^{U}_{\alpha },{\tilde{a}}^{U}_{\alpha }-{\tilde{b}}^{L}_{\alpha }] \end{aligned}$$
  3. (iii)

    \(\lambda {\tilde{a}}\in {\mathcal {N}}(\mathbb {R})\) and

    $$\begin{aligned} (\lambda {\tilde{a}})_{\alpha }= {\left\{ \begin{array}{ll} {[}\lambda {\tilde{a}}_{\alpha }^{L},\lambda {\tilde{a}}_{\alpha }^{U}]\ \text{ if }\ \lambda \ge 0,\\ {[}\lambda {\tilde{a}}_{\alpha }^{U},\lambda {\tilde{a}}_{\alpha }^{L}]\ \text{ if }\ \lambda <0. \end{array}\right. } \end{aligned}$$
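The \(\alpha\)-level arithmetic of Definition 6 amounts to ordinary interval arithmetic. A minimal Python sketch, representing an \(\alpha\)-level set as a pair `(lo, hi)` (the helper names and sample intervals are ours):

```python
# Sketch of the alpha-level arithmetic of Definition 6, with an
# alpha-level set represented as a closed interval (lo, hi).

def add(a, b):
    """(a (+) b)_alpha = [a^L + b^L, a^U + b^U]."""
    return (a[0] + b[0], a[1] + b[1])

def sub(a, b):
    """(a (-) b)_alpha = [a^L - b^U, a^U - b^L]."""
    return (a[0] - b[1], a[1] - b[0])

def scal(lam, a):
    """(lam * a)_alpha; the endpoints swap when lam < 0."""
    return (lam * a[0], lam * a[1]) if lam >= 0 else (lam * a[1], lam * a[0])

a_alpha = (1.0, 3.0)
b_alpha = (0.5, 2.0)
print(add(a_alpha, b_alpha))   # (1.5, 5.0)
print(sub(a_alpha, b_alpha))   # (-1.0, 2.5)
print(scal(-2, a_alpha))       # (-6.0, -2.0)
```

Note how subtraction pairs the lower endpoint of \({\tilde{a}}_{\alpha }\) with the upper endpoint of \({\tilde{b}}_{\alpha }\), which is exactly why \({\tilde{a}}\ominus {\tilde{a}}\) does not collapse to zero.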

Definition 7

[21, 24] Let \({\tilde{a}}, {\tilde{b}}\in {\mathcal {N}}(\mathbb {R})\), if there is a fuzzy number \({\tilde{c}}\in {\mathcal {N}}(\mathbb {R})\) satisfying \({\tilde{a}}={\tilde{b}}\oplus {\tilde{c}}\) then \({\tilde{c}}\) is unique and is called the Hukuhara difference between \({\tilde{a}}\) and \({\tilde{b}}\) denoted by \({\tilde{a}}\ominus _{H}{\tilde{b}}\).

Proposition 1

[21, 24] Let \({\tilde{a}}\) and \({\tilde{b}}\) be two fuzzy numbers. If the Hukuhara difference \({\tilde{c}}={\tilde{a}}\ominus _{H}{\tilde{b}}\) exists, then \({\tilde{c}}^{L}_{\alpha }={\tilde{a}}^{L}_{\alpha }-{\tilde{b}}^{L}_{\alpha }\) and \({\tilde{c}}^{U}_{\alpha }={\tilde{a}}^{U}_{\alpha }-{\tilde{b}}^{U}_{\alpha }\) for every \(\alpha \in [0,1]\).

Let us now turn to the null set concept defined by Wu [14]. It is motivated by the fact that, whereas the difference of a real number with itself is zero, this is not the case for fuzzy numbers. This ambiguity calls for the definition of a set gathering all the differences of fuzzy numbers with themselves.

Let \({\tilde{a}}\in {\mathcal {N}}(\mathbb {R})\) and \({\tilde{a}}\ominus {\tilde{a}}\in {\mathcal {N}}(\mathbb {R})\). We have

$$\begin{aligned} ({\tilde{a}}\ominus {\tilde{a}})_{\alpha }= & {} [{\tilde{a}}^{L}_{\alpha }-{\tilde{a}}^{U}_{\alpha },{\tilde{a}}^{U}_{\alpha }-{\tilde{a}}^{L}_{\alpha }]\\= & {} [-({\tilde{a}}^{U}_{\alpha }-{\tilde{a}}^{L}_{\alpha }),{\tilde{a}}^{U}_{\alpha }-{\tilde{a}}^{L}_{\alpha }]. \end{aligned}$$

Then, the \(\alpha\)-level \(({\tilde{a}}\ominus {\tilde{a}})_{\alpha }\) is an interval centered at 0 and can be seen as an approximation of the real number zero. Hence, \({\tilde{a}}\ominus {\tilde{a}}\simeq {\tilde{0}}\), a fuzzy zero. Let \({\tilde{\Omega }}=\{{\tilde{a}}\ominus {\tilde{a}}:{\tilde{a}}\in {\mathcal {N}}(\mathbb {R})\}\); then, for every \({\tilde{\omega }}\in {\tilde{\Omega }}\), we have \({\tilde{\omega }}_{\alpha }=[-{\tilde{\omega }}^{U}_{\alpha },{\tilde{\omega }}^{U}_{\alpha }]\) with \({\tilde{\omega }}^{U}_{\alpha }\ge 0\) for every \(\alpha \in [0,1]\). The subset \({\tilde{\Omega }}\) is called the null set in \({\mathcal {N}}(\mathbb {R})\).
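The contrast between the standard difference \({\tilde{a}}\ominus {\tilde{a}}\), which lands in the null set \({\tilde{\Omega }}\), and the Hukuhara difference, which collapses to the crisp zero, can be sketched on \(\alpha\)-levels as follows (a hypothetical illustration; the interval values are ours):

```python
# The standard difference a (-) a has a symmetric alpha-level around 0
# (an element of the null set), while the Hukuhara difference a (-)_H a
# collapses to [0, 0].

def standard_diff(a, b):
    """(a (-) b)_alpha = [a^L - b^U, a^U - b^L]."""
    return (a[0] - b[1], a[1] - b[0])

def hukuhara_diff(a, b):
    """(a (-)_H b)_alpha = [a^L - b^L, a^U - b^U], when it is a proper interval."""
    lo, hi = a[0] - b[0], a[1] - b[1]
    if lo > hi:
        raise ValueError("Hukuhara difference does not exist at this level")
    return (lo, hi)

a_alpha = (1.0, 4.0)
print(standard_diff(a_alpha, a_alpha))  # (-3.0, 3.0): centered at 0
print(hukuhara_diff(a_alpha, a_alpha))  # (0.0, 0.0): the crisp zero
```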

Remark 2

If the Hukuhara difference exists then \({\tilde{a}}\ominus _{H}{\tilde{a}}=\{0\}\).

Proof

Suppose that the Hukuhara difference \({\tilde{a}}\ominus _{H}{\tilde{a}}\) exists for \({\tilde{a}}\in {\mathcal {N}}(\mathbb {R})\), i.e., there is \({\tilde{c}}\in {\mathcal {N}}(\mathbb {R})\) such that \({\tilde{a}}= {\tilde{a}}\oplus {\tilde{c}}\). We have

$$\begin{aligned} {\tilde{a}}_{\alpha }&= ({\tilde{a}}\oplus {\tilde{c}})_{\alpha }\\&= {\tilde{a}}_{\alpha }\oplus {\tilde{c}}_{\alpha }\\&=[{\tilde{a}}_{\alpha }^{L},{\tilde{a}}_{\alpha }^{U}]+ [{\tilde{c}}_{\alpha }^{L}, {\tilde{c}}_{\alpha }^{U}]\\&=[{\tilde{a}}_{\alpha }^{L}+{\tilde{c}}_{\alpha }^{L},{\tilde{a}}_{\alpha }^{U}+{\tilde{c}}_{\alpha }^{U}], \quad \text{ while } \ {\tilde{a}}_{\alpha }=[{\tilde{a}}_{\alpha }^{L},{\tilde{a}}_{\alpha }^{U}]. \end{aligned}$$

We have \({\tilde{a}}_{\alpha }^{L}= {\tilde{a}}_{\alpha }^{L}+{\tilde{c}}_{\alpha }^{L}\Rightarrow {\tilde{c}}_{\alpha }^{L}=0\). Likewise, \({\tilde{a}}_{\alpha }^{U}={\tilde{a}}_{\alpha }^{U}+{\tilde{c}}_{\alpha }^{U}\Rightarrow {\tilde{c}}_{\alpha }^{U}=0\).

So, \({\tilde{c}}_{\alpha }=[0, 0]=0\). Therefore, \({\tilde{a}}\ominus _{H}{\tilde{a}}=\{0\}\). \(\square\)

Remark 3

If \({\tilde{a}}\in {\tilde{\Omega }}\), then \({\tilde{a}}_{\alpha }^{L}=-{\tilde{a}}_{\alpha }^{U}\).

Proof

Suppose \({\tilde{a}}\in {\tilde{\Omega }}\), we have \({\tilde{a}}_{\alpha }=[-{\tilde{a}}^{U}_{\alpha },{\tilde{a}}^{U}_{\alpha }]=[{\tilde{a}}^{L}_{\alpha },{\tilde{a}}^{U}_{\alpha }]\), then \({\tilde{a}}^{L}_{\alpha }=-{\tilde{a}}^{U}_{\alpha }.\) \(\square\)

Proposition 2

[14] The following propositions are true:

  1. (i)

    \(-{\tilde{\omega }}={\tilde{\omega }}\) for all \({\tilde{\omega }}\in {\tilde{\Omega }}\),

  2. (ii)

    \({\tilde{\Omega }}\) is a closed set under fuzzy addition, i.e., \({\tilde{\omega }}^{(1)}\oplus {\tilde{\omega }}^{(2)}\in {\tilde{\Omega }}\), \(\forall\) \({\tilde{\omega }}^{(1)},{\tilde{\omega }}^{(2)}\in {\tilde{\Omega }}\),

  3. (iii)

    \(\lambda {\tilde{\Omega }}={\tilde{\Omega }}\), for \(\lambda \in \mathbb {R}^{*}\).

Definition 8

[14] Let \({\tilde{a}}, {\tilde{b}}\in {\mathcal {N}}(\mathbb {R})\); we say that \({\tilde{a}}\) and \({\tilde{b}}\) are almost identical if and only if there exist \({\tilde{\omega }}^{(1)}, {\tilde{\omega }}^{(2)}\in {\tilde{\Omega }}\) such that \({\tilde{a}}\oplus {\tilde{\omega }}^{(1)}\simeq {\tilde{b}}\oplus {\tilde{\omega }}^{(2)}\). In this case, we write \({\tilde{a}}\overset{{\tilde{\Omega }}}{\simeq }{\tilde{b}}\).

Proposition 3

[14] Let \({\tilde{a}}\), \({\tilde{b}}\) \(\in\) \({\mathcal {N}}(\mathbb {R})\). We have

  1. (i)

    if \({\tilde{a}}\ominus {\tilde{b}}\in {\tilde{\Omega }}\) then \({\tilde{a}}\overset{{\tilde{\Omega }}}{\simeq }{\tilde{b}}\),

  2. (ii)

    if \({\tilde{a}}\overset{{\tilde{\Omega }}}{\simeq }{\tilde{b}}\), then there exists \({\tilde{\omega }}\in {\tilde{\Omega }}\) such that \({\tilde{a}}\ominus {\tilde{b}}\oplus {\tilde{\omega }}\in {\tilde{\Omega }}\).

Definition 9

Let \(\mathbb {E}\) be a vector space. The function \(\pi :{\mathcal {N}}(\mathbb {R})\rightarrow \mathbb {E}\) is called additive and positively homogeneous of degree k if the following conditions are satisfied:

$$\begin{aligned} \pi ({\tilde{a}}\oplus {\tilde{b}})=\pi ({\tilde{a}})+\pi ({\tilde{b}}) \quad \text{ and } \quad \pi (\lambda ^{k} {\tilde{a}})=\lambda ^{k}\pi ({\tilde{a}}), \qquad \lambda \ge 0,\ k>0. \end{aligned}$$

In particular, the function \(\pi\) is linear if \(k=1\).

Remark 4

If \(\pi\) is an additive and positively homogeneous function of degree k from \({\mathcal {N}}(\mathbb {R})\) to \(\mathbb {E}\), then its kernel, denoted \(\ker (\pi )\), and its image, denoted \({\text {Im}}\,(\pi )\), are defined by

$$\begin{aligned} \ker (\pi ) =\{{\tilde{a}}\in {\mathcal {N}}(\mathbb {R}):\pi ({\tilde{a}})=\Theta _{\mathbb {E}}\}\ \text{ and } \ {\text {Im}}\,(\pi )=\{\pi ({\tilde{a}}):{\tilde{a}}\in {\mathcal {N}}(\mathbb {R})\}. \end{aligned}$$

\(\Theta _{\mathbb {E}}\) is the zero element of the vector space \(\mathbb {E}\). In addition, \(\pi ({\tilde{\omega }})=\Theta _{\mathbb {E}}\) for all \({\tilde{\omega }}\in {\tilde{\Omega }}\), so \({\tilde{\Omega }}\subseteq \ker (\pi )\).

Proposition 4

Let \({\tilde{a}}, {\tilde{b}}\in {\mathcal {N}}(\mathbb {R})\). Suppose that \(\pi\) is an additive and positively homogeneous function of degree k defined on \({\mathcal {N}}(\mathbb {R})\). If the Hukuhara difference \({\tilde{b}}\ominus _{H}{\tilde{a}}\) exists, then \(\pi ({\tilde{b}}\ominus _{H}{\tilde{a}})=\pi ({\tilde{b}})-\pi ({\tilde{a}})\).

Proof

Let \({\tilde{c}}={\tilde{b}}\ominus _{H}{\tilde{a}}\), i.e., \({\tilde{b}}={\tilde{a}}\oplus {\tilde{c}}\). We have

$$\begin{aligned} \pi ({\tilde{b}})&=\pi ({\tilde{a}}\oplus {\tilde{c}})\\&=\pi ({\tilde{a}})+\pi ({\tilde{c}}), \end{aligned}$$

then \(\pi ({\tilde{c}})=\pi ({\tilde{b}})-\pi ({\tilde{a}})\). Therefore, \(\pi ({\tilde{b}}\ominus _{H}{\tilde{a}})=\pi ({\tilde{b}})-\pi ({\tilde{a}})\). \(\square\)

Definition 10

A subset \({\mathcal {C}}\) of \({\mathcal {N}}(\mathbb {R})\) is a convex cone if \(\forall {\tilde{a}},{\tilde{b}} \in {\mathcal {C}}\), \(\lambda {\tilde{a}}\oplus \beta {\tilde{b}}\in {\mathcal {C}}\), for \(\lambda , \beta > 0\).

From Proposition 2, we remark that for all \({\tilde{a}}, {\tilde{b}}\in {\tilde{\Omega }}\), we have \(\lambda {\tilde{a}}\oplus \beta {\tilde{b}}\in {\tilde{\Omega }}\) with \(\lambda , \beta >0\). This means that the null set \({\tilde{\Omega }}\) is a convex cone.

Proposition 5

Let \({\mathcal {C}}\) be a convex cone of \({\mathcal {N}}(\mathbb {R})\). If the function \(\pi\) is additive and positively homogeneous of degree k, then the set \(\pi ({\mathcal {C}})=\{\pi ({\tilde{a}}):{\tilde{a}}\in {\mathcal {C}}\}\) is a convex cone in the vector space \(\mathbb {E}\).

Proof

Assume that \({\mathcal {C}}\) is a convex cone, i.e., for all \({\tilde{a}},{\tilde{b}} \in {\mathcal {C}}\), \(\lambda {\tilde{a}}\oplus \beta {\tilde{b}}\in {\mathcal {C}}\) for \(\lambda , \beta > 0\). Consider the function \(\pi\), which is additive and positively homogeneous of degree k. We have \(\pi (\lambda {\tilde{a}}\oplus \beta {\tilde{b}})\in \pi ({\mathcal {C}})\), hence \(\lambda \pi ({\tilde{a}})+\beta \pi ({\tilde{b}})\in \pi ({\mathcal {C}})\). Therefore, \(\pi ({\mathcal {C}})\) is a convex cone in the vector space \(\mathbb {E}\). \(\square\)

Remark 5

Let \(\mathbb {E}=\mathbb {R}^{2}\), we define the function \(\pi\) by \(\pi ({\tilde{a}})=(-{\tilde{a}}_{\alpha }^{L}-{\tilde{a}}_{\alpha }^{U}; {\tilde{a}}_{\alpha }^{L}+{\tilde{a}}_{\alpha }^{U})\) with \({\tilde{a}}_{\alpha }^{L}+{\tilde{a}}_{\alpha }^{U}\ge 0\). Suppose that \({\mathcal {C}}\) is a convex cone of \({\mathcal {N}}(\mathbb {R})\), we have

\(\pi ({\mathcal {C}})=\left\{ (-{\tilde{c}}_{\alpha }^{L}-{\tilde{c}}_{\alpha }^{U}; {\tilde{c}}_{\alpha }^{L}+{\tilde{c}}_{\alpha }^{U})\in \mathbb {R}^{2}/ {\tilde{c}}_{\alpha }^{L}+{\tilde{c}}_{\alpha }^{U}\ge 0 \ \text{ for } \text{ some } \alpha \in [0,1]\right\} \subseteq \left\{ (-x,x)\in \mathbb {R}^{2}, x\ge 0\right\}\). Indeed, if \({\tilde{\omega }}\in {\tilde{\Omega }}\), we have \({\tilde{\omega }}_{\alpha }^{L}=-{\tilde{\omega }}_{\alpha }^{U}\) for all \(\alpha \in [0,1]\); then \(\pi ({\tilde{\omega }})=(0,0)\), the zero element of \(\mathbb {R}^{2}\) with respect to the addition, i.e., \({\tilde{\Omega }}\subseteq \ker (\pi )\). Thus, for any \({\tilde{a}}\in {\tilde{\Omega }}\), we have \({\tilde{a}}_{\alpha }^{L}+{\tilde{a}}_{\alpha }^{U}=0\).
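A small sketch of the map \(\pi\) of this remark, acting on a single \(\alpha\)-level (the sample intervals are ours):

```python
# Sketch of the function pi of Remark 5 on a single alpha-level [a^L, a^U]:
# pi(a) = (-a^L - a^U, a^L + a^U). Null-set levels, with a^L = -a^U,
# are sent to (0, 0), i.e., they lie in ker(pi).

def pi(a):
    lo, hi = a
    return (-lo - hi, lo + hi)

print(pi((1.0, 3.0)))    # (-4.0, 4.0)
print(pi((-2.0, 2.0)))   # (0.0, 0.0): a null-set level is in ker(pi)
```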

Let \({\mathcal {C}}\) be a convex cone of \({\mathcal {N}}(\mathbb {R})\) and \(\Im =\{{\tilde{a}}\in {\mathcal {N}}(\mathbb {R}): \ \exists \ {\tilde{b}}\in {\mathcal {C}}\ \text{ such } \text{ that }\ {\tilde{a}}\overset{{\tilde{\Omega }}}{\simeq }{\tilde{b}}\}\). Based on the notion of the convex cone, we define partial orderings on the space of fuzzy numbers \({\mathcal {N}}(\mathbb {R})\) and on the vector space \(\pi ({\mathcal {N}}(\mathbb {R})) \subseteq \mathbb {E}\).

Definition 11

[14] The binary relations \(\preccurlyeq\) and \(\preccurlyeq _{H}\) are called partial orderings on \({\mathcal {N}}(\mathbb {R})\) if the following conditions are satisfied:

\(\forall\) \({\tilde{a}},{\tilde{b}}\in {\mathcal {N}}(\mathbb {R})\),

$$\begin{aligned} {\tilde{a}}\preccurlyeq {\tilde{b}}\ \text{ if } \text{ and } \text{ only } \text{ if } \ {\tilde{b}}\ominus {\tilde{a}}\in \Im , \end{aligned}$$

and

$$\begin{aligned}{} & {} {\tilde{a}}\preccurlyeq _{H}{\tilde{b}} \ \text{ if } \text{ and } \text{ only } \text{ if } \ {\tilde{b}}\ominus _{H}{\tilde{a}} \ \text{ exists } \ \text{ and }\\{} & {} ({\tilde{b}}\ominus _{H}{\tilde{a}})\oplus {\tilde{\omega }}\in {\mathcal {C}}\ \text{ for } \text{ some } \ {\tilde{\omega }}\in {\tilde{\Omega }}. \end{aligned}$$

Definition 12

[14] The binary relations \(\le\) and \(\le _{H}\) are called partial orderings on \(\pi ({\mathcal {N}}(\mathbb {R}))\) if the following conditions are satisfied:

$$\begin{aligned} \pi ({\tilde{a}})\le \pi ({\tilde{b}}) \ \text{ if } \text{ and } \text{ only } \text{ if } \ \pi ({\tilde{b}})-\pi ({\tilde{a}})\in \pi ({\mathcal {C}}), \end{aligned}$$

and

$$\begin{aligned} \pi ({\tilde{a}})\le _{H} \pi ({\tilde{b}})\ \text{ if } \text{ and } \text{ only } \text{ if } \ \pi ({\tilde{b}})-\pi ({\tilde{a}})\in \pi ({\mathcal {C}}) \ \text{ and } \ {\tilde{b}}\ominus _{H}{\tilde{a}} \ \text{ exists }. \end{aligned}$$

Proposition 6

[14, 15, 24] Consider the partial ordering \(\preccurlyeq\) on \({\mathcal {N}}(\mathbb {R})\). The following properties are satisfied:

  1. (i)

    if \({\tilde{\Omega }}\subseteq {\mathcal {C}}\), then \({\tilde{a}}\preccurlyeq {\tilde{a}}\) for all \({\tilde{a}}\in {\mathcal {N}}(\mathbb {R})\) (reflexivity),

  2. (ii)

    if \({\tilde{a}}\preccurlyeq {\tilde{b}}\) and \({\tilde{b}}\preccurlyeq {\tilde{c}}\) then \({\tilde{a}}\preccurlyeq {\tilde{c}}\) for all \({\tilde{a}}, {\tilde{b}}, {\tilde{c}}\in {\mathcal {N}}(\mathbb {R})\) (transitivity),

  3. (iii)

    if \({\tilde{a}}\preccurlyeq {\tilde{b}}\) and \(\lambda >0\), then \(\lambda {\tilde{a}}\preccurlyeq \lambda {\tilde{b}}\) for all \({\tilde{a}}, {\tilde{b}}\in {\mathcal {N}}(\mathbb {R})\),

  4. (iv)

    if \({\tilde{a}}\preccurlyeq {\tilde{b}}\) and \({\tilde{d}}\preccurlyeq {\tilde{e}}\) then \({\tilde{a}}\oplus {\tilde{d}}\preccurlyeq {\tilde{b}}\oplus {\tilde{e}}\) for all \({\tilde{a}},{\tilde{b}},{\tilde{d}},{\tilde{e}}\in {\mathcal {N}}(\mathbb {R})\).

Proposition 7

[14, 15, 24] Consider the partial ordering \(\preccurlyeq _{H}\) on \({\mathcal {N}}(\mathbb {R})\). The following properties are satisfied:

  1. (i)

    if \({\tilde{1}}_{\{0\}}\in {\mathcal {C}}\), then \({\tilde{a}}\preccurlyeq _{H}{\tilde{a}}\) for all \({\tilde{a}}\in {\mathcal {N}}(\mathbb {R})\) (reflexivity),

  2. (ii)

    if \({\tilde{a}}\preccurlyeq _{H}{\tilde{b}}\) and \({\tilde{b}}\preccurlyeq _{H}{\tilde{c}}\), then \({\tilde{a}}\preccurlyeq _{H}{\tilde{c}}\) for all \({\tilde{a}}, {\tilde{b}}, {\tilde{c}}\in {\mathcal {N}}(\mathbb {R})\) (transitivity),

  3. (iii)

    if \({\tilde{a}}\preccurlyeq _{H}{\tilde{b}}\) and \(\lambda >0\), then \(\lambda {\tilde{a}}\preccurlyeq _{H}\lambda {\tilde{b}}\) for all \({\tilde{a}}, {\tilde{b}}\in {\mathcal {N}}(\mathbb {R})\),

  4. (iv)

    if \({\tilde{a}}\preccurlyeq _{H}{\tilde{b}}\) and \({\tilde{d}}\preccurlyeq _{H}{\tilde{e}}\), then \({\tilde{a}}\oplus {\tilde{d}}\preccurlyeq _{H}{\tilde{b}}\oplus {\tilde{e}}\) for all \({\tilde{a}},{\tilde{b}},{\tilde{d}},{\tilde{e}}\in {\mathcal {N}}(\mathbb {R})\).

Proposition 8

[14, 15, 24] Consider the partial ordering \(\le\) on \(\pi ({\mathcal {N}}(\mathbb {R}))\) and let \({\mathcal {C}}\) be a convex cone of \({\mathcal {N}}(\mathbb {R})\). Suppose the function \(\pi :{\mathcal {N}}(\mathbb {R})\rightarrow \mathbb {E}\) is additive and positively homogeneous of degree k. Then, the following properties are satisfied:

  1. (i)

    if \(\Theta _{\mathbb {E}}\in \pi ({\mathcal {C}})\), then \(x \le x\) for all \(x\in \pi ({\mathcal {N}}(\mathbb {R}))\) (reflexivity),

  2. (ii)

    if \(x\le y\) and \(y\le z\), then \(x\le z\) for all \(x, y, z\in \pi ({\mathcal {N}}(\mathbb {R}))\) (transitivity),

  3. (iii)

    if \(x\le y\) and \(\lambda >0\), then \(\lambda x\le \lambda y\) for all \(x,y\in \pi ({\mathcal {N}}(\mathbb {R}))\),

  4. (iv)

    if \(x\le y\) and \(a\le b\), then \(x+a\le y+b\) for all \(x,y,a, b\in \pi ({\mathcal {N}}(\mathbb {R}))\).

Proposition 9

[14, 15, 24] Consider the partial ordering \(\le _{H}\) on \(\pi ({\mathcal {N}}(\mathbb {R}))\) and let \({\mathcal {C}}\) be a convex cone of \({\mathcal {N}}(\mathbb {R})\). Suppose the function \(\pi :{\mathcal {N}}(\mathbb {R})\rightarrow \mathbb {E}\) is additive and positively homogeneous of degree k. Then, the following properties are satisfied:

  1. (i)

    if \({\tilde{1}}_{\{0\}}\in {\mathcal {C}}\), then \(x \le _{H} x\) for all \(x\in \pi ({\mathcal {N}}(\mathbb {R}))\) (reflexivity),

  2. (ii)

    if \(x\le _{H} y\) and \(y\le _{H} z\), then \(x\le _{H} z\) for all \(x, y, z\in \pi ({\mathcal {N}}(\mathbb {R}))\) (transitivity),

  3. (iii)

    if \(x\le _{H} y\) and \(\lambda >0\), then \(\lambda x\le _{H}\lambda y\) for all \(x,y\in \pi ({\mathcal {N}}(\mathbb {R}))\),

  4. (iv)

    if \(x\le _{H} y\) and \(a\le _{H} b\), then \(x+a\le _{H} y+b\) for all \(x,y,a, b\in \pi ({\mathcal {N}}(\mathbb {R}))\).

Proposition 10

[14] Let \({\mathcal {C}}\) be a convex cone of \({\mathcal {N}}(\mathbb {R})\). Suppose the function \(\pi : {\mathcal {N}}(\mathbb {R})\rightarrow \mathbb {E}\) is additive and positively homogeneous of degree k, and that \({\tilde{\Omega }}\subseteq \ker (\pi ) \subseteq {\mathcal {C}}\). Then, we have the following results:

  1. (i)

    if \({\tilde{a}}\preccurlyeq {\tilde{b}}\), then \(\pi ({\tilde{a}})\le \pi ({\tilde{b}})\),

  2. (ii)

    if \(\pi ({\tilde{a}})\le \pi ({\tilde{b}})\), then \({\tilde{a}}\preccurlyeq {\tilde{b}}\).

Proposition 11

[14] Let \({\mathcal {C}}\) be a convex cone of \({\mathcal {N}}(\mathbb {R})\). Suppose the function \(\pi : {\mathcal {N}}(\mathbb {R})\rightarrow \mathbb {E}\) is additive and positively homogeneous of degree k, and that \({\tilde{\Omega }}\subseteq \ker (\pi ) \subseteq {\mathcal {C}}\). Then, we have the following results:

  1. (i)

    if \({\tilde{a}}\preccurlyeq _{H} {\tilde{b}}\), then \(\pi ({\tilde{a}})\le _{H} \pi ({\tilde{b}})\),

  2. (ii)

    if \(\pi ({\tilde{a}})\le _{H} \pi ({\tilde{b}})\), then \({\tilde{a}}\preccurlyeq _{H} {\tilde{b}}\).

2.2 Solution Concept

Let \({\tilde{f}}\) be a fuzzy-valued objective function defined on a real vector space \({\mathcal {X}}\). Then, \({\tilde{f}}({\mathcal {X}})=\{{\tilde{f}}(x)\in {\mathcal {N}}(\mathbb {R}):x\in {\mathcal {X}}\}\) is the set of objective values. Inspired by the solution concept employed by Wu [29], we denote by \(MIN({\tilde{f}}, {\mathcal {X}})\) the set of all non-dominated values of \({\tilde{f}}\) in the case of minimization. More precisely, we write

$$\begin{aligned} MIN ({\tilde{f}}, {\mathcal {X}})=\left\{ {\tilde{f}}({\overline{x}}): \ \text{ there } \text{ is } \text{ no }\ x(\ne {\overline{x}})\in {\mathcal {X}} \ \text{ such } \text{ that } \ {\tilde{f}}(x)\preccurlyeq {\tilde{f}}({\overline{x}})\right\} . \end{aligned}$$

Similarly, we define the set of all non-dominated values of \({\tilde{f}}\) with respect to the partial ordering \(\preccurlyeq _{H}\) by

$$\begin{aligned} H-MIN({\tilde{f}}, {\mathcal {X}})=\left\{ {\tilde{f}}({\overline{x}}): \text{ there } \text{ is } \text{ no } \ x(\ne {\overline{x}})\in {\mathcal {X}} \ \text{ such } \text{ that } \ {\tilde{f}}(x) \preccurlyeq _{H}{\tilde{f}}({\overline{x}})\right\} . \end{aligned}$$

2.3 Karush–Kuhn–Tucker Optimality Conditions

Let f be a real-valued function defined on \(\mathbb {R}^{n}\). Consider the following nonlinear problem with inequality constraints:

$$\begin{aligned} \displaystyle \min _{x\in S}\ f(x), \end{aligned}$$

where \(S=\{x:g_{j}(x)\le 0, j=1,2,\ldots ,m\}\subseteq \mathbb {R}^{n}\). Suppose that the constraint functions \(g_{j}\) are convex on \(\mathbb {R}^{n}\) for each \(j=1,2,\ldots ,m\). Then, the feasible set S is a convex subset of \(\mathbb {R}^{n}\). The optimality conditions for this problem (see [3] and [30]) are stated below:

Theorem 12

Suppose that the constraint functions \(g_{j}:\mathbb {R}^{n}\rightarrow \mathbb {R}\) are convex on \(\mathbb {R}^{n}\) for \(j=1,2,\ldots ,m\). Let \(S=\{x:g_{j}(x)\le 0, j=1,2,\ldots ,m\}\) be the feasible set and let \({\overline{x}}\in S\). Suppose the objective function \(f:\mathbb {R}^{n}\rightarrow \mathbb {R}\) is convex at \({\overline{x}}\) and that f and the \(g_{j}\), \(j=1,2,\ldots ,m\), are continuously differentiable at \({\overline{x}}\). If there are (Lagrange) multipliers \(\mu _{j}\in \mathbb {R}\) with \(\mu _{j}\ge 0\), \(j=1,2,\ldots ,m\), such that

  1. (i)

    \(\nabla f({\overline{x}})+\displaystyle \sum _{j=1}^{m}\mu _{j}\nabla g_{j}({\overline{x}})=0\);

  2. (ii)

    \(\mu _{j}g_{j}({\overline{x}})=0\) for all \(j=1,2,\ldots ,m\).

Then, \({\overline{x}}\) is an optimal solution of the problem above.
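As a hedged numerical illustration of Theorem 12, consider the toy convex problem \(\min (x_{1}-1)^{2}+(x_{2}-2)^{2}\) subject to \(x_{1}+x_{2}-2\le 0\); the candidate point \({\overline{x}}=(0.5, 1.5)\) with multiplier \(\mu =1\) satisfies both conditions. This example and all names in it are ours, not from the paper.

```python
# Hedged numerical check of the KKT conditions of Theorem 12 on the toy
# convex problem  min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 - 2 <= 0.

def grad_f(x):
    """Gradient of the objective f(x) = (x1-1)^2 + (x2-2)^2."""
    return (2 * (x[0] - 1), 2 * (x[1] - 2))

def g(x):
    """Constraint g(x) = x1 + x2 - 2 (convex, so S is convex)."""
    return x[0] + x[1] - 2

def grad_g(x):
    return (1.0, 1.0)

def kkt_holds(x, mu, tol=1e-9):
    """Stationarity, complementary slackness, and mu >= 0 at x."""
    gf, gg = grad_f(x), grad_g(x)
    stationary = all(abs(gf[i] + mu * gg[i]) <= tol for i in range(2))
    slack = abs(mu * g(x)) <= tol
    return mu >= 0 and stationary and slack

print(kkt_holds((0.5, 1.5), mu=1.0))  # True
```

At \({\overline{x}}=(0.5,1.5)\) we have \(\nabla f=(-1,-1)\), \(\nabla g=(1,1)\), and \(g({\overline{x}})=0\), so both conditions hold with \(\mu =1\).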

3 Main Results

3.1 Method and Algorithm

Let \({\mathcal {X}}\) be a real vector space and \({\tilde{f}}:{\mathcal {X}}\rightarrow {\mathcal {N}}(\mathbb {R})\) a fuzzy nonlinear function. Consider the following two fuzzy nonlinear optimization problems:

$$\left\{ {\begin{array}{ll} {\min \;\;\tilde{f}(x),} \hfill \\ {{\text{subject to}}} \hfill \\ {\quad \tilde{g}_{j} (x){ \preccurlyeq }\tilde{0},\;\;j = \overline{{1,m}} ,} \hfill \\ {\quad x \in \mathbb{R}_{ + }^{n} ,} \hfill \\ \end{array} } \right.$$
(1)

and

$$\left\{ {\begin{array}{ll} {\min \;\;\tilde{f}(x),} \hfill \\ {{\text{subject to}}} \hfill \\ {\quad \tilde{g}_{j} (x){ \preccurlyeq }_{H} \tilde{0},\;\;j = \overline{{1,m}} ,} \hfill \\ {\quad x \in \mathbb{R}_{ + }^{n} ,} \hfill \\ \end{array} } \right.$$
(2)

where \({\tilde{f}}\) and \({\tilde{g}}_{j}\), \(j=1,2,\ldots ,m\), are nonlinear canonical fuzzy functions defined on the vector space \({\mathcal {X}}\). Each of these problems will be solved according to its corresponding solution concept. Denote by \({\tilde{S}}=\{x\in \mathbb {R}^{n}_{+} / {\tilde{g}}_{j}(x)\preccurlyeq {\tilde{0}},\ j=\overline{1,m}\}\) and \({\tilde{S}}^{H}=\{x\in \mathbb {R}^{n}_{+} / {\tilde{g}}_{j}(x)\preccurlyeq _{H} {\tilde{0}},\ j=\overline{1,m}\},\) respectively, the feasible sets of problems (1) and (2).

Let \({\mathcal {C}}\) be a convex cone of \({\mathcal {N}}(\mathbb {R})\) such that \({\tilde{f}}(x)\in {\mathcal {C}}\) for all \(x\in {\tilde{S}}\) (or for all \(x\in {\tilde{S}}^{H}\)). Based on the partial orderings \(\preccurlyeq\) and \(\preccurlyeq _{H}\) on \({\mathcal {N}}(\mathbb {R})\), we present two solution concepts for the previous problems.

Proposition 13

:

  1. (i)

    \({\overline{x}}\) is an optimal solution of problem (1) if and only if \({\tilde{f}}({\overline{x}})\in MIN({\tilde{f}}, {\mathcal {X}})\).

  2. (ii)

    \({\overline{x}}\) is an H-optimal solution of problem (2) if and only if \({\tilde{f}}({\overline{x}})\in H-MIN({\tilde{f}}, {\mathcal {X}})\).

Proposition 14

[13, 31, 32] Let \({\tilde{f}}:{\mathcal {X}}\rightarrow {\mathcal {N}}(\mathbb {R})\) be a fuzzy-valued function defined on an open set \({\mathcal {X}}\) of \(\mathbb {R}^{n}\) with \(\alpha\)-levels \({\tilde{f}}_{\alpha }=[{\tilde{f}}_{\alpha }^{L}; {\tilde{f}}_{\alpha }^{U}]\). Then, \({\tilde{f}}\) is convex at \({\overline{x}}\) if and only if \({\tilde{f}}_{\alpha }^{L}\) and \({\tilde{f}}_{\alpha }^{U}\) are convex at \({\overline{x}}\) for all \(\alpha \in [0,1]\).

Let \(\pi\) be the function defined in Remark 5. Then, we consider the following two optimization problems:

$$\left\{ {\begin{array}{*{20}l} {\min \;\;\pi \circ \tilde{f}(x),} \hfill \\ {{\text{subject to}}} \hfill \\ {(\pi \circ \tilde{g}_{j} (x)) \le \pi (\tilde{0}),\;\;j = \overline{{1,m}} ,} \hfill \\ {x \in \mathbb{R}_{ + }^{n} .} \hfill \\ \end{array} } \right.$$
(3)

and

$$\left\{ {\begin{array}{*{20}l} {\min \;\;\pi \circ \tilde{f}(x),} \hfill \\ {{\text{subject to}}} \hfill \\ {(\pi \circ \tilde{g}_{j} (x)) \le _{H} \pi (\tilde{0}),\;\;j = \overline{{1,m}} ,} \hfill \\ {x \in \mathbb{R}_{ + }^{n} .} \hfill \\ \end{array} } \right.$$
(4)

where \(\pi ({\tilde{0}})=(0,0)\) is the zero element of \(\mathbb {R}^{2}\) with respect to addition; \(\pi \circ {\tilde{f}}(x)=\big (-{\tilde{f}}_{\alpha }^{L}(x)-{\tilde{f}}_{\alpha }^{U}(x), {\tilde{f}}_{\alpha }^{L}(x)+{\tilde{f}}_{\alpha }^{U}(x)\big )=(\underline{{\tilde{f}}_{\alpha }}(x), \overline{{\tilde{f}}_{\alpha }}(x))\); and \(\pi \circ {\tilde{g}}_{j}(x)=(-{\tilde{g}}_{j\alpha }^{L}(x)-{\tilde{g}}_{j\alpha }^{U}(x), {\tilde{g}}_{j\alpha }^{L}(x)+{\tilde{g}}_{j\alpha }^{U}(x))=(\underline{{\tilde{g}}_{j\alpha }}(x), \overline{{\tilde{g}}_{j\alpha }}(x)),\ j=1,2,\ldots ,m.\)

Since \({\tilde{f}}\) and \({\tilde{g}}_{j}\), \(j=1,2,\ldots ,m\), are canonical fuzzy functions, the functions \(\underline{{\tilde{f}}_{\alpha }}\), \(\overline{{\tilde{f}}_{\alpha }}\), \(\underline{{\tilde{g}}_{j\alpha }}\), and \(\overline{{\tilde{g}}_{j\alpha }}\), \(j=1,\ldots ,m\), are continuous with respect to \(\alpha \in [0,1]\). Thus, we can consider the following new functions:

$$\begin{aligned} {\underline{f}}(x)=\int _{0}^{1}\underline{{\tilde{f}}_{\alpha }}(x)d\alpha \ \ \text {and} \ \ {\overline{f}}(x)=\int _{0}^{1}\overline{{\tilde{f}}_{\alpha }}(x)d\alpha , \end{aligned}$$
(5)

and

$$\begin{aligned}{} & {} {\underline{g}}_{j}(x)=\int _{0}^{1}\underline{{\tilde{g}}_{j\alpha }}(x)d\alpha \ \ \text {and} \ \ {\overline{g}}_{j}(x)\nonumber \\{} & {} =\int _{0}^{1}\overline{{\tilde{g}}_{j\alpha }}(x)d\alpha , \ j=1,2,\ldots ,m. \end{aligned}$$
(6)
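For triangular fuzzy coefficients, the integrals in (5) and (6) can be evaluated in closed form. The following sketch (plain Python; the names `tri_level` and `defuzzify_weight` are illustrative) checks numerically that the crisp coefficient produced by (5) for a triangular number \((a,b,c)\) is \((a+2b+c)/2\):

```python
def tri_level(a, b, c, alpha):
    """alpha-level bounds [L, U] of a triangular fuzzy number (a, b, c)."""
    return a + (b - a) * alpha, c - (c - b) * alpha

def defuzzify_weight(a, b, c, n=10_000):
    """Midpoint-rule approximation of the integral over alpha in [0, 1] of
    L(alpha) + U(alpha): the crisp coefficient appearing in f_bar, eq. (5).
    For a triangular number the closed form is (a + 2*b + c) / 2."""
    total = 0.0
    for k in range(n):
        alpha = (k + 0.5) / n
        L, U = tri_level(a, b, c, alpha)
        total += (L + U) / n
    return total

# Coefficients of Example 1 below: a~ = (1, 2, 3) and b~ = (0, 1, 2)
w_a = defuzzify_weight(1, 2, 3)   # -> 4.0
w_b = defuzzify_weight(0, 1, 2)   # -> 2.0
# hence f_bar(x) = 4*x1**2 + 2*x2**2 and f_under(x) = -f_bar(x),
# matching the bi-objective problem built from Example 1.
```

Since \(L(\alpha )+U(\alpha )\) is affine in \(\alpha\) for triangular numbers, the midpoint rule is exact here up to rounding.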

Remark 6

From Proposition 10, each constraint \({\tilde{g}}_{j}(x)\preccurlyeq {\tilde{0}}\) implies that \(\pi ({\tilde{g}}_{j}(x))\le \pi ({\tilde{0}})\), i.e., \(({\underline{g}}_{j}(x), {\overline{g}}_{j}(x))\le (0,0)\), so that \({\underline{g}}_{j}(x)\le 0\) and \({\overline{g}}_{j}(x)\le 0\). Since \({\underline{g}}_{j}(x)\le {\overline{g}}_{j}(x)\), we have \({\underline{g}}_{j}(x)\le {\overline{g}}_{j}(x)\le 0\), so it suffices to keep the constraint \({\overline{g}}_{j}(x)\le 0\) for all \(j=1,2,\ldots ,m\). Similarly, for problem (4) we keep the constraint \({\overline{g}}_{j}(x)\le _{H}0\).

Based on equations (5) and (6) and Remark 6, problems (3) and (4) are rewritten as the following deterministic nonlinear bi-objective optimization problems:

$$\begin{aligned} \left\{ \begin{array}{lll} \min \ \ ({\underline{f}}(x),{\overline{f}}(x) )\\ \text {subject to}\\ {\overline{g}}_{j}(x)\le 0,\ \ j=\overline{1,m},\\ x\in \mathbb {R}^{n}_{+}. \end{array} \right. \end{aligned}$$
(7)

and

$$\begin{aligned} \left\{ \begin{array}{lll} \min \ \ ({\underline{f}}(x),{\overline{f}}(x) )\\ \text {subject to}\\ {\overline{g}}_{j}(x)\le _{H} 0,\ \ j=\overline{1,m},\\ x\in \mathbb {R}^{n}_{+}. \end{array} \right. \end{aligned}$$
(8)

Theorem 15

Suppose \({\tilde{\Omega }}\subseteq \ker \ (\pi )\subseteq {\mathcal {C}}\). If \({\overline{x}}\) is a Pareto optimal solution of the problem (7), then \({\overline{x}}\) is an optimal solution of the problem (1).

Proof

Let \({\overline{x}}\) be a feasible solution of the problem (1). By Remark 6, \({\overline{x}}\) is a feasible solution of the problem (7). Furthermore, suppose that \({\overline{x}}\) is a Pareto optimal solution of the problem (7). Then, there is no x \((x\ne {\overline{x}})\) such that \(\pi ({\tilde{f}}(x))\le \pi ({\tilde{f}}({\overline{x}}))\). By Proposition 10, there is then no x \((x\ne {\overline{x}})\) such that \({\tilde{f}}(x)\preccurlyeq {\tilde{f}}({\overline{x}}),\) i.e., \({\tilde{f}}({\overline{x}})\in MIN({\tilde{f}}, {\mathcal {X}})\). Therefore, \({\overline{x}}\) is an optimal solution of the problem (1). \(\square\)

Theorem 16

Suppose \({\tilde{\Omega }}\subseteq \ker \ (\pi )\subseteq {\mathcal {C}}\). If \({\overline{x}}\) is a Pareto H-optimal solution of the problem (8), then \({\overline{x}}\) is an optimal solution of the problem (2).

Proof

The proof is similar to that of Theorem 15, using the binary relation \(\le _{H}\) and Proposition 11. \(\square\)

In order to establish the KKT-like optimality conditions for the fuzzy nonlinear optimization problems, we first present the following propositions:

Proposition 17

Let \({\mathcal {X}}\) be an open interval. If \(\underline{{\tilde{f}}_{\alpha }}\) and \(\overline{{\tilde{f}}_{\alpha }}\) are continuous on \([0,1]\times {\mathcal {X}}\) and admit partial derivatives \(\dfrac{\partial \underline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}\) and \(\dfrac{\partial \overline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}\), \(i=1,\ldots ,n\), that are themselves continuous on \([0,1]\times {\mathcal {X}}\), then the functions \({\underline{f}}\) and \({\overline{f}}\) are well defined on \({\mathcal {X}}\), are of class \(C^{1}\), and

$$\begin{aligned}{} & {} \forall x\in {\mathcal {X}},\ \dfrac{\partial {\underline{f}}(x)}{\partial x_{i}}=\int _{0}^{1}\dfrac{\partial \underline{{\tilde{f}}_{\alpha }}(x)}{\partial x_{i}} d\alpha \ \ \ \text {and} \ \ \ \dfrac{\partial {\overline{f}}(x)}{\partial x_{i}}\nonumber \\{} & {} =\int _{0}^{1}\dfrac{\partial \overline{{\tilde{f}}_{\alpha }}(x)}{\partial x_{i}} d\alpha \ \ \ \text {for} \ i=1,\ldots ,n. \end{aligned}$$
(9)
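As an illustrative numerical check of (9), consider the hypothetical one-dimensional fuzzy function with \(\alpha\)-levels \({\tilde{f}}_{\alpha }^{L}(x)=(1+\alpha )x^{2}\) and \({\tilde{f}}_{\alpha }^{U}(x)=(3-\alpha )x^{2}\), i.e., \({\tilde{f}}={\tilde{a}}\odot x^{2}\) with \({\tilde{a}}=(1,2,3)\); differentiating the integral and integrating the derivative should agree:

```python
N = 10_000
alphas = [(k + 0.5) / N for k in range(N)]   # midpoint rule on [0, 1]

def f_bar(x):
    # f_bar(x) = integral over alpha of (f_alpha^L + f_alpha^U)(x)
    return sum(((1 + a) * x**2 + (3 - a) * x**2) / N for a in alphas)

x0, h = 1.5, 1e-6
# left-hand side of (9): derivative of the integral (central difference)
lhs = (f_bar(x0 + h) - f_bar(x0 - h)) / (2 * h)
# right-hand side of (9): integral of the partial derivative
rhs = sum((2 * (1 + a) * x0 + 2 * (3 - a) * x0) / N for a in alphas)
# both sides equal 8 * x0 = 12 for this particular f~
```

Here \(f^{L}_{\alpha }+f^{U}_{\alpha }=4x^{2}\) independently of \(\alpha\), so both sides reduce to \(8x_{0}\); the check is a sanity test under these illustrative levels, not a proof.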

Proposition 18

Let \({\tilde{f}}\) be a fuzzy-valued function defined on an open subset \({\mathcal {X}}\) of \(\mathbb {R}^{n}\). If \({\tilde{f}}\) is continuously differentiable in a neighborhood of \({\overline{x}}\), then the real-valued functions \({\underline{f}}\) and \({\overline{f}}\) are continuously differentiable at \({\overline{x}}\) and

$$\begin{aligned}{} & {} \dfrac{\partial {\underline{f}}}{\partial x_{i}}({\overline{x}})=\int _{0}^{1}\dfrac{\partial \underline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}({\overline{x}}) d\alpha \ \ \ \text {and} \ \ \ \ \dfrac{\partial {\overline{f}}}{\partial x_{i}}({\overline{x}})\nonumber \\{} & {} =\int _{0}^{1}\dfrac{\partial \overline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}({\overline{x}}) d\alpha . \end{aligned}$$
(10)

Proof

Let us show that the partial derivatives \(\dfrac{\partial {\underline{f}}}{\partial x_{i}}\) and \(\dfrac{\partial {\overline{f}}}{\partial x_{i}}\) exist in a neighborhood of \({\overline{x}}\) and are continuous at \({\overline{x}}\) for each \(i=1,\ldots ,n\). The proof is given only for \(\dfrac{\partial {\underline{f}}}{\partial x_{i}}\); that for \(\dfrac{\partial {\overline{f}}}{\partial x_{i}}\) is analogous. Since \({\tilde{f}}\) is a canonical fuzzy function that is continuously differentiable in a neighborhood of \({\overline{x}}\), the functions \(\underline{{\tilde{f}}_{\alpha }}\) and \(\overline{{\tilde{f}}_{\alpha }}\) are continuously differentiable at \({\overline{x}}\) for every \(\alpha \in [0,1]\). In particular, \(\dfrac{\partial \underline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}\) is continuous at \({\overline{x}},\) i.e., \(\forall \epsilon >0\), there is a \(\delta >0\) such that \(\mid x-{\overline{x}}\mid<\delta \Rightarrow \mid \dfrac{\partial \underline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}(x)-\dfrac{\partial \underline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}({\overline{x}})\mid <\epsilon\) for each \(\alpha \in [0,1]\). From Proposition 17, \(\mid x-{\overline{x}}\mid <\delta\) then implies that, for all \(i=1,\ldots ,n\),

$$\begin{aligned}&\mid \dfrac{\partial {\underline{f}}}{\partial x_{i}}(x)-\dfrac{\partial {\underline{f}}}{\partial x_{i}}({\overline{x}})\mid \\&=\mid \int _{0}^{1}\dfrac{\partial \underline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}(x)d\alpha -\int _{0}^{1}\dfrac{\partial \underline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}({\overline{x}})d\alpha \mid \\&=\mid \int _{0}^{1}\Big [\dfrac{\partial \underline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}(x)-\dfrac{\partial \underline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}({\overline{x}})\Big ]d\alpha \mid \\&\le \int _{0}^{1}\mid \dfrac{\partial \underline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}(x)-\dfrac{\partial \underline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}({\overline{x}})\mid d\alpha <\epsilon . \end{aligned}$$

Therefore, \(\dfrac{\partial {\underline{f}}}{\partial x_{i}}\) is continuous at \({\overline{x}}\) for all \(i=1,\ldots ,n\). \(\square\)

Proposition 19

Let \({\tilde{f}}\) be a fuzzy-valued function defined on an open subset \({\mathcal {X}}\) of \(\mathbb {R}^{n}\). If \({\tilde{f}}\) is continuously H-differentiable in a neighborhood of \({\overline{x}}\), then the real-valued functions \({\underline{f}}\) and \({\overline{f}}\) are continuously differentiable at \({\overline{x}}\) with

$$\begin{aligned}{} & {} \dfrac{\partial {\underline{f}}}{\partial x_{i}}({\overline{x}})=\int _{0}^{1}\dfrac{\partial \underline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}({\overline{x}}) d\alpha \ \ \ \text {and} \ \ \ \ \dfrac{\partial {\overline{f}}}{\partial x_{i}}({\overline{x}})\nonumber \\{} & {} =\int _{0}^{1}\dfrac{\partial \overline{{\tilde{f}}_{\alpha }}}{\partial x_{i}}({\overline{x}}) d\alpha . \end{aligned}$$
(11)

Proof

The proof is similar to that of Proposition 18, assuming that \({\tilde{f}}\) is continuously H-differentiable in a neighborhood of \({\overline{x}}\). \(\square\)

Remark 7

Propositions 17, 18, and 19 remain valid for the constraint functions \({\tilde{g}}_{j}\), \(\forall\) \(j=1,2,\ldots ,m\).

Let us present the Karush–Kuhn–Tucker optimality conditions for optimization problems (1) and (2).

Theorem 20

Let \({\tilde{\Omega }} \subseteq \ker (\pi )\subseteq {\mathcal {C}}\). Suppose that the fuzzy-valued constraint functions \({\tilde{g}}_{j}:{\mathcal {X}}\rightarrow {\mathcal {N}}(\mathbb {R})\) are convex on \({\mathcal {X}}\) and continuously differentiable at \({\overline{x}}\in {\mathcal {X}}\) for \(j=1,2,\ldots ,m\), and that the objective function \({\tilde{f}}\) is convex and continuously differentiable at \({\overline{x}}\). If, in addition, there are Lagrange multipliers \(\mu _{j}\ge 0\), \(j=1,2,\ldots ,m\), and reals \(\lambda _{1}, \lambda _{2}>0\) \((\lambda _{1}\ne \lambda _{2}),\) such that

  1. (i)

    \(\lambda _{1}\nabla {\underline{f}}({\overline{x}})+\lambda _{2}\nabla {\overline{f}}({\overline{x}})+\displaystyle \sum _{j=1}^{m}\mu _{j}\nabla {\overline{g}}_{j}({\overline{x}})=0\);

  2. (ii)

    \(\mu _{j}{\overline{g}}_{j}({\overline{x}})=0\) for all \(j=1,2,\ldots ,m\).

Then \({\overline{x}}\) is an optimal solution of the problem (1).

Proof

We prove by contradiction. Suppose that \({\overline{x}}\) is not an optimal solution of the problem (1); then there exists \(x(\ne {\overline{x}})\in {\tilde{S}}\) such that \({\tilde{f}}(x)\preccurlyeq {\tilde{f}}({\overline{x}})\). Hence \(\pi ({\tilde{f}}(x))\le \pi ({\tilde{f}}({\overline{x}}))\), i.e., \(({\underline{f}}(x), {\overline{f}}(x))\le ({\underline{f}}({\overline{x}}), {\overline{f}}({\overline{x}}))\).

Let \(\lambda _{1}, \lambda _{2}>0\) with \(\lambda _{1}\ne \lambda _{2}\), then

$$\begin{aligned} \lambda _{1}{\underline{f}}(x)+\lambda _{2}{\overline{f}}(x)\le \lambda _{1}{\underline{f}}({\overline{x}})+\lambda _{2}{\overline{f}}({\overline{x}})\Rightarrow F(x)\le F({\overline{x}}) \end{aligned}$$

with \(F(x)= \lambda _{1}{\underline{f}}(x)+\lambda _{2}{\overline{f}}(x)\) and \(F({\overline{x}})=\lambda _{1}{\underline{f}}({\overline{x}})+\lambda _{2}{\overline{f}}({\overline{x}})\).

Since \({\tilde{f}}\) is convex and continuously differentiable at \({\overline{x}}\), the functions \({\underline{f}}\) and \({\overline{f}}\) are also convex and continuously differentiable at \({\overline{x}}\) by Proposition 18, which implies that F is convex and continuously differentiable. Moreover, \({\overline{x}}\in {\tilde{S}}\), and the KKT conditions give:

  1. (i)

    \(\lambda _{1}\nabla {\underline{f}}({\overline{x}})+\lambda _{2}\nabla {\overline{f}}({\overline{x}})+\displaystyle \sum _{j=1}^{m}\mu _{j}\nabla {\overline{g}}_{j}({\overline{x}})=0\);

  2. (ii)

    \(\mu _{j}{\overline{g}}_{j}({\overline{x}})=0\) for all \(j=1,2,\ldots ,m\).

According to Theorem 12, \({\overline{x}}\) is an optimal solution of the objective function F under the constraints \({\overline{g}}_{j}\), \(j=1,2,\ldots ,m\), and hence an optimal solution of the problem (1). This contradicts the initial assumption. Therefore, \({\overline{x}}\) is an optimal solution of the problem (1). \(\square\)

Theorem 21

Let \({\tilde{\Omega }} \subseteq \ker (\pi )\subseteq {\mathcal {C}}\). Suppose that the fuzzy-valued constraint functions \({\tilde{g}}_{j}:{\mathcal {X}}\rightarrow {\mathcal {N}}(\mathbb {R})\) are convex on \({\mathcal {X}}\) and continuously H-differentiable at \({\overline{x}}\in {\mathcal {X}}\) for \(j=1,2,\ldots ,m\), and that the objective function \({\tilde{f}}\) is convex and continuously H-differentiable at \({\overline{x}}\). If, in addition, there are Lagrange multipliers \(\mu _{j}\ge 0\), \(j=1,2,\ldots ,m\), and reals \(\lambda _{1}, \lambda _{2}>0\) \((\lambda _{1}\ne \lambda _{2}),\) such that

  1. 1.

    \(\lambda _{1}\nabla {\underline{f}}({\overline{x}})+\lambda _{2}\nabla {\overline{f}}({\overline{x}})+\displaystyle \sum _{j=1}^{m}\mu _{j}\nabla {\overline{g}}_{j}({\overline{x}})=0\);

  2. 2.

    \(\mu _{j}{\overline{g}}_{j}({\overline{x}})=0\) for all \(j=1,2,\ldots ,m\).

Then, \({\overline{x}}\) is an H-optimal solution of the problem (2).

Proof

The proof is similar to that of Theorem 20, using the partial ordering \(\preccurlyeq _{H}\). \(\square\)

Therefore, the algorithm of the method can be summarized as follows.

Algorithm

Data: Fuzzy nonlinear program

Step 1: Defuzzification. It converts the initial fuzzy problem into a deterministic bi-objective nonlinear problem.

Step 2: Resolution. It solves the deterministic problem using the Karush–Kuhn–Tucker optimality conditions.

Step 3: Updating. It transforms the deterministic solution obtained in Step 2 into a fuzzy optimal solution of the initial problem.
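The three steps can be sketched in plain Python on the data of Example 1 below; the projected-gradient solver in Step 2 is an illustrative stand-in for any routine that solves the KKT system of Theorem 20, not part of the method itself:

```python
from math import sqrt

# Step 1 (defuzzification), done analytically for Example 1 with
# lambda1 = 1, lambda2 = 2: the scalarized objective is
#   F(x) = lambda1*f_under(x) + lambda2*f_bar(x) = 4*x1**2 + 2*x2**2,
# and the constraint 2*(x1-2)**2 + 2*(x2-2)**2 <= 4 describes the disk
# of radius sqrt(2) centered at (2, 2), which lies entirely in x >= 0.

def project(x1, x2):
    """Euclidean projection onto the disk (x1-2)^2 + (x2-2)^2 <= 2."""
    d = sqrt((x1 - 2) ** 2 + (x2 - 2) ** 2)
    if d <= sqrt(2):
        return x1, x2
    s = sqrt(2) / d
    return 2 + s * (x1 - 2), 2 + s * (x2 - 2)

# Step 2 (resolution): projected-gradient descent on F
x1, x2, step = 1.0, 1.0, 0.01
for _ in range(20_000):
    x1, x2 = project(x1 - step * 8 * x1, x2 - step * 4 * x2)
# (x1, x2) approaches the KKT point (0.8433, 1.1864) of system (14)

# Step 3 (updating): substitute back into the fuzzy objective
# f~(x) = (a~ (.) x1^2) (+) (b~ (.) x2^2), a~ = (1,2,3), b~ = (0,1,2)
f_min = tuple(a * x1 ** 2 + b * x2 ** 2 for a, b in [(1, 0), (2, 1), (3, 2)])
```

Since F is strongly convex and the step size is below \(2/L\) for its gradient Lipschitz constant \(L=8\), the iteration converges to the unique constrained minimizer.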

3.2 Numerical Examples

Example 1

[12] Consider the following fuzzy nonlinear problem:

$$\begin{aligned} \left\{ \begin{array}{lll} \min \ {\tilde{f}}(x_{1},x_{2})=({\tilde{a}}\odot x^{2}_{1})\oplus ({\tilde{b}}\odot x^{2}_{2}),\\ st\\ ({\tilde{b}}\odot (x_{1}-2)^{2})\oplus ({\tilde{b}}\odot (x_{2}-2)^{2})\preccurlyeq {\tilde{c}},\\ x_{1}\ge 0,\ x_{2}\ge 0, \end{array} \right. \end{aligned}$$
(12)

with \({\tilde{a}}=(1,2,3)\), \({\tilde{b}}=(0,1,2)\), and \({\tilde{c}}=(0,2,4)\) as triangular fuzzy numbers.

Let us set \({\tilde{g}}(x)=({\tilde{b}}\odot (x_{1}-2)^{2})\oplus ({\tilde{b}}\odot (x_{2}-2)^{2})\ominus {\tilde{c}}\).

To convert the fuzzy nonlinear program into a deterministic bi-objective problem, we apply the additive and positively homogeneous function \(\pi\) defined in Remark 5. We have

$$\left\{ {\begin{array}{*{20}l} {\min \;\pi \circ ((\tilde{a} \odot x_{1}^{2} ) \oplus (\tilde{b} \odot x_{2}^{2} )),} \hfill \\ {st} \hfill \\ {\pi \circ ((\tilde{b} \odot (x_{1} - 2)^{2} ) \oplus (\tilde{b} \odot (x_{2} - 2)^{2} )) \le \pi (\tilde{c}),} \hfill \\ {x_{1} \ge 0,\;x_{2} \ge 0,} \hfill \\ \end{array} } \right.$$
(13)

Using Remark 6, problem (13) is rewritten as the following deterministic bi-objective nonlinear problem:

$$\left\{ {\begin{array}{ll} {\min \;({\underline{f}} ,{\overline{f}}) = ( - 4x_{1}^{2} - 2x_{2}^{2} ,\;4x_{1}^{2} + 2x_{2}^{2} ),} \hfill \\ {st} \hfill \\ {2(x_{1} - 2)^{2} + 2(x_{2} - 2)^{2} \le 4,} \hfill \\ {x_{1} \ge 0,\;x_{2} \ge 0.} \hfill \\ \end{array} } \right.$$

The Karush–Kuhn–Tucker optimality conditions are applied according to Theorem 20. Setting \(\lambda _{1}=1\) and \(\lambda _{2}=2\), we obtain the following Karush–Kuhn–Tucker conditions:

$$\left\{ {\begin{array}{ll} {8x_{1} + 4\mu (x_{1} - 2) = 0} \hfill \\ {4x_{2} + 4\mu (x_{2} - 2) = 0} \hfill \\ {\mu (2(x_{1} - 2)^{2} + 2(x_{2} - 2)^{2} - 4) = 0.} \hfill \\ \end{array} } \right.$$
(14)

By solving system (14), we get the solution \((x_{1},x_{2})=(0.8432874, 1.186356)\) with \(\mu =1.458077\). By Theorem 20, \({\overline{x}}=({\overline{x}}_{1},{\overline{x}}_{2})=(0.8432874, 1.186356)\) is an optimal solution of the problem (12).
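As a sanity check, system (14) can also be solved numerically. The sketch below (plain Python; assuming the constraint is active, so that \(\mu >0\)) reduces (14) to a single equation in \(\mu\) and applies bisection:

```python
# From the first two equations of (14):
#   8*x1 + 4*mu*(x1 - 2) = 0  =>  x1 = 2*mu/(2 + mu)
#   4*x2 + 4*mu*(x2 - 2) = 0  =>  x2 = 2*mu/(1 + mu)
# Substituting into the active constraint 2*(x1-2)^2 + 2*(x2-2)^2 = 4
# leaves one monotone equation in mu, solved here by bisection.
def residual(mu):
    x1 = 2 * mu / (2 + mu)
    x2 = 2 * mu / (1 + mu)
    return (x1 - 2) ** 2 + (x2 - 2) ** 2 - 2

lo, hi = 0.1, 10.0          # residual(lo) > 0 > residual(hi)
for _ in range(100):
    mid = (lo + hi) / 2
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
mu = (lo + hi) / 2          # -> 1.458077...
x1 = 2 * mu / (2 + mu)      # -> 0.8432874...
x2 = 2 * mu / (1 + mu)      # -> 1.186356...
```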

This problem has already been solved by Pathak et al. [12]. Table 1 compares the optimal solution and the ranking function value of our method with those of Pathak et al.

We observe that \({\mathcal {R}}_{1}< {\mathcal {R}}_{2}\).

Table 1 Comparison of our method with that of Pathak et al.

Example 2

[8] Consider the following second fuzzy nonlinear problem:

$$\left\{ {\begin{array}{ll} {\min \;\tilde{f}(x_{1} ,x_{2} ) = (1,3,4)x_{1}^{2} + (1,2,3)x_{2}^{2} ,} \hfill \\ {st} \hfill \\ {(0,1,3)x_{1}^{2} + (2,3,5)x_{2}^{2} \le (3,4,6),} \hfill \\ {(1,2,4)x_{1}^{2} - (0,1,2)x_{2}^{2} \le (1,2,5),} \hfill \\ {x_{1} \ge 0,\;x_{2} \ge 0.} \hfill \\ \end{array} } \right.$$
(15)

The same process as in Example 1 yields the solution of the problem (15): \((x_{1},x_{2})=(1.19, 0.64)\) with \(\lambda _{1}=1\), \(\lambda _{2}=2\), \(\mu _{1}=1.35\), and \(\mu _{2}=0.7\). By Theorem 20, the solution \({\overline{x}}=({\overline{x}}_{1}, {\overline{x}}_{2})=(1.19, 0.64)\) is an optimal solution of the problem (15).

This problem has already been solved by Kumar et al. [8]. Table 2 compares the optimal solution and the ranking function value of our method with those of Kumar et al.

We also observe that \({\mathcal {R}}_{1}< {\mathcal {R}}_{2}\).

Table 2 Comparison of our method with that of Kumar et al.

Example 3

[12] Consider the following third fuzzy nonlinear problem:

$$\left\{ {\begin{array}{ll} {\min \tilde{f}(x) = (\tilde{a} \odot x_{1}^{2} ) \oplus (\tilde{b} \odot x_{2}^{2} )} \hfill \\ {st} \hfill \\ {(x_{1} - 2)^{2} + (x_{2} - 2)^{2} \le 1} \hfill \\ {x_{1} ,x_{2} \ge 0.} \hfill \\ \end{array} } \right.$$
(16)

where \({\tilde{a}}=(1,2,3)\) and \({\tilde{b}}=(0,1,2).\) To convert the fuzzy nonlinear program into a deterministic bi-objective problem, we apply the additive and positively homogeneous function \(\pi\) and obtain

$$\left\{ {\begin{array}{ll} {\min \;({\underline{f}} ,{\overline{f}}) = \left( { - 4x_{1}^{2} - 2x_{2}^{2} ,4x_{1}^{2} + 2x_{2}^{2} } \right)} \hfill \\ {st} \hfill \\ {(x_{1} - 2)^{2} + (x_{2} - 2)^{2} \le 1} \hfill \\ {x_{1} ,x_{2} \ge 0.} \hfill \\ \end{array} } \right.$$

Taking \(\lambda _{1}=1\) and \(\lambda _{2}= 2\), we have the following Karush–Kuhn–Tucker conditions:

$$\begin{aligned} 8x_{1}+2\mu (x_{1}-2)=0\\ 4x_{2}+2\mu (x_{2}-2)=0\\ \mu \left( (x_{1}-2)^{2}+(x_{2}-2)^{2}-1\right) =0 \end{aligned}$$

By solving these equations, we get the solution \(x=(1.2,\ 1.5)\) with \(\mu =6\). The minimum value of the objective function is \({\tilde{f}}_{\min }= (1.44, 5.13, 8.82)\), and the corresponding ranking function value is 5.13.

This problem has already been solved by Pathak et al. [12]. Table 3 compares the optimal solution and the ranking function value of our method with those of Pathak et al.

We also observe that \({\mathcal {R}}_{1}={\mathcal {R}}_{2}\).

Table 3 Comparison of our method with that of Pathak et al.

Example 4

[31] Consider the following fourth fuzzy nonlinear problem:

$$\left\{ {\begin{array}{*{20}l} {\min \tilde{f}(x) = \tilde{3}x_{1} \oplus \tilde{2}x_{2}^{2} } \hfill \\ {st} \hfill \\ {(x_{1} - 2)^{2} + x_{2}^{2} \le 4} \hfill \\ {x_{1} ,x_{2} \ge 0.} \hfill \\ \end{array} } \right.$$
(17)

After defuzzification, we have

$$\left\{ {\begin{array}{ll} {\min \;({\underline{f}} ,{\overline{f}}) = \left( { - 6.5x_{1} - 4.5x_{2}^{2} ,6.5x_{1} + 4.5x_{2}^{2} } \right)} \hfill \\ {st} \hfill \\ {(x_{1} - 2)^{2} + x_{2}^{2} \le 4} \hfill \\ {x_{1} ,x_{2} \ge 0.} \hfill \\ \end{array} } \right.$$

Setting \(\lambda _{1}=1\) and \(\lambda _{2}=2\), we obtain the KKT conditions

$$\begin{aligned} 6.5 +2\mu (x_{1}-2)=0\\ 9x_{2}+2\mu x_{2}=0\\ \mu \left( (x_{1}-2)^{2}+x_{2}^{2}-4\right) =0 \end{aligned}$$

By solving these equations, we obtain \(x_{1}=0\), \(x_{2}=0\), and \(\mu =1.625\), so that \({\tilde{f}}_{\min }=(0,0,0)\) and the value of the ranking function is 0.
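The arithmetic behind these values can be checked directly (`mu` and `g` are illustrative names):

```python
# KKT system of Example 4 at the reported point x = (0, 0):
# the second equation, 9*x2 + 2*mu*x2 = 0, holds trivially for x2 = 0,
# and the first, 6.5 + 2*mu*(x1 - 2) = 0 with x1 = 0, gives mu = 6.5/4.
mu = 6.5 / 4                       # = 1.625
g = (0 - 2) ** 2 + 0 ** 2 - 4      # constraint value at (0, 0)
assert g == 0                      # constraint active, so mu * g = 0 holds
```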

This problem has already been solved by Chalco-Cano et al. [31]. Table 4 compares the optimal solution and the ranking function value of our method with those of Chalco-Cano et al.

We also observe that \({\mathcal {R}}_{1}= {\mathcal {R}}_{2}\).

Table 4 Comparison of our method with that of Chalco-Cano et al.

Example 5

[33] A manufacturing factory is going to produce two kinds of products, A and B, in a period (such as one month). The production of A and B requires three kinds of resources, \(R_{1}\), \(R_{2}\), and \(R_{3}\). Producing one unit of product A requires 2, 4, and 3 units of these resources, respectively; producing one unit of product B requires 3, 2, and 2 units, respectively. The planned available amounts of resources \(R_{1}\) and \(R_{2}\) are 50 and 44 units, respectively, but there are additional safety stores of 30 and 20 units of material administered by the general manager. The estimated available quantity of resource \(R_{3}\) is 36 units, with an estimation error of 5 units. Let \(x_{1}\) and \(x_{2}\) denote the planned production quantities of A and B, respectively. Further assume that the unit costs and sale prices of products A and B are \(UC_{1}=c_{1}\), \(UC_{2}=c_{2}\) and \(US_{1}=\dfrac{k_{1}}{x_{1}^{1/\alpha _{1}}}\), \(US_{2}=\dfrac{k_{2}}{x_{2}^{1/\alpha _{2}}}\), respectively. The problem can be described as

$$\left\{ {\begin{array}{ll} {\max f(x) = k_{1} x_{1}^{{1 - 1/\alpha _{1} }} - c_{1} x_{1} + k_{2} x_{2}^{{1 - 1/\alpha _{2} }} - c_{2} x_{2} } \hfill \\ {st} \hfill \\ {2x_{1} + 3x_{2} \le \widetilde{{50}}} \hfill \\ {4x_{1} + 2x_{2} \le \widetilde{{44}}} \hfill \\ {3x_{1} + 2x_{2} = \widetilde{{36}}} \hfill \\ {x_{1} ,x_{2} \ge 0} \hfill \\ {r = 1,\;k_{1} = 50} \hfill \\ {c_{1} = 8.0,\;k_{2} = 45,\;c_{2} = 10,\;\alpha _{1} = \alpha _{2} = 2.} \hfill \\ \end{array} } \right.$$
(18)

With our method, we obtained \(x=(9.76, 5.06)\) with \(f_{\max }=128.75\). This example has also been solved by Jiafu Tang et al. [33]. In their method, they make the following assumption: the decision-maker hopes that the total profit reaches an aspiration level \(z_{0}\) and is not less than a lower level \(z_{0}-p_{0}\). This allows them to solve the problem by varying the value of the parameter p (\(p_{0}=30, p_{1}=30, p_{2}=20, p_{3}^{-}=p_{3}^{+}=5.0\)). They obtained several solutions, the best of which coincides with ours; see Table 5.
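The reported optimum admits a direct sanity check. Under our reading that the safety stores and the estimation error relax the resource limits to \(2x_{1}+3x_{2}\le 80\), \(4x_{1}+2x_{2}\le 64\), and \(31\le 3x_{1}+2x_{2}\le 41\) (this interpretation is an assumption), the unconstrained stationary point of f with \(\alpha _{1}=\alpha _{2}=2\) is \(x_{i}=(k_{i}/2c_{i})^{2}\), which turns out to satisfy the relaxed constraints:

```python
from math import sqrt

# Data of problem (18)
k1, c1, k2, c2 = 50.0, 8.0, 45.0, 10.0

# With alpha1 = alpha2 = 2, f(x) = k1*sqrt(x1) - c1*x1 + k2*sqrt(x2) - c2*x2,
# so df/dxi = ki/(2*sqrt(xi)) - ci = 0 gives xi = (ki/(2*ci))**2.
x1 = (k1 / (2 * c1)) ** 2          # = 9.765625, reported as 9.76
x2 = (k2 / (2 * c2)) ** 2          # = 5.0625,   reported as 5.06
f_max = k1 * sqrt(x1) - c1 * x1 + k2 * sqrt(x2) - c2 * x2   # = 128.75

# Feasibility under the assumed relaxed resource limits:
assert 2 * x1 + 3 * x2 <= 80        # R1 plus 30-unit safety store
assert 4 * x1 + 2 * x2 <= 64        # R2 plus 20-unit safety store
assert 31 <= 3 * x1 + 2 * x2 <= 41  # R3 = 36 with 5-unit estimation error
```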

Here, it is not necessary to compute the ranking function value because the objective function is deterministic; a direct comparison of the optimal function values is made. We observe that the two methods give the same optimal solution.

Table 5 Comparison of our method with that of Jiafu Tang et al.

3.3 Discussion

Tables 1 to 5 each report the comparison of our method with another one on an example. In the first two of these five examples, both the objective function and the constraint functions are fuzzy. In the next two examples, the objective function is fuzzy and the constraint functions are deterministic. In the final example, only the constraint functions are fuzzy.

Table 1:

compares the results of our method and that of Pathak et al. The example solved has a fuzzy objective function and a fuzzy constraint function. In this table, our method performs best because it obtains the smaller value of the ranking function.

Table 2:

shows the results of our method and that of Kumar et al. The example treated here also has a fuzzy objective function and fuzzy constraint functions, as the previous one. We can observe that our method is again the more effective.

Table 3:

presents a comparison of our method and that of Pathak et al. In this case, the example has a different form than in the first case: the objective function is fuzzy, but the constraint functions are deterministic. From the results, the two methods are equivalent because they give the same result.

Table 4:

also highlights the comparison of our method with that of Chalco-Cano et al. Here again, we have a fuzzy objective function and deterministic constraint functions. The results of both methods are identical, so the two methods are equivalent.

Table 5:

presents a comparative study of our method with that of Jiafu Tang et al. for the resolution of a production problem in a manufacturing factory. For this real-world problem, the objective function is deterministic and the constraint functions are fuzzy. Both methods give the same solution.

We can summarize from these comments that our method is equivalent to the other methods considered when only the objective function or only the constraint functions are fuzzy, and that it gives the best solution when both the objective and constraint functions are fuzzy. Therefore, on the treated examples, our method is the best option for solving nonlinear fuzzy optimization problems in which both the objective and constraint functions are fuzzy.

In this work, we focused on cases in which the variables are deterministic. We are unsure how our method will behave when the problem is fully fuzzy, because at this stage the method makes no provision for the defuzzification of a product of fuzzy numbers.

4 Conclusion

In this work, fuzzy nonlinear optimization problems were solved using the null set concept and the Karush–Kuhn–Tucker optimality conditions, which led us to use the notions of ordering cones and partial orderings to choose the optimal solution. In practice, our method transforms any fuzzy nonlinear optimization problem into a deterministic bi-objective nonlinear optimization problem before applying the Karush–Kuhn–Tucker optimality conditions. The theoretical properties of the method were established in several theorems. In addition, five numerical examples were treated in order to highlight its numerical performance, and the results were compared with those of other methods from the literature. This comparison enabled us to evaluate the efficacy of our approach in solving fuzzy nonlinear optimization problems with fuzzy constraint functions. According to the treated examples and the chosen methods, our method is a suitable alternative for fuzzy nonlinear optimization problems.

In future research, we will start with fully fuzzy optimization problems. Afterward, we will investigate the solution of the multiobjective cases. Finally, we will provide a comparative study of the extended version with other methods from the literature on many test problems and real-world problems.