1 Introduction

Real-life optimization problems inherently involve more than one objective function to be optimized (minimized or maximized) under the given circumstances. A mathematical programming problem with two or more objectives is termed a multiobjective programming problem (MOPP). The modeling and optimization structure of an MOPP depends on the nature of the optimization environment, such as a deterministic framework or vague and random uncertainties. In MOPPs, it rarely happens that a single solution optimizes all objectives simultaneously; however, it is usually possible to obtain a compromise solution that satisfies every objective function to an acceptable degree. Thus, a considerable number of solution methods have been suggested in the literature to solve MOPPs.

Zadeh (1965) proposed the fuzzy set (FS), which was subsequently explored in multiple-criteria, multiple-attribute, and multiobjective decision-making problems. Zimmermann (1978) then investigated a fuzzy programming technique for multiobjective optimization based on membership functions (degrees of belongingness) for the marginal evaluation of each objective function. The fuzzy programming approach (FPA) maximizes the satisfaction degree of the decision-maker(s) (DM(s)) while dealing with multiple objectives simultaneously. Ahmad (2021a, 2021b) discussed the modeling and optimization of MOPPs under uncertainty.

A well-known limitation of the fuzzy set is that it cannot express the non-membership degree of an element in the set. To overcome this, Atanassov (1986) introduced the intuitionistic fuzzy set (IFS). Based on the IFS, Angelov (1997) first addressed the intuitionistic fuzzy programming approach (IFPA) for real-life decision-making problems. The IFPA is a more flexible and realistic optimization technique than the fuzzy technique because it handles the membership and non-membership functions simultaneously. Mahmoodirad et al. (2018), Ahmad et al. (2021b), and Ahmadini and Ahmad (2021b) studied the multiobjective transportation problem under an intuitionistic fuzzy environment. Ahmad and John (2021) and Ahmad et al. (2021a, 2021c, 2021e) suggested intuitionistic fuzzy programming methods for multiobjective linear programming problems.

In reality, hesitation is one of the most common concerns in the decision-making process. It often happens that the DM(s) are not sure about a single specific value for the membership degree (degree of belongingness) of an element in the feasible decision set; instead, a set of several possible, and possibly conflicting, values may be assigned. Motivated by such cases, Torra and Narukawa (2009) proposed the hesitant fuzzy set (HFS), an extension of the FS. The HFS deals with a set of different possible membership degrees of an element in a feasible decision set. Researchers have widely used the hesitant fuzzy technique. Ahmad et al. (2018) and Ahmadini and Ahmad (2021a) presented research studies on MOPPs under hesitancy. Rouhbakhsh et al. (2020) studied MOPPs under a hesitant fuzzy environment. Bharati (2018) discussed hesitant fuzzy optimization techniques for multiobjective linear programming problems. Zhou and Xu (2018) proposed a solution algorithm for portfolio optimization under a hesitant fuzzy environment.

Smarandache (1999) presented the neutrosophic set (NS) as an extension and generalization of the FS and IFS. The NS deals with three different membership functions, namely truth (degree of belongingness), indeterminacy (belongingness up to some extent), and falsity (degree of non-belongingness) of an element in the feasible decision set. Later, many researchers, such as Ahmad and Adhami (2019a, b), used the neutrosophic decision set to develop solution methods for MOPPs. Ahmad et al. (2020, 2021c) presented modified neutrosophic optimization techniques for multiobjective supply chain planning problems. Abdel-Basset et al. (2018) presented a study on a fully neutrosophic linear programming problem. Ahmad et al. (2019), Adhami and Ahmad (2020), and Ahmad (2021a) addressed neutrosophic goal programming techniques for shale gas water management under uncertainty. Ahmad and Adhami (2019a), Ahmad (2021b), and Ahmad et al. (2021a) suggested neutrosophic optimization models for multiobjective transportation problems.

None of the decision sets and optimization techniques discussed above can unify and capture the two different facets of human perception, namely indeterminacy and hesitation degrees, that arise simultaneously while making fruitful decisions. To address this situation, we take advantage of the single-valued neutrosophic hesitant fuzzy decision set and, consequently, develop a neutrosophic hesitant fuzzy multiobjective programming problem (NHFMOPP). Since the proposed NHFMOPP is a continuous optimization problem, we define the different membership functions based on the opinions of different experts/managers regarding the decisions (Bharati 2018; Ahmad et al. 2018). When dealing with multicriteria decision-making problems, the neutrosophic hesitant fuzzy set becomes more complex, and the associated mathematical calculations are more involved than the classical ones (Ye 2015). In handling multicriteria decision-making problems, neutrosophic hesitant fuzzy parameters are taken into consideration; for more details about these parameters and their numerical operations, see Ye (2015), Bharati (2018), and Ahmad et al. (2022). The advanced modeling and optimization framework of NHFMOPPs is very close to real-life scenarios. It ensures the most promising optimization environment by reducing the violation of risk reliability for the DM(s) under a neutrosophic hesitant fuzzy environment. A novel solution concept, named neutrosophic hesitant fuzzy Pareto optimal solutions (NHFPOSs), is investigated for the proposed NHFMOPPs, and two different optimization techniques are suggested to determine the NHFPOSs of NHFMOPPs. The robustness of an NHFPOS is verified by performing optimality tests. The optimization techniques explicitly consider the neutral thoughts (indeterminacy degrees) and the different opinions (hesitation degrees) of various experts or DMs while making the decisions. The opportunity to interact with several distinguished experts or DMs is also advantageous in quantitative decision-planning scenarios. Hence, the proposed optimization techniques capture all sorts of vagueness, impreciseness, and incompleteness that inevitably arise in real-life optimization problems and provide flexibility in the decision-making scenario. Thus, this paper can be considered an extension of the work carried out by Ahmad et al. (2018), Ahmadini and Ahmad (2021a, 2021b), Bharati (2018), Ahmad and Adhami (2019a), Adhami et al. (2021), and Rouhbakhsh et al. (2020) to the neutrosophic hesitant fuzzy environment. The propounded modeling and optimization structure of NHFMOPPs can be regarded as a unifying technique that contains the above techniques as special cases. To the best of the authors' knowledge, no such solution concept has been propounded in the literature for modeling and solving NHFMOPPs so far.

The remainder of the manuscript is organized as follows: Sect. 2 presents the basic concepts related to neutrosophic, hesitant fuzzy, and single-valued neutrosophic hesitant fuzzy sets, while Sect. 3 depicts the modeling of MOPPs under the neutrosophic hesitant fuzzy environment. A computational study containing three different real-life applications is presented in Sect. 4, where the proposed NHFPOSs of the NHFMOPPs are compared with those obtained by other approaches. The conclusions and future research directions are addressed in Sect. 5.

2 Preliminaries

Definition 1

Smarandache (1999) An NS A is said to be a single-valued neutrosophic set (SVNS) if its membership functions are represented as follows:

$$\begin{aligned} A = \{ < x, \mu _{A}(x) , \lambda _{A}(x) , \nu _{A}(x) > | x \in X \} \end{aligned}$$

where \( \mu _{A}(x) , \lambda _{A}(x)\) and \(\nu _{A}(x) \in [0,1]\) and \(0 \le \mu _{A}(x) + \lambda _{A}(x) + \nu _{A}(x) \le 3\) for all \(x \in X\).

Definition 2

Torra and Narukawa (2009) A hesitant fuzzy set (HFS) H over a universe of discourse X is defined in terms of a function \(h_{H}(x)\) that returns a set of values in [0,1] and is expressed as follows:

$$\begin{aligned} H = \{ < x, h_{H}(x)> | x \in X \} \end{aligned}$$

where \(h_{H}(x)\) is a set of values in [0,1], depicting the possible membership grades of the element \(x \in X \) in H. Moreover, \(h_{H}(x)\) is also called a hesitant fuzzy element.

Definition 3

Torra and Narukawa (2009) For each hesitant fuzzy element h, the lower and upper bounds are represented as \(h^{-}(x)= \mathrm{min}~h(x)\) and \(h^{+}(x)= \mathrm{max}~h(x)\), respectively.

Definition 4

Ye (2015) Suppose X is a fixed set; then a single-valued neutrosophic hesitant fuzzy set (SVNHFS) \(N_{h}\) on X is expressed as follows:

$$\begin{aligned} N_{h} = \{ < x, \mu _{h}(x), \lambda _{h}(x), \nu _{h}(x) > | x \in X \} \end{aligned}$$

where \(\mu _{h}(x), \lambda _{h}(x)\) and \(\nu _{h}(x)\) are three sets of values in [0,1], representing the truth hesitant, indeterminacy hesitant and the falsity hesitant membership degrees of the element \(x \in X \) into the set \(N_{h}\), respectively. The conditions hold \(0 \le \alpha ,~\beta ,~\gamma \le 1\) and \(0 \le \alpha ^{+},~\beta ^{+},~\gamma ^{+} \le 3\), where \(\alpha \in \mu _{h}(x)\), \(\beta \in \lambda _{h}(x)\), \(\gamma \in \nu _{h}(x)\) with \(\alpha ^{+}\in \mu ^{+}_{h}(x)= \cup _{\alpha \in \mu _{h}(x)}\mathrm{max} \{ \alpha \}\), \(\beta ^{+}\in \lambda ^{+}_{h}(x)= \cup _{\beta \in \lambda _{h}(x)}\mathrm{max} \{ \beta \}\) and \(\gamma ^{+}\in \nu ^{+}_{h}(x)= \cup _{\gamma \in \nu _{h}(x)}\mathrm{max} \{ \gamma \}\) for all \(x \in X \).
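For concreteness, a minimal computational sketch (plain Python; the numeric values are hypothetical) of an SVNHF element and of the two conditions stated in Definition 4 may look as follows:

```python
# A hypothetical SVNHF element at some fixed x: three finite sets of values in [0, 1].
mu_h = {0.3, 0.4, 0.5}    # truth hesitant membership degrees
lam_h = {0.1, 0.2}        # indeterminacy hesitant membership degrees
nu_h = {0.2, 0.3}         # falsity hesitant membership degrees

# Condition 1: every hesitant value lies in [0, 1].
in_unit_interval = all(0.0 <= v <= 1.0 for v in mu_h | lam_h | nu_h)
# Condition 2: the suprema alpha+, beta+, gamma+ sum to at most 3.
sum_of_sups = max(mu_h) + max(lam_h) + max(nu_h)
print(in_unit_interval, sum_of_sups <= 3.0)
```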

Definition 5

Ye (2015) Suppose that \(N_{{h}_{1}}\) and \(N_{{h}_{2}}\) are two SVNHFSs; then the union of these sets can be represented by

$$\begin{aligned} N_{{h}_{1}} \cup N_{{h}_{2}}&=\! \{\mu _{h} \!\in \! (\mu _{{h}_{1}} \!\cup \! \mu _{{h}_{2}}) | \mu _{h} \!\ge \! \mathrm{max}~(\mathrm{min}~\{ \mu _{{h}_{1}}\! \cup \! \mu _{{h}_{2}} \}), \\&\lambda _{h} \in (\lambda _{{h}_{1}} \cup \lambda _{{h}_{2}}) | \lambda _{h} \le \mathrm{min}~ (\mathrm{max}~\{ \lambda _{{h}_{1}} \cup \lambda _{{h}_{2}} \}),\\&\nu _{h} \in (\nu _{{h}_{1}} \cup \nu _{{h}_{2}}) | \nu _{h} \le \mathrm{min}~ (\mathrm{max}~\{ \nu _{{h}_{1}} \cup \nu _{{h}_{2}} \}) \} \end{aligned}$$

Definition 6

Ye (2015) Suppose that \(N_{{h}_{1}}\) and \(N_{{h}_{2}}\) are two SVNHFSs; then the intersection of these sets can be represented by

$$\begin{aligned} N_{{h}_{1}} \cap N_{{h}_{2}}&=\! \{\mu _{h} \!\in \! (\mu _{{h}_{1}} \!\cap \! \mu _{{h}_{2}}) | \mu _{h} \!\le \! \mathrm{min}~(\mathrm{max}~\{ \mu _{{h}_{1}} \!\cap \! \mu _{{h}_{2}} \}),\\&\lambda _{h} \in (\lambda _{{h}_{1}} \cap \lambda _{{h}_{2}}) | \lambda _{h} \ge \mathrm{max}~(\mathrm{min}~\{ \lambda _{{h}_{1}} \cap \lambda _{{h}_{2}} \}),\\&\nu _{h} \in (\nu _{{h}_{1}} \cap \nu _{{h}_{2}}) | \nu _{h} \ge \mathrm{max}~(\mathrm{min}~\{ \nu _{{h}_{1}} \cap \nu _{{h}_{2}} \}) \} \end{aligned}$$

Definition 7

The general form of MOPPs is given as follows:

$$\begin{aligned} \begin{array}{ll} \mathrm{Minimize} &{} ~(O_{1} (x),O_{2}(x), \ldots ,O_{p}(x))\\ \mathrm{s.t}. &{} B(x) (\le or = or \ge ) 0,~~x \ge 0 \end{array} \end{aligned}$$
(2.1)

where \(O_{p}(x)\) denotes the \(p\)th objective function, \(B(x)\) represents the real-valued constraint functions, and \(x\) is the vector of decision variables.
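As a computational illustration, the general MOPP (2.1) can be represented by a list of objective callables and a list of constraint callables. The following minimal sketch assumes Python with NumPy; the two objectives and the single constraint are hypothetical and serve only to fix ideas (the later sketches in Sect. 3 reuse this toy instance):

```python
import numpy as np

# A hypothetical two-objective instance of the general MOPP (2.1):
# Minimize (O_1(x), O_2(x)) subject to B(x) <= 0, x >= 0.
objectives = [
    lambda x: x[0] ** 2 + x[1] ** 2,        # O_1(x)
    lambda x: (x[0] - 2) ** 2 + x[1],       # O_2(x)
]
# Constraint functions written in the form B_j(x) <= 0.
constraints = [
    lambda x: x[0] + x[1] - 4.0,            # x1 + x2 <= 4
]

x = np.array([1.0, 1.5])                    # a trial decision vector (x >= 0)
print([O(x) for O in objectives])           # objective values at x
print(all(B(x) <= 0 for B in constraints))  # feasibility check
```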

3 Formulation of MOPPs under neutrosophic hesitant fuzzy environment

In this section, we present the modeling approach for the MOPP with neutrosophic hesitant fuzzy goals for each objective function under a neutrosophic hesitant fuzzy environment. In addition, we propose two optimization techniques to solve the resulting neutrosophic hesitant fuzzy multiobjective programming problem (NHFMOPP).

In the MOPP (2.1), we assume that the DM has neutrosophic fuzzy goals for each objective function that are to be achieved. In such circumstances, the MOPP (2.1) may be transformed into the neutrosophic fuzzy multiobjective programming problem (NFMOPP) stated below:

$$\begin{aligned} \begin{array}{ll} (\mathrm{NFMOPP})&{}{\widetilde{\mathrm{Minimize}}}~~ (\widetilde{O_{1}}(x), \widetilde{O_{2}}(x),...,\widetilde{O_{p}}(x))\\ &{}\mathrm{s.t}.~~ B(x) (\le or = or \ge ) 0,~~x \ge 0 \end{array} \end{aligned}$$
(3.1)

where the notation \(\widetilde{(\cdot )}\) represents a flexible or neutrosophic fuzzy version of \((\cdot )\), meaning that “the functions should be minimized as much as possible under the neutrosophic fuzzy environment” subject to the given constraints (Ahmad et al. 2021d; Ahmad and Smarandache 2021). A neutrosophic optimization problem is determined by a set X of feasible solutions, a set of neutrosophic goals \(G_{i},~i=1,2, \ldots , p\), and a set of neutrosophic constraints \(C_{j},~j=1,2, \ldots , m\), which are depicted by neutrosophic sets on X.

The idea of the fuzzy decision set was developed by Bellman and Zadeh (1970), and it has since been widely used in real-life optimization problems. The fuzzy decision set is stated as \( D= G \cap C\).

Equivalently, the following expression mathematically represents the neutrosophic decision set \(D_{N}\):

$$\begin{aligned} D_{N}= (\cap _{i=1}^{p} G_{i}) \cap (\cap _{j=1}^{m} C_{j}) = (x,~\mu _{D}(x),~\lambda _{D}(x),~\nu _{D}(x)~) \end{aligned}$$

where

$$\begin{aligned} \mu _{D}(x)= & {} \mathrm{min} \left\{ \begin{array}{ll} \mu _{G_{1}}(x), \mu _{G_{2}}(x), \ldots , \mu _{G_{i}}(x) \\ \mu _{C_{1}}(x), \mu _{C_{2}}(x), \ldots , \mu _{C_{j}}(x) \\ \end{array} \right\} \forall ~~ x \in X\nonumber \\ \end{aligned}$$
(3.2)
$$\begin{aligned} \lambda _{D}(x)= & {} \mathrm{max} \left\{ \begin{array}{ll} \lambda _{G_{1}}(x), \lambda _{G_{2}}(x), \ldots , \lambda _{G_{i}}(x) \\ \lambda _{C_{1}}(x), \lambda _{C_{2}}(x), \ldots , \lambda _{C_{j}}(x) \\ \end{array} \right\} \forall ~~ x \in X\nonumber \\ \end{aligned}$$
(3.3)
$$\begin{aligned} \nu _{D}(x)= & {} \mathrm{max} \left\{ \begin{array}{ll} \nu _{G_{1}}(x), \nu _{G_{2}}(x), \ldots , \nu _{G_{i}}(x) \\ \nu _{C_{1}}(x), \nu _{C_{2}}(x), \ldots , \nu _{C_{j}}(x) \\ \end{array} \right\} \forall ~~ x \in X\nonumber \\ \end{aligned}$$
(3.4)

The truth, indeterminacy, and falsity membership degrees are represented by \( \mu _{D}(x) , \lambda _{D}(x)\) and \(\nu _{D}(x)\), respectively.

The minimum and maximum values \(L_{i}\) and \(U_{i}\) of each objective function are given as follows:

$$\begin{aligned} U_{i}= \mathrm{max}~ [O_{i}(x)]~~\hbox { and }~~L_{i} = \mathrm{min}~ [O_{i}(x)]~~~\forall ~i=1,2,3, \ldots , p.\nonumber \\ \end{aligned}$$
(3.5)
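A minimal sketch of computing the bounds in Eq. (3.5), assuming SciPy's SLSQP solver and the hypothetical two-objective instance used earlier (repeated here for self-containment; SLSQP is a local solver, so for nonconvex objectives the maxima are only approximate):

```python
import numpy as np
from scipy.optimize import minimize

objectives = [
    lambda x: x[0] ** 2 + x[1] ** 2,
    lambda x: (x[0] - 2) ** 2 + x[1],
]
# SciPy expects inequality constraints in the form g(x) >= 0.
cons = [{"type": "ineq", "fun": lambda x: 4.0 - x[0] - x[1]}]
bnds = [(0, None), (0, None)]               # x >= 0
x0 = np.array([1.0, 1.0])

L, U = [], []
for O in objectives:
    best = minimize(O, x0, bounds=bnds, constraints=cons, method="SLSQP")
    worst = minimize(lambda x, O=O: -O(x), x0, bounds=bnds,
                     constraints=cons, method="SLSQP")
    L.append(best.fun)                      # L_i = min O_i(x)
    U.append(-worst.fun)                    # U_i = max O_i(x)
print(L, U)
```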

The bounds for the ith objective function in the neutrosophic environment are determined by the following expressions:

$$\begin{aligned}&U_{i}^{\mu }= U_{i},~~~~L_{i}^{\mu }= L_{i}~~~~\text { for truth membership}\\&U_{i}^{\lambda }= L_{i}^{\mu } + y_{i},~~~~L_{i}^{\lambda }= L_{i}^{\mu }~~\text { for indeterminacy membership}\\&U_{i}^{\nu }= U_{i}^{\mu },~~~~L_{i}^{\nu }=L_{i}^{\mu }+ z_{i} ~~~~ \text { for falsity membership} \end{aligned}$$

where \(y_{i}\) and \(z_{i} \in (0,1)\) are known real numbers.

With the help of lower and upper bounds, the linear membership functions can be defined as follows:

$$\begin{aligned} \mu _{G_{i}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{i}(x) < L_{i}^{\mu }\\ 1- \frac{O_{i}(x) - L_{i}^{\mu }}{U_{i}^{\mu } - L_{i}^{\mu }} &{} \mathrm{if}~~ L_{i}^{\mu } \le O_{i}(x) \le U_{i}^{\mu }\\ 0 &{} \mathrm{if}~~ O_{i}(x) > U_{i}^{\mu } \end{array} \right. \end{aligned}$$
(3.6)
$$\begin{aligned} \lambda _{G_{i}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{i}(x) < L_{i}^{\lambda }\\ 1- \frac{O_{i}(x) - L_{i}^{\lambda }}{U_{i}^{\lambda } - L_{i}^{\lambda }} &{} \mathrm{if}~~ L_{i}^{\lambda } \le O_{i}(x) \le U_{i}^{\lambda }\\ 0 &{} \mathrm{if}~~ O_{i}(x) > U_{i}^{\lambda } \end{array} \right. \end{aligned}$$
(3.7)
$$\begin{aligned} \nu _{G_{i}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{i}(x) > U_{i}^{\nu }\\ 1- \frac{U_{i}^{\nu } - O_{i}(x)}{U_{i}^{\nu } - L_{i}^{\nu }} &{} \mathrm{if}~~ L_{i}^{\nu } \le O_{i}(x) \le U_{i}^{\nu }\\ 0 &{} \mathrm{if}~~ O_{i}(x) < L_{i}^{\nu } \end{array} \right. \end{aligned}$$
(3.8)

where \(L_{i}^{(.)} \ne U_{i}^{(.)}\) for all the p objective functions. Once the neutrosophic decision set \(D_{N}\) is derived, the optimal decision \(x^{*} \in X\) can be determined if and only if

$$\begin{aligned} \mu _{D}(x^{*})= & {} \mathrm{max}_{x \in X} \mu _{D}(x), \\ \lambda _{D}(x^{*})= & {} \mathrm{min}_{x \in X} \lambda _{D}(x) ~~\hbox { and }\nu _{D}(x^{*})= \mathrm{min}_{x \in X} \nu _{D}(x) \end{aligned}$$
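A minimal sketch, in plain Python with hypothetical bound values, of the linear truth, indeterminacy, and falsity membership functions in Eqs. (3.6)–(3.8); the bounds follow the relations \(U_{i}^{\lambda }= L_{i}^{\mu } + y_{i}\), \(L_{i}^{\lambda }= L_{i}^{\mu }\), \(U_{i}^{\nu }= U_{i}^{\mu }\) and \(L_{i}^{\nu }= L_{i}^{\mu } + z_{i}\) given above:

```python
def truth(O, L_mu, U_mu):
    """Truth membership (3.6): 1 below L_mu, 0 above U_mu, linear in between."""
    if O < L_mu:
        return 1.0
    if O > U_mu:
        return 0.0
    return 1.0 - (O - L_mu) / (U_mu - L_mu)

def indeterminacy(O, L_mu, y):
    """Indeterminacy membership (3.7) with L_lam = L_mu and U_lam = L_mu + y."""
    return truth(O, L_mu, L_mu + y)

def falsity(O, L_mu, U_mu, z):
    """Falsity membership (3.8) with L_nu = L_mu + z and U_nu = U_mu."""
    L_nu, U_nu = L_mu + z, U_mu
    if O > U_nu:
        return 1.0
    if O < L_nu:
        return 0.0
    return 1.0 - (U_nu - O) / (U_nu - L_nu)

# Hypothetical objective value 14 with bounds L = 10, U = 20 and shifts y = z = 0.5.
print(truth(14.0, 10.0, 20.0),
      indeterminacy(14.0, 10.0, y=0.5),
      falsity(14.0, 10.0, 20.0, z=0.5))
```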

Definition 8

\(x^{*} \in X\) can be considered a neutrosophic fuzzy Pareto optimal solution (NFPOS) to the NFMOPP (3.1) if and only if there does not exist any other \(x \in X\) such that \(\mu _{G_{i}}(x) \ge \mu _{G_{i}}(x^{*}),~~\lambda _{G_{i}}(x) \le \lambda _{G_{i}}(x^{*})\) and \(\nu _{G_{i}}(x) \le \nu _{G_{i}}(x^{*})~~ \forall i=1,2, \ldots , p\); and \(\mu _{G_{j}}(x) > \mu _{G_{j}}(x^{*}),~~\lambda _{G_{j}}(x)< \lambda _{G_{j}}(x^{*})\) and \(\nu _{G_{j}}(x) < \nu _{G_{j}}(x^{*})\) for at least one j.

After the DM depicts the different membership functions \(\mu _{G_{i}}(x),~~\lambda _{G_{i}}(x)\text { and }\nu _{G_{i}}(x)\) for each objective function \(O_{i}(x)\), and on applying the neutrosophic decision set (Ahmad et al. 2019), the NFMOPP (3.1) is converted into the equivalent problem:

$$\begin{aligned} \begin{array}{ll} \mathrm{Maximize} &{}~~ \mathrm{min}~(\mu _{G_{1}}(x),\mu _{G_{2}}(x), \ldots ,\mu _{G_{p}}(x))\\ \mathrm{Minimize} &{}~~ \mathrm{max}~(\lambda _{G_{1}}(x),\lambda _{G_{2}}(x), \ldots ,\lambda _{G_{p}}(x))\\ \mathrm{Minimize} &{}~~ \mathrm{max}~(\nu _{G_{1}}(x),\nu _{G_{2}}(x), \ldots ,\nu _{G_{p}}(x))\\ \mathrm{s.t.} &{} ~~x \in X. \end{array} \end{aligned}$$
(3.9)

Now, problem (3.9) is equivalent to problem (3.10) and can be shown as follows:

$$\begin{aligned} \begin{array}{ll} \mathrm{Maximize} &{}~~\phi ~~ (\phi =\alpha - \beta - \gamma )\\ \mathrm{s.t.} &{} \mu _{G_{i}}(x) \ge \alpha ,~~\lambda _{G_{i}}(x) \le \beta \\ &{} \nu _{G_{i}}(x) \le \gamma ,~~ \mu _{G_{i}}(x) \ge \lambda _{G_{i}}(x)\\ &{} \mu _{G_{i}}(x) \ge \nu _{G_{i}}(x),~~0 \le \alpha , \beta , \gamma \le 1\\ &{}x \in X,~~~~~\forall ~i=1,2, \ldots , p. \end{array} \end{aligned}$$
(3.10)
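A minimal sketch of the crisp model (3.10), assuming SciPy's SLSQP and the hypothetical toy instance and linear memberships from the earlier sketches (all bounds L, U and shifts y, z below are assumed values): the decision vector is augmented with \((\alpha , \beta , \gamma )\) and \(\phi = \alpha - \beta - \gamma \) is maximized subject to the membership constraints.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: two objectives over X = {x >= 0, x1 + x2 <= 4}.
objectives = [lambda x: x[0] ** 2 + x[1] ** 2,
              lambda x: (x[0] - 2) ** 2 + x[1]]
L, U, y, z = [0.0, 0.0], [16.0, 8.0], [0.3, 0.3], [0.4, 0.4]

def mu(i, x):  return 1.0 - (objectives[i](x) - L[i]) / (U[i] - L[i])
def lam(i, x): return 1.0 - (objectives[i](x) - L[i]) / y[i]
def nu(i, x):  return 1.0 - (U[i] - objectives[i](x)) / (U[i] - L[i] - z[i])

# Variable vector v = (x1, x2, alpha, beta, gamma); maximize phi = alpha - beta - gamma.
cons = [{"type": "ineq", "fun": lambda v: 4.0 - v[0] - v[1]}]          # x in X
for i in range(2):
    cons += [
        {"type": "ineq", "fun": lambda v, i=i: mu(i, v[:2]) - v[2]},   # mu_i >= alpha
        {"type": "ineq", "fun": lambda v, i=i: v[3] - lam(i, v[:2])},  # lambda_i <= beta
        {"type": "ineq", "fun": lambda v, i=i: v[4] - nu(i, v[:2])},   # nu_i <= gamma
        {"type": "ineq", "fun": lambda v, i=i: mu(i, v[:2]) - lam(i, v[:2])},
        {"type": "ineq", "fun": lambda v, i=i: mu(i, v[:2]) - nu(i, v[:2])},
    ]
bnds = [(0, None), (0, None), (0, 1), (0, 1), (0, 1)]
res = minimize(lambda v: -(v[2] - v[3] - v[4]),
               np.array([1.0, 1.0, 0.5, 0.5, 0.5]),
               bounds=bnds, constraints=cons, method="SLSQP")
print(res.x[:2], -res.fun)   # candidate solution x and attained phi
```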

The problem (3.10) is a neutrosophic optimization model that has been used by many researchers in different fields of real-life applications; see Ahmad et al. (2020), Ahmad and Adhami (2019a) and Ahmad et al. (2018). Based on the extended concept of Sakawa (2013), when the membership functions (3.6), (3.7) and (3.8) are used and problem (3.10) yields a unique solution, this solution is the required NFPOS. Otherwise, the Pareto optimality can be examined by solving the equivalent problem (3.11):

$$\begin{aligned} \begin{array}{ll} \mathrm{Max} &{}~~ \sum _{i=1}^{p} \eta _{i}\\ \mathrm{s.t}. &{} \mu _{G_{i}}(x) - \eta _{i} = \mu _{G_{i}}(x^{*})\\ &{}\lambda _{G_{i}}(x) + \eta _{i} = \lambda _{G_{i}}(x^{*})\\ &{}\nu _{G_{i}}(x) + \eta _{i} = \nu _{G_{i}}(x^{*})\\ &{}x \in X,~~\eta _{i} \ge 0,~~~~~\forall ~i=1,2, \ldots , p. \end{array} \end{aligned}$$
(3.11)

where \(\eta =(\eta _{1}, \eta _{2}, \ldots , \eta _{p})^{T}\) and \(x^{*}\) is an optimal solution of problem (3.9). Then, for \(({\bar{\eta }},~{\bar{x}})\) an optimal solution of problem (3.11), one of the following two cases holds:

(1) If \( {\bar{\eta }}_{i} \ne 0\) for at least one i, then \({\bar{x}}\) is an NFPOS for (3.1).

(2) If \( {\bar{\eta }}_{i} = 0\) for all \(i=1,2, \ldots ,p\), then \(x^{*}\) is an NFPOS for (3.1).
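A minimal sketch of the Pareto optimality test (3.11) under the same hypothetical toy setting as above: given a candidate \(x^{*}\), the sum of the nonnegative slacks \(\eta _{i}\) is maximized subject to the equality constraints on the memberships.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy setting (same as the earlier sketch): two objectives over
# X = {x >= 0, x1 + x2 <= 4}, with assumed bounds L, U and shifts y, z.
objectives = [lambda x: x[0] ** 2 + x[1] ** 2,
              lambda x: (x[0] - 2) ** 2 + x[1]]
L, U, y, z = [0.0, 0.0], [16.0, 8.0], [0.3, 0.3], [0.4, 0.4]
def mu(i, x):  return 1.0 - (objectives[i](x) - L[i]) / (U[i] - L[i])
def lam(i, x): return 1.0 - (objectives[i](x) - L[i]) / y[i]
def nu(i, x):  return 1.0 - (U[i] - objectives[i](x)) / (U[i] - L[i] - z[i])

x_star = np.array([1.2, 0.8])      # candidate solution to be tested
p = 2

# Variable vector v = (x1, x2, eta_1, eta_2); maximize sum(eta) as in (3.11).
cons = [{"type": "ineq", "fun": lambda v: 4.0 - v[0] - v[1]}]
for i in range(p):
    cons += [
        {"type": "eq", "fun": lambda v, i=i: mu(i, v[:2]) - v[2 + i] - mu(i, x_star)},
        {"type": "eq", "fun": lambda v, i=i: lam(i, v[:2]) + v[2 + i] - lam(i, x_star)},
        {"type": "eq", "fun": lambda v, i=i: nu(i, v[:2]) + v[2 + i] - nu(i, x_star)},
    ]
bnds = [(0, None)] * 2 + [(0, None)] * p            # x >= 0, eta >= 0
res = minimize(lambda v: -np.sum(v[2:]), np.concatenate([x_star, np.zeros(p)]),
               bounds=bnds, constraints=cons, method="SLSQP")
eta_bar = res.x[2:]
print("x* is an NFPOS" if np.allclose(eta_bar, 0.0) else f"improved point: {res.x[:2]}")
```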

In the NFMOPP (3.1), the DM incorporates his/her neutral thoughts or indeterminacy degree while making decisions. However, it would be even better to incorporate the evaluations of various experts or decision-makers under the neutrosophic environment.

Hence, a novel solution method based on the single-valued neutrosophic hesitant fuzzy set is investigated for solving the MOPP. The propounded method is a combination of two sets, namely the neutrosophic set (Smarandache 1999) and the hesitant fuzzy set (Torra and Narukawa 2009). An attractive feature of the proposed method is that it manages the opposing and adverse opinions of various experts about the parameters, enabling the DM(s) to determine the most suitable outcomes in the neutrosophic environment.

Thus, one may formulate a neutrosophic hesitant fuzzy multiobjective programming problem (NHFMOPP) as an extension of the NFMOPP under the neutrosophic hesitant fuzzy modeling situation.

$$\begin{aligned} \begin{array}{ll} (NHFMOPP)&{}\widetilde{{\widetilde{Minimize}}}~~(\widetilde{\widetilde{O_{1}}}(x), \widetilde{\widetilde{O_{2}}}(x),...,\widetilde{\widetilde{O_{p}}}(x))\\ &{}s.t.~~ B(x) (\le or = or \ge ) 0,~~x \ge 0 \end{array}\nonumber \\ \end{aligned}$$
(3.12)

where the notations \(\widetilde{\widetilde{(\cdot )}}\) represent a relaxed or neutrosophic hesitant fuzzy form of \((\cdot )\), meaning that “the functions should be minimized as much as possible under neutrosophic hesitant fuzzy environment” subject to the given constraints.

In the NHFMOPP (3.12), the different membership functions, namely the truth, indeterminacy, and falsity hesitant membership degrees of each objective, are defined by the marginal evaluations of several experts or decision-makers. The proposed NHFMOPP (3.12) requires some parameters to be specified in advance. In order to define these parameters, one should seek the opinion of the different experts about their aspiration values between 0 and 1: values closer to “0” signify a lower satisfaction degree for the corresponding objective, whereas values closer to “1” depict a higher satisfaction degree.

A neutrosophic hesitant fuzzy optimization problem is determined by a set X of feasible solutions and neutrosophic hesitant fuzzy goals \(\widetilde{G_{i}},~i=1,2, \ldots , p\), one for each objective function \(\widetilde{\widetilde{O_{i}}},~i=1,2, \ldots , p\), which are depicted by neutrosophic hesitant fuzzy sets on X.

Accordingly, each neutrosophic hesitant fuzzy goal \(\widetilde{G_{i}}\) can be expressed as follows:

$$\begin{aligned} \widetilde{G_{i}}= \{ x,~\mu _{{\widetilde{G_{i}}}}(x),~\lambda _{\widetilde{G_{i}}}(x),~\nu _{\widetilde{G_{i}}}(x)~|~ x \in X\} \end{aligned}$$

More clearly, a set of different membership functions under neutrosophic hesitant fuzzy environment can be represented as follows:

$$\begin{aligned} h_{{\widetilde{G}}_{i}}(x)= \left\{ \begin{array}{ll} \mu _{\widetilde{G_{i}}}(x)&{}=\{\mu _{{G}^{1}_{i}}(x), \mu _{{G}^{2}_{i}}(x), \ldots , \mu _{{G}^{l_{i}}_{i}}(x) \}\\ \lambda _{\widetilde{G_{i}}}(x)&{}=\{\lambda _{{G}^{1}_{i}}(x), \lambda _{{G}^{2}_{i}}(x), \ldots , \lambda _{{G}^{l_{i}}_{i}}(x) \}\\ \nu _{\widetilde{G_{i}}}(x)&{}=\{\nu _{{G}^{1}_{i}}(x), \nu _{{G}^{2}_{i}}(x), \ldots , \nu _{{G}^{l_{i}}_{i}}(x) \} \end{array} \right. \nonumber \\ \end{aligned}$$
(3.13)

Remark 1

One should note that the different membership functions \(\mu _{{G}^{k_{i}}_{i}}(x),~\lambda _{{G}^{k_{i}}_{i}}(x)\) and \(\nu _{{G}^{k_{i}}_{i}}(x)\) for all \(i=1,2, \ldots , p\) and \(k_{i}=1,2, \ldots , l_{i}\) would be decreasing (or increasing) functions similar to Eqs. (3.6), (3.7) and (3.8), where \(l_{i}\) is the number of experts who assign the attainment levels for the objective function \(\widetilde{\widetilde{O_{i}}}(x)\) for all \(i=1,2, \ldots , p\) in the neutrosophic environment. Furthermore, assume that \(H_{p}(x)= \{ \mu _{{G}^{k_{i}}_{i}}(x),~\lambda _{{G}^{k_{i}}_{i}}(x),~\nu _{{G}^{k_{i}}_{i}}(x)~|~i=1,2, \ldots , p,~~k_{i}=1,2, \ldots , l_{i} \}\). In the propounded optimization method, the neutrosophic hesitant fuzzy Pareto optimal solution (NHFPOS) to the NHFMOPP (3.12) is discussed in an effective and efficient manner.
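A minimal sketch (with hypothetical names and values) of how the hesitant collections of membership functions in Eq. (3.13) can be stored: for each objective i, a list of \(l_{i}\) expert-specific \((\mu , \lambda , \nu )\) triples, each built from that expert's own bounds in the spirit of Eqs. (3.6)–(3.8).

```python
# Each of the l_i experts for objective i supplies bounds (L, U, y, z)
# from which that expert's (mu, lambda, nu) triple is built.
def make_memberships(L, U, y, z, objective):
    mu = lambda x: 1.0 - (objective(x) - L) / (U - L)
    lam = lambda x: 1.0 - (objective(x) - L) / y
    nu = lambda x: 1.0 - (U - objective(x)) / (U - L - z)
    return mu, lam, nu

O1 = lambda x: x[0] ** 2 + x[1] ** 2           # hypothetical objectives
O2 = lambda x: (x[0] - 2) ** 2 + x[1]

# h_G[0]: two experts for objective 1; h_G[1]: three experts for objective 2.
h_G = [
    [make_memberships(0.0, 16.0, 0.3, 0.4, O1),
     make_memberships(0.0, 15.0, 0.2, 0.5, O1)],
    [make_memberships(0.0, 8.0, 0.3, 0.4, O2),
     make_memberships(0.0, 7.5, 0.25, 0.3, O2),
     make_memberships(0.5, 8.0, 0.3, 0.4, O2)],
]
l = [len(h) for h in h_G]        # l_i = number of experts per objective
print(l)                         # e.g. [2, 3]
```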

Definition 9

\(< x^{*},~H_{p}(x^{*})~|~ x^{*} \in X>\) is considered an NHFPOS to the NHFMOPP (3.12) if and only if there does not exist any other \(x \in X\) such that \(\mu _{{G}^{k_{i}}_{i}}(x) \ge \mu _{{G}^{k_{i}}_{i}}(x^{*}),~~\lambda _{{G}^{k_{i}}_{i}}(x) \le \lambda _{{G}^{k_{i}}_{i}}(x^{*})\) and \(\nu _{{G}^{k_{i}}_{i}}(x) \le \nu _{{G}^{k_{i}}_{i}}(x^{*})~~ \forall i=1,2, \ldots , p,~~k_{i}=1,2, \ldots , l_{i}\); and \(\mu _{{G}^{k_{j}}_{j}}(x) > \mu _{{G}^{k_{j}}_{j}}(x^{*}),~~\lambda _{{G}^{k_{j}}_{j}}(x) < \lambda _{{G}^{k_{j}}_{j}}(x^{*})\) and \(\nu _{{G}^{k_{j}}_{j}}(x) < \nu _{{G}^{k_{j}}_{j}}(x^{*})\) for at least one \(j \in \{1,2, \ldots , p \}\) and \(k_{j}=1,2, \ldots , l_{j}\).

Remark 2

One should note that an NHFPOS reduces to an NFPOS if \(l_{i}=1\) for all i in (3.13). Hence, the NFPOS is a special case of the NHFPOS.

In the following subsections, we discuss two different optimization techniques for the MOPP under a neutrosophic hesitant fuzzy environment.

3.1 Proposed optimization technique-I

Consider the NHFS \(\widetilde{G_{i}}\) associated with each objective function of the NHFMOPP (3.12). On applying the intersection operation of NHFSs, one can construct the neutrosophic hesitant fuzzy decision set. Hence, the neutrosophic hesitant fuzzy decision set \( D^{N}_{h} \) can be stated by the following expressions:

$$\begin{aligned} D^{N}_{h}= \widetilde{G_{1}} \cap \widetilde{G_{2}} \cap \cdots \cap \widetilde{G_{p}} = \{ x, ~h_{D^{N}_{h}}(x) \} \end{aligned}$$

with the neutrosophic hesitant fuzzy membership element of \(h_{D^{N}_{h}}(x)\)

$$\begin{aligned} h_{D^{N}_{h}}(x)= \left\{ \begin{array}{ll} \mu _{D^{N}_{h}} (x)&{}= \cup _{\mu _{\widetilde{G_{1}}^{\theta _{1}}}(x) \in h_{{\widetilde{G}}_{1}}(x), \ldots , \mu _{\widetilde{G_{p}}^{\theta _{p}}}(x) \in h_{{\widetilde{G}}_{p}}(x)} ~|~ \mathrm{min}~\{ \mu _{\widetilde{G_{1}}^{\theta _{1}}}(x), \ldots , \mu _{\widetilde{G_{p}}^{\theta _{p}}}(x) \}\\ &{}= \left[ \mathrm{min}~ \{ \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \}^{p}_{i=1} \right] ^{\tau }_{r=1}\\ \lambda _{D^{N}_{h}} (x)&{}= \cup _{\lambda _{\widetilde{G_{1}}^{\theta _{1}}}(x) \in h_{{\widetilde{G}}_{1}}(x), \ldots , \lambda _{\widetilde{G_{p}}^{\theta _{p}}}(x) \in h_{{\widetilde{G}}_{p}}(x)} ~|~ \mathrm{max}~\{ \lambda _{\widetilde{G_{1}}^{\theta _{1}}}(x), \ldots , \lambda _{\widetilde{G_{p}}^{\theta _{p}}}(x) \} \\ &{}= \left[ \mathrm{max}~ \{ \lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \}^{p}_{i=1} \right] ^{\tau }_{r=1}\\ \nu _{D^{N}_{h}} (x)&{}= \cup _{\nu _{\widetilde{G_{1}}^{\theta _{1}}}(x) \in h_{{\widetilde{G}}_{1}}(x), \ldots , \nu _{\widetilde{G_{p}}^{\theta _{p}}}(x) \in h_{{\widetilde{G}}_{p}}(x)} ~|~ \mathrm{max}~\{ \nu _{\widetilde{G_{1}}^{\theta _{1}}}(x), \ldots , \nu _{\widetilde{G_{p}}^{\theta _{p}}}(x) \} \\ &{}= \left[ \mathrm{max}~ \{ \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \}^{p}_{i=1} \right] ^{\tau }_{r=1} \end{array} \right. \end{aligned}$$
(3.14)

for each \(x \in X\), where \(\tau = l_{1} l_{2} \cdots l_{p}\) is the number of possible combinations and \(\theta _{ir} \in \{ 1,2, \ldots , l_{i}\}\). The members of \(\mu _{D^{N}_{h}} (x)\) are the minima of the sets of truth hesitant membership functions, whereas the members of \(\lambda _{D^{N}_{h}} (x)\) and \(\nu _{D^{N}_{h}} (x)\) are the maxima of the sets of indeterminacy and falsity hesitant membership functions, respectively. Furthermore, \(\mu _{D^{N}_{h}} (x),~\lambda _{D^{N}_{h}}(x)\) and \(\nu _{D^{N}_{h}} (x)\) contain the sets of truth, indeterminacy, and falsity hesitant degrees of acceptance for neutrosophic hesitant fuzzy solutions.

We introduce the maximization of the satisfaction degrees of the rth \((r=1,2, \ldots , \tau )\) member of each membership function under the neutrosophic hesitant fuzzy environment as follows:

$$\begin{aligned} \begin{array}{ll} \mathrm{Maximize} &{}~~ \mathrm{min}~ \{ \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \}^{p}_{i=1}\\ \mathrm{Minimize} &{}~~ \mathrm{max}~ \{ \lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \}^{p}_{i=1}\\ \mathrm{Minimize} &{}~~ \mathrm{max}~ \{ \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \}^{p}_{i=1}\\ \mathrm{s.t.} &{} ~~x \in X. \end{array} \end{aligned}$$
(3.15)

Using auxiliary variables \(\alpha ,~\beta \) and \(\gamma \), problem (3.15) can be rewritten as follows:

$$\begin{aligned} r\mathrm{th}-\mathrm{problem} \left\{ \begin{array}{ll} \mathrm{Maximize} &{}~\phi ~~ (\phi =\alpha - \beta - \gamma )\\ \mathrm{s.t.} &{}\mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \ge \alpha ,~~\lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \le \beta \\ &{} \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \le \gamma ,~~ \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \ge \lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x)\\ &{} \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \ge \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x),~~0 \le \alpha , \beta , \gamma \le 1\\ &{}x \in X,~~~~~\forall ~i=1,2, \ldots , p. \end{array} \right. \nonumber \\ \end{aligned}$$
(3.16)

After solving the rth problem (3.16), the maximal attainment degree \(\phi ^{*r}\) and the corresponding optimal solution \(x^{*r}\) are obtained. Thus, on solving the \(\tau \) problems given in (3.16), we can determine the maximal aspiration level degrees \(\phi ^{*1},~\phi ^{*2}, \ldots , \phi ^{*\tau }\) and the corresponding optimal solutions \(x^{*1},~x^{*2}, \ldots , x^{*\tau }\).

Remark 3

If the DM is not satisfied with any NHFPOS among the \(\tau \) obtained NHFPOSs, then there is the option of a pessimistic or an optimistic NHFPOS. To serve this purpose, assume that \(\phi ^{*m}=~\mathrm{min}~\{\phi ^{*1},~\phi ^{*2}, \ldots , \phi ^{*\tau }\}\) and \(\phi ^{*M}=~\mathrm{max}~\{\phi ^{*1},~\phi ^{*2}, \ldots , \phi ^{*\tau }\}\); then we refer to \(< x^{*m},~H_{p}(x^{*m})~|~ x^{*m} \in X>\) as the pessimistic NHFPOS and \(< x^{*M},~H_{p}(x^{*M})~|~ x^{*M} \in X>\) as the optimistic NHFPOS, respectively.
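A minimal sketch of optimization technique-I under the same hypothetical toy setting as before: the \(\tau = l_{1} l_{2} \cdots l_{p}\) combinations of expert triples are enumerated with itertools.product, the rth crisp subproblem (3.16) is solved for each combination with SciPy's SLSQP, and the pessimistic and optimistic NHFPOSs of Remark 3 are read off from the resulting \(\phi ^{*r}\) values.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy setting: two objectives over X = {x >= 0, x1 + x2 <= 4};
# each expert k for objective i supplies bounds giving a (mu, lam, nu) triple.
O = [lambda x: x[0] ** 2 + x[1] ** 2, lambda x: (x[0] - 2) ** 2 + x[1]]

def triple(L, U, y, z, f):
    return (lambda x: 1.0 - (f(x) - L) / (U - L),          # mu
            lambda x: 1.0 - (f(x) - L) / y,                 # lambda
            lambda x: 1.0 - (U - f(x)) / (U - L - z))       # nu

h_G = [[triple(0.0, 16.0, 0.3, 0.4, O[0]), triple(0.0, 15.0, 0.2, 0.5, O[0])],
       [triple(0.0, 8.0, 0.3, 0.4, O[1]), triple(0.0, 7.5, 0.25, 0.3, O[1]),
        triple(0.5, 8.0, 0.3, 0.4, O[1])]]

def solve_rth(combo):
    """Solve the rth subproblem (3.16) for one combination of expert triples."""
    cons = [{"type": "ineq", "fun": lambda v: 4.0 - v[0] - v[1]}]
    for mu, lam, nu in combo:
        cons += [{"type": "ineq", "fun": lambda v, mu=mu: mu(v[:2]) - v[2]},
                 {"type": "ineq", "fun": lambda v, lam=lam: v[3] - lam(v[:2])},
                 {"type": "ineq", "fun": lambda v, nu=nu: v[4] - nu(v[:2])},
                 {"type": "ineq", "fun": lambda v, mu=mu, lam=lam: mu(v[:2]) - lam(v[:2])},
                 {"type": "ineq", "fun": lambda v, mu=mu, nu=nu: mu(v[:2]) - nu(v[:2])}]
    bnds = [(0, None)] * 2 + [(0, 1)] * 3
    res = minimize(lambda v: -(v[2] - v[3] - v[4]), [1.0, 1.0, 0.5, 0.5, 0.5],
                   bounds=bnds, constraints=cons, method="SLSQP")
    return res.x[:2], -res.fun                     # (x^{*r}, phi^{*r})

# tau = l_1 * l_2 combinations: one expert triple per objective.
results = [solve_rth(combo) for combo in itertools.product(*h_G)]
phis = [phi for _, phi in results]
x_pess, phi_pess = results[int(np.argmin(phis))]   # pessimistic NHFPOS (Remark 3)
x_opt, phi_opt = results[int(np.argmax(phis))]     # optimistic NHFPOS
print(phi_pess, phi_opt)
```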

In Theorem 1, we will prove that all the obtained solutions for problem (3.16) are NHFPOSs.

Theorem 1

If there exists a unique optimal solution \((x^{*r},\phi ^{*r})\) for the problem (3.16), then \(< x^{*r},~H_{p}(x^{*r})~|~ x^{*r} \in X>\) will be a NHFPOS for the NHFMOPP (3.12), where \(H_{p}(x^{*r})=\{ \mu _{{\widetilde{G}}^{k_{i}}_{i}}(x^{*r}),~\lambda _{{\widetilde{G}}^{k_{i}}_{i}}(x^{*r}),~\nu _{{\widetilde{G}}^{k_{i}}_{i}}(x^{*r})~|~i=1,2, \ldots , p,~~k_{i}=1,2, \ldots , l_{i} \}\).

Proof

Assume that \(< x^{*r},~H_{p}(x^{*r})~|~ x^{*r} \in X>\) is not NHFPOS for the NHFMOPP. Then, there exists an \(x \in X\) with \(< x,~H_{p}(x)>\) such that \(\mu _{{\widetilde{G}}^{k_{i}}_{i}}(x) \ge \mu _{{\widetilde{G}}^{k_{i}}_{i}}(x^{*r}),~\lambda _{{\widetilde{G}}^{k_{i}}_{i}}(x) \le \lambda _{{\widetilde{G}}^{k_{i}}_{i}}(x^{*r})\) and \(\nu _{{\widetilde{G}}^{k_{i}}_{i}}(x) \le \nu _{{\widetilde{G}}^{k_{i}}_{i}}(x^{*r})\) for all \(i=1,2, \ldots , p,~~k_{i}=1,2, \ldots , l_{i}\), and \(\mu _{{\widetilde{G}}^{k_{j}}_{j}}(x) > \mu _{{\widetilde{G}}^{k_{j}}_{j}}(x^{*r}),~\lambda _{{\widetilde{G}}^{k_{j}}_{j}}(x) < \lambda _{{\widetilde{G}}^{k_{j}}_{j}}(x^{*r})\) and \(\nu _{{\widetilde{G}}^{k_{j}}_{j}}(x) < \nu _{{\widetilde{G}}^{k_{j}}_{j}}(x^{*r})\) for at least one \(j \in \{1,2, \ldots , p \}\) and \(~k_{j}=1,2, \ldots , l_{j}\). More precisely for all \(r=1,2, \ldots , \tau \), \(\mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \ge \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}),~\lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \le \lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r})\) and \(\nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \le \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r})\) for all \(i=1,2, \ldots , p\); and \(\mu _{\widetilde{G_{j}}^{\theta _{jr}}}(x) > \mu _{\widetilde{G_{j}}^{\theta _{jr}}}(x^{*r}),~\lambda _{\widetilde{G_{j}}^{\theta _{jr}}}(x) < \lambda _{\widetilde{G_{j}}^{\theta _{jr}}}(x^{*r})\) and \(\nu _{\widetilde{G_{j}}^{\theta _{jr}}}(x) < \nu _{\widetilde{G_{j}}^{\theta _{jr}}}(x^{*r})\) for at least one \(j \in \{1,2, \ldots , p \}\).

Hence, we have

\(\mathrm{min}~ \{ \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \}^{p}_{i=1} \ge \mathrm{min}~ \{ \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}) \}^{p}_{i=1}\),

\(\mathrm{max}~ \{ \lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \}^{p}_{i=1} \le \mathrm{max}~ \{ \lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}) \}^{p}_{i=1}\),

\(\mathrm{max}~ \{ \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \}^{p}_{i=1} \le \mathrm{max}~ \{ \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}) \}^{p}_{i=1}\).

These inequalities contradict the optimality or the uniqueness of the optimal solution \(x^{*r}\) to problem (3.16). Thus, Theorem 1 is proven.

3.2 Neutrosophic hesitant fuzzy Pareto optimality test

If there is no guarantee that \(x^{*r}\) is a unique optimal solution to problem (3.16), then one can perform the Pareto optimality test in the neutrosophic hesitant situation to determine an NHFPOS. The neutrosophic hesitant fuzzy Pareto optimality test (NHFPOT) for \(x^{*r}\) is carried out by solving the following mathematical programming problem (3.17):

$$\begin{aligned} \begin{array}{ll} \mathrm{(NHFPOT)}~&{}\mathrm{Max} ~~ \sum _{i=1}^{p} \eta _{i}\\ &{}\mathrm{s.t.}~~ \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) - \eta _{i} = \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r})\\ &{}~~~~\lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x) + \eta _{i} = \lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r})\\ &{}~~~~\nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) + \eta _{i} = \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r})\\ &{}~~~~x \in X,~~~\forall ~i=1,2, \ldots , p,~~\eta =(\eta _{1},\eta _{2}, \ldots , \eta _{p}) \ge 0. \end{array} \end{aligned}$$
(3.17)

Theorem 2

Let us consider that \((x^{*r}, \phi ^{*r})\) is an optimal solution of problem (3.16). Then, for \(({\bar{x}}^{r}, {\bar{\eta }}^{r})\) an optimal solution of problem (3.17), one of the following two conditions holds:

(a) If \( {\bar{\eta }}^{r}_{i} = 0\) for all \(i=1,2, \ldots ,p\), then \(< x^{*r},~H_{p}(x^{*r})~|~ x^{*r} \in X>\) is an NHFPOS for the NHFMOPP (3.12).

(b) If \( {\bar{\eta }}^{r}_{i} \ne 0\) for at least one i, then \(< {\bar{x}}^{r},~H_{p}({\bar{x}}^{r})~|~ {\bar{x}}^{r} \in X>\) is an NHFPOS for the NHFMOPP (3.12).

Proof

(a): Assume that \(< x^{*r},~H_{p}(x^{*r})~|~ x^{*r} \in X>\) is not NHFPOS for the NHFMOPP (3.12). Thus, in a same manner to Theorem 1, there is \(< x,~H_{p}(x)~|~ x \in X>\), for all \(r=1,2, \ldots , (l_{1}, l_{2}, \ldots l_{p})\), \(\mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \ge \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}),~\lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \le \lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r})\) and \(\nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \le \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r})\) for all \(i=1,2, \ldots , p\); and \(\mu _{\widetilde{G_{j}}^{\theta _{jr}}}(x) > \mu _{\widetilde{G_{j}}^{\theta _{jr}}}(x^{*r}),~\lambda _{\widetilde{G_{j}}^{\theta _{jr}}}(x) < \lambda _{\widetilde{G_{j}}^{\theta _{jr}}}(x^{*r})\) and \(\nu _{\widetilde{G_{j}}^{\theta _{jr}}}(x) < \nu _{\widetilde{G_{j}}^{\theta _{jr}}}(x^{*r})\) for at least one \(j \in \{1,2, \ldots , p \}\). Hence by the definition of \(\eta _{i}\) as \(\eta _{i}= \{ (\mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) - \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r})), (\lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}) - \lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x)), (\nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}) - \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) )\}\), \(i=1,2, \ldots , p\) we have \(\eta _{i} \ge 0,~~i=1,2, \ldots , p\) and \(\eta _{j} >0\) for one j. Thus, the problem (3.17) have a feasible solution \((x, \eta )\) with the objective function value \( \sum _{i=1}^{p} \eta _{i}>0\) which has contradiction with the postulates that \(({\bar{x}}^{r}, {\bar{\eta }}^{r})\) is an efficient solution of (3.17) with the optimal objective value \( \sum _{i=1}^{p} {\bar{\eta }}^{r}_{i} = 0\).

(b): Assume that \(< {\bar{x}}^{r},~H_{p}({\bar{x}}^{r})~|~ {\bar{x}}^{r} \in X>\) is not NHFPOS for the NHFMOPP (3.12). Thus, in a same fashion to Theorem 1, there exists an \(< x,~H_{p}(x)~|~ x \in X>\) such that for all \(r=1,2, \ldots , (l_{1}, l_{2}, \ldots l_{p})\), \(\mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \ge \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}),~\lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \le \lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r})\) and \(\nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) \le \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r})\) for all \(i=1,2, \ldots , p\); and \(\mu _{\widetilde{G_{j}}^{\theta _{jr}}}(x) > \mu _{\widetilde{G_{j}}^{\theta _{jr}}}(x^{*r}),~\lambda _{\widetilde{G_{j}}^{\theta _{jr}}}(x) < \lambda _{\widetilde{G_{j}}^{\theta _{jr}}}(x^{*r})\) and \(\nu _{\widetilde{G_{j}}^{\theta _{jr}}}(x) < \nu _{\widetilde{G_{j}}^{\theta _{jr}}}(x^{*r})\) for at least one \(j \in \{1,2, \ldots , p \}\). Hence by the definition of \(\eta _{i}\) as \(\eta _{i}= \{ (\mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) - \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r})), (\lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}) - \lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x)), (\nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}) - \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) )\}\), \(i=1,2, \ldots , p\). Then, \((x, \eta )\) is a feasible solution for the problem (3.17). We know that

$$\begin{aligned} \eta _{i}= & {} \{ (\mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) - \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r})), (\lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}) \\&\quad - \lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x)), (\nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}) - \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) )\} \ge {\bar{\eta }}_{i} \\&= \{ (\mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) - \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r})), (\lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r})\\&\quad - \lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x)), (\nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}) - \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x) )\} \ge 0, \end{aligned}$$

for at least one j; thus, \( \sum _{i=1}^{p} \eta _{i} > \sum _{i=1}^{p} {\bar{\eta }}^{r}_{i}\), which contradicts the assumption that \(({\bar{x}}^{r}, {\bar{\eta }}^{r})\) is an optimal solution of (3.17). This completes the proof.

Remark 4

Below, we show that the pessimistic NHFPOS can be determined with the help of the extended concept of Bellman and Zadeh (1970) for all the membership functions under the neutrosophic fuzzy environment.

To highlight this, assume that problem (3.16) yields the pessimistic NHFPOS. We introduce the maximization of the satisfaction degrees of the mth \((m=1,2, \ldots , \tau )\) member of each membership function under the neutrosophic hesitant fuzzy environment as follows:

$$\begin{aligned} \begin{array}{ll} \mathrm{Maximize} &{}~~ \mathrm{min}~ \{ \mu _{\widetilde{G_{i}}^{\theta _{im}}}(x) \}^{p}_{i=1}\\ \mathrm{Minimize} &{}~~ \mathrm{max}~ \{ \lambda _{\widetilde{G_{i}}^{\theta _{im}}}(x) \}^{p}_{i=1}\\ \mathrm{Minimize} &{}~~ \mathrm{max}~ \{ \nu _{\widetilde{G_{i}}^{\theta _{im}}}(x) \}^{p}_{i=1}\\ \mathrm{s.t.} &{} ~~x \in X. \end{array} \end{aligned}$$
(3.18)

Using auxiliary variables \(\alpha ,~\beta \) and \(\gamma \), problem (3.18) can be rewritten as follows:

$$\begin{aligned} \begin{array}{ll} \mathrm{Maximize} &{}~\phi ~~ (\phi =\alpha - \beta - \gamma )\\ \mathrm{s.t.} &{}\mu _{\widetilde{G_{i}}^{\theta _{im}}}(x) \ge \alpha ,~~\lambda _{\widetilde{G_{i}}^{\theta _{im}}}(x) \le \beta \\ &{} \nu _{\widetilde{G_{i}}^{\theta _{im}}}(x) \le \gamma ,~~ \mu _{\widetilde{G_{i}}^{\theta _{im}}}(x) \ge \lambda _{\widetilde{G_{i}}^{\theta _{im}}}(x)\\ &{} \mu _{\widetilde{G_{i}}^{\theta _{im}}}(x) \ge \nu _{\widetilde{G_{i}}^{\theta _{im}}}(x),~~0 \le \alpha , \beta , \gamma \le 1\\ &{}x \in X,~~~~~\forall ~i=1,2, \ldots , p. \end{array}\nonumber \\ \end{aligned}$$
(3.19)

Suppose that \((x^{*m}, \phi ^{*m})\) is an optimal solution of (3.19) with \(\phi ^{*m} = \mathrm{min}~(\phi ^{*1}, \phi ^{*2}, \ldots , \phi ^{*\tau })\). Also, assume that the extended concept of Bellman and Zadeh (1970) is used in the neutrosophic hesitant fuzzy environment:

$$\begin{aligned} \begin{array}{ll} Optimize &{}~~ H_{p}(x)\\ s.t. &{} ~~x \in X. \end{array} \end{aligned}$$
(3.20)

Equivalently, the problem (3.20) can be rewritten as follows:

$$\begin{aligned} \begin{array}{ll} \mathrm{Maximize} &{}~\phi ~~ (\phi =\alpha - \beta - \gamma )\\ \mathrm{s.t.} &{}\mu _{\widetilde{G_{i}}^{k_{i}}}(x) \ge \alpha ,~~\lambda _{\widetilde{G_{i}}^{k_{i}}}(x) \le \beta \\ &{} \nu _{\widetilde{G_{i}}^{k_{i}}}(x) \le \gamma ,~~ \mu _{\widetilde{G_{i}}^{k_{i}}}(x) \ge \lambda _{\widetilde{G_{i}}^{k_{i}}}(x)\\ &{} \mu _{\widetilde{G_{i}}^{k_{i}}}(x) \ge \nu _{\widetilde{G_{i}}^{k_{i}}}(x),~~0 \le \alpha , \beta , \gamma \le 1\\ &{}x \in X,~~~~~\forall ~i=1,2, \ldots , p,~~k_{i}=1,2, \ldots , l_{i}. \end{array} \end{aligned}$$
(3.21)

Theorem 3

The problems (3.19) and (3.21) have equal optimal objective values.

Proof

Assume that \((x^{*}, \phi ^{*})\) and \((x^{*m}, \phi ^{*m})\) are the optimal solutions of (3.21) and (3.19), respectively. Our aim is to prove that \(\phi ^{*} = \phi ^{*m}\), or equivalently

$$\begin{aligned} \mathrm{Optimum}~H_{p}(x^{*})= \left\{ \begin{array}{ll} \mathrm{max}~\mathrm{min}~ \{ \mu _{\widetilde{G_{i}}^{\theta _{im}}}(x^{*m}) \}^{p}_{i=1} \\ \mathrm{min}~\mathrm{max}~ \{ \lambda _{\widetilde{G_{i}}^{\theta _{im}}}(x^{*m}) \}^{p}_{i=1} \\ \mathrm{min}~\mathrm{max}~ \{ \nu _{\widetilde{G_{i}}^{\theta _{im}}}(x^{*m}) \}^{p}_{i=1} \end{array} \right. \end{aligned}$$
(3.22)

Firstly, we show that \(\phi ^{*} \le \phi ^{*m}\). The feasible region of (3.21) is contained in the feasible region of each problem of the form (3.16), and in particular in that of (3.19). Thus, \(\phi ^{*} \le \phi ^{*m}\), or correspondingly

$$\begin{aligned} \mathrm{Optimum}~H_{p}(x^{*}) \le \left\{ \begin{array}{ll} \mathrm{max}~\mathrm{min}~ \{ \mu _{\widetilde{G_{i}}^{\theta _{im}}}(x^{*m}) \}^{p}_{i=1} \\ \mathrm{min}~\mathrm{max}~ \{ \lambda _{\widetilde{G_{i}}^{\theta _{im}}}(x^{*m}) \}^{p}_{i=1} \\ \mathrm{min}~\mathrm{max}~ \{ \nu _{\widetilde{G_{i}}^{\theta _{im}}}(x^{*m}) \}^{p}_{i=1} \end{array} \right. \end{aligned}$$
(3.23)

Now, it should be shown that \( \phi ^{*m} \le \phi ^{*}\). Since problem (3.18) yields the pessimistic NHFPOS, for all \(r \in \{ 1,2, \ldots , \tau \}\) we have \( \phi ^{*m} \le \phi ^{*r}\), or equivalently

\(\mathrm{max}~\mathrm{min}~ \{ \mu _{\widetilde{G_{i}}^{\theta _{im}}}(x^{*m}) \}^{p}_{i=1} \le \mathrm{max}~\mathrm{min}~ \{ \mu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}) \}^{p}_{i=1}\forall ~r \in \{ 1,2, \ldots , \tau \}\),

\(\mathrm{min}~\mathrm{max}~ \{ \lambda _{\widetilde{G_{i}}^{\theta _{im}}}(x^{*m}) \}^{p}_{i=1} \ge \mathrm{min}~\mathrm{max}~ \{ \lambda _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}) \}^{p}_{i=1} \forall ~r \in \{ 1,2, \ldots , \tau \}\),

\(\mathrm{min}~\mathrm{max}~ \{ \nu _{\widetilde{G_{i}}^{\theta _{im}}}(x^{*m}) \}^{p}_{i=1} \ge \mathrm{min}~\mathrm{max}~ \{ \nu _{\widetilde{G_{i}}^{\theta _{ir}}}(x^{*r}) \}^{p}_{i=1}~~\forall ~r \in \{ 1,2, \ldots , \tau \}\).

Therefore, it is obvious that

$$\begin{aligned} \mathrm{Optimum}~H_{p}(x^{*}) \ge \left\{ \begin{array}{ll} \mathrm{max}~\mathrm{min}~ \{ \mu _{\widetilde{G_{i}}^{\theta _{im}}}(x^{*m}) \}^{p}_{i=1} \\ \mathrm{min}~\mathrm{max}~ \{ \lambda _{\widetilde{G_{i}}^{\theta _{im}}}(x^{*m}) \}^{p}_{i=1} \\ \mathrm{min}~\mathrm{max}~ \{ \nu _{\widetilde{G_{i}}^{\theta _{im}}}(x^{*m}) \}^{p}_{i=1} \\ \end{array} \right. \end{aligned}$$
(3.24)

Therefore, \( \phi ^{*m} \le \phi ^{*}\). Thus, the two inequalities (3.23) and (3.24) confirm that \(\phi ^{*} = \phi ^{*m}\). Hence, Theorem 3 is proved.

In optimization technique-II, the weighted arithmetic-mean score function of the neutrosophic hesitant fuzzy elements \(\mu _{D^{N}_{h}}(x),~\lambda _{D^{N}_{h}} (x)\) and \(\nu _{D^{N}_{h}} (x)\) is used to obtain the optimal solution, and a problem is formulated that yields an NHFPOS to the NHFMOPP.

3.3 Proposed optimization technique-II

Let us consider that the DM(s) intend to solve the NHFMOPP (3.12). Then, the arithmetic-mean score function of each of \(\mu _{D^{N}_{h}}(x),~\lambda _{D^{N}_{h}} (x)\) and \(\nu _{D^{N}_{h}} (x)\) is obtained, and problem (3.25) is developed:

$$\begin{aligned} \begin{array}{ll} &{}\mathrm{max}~(\chi (\mu _{{\widetilde{G_{1}}}}(x)),~\chi (\mu _{{\widetilde{G_{2}}}}(x)), \ldots , \chi (\mu _{{\widetilde{G_{p}}}}(x))) \\ &{}\quad - \mathrm{min}~(\chi (\lambda _{{\widetilde{G_{1}}}}(x)),~\chi (\lambda _{{\widetilde{G_{2}}}}(x)),\\ &{} \ldots , \chi (\lambda _{{\widetilde{G_{p}}}}(x))) -\mathrm{min}~(\chi (\nu _{{\widetilde{G_{1}}}}(x)),~\chi (\nu _{{\widetilde{G_{2}}}}(x)),\\ &{}\quad \ldots , \chi (\nu _{{\widetilde{G_{p}}}}(x))) \end{array} \end{aligned}$$
(3.25)

where \(\chi (\mu _{{\widetilde{G_{i}}}}(x)) = \frac{\sum _{j=1}^{l_{i}} \mu _{{\widetilde{G^{j}_{i}}}}(x) }{l_{i}}\), \(\chi (\lambda _{{\widetilde{G_{i}}}}(x)) = \frac{\sum _{j=1}^{l_{i}} \lambda _{{\widetilde{G^{j}_{i}}}}(x) }{l_{i}}\) and \(\chi (\nu _{{\widetilde{G_{i}}}}(x)) = \frac{\sum _{j=1}^{l_{i}} \nu _{{\widetilde{G^{j}_{i}}}}(x) }{l_{i}}\) are the arithmetic-mean score function of \(\mu _{{\widetilde{G_{i}}}}(x),~\lambda _{{\widetilde{G_{i}}}}(x)\) and \(\nu _{{\widetilde{G_{i}}}}(x)\), respectively.

To solve problem (3.25), we use the weighted sum method. Problem (3.25) can then be reformulated as problem (3.26):

$$\begin{aligned} \mathrm{max}~\sum _{i=1}^{p} w_{i} \left[ \chi (\mu _{{\widetilde{G_{i}}}}(x)) - \chi (\lambda _{{\widetilde{G_{i}}}}(x)) - \chi (\nu _{{\widetilde{G_{i}}}}(x)) \right] \end{aligned}$$
(3.26)

where \(w= (w_{1}, w_{2}, \ldots , w_{p})\) is a vector of positive weights such that \(\sum _{i=1}^{p} w_{i} =1\). Theorem 4 permits solving only a single-objective mathematical programming problem rather than the NHFMOPP (3.12).
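A minimal sketch of optimization technique-II under the same hypothetical toy setting as before: the arithmetic-mean scores \(\chi \) of the truth, indeterminacy, and falsity hesitant memberships are combined with positive weights \(w_{i}\) as in problem (3.26), and the resulting single-objective problem is maximized with SciPy's SLSQP.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy setting: same expert-supplied membership triples as before.
O = [lambda x: x[0] ** 2 + x[1] ** 2, lambda x: (x[0] - 2) ** 2 + x[1]]

def triple(L, U, y, z, f):
    return (lambda x: 1.0 - (f(x) - L) / (U - L),
            lambda x: 1.0 - (f(x) - L) / y,
            lambda x: 1.0 - (U - f(x)) / (U - L - z))

h_G = [[triple(0.0, 16.0, 0.3, 0.4, O[0]), triple(0.0, 15.0, 0.2, 0.5, O[0])],
       [triple(0.0, 8.0, 0.3, 0.4, O[1]), triple(0.0, 7.5, 0.25, 0.3, O[1]),
        triple(0.5, 8.0, 0.3, 0.4, O[1])]]
w = [0.5, 0.5]                                   # positive weights, sum to 1

def chi(funcs, x):
    """Arithmetic-mean score over the l_i experts of one objective."""
    return float(np.mean([f(x) for f in funcs]))

def neg_weighted_score(x):
    total = 0.0
    for i, experts in enumerate(h_G):
        mus, lams, nus = zip(*experts)           # split the (mu, lam, nu) triples
        total += w[i] * (chi(mus, x) - chi(lams, x) - chi(nus, x))
    return -total                                # maximize (3.26)

cons = [{"type": "ineq", "fun": lambda x: 4.0 - x[0] - x[1]}]
res = minimize(neg_weighted_score, [1.0, 1.0], bounds=[(0, None)] * 2,
               constraints=cons, method="SLSQP")
print(res.x, -res.fun)
```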

Theorem 4

Suppose that \(w= (w_{1}, w_{2}, \ldots , w_{p})\) is a vector of positive weights assigned to the objectives such that \(\sum _{i=1}^{p} w_{i} =1\). If \(x^{*}\) is an optimal solution of (3.26), then \(< x^{*},~H_{p}(x^{*})~|~ x^{*} \in X>\) is an NHFPOS for the NHFMOPP.

Proof

Assume that \(< x^{*},~H_{p}(x^{*})~|~ x^{*} \in X>\) is not NHFPOS for the NHFMOPP. Thus, there exists an \(< x,~H_{p}(x)~|~ x \in X>\) such that \(\mu _{{\widetilde{G}}^{k_{i}}_{i}}(x) \ge \mu _{{\widetilde{G}}^{k_{i}}_{i}}(x^{*r}),\lambda _{{\widetilde{G}}^{k_{i}}_{i}}(x) \le \lambda _{{\widetilde{G}}^{k_{i}}_{i}}(x^{*r})\) and \(\nu _{{\widetilde{G}}^{k_{i}}_{i}}(x) \le \nu _{{\widetilde{G}}^{k_{i}}_{i}}(x^{*r})\) for all \(i=1,2, \ldots , p,~~k_{i}=1,2, \ldots , l_{i}\), and \(\mu _{{\widetilde{G}}^{k_{j}}_{j}}(x) > \mu _{{\widetilde{G}}^{k_{j}}_{j}}(x^{*r}),~\lambda _{{\widetilde{G}}^{k_{j}}_{j}}(x) < \lambda _{{\widetilde{G}}^{k_{j}}_{j}}(x^{*r})\) and \(\nu _{{\widetilde{G}}^{k_{j}}_{j}}(x) < \nu _{{\widetilde{G}}^{k_{j}}_{j}}(x^{*r})\) for at least one \(j \in \{1,2, \ldots , p \}\) and \(k_{j}=1,2, \ldots , l_{j}\). All weights are nonnegative, and we get

$$\begin{aligned} \begin{array}{ll} &{}\sum _{i=1}^{p} w_{i} \chi (\mu _{{\widetilde{G_{i}}}}(x)) > \sum _{i=1}^{p} w_{i} \chi (\mu _{{\widetilde{G_{i}}}}(x^{*}))\\ &{}\sum _{i=1}^{p} w_{i} \chi (\lambda _{{\widetilde{G_{i}}}}(x))< \sum _{i=1}^{p} w_{i} \chi (\lambda _{{\widetilde{G_{i}}}}(x^{*}))\\ &{}\sum _{i=1}^{p} w_{i} \chi (\nu _{{\widetilde{G_{i}}}}(x)) < \sum _{i=1}^{p} w_{i} \chi (\nu _{{\widetilde{G_{i}}}}(x^{*})) \end{array} \end{aligned}$$
(3.27)

The inequalities in Eq. (3.27) imply that the objective value of (3.26) at x exceeds that at \(x^{*}\), contradicting the optimality of \(x^{*}\) for (3.26). Thus, Theorem 4 is proven.

4 Computational study

The proposed optimization techniques are applied to three real-life optimization problems arising in manufacturing, system design, and production planning. All the multiobjective mathematical programming problems discussed in the examples are coded in SAS/OR software; see Rodriguez (2011) and Ruppert (2004).

4.1 Manufacturing system problem

Example 1 (see Ahmad et al. 2018; Singh and Yadav 2015): A manufacturing factory intends to produce three types of products \(P_{1}, P_{2}\), and \(P_{3}\) in a specified period (say one year). The production processes of \(P_{1}, P_{2}\) and \(P_{3}\) require three different kinds of resources \(R_{1}, R_{2}\) and \(R_{3}\). Each unit of product \(P_{1}\) requires 2, 3, and 4 units of these resources, respectively; each unit of \(P_{2}\) requires around 4, 2, and 2 units, whereas each unit of \(P_{3}\) requires approximately 3, 2, and 3 units. The total availability of resources \(R_{1}\) and \(R_{2}\) is around 325 and 360 units, respectively; in addition, around 30 and 20 units of these resources are held as extra stock under the supervision of the factory manager. To ensure better product quality, at least 365 units of resource \(R_{3}\) should be utilized; moreover, an additional 20 units of resource \(R_{3}\) can be released by the managerial board in case of emergency. The estimated completion time for each unit of products \(P_{1}, P_{2}\), and \(P_{3}\) is 4, 5, and 6 hours, respectively. Suppose that the production quantities of \(P_{1}, P_{2}\) and \(P_{3}\) are \(x_{1}, x_{2}\) and \(x_{3}\) units, respectively. Furthermore, consider that the unit costs and sale prices of products \(P_{1}, P_{2}\) and \(P_{3}\) are \(c_{1}=8, c_{2}=10.125\) and \(c_{3}=8\), and \(s_{1}=\frac{99.875}{x_{1}^{-1/2}}\), \(s_{2}=\frac{119.875}{x_{2}^{-1/2}}\) and \(s_{3}=\frac{95.125}{x_{3}^{-1/3}}\), respectively. The manager wants to maximize the profit and minimize the total time requirement. Thus, the mathematical programming formulation results in the nonlinear programming problem (4.1), which can be presented as follows:

$$\begin{aligned} \mathrm{Min}~O_{1}(x)&= -99.875x^\frac{1}{2}_{1}+8x_{1}-119.875x^\frac{1}{2}_{2} +10.125x_{2}\nonumber \\&\quad -95.125x^\frac{1}{3}_{3}+8x_{3}\nonumber \\ \mathrm{Min}~O_{2}(x)&= 3.875x_{1}+5.125x_{2}+5.9375x_{3}\nonumber \\ \mathrm{s.t.}&\nonumber \\&2.0625x_{1}+3.875x_{2}+2.9375x_{3} \le 333.125\nonumber \\&3.875x_{1}+2.0625x_{2}+2.0625x_{3} \le 365.625\nonumber \\&2.9375x_{1}+2.0625x_{2}+2.9375x_{3} \ge 360\nonumber \\&x_{1},~x_{2},~x_{3} \ge 0. \end{aligned}$$
(4.1)
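A minimal sketch, assuming SciPy's SLSQP (a local NLP solver, so the bounds reported below may be reproduced only approximately), of how the individual minima and maxima of the two objectives in problem (4.1) can be computed:

```python
import numpy as np
from scipy.optimize import minimize

def O1(x):   # negative profit
    return (-99.875 * x[0] ** 0.5 + 8 * x[0] - 119.875 * x[1] ** 0.5
            + 10.125 * x[1] - 95.125 * x[2] ** (1 / 3) + 8 * x[2])

def O2(x):   # total completion time
    return 3.875 * x[0] + 5.125 * x[1] + 5.9375 * x[2]

cons = [
    {"type": "ineq", "fun": lambda x: 333.125 - (2.0625 * x[0] + 3.875 * x[1] + 2.9375 * x[2])},
    {"type": "ineq", "fun": lambda x: 365.625 - (3.875 * x[0] + 2.0625 * x[1] + 2.0625 * x[2])},
    {"type": "ineq", "fun": lambda x: 2.9375 * x[0] + 2.0625 * x[1] + 2.9375 * x[2] - 360.0},
]
bnds = [(1e-6, None)] * 3          # keep x slightly positive for the fractional powers
x0 = np.array([58.0, 5.0, 62.0])   # a feasible starting guess (assumed)

for name, f in [("O1", O1), ("O2", O2)]:
    lo = minimize(f, x0, bounds=bnds, constraints=cons, method="SLSQP")
    hi = minimize(lambda x, f=f: -f(x), x0, bounds=bnds, constraints=cons, method="SLSQP")
    print(name, "L =", round(lo.fun, 2), "U =", round(-hi.fun, 2))
```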

On solving problem (4.1), we obtain the individual minimum and maximum values \(U_{1}=-180.72\), \(L_{1}=-516.70\), \(L_{2}=599.23\) and \(U_{2}=620.84\) of the two objective functions, respectively. Initially, assume that one expert has provided his aspiration levels for the first objective, which are expressed by the following three membership functions:

$$\begin{aligned} \mu _{G^{1}_{1}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{1}(x) \le -516.70\\ \frac{(99.875x^\frac{1}{2}_{1}-8x_{1}+119.875x^\frac{1}{2}_{2} -10.125x_{2}+95.125x^\frac{1}{3}_{3}-8x_{3})^{t}-(-516.70)^{t}}{ (-516.70)^{t} - (-180.72)^{t}} &{} \mathrm{if}~~ -516.70 \le O_{1}(x) \le -180.72\\ 0 &{} \mathrm{if}~~ O_{1}(x) \ge -180.72 \end{array} \right. \end{aligned}$$
(4.2)
$$\begin{aligned} \lambda _{G^{1}_{1}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{1}(x) \le -516.70-y_{1}\\ \frac{(99.875x^\frac{1}{2}_{1}-8x_{1}+119.875x^\frac{1}{2}_{2} -10.125x_{2}+95.125x^\frac{1}{3}_{3}-8x_{3})^{t}-(-516.70)^{t}}{ (y_{1})^{t}} &{} \mathrm{if}~~ -516.70-y_{1} \le O_{1}(x) \le -516.70\\ 0 &{} \mathrm{if}~~ O_{1}(x) \ge -516.70 \end{array} \right. \end{aligned}$$
(4.3)
$$\begin{aligned} \nu _{G^{1}_{1}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{1}(x) \le -516.70+ z_{1}\\ \frac{(-180.72)^{t} - (99.875x^\frac{1}{2}_{1}-8x_{1}+119.875x^\frac{1}{2}_{2} -10.125x_{2}+95.125x^\frac{1}{3}_{3}-8x_{3})^{t}}{ (-180.72)^{t} - (-516.70)^{t} - (z_{1})^{t}} &{} \mathrm{if}~~ -516.70+ z_{1} \le O_{1}(x) \le -180.72 \\ 1 &{} \mathrm{if}~~ O_{1}(x) \ge -180.72 \end{array} \right. \end{aligned}$$
(4.4)

One DM or expert also provides the aspiration levels for the second objective and expresses his neutrosophic fuzzy goals by the following three membership functions:

$$\begin{aligned} \mu _{G^{1}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 599.23\\ \frac{(620.84)^{t}- (3.875x_{1}+5.125x_{2}+5.9375x_{3})^{t}}{(620.84)^{t} - (599.23)^{t}} &{} \mathrm{if}~ 599.23 \le O_{2}(x) \le 620.84\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 620.84 \end{array} \right. \end{aligned}$$
(4.5)
$$\begin{aligned} \lambda _{G^{1}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 599.23 -y_{2}\\ \frac{(620.84)^{t}-(3.875x_{1}+5.125x_{2}+5.9375x_{3})^{t}}{(y_{2})^{t}} &{} \mathrm{if}~~ 599.23 - y_{2} \le O_{2}(x) \le 599.23\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 599.23 \end{array} \right. \end{aligned}$$
(4.6)
$$\begin{aligned} \nu _{G^{1}_{2}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{2}(x) \le 599.23 + z_{2}\\ \frac{(3.875x_{1}+5.125x_{2}+5.9375x_{3})^{t}-(599.23)^{t}-(z_{2})^{t}}{(620.84)^{t} - (599.23)^{t}-(z_{2})^{t}} &{} \mathrm{if}~~ 599.23+z_{2} \le O_{2}(x) \le 620.84\\ 1 &{} \mathrm{if}~~ O_{2}(x) \ge 620.84 \end{array} \right. \end{aligned}$$
(4.7)

Using problem (3.10), the equivalent neutrosophic decision-making problem (4.8) can be stated as follows:

$$\begin{aligned} \begin{array}{ll} \mathrm{Maximize} &{}~~ \phi ~~ (\phi =\alpha - \beta - \gamma )\\ \mathrm{s.t.} &{} \frac{(99.875x^\frac{1}{2}_{1}-8x_{1}+119.875x^\frac{1}{2}_{2} -10.125x_{2}+95.125x^\frac{1}{3}_{3}-8x_{3})^{t}-(-180.72)^{t}}{ (-516.70)^{t} - (-180.72)^{t}} \ge \alpha \\ &{}\frac{(99.875x^\frac{1}{2}_{1}-8x_{1}+119.875x^\frac{1}{2}_{2} -10.125x_{2}+95.125x^\frac{1}{3}_{3}-8x_{3})^{t}-(-180.72)^{t}}{ (y_{1})^{t}} \le \beta \\ &{}\frac{(99.875x^\frac{1}{2}_{1}-8x_{1}+119.875x^\frac{1}{2}_{2} -10.125x_{2}+95.125x^\frac{1}{3}_{3}-8x_{3})^{t}-(-180.72)^{t}}{ (-516.70)^{t} - (-180.72)^{t} - (z_{1})^{t}} \le \gamma \\ &{}\frac{(620.84)^{t}- (3.875x_{1}+5.125x_{2}+5.9375x_{3})^{t}}{(620.84)^{t} - (599.23)^{t}} \ge \alpha \\ &{}\frac{(620.84)^{t}-(3.875x_{1}+5.125x_{2}+5.9375x_{3})^{t}}{(y_{2})^{t}} \le \beta \\ &{}\frac{(3.875x_{1}+5.125x_{2}+5.9375x_{3})^{t}-(599.23)^{t}-(z_{2})^{t}}{(620.84)^{t} - (599.23)^{t}-(z_{2})^{t}} \le \gamma \\ &{}\mathrm{constraints~of}~(4.1) \end{array} \end{aligned}$$
(4.8)

At \(t=2\), the optimal solution of (4.8) is \(x=(60.48,~5.26,~58.37) \), \(O_{1}=409.70,~O_{2}=607.28\), with the satisfaction degree \(\phi ^{*}=0.62\). It should be noted that \(\phi ^{*}=0.62\) represents the overall satisfaction of the neutrosophic fuzzy goals of the DM, namely 62%. Furthermore, assume that three other DMs or experts contribute to the decision-making process: one DM provides his opinion about the first objective function, and two DMs provide their opinions about the second objective function under a neutrosophic hesitant fuzzy environment. The DM's or expert's neutrosophic hesitant fuzzy goals for the first objective function are expressed by the following membership functions:

$$\begin{aligned} \mu _{G^{2}_{1}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{1}(x) \le -523.48\\ \frac{(99.875x^\frac{1}{2}_{1}-8x_{1}+119.875x^\frac{1}{2}_{2} -10.125x_{2}+95.125x^\frac{1}{3}_{3}-8x_{3})^{t}-(-523.48)^{t}}{ (-162.24)^{t} - (-523.48)^{t}} &{} \mathrm{if}~~ -523.48 \le O_{1}(x) \le -162.24\\ 0 &{} \mathrm{if}~~ O_{1}(x) \ge -162.24 \end{array} \right. \end{aligned}$$
(4.9)
$$\begin{aligned} \lambda _{G^{2}_{1}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{1}(x) \le -162.24-y_{1}\\ \frac{(99.875x^\frac{1}{2}_{1}-8x_{1}+119.875x^\frac{1}{2}_{2} -10.125x_{2}+95.125x^\frac{1}{3}_{3}-8x_{3})^{t}-(-162.24)^{t}}{ (y_{1})^{t}} &{} \mathrm{if}~~ -162.24-y_{1} \le O_{1}(x) \le -162.24\\ 0 &{} \mathrm{if}~~ O_{1}(x) \ge -162.24 \end{array} \right. \end{aligned}$$
(4.10)
$$\begin{aligned} \nu _{G^{2}_{1}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{1}(x) \le -523.48+ z_{1}\\ \frac{(-162.24)^{t} - (99.875x^\frac{1}{2}_{1}-8x_{1}+119.875x^\frac{1}{2}_{2} -10.125x_{2}+95.125x^\frac{1}{3}_{3}-8x_{3})^{t}}{ (-162.24)^{t} - (-523.48)^{t} - (z_{1})^{t}} &{} \mathrm{if}~~ -523.48+ z_{1} \le O_{1}(x) \le -162.24 \\ 1 &{} \mathrm{if}~~ O_{1}(x) \ge -162.24 \end{array} \right. \end{aligned}$$
(4.11)

The neutrosophic hesitant fuzzy goals of the DMs or experts for the second objective function are expressed by the following membership functions:

$$\begin{aligned} \mu _{G^{2}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 482.35\\ \frac{(631.54)^{t}- (3.875x_{1}+5.125x_{2}+5.9375x_{3})^{t}}{(631.54)^{t} - (482.35)^{t}} &{} \mathrm{if}~~ 482.35 \le O_{2}(x) \le 631.54\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 631.54 \end{array} \right. \end{aligned}$$
(4.12)
$$\begin{aligned} \lambda _{G^{2}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 631.54 -y_{2}\\ \frac{(631.54)^{t}-(3.875x_{1}+5.125x_{2}+5.9375x_{3})^{t}}{(y_{2})^{t}} &{} \mathrm{if}~~ 631.54 - y_{2} \le O_{2}(x) \le 631.54\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 631.54 \end{array} \right. \end{aligned}$$
(4.13)
$$\begin{aligned} \nu _{G^{2}_{2}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{2}(x) \le 482.35 + z_{2}\\ \frac{(3.875x_{1}+5.125x_{2}+5.9375x_{3})^{t}-(482.35)^{t}-(z_{2})^{t}}{(631.54)^{t} - (482.35)^{t}-(z_{2})^{t}} &{} \mathrm{if}~~ 482.35+z_{2} \le O_{2}(x) \le 631.54\\ 1 &{} \mathrm{if}~~ O_{2}(x) \ge 631.54 \end{array} \right. \end{aligned}$$
(4.14)
$$\begin{aligned} \mu _{G^{3}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 572.06\\ \frac{(620.84)^{t}- (3.875x_{1}+5.125x_{2}+5.9375x_{3})^{t}}{(620.84)^{t} - (572.06)^{t}} &{} \mathrm{if}~~ 572.06 \le O_{2}(x) \le 620.84\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 620.84 \end{array} \right. \end{aligned}$$
(4.15)
$$\begin{aligned} \lambda _{G^{3}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 620.84 -y_{2}\\ \frac{(620.84)^{t}-(3.875x_{1}+5.125x_{2}+5.9375x_{3})^{t}}{(y_{2})^{t}} &{} \mathrm{if}~~ 620.84 - y_{2} \le O_{2}(x) \le 620.84\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 620.84 \end{array} \right. \end{aligned}$$
(4.16)
$$\begin{aligned} \nu _{G^{3}_{2}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{2}(x) \le 572.06 + z_{2}\\ \frac{(3.875x_{1}+5.125x_{2}+5.9375x_{3})^{t}-(572.06)^{t}-(z_{2})^{t}}{(620.84)^{t} - (572.06)^{t}-(z_{2})^{t}} &{} \mathrm{if}~~ 572.06+z_{2} \le O_{2}(x) \le 620.84\\ 1 &{} \mathrm{if}~~ O_{2}(x) \ge 620.84 \end{array} \right. \end{aligned}$$
(4.17)

Thus, we have the neutrosophic hesitant fuzzy decision sets as follows:

$$\begin{aligned} \widetilde{G_{1}}= \{ x,~h_{{\widetilde{G}}_{1}}(x)~|~ x \in X\},~~\widetilde{G_{2}}= \{ x,~h_{{\widetilde{G}}_{2}}(x)~|~ x \in X\} \end{aligned}$$

where X is the feasible solution region and

$$\begin{aligned} h_{{\widetilde{G}}_{1}}(x)= & {} \left\{ \begin{array}{ll} \mu _{\widetilde{G_{1}}}(x)&{}=\{\mu _{{G}^{1}_{1}}(x), ~\mu _{{G}^{2}_{1}}(x) \}\\ \lambda _{\widetilde{G_{1}}}(x)&{}=\{\lambda _{{G}^{1}_{1}}(x),~ \lambda _{{G}^{2}_{1}}(x) \}\\ \nu _{\widetilde{G_{1}}}(x)&{}=\{\nu _{{G}^{1}_{1}}(x),~ \nu _{{G}^{2}_{1}}(x) \} \end{array} \right. \nonumber \\ \text { and }h_{{\widetilde{G}}_{2}}(x)= & {} \left\{ \begin{array}{ll} \mu _{\widetilde{G_{2}}}(x)&{}=\{\mu _{{G}^{1}_{2}}(x), \mu _{{G}^{2}_{2}}(x), \mu _{{G}^{3}_{2}}(x) \}\\ \lambda _{\widetilde{G_{2}}}(x)&{}=\{\lambda _{{G}^{1}_{2}}(x), \lambda _{{G}^{2}_{2}}(x), \lambda _{{G}^{3}_{2}}(x) \}\\ \nu _{\widetilde{G_{2}}}(x)&{}=\{\nu _{{G}^{1}_{2}}(x), \nu _{{G}^{2}_{2}}(x), \nu _{{G}^{3}_{2}}(x) \} \end{array} \right. \nonumber \\ \end{aligned}$$
(4.18)

In optimization technique-I, the neutrosophic hesitant fuzzy decision for Example 1 is stated as follows:

$$\begin{aligned} D^{N}_{h}= \widetilde{G_{1}} \cap \widetilde{G_{2}} = \{ x, ~h_{D^{N}_{h}}(x) ~|~ x \in X \} \end{aligned}$$

with

$$\begin{aligned} h_{D^{N}_{h}}(x) = \left\{ \begin{array}{ll} \mu _{\widetilde{G_{i}}}(x) &{}= \left\{ \begin{array}{l} \mathrm{min}~\{\mu _{{G}^{1}_{1}}(x), \mu _{{G}^{1}_{2}}(x) \},~\mathrm{min}~\{\mu _{{G}^{1}_{1}}(x), \mu _{{G}^{2}_{2}}(x) \},~\mathrm{min}~\{\mu _{{G}^{1}_{1}}(x), \mu _{{G}^{3}_{2}}(x) \}\\ \mathrm{min}~\{\mu _{{G}^{2}_{1}}(x), \mu _{{G}^{1}_{2}}(x) \},~\mathrm{min}~\{\mu _{{G}^{2}_{1}}(x), \mu _{{G}^{2}_{2}}(x) \},~\mathrm{min}~\{\mu _{{G}^{2}_{1}}(x), \mu _{{G}^{3}_{2}}(x) \} \end{array} \right. \\ \\ \lambda _{\widetilde{G_{i}}}(x) &{}= \left\{ \begin{array}{l} \mathrm{max}~\{\lambda _{{G}^{1}_{1}}(x), \lambda _{{G}^{1}_{2}}(x) \},~\mathrm{max}~\{\lambda _{{G}^{1}_{1}}(x), \lambda _{{G}^{2}_{2}}(x) \},~\mathrm{max}~\{\lambda _{{G}^{1}_{1}}(x), \lambda _{{G}^{3}_{2}}(x) \}\\ \mathrm{max}~\{\lambda _{{G}^{2}_{1}}(x), \lambda _{{G}^{1}_{2}}(x) \},~\mathrm{max}~\{\lambda _{{G}^{2}_{1}}(x), \lambda _{{G}^{2}_{2}}(x) \},~\mathrm{max}~\{\lambda _{{G}^{2}_{1}}(x), \lambda _{{G}^{3}_{2}}(x) \} \end{array} \right. \\ \\ \nu _{\widetilde{G_{i}}}(x) &{}= \left\{ \begin{array}{l} \mathrm{max}~\{\nu _{{G}^{1}_{1}}(x), \nu _{{G}^{1}_{2}}(x) \},~\mathrm{max}~\{\nu _{{G}^{1}_{1}}(x), \nu _{{G}^{2}_{2}}(x) \},~\mathrm{max}~\{\nu _{{G}^{1}_{1}}(x), \nu _{{G}^{3}_{2}}(x) \}\\ \mathrm{max}~\{\nu _{{G}^{2}_{1}}(x), \nu _{{G}^{1}_{2}}(x) \},~\mathrm{max}~\{\nu _{{G}^{2}_{1}}(x), \nu _{{G}^{2}_{2}}(x) \},~\mathrm{max}~\{\nu _{{G}^{2}_{1}}(x), \nu _{{G}^{3}_{2}}(x) \} \end{array} \right. \end{array} \right. \end{aligned}$$
(4.19)

for each \(x \in X\). Our aim is now to maximize the truth hesitant membership function \(\mu _{\widetilde{G_{i}}}(x)\) and to minimize the indeterminacy \(\lambda _{\widetilde{G_{i}}}(x)\) and falsity \(\nu _{\widetilde{G_{i}}}(x)\) hesitant membership functions of \(h_{D^{N}_{h}}(x)\). The obtained NHFPOSs are summarized in Table 1. The optimistic and pessimistic NHFPOSs, together with the approaches discussed in Ahmad et al. (2018) and Singh and Yadav (2015), are shown in Table 2.
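To make the construction in (4.19) concrete, the following minimal Python sketch combines per-expert truth degrees by pairwise minima and the indeterminacy and falsity degrees by pairwise maxima. It is illustrative only; the function name hesitant_decision and the sample degrees are assumptions, not part of the original model.

```python
from itertools import product

def hesitant_decision(truths, indets, falsities):
    """
    Combine per-expert degrees into the hesitant decision set of (4.19):
    minima over every combination of truth degrees, and maxima over every
    combination of indeterminacy and falsity degrees. Each argument is a
    list (one entry per goal) of lists of expert degrees.
    """
    mu = [min(combo) for combo in product(*truths)]
    lam = [max(combo) for combo in product(*indets)]
    nu = [max(combo) for combo in product(*falsities)]
    return mu, lam, nu

# Illustration with assumed degrees at some x: two experts on G1 and three on G2,
# which yields the 2 x 3 = 6 entries per family appearing in (4.19).
mu, lam, nu = hesitant_decision(
    truths=[[0.70, 0.55], [0.60, 0.80, 0.65]],
    indets=[[0.20, 0.30], [0.25, 0.10, 0.35]],
    falsities=[[0.15, 0.25], [0.30, 0.20, 0.10]],
)
print(mu, lam, nu)
```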

Using Remark 4, the pessimistic NHFPOS is obtained by solving problem (4.20):

$$\begin{aligned}&\mathrm{Optimize}~H_{2}(x) \nonumber \\&\quad = \left\{ \begin{array}{ll} &{}\mathrm{max}~\mathrm{min}~ \{\mu _{{G}^{1}_{1}}(x), ~\mu _{{G}^{2}_{1}}(x), ~\mu _{{G}^{1}_{2}}(x), \mu _{{G}^{2}_{2}}(x), \mu _{{G}^{3}_{2}}(x) \} \\ &{}\mathrm{min}~\mathrm{max}~ \{\lambda _{{G}^{1}_{1}}(x), ~\lambda _{{G}^{2}_{1}}(x), ~\lambda _{{G}^{1}_{2}}(x), \lambda _{{G}^{2}_{2}}(x), \lambda _{{G}^{3}_{2}}(x) \} \\ &{}\mathrm{min}~\mathrm{max}~ \{\nu _{{G}^{1}_{1}}(x), ~\nu _{{G}^{2}_{1}}(x), ~\nu _{{G}^{1}_{2}}(x), \nu _{{G}^{2}_{2}}(x), \nu _{{G}^{3}_{2}}(x) \} \end{array} \right. \nonumber \\&\quad s.t. ~~x \in X. \end{aligned}$$
(4.20)

The optimal solution of problem (4.20) is obtained as \(x^{*}=(60.48,~5.26,~58.37) \) with \(\phi ^{*}=0.96\) and objective function values \(O_{1}=288.86,~O_{2}=599.64\); this is the pessimistic NHFPOS, as depicted in Table 2.

Furthermore, suppose that the first objective \(O_{1}\) is more important than the second objective \(O_{2}\), with \(w_{1}=0.65\) and \(w_{2}=0.35\). Implementing optimization technique-II then yields the following problem (4.21):

$$\begin{aligned} \begin{array}{ll} \mathrm{max}~~&{}0.65 \left( \frac{\mu _{{G}^{1}_{1}}(x)+\mu _{{G}^{2}_{1}}(x) - \lambda _{{G}^{1}_{1}}(x) - \lambda _{{G}^{2}_{1}}(x) - \nu _{{G}^{1}_{1}}(x) - \nu _{{G}^{2}_{1}}(x) }{2} \right) \\ &{} + 0.35 \left( \frac{ \mu _{{G}^{1}_{2}}(x) + \mu _{{G}^{2}_{2}}(x) + \mu _{{G}^{3}_{2}}(x) - \lambda _{{G}^{1}_{2}}(x)- \lambda _{{G}^{2}_{2}}(x)- \lambda _{{G}^{3}_{2}}(x) - \nu _{{G}^{1}_{2}}(x)- \nu _{{G}^{2}_{2}}(x)- \nu _{{G}^{3}_{2}}(x)}{3} \right) \\ s.t. ~~&{}x \in X. \end{array} \end{aligned}$$
(4.21)

On solving problem (4.21), we obtain the optimal solution \(x^{*}=(60.48,~5.26,~58.37) \) and \(\phi ^{*}=0.99\), with objective function values \(O_{1}=409.70,~O_{2}=607.28\). According to Theorem 4, \(< x^{*},~H_{p}(x^{*})~|~ x^{*} \in X>\) is an NHFPOS.

4.2 System design problem

Table 1 Example 1: Optimal solution results of NHFPOSs to NHFMOPP using optimization techniques
Table 2 Example 1: The optimal solutions of six problems, to obtain NHFPOS of NHFMOPP using optimization technique-I

Example 2 (see Sakawa 2013; Rouhbakhsh et al. 2020): Suppose that six machine types with different capacities are available for the production of three distinct products, say \(P_{1}\), \(P_{2}\), and \(P_{3}\). All the relevant information is summarized in Table 3. The decision-maker(s) intend to formulate and optimize three different objectives: (i) total profit, (ii) product quality, and (iii) worker satisfaction.

Let \(x_{1}\), \(x_{2}\), and \(x_{3}\) denote the numbers of units of each product type to be produced. Thus, the mathematical formulation of the multiobjective programming problem (4.22) is given as follows:

$$\begin{aligned} \begin{array}{lll} Max~O_{1}(x)&{}= 50x_{1} +100 x_{2}+ 17.5x_{3}&{}(Total~ profits)\\ Max~O_{2}(x)&{}=\! 92x_{1} \!+\!75 x_{2}\!+\! 50x_{3}&{}(Quality~ of~ the~ products)\\ Max~O_{3}(x)&{}= 25x_{1} +100 x_{2}+ 75x_{3}&{}(Worker ~satisfaction)\\ s.t.&{}&{}\\ &{}12x_{1}+17x_{2} \le 1400&{}\\ &{}3x_{1}+9x_{2}+8 x_{3} \le 1000&{}\\ &{}10x_{1}+13x_{2}+15 x_{3} \le 1750&{}\\ &{}9.5x_{1}+9.5x_{2}+4 x_{3} \le 1075&{}\\ &{}6x_{1}+16x_{3} \le 1325&{}\\ &{}12x_{2}+7 x_{3} \le 900&{}\\ &{}x_{1},~x_{2},~x_{3} \ge 0.&{} \end{array} \end{aligned}$$
(4.22)
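Before eliciting the experts' goals, the individual best and worst values of each objective over the feasible region of (4.22) can be computed by solving single-objective linear programs. The following is a minimal sketch, assuming SciPy's linprog and a payoff-table construction for the worst values \(L_{i}\) (the construction is our assumption, so the printed numbers may differ slightly from those quoted below):

```python
import numpy as np
from scipy.optimize import linprog

# Objective coefficient rows of (4.22): O1 (total profit), O2 (quality), O3 (worker satisfaction).
C = np.array([[50.0, 100.0, 17.5],
              [92.0, 75.0, 50.0],
              [25.0, 100.0, 75.0]])

# Machine-capacity constraints A x <= b of (4.22), with x >= 0.
A = np.array([[12, 17, 0], [3, 9, 8], [10, 13, 15],
              [9.5, 9.5, 4], [6, 0, 16], [0, 12, 7]], dtype=float)
b = np.array([1400, 1000, 1750, 1075, 1325, 900], dtype=float)
bounds = [(0, None)] * 3

# Maximize each objective separately (linprog minimizes, hence the negated costs).
solutions = [linprog(-c, A_ub=A, b_ub=b, bounds=bounds, method="highs").x for c in C]

# Payoff-table bounds: U_i is the best value of O_i, and L_i is taken here as the worst
# value of O_i over the individual optima (an assumed construction for the quoted L_i).
payoff = np.array([[C[i] @ x for x in solutions] for i in range(3)])
print("U =", payoff.max(axis=1))
print("L =", payoff.min(axis=1))
```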

We now examine this problem under the neutrosophic hesitant fuzzy environment. On solving problem (4.22), we obtain the individual minimum and maximum values for each objective function: \(L_{1}=5452.63\), \(L_{2}=10020.33\), \(L_{3}=5903\), \(U_{1}=8041.14\), \(U_{2}=10950.59\), and \(U_{3}=9355.90\). Assume that one expert provides the aspiration levels for the first objective:

$$\begin{aligned}&\mu _{G^{1}_{1}}(x)\nonumber \\&\quad = \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{1}(x) \le 5452.63\\ \frac{O_{1}(x)- 5452.63}{2588.51} &{} \mathrm{if}~~ 5452.63 \le O_{1}(x) \le 8041.14\\ 1 &{} \mathrm{if}~~ O_{1}(x) \ge 8041.14 \end{array} \right. \end{aligned}$$
(4.23)
$$\begin{aligned}&\lambda _{G^{1}_{1}}(x)\nonumber \\&\quad = \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{1}(x) \le 5452.63\\ \frac{O_{1}(x)- 5452.63}{y_{1}} &{} \mathrm{if}~~ 5452.63 \le O_{1}(x) \le 5452.63+y_{1}\\ 1 &{} \mathrm{if}~~ O_{1}(x) \ge 5452.63+y_{1} \end{array} \right. \nonumber \\ \end{aligned}$$
(4.24)
$$\begin{aligned}&\nu _{G^{1}_{1}}(x)\nonumber \\&\quad = \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{1}(x) \le 5452.63+z_{1}\\ \frac{8041.14-O_{1}(x)-z_{1}}{2588.51-z_{1}} &{} \mathrm{if}~~ 5452.63+z_{1} \le O_{1}(x) \le 8041.14\\ 0 &{} \mathrm{if}~~ O_{1}(x) \ge 8041.14 \end{array} \right. \nonumber \\ \end{aligned}$$
(4.25)

Furthermore, consider that three experts provide aspiration levels for the second objective and express their neutrosophic hesitant fuzzy goals by the following membership functions:

Table 3 Example 2: Total available capacities and technological coefficients
$$\begin{aligned} \mu _{G^{1}_{2}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{2}(x) \le 10020.33\\ \frac{O_{2}(x)- 10020.33}{930.26} &{} \mathrm{if}~~ 10020.33 \le O_{2}(x) \le 10950.59\\ 1 &{} \mathrm{if}~~ O_{2}(x) \ge 10950.59 \end{array} \right. \end{aligned}$$
(4.26)
$$\begin{aligned} \lambda _{G^{1}_{2}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{2}(x) \le 10020.33\\ \frac{O_{2}(x)- 10020.33}{y_{2}} &{} \mathrm{if}~~ 10020.33 \le O_{2}(x) \le 10020.33+y_{2}\\ 1 &{} \mathrm{if}~~ O_{2}(x) \ge 10020.33+y_{2} \end{array} \right. \nonumber \\ \end{aligned}$$
(4.27)
$$\begin{aligned} \nu _{G^{1}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 10020.33+z_{2}\\ \frac{10950.59 - O_{2}(x)-z_{2} }{930.26-z_{2}} &{} \mathrm{if}~~ 10020.33+z_{2} \le O_{2}(x) \le 10950.59\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 10950.59 \end{array} \right. \nonumber \\ \end{aligned}$$
(4.28)
$$\begin{aligned} \mu _{G^{2}_{2}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{2}(x) \le 7300\\ \frac{O_{2}(x)- 7300}{3600} &{} \mathrm{if}~~ 7300 \le O_{2}(x) \le 10900\\ 1 &{} \mathrm{if}~~ O_{2}(x) \ge 10900 \end{array} \right. \end{aligned}$$
(4.29)
$$\begin{aligned} \lambda _{G^{2}_{2}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{2}(x) \le 7300\\ \frac{O_{2}(x)- 7300}{y_{2}} &{} \mathrm{if}~~ 7300 \le O_{2}(x) \le 7300+y_{2}\\ 1 &{} \mathrm{if}~~ O_{2}(x) \ge 7300+y_{2} \end{array} \right. \end{aligned}$$
(4.30)
$$\begin{aligned} \nu _{G^{2}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 7300+z_{2}\\ \frac{10900 - O_{2}(x)-z_{2} }{3600-z_{2}} &{} \mathrm{if}~~ 7300+z_{2} \le O_{2}(x) \le 10900\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 10900 \end{array} \right. \end{aligned}$$
(4.31)
$$\begin{aligned} \mu _{G^{3}_{2}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{2}(x) \le 8300\\ \frac{O_{2}(x)- 8300}{1000} &{} \mathrm{if}~~ 8300 \le O_{2}(x) \le 9300\\ 1 &{} \mathrm{if}~~ O_{2}(x) \ge 9300 \end{array} \right. \end{aligned}$$
(4.32)
$$\begin{aligned} \lambda _{G^{3}_{2}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{2}(x) \le 8300\\ \frac{O_{2}(x)- 8300}{y_{2}} &{} \mathrm{if}~~ 8300 \le O_{2}(x) \le 8300+y_{2}\\ 1 &{} \mathrm{if}~~ O_{2}(x) \ge 8300+y_{2} \end{array} \right. \end{aligned}$$
(4.33)
$$\begin{aligned} \nu _{G^{3}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 8300+z_{2}\\ \frac{9300 - O_{2}(x)-z_{2} }{1000-z_{2}} &{} \mathrm{if}~~ 8300+z_{2} \le O_{2}(x) \le 9300\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 9300 \end{array} \right. \end{aligned}$$
(4.34)

Also, consider that two experts provide aspiration levels for the third objective and express their neutrosophic hesitant fuzzy goals by the following membership functions:

$$\begin{aligned} \mu _{G^{1}_{3}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{3}(x) \le 5903\\ \frac{O_{3}(x)- 5903}{3452.90} &{} \mathrm{if}~~ 5903 \le O_{3}(x) \le 9355.90\\ 1 &{} \mathrm{if}~~ O_{3}(x) \ge 9355.90 \end{array} \right. \end{aligned}$$
(4.35)
$$\begin{aligned} \lambda _{G^{1}_{3}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{3}(x) \le 5903\\ \frac{O_{3}(x)- 5903}{y_{3}} &{} \mathrm{if}~~ 5903 \le O_{3}(x) \le 5903+y_{3}\\ 1 &{} \mathrm{if}~~ O_{3}(x) \ge 5903+y_{3} \end{array} \right. \end{aligned}$$
(4.36)
$$\begin{aligned} \nu _{G^{1}_{3}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{3}(x) \le 5903+z_{3}\\ \frac{9355.90 - O_{3}(x)-z_{3} }{3452.90-z_{3}} &{} \mathrm{if}~~ 5903+z_{3} \le O_{3}(x) \le 9355.90\\ 0 &{} \mathrm{if}~~ O_{3}(x) \ge 9355.90 \end{array} \right. \nonumber \\ \end{aligned}$$
(4.37)
$$\begin{aligned} \mu _{G^{2}_{3}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{3}(x) \le 7400\\ \frac{O_{3}(x)- 7400}{2300} &{} \mathrm{if}~~ 7400 \le O_{3}(x) \le 9700\\ 1 &{} \mathrm{if}~~ O_{3}(x) \ge 9700 \end{array} \right. \end{aligned}$$
(4.38)
$$\begin{aligned} \lambda _{G^{2}_{3}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{3}(x) \le 7400\\ \frac{O_{3}(x)- 7400}{y_{3}} &{} \mathrm{if}~~ 7400 \le O_{3}(x) \le 7400+y_{3}\\ 1 &{} \mathrm{if}~~ O_{3}(x) \ge 7400+y_{3} \end{array} \right. \end{aligned}$$
(4.39)
$$\begin{aligned} \nu _{G^{2}_{3}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{3}(x) \le 7400+z_{3}\\ \frac{9700 - O_{3}(x)-z_{3} }{2300-z_{3}} &{} \mathrm{if}~~ 7400+z_{3} \le O_{3}(x) \le 9700\\ 0 &{} \mathrm{if}~~ O_{3}(x) \ge 9700 \end{array} \right. \nonumber \\ \end{aligned}$$
(4.40)
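The linear truth, indeterminacy, and falsity functions in (4.23)–(4.40) all share the same shapes, so they can be encoded once as small helper closures. The sketch below is illustrative only; the helper names and the tolerance values \(y_{1}=z_{1}=200\) used in the last line are assumptions:

```python
def truth(L, U):
    """Linear truth membership for a maximization goal: 0 at L, 1 at U (cf. (4.23), (4.26))."""
    def mu(O):
        return 0.0 if O <= L else 1.0 if O >= U else (O - L) / (U - L)
    return mu

def indeterminacy(L, y):
    """Linear indeterminacy membership rising from 0 at L to 1 at L + y (cf. (4.24), (4.27))."""
    def lam(O):
        return 0.0 if O <= L else 1.0 if O >= L + y else (O - L) / y
    return lam

def falsity(L, U, z):
    """Linear falsity membership: 1 up to L + z, 0 from U onward (cf. (4.25), (4.28))."""
    def nu(O):
        return 1.0 if O <= L + z else 0.0 if O >= U else (U - O - z) / (U - L - z)
    return nu

# First expert's goals for O1 in Example 2, with assumed tolerances y1 = z1 = 200.
mu1 = truth(5452.63, 8041.14)
lam1 = indeterminacy(5452.63, 200.0)
nu1 = falsity(5452.63, 8041.14, 200.0)
print(mu1(7419.82), lam1(7419.82), nu1(7419.82))
```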

Hence, we have the neutrosophic hesitant fuzzy decision sets as follows:

$$\begin{aligned}&\widetilde{G_{1}}= \{ x,~h_{{\widetilde{G}}_{1}}(x)~|~ x \in X\},~~\widetilde{G_{2}}= \{ x,~h_{{\widetilde{G}}_{2}}(x)~|~ x \in X\},\\&\quad \widetilde{G_{3}}= \{ x,~h_{{\widetilde{G}}_{3}}(x)~|~ x \in X\} \end{aligned}$$

where X is the feasible solution region and

$$\begin{aligned} h_{{\widetilde{G}}_{1}}(x)= & {} \left\{ \begin{array}{ll} \mu _{\widetilde{G_{1}}}(x)&{}= \{ \mu _{{G}^{1}_{1}}(x) \}\\ \lambda _{\widetilde{G_{1}}}(x)&{}= \{ \lambda _{{G}^{1}_{1}}(x) \}\\ \nu _{\widetilde{G_{1}}}(x)&{}= \{ \nu _{{G}^{1}_{1}}(x) \} \end{array} \right. \end{aligned}$$
(4.41)
$$\begin{aligned} h_{{\widetilde{G}}_{2}}(x)= & {} \left\{ \begin{array}{ll} \mu _{\widetilde{G_{2}}}(x)&{}=\{\mu _{{G}^{1}_{2}}(x), \mu _{{G}^{2}_{2}}(x), \mu _{{G}^{3}_{2}}(x) \}\\ \lambda _{\widetilde{G_{2}}}(x)&{}=\{\lambda _{{G}^{1}_{2}}(x), \lambda _{{G}^{2}_{2}}(x), \lambda _{{G}^{3}_{2}}(x) \}\\ \nu _{\widetilde{G_{2}}}(x)&{}=\{\nu _{{G}^{1}_{2}}(x), \nu _{{G}^{2}_{2}}(x), \nu _{{G}^{3}_{2}}(x) \} \end{array} \right. \end{aligned}$$
(4.42)
$$\begin{aligned} h_{{\widetilde{G}}_{3}}(x)= & {} \left\{ \begin{array}{ll} \mu _{\widetilde{G_{3}}}(x)&{}=\{\mu _{{G}^{1}_{3}}(x), \mu _{{G}^{2}_{3}}(x) \}\\ \lambda _{\widetilde{G_{3}}}(x)&{}=\{\lambda _{{G}^{1}_{3}}(x), \lambda _{{G}^{2}_{3}}(x) \}\\ \nu _{\widetilde{G_{3}}}(x)&{}=\{\nu _{{G}^{1}_{3}}(x), \nu _{{G}^{2}_{3}}(x) \} \end{array} \right. \end{aligned}$$
(4.43)

In optimization technique-I, the neutrosophic hesitant fuzzy decision for Example 2 can be stated as:

$$\begin{aligned} D^{N}_{h}= \widetilde{G_{1}} \cap \widetilde{G_{2}} \cap \widetilde{G_{3}} = \{ x, ~h_{D^{N}_{h}}(x) ~|~ x \in X \} \end{aligned}$$

As before, our intention is to maximize the truth hesitant membership function \(\mu _{\widetilde{G_{i}}}(x)\) and to minimize the indeterminacy \(\lambda _{\widetilde{G_{i}}}(x)\) and falsity \(\nu _{\widetilde{G_{i}}}(x)\) hesitant membership functions of \(h_{D^{N}_{h}}(x)\). The obtained NHFPOSs of the NHFMOPP using optimization technique-I are summarized in Table 4. The optimistic and pessimistic NHFPOSs are shown in Table 5.

Table 4 Example 2: The optimal solutions of NHFMOPP using optimization technique-I

With the aid of Remark 4, the pessimistic NHFPOS is obtained by solving (4.44):

$$\begin{aligned}&Optimize~H_{2}(x) \nonumber \\&\quad = \left\{ \!\! \begin{array}{@{}ll} &{}\mathrm{max}~\mathrm{min}~ \{\mu _{{G}^{1}_{1}}(x), ~\mu _{{G}^{1}_{2}}(x), \mu _{{G}^{2}_{2}}(x), ~\mu _{{G}^{3}_{2}}(x), ~\mu _{{G}^{1}_{3}}(x), \mu _{{G}^{2}_{3}}(x) \} \\ &{}\mathrm{min}~\mathrm{max}~ \{\lambda _{{G}^{1}_{1}}(x), ~\lambda _{{G}^{1}_{2}}(x), \lambda _{{G}^{2}_{2}}(x), \lambda _{{G}^{3}_{2}}(x), ~\lambda _{{G}^{1}_{3}}(x), ~\lambda _{{G}^{2}_{3}}(x) \} \\ &{}\mathrm{min}~\mathrm{max}~ \{\nu _{{G}^{1}_{1}}(x), ~\nu _{{G}^{1}_{2}}(x), \nu _{{G}^{2}_{2}}(x), \nu _{{G}^{3}_{2}}(x), ~\nu _{{G}^{1}_{3}}(x), ~\nu _{{G}^{2}_{3}}(x) \} \end{array} \right. \nonumber \\&\quad s.t. ~~x \in X. \end{aligned}$$
(4.44)
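To give a flavour of how (4.44) can be solved numerically, the sketch below encodes only its truth (max–min) part as a linear program with an auxiliary variable \(\alpha\); the indeterminacy and falsity parts are omitted because they require the unspecified tolerance parameters \(y_{i}\), \(z_{i}\). SciPy's linprog is assumed:

```python
import numpy as np
from scipy.optimize import linprog

# Objective rows of (4.22): O1, O2, O3.
O = np.array([[50.0, 100.0, 17.5],
              [92.0, 75.0, 50.0],
              [25.0, 100.0, 75.0]])

# (objective index, lower anchor L, spread U - L) of each linear truth membership:
# one expert for O1, three for O2, two for O3, cf. (4.23), (4.26), (4.29), (4.32), (4.35), (4.38).
goals = [(0, 5452.63, 2588.51),
         (1, 10020.33, 930.26), (1, 7300.0, 3600.0), (1, 8300.0, 1000.0),
         (2, 5903.0, 3452.90), (2, 7400.0, 2300.0)]

# Decision vector (x1, x2, x3, alpha); maximizing alpha means minimizing -alpha.
c = np.array([0.0, 0.0, 0.0, -1.0])

# Truth constraints (O_i(x) - L) / spread >= alpha  <=>  -O_i(x) + spread * alpha <= -L.
A_goal = np.array([np.append(-O[i], s) for i, L, s in goals])
b_goal = np.array([-L for _, L, _ in goals])

# Feasibility constraints of (4.22), padded with a zero column for alpha.
A_feas = np.array([[12, 17, 0, 0], [3, 9, 8, 0], [10, 13, 15, 0],
                   [9.5, 9.5, 4, 0], [6, 0, 16, 0], [0, 12, 7, 0]], dtype=float)
b_feas = np.array([1400, 1000, 1750, 1075, 1325, 900], dtype=float)

res = linprog(c, A_ub=np.vstack([A_goal, A_feas]), b_ub=np.hstack([b_goal, b_feas]),
              bounds=[(0, None)] * 3 + [(0, 1)], method="highs")
x, alpha = res.x[:3], res.x[3]
print("x =", x, "alpha =", alpha, "objective values =", O @ x)
```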

The optimal solution of problem (4.44) is obtained as \(x^{*}=(54.97,~38.56,~46.59) \) with \(\phi ^{*}=0.58\) and objective function values \(O_{1}=7419.82,~O_{2}=10278.74\), and \(O_{3}=8724.50\); this is the pessimistic NHFPOS, as depicted in Table 5.

Table 5 Example 2: Optimistic and pessimistic NHFPOSs of NHFMOPP

Moreover, consider three different weighting schemes, namely (\(w_{1}=0.6\), \(w_{2}=0.2\), \(w_{3}=0.2\)), (\(w_{1}=0.2\), \(w_{2}=0.6\), \(w_{3}=0.2\)), and (\(w_{1}=0.2\), \(w_{2}=0.2\), \(w_{3}=0.6\)). Applying optimization technique-II then yields the following problem (4.45):

$$\begin{aligned} \begin{array}{ll} \mathrm{max}~~&{}w_{1} \left( \mu _{{G}^{1}_{1}}(x)- \lambda _{{G}^{1}_{1}}(x)- \nu _{{G}^{1}_{1}}(x) \right) \\ &{}+ w_{2} \left( \frac{ \mu _{{G}^{1}_{2}}(x) + \mu _{{G}^{2}_{2}}(x) + \mu _{{G}^{3}_{2}}(x) - \lambda _{{G}^{1}_{2}}(x)- \lambda _{{G}^{2}_{2}}(x)- \lambda _{{G}^{3}_{2}}(x) - \nu _{{G}^{1}_{2}}(x)- \nu _{{G}^{2}_{2}}(x)- \nu _{{G}^{3}_{2}}(x)}{3} \right) \\ &{}+ w_{3} \left( \frac{ \mu _{{G}^{1}_{3}}(x) + \mu _{{G}^{2}_{3}}(x) - \lambda _{{G}^{1}_{3}}(x)- \lambda _{{G}^{2}_{3}}(x)- \nu _{{G}^{1}_{3}}(x)- \nu _{{G}^{2}_{3}}(x)}{2} \right) \\ s.t. ~~&{}x \in X. \end{array} \end{aligned}$$
(4.45)

On solving problem (4.45), we obtain the optimal solutions at the different weights summarized in Table 6. According to Theorem 4, \(< x^{*},~H_{p}(x^{*})~|~ x^{*} \in X>\) is an NHFPOS. Furthermore, a comparative study of Example 2 with other existing methods is performed and depicted in Table 7. Although the solution outcomes determined by the proposed optimization techniques compare favorably, it cannot be claimed that they always outperform the alternatives, because the results depend on the experience and opinions of the various experts. It can be stated, however, that the results are closer to reality because the opinions of several experts, together with their degrees of neutral thought, are utilized in the decision-making process.

Table 6 Example 2: The optimal solutions of NHFMOPP with different weights using optimization technique-II
Table 7 Example 2: Solution results comparison with other approaches

4.3 Production planning problem

Example 3 (see Zeleny 1986; Rouhbakhsh et al. 2020): A production company produces two different items \(I_{1}\) and \(I_{2}\) using three different raw materials \(R_{1}\), \(R_{2}\), and \(R_{3}\) and intends to maximize the total profit after sales. The input data on the resources required to produce each unit of \(I_{1}\) and \(I_{2}\) are summarized in Table 8. The available material capacities are restricted to 27, 45, and 15 tons for \(R_{1}\), \(R_{2}\), and \(R_{3}\), respectively. The profit per unit of each product is known in advance: item \(I_{1}\) yields a profit of 1 million yen per ton, whereas \(I_{2}\) yields a profit of 2 million yen per ton. Under the available resources, the company aims to determine the optimal production quantity of each item \(I_{1}\) and \(I_{2}\) so that the overall profit is maximized. Furthermore, item \(I_{1}\) releases three units of pollution per ton, while \(I_{2}\) generates two units of pollution per ton. Therefore, the decision-maker(s) or experts must not only enhance the total profit but also reduce the amount of pollution.

Assume that \(x_{1}\) and \(x_{2}\) represent the numbers of tons produced of items \(I_{1}\) and \(I_{2}\), respectively. Therefore, the mathematical formulation of the production planning problem (4.46) can be given as follows:

$$\begin{aligned} \begin{array}{ll} Min~O_{1}(x)~~&{}= -x_{1} -2 x_{2}\\ Min~O_{2}(x)~~&{}=3 x_{1}+2x_{2}\\ s.t.&{}\\ &{}2x_{1}+6x_{2} \le 27\\ &{}3x_{1}+x_{2} \le 15\\ &{}8x_{1}+6x_{2} \le 45\\ &{}x_{1},~x_{2} \ge 0. \end{array} \end{aligned}$$
(4.46)

After solving problem (4.46), we obtain the individual minimum and maximum values for each objective function: \(L_{1}=-10\), \(L_{2}=0\), \(U_{1}=0\), and \(U_{2}=16.5\).
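These bounds can be reproduced with a few lines of code; the sketch below (assuming SciPy) simply minimizes and maximizes each objective of (4.46) over the feasible region X:

```python
import numpy as np
from scipy.optimize import linprog

# Feasible region X of (4.46): A x <= b, x >= 0.
A = np.array([[2, 6], [3, 1], [8, 6]], dtype=float)
b = np.array([27, 15, 45], dtype=float)
objectives = {"O1": np.array([-1.0, -2.0]), "O2": np.array([3.0, 2.0])}

for name, c in objectives.items():
    lo = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")   # minimum of the objective
    hi = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")  # maximum of the objective
    print(name, "min =", lo.fun, "max =", -hi.fun)
```

One DM or expert provides the aspiration levels for the first objective through the following three membership functions: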

$$\begin{aligned} \mu _{G^{1}_{1}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{1}(x) \le -10\\ \frac{O_{1}(x)-(-8)}{ -2} &{} \mathrm{if}~~ -10 \le O_{1}(x) \le -8\\ 0 &{} \mathrm{if}~~ O_{1}(x) \ge -8 \end{array} \right. \end{aligned}$$
(4.47)
$$\begin{aligned} \lambda _{G^{1}_{1}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{1}(x) \le -10-y_{1}\\ \frac{O_{1}(x)-(-10-y_{1})}{ y_{1}} &{} \mathrm{if}~~ -10-y_{1} \le O_{1}(x) \le -10\\ 0 &{} \mathrm{if}~~ O_{1}(x) \ge -10 \end{array} \right. \nonumber \\ \end{aligned}$$
(4.48)
$$\begin{aligned} \nu _{G^{1}_{1}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{1}(x) \le -10+z_{1}\\ \frac{O_{1}(x)-(-10+z_{1})}{ (-2+z_{1})} &{} \mathrm{if}~~ -10+z_{1} \le O_{1}(x) \le -8\\ 1 &{} \mathrm{if}~~ O_{1}(x) \ge -8 \end{array} \right. \nonumber \\ \end{aligned}$$
(4.49)
Table 8 Example 3: resources input dataset

Also, one DM or expert provides the aspiration levels for the second objective and expresses his or her neutrosophic fuzzy goals by the following three membership functions:

$$\begin{aligned} \mu _{G^{1}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 9\\ \frac{O_{2}(x)- 14}{-5} &{} \mathrm{if}~~ 9 \le O_{2}(x) \le 14\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 14 \end{array} \right. \end{aligned}$$
(4.50)
$$\begin{aligned} \lambda _{G^{1}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 9 -y_{2}\\ \frac{O_{2}(x)- (9-y_{2})}{y_{2}} &{} \mathrm{if}~~ 9-y_{2} \le O_{2}(x) \le 9\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 9 \end{array} \right. \end{aligned}$$
(4.51)
$$\begin{aligned} \nu _{G^{1}_{2}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{2}(x) \le 9 + z_{2}\\ \frac{O_{2}(x)-(9+z_{2})}{ (-5+z_{2})} &{} \mathrm{if}~~ 9+z_{2} \le O_{2}(x) \le 14\\ 1 &{} \mathrm{if}~~ O_{2}(x) \ge 14 \end{array} \right. \end{aligned}$$
(4.52)

Using problem (3.10), the equivalent neutrosophic decision-making problem (4.53) can be stated as follows:

$$\begin{aligned} \begin{array}{ll} \mathrm{Maximize} &{}~~ \phi ~~ (\phi =\alpha - \beta - \gamma )\\ \mathrm{s.t.} &{} \frac{O_{1}(x)-(-8)}{ -2} \ge \alpha \\ &{}\frac{O_{1}(x)-(-8)}{ y_{1}} \le \beta \\ &{}\frac{O_{1}(x)-(-10+z_{1})}{ (-2+z_{1})} \le \gamma \\ &{}\frac{O_{2}(x)- 14}{-5} \ge \alpha \\ &{}\frac{O_{2}(x)- 14}{y_{2}} \le \beta \\ &{}\frac{O_{2}(x)-(9+z_{2})}{ (-5+z_{2})} \le \gamma \\ &{}\mathrm{constraints}~~(4.46) \end{array} \end{aligned}$$
(4.53)

The optimal solution of the bi-objective programming problem (4.53) is \(x^{*}=(x_{1}^{*},x_{2}^{*})=(0.87,~4.21) \) with \(O_{1}=-9.29,~O_{2}=11.03\) and degree of satisfaction \(\phi ^{*}=0.64\). Note that \(\phi ^{*}=0.64\) represents an overall satisfaction of 64% of the DM's neutrosophic fuzzy goals. Furthermore, assume that additional DMs or experts express their neutrosophic hesitant fuzzy goals for the first and second objective functions by the following membership functions:

$$\begin{aligned} \mu _{G^{2}_{1}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{1}(x) \le -11\\ \frac{O_{1}(x)-(-5)}{ -6} &{} \mathrm{if}~~ -11 \le O_{1}(x) \le -5\\ 0 &{} \mathrm{if}~~ O_{1}(x) \ge -5 \end{array} \right. \end{aligned}$$
(4.54)
$$\begin{aligned} \lambda _{G^{2}_{1}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{1}(x) \le -11-y_{1}\\ \frac{O_{1}(x)-(-11-y_{1})}{ y_{1}} &{} \mathrm{if}~~ -11-y_{1} \le O_{1}(x) \le -11\\ 0 &{} \mathrm{if}~~ O_{1}(x) \ge -11 \end{array} \right. \nonumber \\ \end{aligned}$$
(4.55)
$$\begin{aligned} \nu _{G^{2}_{1}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{1}(x) \le -11+z_{1}\\ \frac{O_{1}(x)-(-11+z_{1})}{ (-6+z_{1})} &{} \mathrm{if}~~ -11+z_{1} \le O_{1}(x) \le -5\\ 1 &{} \mathrm{if}~~ O_{1}(x) \ge -5 \end{array} \right. \nonumber \\ \end{aligned}$$
(4.56)
$$\begin{aligned} \mu _{G^{3}_{1}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{1}(x) \le -12\\ \frac{O_{1}(x)-(-7)}{ -5} &{} \mathrm{if}~~ -12 \le O_{1}(x) \le -7\\ 0 &{} \mathrm{if}~~ O_{1}(x) \ge -7 \end{array} \right. \end{aligned}$$
(4.57)
$$\begin{aligned} \lambda _{G^{3}_{1}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{1}(x) \le -12-y_{1}\\ \frac{O_{1}(x)-(-12-y_{1})}{ y_{1}} &{} \mathrm{if}~~ -12-y_{1} \le O_{1}(x) \le -12\\ 0 &{} \mathrm{if}~~ O_{1}(x) \ge -12 \end{array} \right. \nonumber \\ \end{aligned}$$
(4.58)
$$\begin{aligned} \nu _{G^{3}_{1}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{1}(x) \le -12+z_{1}\\ \frac{O_{1}(x)-(-12+z_{1})}{ (-5+z_{1})} &{} \mathrm{if}~~ -12+z_{1} \le O_{1}(x) \le -7\\ 1 &{} \mathrm{if}~~ O_{1}(x) \ge -7 \end{array} \right. \end{aligned}$$
(4.59)
$$\begin{aligned} \mu _{G^{2}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 2\\ \frac{O_{2}(x)- 10}{-8} &{} \mathrm{if}~~ 2 \le O_{2}(x) \le 10\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 10 \end{array} \right. \end{aligned}$$
(4.60)
$$\begin{aligned} \lambda _{G^{2}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 2 -y_{2}\\ \frac{O_{2}(x)- (2-y_{2})}{y_{2}} &{} \mathrm{if}~~ 2-y_{2} \le O_{2}(x) \le 2\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 2 \end{array} \right. \end{aligned}$$
(4.61)
$$\begin{aligned} \nu _{G^{2}_{2}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{2}(x) \le 2 + z_{2}\\ \frac{O_{2}(x)-(2+z_{2})}{ (-8+z_{2})} &{} \mathrm{if}~~ 2+z_{2} \le O_{2}(x) \le 10\\ 1 &{} \mathrm{if}~~ O_{2}(x) \ge 10 \end{array} \right. \end{aligned}$$
(4.62)
$$\begin{aligned} \mu _{G^{3}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 0\\ \frac{O_{2}(x)- 15}{-15} &{} \mathrm{if}~~ 0 \le O_{2}(x) \le 15\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 15 \end{array} \right. \end{aligned}$$
(4.63)
$$\begin{aligned} \lambda _{G^{3}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 0 -y_{2}\\ \frac{O_{2}(x)- (0-y_{2})}{y_{2}} &{} \mathrm{if}~~ 0-y_{2} \le O_{2}(x) \le 0\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 0 \end{array} \right. \end{aligned}$$
(4.64)
$$\begin{aligned} \nu _{G^{3}_{2}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{2}(x) \le 0 + z_{2}\\ \frac{O_{2}(x)-(z_{2})}{ (-15+z_{2})} &{} \mathrm{if}~~ 0+z_{2} \le O_{2}(x) \le 15\\ 1 &{} \mathrm{if}~~ O_{2}(x) \ge 15 \end{array} \right. \end{aligned}$$
(4.65)
$$\begin{aligned} \mu _{G^{4}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 8\\ \frac{O_{2}(x)- 16.5}{-8.5} &{} \mathrm{if}~~ 8 \le O_{2}(x) \le 16.5\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 16.5 \end{array} \right. \end{aligned}$$
(4.66)
$$\begin{aligned} \lambda _{G^{4}_{2}}(x)= & {} \left\{ \begin{array}{ll} 1 &{} \mathrm{if}~~ O_{2}(x) \le 8 -y_{2}\\ \frac{O_{2}(x)- (8-y_{2})}{y_{2}} &{} \mathrm{if}~~ 8-y_{2} \le O_{2}(x) \le 8\\ 0 &{} \mathrm{if}~~ O_{2}(x) \ge 8 \end{array} \right. \end{aligned}$$
(4.67)
$$\begin{aligned} \nu _{G^{4}_{2}}(x)= & {} \left\{ \begin{array}{ll} 0 &{} \mathrm{if}~~ O_{2}(x) \le 8 + z_{2}\\ \frac{O_{2}(x)-(8+z_{2})}{ (-8.5+z_{2})} &{} \mathrm{if}~~ 8+z_{2} \le O_{2}(x) \le 16.5\\ 1 &{} \mathrm{if}~~ O_{2}(x) \ge 16.5 \end{array} \right. \end{aligned}$$
(4.68)

Therefore, we have the neutrosophic hesitant fuzzy decision sets as follows:

$$\begin{aligned} \widetilde{G_{1}}= \{ x,~h_{{\widetilde{G}}_{1}}(x)~|~ x \in X\},~~\widetilde{G_{2}}= \{ x,~h_{{\widetilde{G}}_{2}}(x)~|~ x \in X\} \end{aligned}$$

where X is the feasible solution region and

$$\begin{aligned}&h_{{\widetilde{G}}_{1}}(x)= \left\{ \begin{array}{ll} \mu _{\widetilde{G_{1}}}(x)&{}=\{\mu _{{G}^{1}_{1}}(x), \mu _{{G}^{2}_{1}}(x), \mu _{{G}^{3}_{1}}(x) \}\\ \lambda _{\widetilde{G_{1}}}(x)&{}=\{\lambda _{{G}^{1}_{1}}(x), \lambda _{{G}^{2}_{1}}(x), \lambda _{{G}^{3}_{1}}(x) \}\\ \nu _{\widetilde{G_{1}}}(x)&{}=\{\nu _{{G}^{1}_{1}}(x), \nu _{{G}^{2}_{1}}(x), \nu _{{G}^{3}_{1}}(x) \} \end{array} \right. \nonumber \\&\text { and } h_{{\widetilde{G}}_{2}}(x)\nonumber \\&\quad = \left\{ \begin{array}{ll} \mu _{\widetilde{G_{2}}}(x)&{}=\{\mu _{{G}^{1}_{2}}(x), \mu _{{G}^{2}_{2}}(x), \mu _{{G}^{3}_{2}}(x), \mu _{{G}^{4}_{2}}(x) \}\\ \lambda _{\widetilde{G_{2}}}(x)&{}=\{\lambda _{{G}^{1}_{2}}(x), \lambda _{{G}^{2}_{2}}(x), \lambda _{{G}^{3}_{2}}(x), \lambda _{{G}^{4}_{2}}(x) \}\\ \nu _{\widetilde{G_{2}}}(x)&{}=\{\nu _{{G}^{1}_{2}}(x), \nu _{{G}^{2}_{2}}(x), \nu _{{G}^{3}_{2}}(x), \nu _{{G}^{4}_{2}}(x) \} \end{array} \right. \nonumber \\ \end{aligned}$$
(4.69)

Thus, the neutrosophic hesitant fuzzy decision for this problem (Example 3) is stated as follows:

$$\begin{aligned} D^{N}_{h}= \widetilde{G_{1}} \cap \widetilde{G_{2}} = \{ x, ~h_{D^{N}_{h}}(x) ~|~ x \in X \} \end{aligned}$$

with

$$\begin{aligned} h_{D^{N}_{h}}(x)= \left\{ \begin{array}{ll} \mu _{\widetilde{G_{i}}}(x)=&{} \left\{ \begin{array}{l} \mathrm{min}~\{\mu _{{G}^{1}_{1}}(x), \mu _{{G}^{1}_{2}}(x) \},~\mathrm{min}~\{\mu _{{G}^{1}_{1}}(x), \mu _{{G}^{2}_{2}}(x) \},\\ \mathrm{min}~\{\mu _{{G}^{1}_{1}}(x), \mu _{{G}^{3}_{2}}(x) \},~\mathrm{min}~\{\mu _{{G}^{1}_{1}}(x), \mu _{{G}^{4}_{2}}(x) \},\\ \mathrm{min}~\{\mu _{{G}^{2}_{1}}(x), \mu _{{G}^{1}_{2}}(x) \},~\mathrm{min}~\{\mu _{{G}^{2}_{1}}(x), \mu _{{G}^{2}_{2}}(x) \},\\ \mathrm{min}~\{\mu _{{G}^{2}_{1}}(x), \mu _{{G}^{3}_{2}}(x) \},~\mathrm{min}~\{\mu _{{G}^{2}_{1}}(x), \mu _{{G}^{4}_{2}}(x) \},\\ \mathrm{min}~\{\mu _{{G}^{3}_{1}}(x), \mu _{{G}^{1}_{2}}(x) \}, ~\mathrm{min}~\{\mu _{{G}^{3}_{1}}(x), \mu _{{G}^{2}_{2}}(x) \},\\ \mathrm{min}~\{\mu _{{G}^{3}_{1}}(x), \mu _{{G}^{3}_{2}}(x) \},~\mathrm{min}~\{\mu _{{G}^{3}_{1}}(x), \mu _{{G}^{4}_{2}}(x) \} \end{array} \right. \\ \\ \lambda _{\widetilde{G_{i}}}(x)=&{} \left\{ \begin{array}{l} \mathrm{max}~\{\lambda _{{G}^{1}_{1}}(x), \lambda _{{G}^{1}_{2}}(x) \},~\mathrm{max}~\{\lambda _{{G}^{1}_{1}}(x), \lambda _{{G}^{2}_{2}}(x) \},\\ \mathrm{max}~\{\lambda _{{G}^{1}_{1}}(x), \lambda _{{G}^{3}_{2}}(x) \},~\mathrm{max}~\{\lambda _{{G}^{1}_{1}}(x), \lambda _{{G}^{4}_{2}}(x) \},\\ \mathrm{max}~\{\lambda _{{G}^{2}_{1}}(x), \lambda _{{G}^{1}_{2}}(x) \}, ~\mathrm{max}~\{\lambda _{{G}^{2}_{1}}(x), \lambda _{{G}^{2}_{2}}(x) \},\\ \mathrm{max}~\{\lambda _{{G}^{2}_{1}}(x), \lambda _{{G}^{3}_{2}}(x) \},~\mathrm{max}~\{\lambda _{{G}^{2}_{1}}(x), \lambda _{{G}^{4}_{2}}(x) \},\\ \mathrm{max}~\{\lambda _{{G}^{3}_{1}}(x), \lambda _{{G}^{1}_{2}}(x) \},~\mathrm{max}~\{\lambda _{{G}^{3}_{1}}(x), \lambda _{{G}^{2}_{2}}(x) \},\\ \mathrm{max}~\{\lambda _{{G}^{3}_{1}}(x), \lambda _{{G}^{3}_{2}}(x) \},~\mathrm{max}~\{\lambda _{{G}^{3}_{1}}(x), \lambda _{{G}^{4}_{2}}(x) \} \end{array} \right. \\ \\ \nu _{\widetilde{G_{i}}}(x)=&{} \left\{ \begin{array}{l} \mathrm{max}~\{\nu _{{G}^{1}_{1}}(x), \nu _{{G}^{1}_{2}}(x) \},~\mathrm{max}~\{\nu _{{G}^{1}_{1}}(x), \nu _{{G}^{2}_{2}}(x) \},\\ \mathrm{max}~\{\nu _{{G}^{1}_{1}}(x), \nu _{{G}^{3}_{2}}(x) \},~\mathrm{max}~\{\nu _{{G}^{1}_{1}}(x), \nu _{{G}^{4}_{2}}(x) \},\\ \mathrm{max}~\{\nu _{{G}^{2}_{1}}(x), \nu _{{G}^{1}_{2}}(x) \}, ~\mathrm{max}~\{\nu _{{G}^{2}_{1}}(x), \nu _{{G}^{2}_{2}}(x) \},\\ \mathrm{max}~\{\nu _{{G}^{2}_{1}}(x), \nu _{{G}^{3}_{2}}(x) \},~\mathrm{max}~\{\nu _{{G}^{2}_{1}}(x), \nu _{{G}^{4}_{2}}(x) \},\\ \mathrm{max}~\{\nu _{{G}^{3}_{1}}(x), \nu _{{G}^{1}_{2}}(x) \},~\mathrm{max}~\{\nu _{{G}^{3}_{1}}(x), \nu _{{G}^{2}_{2}}(x) \},\\ \mathrm{max}~\{\nu _{{G}^{3}_{1}}(x), \nu _{{G}^{3}_{2}}(x) \},~\mathrm{max}~\{\nu _{{G}^{3}_{1}}(x), \nu _{{G}^{4}_{2}}(x) \} \end{array} \right. \end{array} \right. \end{aligned}$$
(4.70)

for each \(x \in X\). Thus, our aim is to maximize the truth hesitant membership function \(\mu _{\widetilde{G_{i}}}(x)\) and to minimize the indeterminacy \(\lambda _{\widetilde{G_{i}}}(x)\) and falsity \(\nu _{\widetilde{G_{i}}}(x)\) hesitant membership functions of \(h_{D^{N}_{h}}(x)\). The obtained NHFPOSs are summarized in Table 9. The optimistic and pessimistic NHFPOSs are depicted in Table 10 and compared with the HFPOSs of Rouhbakhsh et al. (2020).

Table 9 Example 3: Optimal solutions of NHFMOPP using optimization technique-I

With the aid of Remark 4, the pessimistic NHFPOS can be determined by solving the following problem (4.71):

$$\begin{aligned}&Optimize~H_{2}(x)\nonumber \\&\quad = \left\{ \!\! \begin{array}{@{}ll} &{}\mathrm{max}~\mathrm{min}~ \{\mu _{{G}^{1}_{1}}(x), \mu _{{G}^{2}_{1}}(x),\mu _{{G}^{3}_{1}}(x), \mu _{{G}^{1}_{2}}(x), \mu _{{G}^{2}_{2}}(x), \mu _{{G}^{3}_{2}}(x),\\ &{}\mu _{{G}^{4}_{2}}(x) \} \\ &{}\mathrm{min}~\mathrm{max}~ \{\lambda _{{G}^{1}_{1}}(x), \lambda _{{G}^{2}_{1}}(x),\lambda _{{G}^{3}_{1}}(x), \lambda _{{G}^{1}_{2}}(x), \lambda _{{G}^{2}_{2}}(x),\\ &{}\lambda _{{G}^{3}_{2}}(x), \lambda _{{G}^{4}_{2}}(x) \} \\ &{}\mathrm{min}~\mathrm{max}~ \{\nu _{{G}^{1}_{1}}(x), \nu _{{G}^{2}_{1}}(x),\nu _{{G}^{3}_{1}}(x), \nu _{{G}^{1}_{2}}(x), \nu _{{G}^{2}_{2}}(x), \nu _{{G}^{3}_{2}}(x),\\ &{}\nu _{{G}^{4}_{2}}(x) \} \end{array} \right. \nonumber \\&\quad s.t. ~~x \in X. \end{aligned}$$
(4.71)

The optimal solution of problem (4.71) is obtained as \(x^{*}=(0,~4.3) \) with \(\phi ^{*}=0.21\) and objective function values \(O_{1}=-8.60,~O_{2}=8.60\); this is the pessimistic NHFPOS, as depicted in Table 10.

Table 10 Example 3: Optimistic and pessimistic NHFPOSs of NHFMOPP

Furthermore, suppose that \(O_{1}\) is more important than \(O_{2}\), with \(w_{1}=0.75\) and \(w_{2}=0.25\). Implementing optimization technique-II then yields the following problem (4.72):

$$\begin{aligned} \begin{array}{ll} \mathrm{max}~~&{}0.75 \left( \frac{\mu _{{G}^{1}_{1}}(x)+\mu _{{G}^{2}_{1}}(x)+\mu _{{G}^{3}_{1}}(x) - \lambda _{{G}^{1}_{1}}(x) - \lambda _{{G}^{2}_{1}}(x)- \lambda _{{G}^{3}_{1}}(x) - \nu _{{G}^{1}_{1}}(x) - \nu _{{G}^{2}_{1}}(x) - \nu _{{G}^{3}_{1}}(x) }{3} \right) \\ &{} + 0.25 \left( \frac{ \mu _{{G}^{1}_{2}}(x) + \mu _{{G}^{2}_{2}}(x) + \mu _{{G}^{3}_{2}}(x)+ \mu _{{G}^{4}_{2}}(x) - \lambda _{{G}^{1}_{2}}(x)- \lambda _{{G}^{2}_{2}}(x)- \lambda _{{G}^{3}_{2}}(x)- \lambda _{{G}^{4}_{2}}(x) - \nu _{{G}^{1}_{2}}(x)- \nu _{{G}^{2}_{2}}(x)- \nu _{{G}^{3}_{2}}(x) - \nu _{{G}^{4}_{2}}(x)}{4} \right) \\ s.t. ~~&{}x \in X. \end{array} \end{aligned}$$
(4.72)

On solving problem (4.72), we obtain the optimal solution \(x^{*}=(0,~4.2) \) and \(\phi ^{*}=0.2\), with objective function values \(O_{1}=-8.40,~O_{2}=8.40\). According to Theorem 4, \(< x^{*},~H_{p}(x^{*})~|~ x^{*} \in X>\) is an NHFPOS.

4.4 Computational steps and discussion

This paper has investigated two different optimization techniques for MOPPs under the neutrosophic hesitant fuzzy environment. The robustness of the proposed techniques is also demonstrated by performing Pareto optimality tests. The stepwise solution algorithm for optimization technique-I is presented as follows:

1. Elicit the different membership functions \( \mu _{{\widetilde{G^{k_{i}}_{i}}}} (x), \lambda _{{\widetilde{G^{k_{i}}_{i}}}} (x)\) and \( \nu _{{\widetilde{G^{k_{i}}_{i}}}} (x)\) for all \(k_{i}=1,2, \ldots , l_{i}\) DMs and develop \(\widetilde{G_{i}}= \{ x,~\mu _{{\widetilde{G_{i}}}}(x),~\lambda _{\widetilde{G_{i}}}(x),~\nu _{\widetilde{G_{i}}}(x)~|~ x \in X\}\) such that \(\mu _{\widetilde{G_{i}}}(x)=\{\mu _{{G}^{1}_{i}}(x), \mu _{{G}^{2}_{i}}(x), \ldots , \mu _{{G}^{l_{i}}_{i}}(x) \} \), \(\lambda _{\widetilde{G_{i}}}(x)=\{\lambda _{{G}^{1}_{i}}(x), \lambda _{{G}^{2}_{i}}(x), \ldots , \lambda _{{G}^{l_{i}}_{i}}(x) \} \) and \( \nu _{\widetilde{G_{i}}}(x)=\{\nu _{{G}^{1}_{i}}(x), \nu _{{G}^{2}_{i}}(x), \ldots , \nu _{{G}^{l_{i}}_{i}}(x) \}\) as the neutrosophic hesitant fuzzy goals for objective function \(O_{i}\) for all \(i=1,2, \ldots , p \).

2. For every \(r=1,2, \ldots , \tau \), where \(\tau = l_{1}l_{2} \ldots l_{p}\), select one \(\theta _{ir} \in \{1,2, \ldots , l_{i}\} \) for each \(i=1,2, \ldots , p \) and construct the problem (3.16).

3. After solving the rth model of problem (3.16), determine the maximal degree of aspiration \(\phi ^{*r}\) with the optimal solution \(x^{*r}\), and elicit \(H_{p} (x^{*r}) = \{ \mu _{{\widetilde{G}}^{k_{i}}_{i}}(x^{*r}),~\lambda _{{\widetilde{G}}^{k_{i}}_{i}}(x^{*r}),~\nu _{{\widetilde{G}}^{k_{j}}_{j}}(x^{*r}) ~|~~i=1,2, \ldots , p \}\) with \(k_{j}=1,2, \ldots , l_{j}\).

4. Suggest \( < \phi ^{*m}, x^{*m}, H_{p} (x^{*m})>\) as the pessimistic NHFPOS, where \(\phi ^{*m}= \mathrm{min}~\{\phi ^{*1}, \phi ^{*2}, \ldots , \phi ^{*\tau } \} \), and \( < \phi ^{*M}, x^{*M}, H_{p} (x^{*M})>\) as the optimistic NHFPOS, where \(\phi ^{*M}= \mathrm{max}~\{\phi ^{*1}, \phi ^{*2}, \ldots , \phi ^{*\tau } \} \). A structural sketch of this enumeration is given after the list.
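The enumeration at the heart of technique-I can be summarized by the following structural sketch; the names technique_one and solve_model are hypothetical placeholders, with solve_model standing for an external routine that builds and solves the crisp model (3.16) for a chosen combination of expert goals:

```python
from itertools import product

def technique_one(goal_counts, solve_model):
    """
    Structural sketch of optimization technique-I: enumerate the tau = l_1 * l_2 * ... * l_p
    combinations of expert goals (l_i goals per objective) and solve one crisp model (3.16)
    per combination. `solve_model` is a placeholder for an external routine that returns
    (phi, x) for the chosen selection of goal indices.
    """
    results = []
    for selection in product(*[range(l) for l in goal_counts]):
        phi, x = solve_model(selection)
        results.append((phi, x, selection))
    pessimistic = min(results, key=lambda r: r[0])
    optimistic = max(results, key=lambda r: r[0])
    return results, pessimistic, optimistic

# Dummy illustration for Example 1 (two goals on O1, three on O2) with a fake solver.
results, pess, opt = technique_one((2, 3), solve_model=lambda sel: (sum(sel) / 10.0, None))
print(len(results), "subproblems; pessimistic phi =", pess[0], "optimistic phi =", opt[0])
```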

The stepwise solution algorithm for optimization technique-II is summarized as follows:

1. Follow the first step of optimization technique-I.

2. Evaluate the arithmetic-mean score function \( ~ \chi (\mu _{{\widetilde{G_{i}}}}(x))\), \(\chi (\lambda _{{\widetilde{G_{i}}}}(x))\) and \(\chi (\nu _{{\widetilde{G_{i}}}}(x))\) for each \(\mu _{{\widetilde{G_{i}}}}(x),~\lambda _{{\widetilde{G_{i}}}}(x)\) and \(\nu _{{\widetilde{G_{i}}}}(x) \), respectively.

3. Assign the positive weights \(w_{i}\) to the ith objective function \(O_{i}(x)\) according to the decision-makers' preferences. Construct problem (3.26) and solve it to obtain the optimal solution \(x^{*}\); define \( < x^{*}, H_{p} (x^{*})>\) as an NHFPOS for the NHFMOPP. A small sketch of the weighted score used here follows the list.
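The weighted arithmetic-mean score maximized in technique-II, cf. (4.21) and (4.45), can be written compactly as follows; this is a minimal sketch, and the function name and the illustrative degrees are assumptions:

```python
def technique_two_objective(weights, truths, indets, falsities):
    """
    Weighted arithmetic-mean score maximized in technique-II, cf. (4.21) and (4.45):
    for each objective i, average (mu - lambda - nu) over its l_i experts, then weight by w_i.
    """
    total = 0.0
    for w, mu, lam, nu in zip(weights, truths, indets, falsities):
        total += w * sum(m - a - n for m, a, n in zip(mu, lam, nu)) / len(mu)
    return total

# Illustration with the Example 1 weights (0.65, 0.35) and assumed membership degrees at some x.
score = technique_two_objective(
    weights=[0.65, 0.35],
    truths=[[0.70, 0.55], [0.60, 0.80, 0.65]],
    indets=[[0.20, 0.30], [0.25, 0.10, 0.35]],
    falsities=[[0.15, 0.25], [0.30, 0.20, 0.10]],
)
print(score)
```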

To analyze the computational complexity, Table 11 presents an overview of the dimensions involved in the NHFMOPP of problem (3.12), including the n variables, m constraints, and p objectives, for the first and second optimization techniques and other methods.
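As a simple illustration of the difference in workload, technique-I solves \(\tau = l_{1} l_{2} \cdots l_{p}\) crisp subproblems, whereas technique-II solves a single aggregated problem; for Example 3 with three experts on \(O_{1}\) and four on \(O_{2}\) this gives:

```python
from math import prod

# Crisp subproblem counts: technique-I enumerates all expert-goal combinations,
# technique-II aggregates them into a single weighted problem.
l = (3, 4)  # Example 3: three experts on O1 and four on O2.
print("technique-I:", prod(l), "subproblems; technique-II: 1 subproblem")
```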

Table 11 Computational complexity comparison with other approaches

In all three examples discussed above, we obtained the truth, indeterminacy, and falsity membership functions using the marginal evaluations of each objective function. We then took the opinions of different experts/managers regarding the satisfaction values corresponding to each membership function, based on their previous knowledge and experience.

A natural question is how the proposed optimization techniques are capable of yielding better solutions than other existing approaches. In general, incorporating the neutral thoughts and opinions of several experts or DMs brings the model closer to reality and yields more analytical and more reliable results. We do not claim that seeking the opinions of various experts together with neutral thoughts (indeterminacy degrees) necessarily produces better quantitative solutions in the neutrosophic hesitant fuzzy environment; the outcome depends on the indeterminacy degrees arising from the differing opinions of the experts. Rather, the proposed optimization techniques unify and capture the degrees of neutrality and hesitancy of various experts or DMs simultaneously when solving MOPPs.

In optimization technique-I, the alternative solutions reflect the DMs' specific viewpoints. Hence, each solution, such as the optimistic NHFPOS or the pessimistic NHFPOS, has its own importance and particular use for the DMs. Furthermore, each NHFPOS in optimization technique-I is interpreted through the experts' satisfaction degrees with the objective function values; consequently, one can choose an NHFPOS from among the specified solutions. In short, this technique determines the final decision from a set of solutions. If a DM is not interested in a whole set of neutrosophic hesitant fuzzy solutions, a single solution, such as the optimistic or pessimistic NHFPOS, can be selected.

In optimization technique-II, one deals with a single programming problem by assigning weights to the objective functions when their preferences differ. This technique is therefore suited to real-life applications of MOPPs in the neutrosophic hesitant fuzzy environment where the priorities of the objective functions play an essential role and are conflicting in nature. Additionally, because the experts' or DMs' opinions are aggregated under neutral thoughts, only a single programming problem needs to be solved, although some information is lost in the aggregation.

5 Conclusions

In this study, an effective modeling and optimization framework for MOPPs has been presented under neutrosophic hesitant fuzzy uncertainty. The paper used neutrosophic hesitant fuzzy sets to model and optimize multiobjective programming problems with neutrosophic hesitant fuzzy objectives, called neutrosophic hesitant fuzzy multiobjective programming problems. A novel solution concept, namely the neutrosophic hesitant fuzzy Pareto optimal solution, was then developed to solve NHFMOPPs. Neutrality/indeterminacy is the region of ignorance about a proposition's value, lying between the truth and falsity degrees; incorporating the indeterminacy factor makes the decision-making process more realistic. Two different techniques have been proposed under the neutrosophic hesitant fuzzy environment, which accommodates independent indeterminacy/neutral thoughts alongside hesitation in decision-making processes. The strength of the proposed techniques lies in the fact that they provide a set of solutions based on the satisfaction levels of various experts in the neutrosophic hesitant fuzzy environment. Different NHFPOSs can be obtained using the two proposed optimization techniques. These alternative solutions respond to the DMs' predetermined points of view, so each solution has its own importance for the DM(s), who may select a suitable optimal solution according to the situation at hand.

The proposed study has some limitations that can be addressed in future research. Various metaheuristic approaches may be applied to solve the NHFMOPP as a future research direction. The discussed NHFMOPPs can also be applied to various real-life problems such as transportation, supplier selection, inventory control, and portfolio optimization. Therefore, the proposed techniques would be useful in situations where neutral thoughts and hesitation values exist simultaneously in the decision-making process.