1 Introduction

In a conventional multi-objective decision-making problem, the coefficients in the objective functions and constraints are usually fixed real numbers. However, in most real-life situations these parameters are not exactly known, because the relevant data are inconsistent, scarce, or difficult to obtain or estimate. In an optimization model, such uncertainties are usually modelled using probability theory or possibility theory. However, in some cases it is very difficult to specify the probability/possibility distributions of these parameters. To overcome these difficulties, the uncertain parameters may be assumed to be closed intervals.

Existing literature shows that multi-objective linear programming problems with interval coefficients have been studied by Bitran (1980), Inuiguchi and Sakawa (1996), Urli and Nadeau (1992), Oliveira and Antunes (2007), Hladik (2008), Oliveira and Antunes (2009), Rivaz and Yaghoobi (2013) and Roy et al. (2017). Using deterministic multi-objective programming, Chanas and Kuchta (1996) discussed generalized perceptions of the solution methodology of the linear programming problem with interval parameters in the objective function, based on preference relations between intervals; in that work, the preference relations between intervals were generalized by considering two parameters \(t_0, t_1 \in [0, 1]\). Nonlinear multi-objective interval optimization problems (\(\mathbf{MIOP}\)) have been studied by Wu (2009), Soares et al. (2009) and Gong et al. (2013). Wu (2009) examined the conditions for the existence of solutions of an \(\mathbf{MIOP}\) whose objective functions are interval-valued functions and whose constraints are all real-valued functions. Gong et al. (2013) established an interactive evolutionary algorithm to obtain a set of non-dominated solutions. Rivaz et al. (2016) established a methodology for the mini-max regret solution to multi-objective interval linear programming problems, wherein only the coefficients of the objective functions are intervals. Bhurjee and Panda (2014, 2016, 2019) discussed the conditions for the existence of the solution of the \(\mathbf{MIOP}\) model whose objective functions, as well as constraints, are nonlinear interval-valued functions. Li et al. (2019) considered the multiple-objective interval linear programming problem and solved it with an admissible order. Sun et al. (2019) proposed an evolutionary algorithm, known as the memetic algorithm, to obtain efficient solutions of \(\mathbf{MIOP}\) with good convergence and even distribution. Recently, the necessary and sufficient Karush–Kuhn–Tucker optimality conditions for some types of optimal solutions of convex semi-infinite programming with multiple interval-valued objective functions were investigated by Tung (2020), who also discussed the duality theory for the problem. One may observe in the above-mentioned developments that the decision variables of these models are real variables and the coefficients are intervals. However, Kuchta (2011) introduced a solution technique to determine a temporary interval solution of the deterministic multi-objective linear programming problem (i.e. the parameters of the problem are not intervals). In this technique, the author prepares an initial solution in which the decision variables take interval values so as to cover an entire range of solutions, even though the final solution is a crisp value. It is also seen that in some real-world problems, e.g. portfolio selection (Kumar et al. 2016, 2018), wastewater management, project scheduling, etc., the decision variables may vary in intervals. One can also see that the solution of the interval equation \([a, b]x = [c, d]\) is not necessarily a real number. For example, if \([2,3]x = [1, 4]\) then \(x = [\frac{1}{3}, 2]\). In fact, the solution of the interval equation \([a, b]x = [c, d]\) is the set \(\Big \{t\in \mathbb {R} \mid \alpha t = \beta , a\le \alpha \le b, c\le \beta \le d\Big \}\), which is an interval [see Huang and Moore (1993)].
Similarly, the solution of the inequality \([a,b]x\preceq [c,d]\) is the set \(\Big \{t\in \mathbb {R} \mid \alpha t \le \beta ; \alpha \in [a,b], \beta \in [c,d]\Big \}\). That is, the solution of an interval equation or inequality is an interval rather than a single point. So the decision variables of an interval optimization problem are not necessarily fixed real numbers, and it may be necessary to consider interval decision variables instead of real decision variables in an interval optimization problem. This has motivated us to study an enhanced version of the multi-objective optimization problem, in which uncertainties in interval form are associated with the coefficients as well as the decision variables. A general form of this type of problem is stated in Sect. 3; henceforth this problem will be referred to as \(\mathbf{MEIOP}\) throughout the article. Two types of uncertainties are associated with this problem: one due to the presence of intervals in the model and another due to the conflicting nature of the objective functions. To address these two important aspects, we have introduced the concept of a \(t\omega \)-efficient solution and studied the existence of such a solution of \(\mathbf{MEIOP}\).

This paper is organized as follows. First, Sect. 2 reviews some preliminary knowledge about interval analysis. In Sect. 3, the multi-objective enhanced interval optimization problem is proposed; then the \(t\omega \)-efficient solution is defined and the existence of such a solution of \(\mathbf{MEIOP}\) is discussed. At the end of the section, we present numerical examples to demonstrate the proposed methodology. Finally, concluding remarks and future research directions are provided in Sect. 4.

2 Notations and preliminaries

  • For two real vectors \(x=(x_1,x_2,\ldots ,x_n)^T\) and \(y=(y_1,y_2,\ldots ,y_n)^T\) in \(\mathbb {R}^n\), we denote

    $$\begin{aligned} x\geqq _v y \Leftrightarrow x_i\ge y_i;~~x\leqq _v y \Leftrightarrow x_i\le y_i;~~x>_v y \Leftrightarrow x_i> y_i;~~ x<_v y \Leftrightarrow x_i< y_i,~~i=1,2,\ldots ,n. \end{aligned}$$
  • A closed interval \(\mathbf {A}\) is denoted by a bold capital letter and represented by \([a^L,a^R]\) with \(a^L\le a^R\), where \(a^L\) and \(a^R\) are the lower and upper bounds of \(\mathbf {A}.\) In case \(a^L= a^R\), \(\mathbf {A}\) is called a degenerate interval. The set of all closed intervals on \(\mathbb {R}\) is denoted by \(\mathbb {I(R)}\). A closed interval is said to be non-negative if \(a^L\ge 0\), and the set of all non-negative closed intervals in \(\mathbb {R}\) is represented by \(\mathbb {I}(\mathbb {R}_+)\). The set of all negative closed intervals \((a^R<0)\) in \(\mathbb {R}\) is denoted by \(\mathbb {I}(\mathbb {R}_-)\).

  • An interval can be expressed in terms of a parameter in several ways. Any point in \(\mathbf {A}\) may be expressed as \(a_t\), where \(a_t=a^L+t(a^R-a^L),t\in [0,1].\) Throughout this paper, we consider a specific parametric representation of an interval as \(\mathbf {A}=[a^L,a^R]=\{a_t\mid t\in [0,1]\}.\)

  • The following notations are used throughout the paper: \((\mathbb {I(R)})^k\) = the product space \(\underbrace{\mathbb {I(R)}\times \mathbb {I(R)}\times \ldots \times \mathbb {I(R)}}_{k ~\hbox {times}}\); a \(k\)-dimensional interval vector is represented as \(\mathbf {C}_v^k\in (\mathbb {I(R)})^k\), \(\mathbf {C}_v^k=(\mathbf {C}_1,\mathbf {C}_2,\ldots ,\mathbf {C}_k)^T\), \(\mathbf {C}_j=[c^L_{j},c^R_{j}],~j\in \varLambda _k;~\varLambda _k=\{1,2,\ldots ,k\};\) where the symbol \(v\) indicates a vector.

  • \(\mathbf {C}_v^k\in (\mathbb {I(R)})^k\) can be identified with the set of real vectors

    $$\begin{aligned} \Big \{c_t~\big |~c_t=(c_{t^1}^1,c_{t^2}^2,\ldots ,c_{t^k}^k)^T,~ c_{t^j}^j=c_j^L+t^j(c_j^R-c_j^L), ~t=(t^1,t^2,\ldots ,t^k)^T,~0\le t^j \le 1,~ j\in \varLambda _k\Big \}. \end{aligned}$$
    (1)

The binary operation \(\circledast \) between two closed intervals \(\mathbf {A}=[a^L,a^R]\) and \( \mathbf {B}=[b^L,b^R]\) in \(\mathbb {I(R)}\) is defined as \(\mathbf {A}\circledast \mathbf {B}\) \(=\) \(\{a *b: a\in \mathbf {A}, b \in \mathbf {B}\}\), where \(*\in \{+,-,\cdot ,/\}\). In the case of division, \(\mathbf {A}\oslash \mathbf {B}\), it is assumed that \(0 \notin \mathbf {B}\). In the classical form, the algebraic operations on intervals are defined in terms of either the lower and upper bounds or the mean and spread of the intervals. These interval operations can also be performed with respect to the parameters as follows.

$$\begin{aligned} \mathbf {A}\circledast \mathbf {B}=\Big \{a_{t^1}*b_{t^2}| ~t^1,~t^2 \in [0,1]\Big \}\equiv \Big [\min _{t^1,~t^2}(a_{t^1}*b_{t^2}),~\max _{t^1,~t^2}(a_{t^1}*b_{t^2})\Big ]. \end{aligned}$$

Hence, we have

$$\begin{aligned} \mathbf {A}\oplus \mathbf {B}=&\{a_{t^1}+ b_{t^2} |~ t^1,~t^2\in [0,1]\} \equiv [a^L+b^L,~a^R+b^R],\\ \mathbf {A} \ominus \mathbf {B}=&\{a_{t^1}- b_{t^2}|~t^1,~t^2\in [0,1]\} \equiv [a^L-b^R,~a^R-b^L],\\ \mathbf {A}\odot \mathbf {B}=&\{a_{t^1} \cdot b_{t^2} | ~t^1,~t^2\in [0,1]\} \equiv \Big [\min _{t^1,~t^2}(a_{t^1}\cdot b_{t^2}),~\max _{t^1,~t^2}(a_{t^1} \cdot b_{t^2})\Big ],\\ \mathbf {A}\oslash \mathbf {B}=&\Big \{a_{t^1}/ b_{t^2} |~ t^1,~t^2\in [0,1],b_{t^2} \ne 0\Big \}\equiv \Big [\min _{t^1,~t^2}(a_{t^1}/ b_{t^2}),~\max _{t^1,~t^2}(a_{t^1}/ b_{t^2})\Big ],\\ k \mathbf {A}=&\{k a_t~|~t\in [0,1]\}\equiv [\min _t(k a_t),~\max _t(k a_t)],~k\in \mathbb {R}. \end{aligned}$$
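These parametric operations can be checked numerically. The following is a minimal sketch (illustrative only, not part of the original formulation): it approximates the bounds of \(\mathbf {A}\circledast \mathbf {B}\) by evaluating \(a_{t^1}*b_{t^2}\) on a grid of \((t^1,t^2)\in [0,1]^2\).

```python
# A minimal sketch (illustration only): parametric interval arithmetic, where an
# interval A = [aL, aR] is sampled through a_t = aL + t*(aR - aL), t in [0, 1].
import numpy as np

def param_op(A, B, op, num=201):
    """Approximate A (op) B by the min/max of a_{t1} op b_{t2} over a (t1, t2) grid."""
    t1, t2 = np.meshgrid(np.linspace(0.0, 1.0, num), np.linspace(0.0, 1.0, num))
    a_t = A[0] + t1*(A[1] - A[0])
    b_t = B[0] + t2*(B[1] - B[0])
    vals = op(a_t, b_t)
    return float(vals.min()), float(vals.max())

A, B = (3.0, 5.0), (-1.0, 2.0)
print(param_op(A, B, np.add))       # (2.0, 7.0)  = [aL + bL, aR + bR]
print(param_op(A, B, np.subtract))  # (1.0, 6.0)  = [aL - bR, aR - bL]
print(param_op(A, B, np.multiply))  # (-5.0, 10.0)
```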

The set of closed intervals \(\mathbb {I(R)}\) is not a totally ordered set. Several partial order relations on \(\mathbb {I(R)}\) exist in the literature [see Moore (1966), Ishibuchi and Tanaka (1990)]. Here we consider a partial ordering due to Bhurjee and Panda (2012), defined as follows.

Definition 1

(Bhurjee and Panda 2012)   For \(\mathbf {A},\mathbf {B}\in \mathbb {I(R)}\), \(a_t\in \mathbf {A}\) and \(b_t\in \mathbf {B}\)

$$\begin{aligned}&\mathbf {A}\preceq \mathbf {B} ~\hbox {if and only if} ~a_t\le b_t~\forall ~t\in [0,1],\nonumber \\&\mathbf {A}\prec \mathbf {B} ~\hbox {if and only if} ~a_t < b_t~\hbox {for at least one} ~t\in [0,1],\nonumber \\&\mathbf {A}\ne \mathbf {B} ~\hbox {if and only if} ~a_t\ne b_t~\hbox {for at least one}~ t\in [0,1]. \end{aligned}$$
(2)

Note that \(\mathbf {A}\preceq \mathbf {B}\) is not the same as \(\mathbf {B}\ominus \mathbf {A}\succeq \mathbf {0}.\) For example, \([3,5] \preceq [4,9]\), but \([4,9] \ominus [3,5]=[-1,6]\nsucceq \mathbf {0}.\)
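The partial ordering (2) and the remark above can also be verified numerically; the sketch below (illustrative only) checks that \([3,5]\preceq [4,9]\) while \([4,9]\ominus [3,5]=[-1,6]\) is not \(\succeq \mathbf {0}\).

```python
# A minimal sketch: checking A ⪯ B of Definition 1 by comparing a_t and b_t on a
# grid of t in [0, 1] (for affine a_t, b_t checking the endpoints would suffice).
import numpy as np

def preceq(A, B, num=1001):
    t = np.linspace(0.0, 1.0, num)
    a_t = A[0] + t*(A[1] - A[0])
    b_t = B[0] + t*(B[1] - B[0])
    return bool(np.all(a_t <= b_t))

A, B = (3.0, 5.0), (4.0, 9.0)
print(preceq(A, B))                  # True:  [3,5] ⪯ [4,9]
BmA = (B[0] - A[1], B[1] - A[0])     # B ⊖ A = [-1, 6]
print(preceq((0.0, 0.0), BmA))       # False: [-1,6] is not ⪰ [0,0]
```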

An interval-valued function can be defined as the extension of a real-valued function with one or more interval arguments onto an interval, see Moore (1966) and Hansen (2004). Ishibuchi and Tanaka (1990) considered the interval-valued function \(\mathbf {F}:\mathbb {R}^n\rightarrow \mathbb {I(R)}\) as \(\mathbf {F}(x)=[F^L(x),F^R(x)],~\hbox {where}~F^L,F^R:\mathbb {R}^n\rightarrow \mathbb {R},~F^L(x)\le F^R(x)~\forall x\in \mathbb {R}^n.\) This can be further extended from crisp real variables to interval variables, that is, \(\mathbf{F}:\mathbb {(I(R))}^n\rightarrow \mathbb {I(R)}\) in place of \(\mathbf{F}:\mathbb {R}^n\rightarrow \mathbb {I(R)}\). Denoting an \(n\)-component interval vector as \(\mathbf {X}_v^n=([x^L_1, x^R_1], [x^L_2, x^R_2],\ldots ,[x^L_n, x^R_n])^T \in \mathbb {(I(R))}^n\), the interval-valued function whose unknown variables are closed intervals is defined as follows.

Definition 2

For given \(\mathbf {C}_v^k\in \mathbb {(I(R))}^k\), an interval valued function \(\mathbf {F}_{\mathbf {C}_v^k}:(\mathbb {I(R)})^n\rightarrow \mathbb {I(R)}\) is represented by

$$\begin{aligned} \mathbf {F}_{\mathbf {C}_v^k}(\mathbf {X}_v^n)=\Big \{f_{c_t}(x_\omega )~\Big |~f_{c_t}: \mathbb {R}^n\rightarrow \mathbb {R}, ~c_t\in \mathbf {C}_v^k,~x_\omega \in \mathbf {X}_v^n \Big \}, \end{aligned}$$

where \(x_\omega = \big (x_{\omega _1}, x_{\omega _2}, \ldots , x_{\omega _n}\big )^T,\) \(x_{\omega _j}= x^L_j+\omega _j(x^R_j-x^L_j)\) for all \(j=1,2,\ldots ,n\). Denote \(x^L=(x^L_1,x^L_2,\ldots ,x^L_n)^T\), \(x^R=(x^R_1,x^R_2,\ldots ,x^R_n)^T\) and \(\omega =(\omega _1,\omega _2,\ldots ,\omega _n)^T\in [0,1]^n\), so that \(x_\omega \) is obtained from \(x^L\) and \(x^R\) componentwise. Since \(f_{c_t}(x_\omega )\) is assumed to be continuous in \((t,\omega )\in [0,1]^k\times [0,1]^n\) for every fixed \((x^L,x^R)\in \mathbb {R}^n \times \mathbb {R}^n\), \(\underset{t,\omega }{\min }f_{c_t}(x_\omega )\) and \(\underset{t,\omega }{\max }f_{c_t}(x_\omega )\) exist and

$$\begin{aligned} \mathbf {F}_{\mathbf {C}_v^k}(\mathbf {X}_v^n)=\Big [\min _{t,\omega }f_{c_t} (x_\omega ),~\max _{t,\omega }f_{c_t}(x_\omega )\Big ]. \end{aligned}$$

Example 1

For \(\mathbf {C}_v^3=([1,2],[-1,1],[0,3])^T\in (\mathbb {I(R)})^3\), the interval valued function \(\mathbf {F}_{\mathbf {C}_v^3}:(\mathbb {I}(\mathbb {R}_+))^2\rightarrow \mathbb {I(R)}\) is

$$\begin{aligned} \mathbf {F}_{\mathbf {C}_v^3}(\mathbf {X}_1,\mathbf {X}_2)=&[1,2]\otimes \mathbf {X}_1^2\oplus [-1,1]\otimes e^{[0,3]\otimes \mathbf {X}_2}\\ =&\left\{ f_{c_t}(x_{\omega _1}, x_{\omega _2})~\big |~c_t\in \mathbf {C}_v^3,~x_{\omega _1} \in \mathbf {X}_1,~x_{\omega _2}\in \mathbf {X}_2\right\} , \end{aligned}$$

where \(f_{c_t}(x_{\omega _1}, x_{\omega _2})=(1+t_1)(x_{\omega _1})^2+(-1+2t_2)e^{(3t_3)(x_{\omega _2})},\) \(t=(t_1,t_2,t_3)^T\in [0,1]^3,\) and \(\mathbf {X}_1=[x_1^L,x_1^R],\) \(\mathbf {X}_2=[x_2^L,x_2^R]\), \(x_{\omega _1}=x_1^L+\omega _1(x_1^R-x_1^L),\) \(x_{\omega _2}=x_2^L+\omega _2(x_2^R-x_2^L),\) \(\omega =(\omega _1,\omega _2)^T\in [0,1]^2\). Accordingly,

$$\begin{aligned} \min _{t,\omega }f_{c_t}(x_{\omega _1},x_{\omega _2})=&\min _{t,\omega } \{(1+t_1)(x_{\omega _1})^2+(-1+2t_2)e^{(3t_3)(x_{\omega _2})}\} =(x_1^L)^2-e^{3x_2^R},\\ \max _{t,\omega }f_{c_t}(x_{\omega _1},x_{\omega _2})=&\max _{t,\omega } \{(1+t_1)(x_{\omega _1})^2+(-1+2t_2) e^{(3t_3)(x_{\omega _2})}\}=2(x_1^R)^2+e^{3x_2^R}. \end{aligned}$$

Hence

$$\begin{aligned} \mathbf {F}_{\mathbf {C}_v^3}(\mathbf {X}_1,\mathbf {X}_2)= [\min _{t,\omega }f_{c_t}(x_{\omega _1},x_{\omega _2}),~\max _{t,\omega }f_{c_t} (x_{\omega _1},x_{\omega _2})]=[(x_1^L)^2-e^{3x_2^R},~2(x_1^R)^2+e^{3x_2^R}]. \end{aligned}$$
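This parametric representation can be evaluated numerically. A minimal sketch (not the authors' code) for the concrete intervals \(\mathbf {X}_1=[1,2]\) and \(\mathbf {X}_2=[0,1]\): the bounds of \(\mathbf {F}_{\mathbf {C}_v^3}(\mathbf {X}_1,\mathbf {X}_2)\) are approximated by sampling \(f_{c_t}(x_{\omega _1},x_{\omega _2})\) over a grid of \((t,\omega )\), and the grid extremes agree with the closed-form bounds above.

```python
# A minimal sketch: approximating F_{C_v^3}(X1, X2) of Example 1 by sampling the
# parametric form over a grid of t in [0,1]^3 and w (omega) in [0,1]^2.
import numpy as np

def F_bounds(X1, X2, num=9):
    g = np.linspace(0.0, 1.0, num)
    t1, t2, t3, w1, w2 = np.meshgrid(g, g, g, g, g, indexing='ij')
    xw1 = X1[0] + w1*(X1[1] - X1[0])
    xw2 = X2[0] + w2*(X2[1] - X2[0])
    f = (1 + t1)*xw1**2 + (-1 + 2*t2)*np.exp(3*t3*xw2)
    return float(f.min()), float(f.max())

X1, X2 = (1.0, 2.0), (0.0, 1.0)
print(F_bounds(X1, X2))
# ~ (1 - e**3, 8 + e**3) = (-19.0855..., 28.0855...),
# i.e. [(x1^L)^2 - e^{3 x2^R}, 2 (x1^R)^2 + e^{3 x2^R}]
```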

The concept of convexity plays an important role in the existence of solutions of an optimization problem. Therefore, we define convexity for interval-valued functions as follows.

Definition 3

Interval convex set: A non-empty set \(\mathcal {D}\subseteq (\mathbb {I}(\mathbb {R}))^n\) is said to be convex if for every \( \mathbf{X}_v^n, \mathbf{Y}_v^n\in \mathcal {D}\) and \(0\le \lambda \le 1,\)

$$\begin{aligned} (\lambda \mathbf{X}_v^n\oplus (1-\lambda )\mathbf{Y}_v^n) \in \mathcal {D}. \end{aligned}$$

Example 2

Let \(\mathcal {E}\subseteq (\mathbb {I}(\mathbb {R}))^2\) be the non-empty set of all closed interval vectors each of whose components has unit width. For any \( \mathbf{X}_v^2, \mathbf{Y}_v^2\in \mathcal {E}\) and \(0\le \lambda \le 1,\) each component of \(\lambda \mathbf{X}_v^2\oplus (1-\lambda )\mathbf{Y}_v^2\) again has unit width, so \(\lambda \mathbf{X}_v^2\oplus (1-\lambda )\mathbf{Y}_v^2 \in \mathcal {E}.\) Hence, \(\mathcal {E}\) is a convex set.

Definition 4

Interval valued convex function: Suppose \(\mathcal {D} \subseteq (\mathbb {I}(\mathbb {R}))^n\) is a convex set. For given \(\mathbf {C}_v^k\in (\mathbb {I}(\mathbb {R}))^k\), the interval valued function \(\mathbf {F}_{\mathbf {C}_v^k}: \mathcal {D}\rightarrow \mathbb {I}(\mathbb {R})\) is said to be convex with respect to \(\preceq \) if for every \( \mathbf{X}_v^n, \mathbf{Y}_v^n\in \mathcal {D},\) and \(0\le \lambda \le 1,\)

$$\begin{aligned} \mathbf {F}_{\mathbf {C}_v^k}(\lambda \mathbf{X}_v^n\oplus (1-\lambda )\mathbf{Y}_v^n)\preceq \lambda \mathbf {F}_{\mathbf {C}_v^k}(\mathbf{X}_v^n)\oplus (1-\lambda )\mathbf {F}_{\mathbf {C}_v^k}(\mathbf{Y}_v^n). \end{aligned}$$

Remark 1

From (2) and the definition of an interval-valued convex function, one may observe that \(\mathbf {F}_{\mathbf {C}_v^k}\) is convex with respect to \(\preceq \) means \(f_{c_t}(\lambda x_{\omega }+(1-\lambda )y_{\omega })\le \lambda f_{c_t}(x_{\omega })+(1-\lambda )f_{c_t}( y_{\omega })\) for all \(t\in [0,1]^k\), \(x_\omega \in \mathbf{X}_v^n, y_\omega \in \mathbf{Y}_v^n\) and \(\omega \in [0,1]^n\). So we can conclude that \(\mathbf {F}_{\mathbf {C}_v^k}\) is convex with respect to \(\preceq \) if and only if \(f_{c_t}(x_\omega )\) is a convex function on \(\mathcal {D}\) for every \(t\in [0,1]^k\) and \(\omega \in [0,1]^n\).

Example 3

Consider an interval valued function \(\mathbf {F}_{\mathbf {C}_v^{3}}:\mathcal {E}\rightarrow \mathbb {I}(\mathbb {R}),\) \(\mathcal {E}\subseteq (\mathbb {I}(\mathbb {R_+}))^2\) as

$$\mathbf {F}_{\mathbf {C}_v^{3}} (\mathbf{X}_1,\mathbf{X}_2)=[1,2]\otimes \mathbf{X}_1^2\oplus [2,3]\otimes \mathbf{X}_2^2\oplus [1,2].$$

The parametric form of \(\mathbf {F}_{\mathbf {C}_v^{3}}\) is

$$f_{c_{t^1}}(x_{\omega _1},x_{\omega _2}) =(1+t^1_1){(x_{\omega _1})}^2+(2+t^1_2){(x_{\omega _2})}^2+(1+t^1_3),$$

where \(x_{\omega _j}=x_j^L+\omega _j (x_j^R-x^L_j),j=1,2,\) \(t^1=(t_1^1,t_2^1,t_3^1)^T\in [0,1]^3,~\omega _1,\omega _2\in [0,1]\). It can be observed that \(f_{c_{t^1}}(x_{\omega _1},x_{\omega _2})\) is a convex function for every \(t^1=(t_1^1,t_2^1,t_3^1)^T\in [0,1]^3\) and \(\omega _1,\omega _2\in [0,1]\). Hence, \(\mathbf {F}_{\mathbf {C}_v^{3}}\) is an interval-valued convex function on \(\mathcal {E}\) by Remark 1.

The following separation theorem is required to prove the existence of the solution of \(\mathbf{MEIOP}\).

Proposition 1

(Mangasarian 1969) Let f be an \(m\)-dimensional convex vector function on the convex set \(\varGamma \subset \mathbb {R}^n.\) Then either

(I) \(f(x)<_v0\) has a solution \(x\in \varGamma \) or

(II) \(p^Tf(x)\ge 0\) for all \(x\in \varGamma \) for some \(p\geqq _v 0,\) \(p\in \mathbb {R}^m\)

but never both.

3 Multi-objective enhanced interval optimization problem

Consider the following multi-objective enhanced interval optimization problem (henceforth referred to as \(\mathbf{MEIOP}\)):

$$\begin{aligned} \mathbf{(MEIOP)}~~~~~~~\min ~&~ \mathbf {F}(\mathbf {X}_v^n)\nonumber \\ \hbox {subject to}~&~\mathbf {G}_{\mathbf{D}_{v}^{m_p}}^p(\mathbf {X}_v^n)\preceq \mathbf {B}_p,~p\in \varLambda _q, \end{aligned}$$
(3)

where \(\mathbf {F}(\mathbf {X}_v^n)=\Big (\mathbf {F}^1_{\mathbf {C}_v^{k_1}}(\mathbf {X}_v^n), \mathbf {F}^2_{\mathbf {C}_v^{k_2}}(\mathbf {X}_v^n),\ldots ,\mathbf {F}^m_{\mathbf {C}_v^{k_m}} (\mathbf {X}_v^n)\Big )^T\), \(\mathbf {F}^i_{\mathbf {C}_v^{k_i}},\mathbf {G}^p_{\mathbf{D}_{v}^{m_p}}:(\mathbb {I(R)})^n\rightarrow \mathbb {I(R)}\), \(i \in \varLambda _m\), and the partial orderings (\(\preceq \)) in the constraints (3) are as defined in (2). Based on the parametric Definition 2 of an interval-valued function,

$$\begin{aligned} \mathbf {F}_{\mathbf {C}_v^{k_i}}^i(\mathbf {X}_v^n)=\Big \{f_{c_{t^i}^i}^i(x_{\omega }) ~|~f_{c_{t^i}^i}^i:\mathbb {R}^n\rightarrow \mathbb {R},~{c_{t^i}^i}\in \mathbf {C}_v^{k_i} , x_\omega \in \mathbf {X}_v^n\Big \}, \end{aligned}$$

and the constraints (3) can be expressed, using Definition 2 and the partial ordering (2), as follows: for \(\mathcal {O} \subseteq \mathbb {R}^n \),

$$\begin{aligned} \Big \{\mathbf {X}_v^n \in (\mathbb {I(R)})^n~\big |~ \mathbf {G}_{\mathbf {D}_{v}^{m_p}}^p (\mathbf {X}_v^n)\preceq \mathbf {B}_p \Big \} \equiv \Big \{(x^L, x^R)\in \mathcal {O} \times \mathcal {O}~\big |~g^p_{d_{t^p}^p} (x_\omega )\le b_{t^p}^p,~ \omega \in [0,1]^n\Big \},~\forall p, \end{aligned}$$

where \(g^p_{d_{t^p}^p}:\mathcal {O}\rightarrow \mathbb {R}, ~d_{t^p}^p\in \mathbf {D}_v^{m_p},~b_{t^p}^p\in \mathbf {B}_p.\) Throughout this section we write \(t=(t^1,t^2,\ldots ,t^m)^T\), \(t^i=(t^i_1,t^i_2,\ldots ,t^i_{k_i})^T,\) \(t^i_j\in [0,1]\), \(j \in \varLambda _{k_i}\), \(i \in \varLambda _m\), \(t^p\in [0,1]\) and \(\omega =(\omega _1, \omega _2,\ldots , \omega _n)^T\).

The feasible region of \(\mathbf{MEIOP}\) can be expressed as the set,

$$\begin{aligned} \mathcal {F}=&\Big \{\mathbf {X}_v^n \in (\mathbb {I(R)})^n~\big |~\mathbf {G}_ {\mathbf {D}_{v}^{m_p}}^p(\mathbf {X}_v^n)\preceq \mathbf {B}_p,~ p\in \varLambda _q \Big \}\\ \equiv&\Big \{(x^L,x^R)\in \mathcal {O} \times \mathcal {O}~\big |~\min _{t,\omega } \{g_{d_{t^p}^p}^p(x_\omega )\} \le b^L_p,~\max _{t,\omega }\{g_{d_{t^p}^p}^p(x_\omega )\} \le b^R_p,~p\in \varLambda _q \Big \}. \end{aligned}$$
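Membership of a candidate interval vector in \(\mathcal {F}\) can be tested directly from this parametric characterization. The sketch below (illustrative only) does so for a single constraint, namely the first constraint of Example 2 in Sect. 3.2, \([0.5,1.5]\otimes \mathbf{X}_1\oplus [1.5,2.5]\otimes \mathbf{X}_2\preceq [2.5,3.5]\), by approximating \(\min _{t,\omega }g\) and \(\max _{t,\omega }g\) on a grid.

```python
# A minimal sketch: testing (X1, X2) against a single interval constraint
# G(X1, X2) = [0.5,1.5] (x) X1 (+) [1.5,2.5] (x) X2 ⪯ [2.5, 3.5], i.e.
# min_{t,w} g(x_w) <= b^L and max_{t,w} g(x_w) <= b^R, per the set F above.
import numpy as np

def g_bounds(X1, X2, num=21):
    s = np.linspace(0.0, 1.0, num)
    t1, t2, w1, w2 = np.meshgrid(s, s, s, s, indexing='ij')
    xw1 = X1[0] + w1*(X1[1] - X1[0])
    xw2 = X2[0] + w2*(X2[1] - X2[0])
    g = (0.5 + t1)*xw1 + (1.5 + t2)*xw2
    return float(g.min()), float(g.max())

def feasible(X1, X2, B=(2.5, 3.5)):
    gmin, gmax = g_bounds(X1, X2)
    return gmin <= B[0] and gmax <= B[1]

print(feasible((0.0, 1.0), (0.0, 0.5)))   # True:  min = 0 <= 2.5, max = 2.75 <= 3.5
print(feasible((2.0, 3.0), (1.0, 1.5)))   # False: max = 8.25 > 3.5
```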

The objective function of \(\mathbf{MEIOP}\) is a vector of interval-valued (set-valued) functions. Such problems have a set of compromise/efficient/Pareto optimal solutions, as in the case of a general multi-objective programming problem. Therefore, to compare any two interval vectors \(\mathbf {X}_v^n\) and \(\mathbf {Y}_v^n\) in the feasible region, the corresponding interval objective values \(\mathbf {F}(\mathbf {X}_v^n)\) and \(\mathbf {F}(\mathbf {Y}_v^n)\) are compared componentwise, like real vectors. We denote this by

$$\begin{aligned} \mathbf {F}(\mathbf {X}_v^n)\preceq _v\mathbf {F}(\mathbf {Y}_v^n) \Leftrightarrow \mathbf {F}^i_{\mathbf {C}_v^{k_i}} (\mathbf {X}_v^n)\preceq \mathbf {F}^i_{\mathbf {C}_v^{k_i}}(\mathbf {Y}_v^n),~\forall i\in \varLambda _m. \end{aligned}$$

As with a general multi-objective optimization problem, it is difficult to obtain an efficient solution of \(\mathbf{MEIOP}\) directly, because a partial ordering cannot compare all intervals and this difficulty appears at different stages of \(\mathbf{MEIOP}\). To avoid these complications, \(\mathbf{MEIOP}\) is transformed into a general optimization problem in Sect. 3.1, which uses the partial ordering defined in (2) to establish the existence of an efficient solution of \(\mathbf{MEIOP}\) through the solution of the transformed problem. Such an efficient solution is called a \(t\omega \)-efficient solution. The parametric representation of each objective function involves two types of parameters, t and \(\omega \), corresponding to the interval coefficients and the interval decision variables, respectively. The objective functions take different forms for different values of t and \(\omega \), so these parameters play an important role in obtaining an efficient solution of \(\mathbf{MEIOP}\) based on the partial ordering. Therefore, we call an efficient solution of \(\mathbf{MEIOP}\) a \(t\omega \)-efficient solution to distinguish it from the efficient solution of classical multi-objective programming. The results of this paper are based on the parametric form of the partial ordering on \(\mathbb {I}(\mathbb {R})\). Similar to a general multi-objective optimization problem, a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\) is defined as follows.

Definition 5

An interval vector \({\mathbf {X}_v^n}^* \in \mathcal {F}\) is called a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\) if there does not exist \(\mathbf {X}_v^n\in \mathcal {F}\) such that

$$\begin{aligned} \mathbf {F}_{\mathbf {C}_v^{k_i}}^i(\mathbf {X}_v^n)\preceq \mathbf {F}_{\mathbf {C}_v^{k_i}}^i({\mathbf {X}_v^n}^*),~ i \in \varLambda _m, ~\hbox {and for at least one}~ j\ne i,~\mathbf {F}_{\mathbf {C}_v^{k_j}}^j(\mathbf {X}_v^n)\prec \mathbf {F}_{\mathbf {C}_v^{k_j}}^j({\mathbf {X}_v^n}^*). \end{aligned}$$
(4)

Definition 6

An interval vector \({\mathbf {X}_v^n}^* \in \mathcal {F}\) is called a properly \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\), if \({\mathbf {X}_v^n}^*\in \mathcal {F}\) is a \(t\omega \)-efficient solution and there is a positive degenerate interval (a positive real number) \(\mu > 0\), so that for some \(t^i\in [0,1]^{k_i}\) and every \(\mathbf {X}_v^n\in \mathcal {F}\) with \(f_{c_{t^i}^i}^i(x_{\omega })<f_{c_{t^i}^i}^i(x^*_\omega ),\) at least one \(t^j\in [0,1]^{k_j},~t^i\ne t^j\) exists with \(f_{c_{t^j}^j}^j(x_\omega ^*)<f_{c_{t^j}^j}^j(x_\omega )\) and

$$\begin{aligned} \frac{f_{c_{t^i}^i}^i(x^*_\omega )-f_{c_{t^i}^i}^i(x_\omega )}{f_{c_{t^j}^j}^j (x_\omega )-f_{c_{t^j}^j}^j(x^*_\omega )}\le \mu , \end{aligned}$$
(5)

where \(x_\omega ^*\in {\mathbf {X}_v^n}^*\) and \(x_\omega \in {\mathbf {X}_v^n}\).

3.1 Existence of \(t\omega \)-efficient solution of MEIOP

In order to obtain a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\), we first construct an equivalent parametric deterministic optimization problem using the transformation given in the next paragraph, and then prove in the subsequent theorems that an optimal solution of the transformed problem is a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\).

Choose a vector-valued weight function \(p~(\hbox {or}~p(t))=(p_1(t^1),p_2(t^2),\ldots ,p_m(t^m))^T\) with \(p_i(t^i)> 0\), a weight function \(p_0(\omega )>0\), and scalars \(\lambda _i>0\), \(i \in \varLambda _m,\) and build the optimization problem

$$\begin{aligned} (\mathbf{MEIOP}^\lambda _p){:}~~\min _{x_\omega \in \mathcal {F}} {\varvec{\varPhi }}(x_\omega ), \end{aligned}$$
(6)

where \({\varvec{\varPhi }}(x_\omega )=\sum _{i \in \varLambda _m} \lambda _i\phi _i(x_\omega )\) represents the weighted sum of the deterministic transforms \(\phi _i(x_\omega )\) of the objective functions. This construction is primarily based on the weighted-sum scalarization approach of classical multi-objective programming, and we designate \(\mathbf{MEIOP}_p^\lambda \) as a weighted-sum scalarization of \(\mathbf{MEIOP}\). Each \(\phi _i(x_\omega )\) is the deterministic transform, obtained through the parametric definition of the corresponding interval objective function, based on the technique developed for a single objective function by Bhurjee and Panda (2012), as follows:

$$\begin{aligned} \phi _i(x_\omega )=\int _{k_i+n} p_i(t^i)p_0(\omega ) f^i_{c^i_{t^i}}(x_\omega ) \,dt^id\omega ,~dt^i=dt^i_1dt^i_2\ldots dt^i_{k_i}, ~d\omega =d\omega _1 d\omega _2\ldots d\omega _n, \end{aligned}$$

\(\hbox { and } \int _{k_i+n}=\underbrace{\int _0^1\int _0^1\ldots \int _0^1}_{({k_i+n})~ \hbox {times}}.\) Here \(t^i_1,t^i_2,\ldots ,t^i_{k_i}\) and \(\omega _1,\omega _2,\ldots ,\omega _n\) are mutually independent, and each component of \(t^i\) and \(\omega \) varies from 0 to 1. Hence the integral is a function of \((x^L,x^R)\) only, and henceforth \(\phi _i(x_\omega )\) is written as \(\phi _i(x^L,x^R)\), i.e.,

$$\begin{aligned} \phi _i(x^L,x^R)=\int _{k_i+n} p_i(t^i)p_0(\omega ) f_{c_{t^i}^i}^i(x_{\omega }) \,dt^id\omega . \end{aligned}$$

Eventually,

$$\begin{aligned} {\varvec{\varPhi }}(x^L,x^R)=\sum _{i \in \varLambda _m} \lambda _i\phi _i(x^L,x^R) \end{aligned}$$

Hence \(\mathbf{MEIOP}^\lambda _p\) becomes

$$\begin{aligned} (\mathbf{MEIOP}^\lambda _p):~~\min _{x^L,x^R\in \mathcal {F}} {\varvec{\varPhi }}(x^L,x^R). \end{aligned}$$
(7)

This is a general deterministic nonlinear programming problem that is free from interval uncertainty and can be solved using standard nonlinear programming techniques.
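When the integrals do not have a convenient closed form, \(\phi _i(x^L,x^R)\) can be approximated numerically before handing \({\varvec{\varPhi }}\) to a solver. The following is a minimal sketch (not the authors' code) using a midpoint rule over \([0,1]^{k_i+n}\); the weights \(p_i=p_0=1\) correspond to the mean interpretation of Note 1 below, and the test function is the parametric form of the first objective of Example 1 in Sect. 3.2, whose exact integral is also computed there in closed form.

```python
# A minimal sketch: midpoint-rule approximation of
#   phi_i(xL, xR) = ∫ p_i(t) p_0(w) f_i(t, x_w) dt dw  over t in [0,1]^k, w in [0,1]^n.
import numpy as np
from itertools import product

def phi(f, p, p0, k, n, xL, xR, num=20):
    mids = (np.arange(num) + 0.5)/num            # midpoints of num subintervals of [0,1]
    total = 0.0
    for point in product(mids, repeat=k + n):    # uniform grid on [0,1]^{k+n}
        t, w = np.array(point[:k]), np.array(point[k:])
        xw = xL + w*(xR - xL)                    # x_w, componentwise
        total += p(t)*p0(w)*f(t, xw)
    return total/num**(k + n)                    # each cell has volume 1/num^(k+n)

# Test function: parametric form of [2,4] (x) X1^2 (+) [1,3] (x) X2^2, with p = p0 = 1.
f1 = lambda t, xw: (2 + 2*t[0])*xw[0]**2 + (1 + 2*t[1])*xw[1]**2
one = lambda v: 1.0
xL, xR = np.array([1.0, 0.5]), np.array([1.5, 0.75])
print(phi(f1, one, one, k=2, n=2, xL=xL, xR=xR))
# exact value: (x1L^2 + x1R^2 + x1L*x1R) + (2/3)*(x2L^2 + x2R^2 + x2L*x2R) = 5.541666...
```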

Note 1

In \(\mathbf{MEIOP}_p^\lambda \), p(t) and \(p_0(\omega )\) are preference weight functions chosen by the decision maker; different efficient solutions of the problem can be obtained by choosing different preference weight functions. For \(p(t)=1\) and \(p_0(\omega )=1\), the decision maker's attitude is to estimate the mean of the objective functions of the problem. If \(\int _0^1p_i(t^i)dt^i=1\) for each \(i\) and \(\int _0^1p_0(\omega )d\omega =1\), then the decision maker's estimate lies between the optimistic and pessimistic optimal values. The forthcoming results show that any selection of positive \(p_i\) can provide a \(t\omega \)-efficient solution.

Theorem 1

If \((x^{L*},x^{R*})\in \mathcal {F}\) is an optimal solution of \(\mathbf{MEIOP}^\lambda _p\) for some \(p>_v0\), \(p_0>0\) and \(\lambda >_v0\), then \({\mathbf {X}_v^n}^*\in \mathcal {F}\) is a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}.\)

Proof

Suppose \((x^{L*},x^{R*})\in \mathcal {F}\) is an optimal solution of \(\mathbf{MEIOP}^\lambda _p\). If \({\mathbf {X}_v^n}^*\) is not a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\), then according to Definition 5 some \(\mathbf {X}_v^n\in \mathcal {F}\) satisfies relation (4). Using the partial ordering (2), relation (4) can be rewritten as follows: for some \((x^{L},x^{R})\in \mathcal {F}\) and each \(t^i \in [0,1]^{k_i}, \omega \in [0,1]^n\),

$$\begin{aligned} f_{c_{t^i}^i}^i(x_{\omega })\le f_{c_{t^i}^i}^i(x^*_\omega ),~i\in \varLambda _m \end{aligned}$$
(8)

and for at least one \(j\ne i\),

$$\begin{aligned} f_{c_{t^j}^j}^j(x_{\omega })< f_{c_{t^j}^j}^j(x^*_{\omega }). \end{aligned}$$
(9)

Since the weight functions \(p_i:[0,1]^{k_i}\rightarrow \mathbb {R}_+\), \(p_0:[0,1]^{n}\rightarrow \mathbb {R}_+\) are positive and \(\lambda >_v 0\), multiplying (8) and (9) by \(p_i(t^i)p_0(\omega )\) and integrating over \(t^i\) and \(\omega \) implies that, for some \((x^{L},x^{R})\in \mathcal {F}\),

$$\begin{aligned}&\phi _i(x^L, x^R)\le \phi _i(x^{L*}, x^{R*}),~i\in \varLambda _m~ \hbox {and for at least one} j\ne i,~\phi _j(x^L, x^R)< \phi _j(x^{L*}, x^{R*}),\\ \Rightarrow&\sum _{i\in \varLambda _m}\lambda _i\phi _i(x^L, x^R)<\sum _{i\in \varLambda _m} \lambda _i\phi _i(x^{L*}, x^{R*}) \end{aligned}$$

This is equivalent to \(\varPhi (x^L, x^R)<\varPhi (x^{L*}, x^{R*})\), which is impossible since \((x^{L*}, x^{R*})\) is the optimal solution of \(\mathbf{MEIOP}^\lambda _p\). Hence \({\mathbf {X}_v^n}^*\) is a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\). \(\square \)

Theorem 2

If \(\mathbf{X}_v^{n*}\) is a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\) with \(\mathbf {G}^p_{D_{v}^{m_p}}({\mathbf {X}_v^n}^*)\preceq \mathbf {B}_p\), and \(\mathbf {F}^i_{\mathbf {C}_v^{k_i}},\) \(\mathbf {G}^p_{D_v^{m_p}},~p\in \varLambda _q,~i\in \varLambda _m,\) are interval-valued convex functions with respect to \(\preceq \), then there exist weight functions \(p(t)\geqq _v 0\) and \(p_0(\omega )\ge 0\) such that \((x^{L*},x^{R*})\) is an optimal solution of \(\mathbf{MEIOP}^\lambda _p\) for any \(\lambda >_v0\).

Proof

Here \(\mathcal {F}= \Big \{(x^L,x^R)\in \mathcal {O} \times \mathcal {O}~\big |~\min _{t,\omega }\{g_{d_{t^p}^p}^p(x_\omega )\} \le b^L_p,~\max _{t,\omega }\{g_{d_{t^p}^p}^p(x_\omega )\} \le b^R_p,~p\in \varLambda _q \Big \}\). Since \(\mathbf {G}^p_{D_v^{m_p}}\) is an interval-valued convex function, \( g^p_{d^p_{t^p}}(x_\omega )\) is a convex function by Remark 1. Hence \(\mathcal {F}\) is a convex set.

Let \(\mathbf {X}_v^{n*} \in \mathcal {F}\) be a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\), so there does not exist \(\mathbf {X}_v^n\in \mathcal {F}\) that satisfies (4). By Remark 1, for every \(t^i\) and \(\omega \), \(f^i_{c_{t^i}^i}\) is convex on the convex set \(\mathcal {F}\). Since (4) has no solution, using the partial ordering and the parametric form of interval-valued functions as discussed above, we conclude that for every \(t^i\in [0,1]^{k_i}\) and \(\omega \in [0,1]^n\) the following system has no solution on \(\mathcal {F}\):

$$\begin{aligned} f^i_{c_{t^i}^i}(x_\omega )- f^i_{c_{t^i}^i}(x_\omega ^*)\le 0,~\forall ~ i \in \varLambda _m~ \hbox {and}~f^j_{c_{t^j}^j}(x_\omega )- f^j_{c_{t^j}^j}(x_\omega ^*)<0, ~\hbox {for at least one} j\ne i. \end{aligned}$$

If we denote \(F(x^L,x^R,t,\omega )=(f^1_{c_{t^1}^1}(x_\omega )- f^1_{c_{t^1}^1}(x_\omega ^*),\ldots ,f^m_{c_{t^m}^m}(x_\omega )- f^m_{c_{t^m}^m}(x_\omega ^*))^T,\) then, since any solution of the strict system would satisfy the system above, for every t and \(\omega \)

$$\begin{aligned} F(x^L,x^R,t,\omega )<_v 0 \end{aligned}$$

has no solution on \(\mathcal {F}\). Hence from Proposition 1, there exists a real vector \(u=(u_1,u_2,\ldots ,u_m)^T\), \(u\geqq _v0\), such that \(u^TF(x^L,x^R,t,\omega )\ge 0\) holds for all \((x^L,x^R)\in \mathcal {F}\). Define \(p_i:[0,1]^{k_i}\rightarrow \mathbb {R}_+\) by \(p_i(t^i)=u_i,~i\in \varLambda _m,\) and \(p_0:[0,1]^n\rightarrow \mathbb {R}_+\) by \(p_0(\omega )=u_0\) for some \(u_0>0\). Then \((u_0u)^TF(x^L,x^R,t,\omega )\ge 0\) is the same as

$$\begin{aligned} \{p_0(\omega )p(t)\}^TF(x^L,x^R,t,\omega )\ge 0, \forall (x^L,x^R) \in \mathcal {F}. \end{aligned}$$

This implies that \(\sum _{i \in \varLambda _m} p_i(t^i)p_0(\omega )f^i_{c^i_{t^i}}(x_\omega ) \ge \sum _{i \in \varLambda _m} p_i(t^i)p_0(\omega )f^i_{c^i_{t^i}}(x^*_\omega )\) for all \((x^L,x^R)\in \mathcal {F}.\) Hence, integrating over \(t\) and \(\omega \), for \(\lambda _i > 0,\) \(\sum _{i \in \varLambda _m} \lambda _i \phi _i(x^L,x^R) \ge \sum _{i \in \varLambda _m} \lambda _i \phi _i(x^{L*},x^{R*}),~\forall (x^L,x^R)\in \mathcal {F}.\) This is equivalent to \(\varPhi (x^L,x^R)\ge \varPhi (x^{L*},x^{R*}),~ \forall (x^L,x^R)\in \mathcal {F}\), which implies that \((x^{L*},x^{R*})\) is an optimal solution of \(\mathbf{MEIOP}^\lambda _p\) for \(p\geqq _v 0\), \(p_0\ge 0\), \(\lambda >_v 0\). \(\square \)

Theorem 3

If \((x^{L*},x^{R*})\in \mathcal {F}\) is an optimal solution of \(\mathbf{MEIOP}^\lambda _p\), where \(p_0>0\), \(p>_v0\), the \(p_i\) and \(p_0\) are continuous functions satisfying \(\int _{n} p_0(\omega )\,d\omega =1\) and \(\int _{k_i} p_i(t^i)\,dt^i=1\), and \(\lambda >_v0\), then \(\mathbf{X}_v^{n*}\) is a properly \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\).

Proof

Suppose \(\mathbf{X}_v^{n*}\) is not a properly \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\). Then either \(\mathbf{X}_v^{n*}\) is not a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\), or \(\mathbf{X}_v^{n*}\) is a \(t\omega \)-efficient solution but (5) is not satisfied for any positive \(\mu \).

First suppose \(\mathbf{X}_v^{n*}\) is not a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\). Then, by Theorem 1, \((x^{L*},x^{R*})\) is not an optimal solution of \(\mathbf{MEIOP}^\lambda _p\) for any \(p_0>0\), \(p>_v0\) and \(\lambda >_v0\), which contradicts the assumption that \((x^{L*},x^{R*})\) is an optimal solution of \(\mathbf{MEIOP}^\lambda _p\).

Next assume that \({\mathbf {X}_v^n}^*\) is a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\) but does not satisfy the condition (5) of proper \(t\omega \)-efficiency. So for some \(t^i,{\tilde{t}}^i\in [0,1]^{k_i}, i\in \varLambda _m\) and \((x^{L},x^{R})\in \mathcal {F}\) with \(f_{c_{t^i}^i}^i(x_{\omega })<f_{c_{t^i}^i}^i(x_\omega ^*)\), one can choose

$$\begin{aligned} \mu =(m-1)\underset{i,j}{\max }\left[ \underset{t^i,{\tilde{t}}^i, t^j,{\tilde{t}}^j}{\max } \Big \{\frac{\lambda _jp_j(t^j)p_j(\tilde{t}^j)p_0(\omega )}{\lambda _ip_i(t^i)p_i ({\tilde{t}}^i)p_0(\omega )}\Big \}\right] ,~ m\ge 2, i\ne j, \end{aligned}$$

where \(p_i:[0,1]^{k_i}\rightarrow \mathbb {R}_+\) and \(p_0:[0,1]^{n}\rightarrow \mathbb {R}_+\) are arbitrarily chosen continuous functions satisfying, for all \(j\in \varLambda _m\diagup \{i\}\),

$$\begin{aligned} \frac{f_{c_{t^i}^i}^i(x^*_{\omega })-f_{c_{{\tilde{t}}^i}^i}^i(x_{\omega })}{f_{c_{t^j}^j}^j(x_\omega )-f_{c_{{\tilde{t}}^j}^j}^j(x_{\omega }^*)}> & {} \mu ~~\hbox {for all} ~t^j,{\tilde{t}}^j\in [0,1]^{k_j},\omega \in [0,1]^n ~\hbox {with}~f_{c_{t^j}^j}^j(x_\omega )>f_{c_{{\tilde{t}}^j}^j}^j(x^*_{\omega }).\\ f_{c_{t^i}^i}^i(x_{\omega }^*)-f_{c_{{\tilde{t}}^i}^i}^i(x_{\omega })> & {} \mu (f_{c_{t^j}^j}^j(x_\omega )-f_{c_{{\tilde{t}}^j}^j}^j(x_{\omega }^*))\\> & {} \left\{ (m-1)\frac{\lambda _jp_j(t^j)p_j({\tilde{t}}^j)p_0(\omega )}{\lambda _ip_i(t^i)p_i({\tilde{t}}^i)p_0(\omega )}\right\} \Big (f_{c_{t^j}^j}^j (x_\omega )-f_{c_{{\tilde{t}}^j}^j}^j(x_{\omega }^*)\Big ). \end{aligned}$$

So \(\lambda _ip_i(t^i)p_i({\tilde{t}}^i)p_0(\omega )\Big (f_{c_{t^i}^i}^i(x_{\omega }^*) -f_{c_{{\tilde{t}}^i}^i}^i(x_{\omega })\Big )>(m-1)\lambda _jp_j(t^j)p_j ({\tilde{t}}^j)p_0(\omega )\Big (f_{c_{t^j}^j}^j(x_\omega )-f_{c_{{\tilde{t}}^j}^j}^j (x_{\omega }^*)\Big ).\) By integrating on both sides, we get

$$\begin{aligned} \lambda _i\int _{k_i}\int _{{\tilde{k}}_i} \int _{n} p_i(t^i)p_i({\tilde{t}}^i)p_0(\omega )&\Big (f_{c_{t^i}^i}^i(x_{\omega }^*)- f_{c_{{\tilde{t}}^i}^i}^i(x_{\omega })\Big )\,dt^i d{\tilde{t}}^i d\omega \\&> \\ (m-1)\lambda _j\int _{k_j}\int _{{\tilde{k}}_j}\int _{n} p_j(t^j)p_j({\tilde{t}}^j)p_0(\omega )&\Big (f_{c_{t^j}^j}^j(x_\omega ) -f_{c_{{\tilde{t}}^j}^j}^j(x_{\omega }^*)\Big )\,dt^j d{\tilde{t}}^j d\omega \end{aligned}$$

This implies

$$\begin{aligned} \lambda _i\left( \int _{k_i}\int _{n} p_i(t^i)p_0(\omega )f_{c_{t^i}^i}^i(x_{\omega }^*)dt^i d\omega -\int _{{\tilde{k}}_i}\int _{n} p_i({\tilde{t}}^i)p_0(\omega )f_{c_{{\tilde{t}}^i}^i}^i(x_{\omega })\,d{\tilde{t}}^i d\omega \right) \\ >\\ (m-1)\lambda _j\left( \int _{\tilde{k}_j}\int _{n} p_j({\tilde{t}}^j)p_0(\omega )f_{c_{{\tilde{t}}^j}^j}^j(x_\omega )\, d{\tilde{t}}^j d\omega -\int _{k_j}\int _{n} p_j(t^j)p_0(\omega )f_{c_{t^j}^j}^j(x_{\omega }^*)\, dt^j d\omega \right) . \end{aligned}$$

Thus

$$\begin{aligned} \lambda _i(\phi _i(x^{L*},x^{R*})-\phi _i(x^L,x^R))>(m-1)\lambda _j (\phi _j(x^L,x^R)-\phi _j(x^{L*},x^{R*})) \end{aligned}$$

Hence

$$\begin{aligned} \sum _{i \in \varLambda _m,j\ne i}\lambda _i(\phi _i(x^{L*},x^{R*}) - \phi _i(x^L,x^R))&>(m-1)\sum _{j \in \varLambda _m,j\ne i}\lambda _j(\phi _j(x^L,x^R) - \phi _j(x^{L*},x^{R*}))\\ \Longrightarrow ~~~\lambda _i(\phi _i(x^{L*},x^{R*}) - \phi _i(x^L,x^R))&>\sum \limits _{j \in \varLambda _m,j\ne i}\lambda _j(\phi _j(x^L,x^R) - \phi _j(x^{L*},x^{R*})). \end{aligned}$$

Hence

$$\begin{aligned} \sum _{j \in \varLambda _m} \lambda _j\phi _j(x^{L*},x^{R*})> \sum _{j \in \varLambda _m} \lambda _j \phi _j(x^L,x^R)~~\hbox {i.e.,}~~\varPhi (x^{L*},x^{R*})>\varPhi (x^L,x^R). \end{aligned}$$

This contradicts the assumption that \((x^{L*},x^{R*})\in \mathcal {F}\) is an optimal solution of \(\mathbf{MEIOP}^\lambda _p.\) Hence \({\mathbf {X}_v^n}^*\) is a properly \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\). \(\square \)

3.2 Numerical examples

In order to show the applicability of the results discussed in the previous sections, we illustrate the method by means of the following examples.

Example 1

Consider the following bi-objective interval optimization problem, in which one objective is of maximization type and the other of minimization type:

$$\begin{aligned} \max ~&[2, 4]\otimes \mathbf{X}_1^2 \oplus [1,3]\otimes \mathbf{X}_2^2,~~~ \min ~[2,4]\oplus [-2,-1]\otimes \mathbf{X}_1\oplus [4, 6]\otimes \mathbf{X}_2^2,\nonumber \\ \text{ subject } \text{ to }~&\mathbf{X}_1\preceq [1,1.5],~\mathbf{X}_2\succeq [0.5,0.75],~x_1^L\le x_1^R,~x_2^L\le x_2^R,~x_1^L\ge 0,~x_2^L\ge 0. \end{aligned}$$
(10)

Denote \(\mathbf {F}^1_{\mathbf {C}_v^{2}}(\mathbf{X}_1,\mathbf{X}_2)=[2, 4]\otimes \mathbf{X}_1^2 \oplus [1,3]\otimes \mathbf{X}_2^2\), \(\mathbf {F}^2_{\mathbf {C}_v^{3}}(\mathbf{X}_1,\mathbf{X}_2) =[2,4]\oplus [-2,-1]\otimes \mathbf{X}_1\oplus [4, 6]\otimes \mathbf{X}_2^2\), \(\mathbf {G}^1_{D_v^{1}}(\mathbf{X}_1,\mathbf{X}_2)=\mathbf{X}_1\), and \(\mathbf {G}^2_{D_v^{1}}(\mathbf{X}_1,\mathbf{X}_2)=\mathbf{X}_2\). The parametric forms of \(\mathbf {F}^1_{\mathbf {C}_v^{2}}\) and \(\mathbf {F}^2_{\mathbf {C}_v^{3}}\) are \(f^1_{c^1_{t^1}}(x^1_{{\omega _1}},x^2_{{\omega _2}})=(2+2t_1^1)(x_1^L+\omega _1 (x_1^R-x_1^L))^2+(1+2t_2^1)(x_2^L+\omega _2(x_2^R-x_2^L))^2\) and \(f^2_{c^2_{t^2}}(x^1_{{\omega _1}},x^2_{{\omega _2}})=(2+2t^2_1)+(-2+t^2_2) (x_1^L+\omega _1(x_1^R-x_1^L))+(4+2t^2_3)(x_2^L+\omega _2(x_2^R-x_2^L))^2\), respectively.

Consider the weight functions \(p_1(t^1)=1\), \(p_2(t^2)=1\), \(p_0(\omega )=1\) and \(\lambda _1=\frac{1}{2},\) \(\lambda _2=\frac{1}{2}\). Then \(k_1=2, k_2=3, n=2,\)

$$\begin{aligned} \phi _1(x_1^L,x_1^R, x_2^L, x_2^R)&=\int _{k_1+n}p_1(t^1)p_0(\omega )f^1_{c^1_{t^1}} (x_\omega )dt^1d\omega \nonumber \\&=\left( x_1^L\right) ^2+\left( x_1^R\right) ^2+x_1^Lx_1^R+ \frac{2}{3}\left( \left( x_2^L\right) ^2+\left( x_2^R\right) ^2+x_2^Lx_2^R\right) ,\end{aligned}$$
(11)
$$\begin{aligned} \phi _2\left( x_1^L,x_1^R,x_2^L,x_2^R\right)&=\int _{k_2+n}p_2\left( t^2\right) p_0\left( \omega \right) f^2_{c^2_{t^2}} \left( x_\omega \right) dt^2d\omega \nonumber \\&=3-\frac{3}{4}\left( x_1^L+x_1^R\right) +\frac{5}{3}\left( \left( x_2^L\right) ^2+\left( x_2^R\right) ^2+x_2^Lx_2^R\right) . \end{aligned}$$
(12)
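The closed forms (11) and (12) can be reproduced by symbolic integration. A brief sketch (not the authors' code) using SymPy, writing \(t_1,t_2,t_3\) for the components of both \(t^1\) and \(t^2\) (the first objective simply does not depend on \(t_3\)):

```python
# A minimal sketch: deriving (11) and (12) symbolically with SymPy.
import sympy as sp

x1L, x1R, x2L, x2R = sp.symbols('x1L x1R x2L x2R', real=True)
t1, t2, t3, w1, w2 = sp.symbols('t1 t2 t3 w1 w2', real=True)

xw1 = x1L + w1*(x1R - x1L)   # x_{w1}
xw2 = x2L + w2*(x2R - x2L)   # x_{w2}

f1 = (2 + 2*t1)*xw1**2 + (1 + 2*t2)*xw2**2              # parametric form of F^1
f2 = (2 + 2*t1) + (-2 + t2)*xw1 + (4 + 2*t3)*xw2**2     # parametric form of F^2

lims = [(t1, 0, 1), (t2, 0, 1), (t3, 0, 1), (w1, 0, 1), (w2, 0, 1)]
phi1 = sp.integrate(f1, *lims)
phi2 = sp.integrate(f2, *lims)

print(sp.expand(phi1))  # x1L**2 + x1L*x1R + x1R**2 + 2*(x2L**2 + x2L*x2R + x2R**2)/3
print(sp.expand(phi2))  # 3 - 3*(x1L + x1R)/4 + 5*(x2L**2 + x2L*x2R + x2R**2)/3
```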

The corresponding deterministic equivalent form of (10) is given below.

$$\begin{aligned} \left( \mathbf{MEIOP}_p^{\lambda }\right) ~:~~~\max ~~~&\lambda _1\phi _1\left( x_1^L,x_1^R, x_2^L, x_2^R\right) -\lambda _2\phi _2\left( x_1^L,x_1^R, x_2^L, x_2^R\right) \\&=-\frac{3}{2}+\frac{1}{2}\left( \left( x_1^L\right) ^2+\left( x_1^R\right) ^2+x_1^Lx_1^R\right) +\frac{3}{8}\left( x_1^L+x_1^R\right) -\frac{1}{2}\left( \left( x_2^L\right) ^2+\left( x_2^R\right) ^2+x_2^Lx_2^R\right) ,\\ \hbox {subject to}~~~&x_1^L\le 1,~ x_1^R\le 1.5,~ x_2^L\ge 0.5,~ x_2^R\ge 0.75,~x_1^L\le x_1^R,~x_2^L\le x_2^R,~x_1^L\ge 0,~x_2^L\ge 0. \end{aligned}$$

In order to solve the \(\mathbf{MEIOP}_p^{\lambda }\) problem, one can use any software that supports nonlinear programming. Here the LINGO 11 software and a Python code for the genetic algorithm (GA) have been used to obtain the optimal solution.

(i) Using the LINGO 11 software, the optimal solution of the above problem is obtained as

$$\begin{aligned} (x_1^L,x_1^R,x_2^L,x_2^R)=(1.00000, 1.50000, 0.50000, 0.75000) \end{aligned}$$

with the optimal value 1.21875. Hence ([1.00000, 1.50000], [0.50000, 0.75000]) is a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\).

(ii) Using the Python code for GA, the optimal solution of the above problem is obtained as

$$\begin{aligned} (x_1^L,x_1^R,x_2^L,x_2^R)=(0.99999576, 1.49934, 0.50772, 0.75572) \end{aligned}$$

with the optimal value 1.20463, where the population size, the number of generations and the number of parents are fixed at 100000, 5 and 100, respectively. Hence \(([0.99999576, 1.49934], [0.50772, 0.75572])\) is a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\).
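The deterministic problem \(\mathbf{MEIOP}_p^{\lambda }\) above can also be solved with open-source tools. A minimal sketch (not the authors' LINGO or GA code) using SciPy's SLSQP solver, which recovers approximately the same point:

```python
# A minimal sketch: solving the deterministic problem of Example 1 with SciPy (SLSQP).
import numpy as np
from scipy.optimize import minimize

def neg_Phi(z):
    x1L, x1R, x2L, x2R = z
    val = (-1.5
           + 0.5*(x1L**2 + x1R**2 + x1L*x1R)
           + (3.0/8.0)*(x1L + x1R)
           - 0.5*(x2L**2 + x2R**2 + x2L*x2R))
    return -val                       # maximize val  <=>  minimize -val

bounds = [(0.0, 1.0),                 # 0 <= x1L <= 1
          (0.0, 1.5),                 # x1R <= 1.5
          (0.5, None),                # x2L >= 0.5
          (0.75, None)]               # x2R >= 0.75
cons = [{'type': 'ineq', 'fun': lambda z: z[1] - z[0]},   # x1L <= x1R
        {'type': 'ineq', 'fun': lambda z: z[3] - z[2]}]   # x2L <= x2R

res = minimize(neg_Phi, x0=np.array([0.5, 1.0, 0.6, 0.8]),
               method='SLSQP', bounds=bounds, constraints=cons)
print(res.x, -res.fun)   # approx. (1.0, 1.5, 0.5, 0.75) with value 1.21875
```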

Example 2

Consider the following optimization problem whose objective functions and constraints are nonlinear interval-valued functions:

$$\begin{aligned} \min&~\Big \{[-2,0]\otimes \mathbf{X}_1,~[-1,2]\otimes \mathbf{X}_1\oplus [-2,-1]\otimes \mathbf{X}_2^2\Big \},\\ \text{ subject } \text{ to }&~[0.5,1.5]\otimes \mathbf{X}_1\oplus [1.5,2.5]\otimes \mathbf{X}_2\preceq [2.5,3.5],\\&~[0.5, 1.5]\otimes \mathbf{X}_1^2 \oplus [1,1]\otimes \mathbf{X}_2\succeq [0.05,0.20],\\&~x_1^L\le x_1^R,~ x_2^L\le x_2^R, ~x_1^L\ge 0, ~x_2^L\ge 0. \end{aligned}$$
Denote \(\mathbf {F}^1_{\mathbf {C}_v^{1}}(\mathbf{X}_1,\mathbf{X}_2)=[-2,0]\otimes \mathbf{X}_1\), \(\mathbf {F}^2_{\mathbf {C}_v^{2}}(\mathbf{X}_1,\mathbf{X}_2)=[-1,2]\otimes \mathbf{X}_1\oplus [-2,-1]\otimes \mathbf{X}_2^2\), \(\mathbf {G}^1_{D_v^{2}}(\mathbf{X}_1,\mathbf{X}_2)=[0.5,1.5]\otimes \mathbf{X}_1\oplus [1.5,2.5]\otimes \mathbf{X}_2,\) \(\mathbf {G}^2_{D_v^{2}}(\mathbf{X}_1,\mathbf{X}_2)=[0.5,1.5]\otimes \mathbf{X}_1^2\oplus [1,1]\otimes \mathbf{X}_2\). Then \(f^1_{c^1_{t^1}}(x^1_{{\omega _1}},x^2_{{\omega _2}})=(-2+2t^1_1)(x_1^L+ \omega _1(x_1^R-x_1^L)),\) \(f^2_{c^2_{t^2}}(x^1_{{\omega _1}},x^2_{{\omega _2}})=(-1+3t^2_1)(x_1^L+ \omega _1(x_1^R-x_1^L))+(-2+t^2_2)(x_2^L+\omega _2(x_2^R-x_2^L))^2\), where \(t_1^1,t_1^2, t_2^2\in [0,1]\).

Consider the weight functions \(p_1(t^1)=1+t^1_1\), \(p_2(t^2)=1\), \(p_0(\omega )=1\) and \(\lambda _1=\frac{3}{4},\) \(\lambda _2=\frac{1}{4}\). Then \(k_1=1, k_2=2, n=2\),

$$\begin{aligned}&\phi _1(x_1^L,x_1^R, x_2^L, x_2^R)=\int _0^1 \int _0^1p_1(t^1)p_0(\omega ) f^1_{c^1_{t^1}}(x_\omega )dt^1d\omega =-\frac{2}{3}(x_1^L+x_1^R),\\&\phi _2(x_1^L,x_1^R, x_2^L, x_2^R)=\int _{k_2+n}p_2(t^2)p_0(\omega )f^2_{c^2_{t^2}} (x_\omega )dt^2d\omega =\frac{x_1^L+x_1^R}{4}\\&-\frac{((x_2^L)^2+(x_2^R)^2+x_2^Lx_2^R)}{2}. \end{aligned}$$

Using the partial ordering (2), the parametric forms of \(\mathbf {G}^1_{D_v^{2}}(\mathbf{X}_1,\mathbf{X}_2)\preceq [2.5,3.5]\) and \(\mathbf {G}^2_{D_v^{2}}(\mathbf{X}_1,\mathbf{X}_2)\succeq [0.05,0.20]\) can be written as

$$\begin{aligned} g^1_{d^1_{t^1}}(x^1_{\omega _1},x^2_{\omega _2}) \le (2.5+t_3^1),~\forall t_3^1\in [0,1]~\hbox {and} ~g^2_{d^2_{t^2}}(x^1_{\omega _1},x^2_{\omega _2})\ge (0.05+0.15t_3^2),~\forall t_3^2\in [0,1], \end{aligned}$$

respectively, where

$$\begin{aligned} g^1_{d^1_{t^1}}(x^1_{\omega _1},x^2_{\omega _2})=(0.5+t^1_1)(x_1^L+\omega _1(x_1^R-x_1^L)) +(1.5+t^1_2)(x_2^L+\omega _2(x_2^R-x_2^L)),\\ g^2_{d^2_{t^2}}(x^1_{\omega _1},x^2_{\omega _2})=(0.5+t^2_1)(x_1^L+\omega _1(x_1^R-x_1^L))^2 +(x_2^L+\omega _2(x_2^R-x_2^L)). \end{aligned}$$

Hence

$$\begin{aligned} \mathcal {F}= & {} \Big \{(x_1^L,x_1^R,x_2^L, x_2^R)|g^1_{d^1_{t^1}}(x^1_{\omega _1},x^2_{\omega _2}) \le (2.5+t_3^1),g^2_{d^2_{t^2}}(x^1_{\omega _1},x^2_{\omega _2})\ge (0.05+0.15t_3^2);\\&t_3^1,t_3^2\in [0,1],~x_1^L\le x_1^R, ~x_2^L\le x_2^R,~x_1^L\ge 0,~x_2^L \ge 0\Big \}\\= & {} \Big \{(x_1^L,x_1^R,x_2^L, x_2^R)|0.5x_1^L+1.5x_2^L\le 2.5, 1.5x_1^R+2.5x_2^R \le 3.5,0.5(x_1^L)^2+x_2^L \ge 0.05,\\&1.5(x_1^R)^2+x_2^R\ge 0.2, ~x_1^L\le x_1^R,~ x_2^L\le x_2^R,~x^L_1\ge 0,~x_2^L \ge 0\Big \}. \end{aligned}$$

The deterministic problem corresponding to \(\mathbf{MEIOP}\) becomes

$$\begin{aligned} (\mathbf{MEIOP}_p^{\lambda })~ :~~~\min _{(x_1^L,x_1^R,x_2^L, x_2^R)\in \mathcal {F}}-\frac{7}{16}(x_1^L+x_1^R)-\frac{1}{8}((x_2^L)^2+(x_2^R)^2+x_2^Lx_2^R). \end{aligned}$$

(i) Using the LINGO 11 software, the optimal solution of the above problem is obtained as

$$\begin{aligned} (x_1^L,x_1^R,x_2^L,x_2^R)=(2.33333, 2.33333, 0.00000, 0.00000) \end{aligned}$$

with the optimal value \(-\,2.04167\). Hence ([2.33333, 2.33333], [0.00000, 0.00000]) is a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\).

(ii) Using the Python code for GA, the optimal solution of the above problem is obtained as

$$\begin{aligned} (x_1^L,x_1^R,x_2^L,x_2^R)=(2.32888, 2.33097, 0.00124, 0.00135) \end{aligned}$$

with the optimal value \(-\,2.03869\), where the population size, the number of generations and the number of parents are fixed at 10, 50 and 4, respectively. Hence \(\big ([2.32888, 2.33097],\) \([0.00124, 0.00135]\big )\) is a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\).
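As a population-based alternative in the spirit of the GA used above, the deterministic problem of this example can also be attacked with a simple evolutionary solver. A minimal sketch (not the authors' code) using SciPy's differential evolution, with an assumed quadratic penalty for the constraints of \(\mathcal {F}\) and a loose upper bound of 5 on each variable:

```python
# A minimal sketch: solving the deterministic problem of Example 2 by differential
# evolution, handling the constraints of F with a quadratic penalty.
import numpy as np
from scipy.optimize import differential_evolution

def objective(z):
    x1L, x1R, x2L, x2R = z
    return -(7.0/16.0)*(x1L + x1R) - (1.0/8.0)*(x2L**2 + x2R**2 + x2L*x2R)

def penalty(z):
    x1L, x1R, x2L, x2R = z
    g = [0.5*x1L + 1.5*x2L - 2.5,       # 0.5 x1L + 1.5 x2L <= 2.5
         1.5*x1R + 2.5*x2R - 3.5,       # 1.5 x1R + 2.5 x2R <= 3.5
         0.05 - (0.5*x1L**2 + x2L),     # 0.5 x1L^2 + x2L >= 0.05
         0.20 - (1.5*x1R**2 + x2R),     # 1.5 x1R^2 + x2R >= 0.20
         x1L - x1R,                     # x1L <= x1R
         x2L - x2R]                     # x2L <= x2R
    return sum(max(0.0, gi)**2 for gi in g)

penalized = lambda z: objective(z) + 1e4*penalty(z)
bounds = [(0.0, 5.0)]*4                 # x1L, x1R, x2L, x2R >= 0, loose upper bound

res = differential_evolution(penalized, bounds, seed=1, tol=1e-10, maxiter=3000)
print(res.x, objective(res.x))          # approx. (2.3333, 2.3333, 0, 0), value -2.0417
```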

Example 3

Consider the following optimization problem whose objective functions and constraints are nonlinear interval-valued convex functions with respect to \(\preceq \):

$$\begin{aligned} \max&\Big \{[-1,2]\otimes \mathbf{X}_1\oplus [-2,-1]\otimes \mathbf{X}_2^3\oplus [1,2],~[2,4] \otimes \mathbf{X}_1^2\oplus [1,3]\otimes \mathbf{X}_2^2\oplus [3,5]\Big \},\\ \text{ subject } \text{ to }&[-1,1]\otimes \mathbf{X}_1\oplus [-1,3]\otimes \mathbf{X}_2^3\preceq [2,3],\\&[0.5, 1.5]\otimes \mathbf{X}_1^2 \oplus [1.5,2.5]\otimes \mathbf{X}_2^3\preceq [2.5,3.5],\\&\mathbf{X}_1\preceq [1,3], \mathbf{X}_2\preceq [2.5,4], ~x_1^L\le x_1^R, x_2^L\le x_2^R, ~x_1^L\ge 0, ~x_2^L\ge 0. \end{aligned}$$

Denote \(\mathbf {F}^1_{\mathbf {C}_v^{3}}(\mathbf{X}_1,\mathbf{X}_2)=[-1,2]\otimes \mathbf{X}_1\oplus [-2,-1]\otimes \mathbf{X}_2^3\oplus [1,2]\), \(\mathbf {F}^2_{\mathbf {C}_v^{3}}(\mathbf{X}_1,\mathbf{X}_2)=[2,4]\otimes \mathbf{X}_1^2\oplus [1,3]\otimes \mathbf{X}_2^2\oplus [3,5]\), \(\mathbf {G}^1_{D_v^{2}}(\mathbf{X}_1,\mathbf{X}_2)=[-1,1]\otimes \mathbf{X}_1\oplus [-1,3]\otimes \mathbf{X}_2^3,\) \(\mathbf {G}^2_{D_v^{2}}(\mathbf{X}_1,\mathbf{X}_2)=[0.5, 1.5]\otimes \mathbf{X}_1^2 \oplus [1.5,2.5]\otimes \mathbf{X}_2^3\), \(\mathbf {G}^3_{D_v^{1}}(\mathbf{X}_1,\mathbf{X}_2)=\mathbf{X}_1\) and \(\mathbf {G}^4_{D_v^{1}}(\mathbf{X}_1,\mathbf{X}_2)=\mathbf{X}_2.\)

The parametric form of \(\mathbf {F}^1_{\mathbf {C}_v^{3}}\) and \(\mathbf {F}^2_{\mathbf {C}_v^{3}}\) is

\(f^1_{c^1_{t^1}}(x^1_{{\omega _1}},x^2_{{\omega _2}})=(-1+3t_1^1)(x_1^L+\omega _1(x_1^R-x_1^L)) +(-2+t_2^1)(x_2^L+\omega _2(x_2^R-x_2^L))^3+(1+t_3^1)\) and

\(f^2_{c^2_{t^2}}(x^1_{{\omega _1}},x^2_{{\omega _2}})=(2+2t^2_1)(x_1^L+\omega _1 (x_1^R-x_1^L))^2+(1+2t^2_2)(x_2^L+\omega _2(x_2^R-x_2^L))^2+(3+2t^2_3)\), respectively.

Consider the weight functions \(p_1(t^1)=1\), \(p_2(t^2)=1\), \(p_0(\omega )=1\) and \(\lambda _1=\frac{1}{2},\) \(\lambda _2=\frac{1}{2}\). Then \(k_1=3, k_2=3, n=2,\)

$$\begin{aligned} \phi _1(x_1^L,x_1^R, x_2^L, x_2^R)&=\int _{k_1+n}p_1(t^1)p_0(\omega )f^1_{c^1_{t^1}} (x_\omega )dt^1d\omega \nonumber \\&=\frac{1}{4}(x_1^L+x_1^R)-\frac{3}{8}\Big \{(x_2^L)^3+(x_2^R)^3+(x_2^L)^2x_2^R+x_2^L (x_2^R)^2\Big \}+\frac{3}{2},\nonumber \\ \phi _2(x_1^L,x_1^R,x_2^L,x_2^R)&=\int _{k_2+n}p_2(t^2)p_0(\omega )f^2_{c^2_{t^2}} (x_\omega )dt^2d\omega \nonumber \\&=(x_1^L)^2+(x_1^R)^2+x_1^Lx_1^R+\frac{2}{3}\Big \{(x_2^L)^2+(x_2^R)^2+x_2^Lx_2^R\Big \}+4. \end{aligned}$$
(13)

The corresponding deterministic equivalent form of the above problem is given below.

$$\begin{aligned} (\mathbf{MEIOP}_p^{\lambda }):~\max ~&\sum _{i=1}^2\lambda _i\phi _i(x_1^L,x_1^R, x_2^L, x_2^R)\\&=\frac{1}{2}\Big (\frac{1}{4}(x_1^L+x_1^R)-\frac{3}{8}\Big \{(x_2^L)^3+(x_2^R)^3 +(x_2^L)^2x_2^R+x_2^L(x_2^R)^2\Big \}+\frac{3}{2}\Big )\\&\quad +\frac{1}{2}\Big ((x_1^L)^2 +(x_1^R)^2+x_1^Lx_1^R+\frac{2}{3}\Big \{(x_2^L)^2+(x_2^R)^2+x_2^Lx_2^R\Big \}+4\Big ),\\ \hbox {subject to}~~~&-x_1^L-(x_2^L)^3\le 2, x_1^R+3(x_2^R)^3\le 3,\\&0.5(x_1^L)^2+1.5(x_2^L)^3\le 2.5, 1.5(x_1^R)^2+2.5(x_2^R)^3\le 3.5,\\&x_1^L\le 1, x_1^R\le 3, x_2^L\le 2.5, x_2^R\le 4,~x_1^L\le x_1^R,~x_2^L\le x_2^R,~x_1^L\ge 0,~x_2^L\ge 0. \end{aligned}$$

(i) Using the LINGO 11 software, the optimal solution of the above problem is obtained as

$$\begin{aligned} (x_1^L,x_1^R,x_2^L,x_2^R)=(1.00000, 1.50485, 0.34552, 0.34552) \end{aligned}$$

with the optimal value 5.53627. Hence ([1.00000, 1.50485], [0.34552, 0.34552]) is a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\).

(ii) Using the Python code for GA, the optimal solution of the above problem is obtained as

$$\begin{aligned} (x_1^L,x_1^R,x_2^L,x_2^R)=(0.99856, 1.50511, 0.34373, 0.34421) \end{aligned}$$

with the optimal value 5.53346, where the population size, the number of generations and the number of parents are fixed at 100000, 50 and 10, respectively. Hence \(\big ([0.99856, 1.50511], [0.34373, 0.34421]\big )\) is a \(t\omega \)-efficient solution of \(\mathbf{MEIOP}\).

In the above examples, one can observe that the solutions obtained using the Python GA code are close to those obtained with the LINGO 11 software.

4 Conclusion

This paper has developed a methodology to solve an enhanced interval multi-objective programming problem through a transformed deterministic form. The methodology treats the decision variables as intervals and parameterizes all the intervals involved, and it differs from the existing methodologies in this context. In the deterministic form of the original problem, we consider only positive weight functions (\(p_0>0\), \(p>_v0\)), which must be piecewise continuous on [0,1]. Solving the problem with an arbitrary weight function is computationally difficult, and the selection of weight functions is itself a difficult task; these are the limitations of this methodology. Developing a methodology that is free from predetermined weight functions is a future scope of the present work.