17.1 Introduction

The parameters of mathematical programming problems, i.e., coefficients and right-hand side values, are assumed to be specified as real numbers. In real-world problems, we may face cases where those parameters cannot be specified as real numbers because of environmental fluctuation and/or a lack of knowledge. Moreover, we may have cases where we cannot describe our goals and constraints with exact values.

Fuzzy and possibilistic programming approaches have been proposed for mathematical programming problems with ambiguity and vagueness [8, 21]. By those approaches, we obtain reasonable solutions under conflicting soft constraints and goals, robust solutions under hard and soft constraints, hopeful solutions attaining high-level goals, and so on. Fuzzy programming approaches were formulated by proposing treatments of inequality constraints whose coefficients and right-hand side values are fuzzy numbers [23,24,25]. Tanaka and Asai [25] proposed the nonnegativity index of a fuzzy number and applied it to the treatment of the inequality constraints of fuzzy linear programming problems. As the nonnegativity index takes a positive value when the center of the fuzzy number is positive, and unity when the lower bound of the support of the fuzzy number is non-negative, this treatment can be considered a strongly robust one. Słowiński [23, 24] proposed two indices: an optimistic index and a pessimistic index. The optimistic index shows to what extent the constraint is potentially satisfied, as it checks whether the upper bound of the level set of one fuzzy number is not less than the lower bound of the level set of the other fuzzy number. The pessimistic index is defined by the difference of the upper bounds of the level sets of the fuzzy numbers. To control the required levels of constraint satisfaction, the minimally required differences are assumed to be given as real numbers, possibly negative. Namely, this treatment can be seen as a weakly robust one. Similar approaches were proposed in the literature [11, 20, 26].

After the introduction of possibility theory [4, 30], possibilistic programming approaches [3, 7, 12] were proposed. It has been shown that many fuzzy programming approaches can be seen as variations of possibilistic programming approaches [7, 12]. In possibilistic programming approaches, possibility and necessity measures are used to reduce the problems to conventional programming problems. Many results demonstrate that possibilistic linear programming problems preserve linearity in the reduced problems when possibility and necessity measures are defined, respectively, by the minimum operation and the Dienes implication function. However, cases with other conjunction and implication functions have not yet been investigated extensively, although several alternative approaches [11, 21] have been proposed for the calculation of linear functions with fuzzy coefficients. Inuiguchi [6] has shown that the necessity fractile optimization models of possibilistic linear programming problems with soft constraints can be reduced to semi-infinite linear programming problems even when necessity measures are not defined by the Dienes implication function. Tanaka and Asai’s approach [25] for fuzzy programming problems is equivalent to a possibilistic programming approach using the necessity measure defined by the Dienes implication function. Słowiński’s approach [23, 24] for fuzzy programming problems is closely related to a possibilistic programming approach using the possibility measure and the necessity measures defined by the Gödel and reciprocal Gödel implication functions.

In recent years, the theoretical and methodological contributions in fuzzy optimization have shifted mainly to nonlinear programming problems [2, 17, 18, 27] and optimization over fuzzy relational constraints [15, 16, 19, 28, 29], as fuzzy linear programming problems have been investigated deeply. However, approaches developed for fuzzy linear programming problems are useful in other types of mathematical programming problems. Indeed, fuzzy linear programming techniques have been applied to many real-world programming problems [1, 5, 22].

In this chapter, we further study fuzzy and possibilistic linear programming problems, as there still exist some open problems. Namely, the introduction of various implication functions into fuzzy linear programming problems has not yet been studied extensively, although it increases the representability of the decision-maker’s requests on the robustness of constraints and goals. Therefore, we study the possibilistic linear programming approach using general necessity measures. We assume that all fuzzy coefficients as well as fuzzy constraints have linear membership functions and that necessity measures are defined by modifier functions based on the approach proposed by Inuiguchi et al. [9, 13]. As the results are more or less complex due to the treatment of general cases, we concentrate on the necessity fractile model among the various models in possibilistic programming approaches [8]. The model treated in this chapter can be seen as a robust optimization approach.

This chapter is organized as follows. In the next section, necessity measures are reviewed. Some properties and representations of necessity measures are briefly described. The possibilistic linear programming problem treated in this chapter is explained in Sect. 17.3. The reduction to a semi-infinite linear programming problem is shown. Moreover, the differences of inclusion relations equivalent to necessity fractile constraints defined by famous implication functions are illustrated. In Sect. 17.4, results in cases where functions associated with necessity measures are convex and concave are shown. Similar results in cases where modifier functions defining necessity measures are convex and concave are described in Sect. 17.5. In Sect. 17.6, the results in Sects. 17.4 and 17.5 are applied to R-, reciprocal R-, and S-implication functions as well as to famous implication functions. It is shown that the possibilistic linear programming problems with necessity measures defined by many famous implication functions are reduced to linear programming problems or solved rather easily by the proposed solution procedure.

17.2 Necessity Measures

The necessity measure [9, 13] of a fuzzy event B under a fuzzy set A is defined by

$$\displaystyle \begin{aligned} N_{A}(B)=\inf_{u\in U} I(\mu_{A}(u),\mu_{B}(u)), {} \end{aligned} $$
(17.1)

where μ A and μ B are the membership functions of A and B. I : [0, 1] × [0, 1] → [0, 1] is an implication function satisfying the following properties:

(I0):

I is upper semi-continuous.

(I1):

I(0, 0) = I(0, 1) = I(1, 1) = 1 and I(1, 0) = 0.

(I2):

I(a, b) ≤ I(c, d), for 0 ≤ c ≤ a ≤ 1 and 0 ≤ b ≤ d ≤ 1.

The relation of the necessity measure to the inclusion relation can be found in the following equivalence [6]:

$$\displaystyle \begin{aligned} \begin{array}{rcl} N_{A}(B)\geq h & \Leftrightarrow &\displaystyle \inf_{u\in U} I(\mu_A(u),\mu_B(u)) \geq h \\ & \Leftrightarrow &\displaystyle \left( \forall u\in U,\forall k\in [0,1];\:\mu_{A}(u)\geq k \Rightarrow \mu_{B}(u)\geq \theta(k,h) \right) \\ & \Leftrightarrow &\displaystyle \forall k\in [0,1];\ [A]_{k}\subseteq [B]_{\theta(k,h)}, {} \end{array} \end{aligned} $$
(17.2)

where \(\theta (k,h)=\inf \{s \in [0,1] \mid I(k,s)\geq h\}\) and [A]k is a k-level set of a fuzzy set A ⊆ U, i.e., [A]k = {u ∈ U ∣ μ A(u) ≥ k}. In what follows, we use a strong k-level set (A)k of a fuzzy set A ⊆ U defined by (A)k = {u ∈ U ∣ μ A(u) > k}.
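As a numerical illustration of (17.1) and the level-set equivalence (17.2), the following sketch evaluates a necessity degree on a discretized universe using the Dienes implication I(a, b) = max(1 − a, b); the two triangular fuzzy sets A and B are made-up examples, not from the chapter.

```python
import numpy as np

# Dienes implication I(a, b) = max(1 - a, b); it satisfies (I0)-(I2).
def dienes(a, b):
    return np.maximum(1.0 - a, b)

# theta(k, h) = inf{s in [0, 1] | I(k, s) >= h}, evaluated on a grid of s.
def theta(I, k, h, n=1001):
    s = np.linspace(0.0, 1.0, n)
    feas = s[I(k, s) >= h]
    return feas[0] if feas.size else 1.0

# Discretized universe and two made-up triangular fuzzy sets A and B.
u = np.linspace(0.0, 10.0, 2001)
mu_A = np.clip(1.0 - np.abs(u - 4.0) / 1.0, 0.0, 1.0)
mu_B = np.clip(1.0 - np.abs(u - 4.5) / 3.0, 0.0, 1.0)

# Necessity degree, Eq. (17.1): N_A(B) = inf_u I(mu_A(u), mu_B(u)).
N = float(dienes(mu_A, mu_B).min())

# Level-set check from (17.2): N_A(B) >= h iff [A]_k is contained in
# [B]_{theta(k, h)} for every k with theta(k, h) > 0.
h = 0.6
ok = True
for k in np.linspace(0.01, 1.0, 25):
    t = theta(dienes, k, h)
    if t > 0:
        in_A, in_B = mu_A >= k, mu_B >= t
        ok = ok and bool(np.all(in_B[in_A]))

print(N, ok)   # here N is 0.625, and the inclusion test agrees with N >= 0.6
```

For the Dienes implication, θ(k, h) is 0 when 1 − k ≥ h and h otherwise, so only the level cuts with k > 1 − h impose conditions, matching the role of the condition θ(k, h) > 0 in the text.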

Although a necessity measure is defined by implication function I, it is not an easy task to select suitable I depending on the situation. Then Inuiguchi and Tanino [9] and Inuiguchi et al. [13] proposed a method for selecting a necessity measure suitable for the decision-maker’s requirement. By this method, a necessity measure is specified based on the decision-maker’s satisfaction degrees to several inclusion relations between two fuzzy sets from weak to strong ones. The transition of the inclusion relation from weak to strong can be expressed by two modifier-generating functions g m, g M : [0, 1] × [0, 1] → [0, 1] satisfying:

(g1):

g m(a, ⋅) and g M(a, ⋅) are lower and upper semi-continuous for all a ∈ [0, 1], respectively.

(g2):

g m(1, h) = g M(1, h) = 1 and g m(0, h) = g M(0, h) = 0 for all h > 0.

(g3):

g m(a, 0) = 0 and g M(a, 0) = 1 for all a ∈ [0, 1].

(g4):

h 1 ≥ h 2 implies g m(a, h 1) ≥ g m(a, h 2) and g M(a, h 1) ≤ g M(a, h 2) for all a ∈ [0, 1].

(g5):

a ≥ b implies g m(a, h) ≥ g m(b, h) and g M(a, h) ≥ g M(b, h) for all h ∈ [0, 1].

(g6):

g m(a, 1) > 0 and g M(a, 1) < 1 for all a ∈ (0, 1).

g m is called an inner modifier-generating function, while g M is called an outer modifier-generating function.

Then, a necessity measure is defined by

$$\displaystyle \begin{aligned} N_A(B) \geq h \Leftrightarrow m_h(A) \subseteq M_h(B), {} \end{aligned} $$
(17.3)

where m h(A) and M h(B) are defined by

$$\displaystyle \begin{aligned} \mu_{m_h(A)}(u)=g^m(\mu_A(u),h), \quad \mu_{M_h(B)}(u)=g^M(\mu_B(u),h). {} \end{aligned} $$
(17.4)

To put it differently, a necessity measure is defined by

$$\displaystyle \begin{aligned} N_A(B) = \sup \{h \in [0,1] \mid m_h(A) \subseteq M_h(B) \}. {} \end{aligned} $$
(17.5)

Inuiguchi and Tanino [9] and Inuiguchi et al. [13] showed that the necessity measures defined by modifier-generating functions can be defined by (17.1) with the following implication function:

$$\displaystyle \begin{aligned} I(a,b)=\sup\{h\in [0,1] \: | \: g^m(a,h) \leq g^M(b,h)\}. {} \end{aligned} $$
(17.6)
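The construction (17.6) can be checked numerically. The modifier-generating pair below is an illustrative assumption (not taken from the chapter): g^m(a, h) = max(0, (a − 1 + h)/h) and g^M(a, h) = min(1, a/h) for h > 0, with g^m(a, 0) = 0 and g^M(a, 0) = 1. For this particular pair, the induced implication turns out to coincide with the Łukasiewicz implication min(1, 1 − a + b).

```python
import numpy as np

# Illustrative modifier-generating functions (an assumption of this sketch):
#   g_m(a, h) = max(0, (a - 1 + h) / h)  for h > 0,  g_m(a, 0) = 0
#   g_M(a, h) = min(1, a / h)            for h > 0,  g_M(a, 0) = 1
def g_m(a, h):
    return 0.0 if h == 0 else max(0.0, (a - 1.0 + h) / h)

def g_M(a, h):
    return 1.0 if h == 0 else min(1.0, a / h)

# Implication induced by Eq. (17.6): I(a, b) = sup{h | g_m(a,h) <= g_M(b,h)},
# approximated by the largest feasible h on a grid.
def induced_I(a, b, n=2001):
    hs = np.linspace(0.0, 1.0, n)
    feas = [h for h in hs if g_m(a, h) <= g_M(b, h) + 1e-12]
    return max(feas)

# For this pair, the induced implication is the Lukasiewicz implication.
for a in (0.0, 0.3, 0.7, 1.0):
    for b in (0.0, 0.4, 1.0):
        assert abs(induced_I(a, b) - min(1.0, 1.0 - a + b)) < 1e-3
print("induced implication matches min(1, 1 - a + b)")
```

This illustrates the point of the section: specifying the two modifier-generating functions implicitly fixes an implication function, and hence a necessity measure.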

When we define g m(a, h) = a and g M(a, h) = a, ∀a ∈ [0, 1] for some h ∈ (0, 1], N A(B) ≥ h is equivalent to the normal inclusion relation between fuzzy sets A and B, i.e., A ⊆ B. Then the condition N A(B) ≥ h for general modifier-generating functions can be seen as a generalization of the inclusion relation between A and B, and N A(B) can be regarded as the degree of inclusion.

Using modifier-generating functions g m and g M, we define a necessity measure N A(B) by giving the equivalent condition of N A(B) ≥ h (h ∈ (0, 1]) by an inclusion relation between a modified fuzzy set A and a modified fuzzy set B, i.e., m h(A) ⊆ M h(B). Then, a necessity measure can be specified by giving modifier-generating functions g m(⋅, h) and g M(⋅, h) that determine how A and B are contracted/expanded and expanded/contracted according to degree h, respectively. The specification of modifier-generating functions would be easier than that of implication function directly.

17.3 Possibilistic Linear Programming

We consider the following possibilistic linear programming problem:

$$\displaystyle \begin{aligned} \begin{array}{rl} \mbox{maximize} & \boldsymbol{C}^{\mathrm{T}}\boldsymbol{x},\\ \mbox{subject to} & \boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x} \lesssim b_i,\ i=1,2,\ldots,m, \\ & \boldsymbol{x} \geq \mathbf{0}, \end{array} {} \end{aligned} $$
(17.7)

where x = (x 1, x 2, …, x n)T is a decision vector. b i, i = 1, 2, …, m are constants. Components c j of c and a ij of a i are not known exactly, but the possible ranges of those values are known as trapezoidal fuzzy numbers C j and A ij, respectively. A trapezoidal fuzzy number C is characterized by a quadruple (c L, c R, γ L, γ R), where c L and c R are the lower and upper bounds of the most plausible interval for C, while γ L and γ R are the left and right spreads so that c L − γ L and c R + γ R show the lower and upper bounds of the least plausible interval. More concretely, the membership function μ C of C is defined by

$$\displaystyle \begin{aligned} \mu_C(r)= \max \left (0, \min \left( 1-\frac{c^{\mathrm{L}}-r}{\gamma^{\mathrm{L}}}, 1-\frac{r-c^{\mathrm{R}}}{\gamma^{\mathrm{R}}},1 \right) \right). {} \end{aligned} $$
(17.8)

We assume C j and A ij are trapezoidal fuzzy numbers characterized by \((c_j^{\mathrm {L}},c_j^{\mathrm {R}},\gamma _j^{\mathrm {L}},\gamma _j^{\mathrm {R}})\) and \((a_{ij}^{\mathrm {L}},a_{ij}^{\mathrm {R}},\alpha _{ij}^{\mathrm {L}},\alpha _{ij}^{\mathrm {R}})\). The notation \(\lesssim \) is a fuzzified inequality so that \(\lesssim b_i\) corresponds to a fuzzy set B i with verbal expression “a set of real numbers which are roughly smaller than b i.” We assume that the membership function \(\mu _{B_i}\) of B i is defined by

$$\displaystyle \begin{aligned} \mu_{B_i}(r)= \max \left (0, \min \left( 1-\frac{r-b_i}{\beta_i^{\mathrm{R}}},1 \right) \right), {} \end{aligned} $$
(17.9)

where \(\beta _i^{\mathrm {R}}\) is a spread showing the tolerance.
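The membership functions (17.8) and (17.9) can be transcribed directly; the numerical parameters below are made up for illustration.

```python
import numpy as np

# Trapezoidal membership, Eq. (17.8): C = (c_L, c_R, gamma_L, gamma_R).
def mu_trap(r, c_L, c_R, g_L, g_R):
    r = np.asarray(r, dtype=float)
    return np.maximum(0.0, np.minimum.reduce([
        1.0 - (c_L - r) / g_L,      # left shoulder
        1.0 - (r - c_R) / g_R,      # right shoulder
        np.ones_like(r),            # plateau
    ]))

# Fuzzy right-hand side, Eq. (17.9): "roughly smaller than b_i".
def mu_B(r, b, beta_R):
    r = np.asarray(r, dtype=float)
    return np.maximum(0.0, np.minimum(1.0 - (r - b) / beta_R, 1.0))

r = [0.5, 2.0, 3.5, 5.0, 6.5]
m = mu_trap(r, 2.0, 5.0, 1.0, 2.0)   # plateau [2, 5], spreads 1 and 2
print(m)                             # memberships 0, 1, 1, 1, 0.25
print(mu_B(5.0, 4.0, 2.0))           # 0.5: one unit into the tolerance
```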

The possibilistic linear programming problem (17.7) is a fuzzy linear programming problem as the coefficients are specified by fuzzy numbers showing their possible ranges. In the possibilistic linear programming problem, each coefficient is considered as an uncertain variable taking a value in the given fuzzy number and treated by the possibility theory [4, 30].

As in the literature [8, 21], because x ≥0, c T x and \(\boldsymbol {a}_i^{\mathrm {T}}\boldsymbol {x}\) are restricted by trapezoidal fuzzy numbers \(\boldsymbol {C}^{\mathrm {T}}\boldsymbol {x}\) and \(\boldsymbol {A}_i^{\mathrm {T}}\boldsymbol {x}\) characterized by \(({\boldsymbol {c}^{\mathrm {L}}}^{\mathrm {T}}\boldsymbol {x},{\boldsymbol {c}^{\mathrm {R}}}^{\mathrm {T}}\boldsymbol {x},{\boldsymbol {\gamma }^{\mathrm {L}}}^{\mathrm {T}}\boldsymbol {x},{\boldsymbol {\gamma }^{\mathrm {R}}}^{\mathrm {T}}\boldsymbol {x})\) and \(({\boldsymbol {a}_i^{\mathrm {L}}}^{\mathrm {T}}\boldsymbol {x},{\boldsymbol {a}_i^{\mathrm {R}}}^{\mathrm {T}}\boldsymbol {x}, {\boldsymbol {\alpha }_i^{\mathrm {L}}}^{\mathrm {T}}\boldsymbol {x},\) \({\boldsymbol {\alpha }_i^{\mathrm {R}}}^{\mathrm {T}}\boldsymbol {x}) \), respectively, where we define \(\boldsymbol {c}^{\mathrm {L}}=(c_1^{\mathrm {L}},c_2^{\mathrm {L}}, \ldots ,c_n^{\mathrm {L}})^{\mathrm {T}}\), \(\boldsymbol {c}^{\mathrm {R}}=(c_1^{\mathrm {R}},c_2^{\mathrm {R}}, \ldots ,c_n^{\mathrm {R}})^{\mathrm {T}}\), \(\boldsymbol {\gamma }^{\mathrm {L}}=(\gamma _1^{\mathrm {L}}, \gamma _2^{\mathrm {L}}, \ldots ,\gamma _n^{\mathrm {L}})^{\mathrm {T}}\), \(\boldsymbol {\gamma }^{\mathrm {R}}=(\gamma _1^{\mathrm {R}}, \gamma _2^{\mathrm {R}}, \ldots ,\gamma _n^{\mathrm {R}})^{\mathrm {T}}\), \(\boldsymbol {a}_i^{\mathrm {L}}=(a_{i1}^{\mathrm {L}},\) \(a_{i2}^{\mathrm {L}},\ldots ,a_{in}^{\mathrm {L}})^{\mathrm {T}}\), \(\boldsymbol {a}_i^{\mathrm {R}}= (a_{i1}^{\mathrm {R}},a_{i2}^{\mathrm {R}}, \ldots ,a_{in}^{\mathrm {R}})^{\mathrm {T}}\), \(\boldsymbol {\alpha }_i^{\mathrm {L}}=(\alpha _{i1}^{\mathrm {L}},\) \(\alpha _{i2}^{\mathrm {L}}, \ldots ,\alpha _{in}^{\mathrm {L}})^{\mathrm {T}}\), and \(\boldsymbol {\alpha }_i^{\mathrm {R}}= (\alpha _{i1}^{\mathrm {R}}, \alpha _{i2}^{\mathrm {R}}, \ldots ,\alpha _{in}^{\mathrm {R}})^{\mathrm {T}}\).

Using a necessity measure N i defined by an implication function I i, in this chapter, we formulate Problem (17.7) as a necessity fractile optimization model (see Inuiguchi and Ramík [8]):

$$\displaystyle \begin{aligned} \begin{array}{rl} \mbox{maximize} & q,\\ {} \mbox{subject to} & N_{\boldsymbol{{C}}^{\mathrm{T}}\boldsymbol{{x}}}^0([q,+\infty)) \geq h^0,\\ & N_{\boldsymbol{{A}}_i^{\mathrm{T}}\boldsymbol{{x}}}^i(B_i) \geq h^i,\ i=1,2,\ldots,m, \\ & \boldsymbol{x} \geq \mathbf{0}, \end{array} {} \end{aligned} $$
(17.10)

where q is an auxiliary variable. h 0 ∈ (0, 1] and h i ∈ (0, 1], i = 1, 2, …, m are certainty levels of goal achievement and constraint satisfactions specified by the decision-maker. Constraints \(N_{\boldsymbol { {C}}^{\mathrm {T}}\boldsymbol { {x}}}^0([q,+\infty )) \geq h^0\) and \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\) are called necessity fractile constraints.

In Fig. 17.1, the difference between \(\boldsymbol {A}_i^{\mathrm {T}}\boldsymbol {x}\subseteq B_i\) and \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}(B_i)\geq h^i\) is shown. In the right figure, \(\boldsymbol {A}_i^{\mathrm {T}}\boldsymbol {x}\subseteq B_i\) is not satisfied, but \(I_i(\mu _{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}(r), \mu _{B_i}(r)) \geq 0.8\) with I i(a, b) = 1 − a + ab (Reichenbach implication function) is satisfied for all r ∈R. Namely, \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq 0.8\) is satisfied.

Fig. 17.1
figure 1

\( \boldsymbol {A}_i^{\mathrm {T}} \boldsymbol {x}\subseteq B_i\) and \(N_{ \boldsymbol { {A}}_i^{\mathrm {T}} \boldsymbol { {x}}}(B_i)\geq h^i\)

Table 17.1 shows the relations between several implication functions and their transitions of the inclusion relations required by constraint \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\) as h i increases. The definitions of implication functions shown in Table 17.1 are given later in Table 17.3 of Sect. 17.6. From Table 17.1, we observe how constraint \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\) varies with the implication function defining the necessity measure. This difference implies the significance of the selection of implication function I i defining necessity measure N i. A more detailed transition of the inclusion relations required by the necessity fractile constraint can be useful for selecting the implication function defining the necessity measure. Modifier functions g m and g M corresponding to necessity measure N i are useful for seeing the transition of the inclusion relations. The decision-maker can choose an implication function I i and degree h i considering his requirement on the robustness of the constraint.

Table 17.1 Implication functions and transitions of inclusion relations expressed by \(N_{ \boldsymbol { {A}}_i^{\mathrm {T}} \boldsymbol { {x}}}^i(B_i) \geq h^i\)

We first see that Problem (17.10) is reduced to a semi-infinite linear programming problem. Let \(c_j^{\mathrm {L}}(h)=\inf [C_j]_h\), \(c_j^{\mathrm {R}}(h)=\sup [C_j]_h\), \(a_{ij}^{\mathrm {L}}(h)=\inf [A_{ij}]_h\), \(a_{ij}^{\mathrm {R}}(h)=\sup [A_{ij}]_h\), and \(b_i^{\mathrm {R}}(h) =\sup [B_i]_h\). Then, from (17.2) and x ≥0, we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle {N_{\boldsymbol{{C}}^{\mathrm{T}}\boldsymbol{{x}}}^0([q,+\infty)) \geq h^0 \Leftrightarrow \left[\boldsymbol{C}^{\mathrm{T}}\boldsymbol{x}\right]_k \subseteq \left[[q,+\infty)\right]_{\theta^0(k,h^0)} } \\ & \Leftrightarrow &\displaystyle \inf \left[\boldsymbol{C}^{\mathrm{T}}\boldsymbol{x}\right]_k \geq q,\ \forall k\in [0,1],\ \theta^0(k,h^0) > 0, \\ & \Leftrightarrow &\displaystyle \sum_{j=1}^n c_j^{\mathrm{L}}(k) x_j \geq q,\ \forall k\in [0,1],\ \theta^0(k,h^0) > 0, {} \end{array} \end{aligned} $$
(17.11)
$$\displaystyle \begin{aligned} \begin{array}{rcl} & {}{}{}&\displaystyle {N_{\boldsymbol{{A}}_i^{\mathrm{T}}\boldsymbol{{x}}}^i(B_i) \geq h^i \Leftrightarrow \left[\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x}\right]_k \subseteq \left[B_i \right]_{\theta^i(k,h^i)} } \\ & \Leftrightarrow &\displaystyle \sup \left[\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x}\right]_k \leq \sup\left[B_i \right]_{\theta^i(k,h^i)},\ \forall k\in [0,1],\ \theta^i(k,h^i) > 0, \\ & \Leftrightarrow &\displaystyle \sum_{j=1}^n a_{ij}^{\mathrm{R}}(k) x_j \leq b_i^{\mathrm{R}}(\theta^i(k,h^i)),\ \forall k\in [0,1],\ \theta^i(k,h^i)>0, {} \end{array} \end{aligned} $$
(17.12)

where \(\theta ^i(k,h)=\inf \{ s \in [0,1] \mid I_i(k,s)\geq h \}\), i = 0, 1, …, m. We note that the constraints \(N_{\boldsymbol { {C}}^{\mathrm {T}}\boldsymbol { {x}}}^0([q,+\infty )) \geq h^0\) and \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\) vanish when θ 0(k, h 0) = 0 and θ i(k, h i) = 0, respectively.

From (17.11) and (17.12), Problem (17.10) is reduced to the following semi-infinite linear programming problem:

$$\displaystyle \begin{aligned} \begin{array}{rl} \mbox{maximize} & q,\\ \mbox{subject to} & \displaystyle \sum_{j=1}^n c_j^{\mathrm{L}}(k) x_j \geq q,\ \forall k\in [0,1],\ \theta^0(k,h^0)>0,\\ & \displaystyle \sum_{j=1}^n a_{ij}^{\mathrm{R}}(k) x_j \leq b_i^{\mathrm{R}}(\theta^i(k,h^i)),\ \forall k\in [0,1],\ \theta^i(k,h^i)>0,\ \\ & {i=1,2,\ldots,m,}\\ & \boldsymbol{x} \geq \mathbf{0}. \end{array} {} \end{aligned} $$
(17.13)

Then, selecting finitely many values of k ∈ [0, 1], Problem (17.10) is approximately reduced to a linear programming problem. We note that, as the fuzzy numbers are trapezoidal, we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} c_j^{\mathrm{L}}(k)&=& \left\{ \begin{array}{ll} c_j^{\mathrm{L}}-(1-k)\gamma_j^{\mathrm{L}}, & \mbox{if }k \in (0,1], \\ -\infty, & \mbox{if } k=0, \end{array} \right. \ j=1,2,\ldots,n, {} \end{array} \end{aligned} $$
(17.14)
$$\displaystyle \begin{aligned} a_{ij}^{\mathrm{R}}(k)&\!=\! \left\{ \begin{array}{ll} a_{ij}^{\mathrm{R}}+(1-k)\alpha_{ij}^{\mathrm{R}}, & \mbox{if } k \in (0,1],\\ +\infty, & \mbox{if } k=0, \end{array} \right. \ i\!=\!1,2,\ldots,m,\ j\!=\!1,2,\ldots,n, {} \end{aligned} $$
(17.15)
$$\displaystyle \begin{aligned} \begin{array}{rcl} b_{i}^{\mathrm{R}}(k)&=& \left\{ \begin{array}{ll} b_{i}+(1-k)\beta_{i}^{\mathrm{R}}, & \mbox{if } k \in (0,1],\\ +\infty, & \mbox{if } k=0, \end{array} \right. \ i=1,2,\ldots,m. {} \end{array} \end{aligned} $$
(17.16)
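With the closed forms (17.14)-(17.16), the sampled constraints of (17.13) are easy to assemble. The sketch below builds the finite constraint system for made-up trapezoidal data, taking each θ^i for the Dienes implication (θ(k, h) = h if k > 1 − h, and 0 otherwise); the resulting arrays can be passed to any LP solver.

```python
import numpy as np

# Made-up trapezoidal data for one constraint (i = 1) with two variables.
c_L, gamma_L = np.array([3.0, 2.0]), np.array([0.5, 0.5])        # C_j
a_R, alpha_R = np.array([1.0, 2.0]), np.array([0.2, 0.3])        # A_1j
b1, beta1_R = 10.0, 2.0                                          # B_1
h0 = h1 = 0.8

def cL(k):  return c_L - (1.0 - k) * gamma_L      # Eq. (17.14), k > 0
def aR(k):  return a_R + (1.0 - k) * alpha_R      # Eq. (17.15), k > 0
def bR(k):  return b1 + (1.0 - k) * beta1_R       # Eq. (17.16), k > 0

def theta_dienes(k, h):     # assumed implication: I(a, b) = max(1 - a, b)
    return h if 1.0 - k < h else 0.0

# Sample k and keep only constraints with theta(k, h) > 0, as in (17.12).
rows, rhs = [], []
for k in np.linspace(0.05, 1.0, 20):
    t = theta_dienes(k, h1)
    if t > 0:
        rows.append(aR(k))      # sum_j a_1j^R(k) x_j <= b_1^R(theta(k, h1))
        rhs.append(bR(t))
A_ub, b_ub = np.array(rows), np.array(rhs)

# For the objective, (17.11) keeps the smallest sampled c^L(k) x_j bound;
# with the Dienes theta, the binding k is the smallest one with theta > 0.
print(A_ub.shape, b_ub[0])
```

For the Dienes implication the right-hand sides b_1^R(θ) are all equal, so the binding sampled constraint is the one with the smallest admissible k, where a^R(k) is largest; for other implication functions the trade-off between the two sides is less trivial, which is exactly what Sects. 17.4 and 17.5 analyze.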

17.4 Case Where θ i(⋅, h i)’s Are Convex and Concave

As fuzzy parameters C j, A ij, and B i have linear membership functions, we obtain simpler reduced problems when θ i(⋅, h i)’s are convex and concave.

Let us consider a case where θ i(⋅, h i) : [0, 1] → [0, 1], i = 1, 2, …, m, are convex in the range (0, 1) (see Fig. 17.2). From (17.9), we obtain \(b_i^{\mathrm {R}}(\theta ^i(k,h^i))=b_i+(1-\theta ^i(k,h^i))\beta _i^{\mathrm {R}}\), i = 1, 2, …, m. Therefore, the convexity of θ i(⋅, h i) implies the concavity of \(b_i^{\mathrm {R}}(\theta ^i(\cdot ,h^i))\). Under the concavity of \(b_i^{\mathrm {R}}(\theta ^i(\cdot ,h^i))\), the following equivalence is valid:

$$\displaystyle \begin{aligned} \begin{array}{rcl} N_{\boldsymbol{{A}}_i^{\mathrm{T}}\boldsymbol{{x}}}^i(B_i) \geq h^i &\Leftrightarrow& \sum_{j=1}^n a_{ij}^{\mathrm{R}}(k) x_j \leq b_i^{\mathrm{R}}(\theta^i(k,h^i)),\ \forall k\in [0,1],\ \theta^i(k,h^i)>0 \\ & \Leftrightarrow & \left \{ \begin{array}{l} \displaystyle \sum_{j=1}^n \bar{a}_{ij}^{\mathrm{R}}(\bar{k}^i) x_j \leq \bar{b}_i^{\mathrm{R}}(\theta^i(\bar{k}^i,h^i)), \\ \displaystyle \sum_{j=1}^n a_{ij}^{\mathrm{R}}(\hat{k}^i) x_j \leq b_i^{\mathrm{R}}(\theta^i(\hat{k}^i,h^i)), \end{array}\right. {} \end{array} \end{aligned} $$
(17.17)
Fig. 17.2
figure 2

θ i(⋅, h i) convex in range (0, 1)

where we define \(\bar {k}^i=\inf \{k\in [0,1] \mid \theta ^i(k,h^i)>0 \}\), \(\hat {k}^i=\inf \{k\in [0,1] \mid \theta ^i(k,h^i)=\theta ^i(1,h^i) \}\), \(\bar {a}_{ij}^{\mathrm {R}}(k)=\sup (A_{ij})_k\), and \(\bar {b}_i^{\mathrm {R}}(k)=\sup (B_i)_k\). As A ij and B i have linear membership functions, we obtain

$$\displaystyle \begin{aligned} \bar{a}_{ij}^{\mathrm{R}}(k)&\!=\! \left\{ \begin{array}{ll} a_{ij}^{\mathrm{R}}\!+(1-k)\alpha_{ij}^{\mathrm{R}}, & \mbox{if } k \in [0,1),\\ -\infty, & \mbox{if } k\!=\!1, \end{array} \right. \ i\!=\!1,2,\ldots,m,\ j\!=\!1,2,\ldots,n, {} \end{aligned} $$
(17.18)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \bar{b}_{i}^{\mathrm{R}}(k)&=& \left\{ \begin{array}{ll} b_{i}+(1-k)\beta_{i}^{\mathrm{R}}, & \mbox{if } k \in [0,1),\\ -\infty, & \mbox{if } k=1, \end{array} \right. \ i=1,2,\ldots,m. {} \end{array} \end{aligned} $$
(17.19)

From (17.17), Problem (17.13) can be reduced to the following linear programming problem:

$$\displaystyle \begin{aligned} \begin{array}{rl} \mbox{maximize} & \displaystyle \sum_{j=1}^n c_j^{\mathrm{L}}(\bar{k}^0) x_j\\ \mbox{subject to} & \displaystyle \sum_{j=1}^n \bar{a}_{ij}^{\mathrm{R}}(\bar{k}^i) x_j \leq \bar{b}_i^{\mathrm{R}}(\theta^i(\bar{k}^i,h^i)),\ i=1,2,\ldots,m,\\ & \displaystyle \sum_{j=1}^n a_{ij}^{\mathrm{R}}(\hat{k}^i) x_j \leq b_i^{\mathrm{R}}(\theta^i(\hat{k}^i,h^i)),\ i=1,2,\ldots,m,\\ & \boldsymbol{x} \geq \mathbf{0}. \end{array} {} \end{aligned} $$
(17.20)
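To make \(\bar {k}^i\) and \(\hat {k}^i\) concrete, the following sketch computes them on a grid for a made-up θ^i(⋅, h i) that is convex in (0, 1) (an illustrative shape, not derived from a specific implication function), and then forms the coefficients of the two constraints appearing in (17.20).

```python
import numpy as np

# Made-up theta^i(., h^i), convex in (0, 1): zero up to k = 0.2, then a
# convex rise to h at k = 1.
h = 0.9
def theta(k):
    return h * np.maximum(0.0, (k - 0.2) / 0.8) ** 2

ks = np.linspace(0.0, 1.0, 100001)
k_bar = ks[theta(ks) > 0].min()            # inf{k | theta(k, h) > 0}
k_hat = ks[theta(ks) >= theta(1.0)].min()  # inf{k | theta(k, h) = theta(1, h)}

# Coefficients of the two constraints in (17.20) for made-up data.
a_R, alpha_R = np.array([1.0, 2.0]), np.array([0.2, 0.3])
b1, beta1_R = 10.0, 2.0
abar = a_R + (1.0 - k_bar) * alpha_R        # bar{a}^R(bar{k}),  Eq. (17.18)
bbar = b1 + (1.0 - theta(k_bar)) * beta1_R  # bar{b}^R(theta(bar{k}, h))
ahat = a_R + (1.0 - k_hat) * alpha_R        # a^R(hat{k}),       Eq. (17.15)
bhat = b1 + (1.0 - theta(k_hat)) * beta1_R  # b^R(theta(hat{k}, h))
print(k_bar, k_hat)   # close to 0.2 and 1.0 for this theta
```

Only these two level cuts survive the reduction because, for convex θ^i(⋅, h i), all intermediate constraints are dominated by the chord between them.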

Now, let us consider a case where θ i(⋅, h i) : [0, 1] → [0, 1], i = 0, 1, 2, …, m, are concave in the range (0, 1). In this case, \(b_i^{\mathrm {R}}(\theta ^i(\cdot ,h^i))\) becomes convex. Because \(a_{ij}^{\mathrm {R}}(k)=a_{ij}^{\mathrm {R}}+(1-k)\alpha _{ij}^{\mathrm {R}}\), i.e., linear with respect to k > 0, given x = (x 1, x 2, …, x n)T ≥0, there exists \(k_i^*\) such that

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle { \sum_{j=1}^n a_{ij}^{\mathrm{R}}(k) x_j \leq b_i^{\mathrm{R}}(\theta^i(k,h^i)),\ \forall k\in [0,1],\ \theta^i(k,h^i)>0 } \\ & \Leftrightarrow &\displaystyle \sum_{j=1}^n a_{ij}^{\mathrm{R}}(k_i^*) x_j \leq b_i^{\mathrm{R}}(\theta^i(k_i^*,h^i)). \end{array} \end{aligned} $$
(17.21)

Utilizing the convexity of \(b_i^{\mathrm {R}}(\theta ^i(\cdot ,h^i))\), Problem (17.13) can be solved by the following relaxation procedure [6] together with a bisection method, where \(k_i^*\) is approximately calculated for each candidate solution x :

S0.:

Specify ε by a sufficiently small positive number.

S1.:

Let z i = 0 and \(k_i^{z_i}=0.5\bar {k}^i+0.5\hat {k}^i\), i = 1, 2, …, m.

S2.:

Solve the following linear programming problem:

$$\displaystyle \begin{aligned} \begin{array}{ll} \mbox{maximize} & \displaystyle \sum_{j=1}^n c_j^{\mathrm{L}}(\bar{k}^0) x_j,\\ \mbox{subject to} & \displaystyle \sum_{j=1}^n a_{ij}^{\mathrm{R}}(k_i^l) x_j \leq b_i^{\mathrm{R}}(\theta^i(k_i^l,h^i)),\ {l=0,1,\ldots,z_i,\ i=1,2,\ldots,m,}\\ & \boldsymbol{x} \geq \mathbf{0}. \end{array} {} \end{aligned} $$
(17.22)

Let \(\boldsymbol {x}^*=(x_1^*,x_2^*,\ldots ,x_n^*)^{\mathrm {T}}\) be the obtained optimal solution.

S3.:

For i = 1, 2, …, m, check the existence of \(k_i^* \in (0,1]\) such that \(\sum _{j=1}^n a_{ij}^{\mathrm {R}}(k_i^*) x_j^* > b_i^{\mathrm {R}}(\theta ^i(k_i^*,h^i))\) by a bisection method and update z i = z i + 1 with defining \(k_i^{z_i}=k_i^*\) if \(k_i^*\) exists.

S4.:

If at least one z i is increased, return to S2. Otherwise, x ∗ is an optimal solution to Problem (17.10).
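The relaxation procedure S0-S4 can be sketched for the simplest case n = 1, where each linear program in S2 has the closed-form solution x = min over the sampled k of b(k)/a(k). The concave θ below is a made-up stand-in for θ^i(⋅, h i), and the bisection of S3 is replaced here by a plain grid search for a violated k; both simplifications are assumptions of this sketch.

```python
import numpy as np

a_R, alpha_R = 1.0, 0.5          # a_{11}^R, alpha_{11}^R (made-up data)
b1, beta_R = 10.0, 4.0           # b_1, beta_1^R
h = 0.81

def theta(k):                    # concave in k, with theta(1) = h
    return h * np.sqrt(k)

def a_of(k):  return a_R + (1.0 - k) * alpha_R       # Eq. (17.15)
def b_of(k):  return b1 + (1.0 - theta(k)) * beta_R  # b^R(theta(k, h))

grid = np.linspace(1e-3, 1.0, 1000)   # k's with theta(k) > 0
sampled = [0.5]                       # S1: one initial sampled k
for _ in range(100):                  # S2-S4 loop
    x = min(b_of(k) / a_of(k) for k in sampled)        # S2: "solve the LP"
    viol = grid[a_of(grid) * x > b_of(grid) + 1e-10]   # S3: violated k's?
    if viol.size == 0:
        break                         # S4: no violation -> x is optimal
    k_new = viol[np.argmax(a_of(viol) * x - b_of(viol))]
    sampled.append(k_new)             # S3/S4: add the most violated cut

x_exact = np.min(b_of(grid) / a_of(grid))
print(round(x, 6), round(x_exact, 6))
```

Each iteration adds a violated cut, so the objective value decreases monotonically toward the exact optimum of the sampled semi-infinite problem; here a handful of cuts suffice.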

The bisection method in S3 can be performed as follows (see Fig. 17.3):

B0.:

Let \(\lambda _i=-\sum _{j=1}^n \alpha _{ij}^{\mathrm {R}} x_j^*\), and let δ be a positive small number.

Fig. 17.3
figure 3

Figure for the explanation of the bisection procedure composed of B0 to B5

B1.:

Define \(\tilde {k}_i=k_i^q\) with \(q=\mbox{arg min}_{l=0,1,\ldots ,z_i} \left ( b_i^{\mathrm {R}}(\theta ^i(k_i^l,h^i))- \sum _{j=1}^n a_{ij}^{\mathrm {R}}(k_i^l) x_j^* \right ) \). Set \(\overline {\kappa }_i=\min \left ( \{ k_i^l \mid k_i^l >k_i^q,\ l=0,1,\ldots ,z_i \} \cup \{ \hat {k}^i \} \right )\) and

\( \underline {\kappa }_i=\max \left ( \{ k_i^l \mid k_i^l <k_i^q,\ l=0,1,\ldots ,z_i \} \cup \{ \bar {k}^i \} \right )\).

B2.:

If \(b_i^{\mathrm {R}}(\theta ^i(\tilde {k}_i+\delta ,h^i)) < b_i^{\mathrm {R}}(\theta ^i(\tilde {k}_i,h^i))+ \lambda _i \delta \), update \( \underline {\kappa }_i=\tilde {k}_i\).

B3.:

If \(b_i^{\mathrm {R}}(\theta ^i(\tilde {k}_i-\delta ,h^i)) < b_i^{\mathrm {R}}(\theta ^i(\tilde {k}_i,h^i))- \lambda _i \delta \), update \(\overline {\kappa }_i=\tilde {k}_i\).

B4.:

If \(\overline {\kappa }_i- \underline {\kappa }_i > \varepsilon \) and \(\min (\overline {\kappa }_i-\tilde {k}_i,\tilde {k}_i- \underline {\kappa }_i)=0\), update \(\tilde {k}_i=0.5\overline {\kappa }_i + 0.5 \underline {\kappa }_i\) and return B2.

B5.:

If \(\sum _{j=1}^n a_{ij}^{\mathrm {R}}(\tilde {k}_i) x_j^* > b_i^{\mathrm {R}}(\theta ^i(\tilde {k}_i,h^i))\), terminate the procedure with \(k_i^*=\tilde {k}_i\). Otherwise, there is no \(k_i^* \in (0,1]\) such that \(\sum _{j=1}^n a_{ij}^{\mathrm {R}}(k_i^*) x_j^* > b_i^{\mathrm {R}}(\theta ^i(k_i^*,h^i))\).

We note that Problem (17.10) can be solved by the relaxation procedure with a bisection method described above when the θ i(⋅, h i) associated with each constraint \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\) is convex or concave in the range (0, 1), although we described only the cases when all θ i(⋅, h i) are convex or all are concave.

17.5 Similar Results for Necessity Measures Defined by Modifier-Generating Functions

Now we describe the case when necessity measure N i is defined by modifier-generating functions \(g_i^m\) and \(g_i^M\). In this case, constraints \(N_{\boldsymbol { {C}}^{\mathrm {T}}\boldsymbol { {x}}}^0([t,+\infty )) \geq h^{0}\) and \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\) can be rewritten as

$$\displaystyle \begin{aligned} m_{h^0}^0(\boldsymbol{C}^{\mathrm{T}}\boldsymbol{x}) \subseteq [t,+\infty), \ \ m_{h^i}^i(\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x}) \subseteq M_{h^i}^i(B_i), {} \end{aligned} $$
(17.23)

where \(m_{h^i}^i\) and \(M_{h^i}^i\) are defined by \(g_i^m\), \(g_i^M\), and h i as in (17.4).

Let \(\bar {q}_i^m=\inf \{g_i^m(k,h^i) \mid g_i^m(k,h^i)>0,\ k\in [0,1]\}\) and \(\hat {q}_i^M=\sup \{g_i^M(k,h^i) \mid g_i^M(k,h^i)<1,\ k\in [0,1]\}\). Then we have

$$\displaystyle \begin{aligned} N_{\boldsymbol{{C}}^{\mathrm{T}}\boldsymbol{{x}}}^0([t,+\infty)) \geq h^{0} \Leftrightarrow {[m_{h^0}^0(\boldsymbol{C}^{\mathrm{T}}\boldsymbol{x})]_{\bar{q}_0^m}} \subseteq [t,+\infty). \end{aligned} $$
(17.24)

As in the analysis with θ i(⋅, h i)’s, the second inclusion relation of (17.23) is reduced to semi-infinite linear inequalities. In what follows, we investigate cases where the second inclusion relation of (17.23) is treated in some easier ways.

First, let us consider the case where \(g_i^{m}(\cdot ,h^i)\) is convex in the range (0, 1) and \(g_i^{M}(\cdot ,h^i)\) is concave in the range (0, 1). Then, we have the following equivalence (see Fig. 17.4a):

$$\displaystyle \begin{aligned} N_{\boldsymbol{{A}}_i^{\mathrm{T}}\boldsymbol{{x}}}^i(B_i) \geq h^i \Leftrightarrow \left \{ \begin{array}{@{}l} \left\{ \begin{array}{@{}ll} (m_{h^i}^i(\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x}))_{0} \subseteq (M_{h^i}^i(B_i))_{0}, & \mbox{if }\bar{q}_i^m=0, \\ {[m_{h^i}^i(\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x})]_{\bar{q}_i^m} \subseteq (M_{h^i}^i(B_i))_{\bar{q}_i^m}}, & \mbox{if }\bar{q}_i^m> 0, \\ \end{array} \right .\\ \left\{ \begin{array}{@{}ll} {[m_{h^i}^i(\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x})]_1 \subseteq [M_{h^i}^i(B_i)]_1}, & \mbox{if }\hat{q}_i^M=1, \\ {[m_{h^i}^i(\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x})]_{\hat{q}_i^M} \subseteq (M_{h^i}^i(B_i))_{\hat{q}_i^M}}, & \mbox{if }\hat{q}_i^M<1. \end{array} \right. \end{array} \right. \end{aligned} $$
(17.25)
Fig. 17.4
figure 4

Properties of \(g_i^m\) and \(g_i^M\) and of \(m_{h^i}^i( \boldsymbol {A}_i^{\mathrm {T}} \boldsymbol {x})\) and \(M_{h^i}^i(B_i)\): (a) convex \(g_i^m\) and concave \(g_i^M\) and (b) concave \(g_i^m\) and convex \(g_i^M\)

Namely, as fuzzy sets A ij and B i have linear membership functions, the convexity of \(g_i^m(\cdot ,h^i)\) and the concavity of \(g_i^M(\cdot ,h^i)\) in the range (0, 1) make the membership functions of \(m_{h^i}^i(\boldsymbol {A}_i^{\mathrm {T}}\boldsymbol {x})\) and \(M_{h^i}^i(B_i)\) convex and concave in the range (0, 1), respectively. From these properties of the membership functions of \(m_{h^i}^i(\boldsymbol {A}_i^{\mathrm {T}}\boldsymbol {x})\) and \(M_{h^i}^i(B_i)\), constraint \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\) is equivalent to two inclusion constraints of level sets of \(m_{h^i}^i(\boldsymbol {A}_i^{\mathrm {T}}\boldsymbol {x})\) and \(M_{h^i}^i(B_i)\).

Let us define functions \(k_i^{m}:[0,1]\rightarrow [0,1]\) and \(k_i^{M}:[0,1]\rightarrow [0,1]\) by

$$\displaystyle \begin{aligned} \begin{array}{rcl} k_i^{m}(a) &=& \left \{ \begin{array}{ll} \sup \{ k \in [0,1] \mid g_i^m(k,h^i) \leq a \}, & \mbox{if }a \neq 1, \\ \sup \{ k \in [0,1] \mid g_i^m(k,h^i) < 1 \}, & \mbox{if }a=1, \end{array} \right. \end{array} \end{aligned} $$
(17.26)
$$\displaystyle \begin{aligned} \begin{array}{rcl} k_i^{M}(a) &=& \left \{ \begin{array}{ll} \inf \{ k \in [0,1] \mid g_i^M(k,h^i) \geq a \}, & \mbox{if }a \neq 0, \\ \inf \{ k \in [0,1] \mid g_i^M(k,h^i) > 0 \}, & \mbox{if }a=0. \end{array} \right. \end{array} \end{aligned} $$
(17.27)
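The functions (17.26) and (17.27) are easy to evaluate numerically. The sketch below uses the same illustrative modifier pair assumed earlier, g_m(k, h) = max(0, (k − 1 + h)/h) and g_M(k, h) = min(1, k/h), for which closed forms are available to check against.

```python
import numpy as np

h = 0.8
ks = np.linspace(0.0, 1.0, 100001)

# Illustrative modifier pair (an assumption of this sketch), at fixed h:
def g_m(k): return np.maximum(0.0, (k - 1.0 + h) / h)
def g_M(k): return np.minimum(1.0, k / h)

def k_m(a):   # Eq. (17.26): sup{k | g_m(k) <= a}, strict variant at a = 1
    mask = g_m(ks) <= a if a != 1 else g_m(ks) < 1
    return ks[mask].max()

def k_M(a):   # Eq. (17.27): inf{k | g_M(k) >= a}, strict variant at a = 0
    mask = g_M(ks) >= a if a != 0 else g_M(ks) > 0
    return ks[mask].min()

# For this pair the closed forms are k_m(a) = 1 - h*(1 - a), k_M(a) = h*a.
for a in (0.0, 0.25, 0.5, 1.0):
    assert abs(k_m(a) - (1.0 - h * (1.0 - a))) < 1e-3
    assert abs(k_M(a) - h * a) < 1e-3
print("k_m, k_M match the closed forms")
```

Intuitively, k_i^m(a) converts a level cut of the modified set back into a level cut of the original set A, and k_i^M(a) does the same for B, which is what the equivalences (17.28)-(17.32) exploit.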

Then, as fuzzy sets A ij and B i have linear membership functions, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} {[m_{h^0}^0(\boldsymbol{C}^{\mathrm{T}}\boldsymbol{x})]_{\bar{q}_0^m} \subseteq [t,+\infty)} & \Leftrightarrow &\displaystyle [\boldsymbol{C}^{\mathrm{T}}\boldsymbol{x}]_{k_0^m(\bar{q}_0^m)} \subseteq [t,+\infty) \\ & \Leftrightarrow &\displaystyle \sum_{j=1}^n c_j^{\mathrm{L}}(k_0^m(\bar{q}_0^m)) x_j \geq t, \end{array} \end{aligned} $$
(17.28)
$$\displaystyle \begin{aligned} \begin{array}{rcl} (m_{h^i}^i(\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x}))_{0} \subseteq (M_{h^i}^i(B_i))_{0} & \Leftrightarrow &\displaystyle (\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x})_{k_i^m(0)} \subseteq (B_i)_{k_i^M(0)} \\ & \Leftrightarrow &\displaystyle \sum_{j=1}^n \bar{a}_{ij}^{\mathrm{R}}(k_i^m(0))x_j \leq \bar{b}_i^{\mathrm{R}}(k_i^M(0)), \end{array} \end{aligned} $$
(17.29)
$$\displaystyle \begin{aligned} \begin{array}{rcl} {[m_{h^i}^i(\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x})]_{\bar{q}_i^m} \subseteq \mathrm{cl}(M_{h^i}^i(B_i))_{\bar{q}_i^m}} & \Leftrightarrow &\displaystyle [\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x}]_{k_i^m(\bar{q}_i^m)} \subseteq \mathrm{cl}(B_i)_{k_i^M(\bar{q}_i^m)} \\ & \Leftrightarrow &\displaystyle \sum_{j=1}^n a_{ij}^{\mathrm{R}}(k_i^m(\bar{q}_i^m))x_j \leq \bar{b}_i^{\mathrm{R}}(k_i^M(\bar{q}_i^m)), \qquad \qquad \end{array} \end{aligned} $$
(17.30)
$$\displaystyle \begin{aligned} \begin{array}{rcl} {[m_{h^i}^i(\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x})]_{1} \subseteq [M_{h^i}^i(B_i)]_{1}} & \Leftrightarrow &\displaystyle [\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x}]_{k_i^m(1)} \subseteq [B_i]_{k_i^M(1)} \\ & \Leftrightarrow &\displaystyle \sum_{j=1}^n a_{ij}^{\mathrm{R}}(k_i^m(1))x_j \leq b_i^{\mathrm{R}}(k_i^M(1)), \end{array} \end{aligned} $$
(17.31)
$$\displaystyle \begin{aligned} \begin{array}{rcl} {[m_{h^i}^i(\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x})]_{\hat{q}_i^M} \subseteq \mathrm{cl}(M_{h^i}^i(B_i))_{\hat{q}_i^M}} & \Leftrightarrow &\displaystyle [\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x}]_{k_i^m(\hat{q}_i^M)} \subseteq \mathrm{cl}(B_i)_{k_i^M(\hat{q}_i^M)} \\ & \Leftrightarrow &\displaystyle \sum_{j=1}^n a_{ij}^{\mathrm{R}}(k_i^m(\hat{q}_i^M))x_j \leq \bar{b}_i^{\mathrm{R}}(k_i^M(\hat{q}_i^M)),\qquad \end{array} \end{aligned} $$
(17.32)

where cl D is the closure of a set D ⊆ R.

As a result, applying the necessity fractile optimization model, Problem (17.7) is reduced to the following linear programming problem:

$$\displaystyle \begin{aligned} \begin{array}{rl} \mbox{maximize} & \left \{ \begin{array}{l} \displaystyle \sum_{j=1}^n \bar{c}_j^{\mathrm{L}}(k_0^m(0)) x_j, \ \ \mbox{if } \bar{q}_0^m=0,\\ \displaystyle \sum_{j=1}^n c_j^{\mathrm{L}}(k_0^m(\bar{q}_0^m)) x_j, \ \ \mbox{if } \bar{q}_0^m> 0, \end{array} \right . \\ \mbox{subject to} \\ {} & \left \{ \begin{array}{l} \displaystyle \sum_{j=1}^n \bar{a}_{ij}^{\mathrm{R}}(k_i^m(0)) x_j \leq \bar{b}_i^{\mathrm{R}}(k_i^M(0)), \ \ \mbox{if } \bar{q}_i^m=0,\\ \displaystyle \sum_{j=1}^n {a}_{ij}^{\mathrm{R}}(k_i^m(\bar{q}_i^m)) x_j \leq \bar{b}_i^{\mathrm{R}}(k_i^M(\bar{q}_i^m)), \ \ \mbox{if } \bar{q}_i^m>0, \end{array} \right \} \ i=1,2,\ldots,m,\\ & \left \{ \begin{array}{l} \displaystyle \sum_{j=1}^n a_{ij}^{\mathrm{R}}(k_i^m(1)) x_j \leq b_i^{\mathrm{R}}(k_i^M(1)),\ \ \mbox{if } \hat{q}_i^M=1, \\ \displaystyle \sum_{j=1}^n a_{ij}^{\mathrm{R}}(k_i^m(\hat{q}_i^M)) x_j \leq \bar{b}_i^{\mathrm{R}}(k_i^M(\hat{q}_i^M)),\ \ \mbox{if } \hat{q}_i^M<1, \end{array} \right \} \ i=1,2,\ldots,m,\\ & \boldsymbol{x} \geq \mathbf{0}. \end{array} {} \end{aligned} $$
(17.33)

Now let us consider the case where \(g_i^m(\cdot ,h^i):[0,1] \rightarrow [0,1]\) and \(g_i^M(\cdot ,h^i):[0,1] \rightarrow [0,1]\) are concave and convex in the range (0, 1), respectively. Because A ij and B i have linear membership functions, given x = (x 1, x 2, …, x n)T ≥0, there exists \(q_i^*\) such that (see Fig. 17.4b)

$$\displaystyle \begin{aligned} \begin{array}{rcl} m_{h^i}^i(\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x}) \subseteq M_{h^i}^i(B_i) & \Leftrightarrow &\displaystyle [m_{h^i}^i(\boldsymbol{A}_i^{\mathrm{T}}\boldsymbol{x})]_{q_i^*} \subseteq \mathrm{cl}(M_{h^i}^i(B_i))_{q_i^*} \\ & \Leftrightarrow &\displaystyle \sum_{j=1}^n a_{ij}^{\mathrm{R}}(k_i^m(q_i^*)) x_j \leq \bar{b}_i^{\mathrm{R}}(k_i^M(q_i^*)). \end{array} \end{aligned} $$
(17.34)

Problem (17.10) can be solved by a relaxation procedure together with a bisection method. In this procedure, we explore an approximately optimal solution x by the relaxation procedure while searching for the \(q_i^*\)'s corresponding to the tentative solutions generated in the procedure. Let \(\bar {q}_i^m=\inf \{g_i^m(k,h^i) \mid g_i^m(k,h^i)>0,\ k\in [0,1]\}\) and \(\hat {q}_i^M=\sup \{g_i^M(k,h^i) \mid g_i^M(k,h^i)<1,\ k\in [0,1]\}\). Then the procedure can be written as follows:

T0.:

Let ε be a sufficiently small positive number. Let \(\bar {q}_i=\max (\bar {q}_i^m,\bar {q}_i^M)\) and \(\hat {q}_i=\min (\hat {q}_i^m,\hat {q}_i^M)\).

T1.:

Let \(z_i=0\) and \(q_i^{z_i}=0.5\bar {q}_i+0.5\hat {q}_i\), i = 0, 1, …, m.

T2.:

Solve the following linear programming problem:

$$\displaystyle \begin{aligned} \begin{array}{ll} \mbox{maximize} & \displaystyle \sum_{j=1}^n c_j^{\mathrm{L}}(k_0^m(\bar{q}_0^m)) x_j,\\ \mbox{subject to} & \displaystyle \sum_{j=1}^n a_{ij}^{\mathrm{R}}(k_i^m(q_i^l)) x_j \!\leq\! \bar{b}_i^{\mathrm{R}}(k_i^M(q_i^l)),\ {l\!=\!0,1,\ldots,z_i,\ i\!=\!1,2,\ldots,m,}\\ & \boldsymbol{x} \geq \mathbf{0}. \end{array} {} \end{aligned} $$
(17.35)

Let \(\boldsymbol {x}^*=(x_1^*,x_2^*,\ldots ,x_n^*)^{\mathrm {T}}\) be the obtained optimal solution.

T3.:

For i = 1, 2, …, m, check the existence of \(q_i^* \in (0,1]\) such that

\(\sum _{j=1}^n a_{ij}^{\mathrm {R}}(k_i^m(q_i^*)) x_j^* > \bar {b}_i^{\mathrm {R}}(k_i^M(q_i^*))+\varepsilon \) by a bisection method. If such \(q_i^*\) exists, update \(z_i=z_i+1\) and set \(q_i^{z_i}=q_i^*\).

T4.:

If at least one z i is increased, then return to T2. Otherwise, \(\boldsymbol {x}^*\) is an approximately optimal solution to Problem (17.10).
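Steps T0–T4 can be sketched in a toy single-variable setting, where each linear program in T2 has the closed-form solution x equal to the smallest ratio b(q)/a(q) over the current cuts. The functions a and b below are hypothetical stand-ins for the coefficient functions of (17.35), and a grid search replaces the bisection method of T3; this is an illustrative sketch, not the chapter's procedure verbatim:

```python
# Toy executable sketch of the relaxation procedure T0-T4, assuming a
# single decision variable: maximize x subject to a(q)*x <= b(q) for
# all q in [0, 1] and x >= 0.  Here a(q) and b(q) are hypothetical
# stand-ins for sum_j a_ij^R(k_i^m(q)) x_j and b_i^R(k_i^M(q)).

def a(q):                      # unimodal in q, maximal at q = 0.5
    return 1.0 + q * (1.0 - q)

def b(q):
    return 1.0

def most_violated(x, eps=1e-9, grid=1000):
    """Step T3: search for q with a(q)*x > b(q) + eps (a grid search
    stands in for the bisection method of steps C0-C5)."""
    best_q, best_v = None, eps
    for i in range(grid + 1):
        q = i / grid
        v = a(q) * x - b(q)
        if v > best_v:
            best_q, best_v = q, v
    return best_q

def relaxation():
    cuts = [0.5]                               # T1: initial sample point
    while True:
        x = min(b(q) / a(q) for q in cuts)     # T2: one-variable "LP"
        q_star = most_violated(x)              # T3: violation check
        if q_star is None:                     # T4: no violation -> stop
            return x
        cuts.append(q_star)                    # T4: add cut, repeat T2

x_opt = relaxation()
```

With these stand-ins the binding cut is at q = 0.5, so the procedure terminates with x = 1/1.25 = 0.8.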

The bisection method in T3 can be performed as follows:

C0.:

Define \(\varphi _i(q)=\bar {b}_i^{\mathrm {R}}(k_i^M(q)) - \sum _{j=1}^n a_{ij}^{\mathrm {R}}(k_i^m(q)) x_j^* \), and let \(\check {q}_i=\mbox{arg min}_{l=0,1,\ldots ,z_i} \varphi_i (q_i^l)\). Let \(UB=\min \left ( \{q_i^l \mid q_i^l > \check {q}_i,\ l=0,1,\ldots ,z_i \} \cup \{\hat {q}_i \} \right )\) and \(LB=\max \left ( \{q_i^l \mid q_i^l < \check {q}_i,\ l=0,1,\ldots ,z_i \} \cup \{\bar {q}_i \} \right )\).

C1.:

If \(\hat {q}_i=UB \) and \(\varphi _i(\hat {q}_i) < \varphi _i(\hat {q}_i-\varepsilon )\), then terminate this procedure. In this case, if \(\varphi _i(\hat {q}_i) < 0\), then \(q_i^*=\hat {q}_i\); otherwise, \(q_i^*\) does not exist.

C2.:

If \(\bar {q}_i=LB\) and \(\varphi _i(\bar {q}_i) < \varphi _i(\bar {q}_i+\varepsilon )\), then terminate this procedure. In this case, if \(\varphi _i(\bar {q}_i) <0\), then \(q_i^*=\bar {q}_i\); otherwise, \(q_i^*\) does not exist.

C3.:

If UB − LB ≤ ε, then terminate this procedure. In this case, if \(\varphi _i(\check {q}_i) <0\), then \(q_i^*=\check {q}_i\); otherwise, \(q_i^*\) does not exist.

C4.:

Let \(\tilde {q}_i=0.5\check {q}_i+0.5UB\). If \(\varphi_i (\tilde {q}_i) < \varphi_i (\check {q}_i)\), then set \(LB=\check {q}_i\) and \(\check {q}_i=\tilde {q}_i\) and return to C3. Otherwise, set \(UB=\tilde {q}_i\).

C5.:

Let \(\tilde {q}_i=0.5LB+0.5\check {q}_i\). If \(\varphi_i (\tilde {q}_i) < \varphi_i (\check {q}_i)\), then set \(UB=\check {q}_i\) and \(\check {q}_i=\tilde {q}_i\). Otherwise, set \(LB=\tilde {q}_i\). Return to C3.
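The search above can be sketched as follows, assuming φ i is unimodal (has a single minimum) on [LB, UB]; a ternary-search variant of steps C4–C5 narrows the bracket, and \(q_i^*\) exists exactly when the minimum of φ i is negative. The sample functions are hypothetical stand-ins:

```python
# Sketch of the bisection search C0-C5 for a unimodal phi_i on [lb, ub]:
# narrow the bracket around the minimizer, then q* exists iff the
# minimum value of phi is negative (i.e., the constraint is violated).

def find_q_star(phi, lb, ub, eps=1e-9):
    while ub - lb > eps:                  # C3: bracket small enough?
        m1 = lb + (ub - lb) / 3.0
        m2 = ub - (ub - lb) / 3.0
        if phi(m1) < phi(m2):
            ub = m2                       # minimum lies left of m2
        else:
            lb = m1                       # minimum lies right of m1
    q = 0.5 * (lb + ub)
    return q if phi(q) < 0.0 else None   # q* exists iff phi(q*) < 0

q1 = find_q_star(lambda q: (q - 0.3) ** 2 - 0.05, 0.0, 1.0)  # dips below 0
q2 = find_q_star(lambda q: (q - 0.3) ** 2 + 0.01, 0.0, 1.0)  # always positive
```

Here q1 is found near the minimizer q = 0.3, while q2 is None because the second φ never becomes negative.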

We note that, although in the previous section and this section we considered only cases where all I i have the same property, Problem (17.10) can still be solved by the relaxation procedure with a bisection method described in these sections when the constraints are of mixed types. Namely, each constraint \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\) is either reduced to two linear inequalities (when θ(⋅, h i) is convex in the range (0, 1), or when g m(⋅, h i) and g M(⋅, h i) are convex and concave in the range (0, 1), respectively) or treated by the relaxation procedure with a bisection method (when θ(⋅, h i) is concave in the range (0, 1), or when g m(⋅, h i) and g M(⋅, h i) are concave and convex in the range (0, 1), respectively).

17.6 Results Applied to Various Implication Functions

17.6.1 R-, Reciprocal R-, and S-Implication Functions

In this section, we investigate Problem (17.10) with necessity measures defined by R-, reciprocal R-, and S-implication functions. These implication functions are constructed from a t-norm t : [0, 1] × [0, 1] → [0, 1] satisfying (t1) t(a, 1) = a, ∀a ∈ [0, 1], (t2) t(a, b) = t(b, a), ∀a, b ∈ [0, 1], (t3) t(a, t(b, c)) = t(t(a, b), c), ∀a, b, c ∈ [0, 1], and (t4) t(a, b) ≤ t(c, d), ∀a, b, c, d ∈ [0, 1] such that a ≤ c and b ≤ d, together with a strong negation n : [0, 1] → [0, 1] satisfying (n1) n(0) = 1 and (n2) n(n(a)) = a, ∀a ∈ [0, 1]. Namely, under a given t-norm t and strong negation n, R-implication function I R[t], reciprocal R-implication function I r−R[t, n], and S-implication function I S[t, n] are defined by

$$\displaystyle \begin{aligned} \begin{array}{rcl} I^{\mathrm{R}}[t](a,b) & = &\displaystyle \sup \{ s \in [0,1] \mid t(a,s) \leq b \}, {} \end{array} \end{aligned} $$
(17.36)
$$\displaystyle \begin{aligned} \begin{array}{rcl} I^{\mathrm{r-R}}[t,n](a,b) & = &\displaystyle \sup \{ s \in [0,1] \mid t(n(b),s) \leq n(a) \}, {} \end{array} \end{aligned} $$
(17.37)
$$\displaystyle \begin{aligned} \begin{array}{rcl} I^{\mathrm{S}}[t,n](a,b) & = &\displaystyle n(t(a,n(b))). {} \end{array} \end{aligned} $$
(17.38)
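For concreteness, (17.36)–(17.38) can be instantiated with the product t-norm and the standard negation n(a) = 1 − a (illustrative choices, not required by the text); the suprema then reduce to well-known closed forms, the R-implication being the Goguen implication and the S-implication the Reichenbach implication:

```python
# Closed forms of (17.36)-(17.38) for t(a,b) = a*b and n(a) = 1 - a.

def t(a, b):
    return a * b

def n(a):
    return 1.0 - a

def I_R(a, b):
    """(17.36): sup{s | t(a,s) <= b} = min(1, b/a) (Goguen)."""
    return 1.0 if a <= b else b / a

def I_rR(a, b):
    """(17.37): sup{s | t(n(b),s) <= n(a)} = min(1, n(a)/n(b))."""
    return 1.0 if n(b) <= n(a) else n(a) / n(b)

def I_S(a, b):
    """(17.38): n(t(a, n(b))) = 1 - a + a*b (Reichenbach)."""
    return n(t(a, n(b)))
```

For instance, I_R(0.8, 0.4) = 0.5, I_rR(0.8, 0.4) = 1/3, and I_S(0.8, 0.4) = 0.52.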

A t-norm t is said to be Archimedean if it satisfies t(a, a) < a, ∀a ∈ (0, 1). It is known that any continuous Archimedean t-norm t can be generated from a strictly decreasing and continuous function f : [0, 1] → [0, +∞) ∪{+∞} with f(1) = 0 as

$$\displaystyle \begin{aligned} t(a,b)=f^*(f(a)+f(b)), {} \end{aligned} $$
(17.39)

where \(f^* : [0, +\infty) \cup \{+\infty\} \rightarrow [0, 1]\) is a pseudo-inverse of f defined by

$$\displaystyle \begin{aligned} f^*(r)=\sup\{ h \in [0,1] \mid f(h) \geq r \}=\left\{ \begin{array}{cl} f^{-1}(r), & \mbox{if } r<f(0), \\ 0, & \mbox{if } r \geq f(0). \end{array} \right. \end{aligned} $$
(17.40)

Such a function f is called an additive generator of t-norm t.
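As a sketch of (17.39) and (17.40), the standard generator f(h) = −ln h (with f(0) = +∞) reproduces the product t-norm:

```python
import math

# Check of (17.39)-(17.40): the additive generator f(h) = -ln h
# (with f(0) = +inf) generates the product t-norm via
# t(a,b) = f*(f(a) + f(b)).

def f(h):
    return math.inf if h == 0.0 else -math.log(h)

def f_star(r):
    """Pseudo-inverse (17.40); here f(0) = +inf, so f_star(r) = exp(-r)
    for finite r and 0 at r = +inf."""
    return 0.0 if r == math.inf else math.exp(-r)

def t(a, b):
    return f_star(f(a) + f(b))
```

Indeed t(0.5, 0.4) evaluates to 0.2 and t(0, b) = 0 for any b, matching the product t-norm.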

17.6.2 Results in R-Implication Functions

When implication function I is an R-implication function I R[t], for any k, h ∈ [0, 1], we have

$$\displaystyle \begin{aligned} \theta(k,h)=t(k,h). \end{aligned} $$
(17.41)

Therefore, if necessity measure N i of Problem (17.10) is defined by an R-implication function I i with t-norm t i, i.e., I i = I R[t i] such that t i(⋅, h i) is convex in range (0, 1), the constraint \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\) can be reduced to a system of two linear inequalities shown in (17.17).

Moreover, if a necessity measure N i is defined by an R-implication function I i with t-norm t i such that t i(⋅, h i) is concave in range (0, 1), the relaxation procedure together with a bisection method described in Sect. 17.4 is applicable for \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\).

For many famous t-norms such as minimum operation, arithmetic product, bounded product, and so on, t(⋅, h i) becomes convex in range (0, 1). However, for Schweizer–Sklar t-norms [14] \(t_{\eta }^{\mathrm {SS}}\) with parameter η > 1, Hamacher t-norms [14] \(t_{\eta }^{\mathrm {H}}\) with parameter η < 1, and so on, t(⋅, h i) becomes concave in range (0, 1), where \(t_{\eta }^{\mathrm {SS}}\) and \(t_{\eta }^{\mathrm {H}}\) are defined by

$$\displaystyle \begin{aligned} \begin{array}{rcl} t_{\eta}^{\mathrm{SS}}(a,b)&=&\left\{ \begin{array}{cl} \max (0,a^{\eta}+b^{\eta}-1)^{\frac{1}{\eta}}, & \mbox{if }\eta \in (-\infty,0) \cup (0,+\infty),\\ \min(a,b), & \mbox{if }\eta=-\infty,\\ ab, & \mbox{if }\eta=0,\\ \left\{ \begin{array}{cl} \min(a,b), & \mbox{if }\max(a,b)=1,\\ 0, & \mbox{otherwise,} \end{array} \right . & \mbox{if }\eta=+\infty, \end{array} \right. \end{array} \end{aligned} $$
(17.42)
$$\displaystyle \begin{aligned} \begin{array}{rcl} t_{\eta}^{\mathrm{H}}(a,b)&=&\left\{ \begin{array}{cl} \left\{ \begin{array}{cl} \min(a,b), & \mbox{if }\max(a,b)=1,\\ 0, & \mbox{otherwise,} \end{array} \right . & \mbox{if }\eta=+\infty, \\ 0, & \mbox{if } \eta=a=b=0,\\ \displaystyle \frac{ab}{\eta+(1-\eta)(a+b-ab)}, & \mbox{otherwise.} \end{array} \right. \end{array} \end{aligned} $$
(17.43)
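These claims can be spot-checked numerically (a sketch with arbitrarily chosen parameters and test points, not a proof): for a concave section t(⋅, h), the value at a midpoint exceeds the average of the endpoint values, so a positive midpoint gap is expected:

```python
# Midpoint check of concavity of t(., h) for Schweizer-Sklar with
# eta = 2 (> 1) and Hamacher with eta = 0.2 (< 1); h = 0.5 and the
# test points are illustrative choices.

def t_ss(a, b, eta):                      # Schweizer-Sklar, finite eta != 0
    return max(0.0, a ** eta + b ** eta - 1.0) ** (1.0 / eta)

def t_h(a, b, eta):                       # Hamacher, finite eta
    if a == b == 0.0 and eta == 0.0:
        return 0.0
    return a * b / (eta + (1.0 - eta) * (a + b - a * b))

def midpoint_gap(tnorm, h, k1, k2):
    """t((k1+k2)/2, h) - (t(k1,h) + t(k2,h))/2; a positive gap is
    consistent with concavity of t(., h) between k1 and k2."""
    return tnorm(0.5 * (k1 + k2), h) - 0.5 * (tnorm(k1, h) + tnorm(k2, h))

gap_ss = midpoint_gap(lambda a, b: t_ss(a, b, 2.0), 0.5, 0.9, 1.0)
gap_h = midpoint_gap(lambda a, b: t_h(a, b, 0.2), 0.5, 0.2, 0.8)
```

Both gaps come out positive, consistent with concavity of t(⋅, h) in the stated parameter ranges.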

17.6.3 Results in Reciprocal R-Implication Functions

When implication function I is a reciprocal R-implication function I r−R[t, n], we have \(\theta (k,h)=\inf \{s \in [0,1] \mid t(n(s),h)\leq n(k) \}\). Then, when a reciprocal R-implication function I r−R[t, n] is defined by n(h) = 1 − h and a t-norm t such that t(⋅, h i) is convex in the range (0, 1), it can be shown that θ(⋅, h i) is also convex in the range (0, 1).

Therefore, if necessity measure N i of Problem (17.10) is defined by reciprocal R-implication function I i = I r−R[t, n] with n(h) = 1 − h and a t-norm t such that t(⋅, h i) is convex in the range (0, 1), the constraint \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\) can be reduced to a system of two linear inequalities shown in (17.17).

17.6.4 Results in S-Implication Functions

When implication function I i is an S-implication I S[t, n], we have

$$\displaystyle \begin{aligned} \theta(k,h^i)=n \left( \sup\{ s \in [0,1] \mid t(k,s) \leq n(h^i) \} \right). {} \end{aligned} $$
(17.44)

Let us define a set BS(h) ⊆ [0, 1] × [0, 1] and a function ψ BS(h) : [0, 1] → [0, 1] by

$$\displaystyle \begin{aligned} \begin{array}{rcl} BS(h) & = &\displaystyle \{(k,s) \in [0,1]\times [0,1] \mid t(k,s) \leq h \}, \end{array} \end{aligned} $$
(17.45)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \psi^{BS(h)}(k) & \!=\!&\displaystyle \sup \{s \in [0,1] \mid t(k,s) \!\leq\! h\} \!=\!\sup\{s \mid (k,s)\in BS(h) \}.\qquad \end{array} \end{aligned} $$
(17.46)

It is easily shown that \(\psi^{BS(h)}\) is concave if BS(h) is a convex set. Then we obtain that if t-norm t is quasi-convex and n is convex, θ(⋅, h i) becomes convex. Therefore, if necessity measure N i of Problem (17.10) is defined by S-implication I i = I S[t, n] with a convex strong negation n and a quasi-convex t-norm t, the constraint \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\) can be reduced to a system of two linear inequalities shown in (17.17).

When t-norm t is an Archimedean t-norm having the additive generator f as in (17.39), S-implication I S[t, n] can be defined by (17.6) with the following modifier-generating functions (see Inuiguchi and Tanino [9] and Inuiguchi et al. [13]):

$$\displaystyle \begin{aligned} \begin{array}{rcl} g^m(a,h)& =&\displaystyle \max \left ( 0, 1- \frac{f(a)}{f(n(h))} \right), \\ g^M(a,h)& =&\displaystyle \min \left (1, \frac{f(n(a))}{f(n(h))} \right). {} \end{array} \end{aligned} $$
(17.47)
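As an illustration, instantiating (17.47) with the product-t-norm generator f(h) = −ln h and n(h) = 1 − h (the Reichenbach case, an assumed choice for this sketch) gives explicit modifier-generating functions; a midpoint check then exhibits the concavity of g m(⋅, h) in the range (0, 1), consistent with this implication requiring the relaxation procedure rather than two linear inequalities:

```python
import math

# (17.47) with f(h) = -ln h and n(h) = 1 - h, so f(n(h)) = -ln(1 - h):
#   g_m(a, h) = max(0, 1 - ln(a)/ln(1 - h)),  defined for a in (0, 1],
#   g_M(a, h) = min(1, ln(1 - a)/ln(1 - h)),  defined for a in [0, 1).

def g_m(a, h):
    return max(0.0, 1.0 - math.log(a) / math.log(1.0 - h))

def g_M(a, h):
    return min(1.0, math.log(1.0 - a) / math.log(1.0 - h))

# Midpoint check at h = 0.5: a positive gap is consistent with
# concavity of g_m(., h) on (0, 1).
gap = g_m(0.75, 0.5) - 0.5 * (g_m(0.6, 0.5) + g_m(0.9, 0.5))
```

At h = 0.5 these reduce to g_m(a) = max(0, 1 + log2 a) and g_M(a) = min(1, log2(1/(1−a))), and the midpoint gap is positive.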

If I i is S-implication function I S[t, n] with respect to a continuous Archimedean t-norm t such that the additive generator f is concave and n is convex, then, from (17.47) and the strict decreasingness of f, g m(⋅, h) and g M(⋅, h) are convex and concave in the range (0, 1), respectively. In this case, the constraint \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\) can be reduced to a system of two linear inequalities.

Moreover, if a necessity measure N i is defined by an S-implication I i = I S[t, n] with respect to a continuous Archimedean t-norm such that the additive generator f is convex and n is concave, from (17.47) and the strict decreasingness of f, g m(⋅, h) and g M(⋅, h) are concave and convex in the range (0, 1), respectively. Therefore, the relaxation procedure with a bisection method described in the previous section is applicable for the constraint \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\).

17.6.5 Obtained Results Applied to Famous Implication Functions

The obtained results are arranged in Table 17.2, in which the convexity and concavity of θ(⋅, h i), g m(⋅, h i), g M(⋅, h i), and t(⋅, h i) are restricted to the range (0, 1), and “RPBM-applicable” means that the relaxation procedure together with a bisection method is applicable. As Table 17.2 shows, convexity and/or concavity of certain functions related to the implication functions is required for solving Problem (17.10) in a simpler way. These conditions may seem strong.

Table 17.2 Implication functions and the conditions for reducing \(N_{ \boldsymbol { {A}}_i^{\mathrm {T}} \boldsymbol { {x}}}^i(B_i) \geq h^i\)
Table 17.3 Reduction of \(N_{ \boldsymbol { {A}}_i^{\mathrm {T}} \boldsymbol { {x}}}^i(B_i) \geq h^i\) with famous implication functions

However, if we apply the obtained results to famous and useful implication functions, we obtain Table 17.3. The implication functions can be found in [9, 10, 13]. In Table 17.3, column “reduc.” shows the reduced constraints: “Linear” stands for linear inequalities, while “Relx.” stands for the relaxation procedure with a bisection method. As shown in Table 17.3, the constraint \(N_{\boldsymbol { {A}}_i^{\mathrm {T}}\boldsymbol { {x}}}^i(B_i) \geq h^i\) is reduced to two linear inequality conditions for many of the famous implication functions I i, the Reichenbach implication function being the exception. When necessity measure N i is defined by the Reichenbach implication function, Problem (17.10) can be solved by the relaxation procedure with a bisection method described in the previous sections, provided the other necessity fractile constraints are reduced to linear inequalities or treated by the relaxation procedure with a bisection method.

Therefore, for many famous implication functions, necessity measures can be treated without great loss of linearity when fuzzy numbers C j, A ij and fuzzy constraints B i have linear membership functions.

17.7 Concluding Remarks

We have investigated the necessity fractile optimization models of possibilistic linear programming problems with trapezoidal fuzzy numbers. We considered the models with general necessity measures defined by various implication functions. In general, the model is reduced to a semi-infinite linear programming problem that can be solved approximately by a linear programming technique after selecting finitely many constraints from the semi-infinite constraint set.

We showed that the model can be reduced to a usual linear programming problem or solved by a relaxation procedure with a bisection method when functions related to the implication function have suitable convexity and/or concavity. Utilizing the obtained results, we demonstrated that the model can be reduced to a usual linear programming problem when many famous implication functions are used for defining necessity measures. To see the significance of the selection of the implication function, we observed how the equivalent conditions for the necessity fractile constraints differ across implication functions.

Studies on necessity measure optimization models would be one of the future topics derived from the results of this chapter. Moreover, the specification of necessity measures suitable for the decision-maker’s requirements would be another important topic.