Abstract
In this article, a Davidon-Fletcher-Powell type quasi-Newton method is proposed to capture nondominated solutions of fuzzy optimization problems. The functions that we attempt to optimize are multi-variable fuzzy-number-valued functions; the decision variables are considered to be crisp. To develop the quasi-Newton method, the notion of generalized Hukuhara difference between fuzzy numbers, and hence generalized Hukuhara differentiability of multi-variable fuzzy-number-valued functions, is used. To generate the iterative points, the proposed technique produces a sequence of positive definite inverse Hessian approximations. A convergence result and an algorithm for the developed method are also included; the iterate sequence of the proposed method is shown to have a superlinear convergence rate. To illustrate the developed technique, a numerical example is exhibited.
1 Introduction
To deal with the imprecise nature of the objective and constraint functions in a decision-making problem, fuzzy optimization problems have been widely studied since the seminal work of Bellman and Zadeh [6] in 1970. The research article by Cadenas and Verdegay [7], the monograph by Słowínski [29], and the references therein form a rich stream of work on this topic. Very recently, in a survey article, Luhandjula [26] reported milestones and perspectives of the theory and applications of fuzzy optimization. The books by Lai and Hwang [20, 21] and by Lodwick and Kacprzyk [22] provide a perceptive overview of the development of fuzzy optimization problems. Recently, Ghosh and Chakraborty published a fuzzy geometrical view [9, 14, 15] of fuzzy optimization problems [16,17,18,19].
To solve an unconstrained fuzzy optimization problem, Pirzada and Pathak [23] and Chalco-Cano et al. [8] recently developed Newton methods. In the closely related setting of interval optimization problems, Ghosh [12] derived a Newton method [11] and a quasi-Newton method [13].
Real-life optimization models often require optimizing a fuzzy function over crisp decision variables. Mathematically, the problem is the following:
where \(\widetilde{f}\) is a fuzzy-number-valued function. There is a plethora of studies and numerical algorithms [1, 28] that effectively tackle unconstrained conventional optimization problems. However, how to apply those methods to solve fuzzy optimization problems is not apparent. In this article, we develop a Davidon-Fletcher-Powell type quasi-Newton method for unconstrained fuzzy optimization problems.
To capture the optimum candidates [4, 28] of a smooth optimization problem, it is natural to use the notion of differentiability. However, to find the optimal points of a fuzzy optimization problem, in addition to differentiability of the fuzzy function, the identification of an appropriate ordering of fuzzy numbers is of utmost importance, because, unlike the set of real numbers, the set of all fuzzy numbers is not linearly ordered [16].
Towards a notion of differentiability for fuzzy functions, Dubois and Prade [10] used the extension principle, and exhibited that a fuzzy extension of the conventional definition of differentiation requires further enquiry. Recently, the idea of Hukuhara differentiability (H-differentiability) [3, 24] has received substantial attention in fuzzy optimization theory; H-differentiation of fuzzy functions is rigorously discussed in [2]. Stefanini [30] proposed a generalization of the Hukuhara difference with applications to interval and fuzzy arithmetic, and Bede and Stefanini [5] introduced generalized Hukuhara differentiability (gH-differentiability) of fuzzy-valued functions. Chalco-Cano et al. [8] reported that the gH-derivative is the most general concept of differentiability for fuzzy functions. Thus, in this paper, we employ the gH-derivative and its calculus [8, 33] to derive the quasi-Newton method for fuzzy optimization problems.
There is an extensive literature on the ordering of fuzzy numbers, including the research articles [25, 27, 31]; the references in [32] survey the main streams of this topic. In this paper, we use the fuzzy-max ordering of fuzzy numbers of Ramík and Rimanek [27]. There are two reasons behind this choice. First, it is a partial order on the space of fuzzy numbers [23]. Second, it has an insightful association [33] with the optimality notion in fuzzy optimization.
The rest of the article is organized as follows. In Sect. 2, the notations and terminologies used throughout the paper are given. In Sect. 3, we derive the quasi-Newton method. Convergence analysis of the proposed method is presented in Sect. 4. Section 5 includes an illustrative numerical example. Finally, we give brief conclusions and scope for future research in Sect. 6.
2 Preliminaries
We use upper- and lower-case letters with a tilde (\(\widetilde{A}, \widetilde{B}, \widetilde{C}, \dots \) and \(\widetilde{a}, \widetilde{b}, \widetilde{c}, \dots \)) to denote fuzzy subsets of \(\mathbb {R}\). The membership function of a fuzzy set \(\widetilde{A}\) of \(\mathbb {R}\) is represented by \(\mu (x|\widetilde{A})\), for x in \(\mathbb {R}\), with \(\mu (\mathbb {R}) \subseteq [0, 1]\).
2.1 Fuzzy Numbers
Definition 1
(\(\alpha \)-cut of a fuzzy set [16]). The \(\alpha \)-cut of a fuzzy set \(\widetilde{A}\) of \(\mathbb {R}\) is denoted by \(\widetilde{A}(\alpha )\) and is defined by:
Definition 2
(Fuzzy number [16]). A fuzzy set \(\widetilde{N}\) of \(\mathbb {R}\) is called a fuzzy number if its membership function \(\mu \) has the following properties:
(i) \(\mu (x|\widetilde{N})\) is upper semi-continuous,
(ii) \(\mu (x|\widetilde{N}) = 0\) outside some interval [a, d], and
(iii) there exist real numbers b and c satisfying \(a\le b\le c\le d\) such that \(\mu (x|\widetilde{N})\) is increasing on [a, b], decreasing on [c, d], and \(\mu (x|\widetilde{N}) = 1\) for each x in [b, c].
In particular, if \(b = c\) and the parts of the membership function \(\mu (x|\widetilde{N})\) on [a, b] and [c, d] are linear, the fuzzy number is called a triangular fuzzy number, denoted by (a/c/d). We denote the set of all fuzzy numbers on \(\mathbb {R}\) by \(\mathcal {F}(\mathbb {R})\).
Since \(\mu (x|\widetilde{a})\) is upper semi-continuous for a fuzzy number \(\widetilde{a}\), the \(\alpha \)-cut \(\widetilde{a}(\alpha )\) is a closed and bounded interval of \(\mathbb {R}\) for every \(\alpha \) in [0, 1]. We write
Let \(\oplus \) and \(\odot \) denote the extended addition and multiplication. According to the well-known extension principle, the membership function of \(\widetilde{a} \otimes \widetilde{b}\) (\(\otimes = \oplus \) or \(\odot \)) is defined by
For any \(\widetilde{a}\) and \(\widetilde{b}\) in \(\mathcal {F}(\mathbb {R})\), the \(\alpha \)-cut (for any \(\alpha \) in [0, 1]) of their addition and scalar multiplication can be obtained by:
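Explicitly, using standard fuzzy interval arithmetic (a reconstruction; see, e.g., [30]), for \(\lambda \in \mathbb {R}\):
\[
\left(\widetilde{a} \oplus \widetilde{b}\right)(\alpha ) = \left[\widetilde{a}^L_\alpha + \widetilde{b}^L_\alpha ,\; \widetilde{a}^U_\alpha + \widetilde{b}^U_\alpha \right], \qquad
\left(\lambda \odot \widetilde{a}\right)(\alpha ) =
\begin{cases}
\left[\lambda \widetilde{a}^L_\alpha ,\, \lambda \widetilde{a}^U_\alpha \right] & \text{if } \lambda \ge 0,\\[2pt]
\left[\lambda \widetilde{a}^U_\alpha ,\, \lambda \widetilde{a}^L_\alpha \right] & \text{if } \lambda < 0.
\end{cases}
\]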
Definition 3
(Generalized Hukuhara difference [30]). Let \(\widetilde{a}\) and \(\widetilde{b}\) be two fuzzy numbers. If there exists a fuzzy number \(\widetilde{c}\) such that either \(\widetilde{c} \oplus \widetilde{b} = \widetilde{a}\) or \(\widetilde{b} = \widetilde{a} \oplus (-1) \odot \widetilde{c}\), then \(\widetilde{c}\) is said to be the generalized Hukuhara difference (gH-difference) of \(\widetilde{a}\) and \(\widetilde{b}\), denoted by \(\widetilde{a} \ominus _{gH} \widetilde{b}\).
In terms of \(\alpha \)-cut, for all \(\alpha \in [0, 1]\), we have
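The standard expression (see [30]) reads:
\[
\left(\widetilde{a} \ominus _{gH} \widetilde{b}\right)(\alpha ) = \left[\min \left\{\widetilde{a}^L_\alpha - \widetilde{b}^L_\alpha ,\, \widetilde{a}^U_\alpha - \widetilde{b}^U_\alpha \right\},\; \max \left\{\widetilde{a}^L_\alpha - \widetilde{b}^L_\alpha ,\, \widetilde{a}^U_\alpha - \widetilde{b}^U_\alpha \right\}\right].
\]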
2.2 Fuzzy Functions
Let \(\widetilde{f}: \mathbb {R}^n \rightarrow \mathcal {F}(\mathbb {R})\) be a fuzzy function. For each x in \(\mathbb {R}^n\), we denote the \(\alpha \)-cuts of \(\widetilde{f}(x)\) by
The functions \(\widetilde{f}_\alpha ^L\) and \(\widetilde{f}_\alpha ^U\) are, evidently, two real-valued functions on \(\mathbb {R}^n\) and are called the lower and upper functions, respectively.
With the help of gH-difference between two fuzzy numbers, gH-differentiability of a fuzzy function is defined as follows.
Definition 4
(gH-differentiability of fuzzy functions [8]). Let \(\widetilde{f}: \mathbb {R}^n \rightarrow \mathcal {F}(\mathbb {R})\) be a fuzzy function and \(x_0 = \left( x_1^0, x_2^0, \cdots , x_n^0\right) \) be an element of \(\mathbb {R}^n\). For each \(i = 1, 2, \cdots , n\), we define a fuzzy function \(\widetilde{h}_i: \mathbb {R} \rightarrow \mathcal {F}(\mathbb {R})\) as follows
We say \(\widetilde{h}_i\) is gH-differentiable if the following limit exists
If \(\widetilde{h}_i\) is gH-differentiable, then we say that \(\widetilde{f}\) has the i-th partial gH-derivative at \(x_0\), denoted by \(\frac{\partial \widetilde{f}}{\partial x_i}(x_0)\).
The function \(\widetilde{f}\) is said to be gH-differentiable at \(x_0 \in \mathbb {R}^n\) if all the partial gH-derivatives \(\frac{\partial \widetilde{f}}{\partial x_1}(x_0)\), \(\frac{\partial \widetilde{f}}{\partial x_2}(x_0)\), \(\cdots \), \(\frac{\partial \widetilde{f}}{\partial x_n}(x_0)\) exist on some neighborhood of \(x_0\) and are continuous at \(x_0\).
Proposition 1
(See [8]). If a fuzzy function \(\widetilde{f}: \mathbb {R}^n \rightarrow \mathcal {F}(\mathbb {R})\) is gH-differentiable at \(x_0 \in \mathbb {R}^n\), then for each \(\alpha \in [0, 1]\), the real-valued function \(\widetilde{f}^L_\alpha + \widetilde{f}^U_\alpha \) is differentiable at \(x_0\). Moreover,
Definition 5
(gH-gradient [8]). The gH-gradient of a fuzzy function \(\widetilde{f}: \mathbb {R}^n \rightarrow \mathcal {F}(\mathbb {R})\) at a point \(x_0 \in \mathbb {R}^n\) is defined by
We denote this gH-gradient by \(\nabla \widetilde{f} (x_0)\).
We say a fuzzy function \(\widetilde{f}\) is m-times continuously gH-differentiable if all of its partial gH-derivatives up to order m exist and are continuous. We then have the following immediate result.
Proposition 2
(See [8]). Let \(\widetilde{f}\) be a fuzzy function. Let at \(x_0 \in \mathbb {R}^n\), the function \(\widetilde{f} \) be m-times gH-differentiable. Then, the real-valued function \(\widetilde{f}^L_\alpha + \widetilde{f}^U_\alpha \) is m-times differentiable at \(x_0\).
Definition 6
(gH-Hessian [8]). Let the fuzzy function \(\widetilde{f}\) be twice gH-differentiable at \(x_0\). Then, for each i, the function \(\frac{\partial \widetilde{f}}{\partial x_i}\) is gH-differentiable at \(x_0\), and the second-order partial gH-derivatives \(\frac{\partial ^2 \widetilde{f}}{\partial x_i \partial x_j}\) can be calculated. The gH-Hessian of \(\widetilde{f}\) at \(x_0\) is the square matrix
2.3 Optimality Concept
Definition 7
(Dominance relation between fuzzy numbers [23]). Let \(\widetilde{a}\) and \(\widetilde{b}\) be two fuzzy numbers. For any \(\alpha \in [0, 1]\), let \(\widetilde{a}_\alpha = \left[ \widetilde{a}^L_\alpha , \widetilde{a}^U_\alpha \right] \) and \(\widetilde{b}_\alpha = \left[ \widetilde{b}^L_\alpha , \widetilde{b}^U_\alpha \right] \). We say \(\widetilde{a}\) dominates \(\widetilde{b}\) if \(\widetilde{a}^L_\alpha \le \widetilde{b}^L_\alpha \) and \(\widetilde{a}^U_\alpha \le \widetilde{b}^U_\alpha \) for all \(\alpha \in [0, 1]\). If \(\widetilde{a}\) dominates \(\widetilde{b}\), then we write \(\widetilde{a} \preceq \widetilde{b}\). The fuzzy number \(\widetilde{b}\) is said to be strictly dominated by \(\widetilde{a}\), if \(\widetilde{a} \preceq \widetilde{b}\) and there exists \(\beta \in [0, 1]\) such that \(\widetilde{a}^L_\beta < \widetilde{b}^L_\beta \) or \(\widetilde{a}^U_\beta < \widetilde{b}^U_\beta \). If \(\widetilde{a}\) strictly dominates \(\widetilde{b}\), then we write \(\widetilde{a} \prec \widetilde{b}\).
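Definition 7 compares \(\alpha \)-cuts at every level, so a direct numerical test discretizes \(\alpha \in [0, 1]\). The following sketch is an illustration only; the grid size, the tolerance, and the helper `tri` for triangular fuzzy numbers are assumptions, not part of the paper.

```python
import numpy as np

def dominates(a_cuts, b_cuts, strict=False, tol=1e-12):
    """Check whether fuzzy number a dominates b in the sense of Definition 7.

    a_cuts, b_cuts: arrays of shape (m, 2); row i holds the interval
    [lower, upper] of the alpha-cut at the i-th grid level.
    """
    aL, aU = a_cuts[:, 0], a_cuts[:, 1]
    bL, bU = b_cuts[:, 0], b_cuts[:, 1]
    dom = np.all(aL <= bL + tol) and np.all(aU <= bU + tol)
    if not strict:
        return dom
    # strict dominance: dominance plus a strict inequality at some level
    return dom and (np.any(aL < bL - tol) or np.any(aU < bU - tol))

# Triangular fuzzy number with endpoints p, r and peak q:
# alpha-cut is [p + (q - p) alpha, r - (r - q) alpha].
alphas = np.linspace(0.0, 1.0, 11)
tri = lambda p, q, r: np.column_stack((p + (q - p) * alphas,
                                       r - (r - q) * alphas))
print(dominates(tri(0, 1, 2), tri(1, 2, 3), strict=True))  # True
```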
Definition 8
(Non-dominated solution [23]). Let \(\widetilde{f}: \mathbb {R}^n \rightarrow \mathcal {F}(\mathbb {R})\) be a fuzzy function, and suppose we intend to find a solution of ‘\(\displaystyle \min _{x \in \mathbb {R}^n} \widetilde{f}(x)\)’. A point \(\bar{x} \in \mathbb {R}^n\) is said to be a locally non-dominated solution if there exists \(\epsilon > 0\) such that no \(x \in N_{\epsilon }(\bar{x})\) satisfies \(\widetilde{f}(x) \prec \widetilde{f}(\bar{x})\), where \(N_{\epsilon }(\bar{x})\) denotes the \(\epsilon \)-neighborhood of \(\bar{x}\). A locally non-dominated solution is called a local solution of ‘\(\displaystyle \min _{x \in \mathbb {R}^n} \widetilde{f}(x)\)’.
For locally non-dominated solutions, the following result is proved in [8].
Proposition 3
(See [8]). Let \(\widetilde{f}: \mathbb {R}^n \rightarrow \mathcal {F}(\mathbb {R})\) be a fuzzy function. If \(x^*\) is a local minimizer of the real-valued function \(\widetilde{f}^L_\alpha + \widetilde{f}^U_\alpha \) for all \(\alpha \in [0, 1]\), then \(x^*\) is a locally non-dominated solution of ‘\(\displaystyle \min _{x \in \mathbb {R}^n} \widetilde{f}(x)\)’.
3 Quasi-Newton Method
In this section, we consider solving the following unconstrained Fuzzy Optimization Problem (FOP):
where \(\widetilde{f}: \mathbb {R}^n \rightarrow \mathcal {F}(\mathbb {R})\) is a multi-variable fuzzy-number-valued function. To find nondominated solutions of this problem, the existing Newton method (see [8, 23]) requires computation of the inverse of the concerned Hessian. However, computing the inverse of a Hessian is computationally expensive. Thus, in this article, we develop a quasi-Newton method to sidestep the computational cost of the existing Newton method. Towards this end, for the considered FOP, we assume that at each of the generated iterate points \(x_k\), the function \(\widetilde{f}\), its gH-gradient, and its gH-Hessian are well-defined. Therefore, according to Propositions 1 and 2, we can calculate \(\widetilde{f}^L_\alpha (x_k)\), \(\widetilde{f}^U_\alpha (x_k)\), \(\nabla \widetilde{f}^L_\alpha (x_k)\), \(\nabla \widetilde{f}^U_\alpha (x_k)\), \(\nabla ^2 \widetilde{f}^L_\alpha (x_k)\) and \(\nabla ^2 \widetilde{f}^U_\alpha (x_k)\) for all \(\alpha \in [0, 1]\) and all \(k = 0, 1, 2, \cdots \). Hence, we can form quadratic approximations of the lower and upper functions \(\widetilde{f}^L_\alpha \) and \(\widetilde{f}^U_\alpha \) at each \(x_k\).
Let the quadratic approximation models of the functions \(\widetilde{f}^L_\alpha \) and \(\widetilde{f}^U_\alpha \) at \(x_{k+1}\) be
and
which satisfy the interpolating conditions
The derivatives of \(h_\alpha ^L\) and \(h_\alpha ^U\) yield
Next, the Newton method (see [8, 23]) attempts to find x in terms of \(x_{k+1}\) as follows
where \(\phi (x) = \int _0^1 \left( \widetilde{f}_\alpha ^L(x) + \widetilde{f}_\alpha ^U(x)\right) d\alpha \). However, due to inherent computational difficulty to find \([\nabla ^2 \phi (x_{k+1})]^{-1}\), it is often suggested to consider an appropriate approximation. Let \(A_{k+1}\) be an approximation of \([\nabla ^2 \phi (x_{k+1})]^{-1}\). Then from (1) setting \(x = x_k\), \(\delta _k = x_{k+1} - x_k\), \(\beta _{\alpha k}^L = \nabla \widetilde{f}_\alpha ^L(x_{k+1}) - \nabla \widetilde{f}_\alpha ^L(x_{k})\) and \(\beta _{\alpha k}^U = \nabla \widetilde{f}_\alpha ^U(x_{k+1}) - \nabla \widetilde{f}_\alpha ^U(x_{k})\), we obtain
According to Proposition 3, to obtain nondominated solutions of (FOP), we need to have solutions of the equation \(\nabla (\widetilde{f}_\alpha ^L + \widetilde{f}_\alpha ^U)(x) = 0\). In order to capture solutions of this equation, much similar to the Newton method [23], the above procedure clearly suggests to consider the following generating sequence
If \(A_k\) is an appropriate approximation of the inverse Hessian matrix \(\left[ \nabla ^2 \phi (x_{k+1})\right] ^{-1}\) and \(\nabla (\widetilde{f}_\alpha ^L + \widetilde{f}_\alpha ^U)(x_{k+1}) \thickapprox 0\), then the equation \(x_{k+1} = x_k + A_{k+1} \varPhi _k\) reduces to
which is the generating equation of the Newton method [23], and hence the iterates converge to a minimizer of \(\widetilde{f}_\alpha ^L + \widetilde{f}_\alpha ^U\).
As observed above, the key point of the method is to generate the \(A_{k}\)’s appropriately. Due to the inherent computational difficulty of finding the inverse of the Hessian \(\nabla ^2 \phi (x_{k+1})\), we consider an approximation that should satisfy
In this article, we introduce a simple rank-two update of the sequence \(\{A_k\}\) that satisfies the above quasi-Newton Eq. (2).
Let \(A_k\) be the approximation of the k-th iteration. We attempt to update \(A_k\) into \(A_{k+1}\) by adding two symmetric matrices, each of rank one as follows:
where \(v_k\) and \(w_k\) are two vectors in \(\mathbb {R}^n\), and \(p_k\) and \(q_k\) are two scalars, all to be determined from the quasi-Newton Eq. (2). Therefore, we now have
Evidently, \(v_k\) and \(w_k\) are not uniquely determined, but obvious choices are \(v_k = \delta _k\) and \(w_k = A_k \varPhi _k\). Putting these values in (3), we obtain
Therefore,
We use this equation to generate the sequence of inverse Hessian approximations mentioned above.
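For reference, with the choices \(v_k = \delta _k\) and \(w_k = A_k \varPhi _k\), the update coincides with the classical DFP formula [28]:
\[
A_{k+1} = A_k + \frac{\delta _k \delta _k^T}{\delta _k^T \varPhi _k} - \frac{A_k \varPhi _k \varPhi _k^T A_k}{\varPhi _k^T A_k \varPhi _k},
\]
which indeed satisfies the quasi-Newton equation \(A_{k+1} \varPhi _k = \delta _k\).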
Accumulating all of the above, we follow the sequential procedure in Algorithm 1 to obtain a nondominated solution of the considered (FOP) \(\min _{x \in \mathbb {R}^n} \widetilde{f}(x)\).
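The iteration described above can be sketched as follows. This is an illustrative implementation, not the paper's Algorithm 1 verbatim: the function assumes \(\phi (x) = \int _0^1 (\widetilde{f}_\alpha ^L + \widetilde{f}_\alpha ^U)(x)\, d\alpha \) and its gradient are supplied (in practice, approximated by numerical quadrature over an \(\alpha \)-grid), and it uses a backtracking line search in place of an exact one; the tolerance and iteration cap are assumptions.

```python
import numpy as np

def dfp_fuzzy(phi, grad_phi, x0, tol=1e-8, max_iter=200):
    """DFP-type quasi-Newton iteration (a sketch of Algorithm 1).

    phi(x): integral of (f^L_alpha + f^U_alpha)(x) over alpha in [0, 1];
    grad_phi(x): its gradient.
    """
    x = np.asarray(x0, dtype=float)
    A = np.eye(len(x))                       # A_0: initial inverse-Hessian guess
    g = grad_phi(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:          # stationarity test: grad phi ~ 0
            break
        d = -A @ g                           # descent direction d_k = -A_k grad phi(x_k)
        t, fx = 1.0, phi(x)
        while phi(x + t * d) > fx + 1e-4 * t * (g @ d):  # backtracking line search
            t *= 0.5                         # (the paper uses exact line search)
        delta = t * d                        # delta_k = x_{k+1} - x_k
        x_new = x + delta
        g_new = grad_phi(x_new)
        Phi = g_new - g                      # Phi_k: change in grad phi
        APhi = A @ Phi
        A = (A + np.outer(delta, delta) / (delta @ Phi)
               - np.outer(APhi, APhi) / (Phi @ APhi))    # rank-two DFP update
        x, g = x_new, g_new
    return x

# Hypothetical fuzzy quadratic with f^L_alpha = (1 + alpha) q and
# f^U_alpha = (3 - alpha) q, so that phi(x) = 4 q(x), minimized at (-1, 3/2):
q = lambda x: (x[0] + 1.0) ** 2 + (x[1] - 1.5) ** 2
phi = lambda x: 4.0 * q(x)
grad_phi = lambda x: 4.0 * np.array([2.0 * (x[0] + 1.0), 2.0 * (x[1] - 1.5)])
x_bar = dfp_fuzzy(phi, grad_phi, x0=(0.0, 0.0))
print(np.round(x_bar, 6))   # close to [-1, 1.5]
```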
4 Convergence Analysis
In this section, the convergence analysis of the proposed quasi-Newton method is performed. In the following theorem, it is found that the proposed method has a superlinear convergence rate.
Theorem 1
(Superlinear convergence rate). Let \(\widetilde{f}: \mathbb {R}^n \rightarrow \mathcal {F}(\mathbb {R})\) be thrice continuously gH-differentiable on \(\mathbb {R}^n\) and \(\bar{x}\) be a point such that
(i) \(\bar{x}\) is a local minimizer of \(\widetilde{f}^L_\alpha \) and \(\widetilde{f}^U_\alpha \),
(ii) \(\int _0^1 \nabla ^2 \widetilde{f}_\alpha ^L(x) d\alpha \) and \(\int _0^1 \nabla ^2 \widetilde{f}_\alpha ^U(x) d\alpha \) are symmetric positive definite, and
(iii) \(\int _0^1 \nabla ^2 \widetilde{f}_\alpha ^L(x) d\alpha \) and \(\int _0^1 \nabla ^2 \widetilde{f}_\alpha ^U(x) d\alpha \) are Lipschitzian with constants \(\gamma ^L\) and \(\gamma ^U\), respectively.
Then the iteration sequence \(\{x_k\}\) in Algorithm 1 converges to \(\bar{x}\) superlinearly if and only if
where \(\phi (x) = \int _0^1\left( \widetilde{f}_\alpha ^L(x) + \widetilde{f}_\alpha ^U(x)\right) d\alpha \).
Proof
According to the hypothesis (i), at \(\bar{x}\) we have
From hypothesis (ii), the Hessian matrix
is positive definite and a symmetric matrix. With the help of hypothesis (iii), the function
is found to be Lipschitzian at \(\bar{x}\) with constant \(\gamma ^L + \gamma ^U\). Mathematically, there exists a neighborhood \(N_\epsilon (\bar{x})\) where
Towards proving the result, the following equivalence will be proved:
With the help of quasi-Newton Eq. (1), we have
Therefore,
It is evident that
Since \(\nabla ^2 \phi (\bar{x})\) is positive definite, there exist \(\xi > 0\) and \(m \in \mathbb {N}\) such that
Hence, we now have
where \(c_k = \frac{\Vert x_{k+1} - \bar{x}\Vert }{\Vert x_k - \bar{x}\Vert }\). This inequality gives
This completes the proof of superlinear convergence of the sequence \(\{x_k\}\) in Algorithm 1.
Conversely, since \(\{x_k\}\) converges superlinearly to \(\bar{x}\) and \(\nabla \phi (\bar{x}) = 0\), there exist \(\beta > 0\) and \(p \in \mathbb {N}\) such that
Again due to superlinear convergence of \(\{x_k\}\), we have
Since \(\lim _{k \rightarrow \infty } \frac{\Vert x_{k+1} - x_k\Vert }{\Vert x_k - \bar{x}\Vert } = 1\), this inequality implies \(\lim _{k \rightarrow \infty } \frac{\Vert \varPhi _k\Vert }{\Vert x_{k+1} - x_k\Vert } = 0\). Hence, the result follows.
5 Illustrative Example
In this section, an illustrative example is presented to explore the computational procedure of Algorithm 1.
Example 1
Consider the following quadratic fuzzy optimization problem:
Let us consider the initial approximation to the minimizer as \(x_0 = \left( x_1^0, x_2^0\right) = (0, 0)\). With the help of fuzzy arithmetic, the lower and the upper functions can be obtained as
and
Therefore,
Here
Considering the initial matrix \(A_0 = I_2\), we calculate the sequence \(\{x_k\}\), \(x_k = \left( x_1^k, x_2^k\right) \), through the following equations:
The initial iteration (\(k = 0\))
The second iteration (\(k = 1\))
Hence \(\bar{x} = (-1, \tfrac{3}{2})\) is a nondominated solution of the considered problem. It is important to note that the method converges at the second iteration because the objective function is a quadratic fuzzy function.
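The example's objective is not reproduced above, so to illustrate the two-iteration (quadratic-termination) behavior we run the DFP update with exact line search on a hypothetical crisp quadratic surrogate \(\phi \) whose minimizer matches \(\bar{x} = (-1, \tfrac{3}{2})\); the surrogate and its coefficients are assumptions for illustration only.

```python
import numpy as np

# Hypothetical quadratic surrogate phi(x) = 2 (x1 + 1)^2 + (x2 - 3/2)^2,
# written as 0.5 x^T Q x + b^T x + const:
Q = np.diag([4.0, 2.0])
b = np.array([4.0, -3.0])
grad = lambda x: Q @ x + b

x = np.array([0.0, 0.0])      # x_0 = (0, 0)
A = np.eye(2)                 # A_0 = I_2
for k in range(2):
    g = grad(x)
    d = -A @ g                                  # quasi-Newton direction
    t = -(g @ d) / (d @ Q @ d)                  # exact line search (quadratic phi)
    delta = t * d
    g_new = grad(x + delta)
    Phi = g_new - g                             # gradient difference
    APhi = A @ Phi
    A = (A + np.outer(delta, delta) / (delta @ Phi)
           - np.outer(APhi, APhi) / (Phi @ APhi))   # DFP rank-two update
    x = x + delta

print(np.round(x, 6))   # reaches (-1, 3/2) at the second iteration
```

On an n-dimensional strictly convex quadratic, DFP with exact line search terminates in at most n iterations; here n = 2, matching the example.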
6 Conclusion
In this paper, a quasi-Newton method with a rank-two modification has been derived to find non-dominated solutions of unconstrained fuzzy optimization problems. For the optimality concept, the fuzzy-max ordering of pairs of fuzzy numbers has been used, and gH-differentiability of fuzzy functions has been employed to find non-dominated solution points. An algorithmic implementation and the convergence analysis of the proposed technique have also been presented; the technique is found to have a superlinear convergence rate. A numerical example has been explored to illustrate the proposed technique.
Unlike the Newton method for fuzzy optimization problems [8, 23], the proposed method is derived without using the inverse of the concerned Hessian matrix. Instead, a sequence of positive definite inverse Hessian approximations \(\{A_k\}\) is used which satisfies the quasi-Newton equation \( A_{k+1} \varPhi _k = \delta _k\). The derived method thus sidesteps the inherent computational difficulty of inverting the Hessian, which makes it more efficient than the existing Newton method [8, 23].
The method is closely analogous to the classical DFP method [28] for conventional optimization problems. In an analogous way, the presented technique can be extended to a BFGS-like method [23] for fuzzy optimization. Instead of the rank-two modification of the inverse Hessian approximation, a rank-one modification can also be made. Note also that Algorithm 1 uses an exact line search along the descent direction \(d_k = - A_k \nabla \phi (x_k)\); an inexact line search technique [28] could also have been used. Future research on rank-one modifications and inexact line search techniques for fuzzy optimization problems can be performed and implemented in Newton and quasi-Newton methods.
References
Andrei, N.: Hybrid conjugate gradient algorithm for unconstrained optimization. J. Optim. Theor. Appl. 141, 249–264 (2009)
Anastassiou, G.A.: Fuzzy Mathematics: Approximation Theory, vol. 251. Springer, Heidelberg (2010)
Banks, H.T., Jacobs, M.Q.: A differential calculus for multifunctions. J. Math. Anal. Appl. 29, 246–272 (1970)
Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms, 3rd edn. Wiley-Interscience, New York (2006)
Bede, B., Stefanini, L.: Generalized differentiability of fuzzy-valued functions. Fuzzy Sets Syst. 230, 119–141 (2013)
Bellman, R.E., Zadeh, L.A.: Decision making in a fuzzy environment. Manag. Sci. 17, B141–B164 (1970)
Cadenas, J.M., Verdegay, J.L.: Towards a new strategy for solving fuzzy optimization problems. Fuzzy Optim. Decis. Mak. 8, 231–244 (2009)
Chalco-Cano, Y., Silva, G.N., Rufián-Lizana, A.: On the Newton method for solving fuzzy optimization problems. Fuzzy Sets Syst. 272, 60–69 (2015)
Chakraborty, D., Ghosh, D.: Analytical fuzzy plane geometry II. Fuzzy Sets Syst. 243, 84–109 (2014)
Dubois, D., Prade, H.: Towards fuzzy differential calculus part 3: differentiation. Fuzzy Sets Syst. 8(3), 225–233 (1982)
Ghosh, D.: Newton method to obtain efficient solutions of the optimization problems with interval-valued objective functions. J. Appl. Math. Comput. 53, 709–731 (2016)
Ghosh, D.: A Newton method for capturing efficient solutions of interval optimization problems. Opsearch 53, 648–665 (2016)
Ghosh, D.: A quasi-Newton method with rank-two update to solve interval optimization problems. Int. J. Appl. Comput. Math. (2016). doi:10.1007/s40819-016-0202-7
Ghosh, D., Chakraborty, D.: Analytical fuzzy plane geometry I. Fuzzy Sets Syst. 209, 66–83 (2012)
Ghosh, D., Chakraborty, D.: Analytical fuzzy plane geometry III. Fuzzy Sets Syst. 283, 83–107 (2016)
Ghosh, D., Chakraborty, D.: A method to capture the entire fuzzy non-dominated set of a fuzzy multi-criteria optimization problem. Fuzzy Sets Syst. 272, 1–29 (2015)
Ghosh, D., Chakraborty, D.: A new method to obtain fuzzy Pareto set of fuzzy multi-criteria optimization problems. Int. J. Intel. Fuzzy Syst. 26, 1223–1234 (2014)
Ghosh, D., Chakraborty, D.: Quadratic interpolation technique to minimize univariable fuzzy functions. Int. J. Appl. Comput. Math. (2016). doi:10.1007/s40819-015-0123-x
Ghosh, D., Chakraborty, D.: A method to obtain complete fuzzy non-dominated set of fuzzy multi-criteria optimization problems with fuzzy parameters. In: Proceedings of IEEE International Conference on Fuzzy Systems, FUZZ IEEE, IEEE Xplore, pp. 1–8 (2013)
Lai, Y.-J., Hwang, C.-L.: Fuzzy Mathematical Programming: Methods and Applications. Lecture Notes in Economics and Mathematical Systems, vol. 394. Springer, New York (1992)
Lai, Y.-J., Hwang, C.-L.: Fuzzy Multiple Objective Decision Making: Methods and Applications. Lecture Notes in Economics and Mathematical Systems, vol. 404. Springer, New York (1994)
Lodwick, W.A., Kacprzyk, J.: Fuzzy Optimization: Recent Advances and Applications, vol. 254. Physica-Verlag, New York (2010)
Pirzada, U.M., Pathak, V.D.: Newton method for solving the multi-variable fuzzy optimization problem. J. Optim. Theor. Appl. 156, 867–881 (2013)
Puri, M.L., Ralescu, D.A.: Differentials of fuzzy functions. J. Math. Anal. Appl. 91(2), 552–558 (1983)
Lee-Kwang, H., Lee, J.-H.: A method for ranking fuzzy numbers and its application to decision-making. IEEE Trans. Fuzzy Syst. 7(6), 677–685 (1999)
Luhandjula, M.K.: Fuzzy optimization: milestones and perspectives. Fuzzy Sets Syst. 274, 4–11 (2015)
Ramík, J., Rimanek, J.: Inequality relation between fuzzy numbers and its use in fuzzy optimization. Fuzzy Sets Syst. 16(2), 123–138 (1985)
Nocedal, J., Wright, S.J.: Numerical Optimization, 2nd edn. Springer, New York (2006)
Słowínski, R.: Fuzzy Sets in Decision Analysis, Operations Research and Statistics. Kluwer, Boston (1998)
Stefanini, L.: A generalization of Hukuhara difference and division for interval and fuzzy arithmetic. Fuzzy Sets Syst. 161, 1564–1584 (2010)
Wang, X., Kerre, E.E.: Reasonable properties for the ordering of fuzzy quantities. Fuzzy Sets Syst. 118, 375–385 (2001)
Wang, Z.-X., Liu, Y.-L., Fan, Z.-P., Feng, B.: Ranking L-R fuzzy number based on deviation degree. Inform. Sci. 179, 2070–2077 (2009)
Wu, H.-C.: The optimality conditions for optimization problems with fuzzy-valued objective functions. Optimization 57, 473–489 (2008)
Acknowledgement
The author is truly thankful to the anonymous reviewers and editors for their valuable comments and suggestions. The financial support through Early Career Research Award (ECR/2015/000467), Science and Engineering Research Board, Government of India is also gratefully acknowledged.
Copyright information
© 2017 Springer Nature Singapore Pte Ltd.
Cite this paper
Ghosh, D. (2017). A Davidon-Fletcher-Powell Type Quasi-Newton Method to Solve Fuzzy Optimization Problems. In: Giri, D., Mohapatra, R., Begehr, H., Obaidat, M. (eds) Mathematics and Computing. ICMC 2017. Communications in Computer and Information Science, vol 655. Springer, Singapore. https://doi.org/10.1007/978-981-10-4642-1_20
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-4641-4
Online ISBN: 978-981-10-4642-1