Abstract
During multi-objective optimization, numerous efficient solutions may be generated, forming the Pareto frontier. Owing to the complexity of formulating and solving mathematical problems, choosing the best point to implement becomes a non-trivial task. This paper therefore introduces a weighting strategy named robust optimal point selection, based on the diversification/error ratio, to choose the most preferred Pareto optimal point in multi-objective optimization problems using response surface methodology. The paper also explores a theoretical gap: the behavior of the prediction variance in relation to the weighting. The ratios Shannon’s entropy/error and diversity/error and the unscaled prediction variance are modeled experimentally using mixture design, and the optimal weights for the multi-objective optimization process are defined by maximizing the proposed measures. The study demonstrates that the weights used in the multi-objective optimization process influence the prediction variance. Furthermore, diversification measures, such as entropy and diversity, combined with error measures, such as the mean absolute percentage error, proved useful in mapping regions of minimum variance within the Pareto optimal responses obtained in the optimization process.
1 Introduction
According to Cua et al. [1], quality management principles emphasize the importance of cross-functional product development and systematic management process, as well as the involvement of customers, suppliers, and employees to ensure the quality of products and processes.
Kano and Nakagawa [2] argued that, to improve product quality, a system having at least the following functions is necessary: (1) predicting product quality by operating conditions; (2) detecting faults and malfunctions for preventing undesirable operation; and (3) determining the best operating conditions to improve product quality. The first function is performed through the development of a software program, which is a mathematical model that relates the operating conditions to product quality. The second function is performed via multivariate statistical process control. The third function is performed by formulating and solving optimization problems.
In most industrial processes, the relationships between the responses and the decision variables are unknown. To obtain this information, it is necessary to design and execute experiments and to collect and analyze the resulting data. In a designed experiment, purposive changes are made in the controllable process variables, and the resulting output data are observed to infer which variables are responsible for the observed changes.
According to Montgomery [3], when the objective is to optimize a given problem, the response surface methodology (RSM) should be chosen to define the experimental design. As one of the objectives of RSM is to optimize the responses, it is recommended, whenever possible, to represent them through second-order models, since their curvature defines the location of an optimal point.
Despite being considered an adequate approximation for the responses of interest, the values generated by the estimated model will always present an error in relation to the real values. The magnitude of these errors is measured using the prediction variance of the model. Thus, the quality of the forecast of a response depends on the prediction variance. Small prediction variance values are desirable for reliable predictions [4].
From an analysis of manufacturing processes, it can be concluded that the optimization of various possibly controlled parameters, such as quality, cost, and productivity, leads to multi-objective mathematical models. In industrial processes where the joint optimization of multiple characteristics is desired, the problem can be defined by the following mathematical formulation:

$$\begin{aligned} & {\text{Min }}F({\mathbf{x}}) = \left[ {f_{1} ({\mathbf{x}}),f_{2} ({\mathbf{x}}), \ldots ,f_{k} ({\mathbf{x}})} \right] \\ & {\text{s.t.:  }}h_{i} ({\mathbf{x}}) = 0,\quad i = 1,2, \ldots ,l \\ & \quad \quad g_{j} ({\mathbf{x}}) \le 0,\quad j = 1,2, \ldots ,m \\ \end{aligned}\quad (1)$$

where f1(x), f2(x), …, fk(x) are the objective functions to be optimized; hi(x) represents the l equality constraints; and gj(x) represents the m inequality constraints.
In multi-objective problems, it is very unlikely that all the functions are minimized simultaneously by one optimal solution x*. Indeed, these goals are a function of the same decision variable set and are conflicting [5]. The Pareto optimal solution concept, also called the compromise solution, has become considerably relevant to these problems. A feasible solution x* is Pareto optimal if no other feasible solution z exists such that \(f_{i} \left( z \right) \le f_{i} (x^{*} ),\quad i = 1,2, \ldots ,m\), with \(f_{j} \left( z \right) < f_{j} \left( {x^{*} } \right)\) in at least one objective j.
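For illustration only (this code is not part of the original study), the dominance test in this definition can be sketched in Python for a minimization problem; the function names are illustrative:

```python
import numpy as np

def is_dominated(fz, fx):
    """True if objective vector fz dominates fx (minimization):
    fz <= fx in every objective and fz < fx in at least one."""
    fz, fx = np.asarray(fz), np.asarray(fx)
    return bool(np.all(fz <= fx) and np.any(fz < fx))

def pareto_filter(points):
    """Return the Pareto optimal subset of a set of objective vectors."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        if not any(is_dominated(q, p) for j, q in enumerate(pts) if j != i):
            keep.append(i)
    return pts[keep]
```

Applying `pareto_filter` to a cloud of candidate solutions discards every point for which some other feasible point is at least as good in all objectives and strictly better in one.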
The purpose of multi-objective optimization processes (MOPs) is to offer support and ways to find the best compromise solution, in which the decision maker and his/her preference information play an important role, as they are typically responsible for the final solution of the problem. Because it is difficult to know the degree of importance to assign to each objective [6], the weights for each function often end up being defined subjectively, influenced by the analyst’s preferences.
However, Zeleny [7], when proposing his weighting method based on entropy for linear multi-objective optimization, discussed some points against this practice, among which the following are cited: (1) the human capacity to reach an overall assessment by weighting and combining different attributes is not very good, and such a weight allocation process is unstable, suboptimal, and often arbitrary; (2) the total number of all possible and identifiable criteria and attributes can be very large, and it is not plausible to expect any human being to assign weights to hundreds of attributes with any reliability; (3) weight changes reflect the fact that weights depend on the particular problem, i.e., any particular weighting structure should be the result of the analysis rather than its input. Indeed, eliciting direct preference information from the analyst can be counterproductive in real-world decision-making because of the high cognitive effort required [8].
The question of weighting has been discussed since the publication of Zeleny’s works in the 1970s. Since then, many works on the subject have been published, but without an apparent consensus. In general, the literature on the subject is divided into four categories: equally distributed weighting, random weighting, subjective weighting methods, and objective weighting methods. Subjective weighting is supported by methods based on personal or collective judgments, usually produced by direct assignment [9], ANP [10], AHP [11, 12], and/or fuzzy methods [13, 14]. Objective weighting methods set priorities according to quantitative values. The main representatives of this category are methods based on entropic parameters [15,16,17].
In the researched literature, how the weighting in MOPs affects the prediction variance, for RSM experimental designs, has never been studied; hence, there is scope for exploring theoretical contributions on this topic. Thus, the main objective of this study is to develop a method to identify the optimal weights in MOPs, based on the weighting diversification obtained through the maximization of entropy and diversity functions, and to study how the weighting affects the prediction variance in multi-objective optimization using RSM. This paper proposes that the use of entropic metrics in choosing optimal weights in MOPs can reduce the prediction variance. Hence, the present proposal is called robust optimal point selection (ROPS). The metrics proposed in ROPS constitute a useful tool in the multiple-criteria decision-making process, because they lead to robust responses without the need to include a variance term in the mathematical formulation of the problem, making it simpler.
2 Theoretical fundamentals
2.1 Weighting methods applied to multi-objective optimization
As previously mentioned in Sect. 1, during the MOP, numerous efficient solutions may be generated to form the Pareto frontier. Due to the complexity of formulating and solving mathematical problems, choosing the best point to be implemented becomes a non-trivial task.
By assigning different weights to the objective functions representing the process characteristics to be optimized, we account for the relative importance of each parameter within the analyzed process. That is, weights are assigned to functions to indicate their relative importance during the optimization process, thus establishing priorities [18].
According to Taboada et al. [19], Gaudreault et al. [20], and Pilavachi et al. [21], the priority given to the criteria is essential to achieve results and should be applied with caution, as the final result can vary significantly depending on the importance assigned to each of its objectives. This may lead to a problem because of the uncertainty of decision makers about the exact weight of objective functions and utility functions [19].
The Pareto set includes all the rational choices, among which the decision maker must identify the solution by comparing their various objectives [19]. Several techniques have been presented to search the solution space for a set of Pareto optimal solutions. However, the major drawback of such methods is that the decision maker must still choose among numerous solutions. Thus, according to Taboada et al. [19], it is necessary to bridge the gap between single solutions and optimal Pareto sets.
The lack of consensus to stipulate an acceptable weighting method makes the process even more difficult. This is due to the large number of methods that can be applied and the considerable differences among them [18].
The question of weighting has been discussed since the publication of Zeleny’s works [7, 22]. Melachrinoudis [23] determined an optimum location for an undesirable facility in a workroom environment. The author defined the problem as the selection of a location within the convex region that maximizes the minimum weighted Euclidean distance with respect to all existing facilities, where the degree of undesirability between an existing facility and the new undesirable entity is reflected through a weighting factor.
Saaty [24] presented a multi-criteria decision-making approach, named the analytic hierarchy process (AHP), in which selected factors are arranged in a hierarchic structure descending from an overall goal to criteria, subcriteria, and alternatives in successive levels. Despite its popularity, this method has been criticized by decision analysts. Some authors have pointed out that Saaty’s procedure does not optimize any performance criterion [25]. However, according to Promentilla et al. [26], the analytic network process (ANP), which is a generalized form of AHP, is an attractive tool for understanding the complex decision problem better, as this approach overcomes the limitation of the linear hierarchical structure of the AHP.
Figueira et al. [8] presented a method for ranking a finite set of actions evaluated on a finite set of criteria. The generalized regression with intensities of preference (GRIP) is based on indirect preference information and the ordinal regression paradigm. It can be compared to the AHP, as the decision maker is requested to express the intensity of preference in qualitative-ordinal terms in both approaches. However, in contrast to AHP, in GRIP, the marginal value functions are just a numerical representation of the original qualitative-ordinal information. The pairwise comparison principle has also been used in more recent models as a set of dominance decision rules induced from rough approximations of comprehensive preference relations [6].
Taboada et al. [19] proposed a different approach. In their work, the authors presented two alternatives for reducing the Pareto optimal set to be used in the decision-making stage. The first is an ordering of the objective functions without, however, assigning numerical values to them, and the second is the use of cluster analysis among the Pareto optimal points. According to the authors, reducing the Pareto optimal set makes the decision-making process easier.
Over time, other methods for deriving priority weights have been proposed, such as methods using simulated annealing [27, 28], the geometric mean procedure [29, 30], methods based on constrained optimization models [31], trial and error methods [32], methods using fuzzy logic [27, 29, 30, 33, 34], and methods using grey decision [35,36,37].
Recently, Monghasemi et al. [38], dealing with the multi-objective optimization of time–cost-quality trade-off problems in construction projects, have used Shannon’s entropy [39] to define the weights involved in the optimization process. According to the authors, Shannon’s entropy can provide a more reliable assessment of the relative weights for the objectives in the absence of the decision maker’s preferences.
Rocha et al. [40] and Rocha et al. [41] used Shannon’s [39] entropy index associated with an error measure to determine the most preferred Pareto optimal point in a vertical turning MOP.
Wang et al. [42], when reviewing the methods of multi-criteria decision-making, classified the weighting methods into two main groups: subjective and objective weighting methods. Subjective weighting is supported by methods based on personal or collective judgments, usually produced by expert panels, the Delphi method, or paired comparison, the latter in its original form or incorporated into either the AHP or ANP [18]. In contrast, objective weighting methods set priorities according to quantitative values obtained mainly by applying statistical models or procedures that implicitly calculate the criteria weights. The main representative of this category is the entropy method presented by Zeleny [7, 22]. Ibáñez-Forés et al. [18] presented two other categories: equally distributed weighting and random weighting. The latter involves analyzing the results under all possible combinations of weights that can be assigned to each criterion in the study, typically using a simulation technique. The theoretical review performed using these categories is summarized in Table 1.
Based on Table 1, one can perceive the extent of the subject. Even after more than 40 years of research, it remains relevant. Diverse applications can be found: energy sector, sustainability, chemical industry, machining processes, teachers’ evaluation, etc. Notably, despite these efforts, there is no intention to exhaust the theme, mainly due to the different applicability of the weighting. Many works were included, because they explicitly used some of the aforementioned methods, despite not presenting a discussion on the weighting. Nevertheless, several other works could also be included in this literature review.
Among the papers presented, only Shahraki and Noorossana [93] proposed to evaluate any variability parameter when selecting the best Pareto optimal solution. The authors used two criteria to make this selection: the sensitivity to reliability levels and the process capability index.
This work aims to study how the weighting of functions in multi-objective optimization affects the prediction variance.
2.2 Entropy
In 1865, when the German physicist Rudolf Clausius attempted to give a new name to irreversible heat loss, the word “entropy” was introduced. Since then, entropy has played an important role in thermodynamics. This concept also helps measure the amount of order and disorder [99]. The word entropy had belonged to the domain of physics until 1948 when Claude Shannon, while developing his theory of communication [39], used the term to represent a measure of information [100].
Entropy can be defined as a measure of probabilistic uncertainty. Its use is indicated in situations where the probability distributions are unknown and diversification is sought. Among the several desirable properties of Shannon’s entropy index, the following are highlighted: Shannon’s measure is nonnegative, and it is concave. The first is desirable, because the entropy index ensures non-null solutions. The latter is desirable, because it is much easier to maximize a concave function than a non-concave one [100]. Higher entropy values indicate more randomness; less information is expressed.
Shannon’s entropy index is one of several diversity indices used to measure diversity in categorical data. It is simply the information entropy of the distribution, treating species as symbols and their relative population sizes as the probability [101]. The information can simply be defined as the values of the objectives. The underlying assumption is that an event that has a lower probability of occurrence is more likely to provide more information by its occurrence [92].
The maximum entropy principle determines the least informative probability distribution for a random variable x given any prior information about x. If the mean and variance of x are available, the continuous probability distribution that maximizes the differential Shannon entropy is the normal distribution. According to Zhou et al. [99], when dealing with continuous probability distributions, the density function is evaluated for all values of the argument. Thus, given a continuous probability distribution with a density function f(x), its entropy can be defined as

$$h(x) = - \int_{ - \infty }^{ + \infty } {f(x)\ln f(x){\text{d}}x} \quad (2)$$

where \(\int_{ - \infty }^{ + \infty } {f(x){\text{d}}x} = 1\) and \(f(x) \ge 0\).
As the weights used in the weighting of functions in multi-objective optimization are proportions, f(x) follows a discrete probability distribution. Thus, Eq. (2) becomes

$$S = - \sum\limits_{i = 1}^{m} {w_{i} \ln w_{i} } \quad (3)$$

where wi are the weights assigned to the objectives to be optimized.
The index shown in Eq. (3) is also known as the Shannon–Weiner entropy index [102].
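As a quick numerical sketch (not from the paper, and assuming the discrete form of Eq. (3) with natural logarithms and the convention 0 ln 0 = 0), the Shannon–Wiener index of a weight vector can be computed as follows; `shannon_entropy` is an illustrative name:

```python
import numpy as np

def shannon_entropy(w):
    """Shannon-Wiener entropy of a weight vector (Eq. 3).
    Zero weights are dropped, following the convention 0*ln(0) = 0."""
    w = np.asarray(w, dtype=float)
    w = w[w > 0]
    return float(-np.sum(w * np.log(w)))
```

The index is maximized by the uniform weight vector, which is exactly the diversification property exploited later in the paper.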
2.3 Diversity
According to Stirling [102], our actions are permeated with a lack of certainty arising from various sources such as incomplete knowledge, contradictory information, data variability, conceptual imprecision, different reference points, and the inherent indeterminacy of several natural and social processes.
The theory of probability attempts to address this issue. A probability can be assigned to each possible set of future events. It can be considered to reflect the established frequency of occurrence of similar past events under comparable conditions and is thus, in some sense, objective. This “frequentist” interpretation of probability is vulnerable to doubts about the comparability of past and future circumstances and results. In a more subjective way, from a Bayesian perspective, probability can be considered simply to reflect the likelihood of different eventualities, given the best available information and prior expert opinion. However, due to the deficiency of information, these procedures tend to be vulnerable to error, unconscious bias, or manipulation [102].
Recognizing these difficulties, a distinction is made between risk (where the probability density function can significantly be set for a range of possible outcomes) and uncertainty (where there is no basis for assigning probabilities). In situations where there is no basis for assigning probabilities to outcomes or knowledge about several possible outcomes, another state of the absence of certainty has been distinguished, i.e., ignorance. In several fields, ignorance, rather than risk or uncertainty, dominates the real decision-making process [102].
Of all the strategies developed to deal with the absence of certainty, the best one is diversification. The concepts of diversity employed in several fields of science combine only three properties (variety, balance, and disparity), each of which is a necessary but individually insufficient feature of diversity [103].
Stirling [103] stated that variety is the number of categories into which the elements of the system are divided. The larger the variety, the greater is the diversity. Balance is a function of the pattern of division of elements across categories. The greater the balance, the greater is the diversity. Disparity indicates how different the elements are from one another. The greater the disparity between the elements, the greater is the diversity.
According to Stirling [103], Shannon’s entropy index, as presented in Eq. (3), only includes the variety and balance dimensions. Thus, the author proposed a formulation that considers variety, balance, and disparity as follows:

$$\Delta = \sum\limits_{{ij\left( {i \ne j} \right)}} {\left( {d_{ij} } \right)^{\alpha } \left( {w_{i} w_{j} } \right)^{\beta } } \quad (4)$$

where dij is the disparity between two elements; wi and wj are the weights representing the proportions of the elements i and j; α and β are terms quantifying the importance degree between disparity and balance, and, in the reference case, α = β = 1.
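Under the reference case α = β = 1, Stirling’s heuristic can be sketched as below (an illustration, not the paper’s code; here the sum is taken over unordered pairs, a choice that only rescales the index by a factor of 2 relative to summing over all ordered pairs):

```python
import numpy as np

def stirling_diversity(w, d, alpha=1.0, beta=1.0):
    """Stirling's diversity heuristic (Eq. 4): sum over pairs of
    d_ij^alpha * (w_i * w_j)^beta, where d is a symmetric disparity matrix."""
    w = np.asarray(w, dtype=float)
    d = np.asarray(d, dtype=float)
    total = 0.0
    m = len(w)
    for i in range(m):
        for j in range(i + 1, m):  # unordered pairs i < j
            total += d[i, j] ** alpha * (w[i] * w[j]) ** beta
    return total
```

With a fixed disparity matrix, the index grows as the weights become more balanced, mirroring the behavior of the entropy index while also rewarding disparate elements.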
The disparity (dij) is a measure of the difference between the objects. For this, two types of measure are most widely used: correlation measures and distance measures.
The most common method of measuring the correlation between two variables is the Pearson linear coefficient, which can be calculated as

$$\rho_{XY} = \frac{{\sigma_{XY} }}{{\sigma_{X} \sigma_{Y} }}\quad (5)$$

where \(\sigma_{XY}\) corresponds to the covariance between X and Y; \(\sigma_{X}\) corresponds to the standard deviation of X; and \(\sigma_{Y}\) corresponds to the standard deviation of Y.
High positive correlations indicate similarity, and high negative correlations indicate disparity. Thus, it is defined that \(d_{ij} = 1 - \rho_{ij}\).
The most commonly recognized distance measure is the Euclidean distance: the length of a straight line drawn between two objects when represented graphically. Thus, the greater the distance between two objects, the greater is their disparity. In the context of multi-objective optimization, a distance measure can be calculated as the Euclidean distance between the anchor points, that is, the points that optimize each response individually, as follows [15]:

$$d_{ij} = \sqrt {\sum\limits_{k = 1}^{n} {\left( {x_{k}^{{f_{i} }} - x_{k}^{{f_{j} }} } \right)^{2} } } \quad (6)$$

where \(x_{1} ,x_{2} , \ldots ,x_{n}\) are the decision variables of the problem, and \(x_{k}^{{f_{i} }}\) and \(x_{k}^{{f_{j} }}\) denote the coordinates of the anchor points of the objective functions \(f_{i} (x)\) and \(f_{j} (x)\), respectively.
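Both disparity options can be sketched in a few lines of Python (illustrative names, not from the paper): one based on the Pearson correlation between two response columns, and one based on the Euclidean distance between anchor points.

```python
import numpy as np

def disparity_correlation(y_i, y_j):
    """Correlation-based disparity: d_ij = 1 - Pearson correlation."""
    rho = np.corrcoef(y_i, y_j)[0, 1]
    return 1.0 - rho

def disparity_anchor(x_star_i, x_star_j):
    """Distance-based disparity (Eq. 6): Euclidean distance between the
    decision vectors that optimize f_i and f_j individually."""
    return float(np.linalg.norm(np.asarray(x_star_i) - np.asarray(x_star_j)))
```

Perfectly correlated responses yield a correlation-based disparity near zero, while strongly negatively correlated (conflicting) responses yield values near two.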
3 Robust optimal point selection
As discussed earlier, several of the weighting strategies employed during the optimization process and decision-making contain, in at least one of their stages, imprecise and subjective elements. Many of these strategies still rely on error-prone elements whose contribution to the final decision can be significant. However, considering that, among all the consulted sources, only Shahraki and Noorossana [93] proposed evaluating any variability parameter when selecting the best Pareto optimal solution, another theoretical gap that the current study intends to explore is the behavior of the prediction variance in relation to the weighting strategies.
ROPS is an alternative approach for identifying optimal weights for MOPs. To this end, Rocha et al. [40] and Rocha et al. [41] proposed a weighting method that combines Shannon’s entropy and an error measure. The entropy-based weighting presented in those studies was useful in identifying the optimal weights used in multi-objective optimization. Nevertheless, the authors did not discuss the prediction variance.
Therefore, addressing this gap in the work of Rocha et al. [40] and Rocha et al. [41] and in the literature in general, this paper presents different weighting strategies and shows how these strategies affect the prediction variance. The diversity index [103], entropy index [39], and entropy-based weighting [40, 41] are used as parameters for selecting the most preferred Pareto optimal point, and their results are compared. For all these possibilities, the behavior of the prediction variance was evaluated.
The optimization algorithms are included in the step of identifying optimal solutions, after the responses have been modeled using RSM (for the mathematical formulation of RSM, see [15, 41, 98]). The generalized reduced gradient (GRG) algorithm is used through the Excel® Solver function. The normal boundary intersection (NBI) approach is used to identify the Pareto optimal solutions and construct the Pareto frontier (for the mathematical formulation of NBI, see [104]). This approach was chosen because it defines a Pareto frontier with evenly distributed solutions, regardless of function convexity, overcoming the drawbacks of the weighted sum method.
To demonstrate the proposition of the present study mathematically, consider the following MOP:

$$\begin{aligned} & {\text{Min }}\sum\limits_{i = 1}^{m} {w_{i} f_{i} ({\mathbf{x}})} \\ & {\text{s.t.:  }}\sum\limits_{i = 1}^{m} {w_{i} } = 1,\quad w_{i} \ge 0 \\ \end{aligned}\quad (7)$$

where fi(x) represents the objective functions to be optimized, and wi represents the weights assigned to each objective function.
To calculate the variance of the weighted function under analysis, consider the following:

$${\text{Var}}\left[ {\sum\limits_{i = 1}^{m} {w_{i} f_{i} ({\mathbf{x}})} } \right] = \sum\limits_{i = 1}^{m} {w_{i}^{2} {\text{Var}}\left[ {f_{i} ({\mathbf{x}})} \right]} + 2\sum\limits_{i < j} {w_{i} w_{j} \rho_{{f_{i} f_{j} }} \sqrt {{\text{Var}}\left[ {f_{i} ({\mathbf{x}})} \right]{\text{Var}}\left[ {f_{j} ({\mathbf{x}})} \right]} } \quad (8)$$

where \(\rho_{{f_{i} f_{j} }}\) is the correlation between the functions fi and fj.
Considering that we can calculate the variance of fi(x) at a given point \({\mathbf{X}}_{0}^{T} = \left[ {1 \, x_{01} \, x_{02} \ldots x_{0k} } \right]\), such that \({\text{Var}}[f_{i} ({\mathbf{X}}_{0} )] = \hat{\sigma }_{{f_{i} }}^{2} {\mathbf{X}}_{0}^{T} ({\mathbf{X}}^{T} {\mathbf{X}})^{ - 1} {\mathbf{X}}_{0}\), we can modify Eq. (8) to

$${\text{Var}}\left[ {\sum\limits_{i = 1}^{m} {w_{i} f_{i} ({\mathbf{X}}_{0} )} } \right] = \left[ {\sum\limits_{i = 1}^{m} {w_{i}^{2} \hat{\sigma }_{{f_{i} }}^{2} } + 2\sum\limits_{i < j} {w_{i} w_{j} \rho_{{f_{i} f_{j} }} \hat{\sigma }_{{f_{i} }} \hat{\sigma }_{{f_{j} }} } } \right]{\mathbf{X}}_{0}^{T} ({\mathbf{X}}^{T} {\mathbf{X}})^{ - 1} {\mathbf{X}}_{0} \quad (9)$$
Now, let \(\rho_{{f_{i} f_{j} }}\) equal zero. In this case, Eq. (9) becomes

$${\text{Var}}\left[ {\sum\limits_{i = 1}^{m} {w_{i} f_{i} ({\mathbf{X}}_{0} )} } \right] = \left[ {\sum\limits_{i = 1}^{m} {w_{i}^{2} \hat{\sigma }_{{f_{i} }}^{2} } } \right]{\mathbf{X}}_{0}^{T} ({\mathbf{X}}^{T} {\mathbf{X}})^{ - 1} {\mathbf{X}}_{0} \quad (10)$$
As the variance of the estimated responses depends on the square of the weight assigned to each response, one way of minimizing its value is by diversification, i.e., by the uniform distribution of weights among the functions involved in the MOP.
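This effect of weight diversification can be checked numerically. The sketch below (illustrative values, not from the paper) evaluates the uncorrelated-case variance for a fixed UPV value and equal response variances, where the uniform weight vector minimizes the sum of squared weights:

```python
import numpy as np

def weighted_sum_variance(w, sigma2, upv):
    """Variance of the weighted objective at a point X0 when the responses
    are uncorrelated: (sum of w_i^2 * sigma_i^2) * UPV(X0)."""
    w = np.asarray(w, dtype=float)
    sigma2 = np.asarray(sigma2, dtype=float)
    return float(np.sum(w ** 2 * sigma2) * upv)
```

For equal variances, moving from the skewed weights (0.9, 0.1) to the uniform weights (0.5, 0.5) shrinks the squared-weight sum from 0.82 to 0.50, and the variance of the combined response shrinks proportionally.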
Figure 1 shows the step-by-step proposal.
The NBI approach is used to solve the MOP through the following formulation [104]:

$$\begin{aligned} & \mathop {\text{Max}}\limits_{{({\mathbf{x}},D)}} \;D \\ & {\text{s.t.:  }}\bar{\varPhi }{\mathbf{w}} + D{\hat{\mathbf{n}}} = \bar{F}({\mathbf{x}}),\quad {\hat{\mathbf{n}}} = - \bar{\varPhi }e \\ & \quad \quad {\mathbf{x}}^{T} {\mathbf{x}} \le \alpha^{2} \\ \end{aligned}\quad (11)$$

where w is the convex weighting; D is the distance between the Utopia line and the Pareto frontier; \(\bar{F}({\mathbf{x}})\) is the vector containing the individual values of the normalized objectives in each run; \(e\) is a column vector of ones; α is the value of the axial point of the experimental design; \(\varPhi\) and \(\bar{\varPhi }\) are the payoff and normalized payoff matrices, respectively, and can be written as

$$\varPhi = \left[ {\begin{array}{*{20}c} {f_{1}^{*} ({\mathbf{x}}_{1}^{*} )} & \cdots & {f_{1} ({\mathbf{x}}_{m}^{*} )} \\ \vdots & \ddots & \vdots \\ {f_{m} ({\mathbf{x}}_{1}^{*} )} & \cdots & {f_{m}^{*} ({\mathbf{x}}_{m}^{*} )} \\ \end{array} } \right],\quad \bar{\varPhi } = \left[ {\frac{{f_{i} ({\mathbf{x}}_{j}^{*} ) - f_{i}^{U} }}{{f_{i}^{N} - f_{i}^{U} }}} \right]\quad (12)$$

where \({\mathbf{x}}_{j}^{*}\) is the solution that optimizes the jth objective individually, and \(f_{i}^{U}\) and \(f_{i}^{N}\) are the Utopia and Nadir values of the ith objective, respectively.
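As an illustrative sketch (assuming minimization and the usual Utopia/Nadir scaling, with `phi[i][j]` holding the value of objective i at the point that individually optimizes objective j), the normalized payoff matrix can be built as:

```python
import numpy as np

def normalized_payoff(phi):
    """Normalize a payoff matrix to [0, 1] per objective (minimization).
    The diagonal entries are the Utopia (individual-optimum) values; the
    row maximum serves as the Nadir estimate for that objective."""
    phi = np.asarray(phi, dtype=float)
    utopia = np.diag(phi)
    nadir = phi.max(axis=1)
    return (phi - utopia[:, None]) / (nadir - utopia)[:, None]
```

After normalization, each objective attains 0 at its own anchor point and 1 at its worst anchor value, which puts all objectives on a common, dimensionless scale before the NBI subproblems are solved.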
In mixture design of experiments, the factors are the ingredients or components of a mixture, and consequently, their levels are not independent. With two components, the experimental region for the mixture experiments considers all values along one line. In the case of three components, this region is the area bounded by one triangle, where the vertices correspond to the neat blends, the sides to the binary mixtures, and the triangular region to the complete mixtures (for the mathematical formulation of Mixture Design of Experiments, see [41]).
With regard to the metrics used as weighting criteria (presented in step 6 of the flowchart), the ratios Shannon’s entropy/error and diversity/error are calculated to compare how different weighting metrics affect the prediction variance. The use of the error reduces the distance of the Pareto optimal solution from its ideal value, which justifies its use in the denominator. The original entropy/error ratio (ξ) is obtained using the following equation [40, 41]:

$$\xi = \frac{\text{Entropy}}{\text{GPE}}\quad (13)$$
The global percentage error (GPE) in Eq. (13) is calculated as [105]

$${\text{GPE}} = \sum\limits_{i = 1}^{m} {\left| {\frac{{y_{i}^{*} }}{{T_{i} }} - 1} \right|} \quad (14)$$

where \(y_{i}^{*}\) is the value of the ith Pareto optimal response; \(T_{i}\) is the defined target; and \(m\) is the number of objectives.
By dividing the GPE by the number of objectives, m, we obtain the mean absolute percentage error (MAPE), as presented by Montgomery et al. [106]:

$${\text{MAPE}} = \frac{1}{m}\sum\limits_{i = 1}^{m} {\left| {\frac{{y_{i}^{*} }}{{T_{i} }} - 1} \right|} \quad (15)$$
In the present study, the GPE is replaced by the MAPE, yielding

$$\xi = \frac{\text{Entropy}}{\text{MAPE}}\quad (16)$$
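These error metrics and the diversification/error ratio are simple to compute; the sketch below (illustrative function names, not the paper's code) follows the definitions of GPE, MAPE, and the ratio directly:

```python
import numpy as np

def gpe(y_star, targets):
    """Global percentage error (Eq. 14): sum of |y_i*/T_i - 1|."""
    y_star = np.asarray(y_star, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return float(np.sum(np.abs(y_star / targets - 1.0)))

def mape(y_star, targets):
    """Mean absolute percentage error (Eq. 15): GPE divided by m."""
    return gpe(y_star, targets) / len(y_star)

def ratio_metric(diversification, y_star, targets):
    """Diversification/error ratio (Eq. 16), e.g. entropy/MAPE."""
    return diversification / mape(y_star, targets)
```

For example, two Pareto optimal responses of 110 and 90 against targets of 100 give a GPE of 0.2 and a MAPE of 0.1; any diversification value is then scaled up by a factor of ten.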
Two strategies are used to define the parameter dij when calculating the diversity. First, we generate the diversity correlation (DC) using \(d_{ij} = 1 - \rho_{ij}\). Second, we create the diversity optimum (DO) using the Euclidean distance between the anchor points, i.e., the points that optimize each response individually, as presented in Eq. (6). The strategy presented in Eq. (16) is used for both diversity metrics.
In this work, the unscaled prediction variance (UPV) is used as the measure of model variance. According to Zahran et al. [107], several measures of prediction performance exist for comparing experimental designs, the most commonly considered one being the scaled prediction variance (SPV). SPV is defined as \(N{\text{Var}}\left[ {\hat{y}({\mathbf{X}}_{0} )} \right]/\sigma^{2} = N{\mathbf{X}}_{0}^{T} ({\mathbf{X}}^{T} {\mathbf{X}})^{ - 1} {\mathbf{X}}_{0}\), where N is the total sample size. However, if direct comparisons between the expected estimation variances are desired, the UPV can be modeled directly as the variance of the estimated mean response divided by \(\sigma^{2}\): \({\text{Var}}\left[ {\hat{y}({\mathbf{X}}_{0} )} \right]/\sigma^{2} = {\mathbf{X}}_{0}^{T} ({\mathbf{X}}^{T} {\mathbf{X}})^{ - 1} {\mathbf{X}}_{0}\). This quantity corresponds to a diagonal element of the hat matrix when X0 is a design point [108].
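The UPV at a point x0 is straightforward to compute from the design matrix. The sketch below (an illustration under the stated definitions, not the paper's code) also exercises the hat-matrix property that the leverages of the design points sum to the number of model parameters:

```python
import numpy as np

def upv(X, x0):
    """Unscaled prediction variance x0' (X'X)^{-1} x0 for a design matrix X
    (rows = runs in model form) and a prediction point x0 in model form."""
    X = np.asarray(X, dtype=float)
    x0 = np.asarray(x0, dtype=float)
    XtX_inv = np.linalg.inv(X.T @ X)
    return float(x0 @ XtX_inv @ x0)
```

For a simple straight-line model fitted at four equally spaced runs, the four leverages sum to 2 (intercept plus slope), and each lies between 0 and 1, as the hat-matrix interpretation requires.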
4 Illustrative examples
Three cases are used to demonstrate the applicability of the proposed method. The first two cases consider simulated experimental matrices for a hypothetical process. The first case considers two convex objective functions and two decision variables. The second case considers three objective functions with different convexities and two decision variables. The third case refers to a machining process for hardened steel using a tool with wiper geometry, considering three objective functions and three decision variables. These experimental matrices were composed using the central composite design (CCD) because, according to Montgomery [3], the CCD is the experimental design most often used for data collection when modeling response surface functions. In all the cases, five center points (cp) were used, because Myers et al. [108] argued that the use of five center points provides reasonable stability of the prediction variance throughout the experimental region.
4.1 Case 1
For the analysis of the first case, consider that a certain process has characteristics that depend on two variables. Thus, to analyze two of its characteristics that are to be minimized, a sequential set of experiments was established using a CCD, constructed according to a \(2^{2}\) response surface design, with 4 axial points and 5 center points, generating 13 experiments. Table 2 presents the CCD for this process (Step 1).
The experimental matrix described in Table 2 presents desirable properties for second-order response surface models, namely, axial points defined as \(\alpha = \sqrt[4]{{2^{k} }}\) and a number of center points equal to 5. According to Myers et al. [108], this ensures rotatability and good dispersion of the prediction variance throughout the experimental region.
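For reference, a rotatable CCD in coded units for k = 2 can be generated as follows (a sketch, not the paper's code; run order is not randomized here):

```python
import itertools
import numpy as np

def ccd_two_factors(n_center=5):
    """Rotatable CCD for k = 2: 2^2 factorial points, 4 axial points at
    alpha = (2^k)^(1/4) = sqrt(2), and n_center center runs (13 in total)."""
    k = 2
    alpha = (2 ** k) ** 0.25
    factorial = list(itertools.product([-1.0, 1.0], repeat=k))
    axial = [(a, 0.0) for a in (-alpha, alpha)] + [(0.0, a) for a in (-alpha, alpha)]
    center = [(0.0, 0.0)] * n_center
    return np.array(factorial + axial + center)
```

With the default five center points, the design has exactly 13 runs, matching the matrix described above.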
The analysis of the experimental data generates the mathematical modeling presented in Table 3, and Fig. 2 presents the response surface for the generated models (Step 2):
Once the equations were defined, a simplex lattice arrangement of degree 10 (Step 4) was implemented, generating the combinations of weights to be used in the multi-objective optimization via the NBI (Step 3).
The data in Table 4 correspond to the optimum Pareto points of the optimization of the responses y1 and y2. This set of points forms the Pareto frontier for the problem under analysis (Step 5). Figure 3 graphically shows the Pareto frontier obtained.
It can be observed in Fig. 3 that the multi-objective optimization method employed, i.e., the NBI, could construct a Pareto frontier with uniformly distributed points, which is an advantage in the decision-making process, as it allows the decision maker to evaluate the trade-off behavior more easily and determine how prioritizing one response affects the other. This would not be possible if solutions agglomerated at some point, generating a discontinuous frontier. The mixture arrangement, by providing a uniform combination of weights, favors the construction of the frontier and the fitting of the canonical mixture polynomials used to model the responses.
Figure 4 is presented to visualize the solution space of the Pareto optimal points. As the prediction variance is measured in the solution space, i.e., \({\text{UPV}} = \mathbf{x}_{0}^{T} (\mathbf{X}^{T} \mathbf{X})^{-1} \mathbf{x}_{0}\), visualizing how the points are distributed in this space is essential, as it indicates how the variance behaves in the analyzed problem.
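The UPV at a candidate point follows directly from the formula above; a numpy sketch, where `X` is the model matrix of the fitted surface (the helper name and the intercept-only toy example are illustrative):

```python
import numpy as np

def upv(X, x0):
    """Unscaled prediction variance x0' (X'X)^-1 x0 at point x0,
    where X is the model (design) matrix of the fitted surface."""
    x0 = np.asarray(x0, dtype=float)
    return float(x0 @ np.linalg.inv(X.T @ X) @ x0)

# intercept-only toy model with 4 runs: X'X = [[4]], so UPV at x0 = [1] is 1/4
X = np.ones((4, 1))
v = upv(X, [1.0])
```

For a second-order model, each row of `X` expands a design point into (1, x1, x2, x1^2, x2^2, x1*x2), and `x0` is the same expansion of the prediction point.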
An important aspect is that the weights assigned to the responses during the optimization influence the points in the solution space, which indicates that the weighting influences the prediction variance.
Based on the data presented in Table 4 (Step 6), a Pearson correlation analysis was performed between the weighting metrics and the variance measure, UPV. Table 5 presents the results of the correlation analysis together with the respective p values; values lower than 5% indicate statistically significant correlations.
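The correlation in this step is the ordinary sample Pearson coefficient between each metric column and the UPV column; with the t statistic used for the significance test, a sketch:

```python
import numpy as np

def pearson(x, y):
    """Sample Pearson correlation r and its t statistic,
    t = r * sqrt((n - 2) / (1 - r^2)), compared against t_{n-2}."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = float(np.corrcoef(x, y)[0, 1])
    n = len(x)
    t = r * np.sqrt((n - 2) / (1 - r ** 2))
    return r, float(t)
```

The p values reported in Table 5 correspond to this t statistic evaluated on the Student distribution with n - 2 degrees of freedom.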
The ratios of the diversification metrics (entropy, DC, and DO) to MAPE were analyzed, presenting correlations with the UPV of −0.687, −0.672, and −0.672, respectively. These negative and statistically significant correlations indicate that the metrics are good parameters for defining the optimal weights for the MOP, leading to a reduction in the variance and, consequently, a robust response from the point of view of variability while maintaining diversification among the responses.
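The entropy/MAPE ratio can be computed from a weight vector and the achieved Pareto responses; a sketch of the two ingredients (function names illustrative; the DC and DO diversity indices are analogous and omitted here):

```python
import numpy as np

def shannon_entropy(w):
    """Shannon entropy -sum(w_i ln w_i) of a weight vector,
    with 0 * ln 0 taken as 0."""
    w = np.asarray(w, float)
    nz = w[w > 0]
    return float(-np.sum(nz * np.log(nz)))

def mape(actual, predicted):
    """Mean absolute percent error between targets and achieved
    Pareto responses, in percent."""
    a = np.asarray(actual, float)
    p = np.asarray(predicted, float)
    return float(np.mean(np.abs((a - p) / a)) * 100.0)

# selection criterion: prefer weight vectors with a high
# entropy/MAPE ratio (diversified weights, low error)
```

Entropy is maximal for equal weights and zero when a single weight is 1, which is why maximizing the ratio discourages zero weights.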
The weighting metrics are modeled using the mixture arrangement (Step 7), based on the data presented in Table 4. The resulting canonical mixture polynomials are:
All the canonical mixture polynomials had a good fit, with adjusted R2 values close to 100%. Notably, it was possible to model the UPV as a function of the weights; therefore, the weights influence the solution space, as shown in Fig. 4.
Finally, it is possible to maximize the functions of the Entropy/MAPE, DC/MAPE, and DO/MAPE metrics, thereby maximizing diversification while reducing error. Executed for each metric, this process generates a vector of optimal weights (Step 8) to be used in the original optimization problem, implemented using the NBI, generating different optimal responses and allowing their comparison. Table 6 summarizes the results obtained.
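Because each fitted canonical polynomial is a function of the weights alone, its maximum over the simplex can be located by evaluating it on the lattice; a sketch, where the quadratic `f` is a hypothetical stand-in for a fitted Entropy/MAPE model, not a polynomial from the paper:

```python
import numpy as np
from itertools import product

def argmax_on_simplex(metric, m=2, degree=10):
    """Return the lattice weight vector maximizing a fitted
    diversification/error metric (Step 8)."""
    best_w, best_v = None, -np.inf
    for combo in product(range(degree + 1), repeat=m):
        if sum(combo) != degree:
            continue
        w = np.array(combo, float) / degree
        v = metric(w)
        if v > best_v:
            best_w, best_v = w, v
    return best_w, best_v

# hypothetical fitted quadratic mixture model for Entropy/MAPE
f = lambda w: 3 * w[0] + 2 * w[1] + 8 * w[0] * w[1]
w_opt, _ = argmax_on_simplex(f)  # continuous optimum is at w1 = 0.5625
```

In practice a constrained solver could refine the lattice optimum, but the degree-10 grid already locates the maximizing region.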
All the metrics performed well, especially considering that the maximum value of UPV for the analyzed problem was 0.403. The goal of diversification was achieved, as none of the weights was driven to zero.
4.2 Case 2
For the analysis of the second case, consider three characteristics, y1, y2, and y3, of a process that depend on two variables. To maximize y1 and minimize y2 and y3, a sequential set of experiments was established using a CCD, constructed according to a \(2^{2}\) response surface design with 4 axial points and 5 central points, generating 13 experiments. Table 7 presents the CCD for this process.
The analysis of the experimental data yields the mathematical models presented in Table 8, and Fig. 5 presents the response surfaces for the generated models.
Once the equations were defined, a simplex lattice arrangement of degree 10 was implemented, generating the combination of weights to be used in multi-objective optimization using the NBI.
The data in Table 9 correspond to the optimum Pareto points of the optimization of the responses y1, y2, and y3. This set of points forms the Pareto frontier for the problem under analysis. Figure 6 graphically shows the Pareto frontier obtained.
Figure 7 is presented to visualize the solution space referring to the optimal Pareto points.
As in the previous case, the points move in the solution space as the weights change during the optimization process, which directly influences the UPV values (Figs. 8, 9, 10, 11).
Based on the data presented in Table 9, a Pearson correlation analysis was performed between the weighting metrics and the variance measure, UPV. Thus, Table 10 presents the results of the correlation analysis, together with their respective p values.
It can be observed that all the diversification/error metrics presented a negative and statistically significant correlation with the UPV, indicating that maximizing these metrics reduces the UPV.
From the data presented in Table 9, the canonical mixture polynomials, with their respective response surfaces and contour plots, are shown as follows:
All the canonical mixture polynomials had a good fit, with adjusted R2 values close to 100%. Once again, the variance could be modeled as a function of the weights.
As in the first case, the functions of the metrics were maximized, generating the result presented in Table 11.
Considering the range of variation of the UPV for this problem (0.190–0.403), all the analyzed metrics performed well, as they led to the choice of Pareto optimal points located in the region of minimum variance. Furthermore, the weights among the responses are well distributed and, owing to diversification, none of them is zero.
4.3 Case 3—Real case analysis
For this real case analysis, the method proposed in this work was used to optimize the machining process of hardened AISI H13 steel with a polycrystalline cubic boron nitride (PCBN) tool with wiper geometry, based on Campos [109]. The study considered the material removal rate (MRR), the surface roughness parameter (Ra), and the cutting force (Fc) as responses, with cutting speed (Vc), feed rate (f), and depth of cut (d) as the decision variables. The workpieces were machined using the range of parameters defined in Table 12, and the decision variables were analyzed in coded units.
A sequential set of experimental runs was established using a CCD built according to a \(2^{3}\) response surface design, with 6 axial points and 5 center points, generating 19 experiments (Table 13).
The analysis of the experimental data yields the mathematical models presented in Table 14, and Fig. 12 presents the response surfaces for the generated models.
Once the equations were defined, a simplex lattice arrangement of degree 10 was implemented, generating the combination of weights to be used in multi-objective optimization using the NBI.
The data in Table 15 correspond to the optimum Pareto points of the optimization of the responses MRR, Ra, and Fc. This set of points forms the Pareto frontier for the problem under analysis. Figure 13 graphically shows the Pareto frontier obtained.
Figure 14 is presented to visualize the solution space referring to the optimal Pareto points.
As in the previous cases, the points move in the solution space as the weights change during the optimization process, which directly influences the UPV values (Figs. 15, 16, 17, 18).
Based on the data presented in Table 15, a Pearson correlation analysis was performed between the weighting metrics and the variance measure, UPV. Thus, Table 16 presents the results of the correlation analysis, together with their respective p values.
Notably, all the diversification/error metrics presented a negative and statistically significant correlation with the UPV, indicating that maximizing these metrics reduces the UPV.
From the data presented in Table 15, the canonical mixture polynomials, with their respective response surfaces and contour plots, are shown as follows:
All the canonical mixture polynomials had a good fit, with adjusted R2 values close to 100%. Once again, the variance could be modeled as a function of the weights.
As in the other presented cases, the functions of the metrics were maximized, generating the result presented in Table 17.
Considering the range of variation of the UPV for this problem (0.279–0.607), all the analyzed metrics performed well, as they led to the choice of Pareto optimal points located in the region of minimum variance. Furthermore, the weights among the responses are well distributed and, owing to diversification, none of them is zero.
4.4 Comparative analysis between the cases
The results of the cases are compared. Table 18 presents the results of UPV for each metric in each case analyzed.
In general, the strategy of using diversification and error as parameters for selecting the most preferred Pareto optimal point was efficient, as it led to the choice of a point with low prediction variance without zeroing any of the weights associated with the objective functions. Relative to the maximum UPV of each problem, the proposed weighting reduces the prediction variance by 45.66% in Case 1 and by 52.85% in Case 2. In Case 3, the optimization of the hardened steel turning process with a wiper geometry tool, the reduction is 49.75%. For real industrial problems, information about where the prediction is most reliable is very important, because the analyst does not initially know where the optimum is situated in the experimental space.
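The reported reductions can be cross-checked against the maximum UPV of each case by backing out the implied UPV of the chosen point (a consistency check on the paper's figures, not additional data):

```python
# (max UPV, reported reduction %) for each case
cases = {"Case 1": (0.403, 45.66),
         "Case 2": (0.403, 52.85),
         "Case 3": (0.607, 49.75)}

# implied UPV of the chosen Pareto point: max * (1 - reduction / 100)
implied = {name: round(mx * (1 - red / 100.0), 3)
           for name, (mx, red) in cases.items()}
```

The implied value for Case 2 is about 0.190, consistent with the lower bound of the UPV range reported for that problem.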
Notably, in all the cases analyzed, it was possible to model the variance in terms of the weights, because the weights influence the solution space. Nevertheless, the points of the solution space chosen when optimizing the individual objectives do not necessarily coincide with the minimum-variance points of the experimental space. This makes choosing robust Pareto optimal solutions non-trivial. In this context, the ROPS proposal becomes relevant by inducing the choice of Pareto optimal points with lower prediction variance.
Finally, the behavior of the variance and the choice of the Pareto optimal point with the least variability were affected neither by the convexity of the functions, nor by the number of functions to be optimized, nor by the number of variables involved in each process. This allows ROPS to be used for problems of different dimensions.
5 Conclusions
As mentioned in Sect. 2.1, weighting methods for selecting an optimal point on the Pareto frontier, as an aid to decision-making, are still being studied after several years of research. The present study discussed the variability of the Pareto optimal responses, which is not extensively addressed in the literature despite the extensive discussion of variance behavior in experimental designs. Therefore, this paper introduced ROPS, developed to choose the most preferred Pareto optimal point in MOPs using RSM.
The study demonstrated that the weights used in the MOP influence the prediction variance of the obtained response. Furthermore, the use of diversification measures, such as entropy and diversity, combined with error measures, such as MAPE, was useful in mapping regions of minimum variance within the Pareto optimal responses obtained in the optimization process. Thus, the results show that the proposed method is efficient and applicable for choosing the vector of weights that produces Pareto optimal results with less variability and greater reliability.
Finally, the metrics proposed in ROPS constitute a useful tool in the multiple-criteria decision-making process, because they lead to robust responses without the need to include a variance term in the mathematical formulation of the problem, keeping it simpler. As a proposal for future studies, we recommend applying ROPS to different design-of-experiments models, to evaluate its behavior under different experimental conditions.
References
Cua KO, Mckone KE, Schroeder RG (2001) Relationships between implementation of TQM, JIT, and TPM and manufacturing performance. J Oper Manag 19(6):675–694
Kano M, Nakagawa Y (2008) Data-based process monitoring, process control, and quality improvement: recent developments and applications in steel industry. Comput Chem Eng 32(1–2):12–24
Montgomery DC (2009) Design and analysis of experiments, 5th edn. Wiley, New York, p 665
Khuri A, Kim HJ, Um Y (1996) Quantile plots of the prediction variance for response surface designs. Comput Stat Data Anal 22(4):395–407
Baril C, Yacout S, Clément B (2011) Design for six sigma through collaborative multiobjective optimization. Comput Ind Eng 60(1):43–55
Szeląg M, Greco S, Słowiński R (2014) Variable consistency dominance-based rough set approach to preference learning in multicriteria ranking. Inf Sci 277(1):525–552
Zeleny M (1974) A concept of compromise solutions and the method of the displaced ideal. Comput Oper Res 1(3–4):479–496
Figueira JR, Greco S, Słowiński R (2009) Building a set of additive value functions representing a reference preorder and intensities of preference: GRIP method. Eur J Oper Res 195(2):460–486
Tian N, Tang S, Che A, Wu P (2020) Measuring regional transport sustainability using super-efficiency SBM-DEA with weighting preference. J Clean Prod 242:118474
Matin A, Zare S, Ghotbi-Ravandi M, Jahani Y (2020) Prioritizing and weighting determinants of workers’ heat stress control using an analytical network process (ANP): a field study. Urban Clim 31:100587
Kamaruzzaman S, Lou E, Wong P, Wood R, Che-Ani A (2018) Developing weighting system for refurbishment building assessment scheme in Malaysia through analytic hierarchy process (AHP) approach. Energy Policy 112:280–290
Zhu X, Dapeng N, Wang X, Wang F, Jia M (2019) Comprehensive energy saving evaluation of circulating cooling water system based on combination weighting method. Appl Thermal Eng 157:113735
Gaudêncio J, Almeida F, Sabioni RC, Turrioni JB, Paiva AP, Campos PHS (2019) Fuzzy multivariate mean square error in equispaced pareto frontiers considering manufacturing process optimization problems. Eng Comput 35:1213–1236
Lakshmi R, Baskar S (2019) Novel term weighting schemes for document representation based on ranking of terms and Fuzzy logic with semantic relationship of terms. Expert Syst Appl 137:493–503
Rocha LCS, de Paiva AP, Junior PR, Balestrassi PP, da Silva Campos PH, Davim JP (2017) Robust weighting applied to optimization of AISI H13 hardened-steel turning process with ceramic wiper tool: a diversity-based approach. Precis Eng 50:235–247
Davoudabadi R, Mousavi SM, Sharifi E (2020) An integrated weighting and ranking model based on entropy, DEA and PCA considering two aggregation approaches for resilient supplier selection problem. J Comput Sci 40:101074
Aquila G, Rocha LCS, Pamplona E, Queiroz A, Rotela Junior P, Balestrassi P, Fonseca M (2018) Proposed method for contracting of wind-photovoltaic projects connected to the Brazilian electric system using multiobjective programming. Renew Sustain Energy Rev 97:377–389
Ibáñez-Forés V, Bovea MD, Pérez-Belis V (2014) A holistic review of applied methodologies for assessing and selecting the optimal technological alternative from a sustainability perspective. J Clean Prod 70(1):259–281
Taboada HA, Baheranwala F, Coit DW, Wattanapongsakorn N (2007) Practical solutions for multi-objective optimization: an application to system reliability design problems. Reliab Eng Syst Saf 92(3):314–322
Gaudreault C, Samson R, Stuart P (2009) Implications of choices and interpretation in LCA for multi-criteria process design: de-inked pulp capacity and cogeneration at a paper mill case study. J Clean Prod 17(17):1535–1546
Pilavachi PA, Stephanidis SD, Pappas VA, Afgan NH (2009) Multi-criteria evaluation of hydrogen and natural gas fuelled power plant technologies. Appl Therm Eng 29(11–12):2228–2234
Zeleny M (1975) The theory of the displaced ideal. In: Zeleny M (ed) Lecture notes in economics and mathematical systems, no 123: multiple criteria decision making—Kyoto. Springer, Berlin
Melachrinoudis E (1985) Determining an optimum location for an undesirable facility in a workroom environment. Appl Math Model 9(5):365–369
Saaty TL (1990) How to make a decision: the analytic hierarchy process. Eur J Oper Res 48(1):9–26
Grzybowski AZ (2012) Note on a new optimization based approach for estimating priority weights and related consistency index. Expert Syst Appl 39(14):11699–11708
Promentilla MAB, Furuichi T, Ishii K, Tanikawa N (2008) A fuzzy analytic network process for multi-criteria evaluation of contaminated site remedial countermeasures. J Environ Manag 88(3):479–495
Tran NH, Tran K (2007) Combination of fuzzy ranking and simulated annealing to improve discrete fracture inversion. Math Comput Model 45(7–8):1010–1020
Narang N, Dhillon JS, Kothari DP (2014) Weight pattern evaluation for multiobjective hydrothermal generation scheduling using hybrid search technique. Int J Electr Power Energy Syst 62:665–678
Wan S-P, Dong J-Y (2015) Power geometric operators of trapezoidal intuitionistic fuzzy numbers and application to multiattribute group decision making. Appl Soft Comput 29:153–168
Wang Z-J (2015) Consistency analysis and priority derivation of triangular fuzzy preference relations based on modal value and geometric mean. Inf Sci 314:169–183
Gomes JHF, Paiva AP, Costa SC, Balestrassi PP, Paiva EJ (2013) Weighted multivariate mean square error for processes optimization: a case study on flux-cored arc welding for stainless steel claddings. Eur J Oper Res 226(3):522–535
Savier JS, Das D (2011) Loss allocation to consumers before and after reconfiguration of radial distribution networks. Int J Electr Power Energy Syst 33(3):540–549
Huang HZ, Gu YK, Du X (2006) An interactive fuzzy multi-objective optimization method for engineering design. Eng Appl Artif Intell 19(5):451–460
Rubio L, De La Sen M, Longstaff AP, Fletcher S (2013) Model-based expert system to automatically adapt milling forces in Pareto optimal multi-objective working points. Expert Syst Appl 40(6):2312–2322
Luo D, Wang X (2012) The multi-attribute grey target decision method for attribute value within three-parameter interval grey number. Appl Math Model 36(5):1957–1963
Zhu J, Hipel KW (2012) Multiple stages grey target decision making method with incomplete weight based on multigranularity linguistic label. Inf Sci 212:15–32
Luo D (2009) Decision-making methods with three-parameter interval grey number. Syst Eng Theory Pract 29(1):124–130
Monghasemi S, Nikoo MR, Khaksar Fasaee MA, Adamowski J (2015) A novel multi criteria decision making model for optimizing time-cost-quality trade-off problems in construction projects. Expert Syst Appl 42(6):3089–3104
Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27:379–423
Rocha LCS, Paiva AP, Balestrassi PP, Severino G, Rotela Junior P (2015a) Entropy-based weighting for multiobjective optimization: an application on vertical turning. Math Probl Eng Article ID 608325
Rocha LCS, Paiva AP, Balestrassi PP, Severino G, Rotela Junior P (2015) Entropy-based weighting applied to normal boundary intersection approach: the vertical turning of martensitic gray cast iron piston rings case. Acta Sci Technol 37(4):361–371
Wang JJ, Jing YY, Zhang CF, Zhao JH (2009) Review on multi-criteria decision analysis aid in sustainable energy decision-making. Renew Sustain Energy Rev 13(9):2263–2278
Wuwongse V, Kobayashi S, Iwai S-I, Ichikawa A (1983) Optimal design of linear control systems by an interactive optimization method. Comput Ind 4(4):381–394
Bonano EJ, Apostolakis GE, Salter PF, Ghassemi A, Jennings S (2000) Application of risk assessment and decision analysis to the evaluation, ranking and selection of environmental remediation alternatives. J Hazard Mater 71(1–3):35–57
Dijkmans R (2000) Methodology for selection of best available techniques (BAT) at the sector level. J Clean Prod 8(1):11–21
Geldermann J, Rentz O (2004) The reference installation approach for the techno-economic assessment of emission abatement options and the determination of BAT according to the IPPC-directive. J Clean Prod 12(4):389–402
Halog A, Shultmann F, Rentz O (2001) Using quality function deployment for technique selection for optimum environmental performance improvement. J Clean Prod 9:387–394
Prabhu TR, Vizayakumar K (2001) Technology choice using FHDM: a case of iron-making technology. IEEE Trans Eng Manag 48(2):209–222
Vignes RP (2001) Use limited life-cycle analysis for environmental decision-making. Chem Eng Prog 97(2):40–54
Zhang W, Yang H (2001) A study of the weighting method for a certain type of multicriteria optimization problem. Comput Struct 79(31):2741–2749
Derden A, Vercaemst P, Dijkmans R (2002) Best available techniques (BAT) for the fruit and vegetable processing industry. Resour Conserv Recycl 34(4):261–271
Beccali M, Cellura M, Mistretta M (2003) Decision-making in energy planning. Application of the Electre method at regional level for the diffusion of renewable energy technology. Renew Energy 28(13):2063–2087
Afgan NH, Carvalho MG (2004) Sustainability assessment of hydrogen energy systems. Int J Hydrogen Energy 29(13):1327–1342
Cziner K, Tuomaala M, Hurme M (2005) Multicriteria decision making in process integration. J Clean Prod 13(5):475–483
Sadiq R, Khan FI, Veitch B (2005) Evaluating offshore technologies for produced water management using GreenPro-I: a risk-based life cycle analysis for green and clean process selection and design. Comput Chem Eng 29(5):1023–1039
Chowdhury S, Husain T (2006) Evaluation of drinking water treatment technology: an entropy-based fuzzy application. J Environ Eng 132(10):1264–1271
Critto A, Cantarella L, Carlon C, Giove S, Petruzzelli G, Marcomini A (2006) Decision support-oriented selection of remediation technologies to rehabilitate contaminated sites. Integr Environ Assess Manag 2(3):273–285
Doukas H, Patlitzianas KD, Psarras J (2006) Supporting sustainable electricity technologies in Greece using MCDM. Resour Policy 31(2):129–136
Khelifi O, Dalla Giovanna F, Vranes S, Lodolo A, Miertus S (2006) Decision support tool for used oil regeneration technologies assessment and selection. J Hazard Mater 137(1):437–442
Pilavachi PA, Roumpeas CP, Minett S, Afgan NH (2006) Multi-criteria evaluation for CHP system options. Energy Convers Manag 47(20):3519–3529
Shehabuddeen N, Probert D, Phaal R (2006) From theory to practice: challenges in operationalising a technology selection framework. Technovation 26(3):324–335
Begić F, Afgan NH (2007) Sustainability assessment tool for the decision making in selection of energy system-Bosnian case. Energy 32(10):1979–1985
Fijal T (2007) An environmental assessment method for cleaner production technologies. J Clean Prod 15(10):914–919
Grandinetti L, Guerriero F, Lepera G, Mancini M (2007) A niched genetic algorithm to solve a pollutant emission reduction problem in the manufacturing industry: a case study. Comput Oper Res 34(7):2191–2214
Krajnc D, Mele M, Glavič P (2007) Fuzzy logic model for the performance benchmarking of sugar plants by considering best available techniques. Resour Conserv Recycl 52(2):314–330
Mavrotas G, Georgopoulou E, Mirasgedis S, Sarafidis Y, Lalas D, Hontou V, Gakis N (2007) An integrated approach for the selection of best available techniques (BAT) for the industries in the greater Athens area using multiobjective combinatorial optimization. Energy Econ 29(4):953–973
Zeng G, Jiang R, Huang G, Xu M, Li J (2007) Optimization of wastewater treatment alternative selection by hierarchy grey relational analysis. J Environ Manag 82(2):250–259
Bollinger D, Pictet J (2008) Multiple criteria decision analysis of treatment and land-filling technologies for waste incineration residues. Omega 36(3):418–428
Georgopoulou E, Hontou V, Gakis N, Sarafidis Y, Mirasgedis S, Lalas DP, Loukatos A, Gargoulas N, Mentzis A, Economidis D, Triantafilopoulos T, Korizi K (2008) BEAsT: a decision-support tool for assessing the environmental benefits and the economic attractiveness of best available techniques in industry. J Clean Prod 16(3):359–373
Schollenberger H, Treitz M, Geldermann J (2008) Adapting the European approach of best available techniques: case studies from Chile and China. J Clean Prod 16(17):1856–1864
Bréchet T, Tulkens H (2009) Beyond BAT: selecting optimal combinations of available techniques, with an example from the limestone industry. J Environ Manag 90(5):1790–1801
Cavallaro F (2009) Multi-criteria decision aid to assess concentrated solar thermal technologies. Renew Energy 34(7):1678–1685
Daim T, Intarode N (2009) A framework for technology assessment: case of a Thai building material manufacturer. Energy Sustain Dev 13(4):280–286
Gómez-López MD, Bayo J, García-Cascales MS, Angosto JM (2009) Decision support in disinfection technologies for treated wastewater reuse. J Clean Prod 17(16):1504–1511
Karagiannidis A, Perkoulidis G (2009) A multi-criteria ranking of different technologies for the anaerobic digestion for energy recovery of the organic fraction of municipal solid wastes. Bioresour Technol 100(8):2355–2360
Karavanas A, Chaloulakou A, Spyrellis N (2009) Evaluation of the implementation of best available techniques in IPPC context: an environmental performance indicators approach. J Clean Prod 17(4):480–486
Paiva AP, Paiva EJ, Ferreira JF, Balestrassi PP, Costa SC (2009) A multivariate mean square error optimization of AISI 52100 hardened steel turning. Int J Adv Manuf Technol 43(7):631–643
Yang QZ, Chua BH, Song B (2009) A matrix evaluation model for sustainability assessment of manufacturing technologies. World Acad Sci Eng Technol Int J Mech Aerosp Ind Mech Manuf Eng 3(8):953–958
Kazagić A, Smajević I, Duić N (2010) Selection of sustainable technologies for combustion of Bosnian coals. Thermal Sci 14(3):715–727
Lin GTR, Shen YC (2010) A collaborative model for technology evaluation and decision-making. J Sci Ind Res 69(2):94–100
Bottero M, Comino E, Riggio V (2011) Application of the analytic hierarchy process and the analytic network process for the assessment of different wastewater treatment systems. Environ Model Softw 26(10):1211–1224
García N, Caballero JA (2011) Economic and environmental assessment of alternatives to the extraction of acetic acid from water. Ind Eng Chem Res 50(18):10717–10729
Inoue Y, Katayama A (2011) Two-scale evaluation of remediation technologies for a contaminated site by applying economic input-output life cycle assessment: risk-cost, risk-energy consumption and risk-CO2 emission. J Hazard Mater 192(3):1234–1242
San Cristóbal JR (2011) A multi criteria data envelopment analysis model to evaluate the efficiency of the renewable energy technologies. Renew Energy 36(10):2742–2746
Cristóbal J, Guillén-Gosálbez G, Jiménez L, Irabien A (2012) Optimization of global and local pollution control in electricity production from coal burning. Appl Energy 92:369–378
De Lange WJ, Stafford WHL, Forsyth GG, Le Maitre DC (2012) Incorporating stakeholder preferences in the selection of technologies for using invasive alien plants as a bio-energy feedstock: applying the analytical hierarchy process. J Environ Manag 99(30):76–83
Giner-Santonja G, Aragonés-Beltrán P, Niclós-Ferragut J (2012) The application of the analytic network process to the assessment of best available techniques. J Clean Prod 25:86–95
Liu X, Wen Z (2012) Best available techniques and pollution control: a case study on China’s thermal power industry. J Clean Prod 23(1):113–121
Liu F, Zhang W-G, Wang ZX (2012) A goal programming model for incomplete interval multiplicative preference relations and its application in group decision-making. Eur J Oper Res 218(3):747–754
Severino G, Paiva EJ, Ferreira JR, Balestrassi PP, Paiva AP (2012) Development of a special geometry carbide tool for the optimization of vertical turning of martensitic gray cast iron piston rings. Int J Adv Manuf Technol 63(5–8):523–534
Yu OY, Guikema SD, Briaud JL, Burnett D (2012) Sensitivity analysis for multiattribute system selection problems in onshore environmentally friendly drilling (EFD). Syst Eng 15(2):153–171
Khorasani G, Mirmohammadi F, Motamed H, Fereidoon M, Tatari A, Maleki Verki MR, Khorasani M, Fazelpour S (2013) Application of multi criteria decision making tools in road safety performance indicators and determine appropriate method with average concept. Int J Innov Technol Explor Eng 3(5):173–177
Shahraki AF, Noorossana R (2014) Reliability-based robust design optimization: a general methodology using genetic algorithm. Comput Ind Eng 74:199–207
Hein N, Kroenke A, Rodrigues Junior MM (2015) Professor assessment using multi-criteria decision analysis. Proc Comput Sci 55:539–548
Shahhosseini H, Farsi M, Eini S (2016) Multi-objective optimization of industrial membrane SMR to produce syngas for Fischer-Tropsch production using NSGA-II and decision making. J Nat Gas Sci Eng 32:222–238
Howard E, Kamper M (2016) Weighted Factor multiobjective design optimization of a reluctance synchronous machine. IEEE Trans Ind Appl 52:3
Prakash C, Barua M (2016) Robust multi-criteria decision making framework for evaluation of the airport service quality enablers for ranking the airports. J Qual Assur Hosp Tour 17:3
Rocha LCS, de Paiva AP, Junior PR, Balestrassi PP, da Silva Campos PH (2017) Robust multiple criteria decision making applied to optimization of AISI H13 hardened steel turning with PCBN wiper tool. Int J Adv Manuf Technol 89(5–8):2251–2268
Zhou R, Cai R, Tong G (2013) Applications of entropy in finance: a review. Entropy 15(11):4909–4931
Fang S-C, Rajasekera JR, Tsao H-SJ (1997) Entropy optimization and mathematical programming. Kluwer Academic Publishers, Boston
Hickey EA, Carlson JL, Loomis D (2010) Issues in the determination of the optimal portfolio of electricity supply options. Energy Policy 38:2198–2207
Stirling A (1994) Diversity and ignorance in electricity supply investment: addressing the solution rather than the problem. Energy Policy 22(3):195–216
Stirling A (2007) A general framework for analysing diversity in science, technology and society. J R Soc Interface 4(15):707–719
Das I, Dennis JE (1998) Normal boundary intersection: a new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM J Optim 8(3):631–657
Rocha LCS, Paiva AP, Paiva EJ, Balestrassi PP (2015) Comparing DEA and principal component analysis in the multiobjective optimization of P-GMAW process. J Braz Soc Mech Sci Eng. https://doi.org/10.1007/s40430-015-0355-z
Montgomery DC, Jennings CL, Kulahci M (2008) Introduction to time series analysis and forecasting. Wiley, New York, p 445
Zahran A, Anderson-Cook CM, Myers RH (2003) Fraction of design space to assess prediction capability of response surface designs. J Qual Technol 35(4):377–386
Myers RH, Montgomery DC, Anderson-Cook CM (2009) Response surface methodology: process and product optimization using designed experiments, 3rd edn. Wiley, New York, p 680
Campos PHS (2015) (Doctoral dissertation) Metodologia DEA-OTS: Uma contribuição para a seleção ótima de ferramentas no Torneamento do Aço ABNT H13 Endurecido. Universidade Federal de Itajubá, Itajubá
Acknowledgements
The authors would like to express their gratitude to CAPES, CNPq and FAPEMIG for their financial support and research incentive.
Ethics declarations
Conflicts of interest
The authors would like to declare that they have no conflicts of interests.
Cite this article
Rocha, L.C.S., Rotela Junior, P., Aquila, G. et al. Toward a robust optimal point selection: a multiple-criteria decision-making process applied to multi-objective optimization using response surface methodology. Engineering with Computers 37, 2735–2761 (2021). https://doi.org/10.1007/s00366-020-00973-5