Abstract
Multiresponse parameter design problems have become increasingly important and have received considerable attention from both researchers and practitioners, since most modern products/processes involve several quality characteristics that must be optimized simultaneously. This study combines support vector regression (SVR), the Taguchi loss function, and the artificial bee colony (ABC) algorithm into a six-stage procedure for resolving these common and complicated parameter design problems. SVR is used to model the mathematical relationship between input control factors and output responses, and the ABC algorithm finds the optimal control factor settings by searching the well-constructed SVR models, in which the Taguchi loss function evaluates the overall performance of the product/process. The feasibility and effectiveness of the proposed approach are demonstrated via a case study that optimizes the design of a total internal reflection (TIR) lens used in an MR16 light-emitting diode lamp. Experimental results indicate that the proposed solution procedure provides highly robust design parameter settings for TIR lenses that can be applied directly in real manufacturing processes. Comparisons reveal that the Taguchi method is inappropriate for resolving multiple-response parameter design problems, whereas the ABC algorithm can search solution spaces in the continuous domains modeled via SVR rather than being restricted to the discrete experimental levels, thus finding a more robust design than that obtained by traditional analysis of variance. Consequently, the proposed integrated approach is feasible and effective and can serve as a useful tool for resolving general multiresponse parameter design problems in the real world.
1 Introduction
In many real-world applications, several output responses of a product/system must be optimized simultaneously by determining the optimal settings of the input variables (control factors); such problems are called multiresponse parameter design problems. The Taguchi method is a well-known traditional approach for addressing them; however, some subjective trade-offs must be made while selecting the optimal setting for each control factor in order to consider all responses simultaneously. Therefore, to deal with parameter design problems with multiple responses, several approaches [1–4] combining miscellaneous techniques from various fields have been proposed and have yielded adequate implementation results. These studies, however, could only determine the settings of the control factors from their original discrete experimental levels, whereas the true optimal parameter settings might lie anywhere within the continuous experimental ranges. For this reason, other studies [5–13] have mapped the functional relationships (mathematical models) between output responses and input control factors using modeling methodologies and then searched the well-constructed mathematical models with optimization algorithms to find control factor settings in continuous domains. In these studies, second-order regression models, neural networks, and genetic programming are common tools for building estimation models. However, the relationships between the input control factors and output responses might be too complex, in which case second-order regression models do not always yield estimation models with a sufficient degree of accuracy, i.e., a sufficiently high coefficient of determination (R²).
Although neural networks can provide black-box models that are, in general, more accurate than second-order regression models, their topology (e.g., the number of hidden layers and the number of neurons in each hidden layer), parameters (e.g., learning rate and momentum), activation functions, and learning rules significantly affect the training results (performance). For genetic programming, the best models generated during each run are not always identical, owing to the probabilistic mechanism used in its evolutionary procedure. In addition, transforming multiple responses into an integrated performance index, i.e., a single objective, and determining the optimal settings of the control factors through various optimization techniques that optimize that single objective has been a common approach for addressing several correlated (or uncorrelated) and conflicting output responses. However, combining multiple responses into one usually requires several parameters that must be specified subjectively in advance. Furthermore, the mathematical models constructed during the optimization procedure are often too complex or difficult for users to interpret and apply.
To overcome the above-mentioned shortcomings, this study designs a general procedure for resolving these common and complicated multiresponse parameter design problems based on support vector regression (SVR), the Taguchi loss function, and the artificial bee colony (ABC) algorithm. Specifically, SVR is applied to model the mathematical relationship between the input control factors and output responses, since the SVR technique combined with a radial basis function (RBF) kernel can nonlinearly map data into a higher dimensional space [14] and construct a regression model that reflects the commonly nonlinear functional dependence of the output responses on the input control factors. In addition, the popular grid-search approach [14] can be used to effectively and efficiently find the best SVR parameters, thus obtaining a unique estimation model. Next, the Taguchi loss function is utilized to evaluate the overall quality (performance) of a product from the viewpoint of the total loss (cost) incurred owing to deviations of the quality characteristics from their targets. The advantage of assessing the quality characteristics using the Taguchi loss function is that its only parameter, the quality loss coefficient, need not actually be determined by decision makers, as illustrated in Sect. 5. Finally, the ABC algorithm has been applied successfully to optimization problems in various fields with satisfactory results [15–18]; however, its applications to multiresponse parameter design problems are rare. Hence, this study applies the ABC algorithm in the optimization stage to determine the (near) optimal settings of the control factors by exploring the well-constructed SVR models.
The remainder of this paper is organized as follows. Previous research on the topic of multiresponse parameter design problems is reviewed in Sect. 2. Section 3 briefly introduces the three main methodologies—SVR, Taguchi loss function, and the ABC algorithm—used in our study. The integrated approach to deal with multiresponse parameter design problems is presented in Sect. 4. In Sect. 5, the feasibility and effectiveness of the proposed approach are illustrated by means of a case study aimed at improving the design of a TIR lens comprising an MR16 light-emitting diode (LED) lamp. Finally, conclusions are summarized in Sect. 6.
2 Literature review
Multiresponse parameter design problems have become increasingly important and have received a considerable amount of attention from both researchers and practitioners, since more than one correlated response must be assessed simultaneously in most modern products/processes. The Taguchi method is a well-known traditional approach for tackling such a problem; however, it has not proved to be fully functional for optimizing multiple responses, especially in the case of correlated responses. Therefore, many recent studies have centered on solving parameter design problems with multiple responses based on various techniques. For example, Kim and Lin [10] presented an approach that aims to maximize the overall minimal value of satisfaction with respect to all responses in order to address the multiresponse parameter design problem by using response surface methodology (RSM) and exponential desirability functions. Lu and Antony [4] utilized a fuzzy-rule-based inference system to map signal-to-noise (S/N) ratios for multiple responses into a single performance index, called multiple performance statistic (MPS). The Taguchi method is then applied to analyze the MPS values in an experiment, thus identifying the important factor/interaction effects, as well as determining the optimal settings of factors for optimizing the process performance. Tong et al. [1] applied principal component analysis (PCA) and the technique for order preference by similarity to ideal solution (TOPSIS) to optimize multiple responses simultaneously. Kovach and Cho [5] developed a multidisciplinary–multiresponse robust design (MMRD) optimization approach for resolving parameter design problems with multiple responses. In their approach, a combined array design is utilized to effectively incorporate noise factors into a robust design model. 
In addition, a nonlinear goal programming technique is employed to optimize multiple responses simultaneously; the system specifications and desired target values are incorporated as constraints and goals, prioritized so that the first goal minimizes the variance and the second makes the mean equal to the desired target value. Routara et al. [2] proposed an approach that applies weighted principal component analysis (WPCA), combined quality loss (CQL), and the Taguchi method to tackle multiresponse optimization problems. Ramezani et al. [11] developed an approach for resolving multiple-response optimization problems in which concepts from goal programming, with normalization based on negative and positive ideal solutions, as well as prediction intervals, are used to obtain a set of non-dominated, efficient solutions; the non-dominated solutions are then ranked using TOPSIS to generate suggested control factor settings. Sibalija et al. [7] proposed an integrated approach based on the Taguchi method, PCA, gray relational analysis (GRA), neural networks (NNs), and genetic algorithms (GAs) to optimize a multiresponse process. In their approach, the overall performance with respect to all responses is evaluated by means of a synthetic performance measure generated using Taguchi's quality losses, PCA, and GRA. The relationship between the synthetic performance measure and the control factors is then established by using well-trained NNs. Finally, the optimal control factor settings are determined by searching the mathematical model described via the constructed NNs. Al-Refaie [3] proposed a procedure that uses two techniques of data envelopment analysis (DEA) to improve the performance of a product/process with multiple responses.
In the proposed procedure, each experimental trial in a Taguchi orthogonal array is treated as a decision-making unit (DMU), with the multiple responses set as inputs and/or outputs for all DMUs. The cross-evaluation and aggressive formulation techniques of DEA are then utilized to generate efficiency scores that measure the performance of each DMU. Finally, the optimal combination of product/process factor levels is identified based on the maximum value of the efficiency scores obtained from DEA. Salmasnia et al. [13] presented a three-phased approach that uses PCA, adaptive-network-based fuzzy inference systems (ANFIS), desirability functions, and GAs to simultaneously optimize multiple correlated responses whose relationships with the design variables are highly nonlinear. He et al. [8] considered the uncertainty associated with the fitted response surface model by taking into account all values in the confidence interval rather than a single predicted value for each response. In their approach, robust optimal solutions that simultaneously optimize multiple responses are found by using a hybrid genetic algorithm coupled with pattern search, in which the robustness measure for the traditional desirability function is defined by the worst-case strategy. Bera and Mukherjee [9] proposed an adaptive penalty function-based "maximin" desirability index for multiple-response optimization (MRO) problems with close engineering tolerances of quality characteristics. In addition, a near-optimal solution for the single-objective (desirability index) problem is determined via continuous ant colony optimization, ant colony optimization in real space, and global best particle swarm optimization.
Based on the approaches above, it can be seen that a solution for tackling multiresponse parameter design problems is generally composed of three stages: data gathering, model building, and optimization. Furthermore, transforming multiple responses into a single objective and determining the optimal parameter settings of the control factors by optimizing the single objective using various optimization techniques has been a feasible and effective way to address multiresponse parameter design problems. However, as illustrated in Sect. 1, there are some drawbacks to the previously proposed approaches when integrating multiple responses into a single response, building an estimation model, or finding the optimal settings of the control factors. Therefore, this study attempts to apply the SVR, Taguchi loss function, and ABC algorithm to design a general procedure for resolving multiresponse parameter design problems and uses a case study on optimizing the design of a total internal reflection (TIR) lens to evaluate the feasibility and effectiveness of the proposed approach.
3 Research methodologies
In this section, the three main methodologies applied in the proposed integrated procedure for resolving multiresponse parameter design problems are briefly introduced, starting with SVR.
3.1 Support vector regression
The support vector machine (SVM), originally developed by Vapnik et al. [19–23], is a supervised learning model with an associated learning algorithm that constructs a hyperplane in a high-dimensional feature space for classification. The SVM can also be applied to function approximation or regression problems, in which case it is called support vector regression (SVR) [23, 24]. Given training data \( \{ X_{k} ,d_{k} \}_{k = 1}^{Q} \), where the input variable \( X_{k} \in {\mathbb{R}}^{n} \) is an n-dimensional vector and the output variable \( d_{k} \in {\mathbb{R}} \) is a real value, we want to construct an appropriate model to describe the functional dependence of d on X. SVR uses a map \( \Upphi \) to transform a nonlinear regression problem into a linear regression problem in a high-dimensional feature space and approximates a function of the form
$$ f(X) = \sum\limits_{i} {w_{i} \phi_{i} (X)} + w_{0} = W^{T} \Upphi (X) + w_{0} , $$(1)
where \( w_{i} \) is the weight; W is the weight vector; \( \phi_{i} (X) \) is the feature; \( \Upphi (X) \) is the feature vector; and \( w_{0} \) is the bias. In order to evaluate the prediction error, Vapnik [25] introduced a general error function, called the ε-insensitive loss function, defined by
$$ L_{\varepsilon } (d,y) = \left\{ {\begin{array}{*{20}c} 0 \hfill & {{\text{if}}\;\left| {d - y} \right| \le \varepsilon } \hfill \\ {\left| {d - y} \right| - \varepsilon } \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right. $$(2)
Therefore, the penalty (loss) can be expressed by
$$ d_{i} - W^{T} \Upphi (X_{i} ) - w_{0} \le \varepsilon + \xi_{i} ,\quad i = 1,2, \ldots ,Q, $$(3)
$$ W^{T} \Upphi (X_{i} ) + w_{0} - d_{i} \le \varepsilon + \xi_{i}^{'} ,\quad i = 1,2, \ldots ,Q, $$(4)
$$ \xi_{i} \ge 0,\quad i = 1,2, \ldots ,Q, $$(5)
$$ \xi_{i}^{'} \ge 0,\quad i = 1,2, \ldots ,Q, $$(6)
where \( \xi_{i} \) and \( \xi_{i}^{'} \) are non-negative slack variables used to measure the errors above and below the predicted function, respectively, for each data point. The empirical risk minimization problem can then be defined as [25, 26]
$$ \mathop {\hbox{min} }\limits_{{W,w_{0} ,\Upxi ,\Upxi^{'} }} \;\frac{1}{2}W^{T} W + C\sum\limits_{i = 1}^{Q} {(\xi_{i} + \xi_{i}^{'} )} , $$(7)
subject to the constraints in Eqs. (3)–(6), where C is a user-specified parameter for the trade-off between model complexity and losses. To solve the optimization problem in Eq. (7), the Lagrangian in primal variables is constructed as
$$ L_{P} = \frac{1}{2}W^{T} W + C\sum\limits_{i = 1}^{Q} {(\xi_{i} + \xi_{i}^{'} )} - \sum\limits_{i = 1}^{Q} {\lambda_{i} \left( {\varepsilon + \xi_{i} - d_{i} + W^{T} \Upphi (X_{i} ) + w_{0} } \right)} - \sum\limits_{i = 1}^{Q} {\lambda_{i}^{'} \left( {\varepsilon + \xi_{i}^{'} + d_{i} - W^{T} \Upphi (X_{i} ) - w_{0} } \right)} - \sum\limits_{i = 1}^{Q} {(\gamma_{i} \xi_{i} + \gamma_{i}^{'} \xi_{i}^{'} )} , $$(8)
where \( \Upxi = (\xi_{1} , \ldots ,\xi_{Q} )^{T} \) and \( \Upxi^{'} = (\xi_{1}^{'} , \ldots ,\xi_{Q}^{'} )^{T} \) are slack variable vectors; \( \Uplambda = (\lambda_{1},\ldots\!,\lambda_{Q} )^{T} \), \( \Uplambda^{'} = (\lambda_{1}^{'} , \ldots ,\lambda_{Q}^{'} )^{T} \), \( \Upgamma = (\gamma_{1} , \ldots ,\gamma_{Q} )^{T} \), and \( \Upgamma^{'} = (\gamma_{1}^{'} , \ldots ,\gamma_{Q}^{'} )^{T} \) are the Lagrangian multiplier vectors for Eqs. (3)–(6). For optimality, the partial derivatives of \( L_{P} \) with respect to the primal variables have to vanish at the saddle point. Therefore,
$$ \frac{{\partial L_{P} }}{\partial W} = 0\; \Rightarrow \;W = \sum\limits_{i = 1}^{Q} {(\lambda_{i} - \lambda_{i}^{'} )\Upphi (X_{i} )} , $$(9)
$$ \frac{{\partial L_{P} }}{{\partial w_{0} }} = 0\; \Rightarrow \;\sum\limits_{i = 1}^{Q} {(\lambda_{i} - \lambda_{i}^{'} )} = 0, $$(10)
$$ \frac{{\partial L_{P} }}{{\partial \xi_{i} }} = 0\; \Rightarrow \;\gamma_{i} = C - \lambda_{i} , $$(11)
$$ \frac{{\partial L_{P} }}{{\partial \xi_{i}^{'} }} = 0\; \Rightarrow \;\gamma_{i}^{'} = C - \lambda_{i}^{'} . $$(12)
The simplified dual form \( L_{D} \) can then be obtained by substituting Eqs. (9), (11), and (12) into Eq. (8), as
$$ L_{D} = \sum\limits_{i = 1}^{Q} {d_{i} (\lambda_{i} - \lambda_{i}^{'} )} - \varepsilon \sum\limits_{i = 1}^{Q} {(\lambda_{i} + \lambda_{i}^{'} )} - \frac{1}{2}\sum\limits_{i = 1}^{Q} {\sum\limits_{j = 1}^{Q} {(\lambda_{i} - \lambda_{i}^{'} )(\lambda_{j} - \lambda_{j}^{'} )K(X_{i} ,X_{j} )} } , $$(13)
subject to
$$ \sum\limits_{i = 1}^{Q} {(\lambda_{i} - \lambda_{i}^{'} )} = 0, $$(14)
$$ 0 \le \lambda_{i} \le C,\quad i = 1,2, \ldots ,Q, $$(15)
$$ 0 \le \lambda_{i}^{'} \le C,\quad i = 1,2, \ldots ,Q, $$(16)
where \( K(X_{i} ,X_{j} ) \equiv \Upphi (X_{i} ) \cdot \Upphi (X_{j} ) \) is called the kernel function. In addition, the data points for which \( \lambda_{i} \) or \( \lambda_{i}^{'} \) is not zero are the support vectors. With the Lagrangian optimization done, the optimal weight vector can be obtained as follows:
$$ \hat{W} = \sum\limits_{k = 1}^{{n_{s} }} {(\hat{\lambda }_{k} - \hat{\lambda }_{k}^{'} )\Upphi (X_{k} )} , $$(17)
where \( n_{s} \) is the number of support vectors, and the index k runs only over support vectors. Finally, the optimal bias can be obtained by exploiting the Karush–Kuhn–Tucker (KKT) conditions [27, 28], as follows:
$$ \hat{w}_{0} = \frac{1}{{n_{\text{us}} }}\sum\limits_{i = 1}^{{n_{\text{us}} }} {\left[ {d_{i} - \sum\limits_{k = 1}^{{n_{s} }} {\beta_{k} K(X_{k} ,X_{i} )} - \varepsilon \,{\text{sign}}(\beta_{i} )} \right]} , $$(18)
where \( n_{\text{us}} \) is the number of unbounded support vectors with Lagrangian multipliers satisfying \( 0 < \lambda_{i} < C \) and \( \beta_{i} = \hat{\lambda }_{i} - \hat{\lambda }_{i}^{'} \). Therefore, the approximate regression model can be obtained as follows:
$$ \hat{f}(X) = \sum\limits_{k = 1}^{{n_{s} }} {\beta_{k} K(X_{k} ,X)} + \hat{w}_{0} . $$(19)
The applications of SVR for resolving real-world problems in various fields are rich and plentiful, and adequate results have been obtained in the literature [29–37]. Further analysis and discussions on SVR can be found in Cristianini and Shawe-Taylor [38], Smola and Schölkopf [39], and Kumar [40].
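As a concrete illustration of this modeling step, the following sketch fits an RBF-kernel SVR model with a grid search over its hyperparameters using scikit-learn; the library choice, toy data, and parameter grids are illustrative assumptions, not part of the studies cited above.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Hypothetical training data: one response as a smooth nonlinear
# function of two control factors already normalized to [-1, 1].
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(60, 2))
d = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2

# Grid search over C (complexity/loss trade-off), epsilon (tube width),
# and gamma (RBF kernel width), with 5-fold cross-validation.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "epsilon": [0.01, 0.1], "gamma": [0.1, 1.0]},
    cv=5,
)
grid.fit(X, d)
model = grid.best_estimator_  # the unique estimation model used downstream
```

In the proposed procedure, one such model would be trained per key quality characteristic, and its predictions would feed the loss evaluation in the optimization stage.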
3.2 Taguchi loss function
Genichi Taguchi considers the quality of a product in terms of its loss to society—which is composed of the costs incurred in the production process and the costs encountered during its usage by a customer—and uses a quadratic loss function to quantify these costs. For a nominal-the-best (NTB) case, the loss function is defined as
$$ L(y) = k(y - m)^{2} , $$(20)
where y is the quality characteristic (output response) of a product, m is its target value, k is the quality loss coefficient, and L(y) is the quality loss. The loss functions for the smaller-the-better (STB) and the larger-the-better (LTB) quality characteristics are defined as
$$ L(y) = ky^{2} $$(21)
and
$$ L(y) = k\frac{1}{{y^{2} }}, $$(22)
respectively.
The Taguchi loss function recognizes that more consistent products and low-cost products are desired by customers and producers, respectively. It also provides engineers with more understanding of the importance of designing for variation. In addition, the loss function makes the evaluation of quality more effective and helps designers make better engineering decisions, such as the choice of materials, components, and designs in the early phase of the development of a product.
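The three cases can be captured in one small helper; a minimal sketch (the coefficient, target, and measurement values in the example are hypothetical):

```python
def taguchi_loss(y, case, k=1.0, m=None):
    """Quadratic quality loss L(y) for NTB, STB, and LTB characteristics."""
    if case == "NTB":    # nominal-the-best: loss grows with deviation from target m
        return k * (y - m) ** 2
    if case == "STB":    # smaller-the-better: implicit target is zero
        return k * y ** 2
    if case == "LTB":    # larger-the-better: loss vanishes as y grows
        return k / y ** 2
    raise ValueError(f"unknown case: {case!r}")

# Example: a 10.2 mm dimension against a 10.0 mm nominal target, k = 50.
loss = taguchi_loss(10.2, "NTB", k=50.0, m=10.0)   # approximately 2.0
```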
3.3 Artificial bee colony algorithm
In the natural world, honey bees forage according to a particular repeated process. At the very beginning, a potential forager starts as an unemployed bee since it has no knowledge about the food sources around the hive. The unemployed bee can be a scout that is sent to search for food sources around the hive spontaneously or can be a recruit that is recruited as a forager after being motivated by the waggle dances performed by other foragers. Once a scout finds a food source, it becomes an employed bee, memorizes the location, and starts to exploit the food source. The employed bee then takes a load of nectar from the food source, returns to the hive, and unloads the food. At this time, the employed bee attracts more onlookers through waggle dances or continues to forage by itself without attracting any onlooker. As soon as the amount of nectar in the food source is exhausted, the employed bee abandons that food source and again becomes an unemployed bee. The unemployed bee may then become a scout that searches for a new food source or become an onlooker and stay in the dancing area of the hive until it gets attracted to the waggle dance performed by other employed bees. After acquiring information about all the current rich sources through communication using waggle dances, an onlooker can engage itself on the most profitable source and become an employed foraging bee again.
Inspired by the intelligent foraging behavior of honey bee swarms, Karaboga [41] developed a bee swarm algorithm, called the artificial bee colony (ABC) algorithm, for optimizing multivariable numerical functions. In the ABC algorithm, the colony contains three groups of bees: employed bees, onlookers, and scouts. The first half of the colony consists of the employed artificial bees, while the second half is composed of the onlookers. There is only one employed bee for each food source, and an employed bee becomes a scout as soon as it abandons a food source. In addition, the position of a food source represents a possible solution to the optimization problem being considered, while the amount of nectar in a food source corresponds to the quality (fitness) of a solution. Given an optimization problem with n decision variables, the general implementation steps of the ABC algorithm are summarized as follows [41–43]:
- Step 1: Randomly generate an initial population consisting of \( N_{f} \) feasible solutions (the positions of food sources), where each solution \( x_{i} = (x_{i}^{1} ,x_{i}^{2} , \ldots ,x_{i}^{n} )\,(i = 1,2, \ldots ,N_{f} ) \) is an n-dimensional vector.
- Step 2: Evaluate the fitness of the initial solutions generated in Step 1.
- Step 3: Each employed bee produces a candidate food position \( v_{i} = (v_{i}^{1} ,v_{i}^{2} , \ldots ,v_{i}^{n} ) \) \( \, (i = 1,2, \ldots ,N_{f} ) \) from the old one in its memory by
$$ v_{i}^{j} = x_{i}^{j} + {{rn}}_{i}^{j} (x_{i}^{j} - x_{q}^{j} ),\quad \forall i = 1,2,\ldots,N_{f} ;\quad \forall j = 1,2, \ldots,n, $$(23)
where \( q \in \left\{ {1,2, \ldots ,N_{f} } \right\} \) is a randomly chosen index that has to differ from i, and \( {{rn}}_{i}^{j} \) is a random number in the range (−1, 1).
- Step 4: Evaluate the fitness of the candidate solutions created in Step 3. An employed bee memorizes the candidate food position \( v_{i} = (v_{i}^{1} ,v_{i}^{2} , \ldots ,v_{i}^{n} ) \) if its fitness is superior to that of the old food position. Otherwise, the employed bee keeps the old food position \( x_{i} = (x_{i}^{1} ,x_{i}^{2} , \ldots ,x_{i}^{n} ) \) in its memory.
- Step 5: An onlooker chooses a food source with a probability calculated by
$$ {{pb}}_{i} = \frac{{{{fit}}_{i} }}{{\sum\limits_{i = 1}^{{N_{f} }} {{{fit}}_{i} } }},\quad \forall i = 1,2, \ldots ,N_{f} , $$(24)
where \( {{pb}}_{i} \) is the probability that the ith food source will be chosen by an onlooker as the target to forage and \( {{fit}}_{i} \) is the fitness of the ith food source.
- Step 6: Each onlooker produces a modification of the position of the selected food source based on Eq. (23).
- Step 7: Evaluate the fitness of the modified solutions made in Step 6. An onlooker memorizes the new position if its fitness is higher than that of the previous position.
- Step 8: Memorize the position of the best food source found so far by the employed bees and onlookers.
- Step 9: An employed bee abandons its food source \( x_{{i^{*} }} = (x_{{i^{*} }}^{1} ,x_{{i^{*} }}^{2} , \ldots ,x_{{i^{*} }}^{n} ) \) and becomes a scout if it cannot improve the fitness of the corresponding food position within \( C_{\text{limit}} \) search cycles.
- Step 10: Each scout becomes an employed bee again and discovers a new food source based on
$$ x_{{i^{*} }}^{j} = x_{\hbox{min} }^{j} + {{sn}}^{j} (x_{\hbox{max} }^{j} - x_{\hbox{min} }^{j} ),\quad \forall j = 1,2, \ldots ,n, $$(25)
where \( x_{\hbox{max} }^{j} \) and \( x_{\hbox{min} }^{j} \) are the upper and lower bounds of the jth decision variable, respectively, and \( {{sn}}^{j} \) is a random number in the range (0, 1).
- Step 11: Repeat Steps 3 through 10 for MCN cycles and designate the position of the memorized best food source as the final optimal solution.
Notably, parameter \( C_{\text{limit}} \) is usually set as \( N_{f} \times n \) in the literature [42, 43]. The ABC algorithm has been widely applied to resolve problems in various fields and adequate results have been reported in the literature [15–18, 44]. Further discussions and analyses of the ABC algorithm can be found in Karaboga [41] and Karaboga and Basturk [42, 43].
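The steps above can be condensed into a short sketch; this is an illustrative implementation (colony size, cycle count, fitness transform, and the test function are our assumptions, not the authors' code):

```python
import numpy as np

def abc_minimize(f, bounds, n_food=20, mcn=200, seed=0):
    """Minimal ABC sketch of Steps 1-11 for a nonnegative objective f."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T    # per-dimension bounds
    n = lo.size
    limit = n_food * n                            # abandonment threshold C_limit
    x = lo + rng.random((n_food, n)) * (hi - lo)  # Step 1: random food sources
    val = np.array([f(xi) for xi in x])           # Step 2: evaluate objective
    trials = np.zeros(n_food, dtype=int)

    def fit(v):                                   # fitness used in Eq. (24);
        return 1.0 / (1.0 + v)                    # larger = smaller objective

    def try_move(i):                              # Eq. (23): perturb one dimension
        q = rng.choice([k for k in range(n_food) if k != i])
        j = rng.integers(n)
        v = x[i].copy()
        v[j] = np.clip(x[i, j] + rng.uniform(-1.0, 1.0) * (x[i, j] - x[q, j]),
                       lo[j], hi[j])
        fv = f(v)
        if fv < val[i]:                           # greedy selection (Steps 4 and 7)
            x[i], val[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(mcn):                          # Step 11: repeat for MCN cycles
        for i in range(n_food):                   # Step 3: employed-bee phase
            try_move(i)
        pb = fit(val) / fit(val).sum()            # Eq. (24): onlooker probabilities
        for i in rng.choice(n_food, size=n_food, p=pb):  # Steps 5-7: onlookers
            try_move(i)
        worn = int(np.argmax(trials))             # Steps 9-10: replace an exhausted
        if trials[worn] > limit:                  # source with a scout via Eq. (25)
            x[worn] = lo + rng.random(n) * (hi - lo)
            val[worn] = f(x[worn])
            trials[worn] = 0
    best = int(np.argmin(val))                    # Step 8: best source found
    return x[best].copy(), float(val[best])

# Usage: minimize the 2-D sphere function over [-5, 5]^2.
best_x, best_f = abc_minimize(lambda z: float(np.sum(z ** 2)),
                              [(-5.0, 5.0), (-5.0, 5.0)])
```

In the procedure of Sect. 4, the objective evaluated here would instead be derived from the SVR models and the weighted average quality loss.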
4 Proposed integrated approach
In this paper, an integrated procedure for solving multiresponse parameter design problems using SVR, Taguchi loss function, and the ABC algorithm is proposed. The proposed solution approach comprises six stages that are described in detail as follows:
4.1 Stage 1: State the problem
- Step 1: State the problem clearly and concisely according to the objectives of the quality improvement project.
- Step 2: Determine the key quality characteristics, i.e., responses, of the concerned product/process, and the measurement systems and specification limits of these quality characteristics.
- Step 3: Determine the major design/process parameters, i.e., control factors, to be evaluated in an experiment for their effect on the selected key quality characteristics and the operational limits of those control factors based on engineering principles, experience, and limitations in the manufacturing process.
- Step 4: Identify the important noise factors to be evaluated for their effect on the quality characteristics of interest according to the limitations in the manufacturing process.
4.2 Stage 2: Design an experiment and collect data
- Step 5: Determine the number of experimental levels and the values for all the experimental levels for each selected control/noise factor.
- Step 6: Select an appropriate orthogonal array as the inner array to arrange the control factors and select an appropriate orthogonal array as the outer array to arrange the noise factors.
- Step 7: Design an experimental layout based on the selected inner and outer arrays.
- Step 8: Conduct each experimental trial and collect experimental data according to the designed experimental layout.
4.3 Stage 3: Build estimation models
- Step 9: Normalize the key quality characteristic values obtained, along with the values of the major design/process parameters in each experimental trial, into a range of −1 to 1 according to their corresponding maximum and minimum values.
- Step 10: Randomly divide the normalized quality characteristic values and design/process parameters into two groups, training data and test data, based on a pre-specified proportion.
- Step 11: Train and determine an appropriate SVR model for each key quality characteristic to model the mathematical relationship between the input control factors and that quality characteristic.
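Step 9 amounts to a linear rescaling of each variable onto [−1, 1], with the inverse map used later to recover physical units from SVR outputs; a minimal sketch with hypothetical values and helper names of our own:

```python
import numpy as np

def normalize_pm1(a, a_min, a_max):
    """Step 9: map values linearly onto [-1, 1] using their min and max."""
    return 2.0 * (a - a_min) / (a_max - a_min) - 1.0

def denormalize_pm1(z, a_min, a_max):
    """Inverse map, used in Step 12 to de-normalize SVR outputs."""
    return (z + 1.0) / 2.0 * (a_max - a_min) + a_min

# Hypothetical response values observed across experimental trials.
vals = np.array([10.0, 12.5, 15.0])
z = normalize_pm1(vals, 10.0, 15.0)   # -> -1, 0, 1
```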
4.4 Stage 4: Evaluate overall performance of the product/process
- Step 12: Evaluate the performance of each key quality characteristic using an appropriate Taguchi loss function, as
$$ L(y_{i} ) = \left\{ {\begin{array}{*{20}c} {k_{i} (y_{i} - m_{i} )^{2} } \hfill & {{\text{for}}\;{\text{an}}\;{\text{NTB}}\;{\text{case}}} \hfill \\ {k_{i} y_{i}^{2} } \hfill & {{\text{for}}\;{\text{an}}\;{\text{STB}}\;{\text{case}}} \hfill \\ {k_{i} \frac{1}{{y_{i}^{2} }}} \hfill & {{\text{for}}\;{\text{an}}\;{\text{LTB}}\;{\text{case}}} \hfill \\ \end{array} } \right. $$(26)
where \( y_{i} \) is the estimated value of the ith key quality characteristic obtained by de-normalizing the value output by the corresponding SVR model constructed in Step 11, and \( k_{i} \), \( L(y_{i} ) \), and \( m_{i} \) are the quality loss coefficient, quality loss, and target value for the ith key quality characteristic, respectively.
- Step 13: Normalize the quality loss of each key quality characteristic using
$$ L_{n} (y_{i} ) = \left\{ {\begin{array}{*{20}c} {\frac{{L(y_{i} )}}{{L_{\hbox{max} ,U} (y_{i} )}}} \hfill & {{\text{if}}\;y_{i} \ge m_{i} } \hfill \\ {\frac{{L(y_{i} )}}{{L_{\hbox{max} ,L} (y_{i} )}}} \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right.\quad {\text{for}}\;{\text{an}}\;{\text{NTB}}\;{\text{case,}} $$(27)
$$ L_{n} (y_{i} ) = \left\{ {\begin{array}{*{20}c} {\frac{{L(y_{i} ) - L_{\hbox{min} } (y_{i} )}}{{L_{\hbox{max} } (y_{i} ) - L_{\hbox{min} } (y_{i} )}}} \hfill & {{\text{if}}\;y_{i} \ge {\text{IV}}_{{i,{\text{STB}}}} } \hfill \\ 0 \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right.\quad {\text{for}}\;{\text{an}}\;{\text{STB}}\;{\text{case,}} $$(28)
$$ L_{n} (y_{i} ) = \left\{ {\begin{array}{*{20}c} {\frac{{L(y_{i} ) - L_{\hbox{min} } (y_{i} )}}{{L_{\hbox{max} } (y_{i} ) - L_{\hbox{min} } (y_{i} )}}} \hfill & {{\text{if}}\;y_{i} \le {\text{IV}}_{{i,{\text{LTB}}}} } \hfill \\ 0 \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right.\quad {\text{for}}\;{\text{an}}\;{\text{LTB}}\;{\text{case,}} $$(29)
where \( L_{n} (y_{i} ) \), \( L_{\hbox{max} ,U} (y_{i} ) \), \( L_{\hbox{max} ,L} (y_{i} ) \), \( L_{\hbox{min} } (y_{i} ) \), and \( L_{\hbox{max} } (y_{i} ) \) are the normalized, upper maximum, lower maximum, minimum, and maximum quality losses for the ith key quality characteristic, respectively; \( m_{i} \) is the target value of the ith key quality characteristic; and \( {\text{IV}}_{{i,{\text{STB}}}} \) and \( {\text{IV}}_{{i,{\text{LTB}}}} \) are the ideal values for the ith key quality characteristic in the STB and LTB cases, respectively. Notably, the upper specification limit for an STB quality characteristic can be set definitely; however, the ideal value \( {\text{IV}}_{{i,{\text{STB}}}} \), which represents the optimal minimum of an STB quality characteristic, cannot be defined clearly and must be determined by consulting with design engineers. The same approach is also applied to set the ideal value \( {\text{IV}}_{{i,{\text{LTB}}}} \) for an LTB quality characteristic. \( L_{\hbox{max} ,U} (y_{i} ) \) and \( L_{\hbox{max} ,L} (y_{i} ) \) are calculated as follows:
$$ L_{\hbox{max} ,U} (y_{i} ) = k_{i} ({\text{USL}}_{i} - m_{i} )^{2} , $$(30)
$$ L_{\hbox{max} ,L} (y_{i} ) = k_{i} ({\text{LSL}}_{i} - m_{i} )^{2} , $$(31)
where \( {\text{USL}}_{i} \) and \( {\text{LSL}}_{i} \) are the upper and lower specification limits for the ith key quality characteristic, respectively. In addition, \( L_{\hbox{min} } (y_{i} ) \) and \( L_{\hbox{max} } (y_{i} ) \) are calculated by
$$ L_{\hbox{min} } (y_{i} ) = \left\{ {\begin{array}{*{20}c} {k_{i} {\text{IV}}_{{i,{\text{STB}}}}^{2} } \hfill & {{\text{for}}\;{\text{an}}\;{\text{STB}}\;{\text{case}}} \hfill \\ {k_{i} \frac{1}{{{\text{IV}}_{{i,{\text{LTB}}}}^{2} }}} \hfill & {{\text{for}}\;{\text{an}}\;{\text{LTB}}\;{\text{case}}} \hfill \\ \end{array} } \right., $$(32)
$$ L_{\hbox{max} } (y_{i} ) = \left\{ {\begin{array}{*{20}c} {k_{i} {\text{USL}}_{i}^{2} } \hfill & {{\text{for}}\;{\text{an}}\;{\text{STB}}\;{\text{case}}} \hfill \\ {k_{i} \frac{1}{{{\text{LSL}}_{i}^{2} }}} \hfill & {{\text{for}}\;{\text{an}}\;{\text{LTB}}\;{\text{case}}} \hfill \\ \end{array} } \right.. $$(33)
- Step 14: Calculate the weighted average quality loss using
$$ {{AQL}}_{w} = \frac{{\sum\limits_{i = 1}^{{nq}} {w_{i} L_{n} (y_{i} )} }}{{\sum\limits_{i = 1}^{{nq}} {w_{i} } }}, $$(34)
where \( w_{i} \) denotes the weight of the ith key quality characteristic and nq is the total number of key quality characteristics.
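A small numeric sketch of Steps 12–14 for a single NTB characteristic lying above its target (all specification values, weights, and the second characteristic's normalized loss are hypothetical):

```python
def weighted_average_loss(norm_losses, weights):
    """Eq. (34): weighted average of the normalized quality losses."""
    return sum(w * l for w, l in zip(weights, norm_losses)) / sum(weights)

# Hypothetical NTB characteristic above its target (y >= m), per Eq. (27).
k, m, usl = 1.0, 5.0, 8.0        # loss coefficient, target, upper spec limit
y = 6.5                          # estimated (de-normalized) SVR output
L = k * (y - m) ** 2             # Eq. (26): raw quality loss = 2.25
L_max_U = k * (usl - m) ** 2     # Eq. (30): loss at the upper spec limit = 9.0
Ln = L / L_max_U                 # Eq. (27): normalized loss = 0.25

# Combine with a second (hypothetical) normalized loss of 0.1, weights 2:1.
aql_w = weighted_average_loss([Ln, 0.1], [2.0, 1.0])   # (0.5 + 0.1) / 3 ≈ 0.2
```

In Step 15, the ABC fitness would then be 1 − AQL_w, per Eq. (35).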
4.5 Stage 5: Optimize control factor settings
- Step 15: Explore the experimental ranges of the major design/process parameters by using the ABC algorithm, where the mathematical relationships between the design/process parameters and the key quality characteristics are described by the SVR models constructed in Step 11. The fitness in the ABC algorithm is defined by
$$ {{fit}}_{\text{ABC}} = 1 - {{AQL}}_{w} . $$(35)
- Step 16: Obtain the (near) optimal settings for the major design/process parameters.
4.6 Stage 6: Conduct a confirmation experiment
- Step 17: Conduct a confirmation experiment to verify the feasibility and effectiveness of the optimal settings acquired for the major design/process parameters.
- Step 18: If the confirmation result is unsatisfactory, repeat the entire procedure.
5 Case study
In this section, a case study aimed at improving the design of a TIR lens is used to verify the feasibility and effectiveness of the proposed integrated solution procedure for resolving multiresponse parameter design problems. Detailed implementation steps are presented in the following sub-sections, starting with the problem statement.
5.1 Problem statement
A light-emitting diode (LED) is a semiconductor diode that converts applied voltage into light. Early LEDs could emit only low-intensity red light. Nowadays, LEDs of diverse brightness over a wide range of wavelengths, from visible to ultraviolet and infrared light, are available and are used extensively in various fields. For example, Fig. 1a shows an MR16 LED lamp (where MR stands for multifaceted reflector and 16 is the diameter in eighths of an inch across the front face) that can be used in most fixtures designed for a traditional MR16 halogen lamp. An MR16 LED lamp comprises four major components: one or more LED emitters, one or more TIR lenses, a heat sink, and a driver. To maximize the overall lighting performance of an MR16 LED lamp, the TIR lens, shown in Fig. 1b, requires an elaborate design. Traditional experimental design techniques and the Taguchi method, together with the principles of optics and engineering experience, are the approaches commonly used by design engineers to determine the optimal geometry and material of a TIR lens. However, trade-offs have to be made through engineering judgment to resolve conflicts when selecting an optimal setting for each design parameter while simultaneously optimizing all quality characteristics. Therefore, the design parameter settings must be revised and fine-tuned through a repeated trial-and-error process to determine the final design of a TIR lens. This trial-and-error approach increases decision-making uncertainty; it is also costly and time consuming, and it cannot ensure that the chosen geometry and material settings are truly optimal.
According to the objectives of the quality improvement project aimed at optimizing the design of a TIR lens, five key quality characteristics that are crucial to downstream clients were determined through discussions with LED design engineers and quality managers as follows:
(1) Luminous flux (y 1): the energy per unit time radiated from a source over visible wavelengths, from about 380 (nm) to 780 (nm). The SI unit of luminous flux is the lumen (lm).

(2) Viewing angle at 0° (y 2): the viewing angle is defined as the angle within which the luminous intensity (in candela, cd) is at least half of the maximum luminous intensity. The viewing angle at 0° is measured from the direction of 0°, i.e., the x axis, based on the LED emitter contained in the MR16 LED lamp.

(3) Viewing angle at 45° (y 3): the viewing angle observed from the direction of 45°.

(4) Viewing angle at 90° (y 4): the viewing angle observed from the direction of 90°, i.e., the y axis.

(5) Viewing angle at 135° (y 5): the viewing angle observed from the direction of 135°.
The specification limits, response types, and associated weights for the above five key quality characteristics, as described in Sect. 4, for a TIR lens used in an MR16 LED lamp are summarized in Table 1. Notably, the viewing angles (y 2 to y 5) are fixed once the settings of the major design parameters, including the lens material (x 1), lens height (x 2), lens radius of curvature (x 3), micro-lens diameter (x 4), and micro-lens spacing (x 5), as described later, are determined, regardless of the optical output power of the LED chips used in the MR16 LED lamp. However, the luminous flux (y 1) can still be improved by using LED chips with a higher optical output power even after the design of the TIR lens has been decided. Therefore, making the viewing angles meet their targets is relatively more important than improving the luminous flux; thus, after consulting with design engineers, this study assigned larger weights to the viewing angles (\( w_{i} = 2 \), for \( i = 2,3,4,5 \)) than to the luminous flux (\( w_{1} = 1 \)), as shown in the last row of Table 1.
As a result of brainstorming with design engineers, one important material property and four main geometric parameters, as illustrated in Fig. 2, of a TIR lens were selected as control factors to evaluate their effect on the above five quality characteristics. They are as follows:
1. Lens material (x 1): the material used to fabricate the TIR lens.
2. Lens height (x 2): the height of the TIR lens.
3. Lens radius of curvature (x 3): the radius of curvature of the TIR lens.
4. Micro-lens diameter (x 4): the diameter of the micro-lens.
5. Micro-lens spacing (x 5): the spacing between two adjacent micro-lenses.
Notably, as shown in Fig. 2, the micro-lens spacing (x 5) not only denotes the spacing between two adjacent micro-lenses in the same circle but also represents the distance between two adjacent concentric circles on which the micro-lenses are arranged. In addition, the geometric design parameters have manufacturing tolerances due to the limited precision with which a TIR lens can be fabricated. Therefore, the following four noise factors were considered to evaluate their effect on the quality characteristics of interest:
1. Tolerance in lens height (z 1): the manufacturing tolerance in the height of the TIR lens.
2. Tolerance in lens radius of curvature (z 2): the manufacturing tolerance in the radius of curvature of the TIR lens.
3. Tolerance in micro-lens diameter (z 3): the manufacturing tolerance in the diameter of a micro-lens.
4. Tolerance in micro-lens spacing (z 4): the manufacturing tolerance in the spacing between two adjacent micro-lenses.
5.2 Experimental design and data collection
In order to estimate the nonlinear effects of the four main geometric parameters, i.e., x 2–x 5, on the key quality characteristics, three experimental levels were set for each of these design parameters. For the material design parameter (x 1), two types of material were considered for the TIR lens in this study. In addition, three noise levels were considered for each of the four noise factors, i.e., z 1–z 4. Table 2 summarizes the experimental settings for the design parameters and noise factors. Note that too narrow a spacing between two adjacent micro-lenses causes them to overlap. Therefore, the experimental settings of the micro-lens spacing (x 5) were set as multiples of the setting of the micro-lens diameter (x 4), as shown in the last column of Table 2.
An orthogonal array that can accommodate one two-level and four three-level control factors requires a minimum of 9 degrees of freedom, i.e., \( (2 - 1) \times 1 + (3 - 1) \times 4 \). Hence, a Taguchi \( L_{18} (2^{1} \times 3^{7} ) \) orthogonal array was selected as the inner array to design the experiment. Here, the first design parameter (x 1) was assigned to the first column, while the remaining four design parameters, i.e., x 2–x 5, were assigned to the second to fifth columns. Similarly, the four noise factors were allocated to an outer array designed by a Taguchi \( L_{9} (3^{4} ) \) orthogonal array. Thus, the total number of experimental trials in this case study was 162, i.e., \( L_{18} \times L_{9} \), as shown in Table 3.
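The degrees-of-freedom arithmetic above can be checked in a couple of lines; the snippet simply restates the counting argument for the inner and outer arrays:

```python
# One two-level factor and four three-level factors: each factor with
# l levels contributes (l - 1) degrees of freedom, so an orthogonal
# array with at least 9 df (the 18-run L18) is needed. Crossing the
# inner array with the 9-run L9 outer array gives the trial count.
levels = [2, 3, 3, 3, 3]
dof = sum(l - 1 for l in levels)
total_trials = 18 * 9
print(dof, total_trials)   # -> 9 162
```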
To conduct the experiment, the SolidWorks 2010 (http://www.solidworks.com) modeling software was used to construct a geometric model of the TIR lens according to the settings for design parameters x 2–x 5 in Table 3, as well as to build the geometric model of an LED emitter. The constructed SolidWorks model was then fed into the TracePro 5.0 (http://www.lambdares.com) simulation software to carry out optical simulations with the parameter setting of the lens material (x 1). Notably, the LED emitter used in this study comprised nine chips, each a square with an edge length of 0.61 (mm) and a thickness of 0.15 (mm). The spacing between two adjacent chips and the diameter of the optical lens were 0.15 (mm) and 4.5 (mm), respectively. The thicknesses of the base layer and substrate were set as 0.16 (mm) and 0.54 (mm), respectively, as shown in Fig. 3. The optical power emitted by the surface of each chip in the LED emitter was set at 30 (lumens), and the wavelength of the emitted rays was set at 550 (nm). The surface property of the substrate was set as "diffuse white". In addition, the refractive index of the silicone used to fabricate the optical lens and base layer was set as 1.5.
In TracePro, the total number of rays traced can significantly influence the optical simulation results. Too few tracing rays do not provide reliable, stable, and sufficiently converged results; on the other hand, simulating too many rays is time consuming. Hence, a preliminary experiment was carried out in which the total number of tracing rays was increased from 1,000 in steps of 1,000. The simulation results showed that the changes in the five key quality characteristics over the 5,000-ray range from 31,000 to 36,000 tracing rays were smaller than 0.5 %. This was considered sufficiently stable, and thus the total number of tracing rays used in this study was 36,000. Table 3 presents a part of the collected experimental results.
5.3 Building estimation models
In order to describe the functional relationship between each quality characteristic (output variable) and the five design parameters (input variables), the LIBSVM 2.86 [45] implementation of the SVR technique was applied in this study to construct the estimation models. Here, the radial basis function (RBF), defined by

$$ K(\mathbf{x}_{i} ,\mathbf{x}_{j} ) = \exp \left( { - \gamma \left\| {\mathbf{x}_{i} - \mathbf{x}_{j} } \right\|^{2} } \right),\quad \gamma > 0, $$
was used as the kernel function for the following reasons: (1) it can nonlinearly map samples into a higher-dimensional space, (2) it has only one parameter, gamma (\( \gamma \)), and (3) it poses fewer numerical difficulties [14]. In addition, the grid-search approach [14] was used to determine the best combination of the parameters C, \( \gamma \), and \( \varepsilon \) for a given problem by trying combinations of \( (C,\gamma ,\varepsilon ) \) and picking the one with the best cross-validation accuracy, i.e., the minimum prediction error. First, the values obtained for the five quality characteristics, along with the values of the five design parameters in each experimental trial, were normalized to the range −1 to 1 according to their corresponding maximum and minimum values. A fivefold cross-validation method was then applied to the normalized experimental data. That is, the normalized data were randomly partitioned into five sub-groups; in each fold, a single sub-group was retained as the test data for validating the constructed SVR model, and the remaining four sub-groups were used as the training data for constructing the model. Table 4 summarizes the optimal parameters found for the SVR, together with the mean squared errors (MSEs) and R-squares of the approximation regression models obtained in the five folds for each quality characteristic. In order to maximize the prediction capability of the SVR model on unknown test data never encountered during training, the SVR model with the smallest test MSE, denoted by an asterisk in Table 4, was selected as the optimal approximation regression model. The selected SVR models for the five quality characteristics are denoted SVR y1, SVR y2, SVR y3, SVR y4, and SVR y5, respectively.
Thus, the normalized values of the five key quality characteristics y 1–y 5 can be predicted by feeding the normalizations of the five design parameters x 1–x 5 into the corresponding selected SVR models.
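The model-building stage can be sketched as follows. This is an illustrative outline only: the paper uses LIBSVM 2.86 directly, whereas this sketch uses scikit-learn's `SVR` (which wraps LIBSVM) on made-up data, and the parameter grid and values are assumptions in the spirit of the grid-search approach described above.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Made-up stand-ins for the 18 inner-array trials: five normalized design
# parameters (inputs) and one normalized quality characteristic (output).
X = rng.uniform(-1, 1, size=(18, 5))
y = np.tanh(X @ np.array([0.5, -0.3, 0.8, 0.1, -0.6]))  # synthetic response

# Grid search over (C, gamma, epsilon) with fivefold cross-validation,
# scored by (negative) mean squared error, as in Sect. 5.3.
grid = {
    "C": [1, 10, 100],
    "gamma": [0.01, 0.1, 1.0],
    "epsilon": [0.01, 0.1],
}
search = GridSearchCV(SVR(kernel="rbf"), grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_)
```

In the study itself, one such model would be trained per quality characteristic (SVR y1 to SVR y5), each on the same five normalized inputs.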
5.4 Optimization of design parameters
To find the optimal settings for the five design parameters of the TIR lens, the ABC algorithm was used to explore the experimental ranges, with the functional relationships between the design parameters and key quality characteristics described by the SVR y1, SVR y2, SVR y3, SVR y4, and SVR y5 models constructed in Sect. 5.3. Here, each solution is represented by a five-dimensional vector (one dimension per design parameter), and the parameters \( N_{f} \) and \( C_{\text{limit}} \), as described in Sect. 3.3, were set as 10 and 50, respectively. The fitness function was designed using Eq. (35), where the weights representing the relative importance of the key quality characteristics were set to \( w_{1} = 1 \) and \( w_{i} = 2 \) (for \( i = 2,3,4,5 \)), as shown in Table 1. Notably, Eqs. (26), (29), (32), and (33) were used to calculate the normalized quality loss for quality characteristic y 1, an LTB response, while the normalized quality losses for y 2 to y 5, NTB quality characteristics, were obtained using Eqs. (26), (27), (30), and (31) along with the information shown in Table 1. The ABC algorithm was coded in Visual C++ 6.0 and run on a personal computer with an Intel Core 2 Quad 2.66 GHz CPU and 2 GB RAM. The algorithm was terminated when the best solution found so far had not improved over the last 50 search cycles. Table 5 summarizes the execution results of implementing the ABC search procedure for 10 runs. Notably, the ABC algorithm converged on the same settings for the design parameters of the TIR lens, as shown in Table 5, in all 10 runs. The average and standard deviation of the CPU time were 69.1 (s) and 7.0 (s), respectively. On the basis of the above information, the ABC algorithm can be considered an efficient and robust optimization method for finding the optimal settings for the design parameters of a product/process.
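A minimal sketch of the ABC search loop is shown below. It assumes the standard employed/scout structure with N_f = 10 food sources and an abandonment limit of 50; the onlooker phase is collapsed into the employed phase for brevity, and the objective is a hypothetical quadratic stand-in for the weighted quality loss that would actually be evaluated through the five SVR models.

```python
import random

# Hypothetical stand-in objective: a separable quadratic replaces the
# weighted quality loss AQL_w computed from the SVR model predictions.
def objective(x):
    return sum((xi - 0.5) ** 2 for xi in x)

def abc_minimize(obj, bounds, n_food=10, limit=50, cycles=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)

    def rand_solution():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    foods = [rand_solution() for _ in range(n_food)]
    vals = [obj(f) for f in foods]
    trials = [0] * n_food
    best = min(zip(vals, foods))
    for _ in range(cycles):
        # Employed (and, for brevity, onlooker) phase: perturb one
        # dimension of each food source toward a random neighbour.
        for i in range(n_food):
            k = rng.randrange(n_food)
            j = rng.randrange(dim)
            cand = foods[i][:]
            phi = rng.uniform(-1, 1)
            cand[j] += phi * (cand[j] - foods[k][j])
            lo, hi = bounds[j]
            cand[j] = min(max(cand[j], lo), hi)
            v = obj(cand)
            if v < vals[i]:
                foods[i], vals[i], trials[i] = cand, v, 0
            else:
                trials[i] += 1
        # Scout phase: abandon sources that exceeded the trial limit.
        for i in range(n_food):
            if trials[i] > limit:
                foods[i] = rand_solution()
                vals[i] = obj(foods[i])
                trials[i] = 0
        best = min(best, min(zip(vals, foods)))
    return best

best_val, best_x = abc_minimize(objective, bounds=[(0, 1)] * 5)
print(best_val)
```

In the actual procedure, `objective` would feed the candidate design-parameter vector into the selected SVR models, compute the normalized losses, and return the weighted average quality loss of Eq. (34).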
5.5 Confirmation experiment
To verify the feasibility and effectiveness of the optimal parameter settings obtained for the TIR lens, as shown in Table 5, a confirmation experiment using the SolidWorks and TracePro software was conducted. The results are summarized in the first trial in Table 6. In addition, as mentioned in Sect. 5.1, four noise factors were considered in order to evaluate the effect of manufacturing tolerances on the five quality characteristics of interest. To evaluate the robustness of the optimal settings acquired for the design parameters of the TIR lens, a Taguchi \( L_{9} (3^{4} ) \) orthogonal array was employed to design another nine confirmation trials, shown as the second to tenth trials in Table 6. The simulation results in Table 6 reveal that all five quality characteristics in all trials conform to the specification requirements shown in Table 1. Furthermore, the coefficients of variation for all five quality characteristics were smaller than 0.02. This indicates that the proposed integrated approach is a feasible and effective method for resolving multiresponse parameter design problems, and that the TIR lens designed on the basis of the optimal parameter settings obtained via the proposed solution procedure is highly robust. Therefore, the proposed method can be directly applied to a real manufacturing process, and the case study on improving the overall lighting performance of an MR16 LED lamp by optimizing the design of the TIR lens can be considered a success.
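The robustness criterion used above is the coefficient of variation of each response across the ten confirmation trials; a sketch with made-up readings (the paper reports values below 0.02 for all five responses):

```python
import statistics

def coefficient_of_variation(xs):
    """CV = sample standard deviation / mean."""
    return statistics.stdev(xs) / statistics.fmean(xs)

# Illustrative luminous-flux readings across confirmation trials
# (hypothetical numbers, not the paper's data).
values_y1 = [251, 249, 250, 252, 248]
print(coefficient_of_variation(values_y1))
```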
5.6 Comparison with the Taguchi method
The Taguchi method is a well-known traditional approach for resolving parameter design problems. To demonstrate the superiority of the integrated approach proposed in this study over the Taguchi method in dealing with parameter design problems with multiple responses, the full experimental data, partially shown in Table 3, were further analyzed using the Taguchi method. As mentioned in Sect. 5.2, Taguchi \( L_{18} (2^{1} \times 3^{7} ) \) and \( L_{9} (3^{4} ) \) orthogonal arrays were used to design the inner and outer arrays, respectively; therefore, there were eighteen signal-to-noise (S/N) ratios for each quality characteristic. Table 7 summarizes the analysis of these S/N ratios; the asterisks in the second to sixth rows denote the optimal level settings of the design parameters when solely optimizing (maximizing) the S/N ratio of an individual quality characteristic. It can be seen in Table 7 that conflicts occurred when selecting, for each design parameter, an optimal setting that could simultaneously optimize all five key quality characteristics. For example, a setting of 15 (mm) was selected for design parameter x 2 to maximize the S/N ratios for quality characteristics y 1, y 2, y 4, and y 5, whereas x 2 was set as 25 (mm) when considering quality characteristic y 3. By examining the factor effects based on the S/N ratios, together with the suggestions of engineers, the optimal settings for the design parameters were finally determined, as shown in the last row of Table 7. Simulation experiments were then conducted, and the results are summarized in Table 8. The first experimental trial was carried out based on the optimal settings for the design parameters in Table 7, while the remaining nine trials were implemented according to a Taguchi \( L_{9} (3^{4} ) \) orthogonal array that took the four noise factors into consideration.
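The paper does not restate the S/N formulas, so the sketch below assumes the standard Taguchi forms: larger-the-better for the luminous flux and nominal-the-best for the viewing angles. The data values are made up for illustration.

```python
import math

# Standard Taguchi S/N ratios (assumed forms), in decibels.
def sn_larger_the_better(ys):
    """LTB: S/N = -10 log10( mean(1 / y^2) )."""
    n = len(ys)
    return -10 * math.log10(sum(1.0 / y ** 2 for y in ys) / n)

def sn_nominal_the_best(ys):
    """NTB: S/N = 10 log10( ybar^2 / s^2 )."""
    n = len(ys)
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)

# Illustrative outer-array observations for one inner-array run.
flux_sn = sn_larger_the_better([250, 248, 252, 249, 251])
angle_sn = sn_nominal_the_best([60, 62, 58])
print(round(flux_sn, 2), round(angle_sn, 2))
```

Eighteen such S/N values per response (one per inner-array run) are what Table 7 aggregates into factor-level effects.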
In order to illustrate the functions and strengths of the SVR and ABC algorithms, the proposed integrated approach presented in Sect. 4 was implemented again without Stage 3 and Stage 5, which aim to build estimation models through SVR and optimize control factor settings using the ABC algorithm, respectively. In addition, the overall quality performance of the TIR lens was evaluated using the weighted average quality loss calculated based on Eq. (34). The eighteen weighted average quality losses obtained were then used to calculate the weighted average quality loss respective to each level setting for each design parameter. Table 9 summarizes the results, where the asterisks denote the optimal setting of each design parameter for minimizing the weighted average quality loss. Similarly, an experimental layout that includes one trial according to the optimal settings of the design parameters for the TIR lens in Table 9, along with another nine experimental trials based on a Taguchi \( L_{9} (3^{4} ) \) orthogonal array for arranging four noise factors, was designed. Table 10 summarizes the results of this simulation.
From the results in Table 8, it can be seen that none of the ten experimental trials could provide a design for a TIR lens that makes all five quality characteristics fulfill the specification requirements. This implies that manually making trade-offs to consider all quality characteristics at once is an undesirable and inappropriate way to resolve a multiple-response parameter design problem. Furthermore, according to Table 10, only four of the ten experimental trials produced a design that makes a TIR lens fully meet its specification requirements. This provides adequate evidence that the ABC algorithm can search the SVR models over the entire experimental ranges, thus finding the (near) optimal settings for the design parameters in continuous domains rather than at the limited discrete experimental levels used in the original experimental layout. Therefore, the settings for the design parameters of a TIR lens obtained through the proposed approach were more robust than those acquired by using the proposed approach without SVR modeling and ABC optimization. From the above results and analyses, the integrated approach proposed in this study can be considered a feasible and effective tool for solving general multiresponse parameter design problems in the real world.
6 Conclusions
For most modern products/processes, there are usually several quality characteristics that must be optimized simultaneously by determining the optimal settings for their design/process parameters. Although the Taguchi method is a famous and common approach for tackling parameter design problems, it introduces uncertainties and difficulties because subjective trade-offs have to be made when determining the optimal settings of control factors for multiple quality characteristics simultaneously. In this study, the SVR technique, the Taguchi loss function, and the ABC algorithm were applied to design a six-staged procedure for dealing with these complicated and troublesome problems. The feasibility and effectiveness of the proposed approach were demonstrated via a case study in which the design of a TIR lens used in fabricating an MR16 LED lamp was optimized. The experimental results indicate that the proposed solution procedure can provide highly robust design parameter settings for a TIR lens and is directly applicable in real manufacturing processes. A comparison with the Taguchi method revealed that resolving a multiple-response parameter design problem by manually making trade-offs among all quality characteristics is undesirable and inappropriate. In addition, the ABC algorithm can search for the (near) optimal settings of the design parameters by exploring the SVR models in continuous domains instead of at the limited discrete experimental levels, thus finding a more robust design for the TIR lens than that obtained by the traditional analysis of variance. Therefore, the integrated approach proposed in this study can be considered feasible and effective and can be popularized as a useful tool for resolving general multiresponse parameter design problems in the real world.
The proposed procedure also has certain limitations. First, the radial basis function (RBF) kernel and the parameters found by the grid-search approach are not guaranteed to yield an optimal SVR model of the functional dependence of the output responses on the input control factors. Second, the ideal values IV i,STB and IV i,LTB must be set subjectively. Finally, the parameter settings of the ABC algorithm may influence the final search results, yet there are no exact rules for choosing them; moreover, the settings of the design parameters obtained from the ABC algorithm cannot be proven to be the true optima, and their feasibility and effectiveness can only be verified experimentally.
Future work in this area may include the following: (1) determining the best combination of SVR parameters using heuristic algorithms with higher efficiency and effectiveness; (2) applying a Taguchi quality loss that is fully linked to the actual production or manufacturing cost when evaluating the overall performance of a product; and (3) utilizing various contemporary methodologies in the optimization stage when tackling multiresponse parameter design problems and comparing the efficiency and effectiveness of their solutions.
References
Tong LI, Wang CH, Chen HC (2005) Optimization of multiple responses using principal component analysis and technique for order preference by similarity to ideal solution. Int J Adv Manuf Tech 27(3–4):407–414. doi:10.1007/s00170-004-2157-9
Routara BC, Mohanty SD, Datta S, Bandyopadhyay A, Mahapatra SS (2010) Combined quality loss (CQL) concept in WPCA-based Taguchi philosophy for optimization of multiple surface quality characteristics of UNS C34000 brass in cylindrical grinding. Int J Adv Manuf Tech 51(1–4):135–143. doi:10.1007/s00170-010-2599-1
Al-Refaie A (2012) Optimizing performance with multiple responses using cross-evaluation and aggressive formulation in data envelopment analysis. IIE Trans 44(4):262–276. doi:10.1080/0740817x.2011.566908
Lu DW, Antony J (2002) Optimization of multiple responses using a fuzzy-rule based inference system. Int J Prod Res 40(7):1613–1625. doi:10.1080/00207540210122202
Kovach J, Cho BR (2008) Development of a multidisciplinary-multiresponse robust design optimization model. Eng Optimiz 40(9):805–819. doi:10.1080/03052150802046304
Dubey AK, Yadava V (2008) Multi-objective optimisation of laser beam cutting process. Opt Laser Technol 40(3):562–570. doi:10.1016/j.optlastec.2007.09.002
Sibalija TV, Majstorovic VD, Miljkovic ZD (2011) An intelligent approach to robust multi-response process design. Int J Prod Res 49(17):5079–5097. doi:10.1080/00207543.2010.511476
He Z, Zhu PF, Park SH (2012) A robust desirability function method for multi-response surface optimization considering model uncertainty. Eur J Oper Res 221(1):241–247. doi:10.1016/j.ejor.2012.03.009
Bera S, Mukherjee I (2012) An adaptive penalty function-based maximin desirability index for close tolerance multiple-response optimization problems. Int J Adv Manuf Tech 61(1–4):379–390. doi:10.1007/s00170-011-3704-9
Kim KJ, Lin DKJ (2000) Simultaneous optimization of mechanical properties of steel by maximizing exponential desirability functions. J Roy Stat Soc C Appl 49:311–325. doi:10.1111/1467-9876.00194
Ramezani M, Bashiri M, Atkinson AC (2011) A goal programming-TOPSIS approach to multiple response optimization using the concepts of non-dominated solutions and prediction intervals. Expert Syst Appl 38(8):9557–9563. doi:10.1016/j.eswa.2011.01.139
Hsu CM (2012) Applying genetic programming and ant colony optimisation to improve the geometric design of a reflector. Int J Syst Sci 43(5):972–986. doi:10.1080/00207721.2010.547627
Salmasnia A, Kazemzadeh RB, Tabrizi MM (2012) A novel approach for optimization of correlated multiple responses based on desirability function and fuzzy logics. Neurocomputing 91:56–66. doi:10.1016/j.neucom.2012.03.001
Hsu C-W, Chang C–C, Lin C-J (2008) A practical guide to support vector classification. http://www.csie.ntu.edu.tw/~cjlin
Samanta S, Chakraborty S (2011) Parametric optimization of some non-traditional machining processes using artificial bee colony algorithm. Eng Appl Artif Intel 24(6):946–957. doi:10.1016/j.engappai.2011.03.009
Szeto WY, Wu Y, Ho SC (2011) An artificial bee colony algorithm for the capacitated vehicle routing problem. Eur J Oper Res 215(1):126–135. doi:10.1016/j.ejor.2011.06.006
Cuevas E, Sencion-Echauri F, Zaldivar D, Perez-Cisneros M (2012) Multi-circle detection on images using artificial bee colony (ABC) optimization. Soft Comput 16(2):281–296. doi:10.1007/s00500-011-0741-0
Karaboga D, Ozturk C, Karaboga N, Gorkemli B (2012) Artificial bee colony programming for symbolic regression. Inf Sci 209:1–15. doi:10.1016/j.ins.2012.05.002
Boser B, Guyon I, Vapnik V (1992) A training algorithm for optimal margin classifiers. In: Haussler D (ed) Proceedings of the 5th annual ACM workshop on computational learning theory. ACM Press, Pittsburgh, pp 144–152
Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20(3):273–297. doi:10.1007/bf00994018
Guyon I, Boser B, Vapnik V (1993) Automatic capacity tuning of very large VC-dimension classifiers. In: Hanson SJ, Cowan JD, Giles CL (eds) Advances in neural information processing systems, vol 5. Morgan Kaufmann, San Mateo, CA, pp 147–155
Schölkopf B, Burges C, Vapnik V (1995) Extracting support data for a given task. In: Fayyad U, Uthurusamy R (eds) Proceedings of first international conference on knowledge discovery and data mining. AAAI Press, Menlo Park, pp 252–257
Vapnik V, Golowich S, Smola A (1997) Support vector method for function approximation, regression estimation, and signal processing. In: Mozer MC, Jordan MI, Petsche T (eds) Advances in neural information processing systems, vol 9. MIT Press, Cambridge, pp 281–287
Drucker H, Burges CJC, Kaufman L, Smola A, Vapnik V (1997) Support vector regression machines. In: Mozer MC, Jordan MI, Petsche T (eds) Advances in neural information processing systems, vol 9. MIT Press, Cambridge, pp 155–161
Vapnik V (1998) Statistical learning theory. Wiley, New York
Vapnik V (1995) The nature of statistical learning theory. Springer, New York
Karush W (1939) Minima of functions of several variables with inequalities as side constraints. Master thesis, University of Chicago, Chicago
Kuhn H, Tucker A (1951) Nonlinear programming. In: Proceedings of the 2nd Berkeley symposium on mathematical statistics and probabilistics. University of California Press, Berkeley, pp 481–492
Li DC, Liu CW, Fang YH, Chen CC (2010) A yield forecast model for pilot products using support vector regression and manufacturing experience-the case of large-size polariser. Int J Prod Res 48(18):5481–5496. doi:10.1080/00207540903100051
Bi LZ, Tsimhoni O, Liu YL (2011) Using the support vector regression approach to model human performance. IEEE Trans Syst Man Cybern A 41(3):410–417. doi:10.1109/tsmca.2010.2078501
Corazza A, Di Martino S, Ferrucci F, Gravino C, Mendes E (2011) Investigating the use of support vector regression for web effort estimation. Empir Softw Eng 16(2):211–243. doi:10.1007/s10664-010-9138-4
Wang JJ, Li L, Niu DX, Tan ZF (2012) An annual load forecasting model based on support vector regression with differential evolution algorithm. Appl Energ 94:65–70. doi:10.1016/j.apenergy.2012.01.010
Tezcan J, Cheng Q (2012) Support vector regression for estimating earthquake response spectra. Bull Earthq Eng 10(4):1205–1219. doi:10.1007/s10518-012-9350-2
Xin N, Gu XF, Wu H, Hu YZ, Yang ZL (2012) Application of genetic algorithm-support vector regression (GA-SVR) for quantitative analysis of herbal medicines. J Chemom 26(7):353–360. doi:10.1002/cem.2435
Chevalier RF, Hoogenboom G, McClendon RW, Paz JA (2011) Support vector regression with reduced training sets for air temperature prediction: a comparison with artificial neural networks. Neural Comput Appl 20(1):151–159. doi:10.1007/s00521-010-0363-y
Hong WC (2012) Application of seasonal SVR with chaotic immune algorithm in traffic flow forecasting. Neural Comput Appl 21(3):583–593. doi:10.1007/s00521-010-0456-7
Li GZ, Meng HH, Yang MQ, Yang JY (2009) Combining support vector regression with feature selection for multivariate calibration. Neural Comput Appl 18(7):813–820. doi:10.1007/s00521-008-0202-6
Cristianini N, Shawe-Taylor J (2000) An introduction to support vector machines and other kernel-based learning methods. Cambridge University Press, New York
Smola AJ, Schölkopf B (2004) A tutorial on support vector regression. Stat Comput 14(3):199–222. doi:10.1023/b:stco.0000035301.49549.88
Kumar S (2005) Neural networks: a classroom approach. McGraw-Hill, Boston
Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Technical report-TR06. Computer Engineering Department, Erciyes University
Karaboga D, Basturk B (2008) On the performance of artificial bee colony (ABC) algorithm. Appl Soft Comput 8(1):687–697. doi:10.1016/j.asoc.2007.05.007
Karaboga D, Basturk B (2007) Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems. In: Melin P, Castillo O, Aguilar LT, Kacprzyk J, Pedrycz W (eds) Foundations of fuzzy logic and soft computing, Proceedings, vol 4529, Lecture notes in computer science. Springer, Berlin, pp 789–798
Yeh WC, Hsieh TJ (2012) Artificial bee colony algorithm-neural networks for S-system models of biochemical networks approximation. Neural Comput Appl 21(2):365–375. doi:10.1007/s00521-010-0435-z
Chang C-C, Lin C-J (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol 2(3):27:1–27:27
Acknowledgments
The author would like to thank the National Science Council, Taiwan, ROC for supporting this research under Contract No. NSC 101-2221-E-159-009. He would also like to thank Raymond Huang for his invaluable assistance during this study.
Hsu, CM. Application of SVR, Taguchi loss function, and the artificial bee colony algorithm to resolve multiresponse parameter design problems: a case study on optimizing the design of a TIR lens. Neural Comput & Applic 24, 1293–1309 (2014). https://doi.org/10.1007/s00521-013-1357-3