1 Introduction

To survive in a competitive industry and meet customer demands, companies must constantly enhance their production processes. Improvements in quality, cost, flexibility, speed and reliability are frequent challenges faced by the sector. Several articles in the literature present mathematical and computational strategies to improve quality in industrial processes, such as [1,2,3,4,5]. On the other hand, the relationship between quality and price is very important, since price represents a loss for the consumer at the time of purchase and low quality represents an additional loss during the use of the product. This “loss” includes the cost of customer dissatisfaction, which damages the company’s reputation [6, 7]. This concept differs markedly from traditional producer-oriented guidelines and includes rework, waste, warranty and service costs as measures of quality.

For these reasons, Taguchi introduced the quadratic quality loss function to redefine the quality of a product. Any process whose quality loss function (QLF) value decreases can be said to have improved its performance. The QLF is a mathematical model that expresses, in monetary terms, the quality loss resulting from deviation from the target specification. When a process involves only a few variables, the QLF is easy to calculate; an industrial process with many parameter combinations and several quality characteristics, however, can make the analysis difficult. Modern computing overcomes this problem by enabling the complex mathematical applications required to improve quality. The objective in calculating the loss is to evaluate quantitatively the quality loss caused by variation. The QLF can then be applied to problems in different sectors, such as health, real estate and manufacturing. Yet, despite this flexibility, such applications are rarely reported in the literature.

Furthermore, applications aimed at industrial processes such as welding require modeling that optimizes quality while also reducing the costs related to losses. Several studies use different optimization methods aimed at quality, such as the genetic algorithm [4], particle swarm optimization [8], the salp swarm algorithm [9], the bat optimization algorithm [10] and sunflower optimization [11]. Artificial neural networks [12, 13] and the normal boundary intersection (NBI) method [14] have also been applied to optimization and damage-detection problems. Among these, the NBI method stands out for its ability to build Pareto frontiers with equidistant, non-dominated solutions. However, none of these studies applies such methods to the QLF.

Another common characteristic of industrial processes is the number of quality variables that affect industrial cost and, consequently, may present a significant variance–covariance structure. Analyzing data of this nature requires multivariate techniques [15], which allow the quality variables to be interpreted without neglecting the covariance between them. Among the most widely used multivariate techniques, principal component analysis (PCA) stands out. PCA is widely used to reduce the dimensionality of large, correlated data sets [16, 17], producing uncorrelated response vectors. Examples of its application can be found in [9, 16, 18].

To address the scarcity of QLF applications and contribute to research in this area, this study proposes a multivariate optimization method for the Taguchi loss function. The proposal merges response surface methodology (RSM), ordinary least squares, PCA and NBI into a new methodology that finds the best quality index based on the process cost in the face of loss. Starting from an a priori experimental design, the target value is found and the loss function of each experiment is calculated, creating a new loss function experimental matrix. Given the correlated nature and size of the data, PCA is applied to reduce the data dimension (and computational effort) and extract uncorrelated scores. The component scores of the loss functions are then modeled, and multi-objective optimization is performed with the NBI method, creating a frontier of optimal loss function solutions. In addition, this study presents a total loss function metric that selects the best point on the Pareto frontier by minimizing the QLF. To the authors’ best knowledge, no study in the literature presents a multivariate optimization method for the Taguchi loss function using techniques such as PCA and NBI.

To demonstrate the proposed method in a real case, the flux-cored arc welding (FCAW) of a stainless-steel cladding process is investigated. FCAW joins metals using the heat of an electric arc established between the wire and the workpiece [19]. The process has several quality characteristics that are investigated in this study, such as bead width, depth of penetration, height of the reinforcement, dilution and productivity. In addition, FCAW stands out for being widely studied, as in [20,21,22,23,24].

This study is organized as follows: Sect. 2 presents the theoretical background, describing techniques such as the loss function, RSM, PCA and NBI; Sect. 3 presents the proposed method; Sect. 4 details the application, results and technical discussion; and Sect. 5 draws the conclusions.

2 Theoretical background

2.1 Taguchi loss function

The Taguchi loss function (or quality loss function) measures the loss incurred when a service or product does not satisfy the demanded standards [7]. There are two reasons for using it. First, characteristics with different measurement units can be converted into a common magnitude: loss scores. Second, the loss grows increasingly severe as the value deviates from the target, since the loss function is quadratic rather than linear [25].

In modeling the loss, Taguchi also asserts that customers become increasingly dissatisfied as performance deviates from the target. He proposed a quadratic function to represent this dissatisfaction, obtained from a Taylor expansion of the loss about the target, in which the first-derivative term vanishes. The curve is centered on the target value, which represents the best performance. Identifying this optimal value, however, is not necessarily a simple task, and the designer’s best approximation is usually adopted [6].

The Taylor series expansion of the loss function L(y) around the target value T is defined as [7]:

$$L(y) = L\,(T) + \frac{{L^{\prime}(T)}}{1!}(y - T) + \frac{{L^{\prime\prime}(T)}}{2!}(y - T)^{2} + \cdots + \frac{{L^{(n)} (T)}}{n!}(y - T)^{n}$$
(1)

Given that L(y) is 0 when y = T (by definition, the quality loss is zero when y = T), and that the function has a minimum at this point, the first derivative with respect to y vanishes at y = T. Therefore, the first two terms of Eq. (1) are equal to zero. Disregarding the terms of order higher than two (truncating at the second-order term), the equation becomes:

$$L(y) = \frac{{L^{\prime\prime}(T)}}{2!}(y - T)^{2}$$
$$L(y) = \delta (y - T)^{2} ,$$
(2)

where δ is the proportionality constant. The loss increases rapidly as the difference between the real and target values increases, since the loss function is quadratic [7]. Because the loss is represented by a continuous function, optimization techniques can locate the minimum point corresponding to the lowest losses in a manufacturing process.
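To make Eq. (2) concrete, the minimal Python sketch below evaluates the quadratic loss for a few observations. The target, tolerance and repair cost are hypothetical values, and δ is set by the common rule that the loss at the tolerance limit equals the repair cost; none of these numbers comes from this study.

```python
# Minimal sketch of Eq. (2); target T, tolerance and repair cost are
# hypothetical values chosen only for illustration.

def quality_loss(y: float, target: float, delta: float) -> float:
    """Monetary loss L(y) = delta * (y - T)^2 for an observed value y."""
    return delta * (y - target) ** 2

# One common way to set delta: if a deviation of Delta0 (the tolerance)
# costs A0 to repair, then delta = A0 / Delta0**2.
A0, Delta0 = 50.0, 0.5        # assumed repair cost [US$] and tolerance [mm]
delta = A0 / Delta0 ** 2      # proportionality constant of Eq. (2)

T = 10.0                      # assumed target dimension [mm]
for y in (10.0, 10.2, 10.5):
    print(f"y = {y:.1f} mm -> loss = US$ {quality_loss(y, T, delta):.2f}")
# Prints 0.00, 8.00 and 50.00: zero loss on target, quadratic growth off it.
```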

2.2 Response surface methodology

Response surface methodology (RSM) is a tool for modeling, analyzing and optimizing problems in which the responses are influenced by several variables. It also applies where the relationship between the dependent and independent variables is unknown [26].

An approximation of the real relationship can be used to analyze a process. Following the usual argument of a multidimensional Taylor series expansion, a higher-order polynomial truncated at the quadratic term yields a second-order response surface. In regions with curvature, this model provides satisfactory results. It is expressed mathematically by Eq. (3), where β represents the model coefficients, k the number of independent variables and ε the error. The method is used widely in the literature, where many papers apply response surface methodology to optimization problems [27, 28].

$$Y\left( {\mathbf{x}} \right) = \beta_{0} + \sum\limits_{i = 1}^{k} {\beta_{i} } x_{i} + \sum\limits_{i = 1}^{k} {\beta_{ii} } x_{i}^{2} + \sum\limits_{i < j} {\sum {\beta_{ij} } } x_{i} x_{j} + \varepsilon .$$
(3)
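As an illustration of how the coefficients of Eq. (3) can be estimated by ordinary least squares, the sketch below fits the full second-order model for k = 2 factors on synthetic data; the design-matrix columns mirror the terms of Eq. (3), and the adjusted R² mirrors the fit measure reported later in Table 4. This is a sketch under assumed data, not the models of this study.

```python
import numpy as np

# OLS fit of the second-order model of Eq. (3) with k = 2 factors;
# the responses below are synthetic and serve only as an example.
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 31), rng.uniform(-1, 1, 31)    # coded levels
y = 5 + 2*x1 - 3*x2 + 1.5*x1**2 + 0.8*x1*x2 + rng.normal(0, 0.1, 31)

# Columns: beta_0, beta_1, beta_2, beta_11, beta_22, beta_12 (cf. Eq. (3)).
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Adjusted R^2, the fit measure reported for the models in Table 4.
n, p = X.shape
ss_res = np.sum((y - X @ beta) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2_adj = 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))
print(np.round(beta, 3), round(r2_adj, 4))
```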

2.3 Principal component analysis

Principal component analysis (PCA) is a multivariate analysis technique used to transform a set of responses or quality characteristics into linear combinations of uncorrelated components. PCA has three goals: exploration, reduction and classification of data. The best results are obtained when the responses or quality characteristics are highly correlated, positively or negatively [29].

According to Johnson and Wichern [30], PCA aims to explain the variance–covariance structure of variables through a few linear combinations. These authors state that if the multi-objective functions f1(x), f2(x),…, fp(x) have correlated response surfaces, they can be written as a random vector \({\mathbf{Y}}^{T} = [Y_{1} ,Y_{2} , \ldots ,Y_{p} ]\). Assuming that Σ is the variance–covariance matrix associated with this vector, Σ can be factored into eigenvalue–eigenvector pairs \((\lambda_{1} ,e_{1} ), \ldots ,(\lambda_{p} ,e_{p} )\), where \(\lambda_{1} \ge \lambda_{2} \ge \cdots \ge \lambda_{p} \ge 0\). Thus, the ith principal component is given by \({\text{PC}}_{i} = e_{i}^{T} Y = e_{i1} Y_{1} + e_{i2} Y_{2} + \cdots + e_{ip} Y_{p}\), for i = 1, 2,…, p.

PCA is often used to reduce the dimensionality of data sets that contain many correlated variables [31]. The number of principal components is less than or equal to the number of original variables, and the first few components retain most of the variation present in the data [32].

The Kaiser criterion is used to identify the number of principal components needed for the study: the retained components must explain at least 80% of the variation, and their eigenvalues must be greater than or equal to 1 [30, 33].
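A minimal sketch of this check follows; the eigenvalues are illustrative placeholders, not results of this study.

```python
import numpy as np

# Kaiser criterion sketch: retain components with eigenvalue >= 1 and
# verify that they explain at least 80% of the variation. The eigenvalues
# below are illustrative placeholders for five correlated responses.
eigval = np.array([2.90, 1.18, 0.45, 0.30, 0.17])   # sums to 5.0
keep = int(np.sum(eigval >= 1))                     # Kaiser rule -> 2
explained = eigval[:keep].sum() / eigval.sum()      # cumulative share
assert explained >= 0.80                            # 80% requirement met
print(keep, f"{explained:.1%}")                     # 2, 81.6%
```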

2.4 Normal boundary intersection

The normal boundary intersection (NBI) method is capable of finding evenly spaced Pareto-optimal solutions [34], compensating for the deficiencies of the weighted sum method [35]. The formulation can be written mathematically as Eq. (4):

$$\begin{aligned} \mathop {Max}\limits_{{\left( {{\mathbf{x}} , {\text{t}}} \right)}} \quad t \hfill \\ S.t: \, {\mathbf{\bar{\varPhi }\beta }} + t{\hat{\mathbf{n}}} = {\bar{\mathbf{F}}}\left( {\mathbf{x}} \right) \hfill \\ \, {\mathbf{x}} \in \varOmega \hfill \\ \, g_{j} (x) \le 0 \hfill \\ \, h_{j} (x) = 0 \hfill \\ \end{aligned}$$
(4)

where Φ is the payoff matrix, obtained by the individual minimization of each objective function; \({\bar{\mathbf{\varPhi }}}\) is the scaled payoff matrix; β is the weight vector associated with each point on the utopia line; t is a scalar giving the distance along \({\hat{\mathbf{n}}}\), the normal vector to the utopia line; and \({\bar{\mathbf{F}}}\left( {\mathbf{x}} \right)\) represents the vector of scaled objective functions.

NBI traces a line perpendicular to the utopia line (the convex hull of individual minima, CHIM), where the normal line is defined by Eq. (5):

$${\vec{\mathbf{r}}}\left( t \right) = \left[ {\begin{array}{*{20}c} {x_{0} } & {y_{0} } & {z_{0} } \\ \end{array} } \right]^{\text{T}} + t \times \vec{\nabla }f\left[ {\begin{array}{*{20}c} {x_{0} } & {y_{0} } & {z_{0} } \\ \end{array} } \right]^{\text{T}} .$$
(5)

To create a Pareto frontier, only one point of the surface and the direction vector are required. In the payoff matrix (\({\varvec{\Phi}}\)) and in the scaled payoff matrix (\({\bar{\mathbf{\varPhi }}}\)), the ith row contains the values of the function fi(x) evaluated at each individual optimum; its minimum and maximum entries correspond to the lower and upper limits used for scaling.

The utopia point is the vector of individual minima \({\mathbf{f}}^{{\mathbf{U}}} = \left[ {f_{1}^{*} (x_{1}^{*} ), \ldots ,f_{i}^{*} (x_{i}^{*} ), \ldots ,f_{m}^{*} (x_{m}^{*} )} \right]^{\text{T}}\). It is the best possible value but usually lies outside the feasible region [36]. Conversely, the Nadir point collects the maximum value of each objective function and is the worst possible solution \({\mathbf{f}}^{{\mathbf{N}}} = \left[ {f_{1}^{N} , \ldots ,f_{i}^{N} , \ldots ,f_{m}^{N} } \right]^{{\text{T}}}\) [36, 37]. The payoff matrices are described by Eq. (6):

$${\varvec{\Phi}} \, = \, \left[ {\begin{array}{*{20}c} {f_{1}^{*} \left( {x_{1}^{*} } \right)} & \cdots & {f_{1} \left( {x_{i}^{*} } \right)} & \cdots & {f_{1} \left( {x_{m}^{*} } \right)} \\ \vdots & \ddots & {} & {} & \vdots \\ {f_{i} \left( {x_{1}^{*} } \right)} & \cdots & {f_{i}^{*} \left( {x_{i}^{*} } \right)} & \cdots & {f_{i} \left( {x_{m}^{*} } \right)} \\ \vdots & {} & {} & \ddots & \vdots \\ {f_{m} \left( {x_{1}^{*} } \right)} & \cdots & {f_{m} \left( {x_{i}^{*} } \right)} & \cdots & {f_{m}^{*} \left( {x_{m}^{*} } \right)} \\ \end{array} } \right] \Rightarrow \, {\bar{\mathbf{\varPhi }}} \, = \, \left[ {\begin{array}{*{20}c} {\bar{f}_{1}^{*} \left( {x_{1}^{*} } \right)} & \cdots & {\bar{f}_{1} \left( {x_{i}^{*} } \right)} & \cdots & {\bar{f}_{1} \left( {x_{m}^{*} } \right)} \\ \vdots & \ddots & {} & {} & \vdots \\ {\bar{f}_{i} \left( {x_{1}^{*} } \right)} & \cdots & {\bar{f}_{i}^{*} \left( {x_{i}^{*} } \right)} & \cdots & {\bar{f}_{i} \left( {x_{m}^{*} } \right)} \\ \vdots & {} & {} & \ddots & \vdots \\ {\bar{f}_{m} \left( {x_{1}^{*} } \right)} & \cdots & {\bar{f}_{m} \left( {x_{i}^{*} } \right)} & \cdots & {\bar{f}_{m}^{*} \left( {x_{m}^{*} } \right)} \\ \end{array} } \right],$$
(6)

where: \(\bar{f}_{i} \left( {\mathbf{x}} \right) = \left[ {\frac{{f_{i} \left( {\mathbf{x}} \right) - f_{i}^{U} }}{{f_{i}^{N} - f_{i}^{U} }}} \right] = \left[ {\frac{{f_{i} \left( {\mathbf{x}} \right) - f_{i}^{I} }}{{f_{i}^{MAX} - f_{i}^{I} }}} \right]\).

Therefore, for bi-objective problems, the NBI formulation of Eq. (4) can be rewritten as the Eq. (7):

$$\left\{ \begin{aligned} & \mathop {\text{Min}}\limits_{{\mathbf{x}}} \;F({\mathbf{x}}) = \bar{f}_{1} \left( {\mathbf{x}} \right) \\ & {\text{s}}.{\text{t}}.{:}\;\bar{f}_{1} \left( {\mathbf{x}} \right) - \bar{f}_{2} \left( {\mathbf{x}} \right) + 2\beta_{1} - 1 = 0 \\ & \quad \;\;{\mathbf{x}} \in \varOmega \\ & \quad \;\;g_{j} ({\mathbf{x}}) \le 0 \\ & \quad \;\;h_{j} ({\mathbf{x}}) = 0 \\ \end{aligned} \right..$$
(7)
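To illustrate Eqs. (4)–(7), the hedged sketch below solves the bi-objective NBI subproblems for two arbitrary quadratic objectives that stand in for fitted models: it builds the payoff matrix of Eq. (6), scales the objectives, and sweeps β1 to trace a 21-point frontier. The objectives, bounds and starting point are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

# NBI sketch for the bi-objective case of Eq. (7). The two quadratic
# objectives are arbitrary stand-ins for fitted response models.
f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2
x0, bounds = np.zeros(2), [(-2.0, 2.0)] * 2

# Payoff matrix (Eq. (6)): minimize each objective individually.
x1_star = minimize(f1, x0, bounds=bounds).x
x2_star = minimize(f2, x0, bounds=bounds).x
fU = np.array([f1(x1_star), f2(x2_star)])            # utopia point
fN = np.array([f1(x2_star), f2(x1_star)])            # nadir estimate

fb1 = lambda x: (f1(x) - fU[0]) / (fN[0] - fU[0])    # scaled objectives
fb2 = lambda x: (f2(x) - fU[1]) / (fN[1] - fU[1])

pareto = []
for beta1 in np.linspace(0.0, 1.0, 21):              # 21 subproblems
    con = {"type": "eq",
           "fun": lambda x, b=beta1: fb1(x) - fb2(x) + 2 * b - 1}
    res = minimize(fb1, x0, bounds=bounds, constraints=[con], method="SLSQP")
    pareto.append((f1(res.x), f2(res.x)))
print(np.round(np.array(pareto)[:3], 3))             # first frontier points
```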

3 NBI PCA-based multivariate Taguchi loss function optimization

Many studies use optimization approaches to improve product quality (as noted in Sect. 1) while reducing costs and losses during the process. In these industrial processes, significant correlation is often found between the quality responses, which calls for appropriate techniques to treat such data. Therefore, this study proposes a multivariate approach to the Taguchi loss function using PCA and the NBI optimization technique. The method can be divided into five steps, illustrated in Fig. 1 and described below.

Fig. 1
figure 1

Flowchart of the multivariate Taguchi loss function optimization approach

  1. Step 1.

    Given a suitable design of experiments (DOE), such as RSM, an experimental matrix can be created for the process under analysis. The experimental runs must be randomized to avoid bias. All process responses can then be collected, such as quality characteristics and sustainability indicators, in addition to the process cost.

  2. Step 2.

    After collecting all the responses, the loss function must be calculated. For that, the utopian value of each response must be found. These values can be provided by the customer or obtained through individual optimization of each quality response (individual optimization is used in the application of this study). From Eq. (2), the loss function values can then be calculated for each DOE run.

  3. Step 3.

    Based on the new experimental matrix, which contains the loss function values (Li), the degree of correlation between these values must be analyzed. If the responses have a significant variance–covariance structure, a multivariate strategy such as PCA must be used. The necessary number of components is determined with the Kaiser criterion, as presented in Sect. 2.3, and the principal component scores are then extracted.

  4. Step 4.

    Considering the loss function component scores (LPCi), the DOE must be modeled and analyzed again based on these scores, estimating the coefficients needed to perform the multi-objective optimization. For this, the NBI method is used, which generates Pareto frontiers from different weight distributions in the constraints (detailed in Sect. 2). This step finds the true optimal values of the quality responses based on the loss functions (a compact sketch of steps 2–4 follows this list).

  5. Step 5.

    From the Pareto frontier of the original responses, the loss functions are recalculated considering the costs of each parameter setting. To carry out this step, the total costs (such as labor, materials and energy) must be modeled. The corresponding values are then found for each point on the Pareto frontier, yielding the loss value of each quality response.
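The compact sketch below strings steps 2–4 together under stated assumptions: the response matrix Y and the utopia targets T are placeholders (not data from this study), the loss matrix uses δ = 1 as in Eq. (2), and the retained principal component scores are the inputs to the modeling and NBI optimization of step 4.

```python
import numpy as np

# Compact sketch of steps 2-4; Y (31 runs x 5 responses) and the utopia
# targets T are placeholders, not data from this study.
rng = np.random.default_rng(2)
Y = rng.normal(loc=10.0, scale=1.0, size=(31, 5))
T = np.full(5, 10.0)                        # stand-in utopia values

L = (Y - T) ** 2                            # step 2: loss matrix (delta = 1)
R = np.corrcoef(L, rowvar=False)            # step 3: correlation structure
eigval, eigvec = np.linalg.eigh(R)
order = np.argsort(eigval)[::-1]            # largest eigenvalues first
eigval, eigvec = eigval[order], eigvec[:, order]

Z = (L - L.mean(0)) / L.std(0, ddof=1)      # standardize before projecting
scores = Z @ eigvec[:, eigval >= 1]         # retained LPC_i score columns

# Step 4: each score column is then fitted with the model of Eq. (3)
# and the fitted models are optimized jointly by NBI (Sect. 2.4).
print(scores.shape)
```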

As a criterion for decision making, the best point is found with Eq. (8), which identifies the Pareto-frontier point with the lowest total loss. The equation sums the loss functions of all responses of interest for each parameter setting, weighted by the respective total cost (δ). The point with the lowest value is the best point on the Pareto frontier. This metric is called the total loss function (TLF).

$${\text{TLF}} = \sum\limits_{i = 1}^{n} {\left[ {\delta_{i} \frac{{\left( {\hat{F}_{{{\text{s}}\left( i \right)}} \left( {\mathbf{x}} \right) - T_{i} } \right)^{2} }}{{T_{i} }}} \right]} .$$
(8)
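A short sketch of the TLF selection is given below; the frontier responses, targets and cost factors δi are hypothetical placeholders used only to show how Eq. (8) ranks the Pareto points.

```python
import numpy as np

# TLF of Eq. (8) evaluated over a Pareto frontier; all numbers below are
# hypothetical placeholders (3 frontier points, 2 responses).
F_hat = np.array([[14.9, 0.90],        # predicted responses per point
                  [14.5, 0.95],
                  [14.0, 1.05]])
T = np.array([15.0, 0.85])             # target of each response
delta = np.array([[1.1, 1.1],          # cost-based factor per point/response
                  [0.9, 0.9],
                  [1.3, 1.3]])

tlf = (delta * (F_hat - T) ** 2 / T).sum(axis=1)   # Eq. (8), row by row
best = int(np.argmin(tlf))             # lowest total loss on the frontier
print(np.round(tlf, 4), "best point:", best)
```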

4 A case study of flux-cored arc welding (FCAW) of stainless-steel cladding process

4.1 FCAW process modeling

To apply the proposed method to a real case, the flux-cored arc welding of a stainless-steel cladding process was investigated. Experiments were carried out using an ESAB AristoPower 460 welding machine, an AristoFeed 30-4-watt MA6 module (employed to feed the wire), and a mechanical system to control welding speed, torch distance and torch angle, which was set at 15° in the push direction. The base metal was AISI 1020 carbon steel cut into plates of 120 × 60 × 6.35 mm. The filler metal was a flux-cored stainless-steel wire type AWS E316LT1-1/4, with a diameter of 1.2 mm and a linear density of 7.21 g/m. The chemical compositions of the materials are presented in Table 1. Minitab®, Matlab® and Visual Basic for Applications (VBA®) were used in this study.

Table 1 Chemical composition of base metal and filler metal [23]

A mixture of 75% Ar + 25% CO2 was used as the shielding gas at a flow rate of 16 L/min. The welding technique used in the experiments was bead on plate, setting the input variables according to the chosen DOE. Input variables were wire feed rate (Wf), voltage (V), welding speed (S) and the distance from the contact tip to the work piece (N).

Based on the steps described in Sect. 3, experiments following the DOE technique were performed first. Following a central composite design (CCD), 31 experiments were carried out: 16 factorial points (\(2^{k} = 2^{4}\)), eight axial points (\(2k = 2 \times 4\)) and seven center points. The parameter levels were established from previous tests and are presented in Table 2.

Table 2 Input variables and levels [23]

The samples were cut at four different points along the specimens (Fig. 2) and their cross sections were etched with a 4% nital solution and then photographed. The software Analysis Five® was used to measure the bead width (W), penetration (P), reinforcement (R), penetration area (A2) and total area (At = A1 + A2) of the weld, as shown in Fig. 2. The dilution percentage (D) was then obtained as A2/At. In addition, the productivity percentage (PI) was calculated as described in Gomes et al. [23]. Table 3 presents the measured responses for each experiment, together with the electric current (I) values measured for each parameter setting. These values are later used to calculate the energy costs of the process.

Fig. 2
figure 2

Welding bead, cross-sectional weld bead profile and bead geometry

Table 3 Experimental matrix and electric current values.

4.2 Multivariate Taguchi loss function optimization for FCAW process

Considering the quality responses, the experimental design was modeled with the general second-order polynomial model (step 1) presented in Eq. (3). The coefficients were estimated by the ordinary least squares algorithm and are listed in Table 4. All the original responses show an adequate fit (\(R_{\text{adj}}^{2}\)), as detailed in Table 4. The individual optima can then be defined to calculate the loss function of each response (step 2). Considering the characteristics of the welding process, W, R and PI must be maximized, while P and D must be minimized [23]. The utopian values obtained were Y* = [15.570 mm; 0.830 mm; 3.340 mm; 16.3%; 100%] for W, P, R, D and PI, respectively.

Table 4 Model coefficients for the RSM

With the optimum points and Eq. (2), a new experimental design was generated for the loss functions of the FCAW process (at this initial stage, δ was set to 1). Table 5 presents the loss function values, and Table 6 presents their variance–covariance structure. Given this structure, the PCA strategy was applied to extract component scores that adequately represent all the analyzed values. Under the Kaiser criterion (step 3), Fig. 3 presents the Pareto chart of the principal components of the loss functions, where two components (LPC1 and LPC2) are sufficient to represent the entire data set (eigenvalues greater than 1 and explained variance of 81.6%).

Table 5 Experimental matrix of the loss function values and component scores
Table 6 Correlation analysis for the loss function values
Fig. 3
figure 3

Pareto chart and number of principal components for loss function

From the RSM, the coefficients of the experimental design based on the loss function, represented by the principal components, can be estimated. The regression equations are given in Eqs. (9) and (10) and show high adjustment values, with \(R_{\text{adj}}^{2}\) equal to 93.89% and 94.39% for LPC1 and LPC2, respectively. Figures 4 and 5 illustrate the response surface plots (together with the contour plots) for LPC1 and LPC2, respectively. These graphs show that LPC2 behaves more linearly than LPC1. The parameters not shown on the axes were fixed at their respective center points, and both figures cover the possible combinations of the control variables. Figure 6 shows the main effects of the components with respect to the parameters, considering the relationships significant at a 95% confidence level. In Fig. 6a, the values of LPC1 increase as Wf, V and S increase, with the opposite behavior for N. In Fig. 6b, the effects of Wf and S act in opposite directions on LPC2, while V and N have smaller effects in the center-point region.

Fig. 4
figure 4

Response surface graphic for LPC1

Fig. 5
figure 5

Response surface graphic for LPC2

Fig. 6
figure 6

Main effects plot for a LPC1 and b LPC2

$$\begin{aligned} L_{\text{PC1}} & = \, 6.4{-}2.217 \times W_{\text{f}} {-}0.259 \times V{-}0.020 \times S + 0.381 \times N + 0.0716 \times W_{\text{f}} \times W_{\text{f}} + 0.0069 \times V \times V \\ & {-}0.001486 \times S \times S + 0.01024 \times N \times N + 0.0313 \times W_{\text{f}} \times V + 0.01186 \times W_{\text{f}} \times S \\ & {-}0.0069 \times W_{\text{f}} \times N + 0.00776 \times V \times S{-}0.02230 \times V \times N{-}0.00822 \times S \times N \\ \end{aligned}$$
(9)
$$\begin{aligned} L_{\text{PC2}} & = \, 29.8 + 0.069 \times W_{\text{f}} {-}1.495 \times V{-}0.276 \times S{-}0.249 \times N + 0.0151 \times W_{\text{f}} \times W_{\text{f}} \\ & + 0.02680 \times V \times V + 0.000456 \times S \times S + 0.00637 \times N \times N{-}0.0106 \times W_{\text{f}} \times V \\ & + 0.01267 \times W_{\text{f}} \times S + 0.0005 \times W_{\text{f}} \times N + 0.00014 \times V \times S \\ & {-}0.00055 \times V \times N + 0.00021 \times S \times N. \\ \end{aligned}$$
(10)

From these coefficients, the multi-objective optimization can be performed with the NBI method (step 4). The two principal components have different optimization directions: considering the original responses and their respective directions, the behavior of the loss function components was analyzed to define the appropriate approach. LPC1 mostly explains the characteristics that must be minimized (P and D), whereas LPC2 explains the characteristics that must be maximized (W, R and PI). Hence, LPC1 must be minimized, while LPC2 must be maximized. Figure 7 illustrates the degree of similarity between the original quality characteristics and the principal components of the loss function, using the Ward linkage method with absolute correlation. The individual optimal values of each component were then calculated and the payoff matrix of the NBI method assembled, as in Eq. (11).

Fig. 7
figure 7

Cluster analysis between the principal components of the loss function and the original responses

$${\varvec{\Phi}} = \left[ {\begin{array}{*{20}c} { - 2.599} & { - 1.1824} \\ { - 1.8911} & {2.4985} \\ \end{array} } \right].$$
(11)

Under the criteria established by the NBI method, the weights defining the constraints (needed to form the Pareto frontier) were generated with a simplex-lattice mixture design, producing 21 different weight combinations. The importance of PCA also stands out here: without the dimension reduction, the optimization would need 210 subproblems to cover the weight distribution over the 5 original responses. Applying the NBI method, 21 distinct optima were found, all of them Pareto-optimal. Table 7 presents the frontier values, with the optimal component values and the corresponding values of the process responses.
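As a consistency check on these counts, a {q, m} simplex-lattice contains C(q + m − 1, m) points; the sketch below reproduces the 21 and 210 figures under the assumed lattice degrees m = 20 for the two components and m = 6 for the five original responses (these degrees are our reading, not stated in the text).

```python
from math import comb

# Number of points in a {q, m} simplex-lattice design: C(q + m - 1, m).
# Degrees m = 20 and m = 6 are assumptions that reproduce the counts
# quoted in the text (21 subproblems vs. 210 without PCA).
def lattice_points(q: int, m: int) -> int:
    return comb(q + m - 1, m)

print(lattice_points(2, 20))   # 21 weight vectors for LPC1, LPC2
print(lattice_points(5, 6))    # 210 weight vectors for the 5 responses
```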

Table 7 Pareto frontier for the multivariate Taguchi loss function optimization approach

4.3 Optimal point selection using the total loss function

Choosing the best point on a Pareto frontier is not a trivial task. To find it, the loss functions must be recalculated at the optimal Pareto values; that is, the cost of the FCAW process is found for each point of the Pareto frontier from the machine parameters. The FCAW process cost (Ct) was calculated based on Marques et al. [19] and is given, for each line of the Pareto frontier, in Table 7. In this work, Ct included machine and labor (Cml), filler metal and flux (Cmf), gas (Cg) and energy (Ce) costs, as shown in Eq. (12). Additional information used to estimate the process costs is given in Table 8, and Table 9 presents the equations for the components of Ct.

Table 8 Information used to estimate costs
Table 9 Equations of the costs included in Ct.
$$C_{\text{t}} = \, C_{\text{ml}} + \, C_{\text{mf}} + \, C_{\text{g}} + \, C_{\text{e}} .$$
(12)

With the information presented above and Eq. (8), the total loss function value can be found (step 5). This value aggregates the total loss of all quality responses in relation to the cost of each parameter setting. The best point on the Pareto frontier is therefore the one with the lowest total loss function value, which represents the lowest loss for the process. To this end, the total cost, through the factor δ, is calculated for each point on the Pareto frontier, and Eq. (8) is then applied to find the minimum total loss. Table 10 presents the cost and loss values for the Pareto solutions, together with the TLF values indicating the optimal point of the frontier. The best point on the frontier is thus line 2, with parameters X = [9.43; 28.89; 21.18; 20.56] for Wf, V, S and N, respectively. This configuration yields Y = [14.655 mm; 0.959 mm; 3.511 mm; 18.03%; 89.94%] for W, P, R, D and PI, respectively, the optimal response on the Pareto frontier. Figure 8 illustrates the relationship of the TLF with the values found in the optimization of the multivariate loss functions (LPC1 and LPC2); the red dot highlights the optimal value found.

Table 10 Cost, loss function and TLF values calculated for the Pareto frontier
Fig. 8
figure 8

Relationship between the Pareto frontier and the total loss function

4.4 Comparison of results through the TLF approach

To compare the results with the literature, the findings of Gomes et al. [23] were investigated. In that study, the authors performed a direct optimization of the FCAW process, combining the PCA technique with the mean square error approach (MMSE). To compare the results, the process costs were analyzed from the machine parameters at the optimum point found by Gomes et al. [23]. The MMSE method provided machine parameter values of Wf = 10.31 m/min, V = 26.97 V, S = 50.33 cm/min and N = 23.36 mm. The TLF decision-making approach of Eq. (8) was then applied. Table 11 presents the optimal points of both studies, together with the cost information and the total loss function values.

Table 11 Comparison of studies based on optimal values and TLF decision maker

The optimum point found by Gomes et al. [23] has a cost of US$ 5.30 and a final TLF value of 4.7011, whereas the method proposed in this study produced a lower total loss value (TLF = 1.0521), proving to be the better option for this process. In other words, the multivariate Taguchi loss function optimization method proposed in this work provided results closer to the established targets. In addition, the method considers the relationship established by the process cost, producing results closer to industrial reality.

5 Conclusion

This study presented a multivariate proposal to find the combination of parameters that minimizes the total quality loss based on cost and loss functions. The proposal combines QLF, DOE, PCA and NBI. A case study of the flux-cored arc welding of a stainless-steel cladding process was used to validate the method. The following conclusions are drawn:

  • The multivariate Taguchi loss function optimization method is a viable alternative for optimizing quality responses based on the calculated loss functions. In addition, it offers a decision-making alternative at the optimum points of the Pareto frontier that considers the reduction of process costs;

  • In the FCAW application, the method yielded an optimal value of Y = [14.655 mm; 0.959 mm; 3.511 mm; 18.03%; 89.94%] for W, P, R, D and PI, respectively, at a cost of US$ 11.13, the best value on the Pareto frontier. These results also bring competitive advantages for the business, increasing customer retention, improving the company’s reputation and enlarging market share.

  • The method also emphasizes the quality characteristics of interest to the customer, whose target values may vary with the customer’s objective. The benefits of the optimization thus translate into higher quality and lower costs from the customer’s point of view.

  • The PCA strategy reduced the data dimension while accounting for the variance–covariance structure of the data set. Combined with the NBI technique, PCA reduced the number of optimization subproblems by 90% (from 210 to 21), lowering the computational effort required for this application.

  • The comparison with results from another study in the literature showed that the proposed method performs better when the process cost is considered. The loss-based approach provided results with smaller deviations from the targets and a lower total loss. The TLF decision-making method proved to be a valid option for finding the best point on a Pareto frontier in industrial applications and can be extended to other segments.

Finally, as suggestions for future studies, the proposed method can be extended to stochastic applications, as can the use of other optimization and multivariate techniques. In addition, the TLF strategy can be applied to decision making in other processes.