Introduction

Since the end of the last century, it has been clear to the metal forming community that the speed and accuracy of numerical simulation methods would soon allow not merely feasibility and validation studies, but truly optimized process designs and solutions. The earliest attempts addressed steady-state forming processes [1], which required less computational capacity. Non-steady sheet and bulk metal forming processes were then investigated with simulations aimed at their optimization [2]. Research activities on process optimization were especially intense in Korea [3, 4], in the North American Midwest, namely Michigan and Ohio [5, 6], and in southern Europe, namely France and Switzerland [7, 8].

In the same period, numerical simulations enabled the use of increasingly complex material rheological and tribological models, which in turn required increasingly complex experimental identification approaches. A way to circumvent or reduce the experimental burden of material identification was the inverse approach, i.e. using the computer simulation of a forming operation or technological test as a model for inversely determining or fine-tuning the material and tribological parameters. The combination of numerical optimization and simulation could therefore be used not only to find an optimal process design for a given material, but also to find the most accurate material model for a given problem [9]. In 1996, Lionel Fourment and co-authors [10] demonstrated that an inverse optimization approach, intimately embedded in the Finite Element solver, could be used for both tasks with minor differences: process optimization (geometrical, in that case) and material behavior identification.

In the first ten years of the new millennium, direct process optimization and inverse analysis became fully mature, gained acceptance in the scientific community, and were incorporated into numerous commercial software packages. The ESAFORM scientific community was among the first to recognize the birth of a new and relevant research area and, on the initiative of Lionel Fourment and Ton van den Boogaard, launched a mini-symposium on the topic which has been regularly proposed year after year. Since the nineties, a vast number of diverse techniques, approaches, and applications have been proposed, ranging from metamodel-based optimization [11] to methods that incorporate or simulate the uncertainty of real experiments using stochastic approaches [12]. A recent and significant challenge, which combines the computational efficiency of metamodels with real-world data, is to perform optimal real-time process control and optimization [13].

We can find in the literature methods that use three different levels of interaction with the FEM program:

1) techniques that are intimately merged with the FEM solver, directly influencing its iterations [14, 15];

2) methods that build an input-output dialogue with the FEM pre- and post-processors (adaptively planning a new simulation run [16] or a new simulation step [17] to chase the optimal solution);

3) higher-level approaches, which require the prior design of batches of simulations [18].

The goal of the present paper is not to provide a comprehensive state-of-the-art review of the optimization and inverse analysis field, which would be nearly impossible, but rather to paint a representative overview of the most frequently used methods and applications through significant examples. The paper also identifies the most recent research trends and formulates some proposals and predictions for the future needs of industrial users of metal forming numerical optimization tools and packages.

The paper is divided into three main sections: the next section addresses the traditional process optimization problems that must be solved in the process planning phase; for each process, the use of optimization for real-time control of metal forming operations is also briefly presented. The following section discusses how uncertainty is treated in stochastic approaches combined with optimization methods. The fourth section focuses on the inverse determination of material parameters and process conditions and on the fine-tuning and updating of simulation parameters. Finally, the Conclusions section summarizes the main findings and projects future trends and research foci.

A-priori process optimization - relevant examples of optimization problems in metal forming

In this section, the most frequently encountered and most representative process optimization problems studied in the scientific literature are presented. Together with many relevant examples of application, the typical techniques and approaches used in the field are briefly introduced. FEM-based optimization of metal forming processes remains, at present, a topic of growing interest. Querying “process optimization metal forming simulation” in Google Scholar returns 530,000 results, distributed over time as in Fig. 1. For more specific searches combining the individual forming processes with optimization, Fig. 2 shows that interest in all topics is increasing, particularly for uncertainty analysis in forming optimization and inverse material identification. More than 500 and 200 papers are currently produced per year concerning uncertainties in forming optimization and material identification, respectively. This analysis highlights the increasing interest in these particular subjects.

Fig. 1 – Biennial distribution of references found in Google Scholar with the search string “process optimization metal forming simulation”

Fig. 2 – Annual references found using the Web of Science (Clarivate) for the optimization of the different metal forming processes

Sheet metal forming

Sheet metal forming is distinguished from other metal forming operations by the small thickness of the material to be deformed. A recent review of sheet metal components and their processes can be found in [19]. As in other forming processes, the use of optimization in sheet metal forming was only made possible by the large advances in sheet forming simulation, which began in the 1970s; the large increase in the practical use of sheet forming simulations within industry, however, came in the 1990s [20]. All these simulations resort to the finite element method (FEM). The main goal of optimization is then the search for the input process parameters that produce a desired output. Most of the optimization problems discussed in sheet metal forming are the following:

1. Tool design to avoid wrinkling, edge cracking and localized through-thickness deformation;

2. Process control to avoid long-term disturbances (changes in material properties, tool wear, temperature, etc.) or short-term disturbances (variable sheet thickness, uneven lubrication, etc.);

3. Inverse form-finding, which includes the process of designing tailored blanks subject to strain or thickness constraints.

The challenge of tool design grows when, in addition to the defects mentioned in item 1, simultaneous springback compensation, thinning compensation, and blank edge geometry correction are required of the process and of the sheet’s shape. An example of this problem can be found in [21].

Computer simulation and databases have opened the possibility of new approaches to monitoring and, in particular, to controlling the performance and quality of production processes. A recent communication has presented an overview of the common process monitoring and control approaches while highlighting their limitations in handling the dynamics of the sheet metal forming process [22]. As expected, the natural conclusion of this paper is that monitoring and control systems must be used in close combination to enhance production process performance.

Communications on sheet metal forming optimization from the ESAFORM community go back to the previous century, but their number and impact were felt only at the end of the first decade of this century. An example is the work of Staud and Merklein [23], who presented an inverse approach to the efficient forming simulation of Al6000 tailored heat treated blanks. This inverse problem searched for the specific areas of the blank that should undergo dedicated heat treatments, which change the material properties and subsequently enhance the local forming procedure.

Van den Boogaard et al. [24] performed a numerical sensitivity analysis of the blankholder force and several blank shape parameters in the deep drawing of a benchmark B-pillar. These sensitivities, found to be nonlinear, were also validated experimentally. It was highlighted that the simple linear screening techniques widely applied in industry would not capture these sensitivities. Sensitivity analysis for numerical sheet metal forming processes was also discussed by Maia et al. [25] to improve the current trial-and-error process for inverse problems. The effect of numerical noise and of small disturbances of the tool design variables on the final formed sheet was evaluated using an integrated approach for tool geometry manipulation, based on a parametric NURBS description of the tools’ surfaces.

In 2009, considering that springback reduction in sheet metal forming conflicts with thinning reduction, Di Lorenzo et al. [26] applied a Pareto optimal design approach for the simultaneous control of thinning and springback in a DP600 U-channel stamping process. In this work, both the friction conditions and the blank holder force were optimized as design variables, reducing excessive thinning while avoiding excessive geometrical distortions. One year later, the same team presented a progressive design approach able to manage and optimize complex stamping operations [27]. This approach, applied to the design of a complex automotive sheet stamping operation, integrates numerical simulations, Response Surface Methodology (RSM) and Pareto optimal solution search techniques. The team also performed a comparative analysis of different robust design approaches in sheet stamping operations [28] and, in the same year, used moving least squares strategies to increase the efficiency of response surface optimization methods for complex sheet metal forming processes [29].

At the ESAFORM conference of 2011, the inverse form-finding problem (see Fig. 3) was addressed by Germain and Steinmann [30] using an inverse FEM, although their approach was limited to hyperelasticity. Two years later, Landkammer and Steinmann [31] applied the inverse FEM to a contact forming simulation. To this end, an FE code implementing the inverse mechanical formulation was coupled with commercial software, and the contact problem was approximated by displacement control. In 2015, the same two authors [32] applied this methodology to the ring compression test; despite highly deformed elements and tangential contact with varying friction parameters, the convergence rates of the method were nearly linear. At the ESAFORM conference of 2018, the same group applied a node-based, non-invasive form-finding algorithm to a novel class of forming processes called sheet bulk metal forming (SBMF) [33]. This algorithm minimizes the differences between the spatial configuration and the prescribed discretized target configuration by updating the material configuration iteratively up to the optimum semi-finished product geometry.

Fig. 3 – Inverse form-finding problem as opposed to a direct structural analysis problem, the latter generally solved using FEA. In the inverse form-finding problem the desired stresses, boundary conditions and deformed configuration are known, while the unknown is the undeformed configuration of the component

An example and discussion of the use of inverse solver technologies to support die face design in the automotive industry can also be found in [34]. Technologies such as target strain, sculptured die face and offsetless mesh are discussed and applied to the die design for the forming of a car body class “A” panel.

Pilthammar et al. [35], from Volvo, knowing that assuming rigid dies and presses has a significant negative impact on simulation accuracy and process performance, used an optimization framework to virtually design stamping dies while accounting for elastic die and press deformations.

Robust optimization, as an alternative to classical deterministic and stochastic optimization (see Fig. 4), has also been an important issue in sheet metal forming. Different approaches have appeared over the last decades, and details are provided in Sect. 3. The importance of robust optimization in sheet metal forming was highlighted by van den Boogaard and co-workers [36], in particular the importance of integrating robustness (uncertainty), optimization, and Finite Element (FE) simulations to achieve better products and cost reductions in the metal forming industry. For this purpose, a metamodel-based robust optimization strategy was proposed for metal forming processes and, in particular, for an industrial V-bending process. Later, van den Boogaard et al. [37] showed, for a strip bending process, the deteriorating global behavior of the Kriging surrogate modeling technique. The proposed solution was a Radial Basis Function (RBF) surrogate model with Multiquadric (MQ) basis functions, which performs equally well in terms of optimization efficiency and better in terms of global predictive accuracy.
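As a concrete illustration of the MQ-RBF surrogate idea, the following minimal Python sketch fits a multiquadric RBF metamodel to a small set of sampled responses; the analytical “FE response” and all numerical values are invented stand-ins, not the model of [37].

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical stand-in for an expensive FE response, e.g. a springback
# angle as a function of (blank holder force, friction coefficient).
def fe_response(x):
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(30, 2))   # design of experiments
z_train = fe_response(X_train)                  # 30 "FE simulations"

# Multiquadric RBF surrogate; epsilon is the kernel shape parameter.
surrogate = RBFInterpolator(X_train, z_train,
                            kernel="multiquadric", epsilon=2.0)

# The cheap surrogate can now replace the FE model in an optimization loop.
X_test = rng.uniform(0.0, 1.0, size=(200, 2))
err = np.abs(surrogate(X_test) - fe_response(X_test))
print(f"max surrogate error on test points: {err.max():.3f}")
```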

Fig. 4 – Comparison of different optimization approaches used in metal forming: (a) classical deterministic optimization; (b) stochastic optimization; (c) robust optimization. Stochastic optimization samples uncertain parameters from a distribution and optimizes some expectation. Robust optimization searches for (robust) designs considering the worst cases over uncertainty sets, thereby reducing risk

To tackle the problem that optimal design configurations obtained by conventional design methods do not always meet the desired targets due to the effect of uncertainties, multi-objective robust design optimization (MORDO) was applied to a sheet metal draw bending process by Lafon and co-workers [38]. Because of the several conflicting criteria in sheet metal forming, a Pareto multiple-criteria decision-making approach based on capability indices was proposed in their paper. Two years later, a combination of Proper Orthogonal Decomposition (POD) and Radial Basis Functions (RBF) was proposed by Lafon's French team to build a surrogate model for the Numisheet2011 benchmark (springback 3D bending benchmark) [39]. It was concluded that the presented POD-RBF approach is highly accurate for sheet metal shape optimization.

In 2019, Prates et al. [40] presented a systematic comparison of the performance of different metamodeling techniques in the analysis of variability in sheet metal forming processes, namely the U-channel and square cup forming processes. For this purpose, three steel grades (DC06, DP600 and HSLA340) were selected as reference materials; (i) the Young’s modulus, (ii) the isotropic hardening law parameters, (iii) the anisotropy coefficients and (iv) the initial thickness of the sheet metal were defined as variability sources; and (a) springback, (b) thinning and (c) equivalent plastic strain were taken as outputs. It was shown that the performance of Kriging metamodels is generally better than that of RSM, with the latter strongly dependent on the number of design points. At the same conference, the group of van den Boogaard also discussed metal forming processes based on inverse robust optimization and presented an inverse methodology to tailor the variation of noise parameters based on the allowable tolerance in the output [41]. The results for a non-linear process on a lab-type B-pillar part demonstrate how to adjust the input noise parameters at minimum cost to meet the required output tolerance.

Two years later, Prates and co-workers [42] presented an enhanced numerical study of the influence of material and process uncertainty on the stamping results of a square cup. Here, both a quasi-Monte Carlo method and a variance-based sensitivity analysis were used to evaluate the variability of the simulation outputs with respect to the uncertain input parameters. It was concluded that the geometry is the most sensitive output, and that the hardening law parameters and the anisotropy coefficients have the greatest influence on the variability of the stamping results of a square cup.

It should be highlighted that, back in 2017, Strano et al. [43] introduced the fusion/hierarchical metamodel for the prediction of the bend deduction in sheet air bending. The approach is to first build a kriging interpolator over the results of simulations run with different (i) material parameters, (ii) tool geometries and (iii) punch stroke values; then a fusion metamodel is built, which uses the kriging interpolator as a predictor in a regression model trained over a set of physical experiments. This approach (a) reduces the numerical error of the simulations by using the experimental data, (b) can be applied in an online numerical control, and (c) integrates all relevant process and material variables.
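A minimal sketch of this fusion idea, under the assumption that the simulations carry a systematic bias that a few experiments can correct, could look as follows; the data and the one-dimensional input are purely illustrative, not the air bending model of [43].

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Step 1: kriging (Gaussian process) interpolator over simulation results.
# The input could be material parameters, tool geometry and punch stroke;
# here one synthetic variable stands in for all of them.
X_sim = rng.uniform(0.0, 1.0, size=(40, 1))
z_sim = np.sin(4.0 * X_sim[:, 0])                 # "simulated" bend deduction
gp = GaussianProcessRegressor(kernel=RBF(0.2)).fit(X_sim, z_sim)

# Step 2: fusion model - regress a small set of physical experiments on
# the kriging prediction, correcting the numerical bias of the simulations.
X_exp = rng.uniform(0.0, 1.0, size=(8, 1))
z_exp = np.sin(4.0 * X_exp[:, 0]) + 0.1 + 0.05 * rng.standard_normal(8)
fusion = LinearRegression().fit(gp.predict(X_exp).reshape(-1, 1), z_exp)

# Fused (bias-corrected) prediction at a new design point.
x_new = np.array([[0.35]])
print(fusion.predict(gp.predict(x_new).reshape(-1, 1)))
```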

Concerning process control, a feedback controller was suggested by Endelt and Danckert [44] which transfers process information from part to part, reducing the impact of long-term disturbances, e.g. gradual changes in the material properties. This controller minimizes the error during the punch stroke and uses the flange draw-in as the feedback error. Prior to process control, several communications concerning the search for sources of scattering and process variations were presented. Havinga and van den Boogaard [45] investigated sources of variation using a computational identification algorithm, as opposed to empirical, experience-based methods. Measurements from an industrial press were used to identify the process parameters of a thin steel flap bending process, using a metamodel-based inverse analysis procedure together with proper orthogonal decomposition (POD) of the force curves.

Topological optimization, today a very active topic thanks to additive manufacturing applications [46], has already been applied in sheet metal forming, in particular in the design of a part holder considering dynamic loads during the return stroke of tool and ram. Burkart et al. [47], from Mercedes-Benz AG, presented an approach that reduces dynamic tool loads by lowering the weight of the part holder in the early design phase. Topology optimization improved the stiffness of a series part holder while reducing its moving mass.

Artificial intelligence (AI) has long been used in combination with optimization methods in sheet metal forming, especially for processes that are difficult to design [48]. In 2010, Wang and Li [49] used support vector regression (SVR) and an intelligent sampling strategy to optimize sheet forming design. Wrinkling, cracking and through-thickness deformation were minimized using the proposed fast analysis tool as a surrogate for the time-consuming finite-element (FE) sheet forming simulation in the iterations of the optimization algorithm. Machine learning, the label given to the most recent developments of AI, still has few applications in sheet metal forming process optimization, e.g. [50], while it is increasingly used in inverse analysis [51].
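The following sketch illustrates, in Python, the general pattern of SVR-based surrogate optimization described above; the toy “thinning” function and all parameter values are invented, and this is not the tool of [49].

```python
import numpy as np
from sklearn.svm import SVR
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Assumed toy FE response: "thinning" as a function of two design variables.
def fe_thinning(x):
    return (x[..., 0] - 0.3) ** 2 + 0.5 * np.abs(x[..., 1] - 0.6)

X = rng.uniform(0.0, 1.0, size=(50, 2))        # sampled FE runs
y = fe_thinning(X)

# Fast SVR surrogate trained on the FE samples.
svr = SVR(kernel="rbf", C=100.0, epsilon=1e-3).fit(X, y)

# Minimize the surrogate instead of the costly FE model.
res = minimize(lambda x: svr.predict(x.reshape(1, -1))[0],
               x0=np.array([0.5, 0.5]), bounds=[(0, 1), (0, 1)])
print("surrogate optimum:", res.x)
```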

Bulk metal forming

The contribution of the ESAFORM community to optimization problems in bulk metal forming also goes back to the beginning of the century. In 2008, Koch et al. [52] applied an effective stochastic simulation to the optimization of time, costs, and quality in cold forming. Although only a single simulation was run, the mathematical model of the forging process, with a deterministic and a statistical component, led to improved solutions for the process.

Sharhriari et al. [53] investigated the effect of the flash thickness and of various geometrical forging design parameters, including internal and external drafts and internal and external corner radii, on die cavity filling, forging load and raw material cost when forging Nimonic80-A, a superalloy. As a result, a response surface methodology (RSM) based method for forging load prediction was presented, saving the trial-and-error simulations otherwise needed to achieve it.

In 2010, Fourment and his team [54] tackled multi-objective optimization problems in the field of non-steady metal forming processes, such as forging or wire drawing, using a metamodel-assisted genetic algorithm. To decrease the computational cost, a surrogate model was developed and replaced the FEA simulations during the optimization process. This methodology was applied with great success to the forging of a connecting rod (see Fig. 5) and to wire drawing. In the same year, the same team also evaluated automatic optimization techniques applied to a large range of forging industrial test cases [Fourment 2010b]. This work validated that evolutionary algorithms coupled with metamodelling are a good optimization strategy for a large range of industrial process optimization problems.

Fig. 5 – Example of a multi-objective optimization problem in bulk metal forming: (a) the preform shape of a cylindrical billet used to forge (b) a connecting rod. The problem accounts for the minimization of the component mass while preserving proper filling of the finishing dies at the end of forging [54]. The work of Fourment et al. was a landmark in the optimization of bulk metal forming processes

Considering that forging die design is the most important step for product quality control, other works were devoted to the optimization of the initial billet and forging die shapes. In 2011, Meng, Lafon et al. [55] integrated CAD, FEA and optimization software to simulate the process, build a surrogate metamodel and find the Pareto curve for the die and initial billet shape of a two-step axisymmetric metal forming operation. This communication shows that the tools for process optimization are all available; what is still needed is their integration. In the following years, the same authors presented a new methodology for the optimization of forging preforms based on a one-step pseudo-inverse approach, much faster than the classical incremental approach, although with some limitations [56].

Tube metal forming

The scientific literature related to the optimization of tube metal forming processes is presented here by highlighting the most significant works in the field.

Tube hydroforming (THF) is probably the tube forming process that has attracted the most research effort, because of its inherent complexity and because the correct determination of its loading curves (internal pressure vs. time and axial feed vs. time) is crucial to the feasibility of a THF operation (see Fig. 6). Most of the research efforts in this field were deployed in the 2000–2010 decade, with an extensive proliferation of different methods and approaches.

Fig. 6 – Schematic of the tube hydroforming process. A tube of constant diameter is inserted in a forming tool (die) and constrained by the rams. A pressurized fluid then hydrostatically deforms the tube into the desired shape

The earliest attempts interacted directly with the FEM solver codes, in order to obtain sensitivity coefficients during a single-run gradient-based optimization. As an example, in 2001, Yang et al. [57] provided an optimization solution for the tube hydroforming process using the internal pressure and the axial displacement as design variables, minimizing the tube thickness variation by determining the optimal loading path in tube expansion forming through numerical simulation combined with an optimization tool. A gradient-based method including sensitivity analysis was used for the optimization of the process. However, this kind of intimate interaction with the codes is obviously not user-friendly and, as long as software houses did not provide dedicated tools, this line of research, although very promising, remained confined to a few expert users.

Sequential optimization of THF processes

A much wider range of applications can be found for gradient-based methods in combination with several FEM runs, planned sequentially, in batches, or in sequential batches. In all cases, optimization is possible because the results of the simulations are used to form a so-called metamodel or surrogate model, which is used to estimate the location of extreme points in the design space. The FEM code is coupled with an optimization code, which applies an algorithm that searches for the best solutions and proposes new iteration runs or batches. Sometimes the codes are based on analytical gradient calculations; in other cases the methods are non-parametric, e.g. they rely on artificial intelligence techniques.
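The following Python sketch illustrates the sequential metamodel-based pattern just described: a kriging (Gaussian process) surrogate is fitted to an initial batch of “simulations” and then proposes one new run per iteration via a lower-confidence-bound infill criterion. The one-dimensional “FEM run” is an invented stand-in for a real solver call.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(3)

# Stand-in for one FEM run: thickness variation as a function of a
# normalized loading-path parameter (purely illustrative).
def fem_run(x):
    return (x - 0.7) ** 2 + 0.05 * np.cos(12.0 * x)

X = rng.uniform(0.0, 1.0, size=(5, 1))      # initial batch of simulations
z = fem_run(X[:, 0])

for it in range(10):                        # sequential infill iterations
    gp = GaussianProcessRegressor(kernel=Matern(0.3),
                                  normalize_y=True).fit(X, z)
    cand = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    # Lower-confidence-bound infill: exploit low mean, explore high variance.
    x_next = cand[np.argmin(mu - 1.96 * sigma)]
    X = np.vstack([X, x_next[None, :]])     # "run" one more simulation
    z = np.append(z, fem_run(x_next[0]))

print("best loading-path parameter found:", X[np.argmin(z), 0])
```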

As an example, Fann and Hsiao [58] investigated the tube thickness distribution and the part geometry of a T-shaped metal tube by finding the optimized loading conditions relating the internal pressure and the axial feeding. They combined the conjugate gradient method and the finite element method for their optimization study. Two optimization procedures were used: batch mode and sequential mode. The optimization process in sequential mode led to better results than batch mode; however, the choice of constraint was of utmost importance for optimizing the loading conditions in sequential mode. Optimization based on batches of simulations is facilitated by many commercially available tools.

An example is given by Imaninejad, Subhash, and Loukus, who employed finite element simulation and optimization software to optimize the loading paths (axial feed, internal pressure, and bulge controller) for closed-die and T-branch tube hydroforming experiments [57]. LS-DYNA and LS-OPT were used for FE simulation and optimization, respectively. The objective of the optimization was to determine the loading paths that would produce a part with minimum thickness variation while maintaining the maximum effective stress below the material ultimate stress. In closed-die hydroforming, the objective was also to conform the tube to the die shape whereas, in the T-joint design, maximum T-branch height was desired. Two types of simulations, low pressure and high pressure, were performed for both the closed-die and T-joint hydroforming processes. Furthermore, single-stroke (S), double-stroke (D), and quadruple-stroke (Q) functions, consisting of one, two, and four linear end-movements respectively, were employed to describe the axial displacement of the tube ends. It was concluded from the simulations that, to obtain the optimized thickness distribution (without failure), the majority of the end-feed should be applied after the tube material yields under internal pressure, and that the formability of the tube material is better exploited when multiple strokes are employed for the axial and vertical actuators, in both the closed-die and T-joint cases. Experiments were conducted on aluminum tubes for validation, and a good correlation between the experimental results and the simulations was obtained.

Mohammadi, Kashanizade, and Mashadi simulated tube hydroforming of an aluminum (AlMgSi05) T-joint with the finite element method (FEM) using ABAQUS [58]. In the optimization problem, the two variables were the internal pressure and the axial feeding, whereas the clamping force was taken as the objective function to be minimized. Wrinkling, minimum thickness and die filling served as the constraints. A wrinkling indicator, a calibration indicator, and a minimum allowable thickness of 80% of the initial tube thickness, as per the standards, were set up as the constraint limits. The objective and constraint functions were obtained by training a neural network. Several optimization methods, including hill-climbing search, simulated annealing, and the complex method, were used to minimize the objective function, and the global extremum was reached. An experiment was conducted using the results obtained by the optimization methods. The experimental results were in good agreement with the FEM results, and the constraints were well predicted by the FEM.

An analysis of process optimization for the hydroforming of aluminium alloy (AA6063) tubes was conducted by Loukus, Subhash, and Imaninejad [59]. The alloy was hydroformed in two conditions: solution treated and quenched (W temper) and naturally aged (T4). Two geometries were selected for the study: a hydroformed central bulge using a closed die, and a T-branch. Tube conformance to the die geometry and minimal thickness variation were optimized in the closed-die configuration. For the T-branch configuration, the process was optimized to achieve the highest possible bulge height and minimal thickness variation. LS-DYNA and LS-OPT were used for simulation and optimization, respectively. For the closed-die central bulge, the simulated results for the W temper and T4 conditions were found to be almost the same, although the W temper condition yielded lower hardness than T4. For the T-branch, the W temper facilitated forming a bigger branch but again gave lower hardness than T4. The optimization of the material heat-treatment conditions and of the hydroforming process parameters resulted in strains well in excess of those of traditional forming, thus enabling the forming of more complex geometries.

Abedrabbo and colleagues presented an optimization problem with the internal hydraulic pressure and the end feed rate as design variables [60]. Tubes of different advanced high-strength steel (AHSS) materials were used in the study. To determine the best loading paths, the optimization software HEEDS (Hierarchical Evolutionary Engineering Design System), based on genetic algorithms, was used in combination with the finite element code LS-DYNA (Fig. 7).

Fig. 7 – Optimized hydroforming loading curves (left) for the tube shown in the center, which is a DP600-T1.8 mm tube formed with the high-expansion process and the 6 mm; process flow schematic of the optimization process showing the interaction between HEEDS and LS-DYNA (right) [60]

An inverse finite element model, coupled with the response surface method, was used by Chebbah, Naceur, and Hecini to optimize tube hydroforming parameters such as material parameters (hardening exponent n) and geometric parameters (initial tube length L0) [61]. In particular, they developed a nonlinear axisymmetric FE model called CAXI_K for the simulation of tube hydroforming using a modified inverse approach. The proposed model was validated on a numerical application of the hydroforming of an axisymmetric bulge from aluminum alloy 6061-T6 tubing, comparing the results with experimental results from the literature and with results obtained using the classical incremental dynamic explicit approach in ABAQUS. After validation of the model, the parameters were optimized by coupling an RSM based on MLS approximation with the SQP algorithm. The numerical application confirmed that the proposed optimization method was efficient and required a very small amount of computation time.

Mirzaali et al. optimized the forming parameters of tube hydroforming using a combination of a simulated annealing (SA) algorithm written in MATLAB and the nonlinear finite element code ANSYS/LS-DYNA [62]. The objective of the research was to obtain the maximum formability of two-dimensional (2D) ASTM C11000 copper alloy axisymmetric tubes, and the forming limit diagram (FLD) provided the failure criterion. The initial approximate pressure loading path was determined analytically using theoretical equations. The loading path optimization was carried out for two different cases, constrained bulging and free bulging, and in both cases high formability of the tube was achieved. Experiments conducted according to the optimization results showed good agreement between the simulated and experimental work.

Bucconi and Strano proposed an optimization strategy to reduce the total energy input of a tube hammering hydroforming process with pulsating pressure, using a metamodel-based optimization algorithm, and applied this strategy to an industrial case study [63]. The simulations were run in PAM-STAMP and the blank material was AISI 316 L. A total of 162 simulations were run with four design parameters (forming pressure, calibration pressure, pulsating pressure amplitude and punch displacement) and three response variables (the tube-die distance at the end of the process, the thinning, and the external energy expenditure). Only 124 of these proved valid, satisfying all the constraints imposed by the forming limit diagram. A kriging metamodel was built over the design and response variables, and an optimization algorithm was implemented in MATLAB to minimize a multivariable nonlinear constrained function. The five best optimization results were verified by further FEM simulations.

Adaptive simulation of THF processes

An ambitious approach to the optimization of loading curves in THF is to perform it while the simulation of a single operation is running, adjusting the loading curves after reading the instantaneous simulation results. This approach can be called “adaptive simulation” [64] and requires that the solver be interfaced with an adaptive controller (Fig. 8).

Fig. 8 – Flow chart of the adaptive simulation approach to quickly determine a wrinkle-free loading path in a single simulation run, subdivided into small time steps [64]

As a relevant example, Aydemir et al. [17] applied an adaptive simulation approach to design the hydroforming process of a T-shaped component. The adaptive system aimed to obtain adequate hydroforming process parameters (the internal pressure and the axial feeding) while avoiding the onset of wrinkling and bursting, by incorporating a wrinkle indicator, a necking indicator, and a fuzzy knowledge-based controller (FKBC). The wrinkling detection procedure is inspired by plastic bifurcation theory; for necking detection, a criterion based on the forming limit curve (FLC) was employed. The process parameters were adjusted during the simulation via the fuzzy knowledge-based controller using these two criteria. The goal of the simulation was to manufacture the part with a maximum of material in the die cavity. The computationally derived process plan was obtained at the end of one single simulation, with the FLC closely approached at its end.

A virtual database-assisted fuzzy process control system was developed by Manabe et al. and applied to T-branch forming with a counterpunch to determine the optimal loading path of the tube hydroforming process [65]. The validity of the system was demonstrated for an aluminum alloy (A6063-T1) tube. Axial feed, counterpunch displacement, and internal pressure were the control variables. An explicit dynamic finite element code was used in the simulation for the virtual control system, and an optimum loading path was found using the fuzzy control algorithm. Using the optimum loading path obtained via the virtual control system, experiments were carried out and the T-branch product was successfully hydroformed. The results were compared with the conventional manually controlled path attained by trial and error. They showed that the designed control system, together with the fuzzy control algorithm, provided an adequate loading path for the hydroforming of an aluminium alloy tube, thus confirming the validity of the fuzzy control algorithm and of the virtual control system.
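A drastically simplified, rule-based stand-in for this kind of adaptive controller is sketched below; the “solver step” and the wrinkle/necking indicators are invented toy functions, and the two if-rules merely mimic the role of a fuzzy rule base such as the FKBC.

```python
import numpy as np

# Toy incremental "solver" step: returns hypothetical wrinkle and necking
# indicators for the current pressure p and axial feed f (illustrative only).
def solver_step(p, f):
    wrinkle = max(0.0, f - 0.04 * p)   # too much feed at low pressure
    necking = max(0.0, 0.02 * p - f)   # too much pressure without feed
    return wrinkle, necking

p, f = 0.0, 0.0
path = []
for step in range(100):                # small time steps of one simulation
    wrinkle, necking = solver_step(p, f)
    dp, df = 1.0, 0.03                 # nominal increments per time step
    if wrinkle > 0.1:                  # rule base standing in for the FKBC
        df *= 0.5                      # slow down the axial feed
    if necking > 0.1:
        dp *= 0.5                      # slow down pressurization
        df *= 1.5                      # and feed more material
    p, f = p + dp, f + df
    path.append((p, f))                # the loading path is built on the fly

print("final point of the adaptive loading path:", path[-1])
```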

Shape optimization of THF processes

While most of the literature on THF addresses the problem of determining the optimal loading curves, some studies focus on shape optimization of either the final geometry or the initial preform. An interactive design tool was developed by Kirby, Roy, and Kunju for the optimization of the tube hydroforming process by coupling nonlinear optimization methods with finite element analysis and morphing technology [66]. HyperForm, along with HyperMorph, HyperOpt, and HyperStudy, was used to develop an optimal design approach for improving the formability of the hydroformed part. The tool fillets and the hydroforming pressure were chosen as design parameters, and the initial shape of each shape variable (upper die radius, lower die radius, and die cross-section) was morphed to set up the design space. Maximum thinning was chosen as the quality function, with the objective of achieving a reference maximum thinning of 25%. The manufacturing effects (thickness and plastic strain) were transferred to a component-level crash model via the HyperWorks Result Mapper (HWRM). A comparison between the nominal-thickness run (no forming effects) and the run initialized with forming effects was conducted. The buckling shown by the run initialized with forming effects was significantly different from the nominal run; the maximum barrier reaction force and the internal energy were also higher, and it was concluded that the inclusion of forming results can significantly alter the predicted crash performance of the component.

Yoon and colleagues developed a computational direct design method to guide iterative design practice, based on analytical methods and ideal forming theory, for the design of non-flat preforms for tube hydroforming processes [67]. A preform optimization methodology was also proposed, based on the penalty constraint method, to constrain the hydroforming design solution, such as a cylindrical preform for an extruded tube. Furthermore, frictional effects were incorporated by modifying the extreme plastic work criterion. The suggested formulation was verified on three examples: (a) preform design for the bending process of an extruded hollow tube; (b) preform design for a hydroforming process with a simple target geometry; (c) preform design for a complex hydroformed industrial part. It was concluded that the proposed direct design method gave essential design information on the optimum preform shape and helped in evaluating the feasibility of the target shape at the initial die design stage.

Multi-objective optimization of THF processes

A minority of papers deal with multi-objective rather than single-objective optimization problems in THF or in tube bending [68]. When multiple objectives are targeted, a general approach is to represent the space of possible solutions with Pareto charts (Fig. 9).

Fig. 9 – Pareto sets with regard to objective functions f1 to f4 (fracture, wrinkling, severe thinning, and corner radius filling indicators), where one solution is highlighted as a larger green dot [68]

As an example, to determine the optimal load path for tube hydroforming in a die with a square cross-section, An, Green, and Johrendt developed a multi-objective optimization algorithm combined with the Taguchi statistical method and FEA [69]. The Taguchi method was used to create a design of virtual hydroforming experiments, and numerical simulations were carried out with the finite element code LS-DYNA. ANOVA was utilized for the sensitivity analysis of the hydroforming process to the various parameters that define the load path. The study involved multiple objective functions, namely necking/fracture, wrinkling, and thinning, and the response surface methodology was used with the most sensitive factors to obtain a defect-free part. A further objective function, based on the final corner radius of the part, was also added to the optimization model. Forming limit stress and strain diagrams were used to evaluate the forming severity of the virtual hydroformed parts. Both the normal boundary intersection (NBI) method and the single-objective approach in LS-OPT were used to obtain the optimal load paths. The optimization done with NBI resulted in a more robust hydroforming process and led to improved part quality.
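Whatever algorithm generates the candidate solutions, extracting the Pareto set from a batch of evaluated designs is straightforward; a small self-contained filter (with invented indicator values) is sketched below.

```python
import numpy as np

def pareto_mask(F):
    """Boolean mask of the non-dominated rows of F (objectives to minimize)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Row i is dominated if another row is no worse in every objective
        # and strictly better in at least one.
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            mask[i] = False
    return mask

rng = np.random.default_rng(4)
F = rng.uniform(0.0, 1.0, size=(50, 2))   # e.g. (thinning, wrinkling) indicators
front = F[pareto_mask(F)]
print(f"{len(front)} non-dominated load paths out of {len(F)}")
```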

Optimization of bending processes

Another process that requires the determination of several process design variables, although in this case not time-dependent ones, is tube bending, especially of the rotary-draw type, which is the method that achieves the best geometrical quality, i.e. sharp bends with narrow bend radii. In the field of tube bending optimization, Xu and colleagues proposed a significance-based optimization of the parameters, based on finite element (FE) simulation, for the numerically controlled bending of thin-walled aluminum alloy tubes with a small bending radius [70]. Multiple parameters were chosen for the study, and their influence and significance on the maximum wall thinning ratio and the maximum cross-section distortion degree were analyzed using a fractional factorial design. The simulation was done using ABAQUS/Explicit and the tube material was 5A02O. Experiments were conducted to validate the FE model, and the simulation results agreed with the experimental ones. After validation of the FE model, an optimization was carried out. Among the parameters, the clearance between the tube and the wiper die was found to be significant and was selected for optimization, which resulted in a more uniform deformation.

Table 1 – References on optimization of tube forming processes

The role of uncertainty in forming process optimization

FEM simulations of forming processes are inherently deterministic. Real problems are not: they are affected by uncertainty in several conditions, such as material parameters, geometrical variables, tribological conditions, process design variables, etc. The literature reviewed in Sect. 2 generally assumes that no variation or uncertainty affects the results.

In some cases, especially when the optimal solution is found at the edge of feasibility (which is indeed a very frequent occurrence), neglecting the uncertainty of process conditions may be dangerous. For this reason, an increasing number of researchers have addressed the issue of process design optimization under uncertainty. While running FEM simulations of forming processes, a subset ξ of the parameters may be considered partly non-controllable, i.e. drawn from a random distribution (see the schematic in Fig. 10). As a consequence, the vector z of the simulation responses will also be statistically distributed, according to a distribution that is not known a priori.

Fig. 10 – FEM analysis under uncertainty, seen as a black box [73]

Reliability assessment problems

Before performing any kind of optimization, a reliability assessment of the process must be carried out. In other words, the probabilistic distribution of the response vector z (Fig. 10), with its means µzi and standard deviations σzi, and/or the probability of failure Pf must be estimated. Reliability assessment involves the evaluation of the probability of failure for a given process solution (for example wrinkling, tearing or excessive thinning in the case of sheet metal forming operations).
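A plain Monte Carlo sketch of such a reliability assessment, with an invented black-box response standing in for the FEM model of Fig. 10, could look as follows:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical black-box response: minimum thickness of a stamped part as a
# function of a controllable design x and random parameters xi (illustrative).
def response(x, xi):
    return 1.0 - 0.3 * x - 0.1 * xi[:, 0] + 0.05 * xi[:, 1]

x = 0.8                                       # fixed process design
xi = rng.normal(0.0, 1.0, size=(10000, 2))    # sampled noise parameters
z = response(x, xi)

mu_z, sigma_z = z.mean(), z.std()             # moments of the response
Pf = np.mean(z < 0.70)                        # probability of excessive thinning
print(f"mean={mu_z:.3f}, std={sigma_z:.3f}, Pf={Pf:.4f}")
```

In practice each response evaluation is an expensive FEM run, which is why the metamodel-based variants discussed below replace the black box with a surrogate before sampling.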

Reliability assessment can be used not only to assist optimization, but also to validate a material model or to calibrate an FEM model. As an example, Baghdasaryan et al. proposed an approach for model validation via uncertainty propagation, based on response surface methodology to create metamodels while incorporating the various types of uncertainty involved in a model validation process [74]. They illustrated their approach on a sheet metal flanging process for the prediction of springback angles, considering two FE models based on a combined hardening law and on an isotropic hardening law, respectively. Polynomial RSMs were created, found to be accurate, and used for uncertainty propagation for the FE model based on the combined hardening law. By comparing the performance distribution obtained from uncertainty propagation with the results of single experiments, critical confidence levels were identified and the model was declared not statistically invalid. For the FE model based on the isotropic hardening law, however, the response distribution did not follow a normal distribution, and the use of data transformations in the corresponding polynomial model was suggested.

Merten, Liebold, and Haufe compared metamodel-based and classifier-based approaches to estimate robustness and reliability on a drawing simulation of a fender geometry. The simulation was carried out using LS-DYNA, and LS-OPT was used for the probabilistic analysis. Material properties such as yield and tensile strength were varied automatically within the framework of LS-OPT, based on a sequential metamodel-based Monte Carlo analysis. The influence of the variation of the input variables on the thickness reduction and on the formability index was investigated. Both metamodel approximations and classifiers were used to calculate statistical values such as the mean and standard deviation. Similar results were obtained from the metamodel- and classifier-based approaches, and both identified the critical regions of the part, improving the reliability of the models [75].

Reliability optimization problems

A frequently used approach is to directly address the goal of optimizing the reliability of a process, i.e. either to minimize the risk of failure Pf or to minimize the variability σzi of some response variable of the process. This approach can also be called “robust design”, and it is to be preferred when the feasible process windows are expected to be small.
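A minimal sketch of this robust design idea, using a mean-plus-k-sigma objective over a frozen noise sample, is shown below; the toy response surface and all values are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
xi = rng.normal(0.0, 0.05, size=500)   # frozen noise sample, e.g. thickness scatter

# Hypothetical response: springback angle vs. design x under noise xi.
def springback(x, xi):
    return (x - 0.6) ** 2 + (1.0 + 5.0 * xi) * 0.1 * x

def robust_objective(x, k=3.0):
    z = springback(x, xi)
    return z.mean() + k * z.std()      # penalize both the mean and the scatter

res = minimize_scalar(robust_objective, bounds=(0.0, 1.0), method="bounded")
print("robust optimum design:", res.x)
```

Using the same frozen noise sample for every design keeps the objective deterministic for the optimizer, a common trick in sample-based robust optimization.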

Nejadseyfi et al. applied an inverse robust optimization approach to determine the acceptable material and process scatter, based on the specified product tolerance, for the forming of a lab-scale B-pillar made of dual-phase (DP 800) steel. MATLAB was used to solve the inverse robust optimization problem by implementing a gradient-based optimization algorithm, namely sequential quadratic programming (SQP). To carry out the inverse analysis efficiently, analytical propagation of uncertainty was combined with metamodeling. AutoForm R7 was used to build the FE model, and a metamodel was constructed from the simulation results. The blank holder force and blank corner radius were chosen as design parameters; the noise parameters were the strength coefficient of the Swift hardening law, the friction coefficient of the Coulomb friction model, and the sheet thickness. The final angle of the profile and the percentage thinning were the response and the constraint, respectively. According to preliminary simulations, the variation due to numerical errors in the FE simulations was of a lower order than that caused by the noise variables. A DOE was created using Latin hypercube sampling (LHS), and the Kriging method was used for metamodeling. The presented inverse approach was able to satisfy the specified product tolerance by predicting the required adjustment of each noise parameter [12].

Sahai et al. applied the Sequential Optimization and Reliability Assessment (SORA) method as a new probabilistic optimization strategy to design a sheet metal flanging process with a focus on springback. They used FEA simulations to determine a combination of sheet metal and tooling configurations resulting in the desired final springback angle of 110°. Owing to variations in the material and manufacturing properties, the sheet thickness and the gap were selected as random design variables. The die corner radius r was chosen as a deterministic design variable, and Young’s modulus and the yield stress were selected as random parameters. The maximum absolute strain of the flanged sheet metal was required not to exceed a specified value; this probabilistic constraint had to meet a reliability level of 99.99%. A second-order polynomial surrogate model was used to reduce the computational effort. The optimization converged in five probabilistic design cycles, satisfying all the imposed conditions [76].

Buranathiti and colleagues used the sheet metal stamping of a mild steel wheelhouse to illustrate their approach of creating a system-level robust design model that consistently quantifies the margins of safety/failure (tearing and wrinkling) and efficiently takes uncertainties into account in sheet metal stamping. The objective was to maximize the total mean value of the margins and to minimize their total variance, so as to obtain a robust design for the wheelhouse stamping process. The margins were defined quantitatively using stress-based forming limit diagrams (SFLD) for tearing and an energy-based approach for wrinkling. Draw beads were used to restrain the blank during forming, instead of the blank holder force. Friction conditions, draw bead configurations, sheet metal properties, and numerical errors were the main parameters of interest. A weighted three-point-based method that estimates the statistical characteristics (mean and variance) of the responses of interest (margins of failure) was used as the uncertainty propagation technique, and the results were compared with those from other techniques (deterministic design and MCS). The weighted three-point-based method offered a good solution, agreeing with the moments and responses from MCS more efficiently and robustly [77].

To tackle the uncertainties in the material properties, the blank geometry, and the process parameters of a draw bending process, Lafon, Adragna and Nguyen proposed a multi-objective robust design optimization procedure. The procedure was applied to the NUMISHEET 2011 case study, which investigated the springback behavior of the advanced high-strength steel DP780. ABAQUS was used for the modeling and numerical simulation of the draw bending process. The blank holder force, the friction coefficient between the tools and the blank, the material properties of the DP780 steel (yield strength and tensile strength), the blank thickness, and the die and punch radii were taken as input parameters. The springback parameters (angles, sidewall curl, and the displacement of a virtual hole) were taken as output parameters and computed using MATLAB. Good agreement between the results of their numerical model and the experimental results of the NUMISHEET 2011 benchmark was obtained. For the robust optimization, the blank holder force and the die and punch radii were taken as design parameters and the other four inputs as noise parameters. To solve the optimization problem, a metamodel was created and a DOE based on a full-factorial analysis was set up. Several metamodels were tested (Kriging, Singular Value Decomposition of degree 2 (SVD2) and of degree 3 (SVD3), Radial Basis Functions (RBF), and Neural Networks (NN)), with RBF proving the best. The stochastic optimization algorithm NSGA-II in the ModeFrontier software was used for the optimization, and it was demonstrated that the effect of uncertainties can be reduced by controlling the blank holder force [38].

Optimization under uncertainty

An effective form of optimization under uncertainty needs to build some objective function y of the response variables in Fig. 10. Its goal is to determine an optimal solution with a reasonable probability of being feasible. This approach can be called “optimization under uncertainty”. The difference between the methods of the previous section and those of the present one is subtle, since robust optimization is clearly a special case of optimization under uncertainty.

Faes et al. presented a method for minimizing the mass of a deep-drawn cup, taking into account the uncertainty arising from the production process. In their method, the production process simulation is coupled with a structural model of the component (a steel sheet-metal-formed cup), resulting in an integrated workflow. Probabilistic and interval approaches were then applied for design optimization: both a reliability-based design optimization (RBDO) and an interval-based design optimization (IBDO) approach were applied, a non-deterministic model was constructed for each, and the initial plate thickness was optimized using the two approaches. The results were then compared. Similar safety margins for the component were obtained with RBDO and IBDO; however, IBDO was found to be more computationally efficient, and the probabilistic analysis depended heavily on the quantification of the non-determinism of the uncertain model quantities [78].

Colosimo and colleagues proposed a metamodeling optimization method based on a hierarchical “fusion” combination of both experimental (Hi-Fi) and numerical (Lo-Fi) data, to reduce the experimental and computational effort required to calibrate the parameters of FEM simulation models. The model was applied to a real problem: the optimum design of an aluminum foam-filled steel tube to be used as an anti-intrusion bar in automobiles. Gaussian models were used to describe the results of the computer experiments, and a linkage model was then used to adjust the prediction of the first model for a more accurate representation. Experimental three-point bending tests and numerical simulations were performed with different tube shapes and materials, according to two designed scenarios, i.e. with and without calibration against experimental data. The optimization model, run under both scenarios, provided the same optimum solution with similar predictive ability, and reduced the time and effort needed to calibrate the input simulation data [79].

Zhang, Sheng, and Shivpuri conducted a probabilistic optimization to find the optimal combination of blank holder force and friction coefficient, in the presence of variations in the material properties, for the deep drawing of a Hishida aluminum (AA6181A) part, simulating the drawing process with PAM-STAMP. A quality index (QI) was defined as the weighted sum of the probability of no wrinkling and the probability of no fracture, and the objective was to maximize it. A number of optimizations were conducted using both deterministic design (DD) and probabilistic design (PD), based on different weights for no wrinkling and no fracture, and a significant difference was observed in the results of the two designs. The deterministic design was found to be prone to extremes, whereas the probabilistic design balanced wrinkling and fracture very well. PD was found superior to DD, and the quality index was affected more by the friction coefficient than by the blank holder force [80].

Strano and Burdi proposed a method for the stochastic optimization of the design variables of sheet metal processes via FEM analysis, for problems requiring large computational times and having a large probability of failure and a large, non-uniform variance of the results. The method uses Kriging interpolation of a cost function to be minimized under constraints depending on both deterministic controllable variables and random non-controllable variables. Binary logistic regression analysis of the simulation results was used to assess the failure probability. The method was applied to optimize the fluid pressure curve in a flex forming operation of an Inconel component. The output cost function depended on indicators of thinning and wrinkling, the limitations of the flex forming press (maximum and minimum pressure values) were considered, and the optimal solution was required to be robust to a change in the lubrication conditions. The proposed method successfully predicted the optimum values while satisfying all the constraints [81].
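The logistic-regression step of such a method can be sketched as follows; the simulated campaign and its failure labels are invented, and only the general pattern (classify simulation outcomes, then read off a failure probability) reflects the description above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Simulated campaign: each row is (pressure level, lubrication index) of one
# FEM run; 'failed' flags wrinkling/tearing in that run (all values invented).
X = rng.uniform(0.0, 1.0, size=(200, 2))
failed = (X[:, 0] + 0.5 * X[:, 1]
          + 0.1 * rng.standard_normal(200) > 1.0).astype(int)

clf = LogisticRegression().fit(X, failed)

# Estimated failure probability of a candidate process design; this estimate
# can then enter the constrained minimization of the cost function.
p_fail = clf.predict_proba(np.array([[0.7, 0.4]]))[0, 1]
print(f"estimated failure probability: {p_fail:.2f}")
```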

Strano presented an approach called Reliability-Based Economical Optimization (RBEO) for the design optimization of deep drawing or stamping processes under epistemic uncertainty. The approach is based on the minimization of direct variable industrial costs (namely the material costs and the failure costs), rather than of quality or reliability measures. It requires the knowledge of three cost coefficients: the sheet buying price [€/kg]; the re-selling price of scrap material [€/kg]; and the cost of a defective part [€/part], a constant value which quantifies the several costs “wasted” as a consequence of producing an unprofitable part. The Numisheet 1993 benchmark case was used to demonstrate the method (see Fig. 11). Compared to the conventional RBDO approach, the proposed method was found to be particularly useful as the buying price of the sheet metal, the dimensions of the stamped parts, or the size of the process feasibility window increase, or as the cost of a defective part decreases [82].
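Under stated assumptions, the economic objective of RBEO can be sketched in a few lines: the per-part expected cost combines the three coefficients above with a failure probability that depends on the design variable (here the BHF, with an invented failure curve; in the real method the probability would come from FEM-based reliability analysis).

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

P_BUY, P_SELL, C_DEFECT = 2.0, 0.4, 15.0     # [EUR/kg], [EUR/kg], [EUR/part] (assumed)
M_SHEET, M_SCRAP = 1.2, 0.3                  # blank and trimmed-scrap mass [kg] (assumed)

def p_fail(bhf):
    # hypothetical failure curve: wrinkling at low BHF, fracture at high BHF
    return norm.cdf((40.0 - bhf) / 5.0) + norm.cdf((bhf - 90.0) / 5.0)

def expected_cost(bhf):                      # direct variable cost per produced part
    return P_BUY * M_SHEET - P_SELL * M_SCRAP + C_DEFECT * p_fail(bhf)

res = minimize_scalar(expected_cost, bounds=(20.0, 110.0), method="bounded")
print(f"optimal BHF: {res.x:.1f}   expected cost: {res.fun:.2f} EUR/part")
```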

Fig. 11

- The deep drawing of a square panel (Numisheet 1993 benchmark) is used to demonstrate the RBEO approach. (a) Top view of 1/4 of the blank and formed part; the portion of the part outside the trimming line is sold as scrap. (b) Optimum BHF values for different combinations of sheet buying (p_b) and selling (p_s) prices. Retrieved from [82]

Table 2 References on optimization under stochastic conditions

Inverse material identification

The distinction between direct and inverse problems is linked to historical issues and to the characterization of cause and effect in a system. Oleg Alifanov [83] stated that the solution of an inverse problem entails determining unknown causes based on observation of their effects, in contrast to the corresponding direct problem, whose solution involves finding effects based on a complete description of their causes. The direct problem in science and engineering, which relates the model parameters to the observed/measured data, can be conceptually formulated as Model parameters → Data. The inverse problem, as the name indicates, relates Data → Model parameters.

In forming simulations, to obtain accurate stress and strain fields, the FEA code requires reliable input data such as geometry, mesh, non-linear material behaviour laws, loading cases, friction laws, etc. This fits the direct problem described above, in which the quality of the results relies on the quality of input data that are not always available. To overcome some of these difficulties, the inverse problem must be solved. The interest of the forming industry in inverse engineering approaches is increasing, mainly because the trial-and-error design procedures commonly used in the past are no longer competitive. Depending on which input data need to be evaluated, distinct inverse problems can be formulated.

One category of inverse problems in metal forming is called parameter identification, or calibration of constitutive models. The aim of these problems is, for instance, to estimate the material parameters of constitutive models. The development of new materials, and the effort to characterize existing ones, has led to the formulation of new, complex constitutive models. However, many of these models demand the determination of a large number of parameters adjusted to the material whose behaviour is to be simulated and, for the current complex constitutive models, the procedure requires non-linear optimization.

The parameters should always be determined by confronting numerical and experimental results, leading to a function that must be evaluated and minimized:

$$\underset{\mathbf{A}}{\text{minimize}}\; f\left(g^{\text{num}}\left(\mathbf{A}\right)-h^{\text{exp}}\right)$$
(1)

where A is the set of parameters to be searched by optimization, f is the cost function that guides the whole parameter identification (optimization) process, and g^num and h^exp represent the functions that account for the numerical and experimental observations, respectively. Although g^num is a computational function of the sought parameters and, therefore, must be iteratively recalculated, the experimental observations also undergo some form of processing (e.g. DIC procedures), yielding a function that is established once at the beginning of the procedure.
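A minimal computational realization of Eq. (1) in a FEMU spirit might look as follows; the "solver" is a closed-form Hollomon hardening law standing in for a full FE run, and the experimental vector h^exp is synthetic, so everything except the least-squares structure is an assumption.

```python
import numpy as np
from scipy.optimize import least_squares

strain = np.linspace(0.0, 0.2, 30)

def g_num(A):
    k, n = A                                   # Hollomon-type parameters (K, n)
    return k * np.maximum(strain, 1e-9) ** n   # stand-in for the FE simulation g^num(A)

rng = np.random.default_rng(3)
h_exp = g_num([520.0, 0.22]) + rng.normal(0, 2.0, strain.size)  # synthetic "experiment"

# minimize f(g_num(A) - h_exp) with f = least squares, iterating only on g_num
res = least_squares(lambda A: g_num(A) - h_exp, x0=[400.0, 0.10],
                    bounds=([100.0, 0.01], [1000.0, 0.60]))
print("identified parameters:", np.round(res.x, 3))
```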

Figure 12 presents the general procedures used for the inverse identification of parameters of non-linear plasticity models. The first methodology, called Finite Element Model Updating (FEMU), builds the cost function by comparing measurable variables: local observations, such as the strains, and global observations, such as the load. The second methodology, the nonlinear Virtual Fields Method (VFM), uses a balance equation between the external and internal virtual work to compute the cost function and determine the parameters. This balance is built using special test functions, denoted virtual fields, which are responsible for establishing new local/global balances within the specimen field and avoid the need for unknown data, such as the constrained boundary conditions. Although the balances brought by the VFM have an integral formulation, as opposed to the local and direct strain comparison made in the FEMU approach, the possibility of using an infinity of virtual field functions allows an unlimited number of such relations to be built (a conceptual sketch is given after Fig. 12).

Fig. 12

– General methodologies for parameter identification: (a) Finite Element Model Updating (FEMU) and (b) Virtual Field Method (VFM)
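To make the virtual work balance used by the VFM concrete, the following is a deliberately reduced sketch: it assumes plane stress, a uniaxial stress state, and a single linear virtual field u*_x = x/L (hence a constant virtual strain eps*_xx = 1/L, so the external virtual work equals the boundary force). The strain fields, forces, and constitutive law are all synthetic; this illustrates the principle, not any published implementation.

```python
import numpy as np
from scipy.optimize import least_squares

T, L, AREA = 1.0, 50.0, 1.0            # thickness, gauge length [mm], cell area [mm^2]
n_steps, n_cells = 15, 200
rng = np.random.default_rng(4)

# synthetic "measured" strain fields: one row per load step, one column per cell
eps = np.cumsum(rng.uniform(0.001, 0.003, size=(n_steps, n_cells)), axis=0)

def stress(eps, A):
    k, n = A
    return k * eps ** n                # stand-in constitutive model

forces = T / L * AREA * stress(eps, [520.0, 0.22]).sum(axis=1)  # consistent boundary forces

def residual(A):
    w_int = T / L * AREA * stress(eps, A).sum(axis=1)  # internal virtual work per step
    return w_int - forces                              # external virtual work = F * u*(L) = F

res = least_squares(residual, x0=[400.0, 0.10], bounds=([100.0, 0.01], [1000.0, 0.60]))
print("identified parameters:", np.round(res.x, 3))
```

Note that, apart from the resultant force, no boundary conditions enter the balance, which is precisely the advantage the virtual fields bring over a direct strain comparison.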

The quality of the parameter identification procedure relies not only on (i) the methodology, such as the ones shown in Fig. 12, and its cost function, but also on (ii) the nature, amount and quality of the experimental data; (iii) the optimization algorithm used; and (iv) the numerical analysis and simulation.

State-of-the-art overview

A comparison of the general methodologies and strategies for solving the inverse problem of parameter identification of constitutive mechanical models using full-field measurements is reported by Martins et al. [84]. That paper, also written for researchers and engineers new to the calibration of constitutive models, collects and describes the four most used and promising parameter identification methodologies. In this regard, another review paper that must be highlighted is that of Avril et al. [85].

Recently, several developments have enhanced both the quantity and the quality of the experimental data obtainable, in comparison with classical tests instrumented with strain gauges or linear transducers. Full-field optical measurements, such as digital image correlation (DIC) or the grid method, have brought a paradigm shift in experimental mechanics. A detailed overview of these methods can be found in [86].

Although inverse identification techniques such as the FEMU and the VFM have been continuously developed, and optical measurement systems have advanced enormously, mechanical test methods and specimens are generally not well adapted and cannot take full advantage of this evolution. A recent review of the research on the design and optimization of heterogeneous mechanical tests for the identification of material parameters from full-field measurements is given by Pierron and Grediac [87]. This state-of-the-art review clearly shows that designing mechanical tests for the current technology, there called Material Testing 2.0 (MT2.0), is an emerging research topic, lying at the frontier between image processing, experimental mechanics, and computational mechanics.

The choice of a robust optimization algorithm for the inverse problem of parameter identification in non-linear mechanics is not particular to this inverse problem, and arises in other parameter identification methods for physical models represented by partial differential equations [88]. For non-linear plasticity models, and in the generality of cases, gradient-based optimization methods are used. The choice of a least-squares gradient-based algorithm can be justified by the cost function generally defined in these problems, which is a squared difference between the experimental and numerical observable variables, or between the external and internal virtual work. These algorithms perform well and seem suitable, but they also present disadvantages, such as dependence on the initial parameter set and a tendency to stall in local minima [89]. A few studies have explored the use of other optimization algorithms and strategies, such as direct-search and stochastic methods [90], [91].

The numerical analysis and simulation of the constitutive model are generally performed via Finite Element Analysis (FEA) of the experimental mechanical test. Although this FEA reproduces the boundary conditions of the test, and the precision of today's FEA codes and software is impressive, the simulation remains a numerical model embedding assumptions and simplifications. Nevertheless, in the last decades, the effort devoted to this task has been clearly larger than that devoted to the others. From Turner until now, both the methods and the computational power have undergone a remarkable evolution [92]. Reviews concerning the modeling of metal forming can be found in [93] and [94], and in well-known books on forming technology.

The beginning at ESAFORM

The contribution of ESAFORM, particularly through the ESAFORM conferences, to this topic has been significant, and the aim here is to give an overview. The first contributions date back to the beginning of the century; however, only after 2008 did the contribution become evident.

Gavrus et al. [95] used the Erichsen drawing test to inversely identify the rheological parameters of steels, through a scheme successfully built around a finite element model, although no heterogeneous full-field data were used. In the same year, Pottier et al. [96] already combined full-field measurements obtained by digital image correlation with an inverse method to identify five parameters of an elastoplastic model. This procedure allowed exploiting the data coming from the diffuse necking stage to successfully identify the parameters, which would be impossible with classical homogeneous data. The comparison between full-field data (numerically reproduced by FEA) and homogeneous data (numerically represented by a single point) was later discussed by de-Carvalho et al. [90]. The latter paper showed that exploiting the heterogeneous deformation stages of mechanical tests, such as the diffuse necking phase, enriches the inverse identification procedure at the expense of an increased computational effort; additionally, the integration approximations inherent to the FEM influence the identification procedure.

An inverse analysis was also used by van Hoof and Lain [97] to identify the parameters of a micro-macro mechanical model (a mean-field model coupled with a Gurson-Tvergaard porous plastic law). The heterogeneous mechanical response was simulated using Abaqus, and genetic algorithms were used as the optimization technique. Seven parameters were identified for each phase (pearlite, ferrite, and graphite), with the ferrite phase being the most difficult to characterize numerically.

At the 2010 ESAFORM conference, Aydin et al. [98] identified the parameters of the advanced Yld2000-2d yield criterion using the Erichsen cup drawing and tensile tests. For that purpose, FEA results were compared with the experimental values, and the gap in the strain and stress fields and in the r-values was minimized using the LS-OPT package.

Later, in 2013, Grilo et al. [99] presented a practical implementation of a FEMU inverse method for the non-quadratic Yld2004-18p yield criterion, combined with a mixed isotropic-nonlinear kinematic hardening law. Homogeneous single-element tensile, shear and bulge tests were used, under monotonic and cyclic loading, together with a least-squares optimization technique, achieving quite good results for dual-phase steels and high-strength aluminum alloys. One year later, Szeliga et al. [100] applied a similar procedure to internal variable models. In their work, the internal variable represents the average dislocation density, and relaxation tests were used in order to take the recrystallization kinetics into account; a sensitivity analysis was performed to improve the inverse identification process. The identification of the microstructure evolution was discussed by the same authors in a later communication [101]; there, however, the inverse analysis relied on two-step compression tests, which present non-uniform fields of strains, stresses, and temperatures, and a full-field Finite Element Model Updating (FEMU) procedure was therefore applied.

The opportunity of full-field data

The inverse identification of constitutive parameters can be improved with enhanced and controlled input data, particularly data containing a large richness of strain states. Based on this assumption, Rossi et al. [102] presented a numerical procedure to design an optimal geometry for specimens to be used in identifying the hardening behaviour of sheet metals with the virtual fields method (VFM). The geometry of the specimen was parametrized, and 25 geometries were evaluated in terms of the identification results and the resulting stress-strain curves. In this procedure, all the errors originating from the optical technique, including the effects of resolution and spatial resolution, were accounted for through the use of DIC-filtered synthetic images. It was observed that a sharp strain concentration close to the notches is detrimental to the identification process and that, although most geometries were able to capture the hardening behaviour at low strain, only a few gave satisfactory results at large strains (see Fig. 13).

Fig. 13

– (a) Geometry and parametrization for specimen design. Seven design variables are used to obtain the best identification procedure. Equivalent plastic strain for the (b) final optimal geometry and for (c) a non-optimal one for comparison purposes (retrieved from [102])

Aquino et al. [103] presented a new methodology to design heterogeneous specimens for parameter identification, based on shape optimization procedures and a strain-richness criterion. Two new specimens were presented, in which strain states ranging from compression to tension were generated. Later, other specimen design methodologies were developed. Almeida et al. [104] used a preliminary topology methodology based on statistical information to find new specimens; however, many of the shapes achieved cannot be used in real mechanical tests due to the difficulty of manufacturing the specimens.

In 2020, Zhang et al. [105] also designed a complex specimen using shape optimization procedures. In addition, a strain-richness assessment method, called identifiability, was proposed to evaluate the effectiveness of the new specimens in the identification of the anisotropic Hill48 model. The work was validated using DIC-based synthetic experiments. Clear improvements over the classical dog-bone specimen were obtained with the addition of a designed c-shaped notch. This work on identifiability was recently extended to the Yld2000-2d yield function [106].
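A common, sensitivity-based way to express such an identifiability indicator (not necessarily the exact metric of [105]) is to build the Jacobian of the simulated field with respect to the parameters and inspect its singular values: a large condition number flags parameter sets that the test can hardly separate. The response function below is an invented stand-in.

```python
import numpy as np

XI = np.linspace(0.01, 0.2, 300)           # strain "field" sample points (synthetic)

def response(A):
    k, n = A
    return k * XI ** n / 500.0             # hypothetical normalized specimen response

def identifiability(A, rel_step=1e-4):
    base = response(A)
    J = np.empty((base.size, len(A)))
    for j, aj in enumerate(A):             # forward finite-difference sensitivities
        Ap = list(A)
        Ap[j] = aj * (1 + rel_step)
        J[:, j] = (response(Ap) - base) / (aj * rel_step)
    s = np.linalg.svd(J, compute_uv=False)
    return s[0] / s[-1]                    # condition number of the sensitivity matrix

print("condition number:", round(identifiability([520.0, 0.22]), 1))
```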

Macek et al. [107] compared different heterogeneous test designs from the perspective of the confidence-interval quantification of inversely identified parameters, taking into account the influence of the systematic and random errors of a DIC optical system. For that purpose, the authors statistically monitored the uncertainty data processed from the DIC system over the previous three years, covering over 850 heterogeneous test measurements, without any classification of the experiment types. The results expose the appropriateness of individual specimen designs for the identification of particular material parameters, giving strong motivation for the further development of enhanced specimen shapes, such as the one developed by Conde et al. [108]. The distinguishing feature of this latter work is that the specimen was designed using shape optimization procedures with a design universe constrained to the interior notch, a decision justified by the use of existing standard testing machines and grips and by the need to control slipping in the grips. The results show that the specimen's heterogeneity is increased by a non-circular interior notch, which generates both uniaxial tension and shear strain states in the plastic region.

In the last decades, several inverse methods for the parameter identification of plasticity models using full-field strain data have been developed. Although the majority of the research uses Finite Element Model Updating (FEMU), as described previously, other methods have gained notoriety. Methods such as the Constitutive Equation Gap Method (CEGM), the Constitutive Compatibility Method (CCM), the Dissipative Gap Method (DGM), the Equilibrium Gap Method (EGM), the Self-Optimizing Method (SOM), and the Virtual Fields Method (VFM) were first developed for elasticity and later extended to non-linear plasticity. Among these, the VFM has clearly taken the lead and presents itself as the principal competitor to the FEMU approach. Martins et al. [84] performed a comparative study of the FEMU and the VFM for elastoplastic models. Though both techniques proved their feasibility and robustness for non-linear plastic constitutive models, the FEMU method is more CPU-demanding and more sensitive to the number of time steps; conversely, unlike the FEMU method, the VFM depends on the plane-stress assumption.

Later, in 2020, Fu et al. [109] used the virtual fields method (VFM) to simultaneously identify the constitutive parameters of the Hill 1948 anisotropic yield criterion and of nonlinear kinematic hardening models for rolled sheet metals, from a dedicated low-cycle tension-compression test. This test presented a large range of strain states, allowing the anisotropy of the material to be characterized. The work shows that the input parameters can be satisfactorily recovered from the designed tension-compression configuration and the VFM procedure, verifying the feasibility of using the VFM in real experiments.

Temperature effects

Extensions of the FEMU and the VFM to Johnson-Cook thermomechanical models can be found in Martins et al. [110] and [89], using both virtual and real heterogeneous full-field databases; the robustness of the proposed methodologies was tested with noisy data. Simultaneously, Oliveira et al. [111] presented a procedure to identify the parameters of a thermo-mechanical Hockett-Sherby-type law for the EN AW 6061-T6 aluminium alloy, based on results from experimental uniaxial tensile tests performed on a Gleeble machine. The analysis of both communications highlights that non-isothermal conditions promote an increase of the strain rate in the centre of the specimen. In the same year, Rossi and his team [112] used a variant of the VFM, called the Fourier-series-based VFM (F-VFM), to identify the properties of tailor heat-treated (THT) blanks in terms of different thermomechanical hardening parameters. Again, this paper demonstrates the application of full-field identification methods to heterogeneous thermal and strain full-field measurements. In this case, however, the F-VFM was applied to identify the spatial distribution of the hardening parameters of a material subjected to heterogeneous heat treatments and, therefore, exhibiting heterogeneous mechanical behavior. The results obtained with experimental data were quite good; nevertheless, the temperature added a further level of difficulty to the parameter identification problem.

Optimization algorithms for parameter identification

All the previous inverse methods, when applied to non-linear constitutive models, require an optimization algorithm and, in general, gradient-based algorithms are used. Oliveira et al. [89], however, analysed the influence of the optimization algorithm on the inverse FEMU identification of the non-linear thermomechanical Johnson-Cook model. A direct-search algorithm (Nelder-Mead), a least-squares gradient-based algorithm (Levenberg-Marquardt), and a metaheuristic (Differential Evolution) were compared, also under different levels of data noise. As expected, and as can be seen in Fig. 14, the metaheuristic algorithm proved to be the most robust, at the cost of a higher CPU effort; however, it too is susceptible to local minima, although less so than the other algorithms. Optimization strategies were suggested to overcome eventual problems.

The best optimization algorithm for inverse material identification is still an open issue. The parameter identification formulation, independently of the methodology (FEMU, VFM, etc.), points towards a least-squares optimization technique. However, FEA and DIC calculations, with their discrete meshes and subsets, result in non-continuous cost functions with noise levels that are inadvisable for gradient-based least-squares algorithms. Therefore, researchers are turning to derivative-free algorithms, such as the Nelder-Mead simplex method or metaheuristics; when the computational effort is large, response surface methods (RSM) can also be used. Focused research is required to settle this issue and provide guidelines to the ESAFORM community and to the industry dealing with parameter identification; a small comparative sketch follows.
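The three algorithm families discussed above can be contrasted on a synthetic, noisy identification problem; the cost function below is invented, and only the choice of algorithms (a least-squares Levenberg-Marquardt, a direct-search Nelder-Mead, and a Differential Evolution metaheuristic) mirrors the comparison of [89].

```python
import numpy as np
from scipy.optimize import least_squares, minimize, differential_evolution

strain = np.linspace(0.0, 0.2, 50)
rng = np.random.default_rng(5)
target = 520.0 * strain ** 0.22 + rng.normal(0, 2.0, strain.size)  # noisy "experiment"

def residuals(A):                          # vector residual for least squares
    return A[0] * np.maximum(strain, 1e-9) ** A[1] - target

def scalar_cost(A):                        # scalar cost for NM and DE
    return float(np.sum(residuals(A) ** 2))

lm = least_squares(residuals, x0=[400.0, 0.10], method="lm")
nm = minimize(scalar_cost, x0=[400.0, 0.10], method="Nelder-Mead")
de = differential_evolution(scalar_cost, bounds=[(100.0, 1000.0), (0.01, 0.60)], seed=0)

for name, sol in (("LM", lm.x), ("NM", nm.x), ("DE", de.x)):
    print(name, np.round(sol, 3))
```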

In order to identify the most influential parameters for robust parameter identification, Steffes-Lai [113] developed a fully automatic parameter classification procedure, which reduces the parameter space and minimizes the computational effort of subsequent analysis and optimization tasks. This work represents a first step towards robust identification processes.

Fig. 14

- Evolution of the objective function throughout the function evaluations using a gradient-based least-squares (LM), a direct-search (NM), and a metaheuristic (DE) algorithm, represented from left to right. The results correspond to data sets without noise and with different noise amplitudes, from top to bottom (retrieved from [89]). An in-depth analysis is still required concerning the adequate optimization technique for inverse material parameter identification

Machine learning models for inverse material identification

In the inverse identification of constitutive model parameters, the use of machine learning and, particularly, of artificial neural networks should be emphasized. At the 2008 ESAFORM conference, Aguir et al. [114] presented a new methodology for the identification of elastoplastic behavior using a hybrid approach in which Artificial Neural Networks (ANN) are trained on finite element results, and the multi-objective identification procedure calls the ANN function in place of the finite element code, saving CPU effort. The same authors [115] proposed in 2012 an inverse analysis strategy coupled with an ANN model to identify the Cazacu and Barlat'2001 material parameters of an orthotropic standard mild steel, DC06. In that work, the ANN model was trained on finite element simulations of the cylindrical cup deep drawing test, and the calibration procedure, which reduces the gap between the experimental and numerical responses, was driven by a genetic algorithm (GA) to enhance the process. The following year, the same ANN-GA procedure was applied by the authors to material damage models [116] for AISI 304 steel, this time using the bulge test. Nevertheless, it was highlighted that the difficulty of this inverse problem still lies in the long computing time required when an optimization procedure is coupled with a finite element computation (FEC) to identify the material parameters.
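The surrogate idea can be sketched as follows; the "FE runs" are replaced by a closed-form stand-in, Differential Evolution plays the role of the evolutionary search, and all ranges and responses are assumptions, so only the structure (train an ANN on FE results, then optimize on the ANN instead of the FE code) follows [114, 115].

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(6)
strokes = np.array([0.05, 0.10, 0.15])                 # response sampled at three strokes
params = rng.uniform([100.0, 0.05], [1000.0, 0.60], size=(400, 2))   # (K, n) samples

def fe_response(p):                                    # stand-in for batch FE runs
    return p[:, 0:1] * strokes[None, :] ** p[:, 1:2] / 1000.0        # scaled force curves

surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(64, 64),
                                       max_iter=3000, random_state=0))
surrogate.fit(params, fe_response(params))             # train once on the FE database

target = 520.0 * strokes ** 0.22 / 1000.0              # synthetic "experimental" curve

def gap(p):                                            # the ANN replaces the FE code here
    return float(np.sum((surrogate.predict(p.reshape(1, -1))[0] - target) ** 2))

res = differential_evolution(gap, bounds=[(100.0, 1000.0), (0.05, 0.60)], seed=0)
print("parameters identified on the surrogate:", np.round(res.x, 3))
```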

The calibration problems and the mathematical formulation constraints of analytical constitutive models were addressed by Gaspar and Andrade-Campos [117], who developed an ANN implicit plasticity model, implemented it in an FEA code, and used it in forming operations. Although the results were quite satisfactory, the accuracy of the model depends heavily on the large amount of data required for training; here, synthetic data from the elastic-viscoplastic Chaboche model were used, and the problem of acquiring large amounts of experimental data was highlighted.

Vuppala et al. [118] used a data-driven approach to characterize the flow curves of aluminum and copper under compression tests. In contrast to analytical and ANN models, the flow curve was here represented as tabular data at different displacement steps, which makes it easier to represent complex flow curves. Two different methods, a heuristic one and an iterative one, were discussed; however, only the iterative method was able to estimate flow curves for generalized deformation conditions, yielding an error of less than 2%.
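A rough sketch of such an iterative, tabular identification follows; the "simulator" and the pointwise correction rule are simplified assumptions, not the published algorithm, but they show why a table of stress values at discrete strain points can represent flow curves that no fixed analytical law matches.

```python
import numpy as np

strain_pts = np.linspace(0.02, 0.5, 10)        # tabular support points (assumed)
flow = np.full(strain_pts.size, 300.0)         # initial flow-stress guess [MPa]

def simulate_force(flow):
    # stand-in for a compression FE run: force at step i scales with the stress there
    return 1.8 * flow * (1.0 - 0.3 * strain_pts)

true_flow = 450.0 * strain_pts ** 0.18         # "unknown" material behaviour
measured = simulate_force(true_flow)           # synthetic measured force curve

for _ in range(10):                            # fixed-point correction of each table entry
    flow *= measured / simulate_force(flow)

print(np.round(flow, 1))                       # table now reproduces the measured forces
```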

Although it is expected that promising ML techniques will be intensively used in the next decade, this approach is only taking its first steps and still lacks maturity. This is also visible in the ESAFORM community: even a search in Google Scholar and in the Web of Science (Clarivate) database for machine learning in metal forming and parameter identification returns a very small number of results.

Conclusions

The most important current trend in the European manufacturing industry is the transition to a resource-efficient and information-based economy. In this respect, optimization and inverse analysis in metal forming play a crucial role in enabling this green and digital transition. Indeed, this field of study essentially aims to devise tools, techniques and strategies to foster innovative forming technologies and manufacturing routes that yield an optimal use of resources. The relevance of the field is still increasing as data-driven manufacturing emerges. Over the years, optimization and inverse analysis in metal forming has branched into a multidisciplinary engineering discipline with a strong mixed numerical-experimental character.

The cornerstone of process optimization is the numerical simulation of the forming process under investigation. The optimization aspect, however, requires clever and sophisticated numerical strategies to leverage the numerical simulation efforts and derive optimal forming conditions. A myriad of approaches and strategies has been proposed in the literature, and the second section of this paper summarizes the developments chronologically, highlighting the contribution of the ESAFORM community. Relevant optimization examples are given with regard to sheet metal, bulk, and tube forming. It can be concluded that a significant body of knowledge concerning process optimization has been established in the last decade. Ultimately, the aim is to implement these techniques in a complex and uncertain industrial environment. The community working on optimization soon realized that this requires a profound understanding of the role of uncertainty in forming process optimization, and the third section of this paper reviews the developments made to account for it.

Finally, the fourth section of this paper focuses on a relatively young discipline, namely inverse material characterization. Given that the predictive accuracy of forming simulations highly depends on the accuracy of the adopted material model, and given that effective material models are complex and incorporate many parameters, reliable and efficient inverse methods are required to determine material model parameters from information-rich experiments. Inverse material characterization methods heavily rely on optimization techniques, yet the state of the art touches a multitude of disciplines, including many aspects of experimental mechanics. Despite strong proofs of concept, the successful industrial valorization of inverse material characterization methods will require finding solutions to efficiently design information-rich experiments.