CAE: Computer Aided Engineering
DTW: Dynamic Time Warping
EARTH: Error Assessment of Response Time Histories
MADYMO: MAthematical DYnamic MOdel
NCAP: USA New Car Assessment Program
NHTSA: National Highway Traffic Safety Administration
NSGA: Non-dominated Sorting Genetic Algorithm
\(n_\varepsilon\): EARTH phase error
p: Number of responses
SME: Subject Matter Expert
\(\varepsilon_m\): EARTH magnitude error
\(\varepsilon_s\): EARTH slope error
\(\mu_{n_\varepsilon}^b\): Mean phase error for baseline model
\(\mu_{n_\varepsilon}^o\): Mean phase error for optimal model
\(\mu_{\varepsilon_m}^b\): Mean magnitude error for baseline model
\(\mu_{\varepsilon_m}^o\): Mean magnitude error for optimal model
\(\mu_{\varepsilon_s}^b\): Mean slope error for baseline model
\(\mu_{\varepsilon_s}^o\): Mean slope error for optimal model
\(\sigma_{n_\varepsilon}^b\): Standard deviation of phase error for baseline model
\(\sigma_{n_\varepsilon}^o\): Standard deviation of phase error for optimal model
\(\sigma_{\varepsilon_m}^b\): Standard deviation of magnitude error for baseline model
\(\sigma_{\varepsilon_m}^o\): Standard deviation of magnitude error for optimal model
\(\sigma_{\varepsilon_s}^b\): Standard deviation of slope error for baseline model
\(\sigma_{\varepsilon_s}^o\): Standard deviation of slope error for optimal model

1 Introduction

With the ever-shortening time to market in the automobile industry, Computer-Aided Engineering (CAE) is widely used to simulate the vehicle interior, restraint system, and occupants in various impact modes. In occupant simulation, multiple dummy injury measures in the form of functional data, such as head injury criteria, chest acceleration, femur loads, and neck moment, are monitored simultaneously (Fu and Abramoski 2005). To ensure that CAE remains an effective tool, there is a strong need for high-quality CAE models with good predictive capability. This requires CAE engineers to calibrate the models against physical tests. Model calibration uses the results from CAE models and physical tests to find the best model parameters. The key challenges in occupant restraint system model calibration are: (1) the dynamic system usually consists of multiple responses; (2) most of the responses are functional data or time histories, so both the global and local differences between CAE and test data, such as phase shift, magnitude, and shape, need to be considered; and (3) the traditional trial-and-error calibration approach is time consuming and depends heavily on the analyst's expertise. These challenges call for the development of an automatic and effective model calibration method. Several auto-correlation methods have been proposed to automatically select the best values of the model parameters according to different criteria (Liu et al. 2005; Fu et al. 2009). However, a comprehensive approach that achieves an optimal compromise among multiple functional responses and the major features within each response, such as phase, magnitude, and shape, has seldom been reported.

Model validation is the process of comparing CAE model outputs with test data in order to assess the validity or predictive capability of the CAE model for its intended use. The validation metric is one of the critical elements in model validation. A proper validation metric, which can also be used for calibration, can greatly enhance the efficiency and effectiveness of model calibration. Oberkampf and Barone (2006) and other researchers presented critical and ideal characteristics for selecting a model validation metric (Fu et al. 2010). In the past few years, quantitative model validation methods have been reported in the literature (Mahadevan and Rebba 2005; Rebba and Mahadevan 2006; Jiang and Mahadevan 2007). However, few of them deal with functional data (Jiang and Mahadevan 2008; Jiang et al. 2009; Fu et al. 2010; Zhan et al. 2011a). These methods mainly emphasize the overall distribution of the difference between test and CAE data, while local features such as phase, magnitude, and slope are seldom addressed. Recently, an objective rating metric named Error Assessment of Response Time Histories (EARTH; Sarin et al. 2008, 2010) was developed and showed good potential for dynamic system applications. However, this metric has two drawbacks: (1) it produces three separate error measures that only give quantitative indications, and Subject Matter Experts (SMEs) are required to combine them into a single overall rating of model quality; and (2) the uncertainties related to CAE and test data are not considered (Kokkolaras et al. 2011). One solution to the first challenge can be found in Zhan et al. (2011b).

Traditionally, model calibration is conducted by CAE engineers who tune the CAE model parameters through a trial-and-error approach and visual or graphical comparison. This process is time consuming and subjective. In this study, an objective metric combined with an optimization method is employed for automatic model calibration. Despite its aforementioned issues, the EARTH metric is selected to quantify the differences between two sets of functional data. In the following sections, the considerations or criteria for selecting a validation metric are first described, followed by a brief introduction of the EARTH metric. Next, an automatic model calibration method is proposed, in which a new multi-objective optimization problem is formulated and a Non-dominated Sorting Genetic Algorithm is used to optimize the model parameters. A frontal impact CAE model built with MAthematical DYnamic MOdel (MADYMO; TNO Automotive 2010) is used to demonstrate the effectiveness and efficiency of the proposed method. Finally, a summary is given.

2 Metric selection

Development and selection of an appropriate objective metric is one of the most important factors in achieving successful model validation and calibration. A validation metric is regarded as ideal if it has the following characteristics (Oberkampf and Barone 2006; Fu et al. 2010): (1) objectiveness: given the test and CAE results, the metric should produce the same assessment result regardless of which analyst conducts it. This property ensures that the validation result is reproducible and independent of the attitudes or predilections of the analysts. (2) generalization: the metric should be suitable for various types of data comparisons, e.g., two random variables, two sets of scalar values, and two vectors considering uncertainty. It should be able to reflect the differences both in the full distributions of the test and CAE results and in major features, such as phase shift, mean shift, and size or magnitude difference. (3) physical meaning and engineering knowledge: the metric should provide a quantitative assessment of model quality with clear physical meaning, and be able to incorporate subject matter expert (SME) opinion. In addition, there are other desirable characteristics such as symmetry, simplicity, and incorporation of uncertainty. It is noted that a usable metric may not possess all of these properties; the selection of an effective metric should be based on the application requirements. For occupant restraint system applications, the first three characteristics are the primary considerations in this study.

3 Error assessment of response time histories metric

In order to minimize the interactions among features such as phase, magnitude, and shape, an objective metric named Error Assessment of Response Time Histories (EARTH) was developed (Sarin et al. 2008). The rating structure of the EARTH metric is shown in Fig. 1. The EARTH metric is divided into global response error and target point response error. The global response error is defined as the error associated with the complete functional data, with equal weight on each discretized point of the time histories. It has three main components: phase error \(n_\varepsilon\), magnitude error \(\varepsilon_m\), and slope error \(\varepsilon_s\). Target point error is defined as the error associated with certain localized phenomena of interest; such errors are generally application dependent and thus beyond the scope of this work. A unique feature of the EARTH metric is that it employs dynamic time warping (DTW; Rabiner and Huang 1993) to separate the interactions of phase, magnitude, and slope errors. DTW is an algorithm for measuring the discrepancy between time histories. It aligns peaks and valleys as much as possible by expanding and compressing the time axis according to a given cost (distance) function (Lei and Govindaraju 2003).
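To make the DTW step concrete, the following minimal Python sketch implements the classical dynamic time warping recursion (cf. Rabiner and Huang 1993). It illustrates the general algorithm only and is not the specific implementation used inside EARTH; the function name and local distance are illustrative choices.

```python
import numpy as np

def dtw_cost(x, y):
    """Minimal DTW: accumulated cost of optimally warping the time axis so that
    x and y are aligned point-to-point. EARTH uses the resulting alignment (the
    warping path, not returned here) to decouple phase from magnitude/slope errors."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])            # local distance
            D[i, j] = cost + min(D[i - 1, j],          # expansion of the time axis
                                 D[i, j - 1],          # compression of the time axis
                                 D[i - 1, j - 1])      # direct match
    return D[n, m]

t = np.linspace(0.0, 0.1, 200)
print(dtw_cost(np.sin(60 * t), np.sin(60 * (t - 0.01))))  # small cost despite a phase shift
```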

Fig. 1 EARTH metric rating structure

The phase error captures the overall error in timing between two functional responses when all points of the responses are considered, as depicted in Fig. 2(a). The magnitude error is defined as the difference in amplitude between the two functional responses when there is no time lag between them, as depicted in Fig. 2(b). The slope error captures the error associated with the shape of the functional responses, such as the number of peaks and valleys and the slopes, as depicted in Fig. 2(c). When calculating the magnitude and slope errors, the dynamic time warping method is employed to minimize the effect of local or target point errors. Details of the EARTH metric can be found in Sarin et al. (2010).
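For intuition only, the sketch below computes simple surrogates for the three global errors: a least-squares lag standing in for the DTW-based phase error, and normalized amplitude and derivative differences after alignment standing in for the magnitude and slope errors. These are illustrative stand-ins under that assumption, not the EARTH formulas of Sarin et al. (2010), and all names are hypothetical.

```python
import numpy as np

def best_lag(test, cae, max_lag):
    """Time shift (in samples) of cae relative to test that minimizes the mean
    squared difference over the overlapping region; a crude phase estimate."""
    best, best_err = 0, np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = test[lag:], cae[:len(cae) - lag]
        else:
            a, b = test[:len(test) + lag], cae[-lag:]
        err = np.mean((a - b) ** 2)
        if err < best_err:
            best, best_err = lag, err
    return best

def global_error_surrogates(test, cae, dt, max_lag=50):
    """Illustrative surrogates for the EARTH phase, magnitude, and slope errors;
    NOT the formulation of Sarin et al. (2010), shown only to convey intuition."""
    lag = best_lag(test, cae, max_lag)
    phase = abs(lag) * dt                        # timing error, in seconds
    aligned = np.roll(cae, -lag)                 # remove the estimated time lag
    magnitude = np.linalg.norm(aligned - test) / np.linalg.norm(test)
    d_test, d_cae = np.gradient(test, dt), np.gradient(aligned, dt)
    slope = np.linalg.norm(d_cae - d_test) / np.linalg.norm(d_test)
    return phase, magnitude, slope
```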

Fig. 2 Examples to illustrate the three types of global response errors: a phase, b magnitude, c slope

4 Automatic model calibration process

The goal of this research is to develop an optimization process that can automatically optimize CAE model parameters and find feasible parameter configurations that match multiple injury responses between CAE and test results. Based on the desired characteristics of a validation metric described in Section 2, the EARTH metric is selected for model calibration. The EARTH metric measures the quality of a CAE model through three error measures: phase error \(n_\varepsilon\), magnitude error \(\varepsilon_m\), and slope error \(\varepsilon_s\). Smaller errors indicate that the CAE results better match the test. Let p represent the number of responses of interest. A multi-objective optimization problem is formulated to find the MADYMO model parameter values that reduce the mean values and standard deviations of the three errors over all p responses. Because the three error values have different ranges, the baseline model errors are used to normalize the data. The mean values and standard deviations of the baseline model errors can be expressed as:

$$\begin{array}{rll} \mbox{Phase \, error}:\mu_{n_\varepsilon }^b &=&\sum\limits_{i=1}^p n_{\varepsilon i}^b /p\;, \\ \sigma_{n_\varepsilon }^b &=&\sqrt {\frac{1}{p}\sum\limits_{i=1}^p {\left( {n_{\varepsilon i}^b -\mu_{n_\varepsilon }^b } \right)^2} } \end{array}$$
(1)
$$\begin{array}{rll} \mbox{Magnitude \, error}:\mu_{\varepsilon_m }^b &=&\sum\limits_{i=1}^p \varepsilon_{m i}^b /p\;, \\ \sigma_{\varepsilon_m }^b &=&\sqrt {\frac{1}{p}\sum\limits_{i=1}^p {\left( {\varepsilon_{m i}^b -\mu_{\varepsilon_m }^b } \right)^2} } \end{array}$$
(2)
$$\begin{array}{rll} \mbox{Slope \, error}:\mu_{\varepsilon_s }^b &=&\sum\limits_{i=1}^p \varepsilon_{s i}^b /p\;, \\ \sigma_{\varepsilon_s }^b &=&\sqrt {\frac{1}{p}\sum\limits_{i=1}^p {\left( {\varepsilon_{s i}^b -\mu_{\varepsilon_s }^b } \right)^2} } \end{array}$$
(3)

where the superscript b indicates the baseline model. The corresponding quantities for the optimal model are denoted as \(\mu_{n_\varepsilon }^o \), \(\sigma_{n_\varepsilon }^o \), \(\mu_{\varepsilon_m }^o \), \(\sigma_{\varepsilon_m }^o \), \(\mu_{\varepsilon_s }^o \), and \(\sigma_{\varepsilon_s }^o \). Hence, a new multi-objective optimization problem is formulated to minimize the ratios of the mean EARTH errors of the optimal model to those of the baseline model. It is written as:

$$ \mbox{Minimize}\quad {\begin{array}{*{20}c} \hfill {z_1 =\mu_{n_\varepsilon }^o /\mu_{n_\varepsilon }^b } \\[2pt] \hfill {z_2 =\mu_{\varepsilon_m }^o /\mu_{\varepsilon_m }^b } \\[2pt] \hfill {z_3 =\mu_{\varepsilon_s }^o /\mu_{\varepsilon_s }^b } \\ \end{array} } $$
(4)
$$ \mbox{Subject\thinspace to}\quad {\begin{array}{*{20}c} \hfill {g_1 =\sigma_{n_\varepsilon }^o /\sigma_{n_\varepsilon }^b \le 1} \\[2pt] \hfill {g_2 =\sigma_{\varepsilon_m }^o /\sigma_{\varepsilon_m }^b \le 1} \\[2pt] \hfill {g_3 =\sigma_{\varepsilon_s }^o /\sigma_{\varepsilon_s }^b \le 1} \\ \end{array} } $$
(5)

The standard deviation constraints in (5) are imposed to handle, or smooth out, the "poorly performing" responses with large EARTH errors. The main advantage of this formulation is that it minimizes the errors while also improving robustness. It is noted, however, that the multi-objective formulation in (4) and (5) is only one of many formulations that can achieve the goal of model calibration, and it may not be the best one.
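As a concrete reading of (1)-(5), the short sketch below evaluates the normalized objectives and constraint ratios from two (p × 3) arrays of per-response EARTH errors. The function and variable names are illustrative only, assuming the candidate and baseline errors have already been computed.

```python
import numpy as np

def objectives_and_constraints(errors_opt, errors_base):
    """errors_opt, errors_base: (p, 3) arrays of [phase, magnitude, slope] EARTH
    errors for the candidate and baseline models, one row per response."""
    # Means and population standard deviations over the p responses, as in (1)-(3).
    mu_o, sigma_o = errors_opt.mean(axis=0), errors_opt.std(axis=0)
    mu_b, sigma_b = errors_base.mean(axis=0), errors_base.std(axis=0)
    z = mu_o / mu_b        # objectives z1, z2, z3 in (4), to be minimized
    g = sigma_o / sigma_b  # constraint ratios g1, g2, g3 in (5), required to be <= 1
    return z, g
```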

Because of the competing objectives, a Non-dominated Sorting Genetic Algorithm (NSGA-II; Deb et al. 2000) is employed to obtain the Pareto optimal set for different model parameter configurations.

Figure 3 shows the flowchart of the automatic model calibration process for an occupant restraint system, which integrates the optimization solver NSGA-II, MADYMO model simulation, and a MATLAB program for the EARTH metric to determine the optimal model parameters. The process starts by formulating the model calibration problem, obtaining test data, preparing the MADYMO model, and defining the model parameters and their ranges. Next, it runs MADYMO simulations to predict the multiple injury responses. It then executes a MATLAB program to calculate the three EARTH errors and evaluates the multi-objective functions. Finally, it employs NSGA-II to find the optimal values of the MADYMO model parameters. The optimization loop continues until a satisfactory result is obtained. One possible implementation of this loop is sketched below.
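The following sketch is one possible realization of this loop, not the authors' actual toolchain: it couples a hypothetical MADYMO driver and EARTH evaluator (the placeholder names run_madymo_simulation and compute_earth_errors) with the open-source pymoo implementation of NSGA-II, assuming pymoo ≥ 0.6. The stubs return synthetic data so the sketch executes; in practice they would call the solver and the MATLAB EARTH program.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

rng = np.random.default_rng(0)

def run_madymo_simulation(x):
    # Hypothetical stand-in for a batch MADYMO run driven by parameter vector x;
    # here it returns 11 synthetic time histories so the sketch runs end to end.
    return [rng.standard_normal(200) for _ in range(11)]

def compute_earth_errors(responses):
    # Hypothetical stand-in for the MATLAB EARTH program; returns a (p, 3) array
    # of [phase, magnitude, slope] errors, one row per response.
    return np.abs(rng.standard_normal((len(responses), 3)))

class RestraintCalibration(ElementwiseProblem):
    def __init__(self, xl, xu, base_mu, base_sigma):
        # 16 model parameters, 3 objectives as in (4), 3 inequality constraints as in (5)
        super().__init__(n_var=16, n_obj=3, n_ieq_constr=3, xl=xl, xu=xu)
        self.base_mu, self.base_sigma = base_mu, base_sigma

    def _evaluate(self, x, out, *args, **kwargs):
        errors = compute_earth_errors(run_madymo_simulation(x))
        mu, sigma = errors.mean(axis=0), errors.std(axis=0)
        out["F"] = mu / self.base_mu              # objectives z1, z2, z3
        out["G"] = sigma / self.base_sigma - 1.0  # g_i <= 1 rewritten as G <= 0

# Baseline errors would come from one run of the uncalibrated model; placeholders here.
base = compute_earth_errors(run_madymo_simulation(None))
problem = RestraintCalibration(xl=np.zeros(16), xu=np.ones(16),
                               base_mu=base.mean(axis=0), base_sigma=base.std(axis=0))
res = minimize(problem, NSGA2(pop_size=40), ("n_gen", 5), seed=1, verbose=False)
print(res.F)  # Pareto-optimal objective values found by NSGA-II
```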

Fig. 3 Automatic model calibration process for occupant restraint system

5 A vehicle dynamic system case study

A driver side occupant restraint system MADYMO model (Fu et al. 2009), shown in Fig. 4, is used to demonstrate the proposed method. The model simulates a full frontal rigid barrier impact at a speed of 35 mph with a belted 50th percentile Hybrid III dummy (NHTSA 2011) in a vehicle. This represents one of the USA New Car Assessment Program (NCAP) test modes.

Fig. 4 A driver side occupant restraint system model

Sixteen model parameters (shown in Table 1) are identified to be optimized. They are selected as the MADYMO model parameters for the following reasons: (1) the corresponding component tests show a significant range of variation, e.g., stiffness; (2) representative component test data are unavailable, e.g., friction; or (3) the quantity is not controllable, e.g., impact load magnitude and location. Their lower and upper bounds are chosen based on the component tests and engineering experience. Eleven occupant responses (summarized in Table 2) are monitored and compared with the test data to evaluate the quality of the MADYMO model.

Table 1 Sixteen model parameters
Table 2 Eleven occupant responses

Figure 5 shows the time history plots of the test and baseline CAE model results for the eleven responses. Some CAE responses match the test well (e.g., the belt load at the shoulder shown in Fig. 5e), but some do not (e.g., the right femur load shown in Fig. 5g and the upper neck moment shown in Fig. 5j). This indicates that the predictive capability and accuracy of this CAE model can be further improved.

Fig. 5 Time history plots for the test and baseline CAE results: a \(r_1\) chest deflection, b \(r_2\) chest acceleration in x-direction, c \(r_3\) belt load at anchor, d \(r_4\) belt load at retractor, e \(r_5\) belt load at shoulder, f \(r_6\) left femur load in z-direction, g \(r_7\) right femur load in z-direction, h \(r_8\) head acceleration in x-direction, i \(r_9\) upper neck load, j \(r_{10}\) upper neck moment, k \(r_{11}\) pelvis acceleration in x-direction

The NSGA-II is then applied to identify the sixteen optimal model parameter values that minimize the EARTH errors. The optimization process consists of 500 runs, among which 105 runs are feasible solutions and 6 runs form the Pareto set. After further filtering, the 288th run is selected as the optimal design because it achieves a good compromise among all objectives and constraints. The model parameter values of the baseline and optimal models are listed in Table 3. The detailed optimization results are listed in Table 4, and histograms of the errors are shown in Fig. 6. After the optimization, the three objectives \(z_1\), \(z_2\), and \(z_3\) are 0.72, 0.90, and 0.88, respectively. The three constraints \(g_1\), \(g_2\), and \(g_3\) are all satisfied, with values of 0.78, 0.56, and 0.91, respectively. It is observed that, first, compared with the baseline model, the optimal model has smaller average values for all three EARTH errors. Second, because of the reduced error standard deviations, "poorly performing" responses with relatively large errors are less likely to occur in the optimal model. Third, the model parameters may be improved further, as only 500 runs were performed and all constraint values are below 1. Nevertheless, it is concluded that the optimal model has better predictive capability than the baseline model, based on the EARTH metric.
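The paper does not state the filtering rule used to select the 288th run from the Pareto set. As an assumption for illustration, the sketch below picks the feasible design closest to the ideal (component-wise best) point in normalized objective space, which is one common way to choose such a compromise solution.

```python
import numpy as np

def pick_compromise(Z, G):
    """Z: (n, 3) objective values z1..z3; G: (n, 3) constraint ratios g1..g3.
    Returns the index of the feasible design closest to the ideal point.
    This selection rule is an assumption; the paper does not specify its filtering step."""
    feasible = np.all(G <= 1.0, axis=1)
    Zf = Z[feasible]
    ideal = Zf.min(axis=0)                      # component-wise best objective values
    dist = np.linalg.norm(Zf - ideal, axis=1)   # Euclidean distance to the ideal point
    return np.flatnonzero(feasible)[np.argmin(dist)]
```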

Table 3 Model parameters of baseline and optimal models
Table 4 EARTH metric results for each response of baseline and optimal CAE models
Fig. 6 EARTH error histograms of baseline and optimal CAE models: a phase error, b magnitude error, and c slope error

Figure 7 shows the time history plots of the test and optimal CAE model results for the eleven responses. The relatively poor responses in the baseline model, such as the left femur load in z-direction (Fig. 7f), right femur load in z-direction (Fig. 7g), head acceleration in x-direction (Fig. 7h), and upper neck moment (Fig. 7j), are improved significantly after the model calibration. However, some other responses are compromised to achieve this overall model improvement.

Fig. 7 Time history plots for the test and optimal CAE results: a \(r_1\) chest deflection, b \(r_2\) chest acceleration in x-direction, c \(r_3\) belt load at anchor, d \(r_4\) belt load at retractor, e \(r_5\) belt load at shoulder, f \(r_6\) left femur load in z-direction, g \(r_7\) right femur load in z-direction, h \(r_8\) head acceleration in x-direction, i \(r_9\) upper neck load, j \(r_{10}\) upper neck moment, k \(r_{11}\) pelvis acceleration in x-direction

6 Summary

This paper presents an automatic CAE model calibration methodology based on the EARTH metric and the Non-dominated Sorting Genetic Algorithm. The EARTH metric is selected to quantify the differences between CAE and test responses for dynamic systems. It computes three independent error measures that are associated with the key features of the functional responses. A multi-objective optimization problem is formulated to systematically and automatically update the occupant restraint system model parameters and improve the quality of the CAE model. The optimization formulation not only minimizes the errors but also improves the robustness. The method has been successfully implemented and demonstrated, and significant model improvement was achieved in a frontal impact case study. Compared to the trial-and-error method, the automatic process can also save considerable engineering time.