Introduction

Many industrial sectors (e.g., nuclear engineering, automotive, aeronautics, rail) require experimental characterization of a large variety of materials under dynamic conditions. Material parameters that characterize the behavior are identified under relatively simple and well-controlled experimental conditions and are then imported into complex computations related to engineering applications for design and certification purposes. However, material parameters are imperfectly known because of measurement uncertainties. In addition, validation and quality assessment of complex simulations usually consist in comparing measurements performed on the real system of interest with computations performed on the basis of the identified material parameters. If significant discrepancies are observed, several questions arise in the design process. Are simulation choices and assumptions well verified for the complex system of interest? Is the material behavior model extrapolated too far from the loading conditions (temperature, strain and strain-rate levels) actually tested on specimens in the laboratory? Is the magnitude of the discrepancies compatible with the uncertainties on the identified material parameters? Thus, for engineering applications and probabilistic risk analysis, it is fundamental not only to identify material parameters with good-quality experiments but also to estimate the overall uncertainty on the identified material parameters. Following this assertion, the present paper focuses on the probabilistic interpretation of measurements provided by the split Hopkinson pressure bar (SHPB) system. This work therefore deals with uncertainties introduced by the experimental setup and diagnostics rather than with the test design procedure (interfacial friction, dispersion, etc.).

The SHPB system is a well-known and very common experimental apparatus for dynamic testing. Since the early work of [22], a considerable amount of work has been devoted to improving measurement quality and wave analysis [6, 11, 12, 18, 19, 23, 31, 43, 44]. In particular, several corrections have been proposed, namely: friction at the specimen/bar interfaces [18, 23], punching of the specimen into the bars [31], wave scattering due to three-dimensional effects [43], and deconvolution techniques for long tests or visco-elastic bars [44]. Despite this considerable effort to improve wave analysis, significant and undetected errors are sometimes made during material characterization campaigns, which can be critical for the intended applications. Indeed, the SHPB analysis takes strain gauge measurements on each bar as inputs and provides displacements and forces at both ends of the specimen as outputs. However, stress and strain-rate as functions of strain are needed to identify the material behavior. Some assumptions are therefore necessary, the most significant of which is that equilibrium is rapidly reached in the cylindrical specimen (i.e., the compression wave propagates almost instantaneously from one end of the specimen to the other). Thus, discrepancies between the real material behavior and the obtained stress–strain curve are expected because of these simplifying assumptions and, to a lesser extent, the one-dimensional wave propagation model. The equilibrium assumption is sometimes very poorly verified, leading to significant and undetected errors. In that case (or when the specimen is not cylindrical), material parameters should be identified by using inverse methods relying on dynamic Finite Element modeling of the bar/specimen system (in order to relax the equilibrium assumption). Several modeling strategies have been proposed [4, 8, 17, 20, 30, 40]. However, since the computational cost of such approaches is very significant, most experimental campaigns rely on the classic assumptions in order to directly determine stress and strain-rate as functions of strain from displacements and forces at both ends of the specimen. Thus, the quality of the equilibrium assumption should be verified for each test with an acceptance-rejection criterion.

In addition, uncertainties and test variability also affect measurements and material parameter identification. Within this framework, three issues are responsible for the overall uncertainty on the identified material parameters, namely:

  1. (i) Imperfect knowledge of the experimental setup: the analysis used to transform strain gauge measurements into stress and strain-rate signals involves several experimental parameters (specimen size, material parameters of the bars, strain gauge factors, wave propagating velocity, etc.) that are imperfectly known, which in turn leads to uncertainties on the measured displacements and forces.

  2. (ii) Measurement noise: even though measurement noise may have deterministic causes, at the scale of the experimental setup it consists in a purely random signal affecting the strain gauge measurements.

  3. (iii) Material variability and repeatability of tests: fabrication and forming processes have a great influence on material properties, which are more or less inhomogeneous across the pieces from which specimens are extracted. In addition, experimental conditions are not perfectly controlled, and two tests are never identical (e.g., striker speed, lubrication conditions at the specimen/bar interfaces, etc.).

Moreover, the behavior model accounts only in a simplified way for the real mechanisms responsible for the overall material behavior; some behavior models are even purely phenomenological laws with limited validity. Thus, there are residual discrepancies between the real behavior and the behavior model. This source of uncertainty is not taken into account in this contribution.

Usually, material parameter identification is based on deterministic inverse methods. A significant effort has been made to adapt such methods to SHPB dynamic tests (see, among many others, [17, 20]). A review has recently been published by [24]. Such methods are usually based on least-square minimization between Finite Element computations and measurements. Thus, it is possible to deal with complicated specimen designs with an inhomogeneous mechanical state. However, the overall uncertainty on the identified material parameters cannot be estimated conveniently.

On the contrary, within a probabilistic framework, Bayesian inference can be used to estimate the material parameters involved in a specific behavior model and quantify the related uncertainties. This approach is similar to computer model calibration problems [21]. General descriptions of Bayesian statistics are presented, for instance, by [3, 13]. Identification methods within a probabilistic framework have also been presented by [38]. For instance, [14, 42] identified elastic parameters with a Bayesian approach. In addition, [26, 27] proposed a methodology to identify elastic-plastic material parameters accounting for model uncertainties. Bayesian inference has also been used in [2] within the context of the characterization of visco-plastic models, by considering that some material parameters depend on experimental conditions (e.g., temperature); a non-parametric Gaussian Process was used to account for this variability. Within the framework of acoustics, a material characterization based on Bayesian analysis has been proposed in [5]. The well-known Preston–Tonks–Wallace (PTW) model [25] has been characterized in [41] on the basis of Bayesian estimation by analyzing shock waves produced by flyer plate experiments.

Within the context of dynamic mechanical testing using the SHPB system, very few attempts to use Bayesian analysis have been published. A simple Bayesian approach was developed in [34] to obtain a single PTW set of parameters bridging compression tests and Rayleigh–Taylor instability, which was unachievable for beryllium S200F. In addition, a hierarchical Bayesian analysis was proposed in [10] to estimate material parameters of a PTW model for various materials. However, these works only considered measurement noise (a centered normal random variable with unknown diagonal covariance matrix). On the contrary, this paper is an attempt to introduce prior uncertainties due to imperfect knowledge of the experimental setup. To do so, the experimental settings (e.g., bar stiffness, position of strain gauges, density, etc.) are treated as unknown control inputs to the model used to interpret the measured signals. There is an uncertainty associated with determining the value of each of these experimental parameters. In other contexts, similar calibration problems have been proposed [1], in which the control inputs were not exactly known but randomly perturbed from the nominal input. In the present contribution, a simple methodology is proposed for estimating the additional uncertainty introduced by the imperfectly known experimental settings. The experimental parameters needed to interpret strain gauge measurements are assumed to be random variables. Normal or rectangular distributions of known mean and variance are considered, depending on the measurement technique. If necessary, a series of measurements is performed to obtain reliable statistics. As a result, stress and strain-rate are given for each test as random variables. Thus, the proposed Bayesian estimations do not rely on deterministic stresses and strain-rates affected by measurement noise but on random variables characterized by means and non-diagonal covariance matrices. This additional information enables us to quantify more realistically the uncertainties related to the overall material parameter estimation.

The paper is organized as follows. The classic one-dimensional wave propagation model is briefly recalled in section Classic Wave Analysis. Then, a probabilistic framework is introduced in sections Imperfect Knowledge of the Experimental Setup and Statistical Analysis in order to deal with uncertainties due to imperfect knowledge of the experimental setup. In addition, in section Experimental Campaign and Overall Uncertainty, a series of tests is performed on the aluminum alloy AA7075-O in order to address material variability and repeatability of tests, so that the overall uncertainty is identified. Since controlling temperature during SHPB tests is difficult, a few additional tests in the quasi-static regime are performed at different temperatures to characterize the temperature dependence of the behavior. A simple Steinberg–Cochran–Guinan model [36] is presented in section Modeling Choices and the modeling choices are detailed. Then, in section Standard Bayesian Estimation, a standard Bayesian estimation exploiting the data in the quasi-static regime only is proposed to identify the material parameter associated with temperature dependence. Finally, the remaining material parameters are identified in section Hierarchical Bayesian Estimation by developing a hierarchical Bayesian model exploiting the data in the dynamic regime (i.e., accounting for the uncertainty due to imperfect knowledge of the experimental setup). Concluding remarks are given in section Conclusion.

Classic Wave Analysis

In this section, classic results are briefly recalled for the sake of clarity. Indeed, the statistical analysis proposed in the following relies on the one-dimensional wave analysis used to convert strain gauge measurements into displacement and force signals at the bar/specimen interfaces, which in turn give stress and strain-rate as functions of strain under simplifying assumptions. More advanced analyses proposed by [43] (accounting, for instance, for wave dispersion due to 3D effects) could be used instead, but the developments would be more technical. In addition, the deconvolution techniques introduced by [44] have not been used in this contribution, nor has the correction for the punching of the specimen into the Hopkinson bars proposed by [31]. The SHPB system and the main notations are presented schematically in Fig. 1. Notations are listed in Table 1.

Fig. 1: SHPB system

A compression wave is generated in the input bar by throwing a striker against it. The duration \(\varDelta t\) of the compression pulse depends only on the striker length \(L_S\) and the wave propagating velocity C in the striker (assumed to be made of the same material as the input and output bars):

$$\begin{aligned} \varDelta t=\frac{2 L_S}{C} \end{aligned}$$
(1)

Although the strain-rate in the specimen varies significantly during the test, the average strain-rate is controlled by the striker velocity (measured by laser techniques), which determines the magnitude of the compression pulse. The compression wave propagates through the input bar until it reaches the bar/specimen interface, where part of the wave is reflected (as a traction wave) and the rest is transmitted into the specimen. The same reflection/transmission phenomenon occurs at the second bar/specimen interface. Thus, a compression wave is transmitted into the output bar and a traction wave is reflected into the input bar. Strain gauges are glued to the input and output bars to measure the compression/traction waves. The measured voltage signals are denoted by \(V_I(t)\) and \(V_O(t)\) and are converted into strains \(\widetilde{\varepsilon }_I(t)\) and \(\widetilde{\varepsilon }_O(t)\) by multiplying by the calibration factors \(K_{SGI}\) and \(K_{SGO}\). Typical measured signals are presented in Fig. 2a. The input bar needs to be long enough so that the reflected traction wave does not overlap with the incident compression wave, in order to avoid advanced deconvolution techniques.

Fig. 2: Wave measurements

Basic wave analysis consists in cutting the measured signals \(\widetilde{\varepsilon }_I(t)\) and \(\widetilde{\varepsilon }_O(t)\) (recorded on \(t\in \left[ t_{ini},t_{end}\right]\) where \(t_{end}-t_{ini}>\varDelta t\)) into incident, reflected and transmitted signals \(t\in \left[ 0,\varDelta t\right] \mapsto \varepsilon _i(t)\) (\(i\in \left\{ I,R,T\right\}\), where I stands for incident, R for reflected and T for transmitted). The time origin of \(\varepsilon _I(t)\) is set manually as shown in Fig. 2a. The time origins of \(\varepsilon _R(t)\) and \(\varepsilon _T(t)\) are not determined manually because of the difficulty of estimating the relatively smooth starting point of the pulse. Thus, the signal cutting process consists in determining the time origins of \(\varepsilon _R(t)\) and \(\varepsilon _T(t)\) automatically, by considering the strain gauge positions \(L_{SGI}\) and \(L_{SGO}\) and computing the time needed for each wave to reach the corresponding strain gauge.

It is therefore necessary to estimate the wave propagating time in the specimen, as it introduces a delay. To do so, the specimen mass \(m_0\), diameter \(d_0\) and length \(l_0\) are measured and the specimen density \(\rho _0\) is computed. The wave propagating velocity of the specimen \(c_0\) is then estimated from the specimen Young modulus. This estimation may be difficult if the Young modulus is strain-rate sensitive. In this paper, \(c_0\) is estimated by assuming that the specimen Young modulus does not depend on strain-rate, which is consistent with the chosen aluminum alloy. However, for materials with a strain-rate sensitive Young modulus, further developments would be needed. The wave propagating time in the specimen is \(\varDelta t_0=l_0/c_0\), which can be an important parameter for obtaining an accurate stress–strain response, particularly for specimens with very low wave velocities [6]. However, in this contribution, since the specimen length \(l_0\) is very small compared to the distances \(L_{SGI}\) and \(L_{SGO}\) and since \(c_0\) is similar to the wave propagating velocity in the bars, the uncertainty on \(c_0\) has a negligible impact on the signal cutting process. For instance, for the tested experimental conditions, a \(\pm 5\%\) variation on \(c_0\) has no effect on the stress–strain response, as the resulting variation on the time origin is smaller than the time interval between two successive measurement points (the acquisition frequency being set to 1 MHz). It should be noted that only the uncertainty on \(c_0\) is neglected and not its nominal value. The resulting signals are presented in Fig. 2b.
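The cutting step can be summarized by a short sketch. The following Python fragment is only illustrative: the signal and parameter names (`eps_tilde_I`, `t0_I`, `dt_pulse`, etc.) are assumptions, and the actual implementation used in this work may differ.

```python
import numpy as np

def cut_signals(eps_tilde_I, eps_tilde_O, t, t0_I, dt_pulse,
                L_SGI, L_SGO, C, l0, c0):
    """Illustrative cutting of the raw gauge strains into incident, reflected and
    transmitted pulses of duration dt_pulse."""
    # incident pulse: its time origin t0_I is set manually
    mask_I = (t >= t0_I) & (t < t0_I + dt_pulse)
    eps_I = eps_tilde_I[mask_I]
    # reflected pulse: the wave travels from the input gauge to the specimen and back
    t0_R = t0_I + 2.0 * L_SGI / C
    mask_R = (t >= t0_R) & (t < t0_R + dt_pulse)
    eps_R = eps_tilde_I[mask_R]
    # transmitted pulse: input gauge -> specimen, through the specimen, -> output gauge
    t0_T = t0_I + L_SGI / C + l0 / c0 + L_SGO / C
    mask_T = (t >= t0_T) & (t < t0_T + dt_pulse)
    eps_T = eps_tilde_O[mask_T]
    return eps_I, eps_R, eps_T
```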

Forces \(F_I(t)\) and \(F_O(t)\) at both ends of the specimen read:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle {F_I(t)=\frac{\pi D_I^2}{4}\rho C^2\left( \varepsilon _I(t)+\varepsilon _R(t)\right) }\\ \displaystyle {F_O(t)=\frac{\pi D_O^2}{4}\rho C^2\varepsilon _T(t)}\\ \end{array}\right. \end{aligned}$$
(2)

The displacement difference between both ends of the specimen reads:

$$\begin{aligned} u(t)=C\int _0^t\left( \varepsilon _T(\tau )-\varepsilon _I(\tau )-\varepsilon _R(\tau )\right) \text {d}\tau \end{aligned}$$
(3)

Assuming that the specimen is at equilibrium, that is to say \(F_I(t)\approx F_O(t)\) or equivalently \(\varepsilon _I(t)+\varepsilon _R(t)\approx \varepsilon _T(t)(D_O/D_I)^2\), and assuming that the stress/strain state is homogeneous in the specimen, the nominal stress \(\sigma _0(t)\) and strain \(\varepsilon _0(t)\) in the specimen can be computed:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle {\sigma _0(t)\approx \frac{4F_O(t)}{\pi d_0^2}}\\ \displaystyle {\varepsilon _0(t)\approx \frac{u(t)}{l_0}} \end{array} \right. \end{aligned}$$
(4)

Since equilibrium has been assumed, the choice of \(F_O\) for the computation of \(\sigma _0\) in (4) is rather arbitrary. However, \(F_I\) is proportional to \(\varepsilon _I+\varepsilon _R\) and is therefore dependent on the synchronization of \(\varepsilon _I\) and \(\varepsilon _R\), whereas \(F_O\) is proportional to \(\varepsilon _T\), which is directly measured. In addition, the strain gauge distance on the input bar \(L_{SGI}\) is large in order to prevent the incident wave from overlapping with the reflected wave, whereas the strain gauge on the output bar can be fixed much closer to the specimen. Thus, three-dimensional effects (geometrical dispersion) affect the transmitted signal much less, and \(F_O\) is often less affected by oscillations. Furthermore, the specimen itself acts as a low-pass filter (particularly when the specimen material is soft), which tends to reduce oscillations, and less dispersion is usually observed [6].

The true stress \(\sigma (t)\) and true strain \(\varepsilon (t)\) read:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle {\sigma (t)=\sigma _0(t)\left( 1+2\nu _p\varepsilon _0(t)\right) }\\ \displaystyle {\varepsilon (t)=\ln \left( 1+\varepsilon _0(t)\right) } \end{array} \right. \end{aligned}$$
(5)

where \(\nu _p\) is the coefficient of plastic expansion that is set to 0.5 for metals due to deviatoric plastic flow.

In the following, stress, strain and strain-rate will refer to the true stress, true strain and true strain-rate according to (5).
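For reference, Eqs. (2)–(5) can be chained in a few lines. The sketch below is not the exact code used for the results of this paper: it assumes pulses sampled on a common time grid of step `dt` and uses a simple rectangle rule for the integral of Eq. (3).

```python
import numpy as np

def classic_wave_analysis(eps_I, eps_R, eps_T, dt, D_I, D_O, rho, C, d0, l0, nu_p=0.5):
    """Chain Eqs. (2)-(5): forces, relative displacement and true stress/strain."""
    A_I = 0.25 * np.pi * D_I**2                   # input bar cross-section
    A_O = 0.25 * np.pi * D_O**2                   # output bar cross-section
    F_I = A_I * rho * C**2 * (eps_I + eps_R)      # Eq. (2)
    F_O = A_O * rho * C**2 * eps_T
    # Eq. (3): displacement difference, crude rectangle-rule integration
    u = C * np.cumsum(eps_T - eps_I - eps_R) * dt
    sigma_0 = 4.0 * F_O / (np.pi * d0**2)         # Eq. (4), nominal stress
    eps_0 = u / l0                                #          nominal strain
    sigma = sigma_0 * (1.0 + 2.0 * nu_p * eps_0)  # Eq. (5), true stress
    eps_true = np.log(1.0 + eps_0)                #          true strain
    return sigma, eps_true, F_I, F_O
```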

Imperfect Knowledge of the Experimental Setup

Two sets of experimental parameters having an influence on the measurements can be distinguished. The first set is composed of the parameters directly used in the wave analysis presented in section Classic Wave Analysis (see Table 1). These parameters are used to transform the voltage signals \(V_I(t)\) and \(V_O(t)\) into a stress–strain response. The second set is composed of all the other parameters having an influence on the measured signals \(V_I(t)\) and \(V_O(t)\) but not used in the wave analysis. For instance, bar alignment, strain gauge length, etc., have an influence on the measured signals and therefore on the results. Indeed, since the analytical wave analysis presented in section Classic Wave Analysis relies on simplifying assumptions (such as bars being perfectly aligned and straight, with flat and parallel impact surfaces), experimental imperfections can introduce a bias, that is to say a systematic error. Similarly, the strain gauge dimension, which tends to average the strain over the gauge length, has been neglected. In addition, pulse shaping techniques [6, 9] have not been used in this paper. However, pulse shapers could be used to filter out high frequencies of the compression pulse in order to limit the effect of wave dispersion. Pulse shaping techniques do not introduce significant additional uncertainties though, as it is not necessary to know the pulse shaper characteristics to interpret the test (no assumption is made on the shape of the compression pulse).

The uncertainty related to the second set of parameters is more difficult to characterize, as these parameters do not appear quantitatively in the wave analysis presented in section Classic Wave Analysis. It is theoretically possible to introduce the effect of the uncertainties related to these parameters by considering a much more detailed model for the wave analysis. For instance, a fully three-dimensional mechanical model would make it possible to take bar alignment issues into account directly in the wave analysis, so that the effect of alignment uncertainties could be quantified. Similarly, a detailed model of the strain gauge would make it possible to take into account the settings of the strain gauges and Wheatstone bridges and the associated uncertainties. However, such detailed models imply significant computation time, which would slow down the following statistical approach. A surrogate model that mimics the behavior of the fully three-dimensional simulation of the SHPB test could be used to reduce the computational cost. Since this contribution is limited to the classical analytical interpretation of the measured signals, only the first set of parameters listed in Table 1 is considered, that is to say the bias introduced by neglecting the other parameters is neither estimated nor corrected.

All experimental parameters used in section Classic Wave Analysis and listed in Table 1 are imperfectly known. One could estimate the experimental parameter errors at the same time as the material parameters (involved in the behavior model) through the Bayesian analysis. For instance, [10] analyzed the measured voltage signals in a deterministic way (as in section Classic Wave Analysis) in order to obtain stress signals. These stress signals were then assumed to be affected by an unknown overall error, which was determined at the same time as the material parameters through the Bayesian analysis. However, to take into account the uncertainty due to the imperfectly known experimental setup, this approach would require introducing many additional unknown standard deviations in the Bayesian analysis (there are 15 experimental parameters listed in Table 1). Moreover, this approach would require introducing the classic signal analysis presented in section Classic Wave Analysis directly into the Bayesian model. This is why, in this paper, the uncertainties on the experimental parameters are estimated first and the overall measurement uncertainties are then inferred (see sections Statistical Analysis and Experimental Campaign and Overall Uncertainty). Finally, the measurement uncertainties are introduced as known random errors in the Bayesian analysis to estimate the material parameters (see sections Standard Bayesian Estimation and Hierarchical Bayesian Estimation).

Thus, each experimental parameter X is considered as a random variable with an unknown probability density function depending on the measurement technique. An empirical estimation of the uncertainties for each measurement technique could be performed by measuring several known standards (e.g., objects with calibrated weight and dimensions) in order to obtain statistical distributions of the measurement errors. However, an alternative procedure has been preferred in this paper. Rectangular or normal probability density functions are chosen for each experimental parameter. In most cases, the parameter should have a positive value because of its physical meaning. As normal distributions allow negative values, the probability density function is truncated in order to avoid this issue. The experimental parameters have been measured to estimate the mean of the random variable, denoted by \(\overline{X}\). Then, the standard deviation, denoted by \(\varDelta X\), is set to a fixed value depending on the specific measurement technique. Indeed, classic standard deviations are usually associated with tape measurements, digital calipers, digital scales, etc., on the basis of manufacturer specifications and good experimental practice (measurements performed by a trained operator with suitable equipment). Of course, this procedure only gives a rough estimation of the measurement uncertainties, as the standard deviations are not characterized empirically but fixed at standard values. Nevertheless, this simplified approach has been chosen in this contribution, since it is convenient for real laboratory practice.

In addition, some parameters are computed from others. For instance, the mean and standard deviation of the density \(\rho =M_I/((\pi /4)D_I^2L_I)\) can be computed by simulating pseudo-random draws of the random variables \(M_I\), \(D_I\) and \(L_I\) (estimated by rectangular or normal distributions of mean and standard deviation denoted by \(\overline{M}_I,\varDelta M_I\), etc.). Alternatively, one can use approximations such as:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle {\overline{\rho }\approx \frac{4}{\pi }\frac{\overline{M}_I}{\overline{D}_I^2\overline{L}_I}}\\ \displaystyle {\frac{\varDelta \rho }{\overline{\rho }}\approx \sqrt{\left( \frac{\varDelta M_I}{\overline{M}_I}\right) ^2+\left( \frac{\varDelta L_I}{\overline{L}_I}\right) ^2+4\left( \frac{\varDelta D_I}{\overline{D}_I}\right) ^2}} \end{array}\right. \end{aligned}$$
(6)
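The two routes (pseudo-random draws versus the first-order approximation of Eq. (6)) can be compared with a minimal Monte Carlo sketch; the nominal values and dispersions below are purely illustrative and are not those of Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Illustrative nominal values and dispersions (NOT the values of Table 1)
M_I = rng.normal(7.4, 1e-3, N)                   # bar mass [kg], normal
D_I = rng.normal(0.020, 1e-5, N)                 # bar diameter [m], normal
L_I = rng.uniform(3.0 - 1e-3, 3.0 + 1e-3, N)     # bar length [m], rectangular

rho = M_I / (0.25 * np.pi * D_I**2 * L_I)        # pseudo-random draws of the density
print(rho.mean(), rho.std(ddof=1))               # to be compared with Eq. (6)
```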

The strain gauge measurements rely on Wheatstone bridges. Two gauges are glued on each side of each bar (one measuring the strain along the bar axis and the other measuring the strain perpendicular to it) in order to compensate for potential bending effects. The resulting signals are amplified, and the transfer coefficients \(K_{SGI}\) and \(K_{SGO}\) are measured by using a calibrated electric resistance simulating a 0.1% strain. Output voltages are then measured in two different positions (using a switch). This procedure is reliable and the associated uncertainty is estimated at around 1% by manufacturers at room temperature. This percentage refers to the calibrated values of \(K_{SGI}\) and \(K_{SGO}\) and not to the measured signals. Since this uncertainty is provided by the manufacturer, it is considered as an indicative value, and the standard deviation associated with \(K_{SGI}\) and \(K_{SGO}\) is roughly estimated at a fixed value of \(4\times 10^{-6}\), which is slightly more than 1% of the nominal values of \(K_{SGI}\) and \(K_{SGO}\). This estimation is a crude simplification; nevertheless, the overall calibration method for \(K_{SGI}\) and \(K_{SGO}\) is more accurate than alternative methods.

The wave propagating velocity C is usually estimated by performing tests without a specimen and measuring the time \(\varDelta t_{SG}\) between the beginning of the incident wave at the strain gauge of the input bar and the beginning of the transmitted wave at the gauge of the output bar. Since there is no specimen, the distance covered by the wave is estimated by \(\overline{L}_{SGI}+\overline{L}_{SGO}\) and the wave propagating velocity reads:

$$\begin{aligned} C=\frac{\overline{L}_{SGI}+\overline{L}_{SGO}}{\varDelta t_{SG}} \end{aligned}$$
(7)

This procedure, presented in Fig. 3a, is not very accurate because the starting point of each compression pulse is difficult to determine owing to the smooth transition towards the compression plateau. Nevertheless, this estimation is very often used in industry because of its simplicity, even though more advanced and accurate techniques have been proposed in [7]. The standard procedure presented in Fig. 3a is used in this contribution in order to assess the uncertainties commonly affecting SHPB tests. The random variable C is statistically characterized by performing several tests without a specimen, each measurement being interpreted as a draw of the random variable C. The issue of finding the starting point of the incident and transmitted pulses is addressed by making two independent estimations of the wave propagating velocity for each test without a specimen. In this contribution, 15 tests without a specimen have been performed, for a total of 30 draws of the wave propagating velocity. The obtained statistical distribution of C is presented in Fig. 3b, and a rectangular distribution is chosen for C. All parameter means and standard deviations are listed in Table 1.

Fig. 3: Wave propagating velocity

Table 1 Standard deviations

Statistical Analysis

Consider \(\varTheta\), the set of independent random variables necessary for the analysis of the measured signals:

$$\begin{aligned} \varTheta =\left\{ L_I,D_I,L_{SGI},K_{SGI},L_O,D_O,L_{SGO},K_{SGO},\rho ,C,l_0,d_0,c_0\right\} \end{aligned}$$
(8)

The classic one-dimensional wave analysis presented in section Classic Wave Analysis can be seen as a transfer function f associating a particular draw \(\varvec{\theta }^*\) of the random vector \(\varvec{\theta }\) gathering the parameters of \(\varTheta\) and the measured signals \(V_I(t)\) and \(V_O(t)\) with the corresponding draw of the true stress, true strain-rate and true strain \(\sigma ^*(t),\dot{\varepsilon }^*(t),\varepsilon ^*(t)\) defined by (5) (where the superscript \(*\) refers to a particular draw of a random variable):

$$\begin{aligned} f :\left( \varvec{\theta }^*,V_I(t),V_O(t)\right) \mapsto \left( \sigma ^*(t),\dot{\varepsilon }^*(t),\varepsilon ^*(t)\right) \end{aligned}$$
(9)

Since the one-dimensional wave analysis f is analytical, it would be possible to approximate analytically the mean and standard deviation of the stress and strain-rate as functions of strain. However, since each call to the function f has a reduced computational cost, a simple and straightforward sampling technique is chosen to generate the stress and strain-rate statistics as functions of strain. Consider that J loading conditions are tested, each of which includes \(K_j\) specimens. For each loading condition (denoted by j where \(1\le j\le J\)) and each specimen (denoted by k where \(1\le k\le K_j\)), the recorded signals are denoted by \(V_{I,j,k}(t),V_{O,j,k}(t)\). Then, N independent draws denoted by \(\varvec{\theta }_{j,k}^{(n)}\) (where \(1\le n\le N\)) are generated with pseudo-random numbers, giving in turn N output signals \(\sigma ^{(n)}_{j,k}(t),\dot{\varepsilon }^{(n)}_{j,k}(t),\varepsilon ^{(n)}_{j,k}(t)\) (where the superscript (n) replaces *). Then, for each test k, one can construct the stress and strain-rate as functions of strain, \(\sigma ^{(n)}_{j,k}(\varepsilon ^{(n)})\) and \(\dot{\varepsilon }^{(n)}_{j,k}(\varepsilon ^{(n)})\), which are interpolated with cubic splines. Thus, one defines a strain vector (of size M) denoted by \(\varvec{\varepsilon }=\left[ \varepsilon _1,\ldots ,\varepsilon _M\right]\) common to all draws, and the interpolation with cubic splines yields \(\sigma ^{(n)}_{j,k}(\varepsilon _m)\) and \(\dot{\varepsilon }^{(n)}_{j,k}(\varepsilon _m)\) (where \(1\le m\le M\)). For each loading condition j and each specimen k, consider the stress and strain-rate vectors (of size M) denoted by \(\varvec{\sigma }^{(n)}_{j,k}=\left[ \sigma ^{(n)}_{j,k}(\varepsilon _1),\ldots ,\sigma ^{(n)}_{j,k}(\varepsilon _M)\right]\) and \(\varvec{\dot{\varepsilon }}^{(n)}_{j,k}=\left[ \dot{\varepsilon }^{(n)}_{j,k}(\varepsilon _1),\ldots ,\dot{\varepsilon }^{(n)}_{j,k}(\varepsilon _M)\right]\). It is assumed that \(\varvec{\sigma }^{(n)}_{j,k}\) and \(\varvec{\dot{\varepsilon }}^{(n)}_{j,k}\) are draws of normal random vectors (of size M) denoted by \(\varvec{\sigma }_{j,k}\) and \(\varvec{\dot{\varepsilon }}_{j,k}\). There is a priori no reason to assume that stress and strain-rate are normal random variables, as they are computed from a non-linear combination (see section Classic Wave Analysis) of other random variables (the experimental parameters). Nevertheless, this assumption is strongly supported by the resulting distributions of stress and strain-rate. Thus, the set of draws is used to estimate means (of size M) and covariance matrices (of size \(M\times M\)). Classic estimators are used in this section and recalled in (10). It should be mentioned that one could also consider adding artificial noise to the measurements to improve the characterization of the estimators for bias and other systematic issues.

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle {\varvec{\overline{\sigma }}_{j,k}=\frac{1}{N}\sum _{n=1}^N\varvec{\sigma }^{(n)}_{j,k}}\\ \displaystyle {\varvec{V}^{\sigma }_{j,k}=\frac{1}{N-1}\sum _{n=1}^N\left( \varvec{\sigma }^{(n)}_{j,k}-\varvec{\overline{\sigma }}_{j,k}\right) .\left( \varvec{\sigma }^{(n)}_{j,k}-\varvec{\overline{\sigma }}_{j,k}\right) ^T}\\ \end{array} \right. \end{aligned}$$
(10)

Similar expressions hold for the strain-rates. One can also extract from the covariance matrices the square root of the diagonal, denoted by \(\varvec{\varDelta \sigma }_{j,k}\) (respectively \(\varvec{\varDelta \dot{\varepsilon }}_{j,k}\)), corresponding for each loading condition j and each specimen k to the point-wise standard deviation of the stress (respectively strain-rate) as a function of strain. Typical results are presented in the form of the mean stress \(\varvec{\overline{\sigma }}_{j,k}\) (respectively mean strain-rate \(\varvec{\overline{\dot{\varepsilon }}}_{j,k}\)) as a function of strain with an envelope corresponding to \(\pm 2\varvec{\varDelta \sigma }_{j,k}\) (respectively \(\pm 2\varvec{\varDelta \dot{\varepsilon }}_{j,k}\)), which corresponds to a probability of 95% of lying in the envelope. Results extracted for one particular test are presented in Fig. 4. The choice of a normal distribution and \(N=10000\) is visually confirmed in Fig. 5. A more detailed analysis of the confidence intervals is given in A.
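A minimal sketch of this sampling procedure is given below, assuming a function `f` implementing the wave analysis of Eq. (9) and strictly increasing strain draws; it reproduces the estimators of Eq. (10) with `numpy` and `scipy` but is not the production code used here.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def mc_statistics(theta_draws, V_I, V_O, f, eps_grid):
    """Estimate the mean vector and covariance matrix of Eq. (10) by sampling.

    theta_draws : list of N draws of the experimental parameters
    f           : wave analysis of Eq. (9), returning (stress, strain-rate, strain)
    eps_grid    : common strain vector of size M (strains assumed increasing)
    """
    N, M = len(theta_draws), len(eps_grid)
    sigma_draws = np.empty((N, M))
    for n, theta in enumerate(theta_draws):
        sigma_t, epsdot_t, eps_t = f(theta, V_I, V_O)
        # cubic-spline interpolation of the n-th draw on the common strain vector
        sigma_draws[n] = CubicSpline(eps_t, sigma_t)(eps_grid)
    sigma_mean = sigma_draws.mean(axis=0)            # mean, size M
    sigma_cov = np.cov(sigma_draws, rowvar=False)    # covariance, M x M, 1/(N-1) factor
    delta_sigma = np.sqrt(np.diag(sigma_cov))        # point-wise standard deviation
    return sigma_mean, sigma_cov, delta_sigma
```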

Fig. 4: Uncertainties for one test with \(N=10000\)

Fig. 5: Histogram plots for \(\varepsilon \approx 0.09\)

Experimental Campaign and Overall Uncertainty

An experimental campaign on the aluminum alloy AA7075-O has been performed in order to illustrate the methodology. To avoid the technicalities associated with the use of visco-elastic bars, an aluminum alloy has been chosen so that steel bars can be used. The experimental results revealed that the sensitivity to strain-rate is negligible. A series of \(J=8\) loading conditions has been tested, each of which includes \(K_j\) specimens, as listed in Table 2. The average striker velocity (measured by laser) is reported for each loading condition with the associated variation due to the fact that tests are not identical. All tests in the dynamic regime (\(1\le j\le 4\)) with the SHPB system are performed at room temperature (at least initially, because the specimen undergoes self-heating). It would be very difficult to identify the behavior dependence on temperature on this basis only. Consequently, these dynamic tests are complemented by additional tests on the same material in the quasi-static regime (\(5\le j\le 8\)) under controlled temperature conditions. The tested temperature range is 100 K, whereas the classic analysis based on the Taylor–Quinney coefficient (see section Modeling Choices) shows that the temperature increase is around 35 K for the dynamic conditions tested in this contribution. Thus, the temperature range for the quasi-static conditions is sufficient to identify the behavior dependence on temperature. Despite the fact that the behavior does not significantly depend on strain-rate, and that the quasi-static tests could be sufficient to identify the material behavior, a large number of tests in the dynamic regime have been performed, as the main purpose of the paper is to show how the uncertainties associated with the SHPB system may be estimated and integrated in model-based identification. The fact that the material behavior does not significantly depend on strain-rate has no influence on the proposed method, and the SHPB system is commonly used to identify material behaviors that are not very sensitive to strain-rate.

Table 2 Experimental campaign summary

All tests have been made on specimens extracted from the same plate at different positions. For each loading condition j, the experimental conditions are maintained as identical as possible. For instance, a large series of \(K_2=33\) tests has been performed with a target average strain-rate of 1000 s\(^{-1}\). Thus, the measured mean stress (respectively strain-rate), denoted by \(\overline{\varvec{\sigma }}_{j,k}\) (respectively \(\overline{\varvec{\dot{\varepsilon }}}_{j,k}\)), and the standard deviation, denoted by \(\varvec{\varDelta \sigma }_{j,k}\) (respectively \(\varvec{\varDelta \dot{\varepsilon }}_{j,k}\)) (\(1\le k\le K_j\)), are computed as detailed in section Statistical Analysis. One can compute the overall mean considering all \(K_j\) tests for each loading condition j:

$$\begin{aligned} \overline{\varvec{\sigma }}_{j}=\frac{1}{K_j}\sum _{k=1}^{K_j}\overline{\varvec{\sigma }}_{j,k} \end{aligned}$$
(11)

The overall standard deviation is given by:

$$\begin{aligned} \displaystyle {\varvec{\varDelta \sigma }_{j}=\sqrt{\varvec{S}^2_{j,1}+\varvec{S}^2_{j,2}}} \end{aligned}$$
(12)

where

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle {\varvec{S}^2_{j,1}=\frac{1}{K_j}\sum _{k=1}^{K_j}\left( \varvec{\overline{\sigma }}_{j,k}-\varvec{\overline{\sigma }}_j\right) ^2}\\ \displaystyle {\varvec{S}^2_{j,2}=\frac{1}{K_j}\sum _{k=1}^{K_j}\varvec{\varDelta \sigma }^2_{j,k}}\\ \end{array} \right. \end{aligned}$$
(13)

Similar expressions hold for strain-rates.
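The combination of test-to-test variability and measurement uncertainty in Eqs. (11)–(13) amounts to a few array operations, as sketched below (the array names and shapes are assumptions, not the authors' code).

```python
import numpy as np

def overall_uncertainty(sigma_means, sigma_stds):
    """Eqs. (11)-(13) for one loading condition j.

    sigma_means, sigma_stds : arrays of shape (K_j, M) holding the per-test mean
    stresses and point-wise standard deviations on the common strain vector.
    """
    sigma_j = sigma_means.mean(axis=0)                  # Eq. (11), overall mean
    S2_1 = ((sigma_means - sigma_j) ** 2).mean(axis=0)  # test-to-test variability
    S2_2 = (sigma_stds ** 2).mean(axis=0)               # average measurement variance
    return sigma_j, np.sqrt(S2_1 + S2_2)                # Eq. (12), overall std
```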

Since the other experimental conditions at 600 s\(^{-1}\), 2750 s\(^{-1}\) and 7500 s\(^{-1}\) include far fewer specimens, the overall uncertainty is computed by assuming that the uncertainty due to material variability and test repeatability (computed as a percentage of stress) is the same as for the series of \(K_2=33\) tests at 1000 s\(^{-1}\). Results are presented in Fig. 6. The average measurement uncertainty \(\varvec{S}_{j,2}\) clearly increases with the average strain-rate at the beginning of the test (i.e., \(0\le \varepsilon \le 0.07\)). This part of the stress–strain curve is more sensitive to measurement uncertainties because the stiffness is much higher on the one hand, and the specimen length is smaller on the other hand (leading to a higher absolute uncertainty).

Fig. 6: Dynamic tests with overall uncertainties (\(N=10,000\))

In addition, the behavior of the chosen material does not seem to be sensitive to strain-rate, as shown in Fig. 7: the average stress–strain responses are similar for very different strain-rates. Considering the overall uncertainty for each strain-rate condition, the discrepancies between the stress–strain responses in Fig. 7 are very likely due to material variability. Indeed, the specimens are extracted from different places of a single plate, which can present an inhomogeneous behavior.

Fig. 7: Material variability and independence on strain-rate

Furthermore, since the strain gauge measurements are transformed into stress and strain-rate as functions of strain with a simple model (see section Classic Wave Analysis), a bias is introduced because of the modeling assumptions. One of the most significant assumptions is that the specimen is at equilibrium, that is to say that the forces at both ends of the specimen are approximately equal (i.e., \(F_I(t)\approx F_O(t)\)). Thus, for each test, the quality of the equilibrium assumption is quantified by computing the following ratio:

$$\begin{aligned} R(t)=\frac{F_O(t)}{F_I(t)} \end{aligned}$$
(14)

The ratio R is statistically determined as a function of strain as detailed in section Statistical Analysis and is presented for one test in Fig. 8. Clearly, the equilibrium assumption is not verified during the whole test. Thus, the usable data is reduced to \(\varepsilon _{min}\le \varepsilon \le \varepsilon _{max}\) with \(\varepsilon _{min}=0.05\) and \(\varepsilon _{max}=0.38\), so that the model bias does not significantly affect the results.

Fig. 8: Equilibrium
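A possible implementation of this equilibrium check is sketched below; the tolerance `tol` is an illustrative choice and is not prescribed by the paper, which instead restricts the strain window.

```python
import numpy as np

def equilibrium_ratio(F_I, F_O, eps_true, eps_min=0.05, eps_max=0.38, tol=0.1):
    """Equilibrium check based on Eq. (14).

    tol is an illustrative acceptance tolerance on |R - 1| inside the retained
    strain window.
    """
    R = F_O / F_I
    window = (eps_true >= eps_min) & (eps_true <= eps_max)
    accepted = bool(np.all(np.abs(R[window] - 1.0) <= tol))
    return R, accepted
```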

Quasi-static conditions (\(5\le j\le 8\)) have been considered only to identify the behavior dependence on temperature, which is controlled by one material parameter denoted by \(G^{\prime}_T\), as detailed in section Modeling Choices. The mean stress is presented in Fig. 9a without the point-wise uncertainty at 95%, as there are only 2 or 4 tests for each condition. The uncertainty due to imperfect knowledge of the experimental setup has not been estimated for the quasi-static conditions. As a result, the data in the quasi-static regime are only used to identify \(G^{\prime}_T\) in section Standard Bayesian Estimation, with unknown uncertainty.

Moreover, quasi-static tests are also used to estimate the Young modulus E and to provide prior information on the yield stress, as shown in Fig. 9b. However, compressive quasi-static tests are less reliable for very small strains (\(\varepsilon \le 0.01\)), as contact conditions are not well controlled at the interfaces between the sample and the plates of the testing machine. The prior distribution of the yield stress is therefore a rough estimation associated with a rather large uncertainty.

Fig. 9: Quasi-static tests

Modeling Choices

In the following, a plasticity model is considered as a typical example for material parameter estimation. The model relates the stress to the material parameters (to be determined) and to explanatory variables such as temperature, strain-rate and strain:

$$\begin{aligned} \varPhi :\left( \varvec{\gamma },\dot{\varepsilon },T,\varepsilon \right) \mapsto \sigma \end{aligned}$$
(15)

Material parameters to be identified are gathered in a vector \(\varvec{\gamma }\in \varGamma\). A Steinberg–Cochran–Guinan (SCG) model is used in this contribution. This choice is consistent with the fact that the chosen aluminum alloy AA7075-O does not present a significant dependency on strain-rate, but more complex models could have been used instead, such as the Preston–Tonks–Wallace model. A classic Johnson–Cook (JC) model could also have been used, but this model includes a dependence on strain-rate that would have had to be discarded in order to avoid difficulties in estimating the model parameter associated with strain-rate. In addition, since the SCG and JC models share very similar mathematical structures, the results would have been comparable. Therefore it is more straightforward to use an SCG model. In general, the mathematical model should not only be able to reproduce the overall experimental behavior, but should also present dependencies that correspond to the sensitivity of the experimental data with respect to the tested quantities (e.g., temperature, strain, strain-rate). Indeed, if the model depends on quantities with respect to which the experimental data are not sufficiently sensitive, the corresponding coefficients in the model would be poorly estimated, with a very large uncertainty that could affect the uncertainty associated with the other coefficients. Thus, the model is chosen as a compromise between the real behavior and the actual tests that have been performed.

The SCG model gives the yield stress Y as a function of the hydrostatic pressure P, the relative volume variation \(1/\eta\), the temperature T and the equivalent plastic strain \(\varepsilon _{eq}\) as follows:

$$\begin{aligned} Y=Y_0\left( 1+\beta \varepsilon _{eq}\right) ^n\left( 1+\left( \frac{G^{\prime}_p}{G_0}\right) \frac{P}{\eta ^{1/3}}+\left( \frac{G^{\prime}_T}{G_0}\right) (T-T_{0})\right) \end{aligned}$$
(16)

with a saturation condition:

$$\begin{aligned} Y_0\left( 1+\beta \varepsilon _{eq}\right) ^n\le Y_{max} \end{aligned}$$
(17)

where \(T_{0}\) is the reference temperature, \(Y_0\) is the initial yield stress before hardening, \(Y_{max}\) is the saturation stress at ambient pressure and temperature, \(\beta\) and n are dimensionless coefficients, \(G_0\) is the reference shear modulus, \(G^{\prime}_p\) is dimensionless and associated with the dependence on pressure, and \(G^{\prime}_T\) has the dimension of a stress over a temperature and is associated with the dependence on temperature. By assuming that the stress tensor is of the form \(\sigma \varvec{e}_x\otimes \varvec{e}_x\) (where \(\varvec{e}_x\) is a unit vector aligned with the bar axis) and that the elastic strain is negligible, one obtains \(Y=\sigma\), \(P=\sigma /3\), \(\varepsilon _{eq}=\varepsilon\) and \(1/\eta ^{1/3}=1\).

One can neglect the term \((G^{\prime}_p/G_0)P\) in (16) since, for the proposed tests, \(P\approx 100\) MPa and the following estimations for a similar aluminum alloy 7075-O are found in the work of [37]: \(G^{\prime}_p=1.74\) and \(G_0=26700\) MPa, leading to \((G^{\prime}_p/G_0)P\approx 0.00652\ll 1\). Moreover, no saturation is noticeable in Figs. 6 and 9a, and the saturation stress is estimated by [37] at \(Y_{max}=810\) MPa, which is much higher than the maximum stress reached in this paper. Thus, it is impossible to determine \(Y_{max}\) on the basis of the proposed experiments. Although aluminum is considered both in [37] and in the present paper, the experimental testing methods are different. Thus, the uncertainties associated with the experimental test methods are different, and extracting the ratio \(G^{\prime}_p/G_0\) directly from [37] may be questionable. However, even considering this uncertainty, the estimated value of \((G^{\prime}_p/G_0)P\) is sufficiently close to zero to be neglected.

In addition, the proposed uni-axial compression tests do not enable us to estimate \(G_0\), which is consequently fixed at the value proposed by [37]. Thus, the identification of \(G^{\prime}_T\) is equivalent to the identification of the ratio \(G^{\prime}_T/G_0\). The estimation proposed by [37] is \(G^{\prime}_T=-16.45\) MPa K\(^{-1}\), and the ratio \(G^{\prime}_T/G_0\) is expected to be relatively small, which is consistent with the behavior dependence on temperature shown in Fig. 9a. The resulting model is then:

$$\begin{aligned} \varPhi \left( \varvec{\gamma },T(\varepsilon ),\varepsilon \right) =Y_0\left( 1+\beta \varepsilon \right) ^n\left( 1+\left( \frac{ G^{\prime} _T}{G_0}\right) (T-T_{0})\right) \end{aligned}$$
(18)

Thus, there are \(d=4\) material parameters to identify: \(\varvec{\gamma }=\left( Y_0,\beta ,n, G^{\prime}_T\right) \in \varGamma\), where \(Y_0\) is the initial yield stress before hardening given in MPa, \(\beta\) and n are dimensionless, and \(G^{\prime}_T\) is given in MPa K\(^{-1}\).
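The reduced model (18), together with the saturation condition (17), is straightforward to implement. The sketch below uses the values of \(G_0\), \(Y_{max}\) and \(T_0\) quoted in this section; the parameter ordering in `gamma` is an assumption.

```python
import numpy as np

def scg_stress(gamma, T, eps, G0=26700.0, Y_max=810.0, T0=293.0):
    """Reduced SCG model of Eq. (18) with the saturation condition (17).

    gamma = (Y0 [MPa], beta [-], n [-], G_T [MPa/K]); G0 and Y_max from [37].
    """
    Y0, beta, n, G_T = gamma
    hardening = np.minimum(Y0 * (1.0 + beta * eps) ** n, Y_max)   # Eq. (17)
    return hardening * (1.0 + (G_T / G0) * (T - T0))              # Eq. (18)
```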

The term \(T-T_{0}\) in (18) has to be estimated. Quasi-static tests have been performed at two different constant temperatures, 293 K and 400 K (see Table 2). On the contrary, all dynamic tests have been performed at room temperature, but plastic dissipation is responsible for self-heating. However, the temperature evolution has not been measured with a specific experimental apparatus. Thus, the temperature evolution in the dynamic regime is inferred from the following equation (discussed for instance by [29], with a similar formulation discussed by [28]), which assumes that the ratio of the thermal dissipation to the mechanical work is known (Taylor–Quinney coefficient):

$$\begin{aligned} \frac{\text {d}T}{\text {d}\varepsilon }=\frac{\beta _{TQ}}{ \rho _0c_p} \sigma (\varepsilon ) \end{aligned}$$
(19)

where \(\beta _{TQ}\) is the Taylor–Quinney coefficient, \(\rho _0\) is the specimen density as listed in Table 1 and \(c_p\) is the specific heat capacity at constant pressure. Integrating Eq. (19) gives:

$$\begin{aligned} T(\varepsilon )-T_{0}=\frac{\beta _{TQ}}{ \rho _0c_p} W(\varepsilon ) \end{aligned}$$
(20)

where the plastic work is:

$$\begin{aligned} W(\varepsilon )=\int _{0}^{\varepsilon }\sigma (\upsilon )\text {d}\upsilon \end{aligned}$$
(21)

For each loading condition j and each specimen k, N draws of the form \(\varvec{\sigma }_{j,k}^{(n)}\), \(\beta ^{(n)}_{TQ}\), \(\rho _0^{(n)}\) and \(c_p^{(n)}\) are simulated as detailed in section Statistical Analysis. As a result, there are N draws of the form: \(\varvec{T}_{j,k}^{(n)}=\left[ T^{(n)}_{j,k}(\varepsilon _1),\cdots ,T^{(n)}_{j,k}(\varepsilon _M)\right]\) with mean:

$$\begin{aligned} \overline{\varvec{T}}_{j,k}=\frac{1}{N}\sum _{n=1}^N\varvec{T}^{(n)}_{j,k} \end{aligned}$$
(22)

Thus, for each loading condition j, each specimen k and each material parameter \(\varvec{\gamma }\in \varGamma\), the model (18) can be presented as a vector of size M:

$$\begin{aligned} \overline{\varvec{\varPhi }}_{j,k}=\left[ \varPhi \left( \varvec{\gamma },\overline{T}_{j,k,1},\varepsilon _1\right) ,\ldots ,\varPhi \left( \varvec{\gamma },\overline{T}_{j,k,M},\varepsilon _M\right) \right] \end{aligned}$$
(23)

As already mentioned, for the studied material the ratio \(G^{\prime}_T/G_0\) is expected to be relatively small. Hence, it is not necessary to consider the uncertainty related to temperature. In the following, the mean temperature is considered as perfectly known for the dynamic experimental conditions, with the mean values \(\overline{\beta }_{TQ}=0.8\) and \(\overline{c}_p=876\) J kg\(^{-1}\) K\(^{-1}\).
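The temperature rise of Eqs. (20)–(21) then reduces to a cumulative integration of the stress–strain curve, for instance as follows; the specimen density value used here is an assumed placeholder, the actual value being listed in Table 1.

```python
import numpy as np

def self_heating(eps, sigma, T0=293.0, beta_TQ=0.8, rho0=2800.0, cp=876.0):
    """Adiabatic temperature rise of Eqs. (20)-(21).

    sigma in Pa, eps true strain; rho0 [kg/m^3] is an assumed placeholder for the
    measured specimen density; beta_TQ and cp are the mean values quoted above.
    """
    # cumulative trapezoidal integration of the stress over the strain (Eq. (21))
    W = np.concatenate(([0.0], np.cumsum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(eps))))
    return T0 + beta_TQ * W / (rho0 * cp)                         # Eq. (20)
```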

Standard Bayesian Estimation

In this section, the parameter \(G^{\prime}_T\), which is involved in the temperature dependence of the model (18), is determined by analyzing the tests in the quasi-static regime listed in Table 2. Indeed, the temperature variations occurring during the tests in the dynamic regime are not sufficient to accurately identify \(G^{\prime}_T\). The parameter \(G^{\prime}_T\) is identified separately from the other material parameters \(Y_0,\beta ,n\) by using only the tests in the quasi-static regime. Indeed, the tests in the quasi-static regime have been performed only to characterize the temperature dependence of the behavior, and the uncertainty associated with the corresponding quasi-static experimental setup has not been quantified as it was for the tests in the dynamic regime (see section Imperfect Knowledge of the Experimental Setup). Thus, for each test k of each condition j, the data \(\varvec{\sigma }_{j,k}\) obtained in the quasi-static regime (i.e., \(j=5\) to 8 in Table 2) are divided by the average stress data obtained at \(T_0=293\) K (i.e., \(j=5\) and 7), leading to dimensionless data denoted by \(\widehat{\varvec{\sigma }}_{j,k}=\left( \widehat{\sigma }_{j,k,1},\cdots ,\widehat{\sigma }_{j,k,M}\right)\). It is clear from the SCG model (18) that the reduced model fitting the dimensionless data \(\widehat{\varvec{\sigma }}_{j,k}\) reads:

$$\begin{aligned} \widehat{\varPhi }\left( G^{\prime} _T,T\right) =1+\left( \frac{ G^{\prime} _T}{G_0}\right) (T-T_{0}) \end{aligned}$$
(24)

A simple Bayesian analysis is performed to identify \(G^{\prime} _T\). A normal likelihood is assumed with mean \(\widehat{\varPhi }\) and unknown standard deviation \(\widehat{s}\). Thus, the likelihood reads:

$$\begin{aligned} \widehat{\sigma }_{j,k,m}| G^{\prime} _T,\widehat{s}\sim \mathcal {N}\left( \widehat{\varPhi }( G^{\prime} _T,T_j),\widehat{s}^2\right) \end{aligned}$$
(25)

where \(1\le m\le M\), and \(T_j=293\) K for \(j=5\) and \(j=7\) and \(T_j=400\) K for \(j=6\) and \(j=8\). Prior distributions for \(G^{\prime}_T\) and \(\widehat{s}\) are determined by exploiting the a priori information available from the literature and expert knowledge. In this paper, this consists in the estimation of the material parameters for another aluminum alloy proposed in [37]: \(G^{\prime}_T=-16.45\) MPa K\(^{-1}\). Since the alloy studied in [37] is not identical to the material studied in this contribution, a flat uniform distribution with rather large bounds is considered for \(G^{\prime}_T\). The prior distribution of \(\widehat{s}\) is chosen as the conjugate prior distribution for a normal model as detailed by [13], namely the scaled inverse chi-square distribution; hence:

$$\begin{aligned} \left\{ \begin{array}{l|c|c} \displaystyle { G^{\prime} _T\sim \mathcal {U}\left( G^{\prime} _{T,min}, G^{\prime} _{T,max}\right) }&{} G^{\prime} _{T,min}=-50\ \text {MPa K}^{-1}&{} G^{\prime} _{T,max}=-5\ \text {MPa K}^{-1}\\ \displaystyle {\widehat{s}^2\sim \text {Inv-}\chi ^2\left( \widehat{S}^2,\widehat{\nu }\right) }&{}\widehat{S}=0.03&{}\widehat{\nu }=10\\ \end{array} \right. \end{aligned}$$
(26)

where \(\mathcal {U}\) denotes a uniform distribution and \(\text {Inv-}\chi ^2\) the scaled inverse chi-square law. The posterior distribution reads:

$$\begin{aligned} p\left( G^{\prime} _T,\widehat{s}|\widehat{\varvec{\sigma }}_{j,k}\right) \varpropto p\left( \widehat{\varvec{\sigma }}_{j,k}| G^{\prime} _T,\widehat{s}\right) p( G^{\prime} _T)p(\widehat{s}) \end{aligned}$$
(27)

The statistics of the posterior probability density function (27) are explored by Markov-Chain Monte Carlo (MCMC) sampling techniques. In practice, the No-U-Turn Sampler (NUTS) developed by [16] is used within the framework of the PyMC3 package developed by [33] in Python [39]. NUTS is an extension of the Hamiltonian Monte Carlo algorithm, which avoids sensitivity to correlated parameters but whose performance depends on two parameters that need to be specified, namely the step size and the number of steps. On the contrary, NUTS achieves equivalent efficiency without requiring any hand-tuned parameter. Results are presented in Fig. 10 with means and credible intervals at 94%.

Fig. 10: Posterior densities
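As an illustration, the model of Eqs. (24)–(27) can be written in a few lines of PyMC3. The data arrays below are synthetic placeholders, and the scaled inverse chi-square prior is expressed through the standard inverse-gamma equivalence; this is a hedged sketch, not the authors' exact script.

```python
import numpy as np
import pymc3 as pm

G0, T0 = 26700.0, 293.0

# Synthetic placeholders standing in for the dimensionless quasi-static data of Eq. (25)
T_j = np.array([293.0, 293.0, 293.0, 400.0, 400.0, 400.0])
sigma_hat = np.array([1.00, 0.99, 1.01, 0.93, 0.94, 0.92])

with pm.Model() as model:
    G_T = pm.Uniform("G_T", lower=-50.0, upper=-5.0)               # prior of Eq. (26)
    # scaled Inv-chi2(S^2 = 0.03^2, nu = 10) written as InverseGamma(nu/2, nu*S^2/2)
    s2 = pm.InverseGamma("s2", alpha=5.0, beta=10.0 * 0.03**2 / 2.0)
    mu = 1.0 + (G_T / G0) * (T_j - T0)                             # reduced model, Eq. (24)
    pm.Normal("obs", mu=mu, sigma=pm.math.sqrt(s2), observed=sigma_hat)
    trace = pm.sample(2000, tune=1000)                             # NUTS by default
```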

Sensitivity Analysis

In this section, as is standard practice in model-based estimation, a sensitivity analysis of the SCG model (16) to parametric changes is provided. This analysis enables us to better interpret the results presented in section Results. There are numerous sensitivity analysis methods. In this study, a variance-based sensitivity analysis (i.e., the Sobol method) is used [32, 35]. Unlike local methods, which elucidate the model sensitivity to parametric changes around a specific value of the parameters, variance-based methods belong to the so-called global methods, as the entire parameter set is usually sampled to perform the analysis.

Within a probabilistic framework, variance-based sensitivity analysis decomposes the variance of the model outputs into proportions that can be attributed to variations of the model parameters. For each value of \(\varepsilon\), the yield stress Y given by (16) is a function of the parameters \(Y_0,\beta ,n, G^{\prime}_T\). Since the parameter \(G^{\prime}_T\) has already been identified, the sensitivity analysis is performed only on \(Y_0,\beta ,n\). Thus, the output variance due to the variation of \(Y_0,\beta ,n\) at a fixed \(\varepsilon\) can be explained by several contributions: (i) the first-order sensitivity index \(S_i\) (where \(1\le i\le 3\)), which is the main effect of each parameter \(\gamma _i\in \left\{ Y_0,\beta ,n\right\}\) varying alone, and (ii) the second-order sensitivity index \(S_{ij}\), which represents the interaction effect of varying the pair of parameters \((\gamma _i,\gamma _j)\) together. Of course, the decomposition can be pursued to higher orders, even though the first and second-order sensitivity indexes are the most common indicators. More precisely, the output variance \(\text {Var}(Y)\) can be decomposed into several contributions:

$$\begin{aligned} \text {Var}(Y)=\sum _{i=1}^3V_{i}+\sum _{i<j}^3V_{ij}+\cdots \end{aligned}$$
(28)

where \(\text {Var}(Y)\) is the variance of Y and:

$$\begin{aligned} \left\{ \begin{array}{l} V_i=\text {Var}_{\gamma _i}\left( E_{\varvec{\gamma }_{\sim i}}\left( Y|\gamma _i\right) \right) \\ V_{ij}=\text {Var}_{\gamma _i,\gamma _j}\left( E_{\varvec{\gamma }_{\sim ij}}\left( Y|\gamma _i,\gamma _j\right) \right) -V_i-V_j \end{array} \right. \end{aligned}$$
(29)

where \(1\le i\le 3\), \(\gamma _i\in \left\{ Y_0,\beta ,n\right\}\), \(\text {Var}_{\gamma },E_{\gamma }\) are respectively the variance and the expected value when \(\gamma\) is varying, and \(\varvec{\gamma }_{\sim i},\varvec{\gamma }_{\sim ij}\) denote the set of all variables except \(\gamma _i\) and \((\gamma _i,\gamma _j)\) respectively. The first and second-order sensitivity indexes \(S_i,S_{ij}\) (or Sobol indexes) represent the proportion of the output variance explained by the variation of the model parameters, therefore:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle {S_i=\frac{V_i}{\text {Var}(Y)}}\\ \displaystyle {S_{ij}=\frac{V_{ij}}{\text {Var}(Y)}} \end{array} \right. \end{aligned}$$
(30)

These sensitivity indexes are computed using the SALib package developed by [15] in Python, with the following intervals for the parameter variations: \((Y_0,\beta ,n)\in \left[ 60,160\right] \times \left[ 1000, 9000\right] \times \left[ 0.10,0.25\right]\); the analysis is done for \(\varepsilon =0.4\) (i.e., the highest plastic strain in the data). Results are listed in Table 3, and the same conclusions would be obtained for other values of \(\varepsilon\).

Table 3 Sensitivity indexes in percentage with a confidence level of 95% for \(\varepsilon =0.4\)
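In practice, these indexes are obtained with SALib as mentioned above. A minimal sketch follows, again assuming the simplified hardening form in place of the full SCG model (16) and an arbitrary base sample size; it is not the authors' script.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    'num_vars': 3,
    'names': ['Y0', 'beta', 'n'],
    'bounds': [[60., 160.], [1000., 9000.], [0.10, 0.25]],
}

X = saltelli.sample(problem, 1024)                # Saltelli design, N*(2*3+2) rows
Y = X[:, 0] * (1.0 + X[:, 1] * 0.4) ** X[:, 2]    # model evaluated at eps = 0.4

Si = sobol.analyze(problem, Y)                    # 95% bootstrap confidence by default
print(Si['S1'])                                   # first-order indexes S_i
print(Si['S2'])                                   # second-order indexes S_ij
```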

The SCG model (16) is less sensitive to \(\beta\) than to \(Y_0\) and n, even though the considered interval of variation for \(\beta\) is large. Difficulties in identifying \(\beta\) are therefore expected. In addition, there is very little sensitivity to simultaneous variations of pairs of parameters, especially for the interaction of \(Y_0\) and \(\beta\). Based on this sensitivity analysis, the posterior distribution of \(\beta\) is expected to spread over a large interval, which corresponds to a higher uncertainty. However, since the model remains slightly sensitive to \(\beta\), the experimental data are expected to update the prior distribution, though not to the same extent as for the other parameters \(Y_0\) and n. Thus, posterior distributions of \(\beta\) are expected to be similar for all tests in section Results, whereas posterior distributions of \(Y_0\) and n should be clearly distinct. This is because the differences between tests are not sufficiently pronounced compared with the higher uncertainty associated with \(\beta\).

Hierarchical Bayesian Estimation

Hyperprior Distribution

This section completes the identification of the model parameters involved in the SCG model (18). A hierarchical Bayesian estimation is proposed in order to exploit in detail the information provided by each test. Since the data obtained in the quasi-static regime (\(j=5\) to 8 in Table 2) have already been used to identify the parameter \(G^{\prime} _T\), the following analysis is based on one experimental condition in the dynamic regime (i.e., \(j=2\) in Table 2). Moreover, among the 33 tests performed at 1000 s\(^{-1}\), only \(K=20\) tests are analyzed; the 13 remaining tests are used for comparison with model predictions.

The prior distribution of \(G^{\prime} _T\) is set as the posterior distribution obtained in section Standard Bayesian Estimation. Since the considered tests (\(j=2\)) are performed at room temperature, the tests in the dynamic regime show very little sensitivity to \(G^{\prime} _T\) (i.e., self-heating is not sufficient). Therefore, the posterior distribution of \(G^{\prime} _T\) is extremely similar to its prior distribution. For the sake of simplicity, \(G^{\prime} _T\) is omitted in the following developments, as the Bayesian inference on the tests in the dynamic regime has no influence on this model parameter. There remain \(d=3\) material parameters to identify in this section, namely \(Y_0,\beta ,n\).

As already mentioned, material parameters physically depend on each test k (\(1\le k\le K\)) because of material variability. It is therefore legitimate to propose a hierarchical Bayesian analysis considering each test k as a group with specific material parameters \(\varvec{\gamma }_k=\left( Y_{0,k},\beta _k,n_k\right)\). In this approach, the tested specimens constitute a sample of \(K=20\) draws among all possible specimens. The material parameters \(\varvec{\gamma }_k\) are assumed to be independent samples from a common hyper random variable, parametrized by a hyperparameter vector \(\varvec{\varphi }\) to which a hyperprior distribution is associated. The prefix hyper highlights the fact that the sampling process of specimens takes place at a higher level than the rest of the Bayesian probabilistic approach. Thus, material variability is captured by the fact that each \(\varvec{\gamma }_k\) is conditionally dependent on \(\varvec{\varphi }\), as detailed by [13]. The approach proposed in section Standard Bayesian Estimation relies on a direct empirical estimation of material variability and repeatability of tests. By contrast, the hierarchical approach relies on a hyper random variable governing the dispersion of material parameters from one test to another.

On the basis of the results obtained in section Standard Bayesian Estimation, informative normal distributions are assumed for the hyperprior distributions related to \(Y_0\) and n. The hyper random variable is assumed to be normal with mean \(\varvec{\mu }_{\gamma }\) and covariance matrix \(\varvec{\varSigma }_{\gamma }\), so that the hyperparameter vector is \(\varvec{\varphi }=\left( \varvec{\mu }_{\gamma },\varvec{\varSigma }_{\gamma }\right)\) and:

$$\begin{aligned} \varvec{\gamma }|\varvec{\mu }_{\gamma },\varvec{\varSigma }_{\gamma }\sim \mathcal {N}_d\left( \varvec{\mu }_{\gamma },\varvec{\varSigma }_{\gamma }\right) \end{aligned}$$
(31)

where \(\mathcal {N}_d\) denotes a multivariate normal distribution of size \(d=3\). In addition, \(\varvec{\mu }_{\gamma }=\left( \mu _{Y_0},\mu _{\beta },\mu _n\right)\) and \(\varvec{\varSigma }_{\gamma }\) is a \(d\times d\) diagonal matrix with diagonal entries \(\left( \varSigma _{Y_0},\varSigma _{\beta },\varSigma _{n}\right)\). The conditional probability density function (31) accounts for material variability. In addition, since \(\left( \varvec{\mu }_{\gamma },\varvec{\varSigma }_{\gamma }\right)\) are unknown, an associated hyperprior distribution is needed; it is chosen as the classic conjugate prior distribution for a normal model, as detailed by [13]:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle {p(\varvec{\mu }_{\gamma })\varpropto \prod _{l=1}^d p(\mu _{l})}\\ \displaystyle {p(\varvec{\varSigma }_{\gamma })\varpropto \prod _{l=1}^d p(\varSigma _{l})} \end{array}\right. \end{aligned}$$
(32)

where

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle {\varSigma _{l}\sim \text {Inv-}\chi ^2\left( S^2_{l,0},\nu _{l,0}\right) }\\ \displaystyle {\mu _l\sim \mathcal {N}\left( \mu _{l,0},s_{l,0}^2\right) } \end{array} \right. \end{aligned}$$
(33)

where \(l\in \left\{ Y_0,\beta ,n\right\}\) and \(\text {Inv-}\chi ^2\left( S^2_{l,0},\nu _{l,0}\right)\) denotes the scaled inverse chi-square law; that is, the prior distribution of \(\varSigma _{l}\) is taken to be the distribution of \(S^2_{l,0}\nu _{l,0}/Z\) where \(Z\sim \chi ^2_{\nu _{l,0}}\). The fixed parameters that completely determine the hyperprior distribution are \(\left( S^2_{l,0},\nu _{l,0},\mu _{l,0},s_{l,0}^2\right)\) with \(l\in \left\{ Y_0,\beta ,n\right\}\). Normal prior distributions have been considered for the means \(\mu _l\), with conjugate prior distributions (i.e., the scaled inverse chi-square law) for the variances \(\varSigma _l\). This choice of normal prior distributions seems reasonable. Indeed, the hyperprior distributions characterize a priori information on how material variability is distributed in the aluminum plate from which the specimens have been extracted. Material variability is mainly due to heterogeneity of the microstructure and residual stresses, which are respectively related to the temperature distribution during the annealing process and to previous plastic deformations during forming processes. Thus, some specimens have higher or lower values than the rest of the specimens depending on their respective location in the plate. However, heat treatments and forming processes are usually performed so that material parameters are as homogeneous as possible. Therefore, most specimens likely share similar material parameters, which are a priori distributed around a mean value, leading to the choice of normal prior distributions.
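As an illustration, draws from the scaled inverse chi-square hyperprior in (33) can be generated directly from the construction described above; the values of \(S^2_{l,0}\) and \(\nu _{l,0}\) below are placeholders, not those of Table 4.

```python
import numpy as np

rng = np.random.default_rng(0)
S2_0, nu_0 = 10.0 ** 2, 4.0      # hypothetical scale and degrees of freedom

Z = rng.chisquare(nu_0, size=100_000)
Sigma_l = S2_0 * nu_0 / Z        # Sigma_l ~ Inv-chi^2(S2_0, nu_0)

# sanity check: the mean of a scaled inverse chi-square is nu*S2/(nu-2) for nu > 2
print(Sigma_l.mean(), S2_0 * nu_0 / (nu_0 - 2.0))
```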

One can define a global material parameter \(\varvec{\gamma }=\left( Y_0,\beta ,n\right)\) accounting for material variability and repeatability of tests (on the basis of the studied sample of specimens) whose prior probability density function is:

$$\begin{aligned}&p\left( \varvec{\gamma }\right) \varpropto \int _{\varvec{\mu }_{\gamma }}\int _{\varvec{\varSigma }_{\gamma }} \left( \prod _{k=1}^Kp\left( \varvec{\gamma }_k|\varvec{\mu }_{\gamma },\varvec{\varSigma }_{\gamma }\right) \right) p(\varvec{\mu }_{\gamma })\nonumber \\&\quad p(\varvec{\varSigma }_{\gamma })\text {d}\varvec{\mu }_{\gamma }\text {d}\varvec{\varSigma }_{\gamma } \end{aligned}$$
(34)

Multivariate Normal Model

For each test k, the observations are the mean stress as a function of strain \(\overline{\varvec{\sigma }}_k\) and the covariance matrix \(\varvec{V}^{\sigma }_k\) (determined by (10)). The covariance matrices \(\varvec{V}^{\sigma }_k\) only include random measurement errors and uncertainties due to the imperfect knowledge of the experimental setup, since material variability is taken into account through the hierarchical approach. The explanatory variables are the strain \(\varvec{\varepsilon }\) and the temperature \(\overline{\varvec{T}}_k\). The likelihood is given as a latent normal model with unknown mean and known covariance matrix \(\varvec{V}_0\), where

$$\begin{aligned} \varvec{V}_0=\frac{1}{K}\sum _{k=1}^K\varvec{V}_k^{\sigma } \end{aligned}$$
(35)

where \(\varvec{V}_k^{\sigma }\) is given by (10). The average covariance matrix is used instead of the covariance matrix of each test in order to reduce the amount of data to be processed during the Bayesian inference, and because the covariance matrices are very similar. Thus the likelihood reads:

$$\begin{aligned} \overline{\varvec{\sigma }}_k|\varvec{\gamma }_k\sim \mathcal {N}_M\left( \overline{\varvec{\varPhi }}_k,\varvec{V}_0\right) \end{aligned}$$
(36)

where \(\overline{\varvec{\varPhi }}_k\) is given by (23) and \(\mathcal {N}_M\) is the multivariate normal distribution of size M. It should be mentioned that the normal model (36) relies on the assumption that measurement noise and uncertainties due to the imperfect knowledge of the experimental setup are perfectly estimated by the statistical analysis proposed in section Statistical Analysis. The posterior density reads:

$$\begin{aligned} p\left( \varvec{\gamma }_k,\varvec{\mu }_{\gamma },\varvec{\varSigma }_{\gamma }|\overline{\varvec{\sigma }}_k\right) \varpropto p\left( \overline{\varvec{\sigma }}_k|\varvec{\gamma }_k\right) p\left( \varvec{\gamma }_k|\varvec{\mu }_{\gamma },\varvec{\varSigma }_{\gamma }\right) p(\varvec{\mu }_{\gamma })p(\varvec{\varSigma }_{\gamma }) \end{aligned}$$
(37)

where \(p\left( \varvec{\gamma }_k|\varvec{\mu }_{\gamma },\varvec{\varSigma }_{\gamma }\right)\) is given by (31). Marginal posterior distributions are also computed:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle {p\left( \varvec{\gamma }_k|\overline{\varvec{\sigma }}_k\right) \varpropto \int _{\varvec{\mu }_{\gamma }}\int _{\varvec{\varSigma }_{\gamma }} p\left( \varvec{\gamma }_k,\varvec{\mu }_{\gamma },\varvec{\varSigma }_{\gamma }|\overline{\varvec{\sigma }}_k\right) \text {d}\varvec{\mu }_{\gamma }\text {d}\varvec{\varSigma }_{\gamma }}\\ \displaystyle {p\left( \varvec{\mu }_{\gamma }|\overline{\varvec{\sigma }}_1,\cdots ,\overline{\varvec{\sigma }}_K\right) \varpropto \prod _{k=1}^K\int _{\varvec{\varSigma }_{\gamma }}\int _{\varvec{\gamma }_k} p\left( \varvec{\gamma }_k,\varvec{\mu }_{\gamma },\varvec{\varSigma }_{\gamma }|\overline{\varvec{\sigma }}_k\right) \text {d}\varvec{\varSigma }_{\gamma }\text {d}\varvec{\gamma }_{k}}\\ \displaystyle {p\left( \varvec{\varSigma }_{\gamma }|\overline{\varvec{\sigma }}_1,\cdots ,\overline{\varvec{\sigma }}_K\right) \varpropto \prod _{k=1}^K\int _{\varvec{\mu }_{\gamma }}\int _{\varvec{\gamma }_k} p\left( \varvec{\gamma }_k,\varvec{\mu }_{\gamma },\varvec{\varSigma }_{\gamma }|\overline{\varvec{\sigma }}_k\right) \text {d}\varvec{\mu }_{\gamma }\text {d}\varvec{\gamma }_{k}}\\ \end{array}\right. \end{aligned}$$
(38)

Statistics of the posterior probability density functions (38) are explored by Markov chain Monte Carlo (MCMC) sampling techniques. In practice, the No-U-Turn Sampler (NUTS) developed by [16] is used within the framework of the PyMC3 package developed by [33] in Python ([39]). Finally, the posterior density of the global material parameters \(\varvec{\gamma }\) reads:

$$\begin{aligned}&p\left( \varvec{\gamma }|\overline{\varvec{\sigma }}_1,\ldots ,\overline{\varvec{\sigma }}_K\right) \varpropto \int _{\varvec{\mu }_{\gamma }}\int _{\varvec{\varSigma }_{\gamma }}\prod _{k=1}^K p\left( \varvec{\gamma }_k|\varvec{\mu }_{\gamma },\varvec{\varSigma }_{\gamma }\right) \nonumber \\&\quad p\left( \varvec{\mu }_{\gamma }|\overline{\varvec{\sigma }}_1,\cdots ,\overline{\varvec{\sigma }}_K\right) p\left( \varvec{\varSigma }_{\gamma }|\overline{\varvec{\sigma }}_1,\ldots ,\overline{\varvec{\sigma }}_K\right) \text {d}\varvec{\mu }_{\gamma }\text {d}\varvec{\varSigma }_{\gamma } \end{aligned}$$
(39)
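A condensed PyMC3 sketch of this hierarchical model is given below. It assumes the simplified hardening law used in the sensitivity section as a stand-in for \(\overline{\varvec{\varPhi }}_k\) in (23), writes the scaled inverse chi-square hyperpriors of (33) as the equivalent inverse-gamma distributions, and uses placeholder data and hyperprior values; it is a sketch under these assumptions, not the authors' implementation.

```python
import numpy as np
import pymc3 as pm

K, M = 20, 40                                  # tests, points per stress-strain curve
eps = np.linspace(0.05, 0.38, M)               # strain grid within the equilibrium range

# Placeholder data: synthetic mean stresses and average covariance matrix (35)
rng = np.random.default_rng(0)
sigma_obs = 110. * (1. + 5000. * eps) ** 0.17 + rng.normal(0., 2., (K, M))
V0 = 4.0 * np.eye(M)

with pm.Model() as hier_model:
    # Hyperpriors (32)-(33): normal means, scaled inverse chi-square variances,
    # using Inv-chi2(S2, nu) == InverseGamma(alpha=nu/2, beta=nu*S2/2); values illustrative
    mu_Y0 = pm.Normal('mu_Y0', mu=110., sigma=20.)
    mu_beta = pm.Normal('mu_beta', mu=5000., sigma=2500.)
    mu_n = pm.Normal('mu_n', mu=0.17, sigma=0.05)
    S_Y0 = pm.InverseGamma('S_Y0', alpha=2., beta=2. * 10. ** 2)
    S_beta = pm.InverseGamma('S_beta', alpha=2., beta=2. * 1000. ** 2)
    S_n = pm.InverseGamma('S_n', alpha=2., beta=2. * 0.02 ** 2)

    # Per-test material parameters gamma_k | mu, Sigma, Eq. (31)
    Y0 = pm.Normal('Y0', mu=mu_Y0, sigma=pm.math.sqrt(S_Y0), shape=K)
    beta = pm.Normal('beta', mu=mu_beta, sigma=pm.math.sqrt(S_beta), shape=K)
    n = pm.Normal('n', mu=mu_n, sigma=pm.math.sqrt(S_n), shape=K)

    # Simplified model prediction standing in for Phi_k of Eq. (23)
    Phi = Y0[:, None] * (1. + beta[:, None] * eps[None, :]) ** n[:, None]

    # Likelihood (36): multivariate normal with known covariance V0
    pm.MvNormal('sigma_bar', mu=Phi, cov=V0, observed=sigma_obs)

    trace = pm.sample(2000, tune=2000)         # NUTS is selected by default
```

The marginal posteriors (38) then correspond to the per-parameter traces (e.g., `trace['Y0']`, `trace['mu_Y0']`, `trace['S_Y0']`).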

It should be noted that the hierarchical approach relies on hyperprior distributions instead of prior distributions. Therefore, the hyperprior parameters listed in Table 4 cannot be directly reduced to prior distributions on \(Y_0,\beta ,n\). The parameters \(\mu _{l,0}\) control the average values of the material parameters and the parameters \(s_{l,0}\) control the uncertainty on the estimation of \(\mu _{l,0}\). The parameters \(S_{l,0}\) control the range of possible values of the material parameters and the parameters \(\nu _{l,0}\) control the uncertainty on \(S_{l,0}\). The parameters \(\left( S_{l,0},\nu _{l,0},\mu _{l,0},s_{l,0}\right)\) with \(l\in \left\{ Y_0,\beta ,n\right\}\) are listed in Table 4 and have been set using prior information. For instance, the quasi-static tests enable a rough estimate of \(Y_0\) by determining an inflexion point on the stress–strain curve at small strains (see Fig. 9). In addition, n is roughly estimated from the stress–strain curves. However, there is no a priori information on \(\beta\); therefore, a significant uncertainty is associated with the corresponding hyperprior distribution.

Table 4 Parameters for hyperprior distribution

Results

Among the \(K_2=33\) tests performed at 1000 s\(^{-1}\), only \(K=20\) tests are analyzed to sample the posterior marginal distributions; the 13 remaining tests are used for comparison with model predictions. Marginal posterior distributions of \(Y_0,\beta ,n, G^{\prime} _T\) are presented for all tests in Fig. 11. As already mentioned, the posterior distribution of \(G^{\prime} _T\) is extremely similar to its prior distribution, which was identified by standard Bayesian inference using data in the quasi-static regime (see section Standard Bayesian Estimation). This is due to the very little sensitivity of the tests in the dynamic regime with respect to \(G^{\prime} _T\). These results clearly show the uncertainty of each test on the one hand, and the dispersion of distributions due to material variability and repeatability on the other hand. For the \(\beta\) parameter, all tests have almost the same posterior distribution. This behavior was expected from the sensitivity analysis proposed in section Sensitivity Analysis. Indeed, it has been shown that the SCG model (18) is mainly sensitive to variations of \(Y_0\) and n, whereas \(\beta\) explains only 5.4% of the variance. Thus, differences between tests are hidden by the large uncertainty associated with \(\beta\). Global marginal posterior distributions are inferred from (39) and are presented in Fig. 12. A comparison of mean and standard deviation between prior and posterior distributions is given in Table 5 in order to summarize how the experimental data update the prior information.

Table 5 Comparison between prior and posterior distributions

Scatter plots of marginal and pairwise joint densities are presented in Fig. 13 for test \(k=1\) (similar results are obtained for the other tests). Probability density functions appear on the diagonal and the scatter plots show the draws (produced by MCMC sampling) as a function of parameter pairs. A significant correlation between \(Y_0\) and \(\beta\) is observed. This correlation is due to the fact that the prior distribution of \(\beta\) spreads over a very wide range while the model is only slightly sensitive to \(\beta\), as shown in section Sensitivity Analysis. Thus, large relative variations of \(\beta\) can be compensated by rather small relative variations of \(Y_0\). Therefore, the significant uncertainty associated with \(\beta\) has a negative influence on the posterior uncertainty associated with \(Y_0\). Of course, using a more informative prior distribution with less dispersion for \(\beta\) would significantly reduce this correlation; however, there is no a priori information that would justify such a choice. Another option to reduce the uncertainty associated with \(\beta\), and therefore the correlation between \(\beta\) and \(Y_0\), would be to analyze the tests without the equilibrium assumption. Indeed, as shown in Fig. 8, the equilibrium assumption restricts the analysis to \(\varepsilon >0.05\). Since the model is almost only sensitive to \(Y_0\) at low values of \(\varepsilon\), using the data for \(\varepsilon <0.05\) would enable \(Y_0\) to be estimated almost independently of \(\beta\). However, releasing the equilibrium assumption is not straightforward and would require complex processing of the experimental signals.

In addition, no significant correlation is observed between n and the other material parameters in Fig. 13. This is due to the fact that n controls the overall “curvature” of the stress–strain curve. Even though the experimental data have been considered for \(\varepsilon >0.05\), the range of strain variation is sufficient to identify n almost independently of the other material parameters. Indeed, for n values outside the range presented in Fig. 12, it is possible to adjust \(Y_0\) and \(\beta\) so that a part of the corresponding stress–strain curve fits the experimental data, but not over the entire range \(0.05\le \varepsilon \le 0.38\).

Fig. 11 Material parameter posterior distributions

Fig. 12 Marginal posterior densities for global parameters

Fig. 13 Scatter plots of marginal and pairwise joint densities

Maximum a posteriori (MAP) estimates are computed and listed in Table 6 in order to compute the calibrated model. The overall model uncertainty is directly computed as the interval defined by the 2.5% and 97.5% quantiles obtained from the draws of the model \(\varPhi\), which are generated at the same time as the posterior distributions of the material parameters \(Y_0,\beta ,n, G^{\prime} _T\). Good agreement is observed, as shown for instance in Fig. 14 for different tests (\(k=1,5,10,15\)). The model uncertainty has been computed for \(\varepsilon \in \left[ 0.05,0.38\right]\), as for the experimental data, but the MAP estimates have also been used to compute the model for \(\varepsilon <0.05\) in order to show how it behaves at small deformations. In addition, the mean of the individual MAP estimates gives a global material parameter estimate that enables global model predictions to be computed. Posterior predictive sampling is also used to simulate future experimental tests on the basis of the calibrated model, accounting for experimental uncertainties. A comparison with the 13 remaining tests at 1000 s\(^{-1}\) (not used for the identification) is proposed. Good agreement is observed, as shown in Fig. 15; that is to say, the global average MAP correctly predicts the behavior of future tests.

Table 6 Maximum a posteriori estimates
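The post-processing described above can be reproduced from the MCMC draws along the following lines; `phi_draws` and the trace values are placeholders for the sampled quantities, and the kernel-density MAP estimate is only one possible choice.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Pointwise 95% interval of the model from its posterior draws (2.5% / 97.5% quantiles)
phi_draws = np.random.rand(4000, 40) * 50. + 300.    # placeholder draws of Phi (MPa)
band_lo, band_hi = np.percentile(phi_draws, [2.5, 97.5], axis=0)

# MAP estimate of a scalar parameter from its posterior draws via a kernel density
Y0_draws = np.random.normal(110., 5., 4000)          # placeholder posterior draws of Y0
kde = gaussian_kde(Y0_draws)
grid = np.linspace(Y0_draws.min(), Y0_draws.max(), 512)
Y0_map = grid[np.argmax(kde(grid))]

# Posterior predictive checks of future tests can be drawn with
# pm.sample_posterior_predictive(trace, model=hier_model) in recent PyMC3 versions.
```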
Fig. 14 Experimental data of each test and calibrated models for \(k=1,5,10,15\)

Fig. 15 Remaining experimental data, global model and predictive checks

Conclusion

This paper is an attempt to quantify uncertainties within the context of dynamic tests relying on a split Hopkinson pressure bar system. A classic one-dimensional wave propagation model is used to transform strain gauge measurements into force and displacement at both ends of the specimen. The approach requires determining the uncertainties due to the imperfect knowledge of the experimental setup. Each measured parameter is modeled as a random variable. Then, a simple statistical analysis simulates draws of stress and strain-rate as a function of strain in order to determine this uncertainty. Addressing such uncertainties is good experimental practice insofar as it leads to regular and careful measurement of the components of the experimental setup with suitable measurement devices. An experimental campaign has been performed on the aluminum alloy AA7075-O in order to estimate material variability and repeatability of tests. Several tests have been performed for each experimental condition. For each condition, the mean stress as a function of strain has been determined, as well as the overall uncertainty (accounting for random measurement errors, imperfect knowledge of the experimental setup, material variability and repeatability of tests). A simple Steinberg–Cochran–Guinan (SCG) behavior model has been calibrated because the studied material does not present a significant dependence on strain rate. Bayesian estimation has been performed to identify the material parameters. Results are given as posterior probability density functions, and the resulting overall uncertainty on the material parameters is therefore clearly quantified. The fitted model agrees well with the measurements and the model uncertainties are reasonable, even though it has been shown that the SCG model has very little sensitivity to one parameter, leading to a significant uncertainty on this parameter. Thus, alternative models for which the sensitivity is similar for all parameters would reduce the overall uncertainty exhibited in this study.

The systematic quantification of uncertainties in dynamic tests opens interesting perspectives for analyzing the response of structures and materials to impact, as calibrated models are generally extrapolated to conditions that have not been tested experimentally. Of course, this extrapolation should be limited to conditions involving the same physical phenomena as those actually tested. In addition, the probabilistic framework considered in this paper makes it straightforward to introduce uncertainties in the definition of design criteria that accommodate high-rate loading.