Introduction

In the last decade, optical full-field measurements like Digital Image Correlation (DIC, [1]) or the Grid Method (GM, [2]) have profoundly transformed the way researchers and engineers perform mechanical experiments on materials and structures. These measurement technologies have matured after an intensive international research effort, as reflected by the recently published guide of good practice for DIC [3]. Such measurements make it possible to move from simple statically determinate tests to more complex ones, with a view to extracting more data from a single test, thus reducing the testing effort and increasing the quality of the material models. In this case, however, inverse methods have to be employed to go from heterogeneous kinematic fields to material parameters (and/or unknown loading distributions). This has also been the object of extensive research, with two main techniques having emerged: Finite Element Model Updating (FEMU, [4]) and the Virtual Fields Method (VFM, [5]).

However, there is a third element to this new paradigm in materials testing: the test configuration itself. Most of our mechanical testing toolkit was developed at a time when only strain gauges or other point sensors were available. As a consequence, current test configurations are generally strongly inspired by traditional tests developed for a different type of metrological information.

There has been some research effort to look at the problem of designing heterogeneous tests that would be better adapted to full-field measurements and inverse identification for a given type of constitutive model, and this is the object of this short review. However, this is still an underexplored area of research that requires sustained effort to realise the full potential of this new paradigm in materials testing.

The Early Attempts

In the late 1980s and early 1990s, two pioneering research groups started to work on new testing procedures based on full-field measurements and inverse identification. The first one was at the School of Mines in St-Etienne, France, with the PhD of Michel Grédiac under the supervision of Alain Vautrin. They combined a deflectometry set-up with the Virtual Fields Method, first proposed by M. Grédiac in his PhD, to identify the elastic anisotropic stiffness components of thin composite plates [6, 7]. The test configuration was a series of three bending tests performed on the same plate while moving the loading point, creating a database with enough information to identify the six anisotropic bending stiffness components. The second one was the group of Cees Oomens at TU Eindhoven, with a motivation to test biological membranes like skin [8], and later, metals [9]. In the latter, a double asymmetrical notch configuration was selected to create heterogeneity, though little justification was provided for this choice.

Research in this area expanded in the 1990s and 2000s, for anisotropic elasticity [10,11,12,13], elasto-plasticity [14,15,16,17,18], hyperelasticity [19,20,21,22,23], high strain rate behaviour [24,25,26,27,28,29,30] and heterogeneous materials [31,32,33,34,35], to cite but a few references.

Test Design and Optimization

Designs Based on the Strain State

Attempts to optimize test configurations were initially scarce. To the best knowledge of the present author, the first paper on this was published in 1998 [36]. To identify the full set of in-plane elastic parameters for an orthotropic composite material, a tension/bending test on a unidirectional composite specimen in the shape of a ‘T’ was devised (Fig. 16.1). The ends of the horizontal bar of the ‘T’ rest on rollers while the vertical bar is gripped and pulled down. The vertical bar mainly activates the vertical modulus, while the horizontal bar sees bending and shear, activating the horizontal and shear moduli. The aspect ratio of the specimen was optimized using the native ANSYS APDL language so that the average horizontal strain in area 1, the average vertical strain in area 2 and the average shear strain in area 3 were identical. This was then validated experimentally in [10]. The results were rather good except for the major Poisson’s ratio, but this is likely to come from the manually-defined virtual fields used in that study.

Fig. 16.1 T-shaped specimen, inspired from [36]
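
As an illustration of this balancing criterion, the sketch below (in Python rather than the ANSYS APDL scripts used in [36]) minimizes the spread of the three average strains over the aspect ratio. The function run_fe_model is a hypothetical stand-in for the finite element solution and returns arbitrary values purely to make the sketch executable.

import numpy as np
from scipy.optimize import minimize_scalar

def run_fe_model(aspect_ratio):
    # Hypothetical stand-in for the finite element solution of the T-shaped specimen:
    # returns the average strains in the three areas of Fig. 16.1 (arbitrary smooth
    # functions of the aspect ratio, only so that the sketch runs).
    r = aspect_ratio
    return np.array([1.0 / r, 0.8 * r, 0.4 + 0.2 * r])

def imbalance(aspect_ratio):
    # Cost: relative spread of the three average strains; zero when they are identical.
    eps = np.abs(run_fe_model(aspect_ratio))
    return (eps.max() - eps.min()) / eps.mean()

res = minimize_scalar(imbalance, bounds=(0.5, 5.0), method='bounded')
print('balanced aspect ratio (dummy model):', res.x)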

In [37], a number of indicators based on strain metrics were developed for elasto-plastic identification. These indicators were then used in a shape optimization problem by the same group [38]. Starting from a circular shape parameterized by cubic splines, the control points of the splines were adjusted with respect to a cost function that combined the indicators from [37]. The result was a ‘butterfly’-shaped specimen, and an experimental validation was published in [39]. Another interesting shape was devised using similar criteria but including the strain rate, for visco-plasticity identification [40].
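
The exact indicators of [37] are not reproduced here, but the general idea of scoring a candidate geometry by the diversity of strain states it generates can be illustrated with a simple, hypothetical indicator based on the spread of the principal strain ratio (a sketch only, not the cost function actually used in [37, 38]).

import numpy as np

def strain_state_diversity(eps_xx, eps_yy, eps_xy):
    # eps_*: flattened arrays of the in-plane strain components over the specimen surface
    # (e.g. extracted from a finite element solution of a candidate shape).
    # Principal strains of the 2D strain tensor at each point.
    mean = 0.5 * (eps_xx + eps_yy)
    radius = np.sqrt((0.5 * (eps_xx - eps_yy))**2 + eps_xy**2)
    e1, e2 = mean + radius, mean - radius
    # Strain-state ratio: -1 for pure shear, about -nu for uniaxial tension, +1 for equibiaxial.
    ratio = e2 / np.where(np.abs(e1) > 1e-12, e1, 1e-12)
    # Heterogeneity score: spread of the strain states over the specimen.
    return np.std(ratio)

# Dummy usage with random fields standing in for finite element output.
rng = np.random.default_rng(0)
eps = [1e-3 * rng.normal(size=1000) for _ in range(3)]
print(strain_state_diversity(*eps))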

Designs Based on Identification Quality

Strain metrics, while easy to compute, are several steps away from the actual identification results and may therefore lead to non-optimal configurations in practice. A step forward consists in using the identification uncertainty as the cost function to optimize test configurations. To the best knowledge of the present author, the first attempt was published in 2007 [41]. A rectangular unidirectional composite test specimen was loaded in a combination of bending and shear using an Iosipescu fixture.

The specimen free length (between supports) and off-axis fibre angle were used as design variables, and a cost function was established aimed at balancing the four sensitivity-to-noise coefficients from the noise-optimized VFM [43], one for each orthotropic stiffness component. It should be noted that these coefficients decrease as the signal-to-noise ratio increases and, therefore, a normalization procedure had to be established to limit the allowable deformation in the purely elastic simulations. This is an essential step that has to be controlled carefully as it can significantly affect the end result. The other issue is that the free length had to be constrained to a maximum value, as the algorithm had a tendency to converge towards much longer specimens that would have been physically impractical because of the loss of spatial resolution. This was one of the motivations for moving to the full identification chain simulation reported in the next subsection. The problem was solved by driving ANSYS® through Matlab using a text input file for the FE model. An optimal configuration was found and successfully validated experimentally using ESPI [41]. Compared to the standard configuration used in [44] for instance, the improvement was significant. A very similar study was conducted using an Arcan test configuration and DIC to identify the orthotropic stiffness components of PVC foams [45].
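
To fix ideas, one possible way of turning this balancing requirement into a scalar cost is sketched below; the worst-case normalized coefficient used here is an assumption of this illustration, not necessarily the exact function minimized in [41].

import numpy as np

def vfm_noise_cost(eta, Q):
    # eta[i]: sensitivity-to-noise coefficient returned by the noise-optimized VFM for
    # stiffness Q[i] (the standard deviation of the identified Q[i] is approximately
    # eta[i] times the strain noise standard deviation).
    # Aggregation assumed here: worst-case coefficient normalized by its stiffness, so that
    # no single component is left poorly conditioned with respect to camera noise.
    return np.max(np.asarray(eta) / np.abs(np.asarray(Q)))

# Arbitrary dummy values standing in for one candidate (free length, fibre angle) design.
print(vfm_noise_cost(eta=[120.0, 45.0, 300.0, 80.0], Q=[40e3, 10e3, 4e3, 4e3]))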

The same approach was used by Michel Grédiac’s group [42] to optimize the shape of thin plates in bending, in order to identify the full set of orthotropic bending stiffness components of a unidirectional composite. The only difference is that the design space was much wider: shape, support and loading points, and fibre angle. The shape was parameterized using splines, as in [38]. The final result for the shape and loading points is non-intuitive (Fig. 16.2) but, again, it may not be practical because of potential strain concentrations not well reconstructed by the measurement technique, and the loss of pixels as the shape moves further away from the camera sensor aspect ratio. No experimental validation was attempted on this shape.
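
A minimal sketch of such a spline-based shape parameterization is given below, assuming the boundary is described by control radii around the plate centre interpolated by a periodic cubic spline; this is only an illustrative parameterization, not the exact one used in [38, 42].

import numpy as np
from scipy.interpolate import CubicSpline

def boundary_from_control_radii(radii, n_points=200):
    # radii: design variables giving the distance from the plate centre to the boundary
    # at equally spaced angles; a periodic cubic spline interpolates between them so that
    # the optimizer can reshape a smooth, closed contour by moving a handful of values.
    theta_ctrl = np.linspace(0.0, 2.0 * np.pi, len(radii) + 1)
    r_ctrl = np.append(radii, radii[0])            # close the curve
    spline = CubicSpline(theta_ctrl, r_ctrl, bc_type='periodic')
    theta = np.linspace(0.0, 2.0 * np.pi, n_points)
    r = spline(theta)
    return r * np.cos(theta), r * np.sin(theta)    # boundary coordinates

# Example: eight control radii (arbitrary values, in mm) defining one candidate shape.
x, y = boundary_from_control_radii(np.array([40.0, 55.0, 35.0, 60.0, 45.0, 50.0, 38.0, 52.0]))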

Fig. 16.2 Optimized bending test, from [42]. Red triangles indicate supports, arrow indicates applied force

Designs Based on Full Identification Chain Simulation

The main limitation of using strains taken directly from finite element simulations to optimize test configurations, even when the cost function is based on identification quality, is that experimental constraints are not taken into account. All experimentalists familiar with DIC or the GM know that spatial resolution and camera noise play an important role in the quality of the results. Moreover, many processing and post-processing parameters have to be selected (DIC subset, step size, shape functions and strain window, to cite a few in the case of DIC) and this choice does impact the quality of the identification. For instance, a particular test configuration may provide excellent heterogeneity, but if the strain gradients are too large to be faithfully rendered by a given imaging set-up, this will lead to a large systematic error that the procedures described above will miss.
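
This effect can be illustrated with a one-dimensional toy example: a sharp strain peak filtered by a moving average, standing in for the smoothing inherent to DIC or GM processing, is increasingly underestimated as the window grows. This is a schematic illustration only, not a model of any particular DIC implementation.

import numpy as np

x = np.linspace(0.0, 10.0, 1001)                                    # position (mm), 0.01 mm pitch
true_strain = 1e-3 * (1.0 + 4.0 * np.exp(-((x - 5.0) / 0.2) ** 2))  # sharp local strain peak

def smoothed(strain, window_pts):
    # Moving average standing in for the spatial smoothing of the measurement chain.
    kernel = np.ones(window_pts) / window_pts
    return np.convolve(strain, kernel, mode='same')

for window in (11, 51, 101):
    bias = smoothed(true_strain, window).max() - true_strain.max()
    print(f'window of {window:3d} points: peak strain bias = {bias:+.2e}')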

To circumvent this issue, a programme was developed to simulate the full measurement and identification chain, so that complete uncertainty and error propagation could be studied. This was developed for the VFM, first with the Grid Method [46] and later extended to DIC [47]. The initial motivation was to evaluate random and systematic errors with a view to providing accurate uncertainty bounds, an essential point if such tests are to be used as standard tools in industry. The work in [45] on PVC foams was refined by optimizing the configuration based on the total identification error, and a different optimal configuration was found [48]. Since the DIC parameters also affect the results, they were optimized in a second step, after a first configuration had been found with pre-set DIC parameters. Finally, not only were the four orthotropic stiffness components identified successfully from a single test, but the standard deviations were predicted to within a factor of 2. This was considered very satisfactory as a starting point, since many experimental issues such as variations in lighting (amplified by reflective hotspots), machine vibrations, specimen misalignment and geometrical defects were not taken into account. A similar study was performed on three different test configurations to identify the orthotropic stiffness components of composites with a single test. Each configuration was optimized for geometry and fibre angle, and then for DIC parameters. Interestingly, all cases converged to quadratic shape functions and a step of 1 pixel [49].
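
The logic of such a simulator can be caricatured in a few lines: a reference field is corrupted by camera-like noise, smoothed to mimic DIC/GM processing, fed to an identification routine, and the loop is repeated to separate the systematic error (mean bias) from the random error (standard deviation). The sketch below replaces the actual image deformation, DIC and VFM steps of [46, 47] with trivial stand-ins and is only meant to show how the error split is obtained.

import numpy as np

rng = np.random.default_rng(1)
E_ref = 70e3                                    # reference modulus (MPa), arbitrary value
true_strain = np.full(2000, 1e-3)               # reference strain field
stress = E_ref * true_strain                    # 'exact' stress, assumed known from the load

def measurement_chain(strain, noise_std=1e-4, window=11):
    noisy = strain + rng.normal(0.0, noise_std, strain.size)   # camera noise stand-in
    kernel = np.ones(window) / window
    return np.convolve(noisy, kernel, mode='same')             # DIC/GM smoothing stand-in

def identify(strain_meas):
    # Least-squares stand-in for the identification step: E minimizing ||stress - E*strain||^2.
    return float(np.dot(stress, strain_meas) / np.dot(strain_meas, strain_meas))

E_id = np.array([identify(measurement_chain(true_strain)) for _ in range(500)])
print('systematic error:', E_id.mean() - E_ref)
print('random error (std):', E_id.std())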

Clearly, the ultimate stage is to combine geometry and measurement parameters simultaneously into a single optimization problem. This was attempted on the T-shaped specimen problem from [36]. Eight design variables were used: three geometrical parameters (b and c as defined in Fig. 16.1, plus the radius of a fillet joining the two bars of the ‘T’; the total height and width were fixed to match the camera sensor aspect ratio), the orthotropy angle, and the DIC subset, step size, shape functions (linear or quadratic) and strain window [50]. A genetic algorithm was used to solve the problem. The cost function was the total error (systematic error ± twice the standard deviation, whichever was further away from the reference). This converged to the configuration shown in Fig. 16.3. It is very interesting to notice how the optimization always tries to minimize the number of lost pixels, which strongly affects the systematic error. It is also worth noting that, again, quadratic shape functions and a step of 1 pixel were obtained at convergence. No experimental validation of this work has been attempted yet, but experimental difficulties may arise from the very thick vertical bar, which may lead to instabilities. This suggests that a buckling load constraint may have to be added to the optimization problem.
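
A minimal sketch of this cost function is given below; the relative normalization and the aggregation over the four stiffness components (a simple maximum here) are assumptions of this illustration rather than the exact choices of [50].

import numpy as np

def total_error(Q_samples, Q_ref):
    # Q_samples: Monte Carlo values of one identified stiffness component obtained from the
    # simulated measurement and identification chain; Q_ref: its reference value.
    # 'Systematic error +/- twice the standard deviation, whichever is further from the
    # reference', expressed relative to the reference.
    bias = Q_samples.mean() - Q_ref
    spread = 2.0 * Q_samples.std()
    return max(abs(bias + spread), abs(bias - spread)) / abs(Q_ref)

def cost(samples_per_component, refs):
    # Aggregation over the four stiffness components (a simple maximum, assumed here)
    # to be minimized by the genetic algorithm over geometry and DIC parameters.
    return max(total_error(s, r) for s, r in zip(samples_per_component, refs))

# Dummy usage with synthetic identification results for four stiffness components.
rng = np.random.default_rng(2)
refs = (40e3, 10e3, 4e3, 4e3)
samples = [ref * (1.0 + rng.normal(0.002, 0.01, 200)) for ref in refs]
print(cost(samples, refs))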

Fig. 16.3 Optimal (left) and non-optimal (right) configurations for the T-shaped specimen [50]. Strain transverse to the fibres is represented. For the optimal configuration: fibre direction 27° from the horizontal axis, subset 36, step 1, quadratic shape functions and strain window of 8

Conclusion and Future Work

This short review has shown that although some research effort has been dedicated to the design and/or optimization of new heterogeneous tests for inverse identification from full-field measurements, there is still a long way to go. Most of the examples are on anisotropic elasticity, with a few on anisotropic plasticity and/or visco-plasticity. Also, the vast majority of these efforts have arisen from the VFM community. This is not a surprise considering the large difference in computational efficiency between the VFM and FEMU (the VFM was quoted to be 125 times faster than FEMU in [23]). Since test optimization relies on solving a large number of successive identification problems, using FEMU for this purpose quickly becomes computationally prohibitive.

The next stage is now to extend the optimization based on the full measurement and identification simulator to anisotropic plasticity, to take the interesting work by Andrade-Campos and Thuillier further. The recent development of sensitivity-based virtual fields [51, 52] provides the perfect tool for this. Extension to hyperelastic models would also be interesting, though additional constraints may be needed to prevent wrinkling. Finally, once promising tests have been designed that seem fit for purpose, experimental validation of the uncertainty bounds, coupled with round-robin testing, will be needed to establish them as the next generation of test standards.