1 Introduction

Composite laminates are usually fabricated by overlaying several layers of composite materials. Each of these layers is commonly referred to as a lamina. Many such laminae are held together by a resin and combined, thereby constituting a laminate. The overall sequence of orientations of the laminae in the laminate is called the lamination scheme or stacking sequence [1, 2]. For a constant thickness, altering the stacking sequence of a laminate can significantly influence its in-plane stiffness and bending stiffness due to the directional properties of each lamina. Each ply angle of the laminate also has a direct (but non-linear) effect on the in-plane stiffness and bending stiffness.

Optimization is a mathematical approach for making the ‘best’ possible use of available resources to achieve the desired target/goal [3]. Generally, the task of an optimization method is to maximize or minimize a desired target property, expressed in the form of an objective function. Additionally, locating a specific point or zone of the target property may also be a goal of optimization. A typical optimization problem can be stated as below:

$$\begin{aligned} & {\text{Minimize/maximize}}\;f\left( x \right) \\ & {\text{subject to the constraint}}\;x_{i}^{\min } \le x_{i} \le x_{i}^{\max } \\ \end{aligned}$$
(1)

where \(x_{i}\) is the ith design variable (i = 1, 2, …, k), k is the total number of design variables, and \(x_{i}^{\min }\) and \(x_{i}^{\max }\) are the lower and upper bounds of the ith design variable, respectively.
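
As a concrete illustration of the bound-constrained problem in Eq. (1), the following Python sketch minimizes an arbitrary placeholder objective with SciPy; the quadratic function, the bounds and the solver choice are assumptions made purely for illustration.

```python
# Minimal sketch of the bounded optimization problem in Eq. (1), using SciPy.
# The quadratic objective below is an arbitrary placeholder, not a laminate model.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Placeholder objective: a simple shifted quadratic.
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

x_min = np.array([-5.0, -5.0])   # lower bounds x_i^min
x_max = np.array([5.0, 5.0])     # upper bounds x_i^max
x0 = np.zeros(2)                 # initial guess

result = minimize(f, x0, method="L-BFGS-B", bounds=list(zip(x_min, x_max)))
print(result.x, result.fun)
```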

An optimization algorithm is a technique that is applied iteratively, comparing previously derived solutions with the current one, until an optimal or a satisfactory solution is achieved. With the advancement of high-speed computing facilities, optimization has become an integral part of computer-aided design. There are mainly two distinct types of optimization algorithms:

  1. (a)

    Deterministic algorithms: They employ specific rules for moving from one solution to another. Given a particular input, they produce the same output solution even when executed multiple times; in fact, they pass through the same sequence of states.

  2. (b)

    Stochastic algorithms: These algorithms rely on probabilistic translation rules. They are gaining much popularity due to certain critical properties that the deterministic algorithms do not have. They can efficiently deal with inherent system noise and can take care of the models or systems that are highly nonlinear, high dimensional, or otherwise inappropriate for classical deterministic algorithms [4].

All optimization algorithms can further be classified as single-objective or multi-objective techniques based on the number of objective functions to be dealt with. If the goal of the algorithm is to optimize only a single objective function at a time, it is referred to as a single-objective optimization technique. On the other hand, if it has to optimize multiple objective functions simultaneously, it is called a multi-objective optimization technique. However, it is almost impossible to find the global optima for all types of design-related optimization problems by applying the same optimization procedure, since the objective function in a design optimization problem and the associated design variables vary largely from one problem to another. An optimization algorithm suitable for a particular problem may completely fail, or may even be counterproductive, for another problem. The basic formulation of any typical optimization process is shown in Fig. 1.

Fig. 1 A flowchart of the optimal design procedure

1.1 Single-Objective Optimization

The basic aim of a single-objective optimization technique is to discover the ‘best’ solution, which corresponds to the minimum or maximum value of a single objective function. These are the simplest optimization techniques and have found huge popularity among decision makers due to their simplicity and comprehensibility. Although they can provide new insights into the nature of a problem, they usually have limited significance, because most design optimization problems need simultaneous consideration of a number of objectives which may conflict with each other. Thus, using single-objective optimization techniques, it is almost impossible to find an optimal combination of the design variables that can effectively optimize all the considered objectives.

1.2 Multi-Objective Optimization

Numerous practical combinatorial optimization problems require simultaneous fulfillment of several objectives, like minimization of risk, deviation from the target level and cost, or maximization of reliability, efficiency etc. Multi-objective optimization is generally considered as an advanced design technique in structural optimization [5], because most practical problems require information from multiple domains and are thus much more complex in nature. Additional complexity arises due to the involvement of multiple objectives which often conflict with each other. One of the main reasons behind the wide applicability of multi-objective optimization techniques is their intrinsic characteristic of allowing the concerned decision maker to actively take part in the design selection process even after formulation of the corresponding mathematical model. Since each structural optimization problem consists of multiple independent design variables significantly affecting the final solution, selection of the design variables, objectives and constraints plays a pivotal role. Sometimes, a multi-objective optimization problem may be replaced by an optimization problem having only one dominating objective function, with the use of appropriate equality and inequality constraints. However, selection of the limits of the various constraints may be another challenging task in real-world design problems. When numerous contending objectives appear in a realistic application, the decision maker often faces a problem where he/she must find the most suitable compromise solution among the conflicting objectives.

A multi-objective optimization problem can be converted into an equivalent single-objective optimization problem by aggregating multiple objective functions into a single one [6]. Reduction of a multi-objective optimization problem into a single-objective optimization problem is commonly known as scalarization. A classical scalarization technique is the weighted sum method where an auxiliary single objective function is formulated as follows:

$$f(x) = \sum\limits_{i = 1}^{m} {w_{i} f_{i} (x)} ,\quad w_{i} > 0,\quad \sum\limits_{i = 1}^{m} {w_{i} = 1}$$

where \(w_{i}\) is the weight assigned to the ith objective function \(f_{i}(x)\) and m is the number of objective functions.

Simplicity of the weighted sum scalarization method is indeed one of its major advantages [7]. However, in this method, the optimal solution obtained depends on the choice of the weight assigned to each objective function. In the absence of any prior knowledge about the weights, it is desirable to have a set of equally acceptable solutions, each providing the best possible compromise among the objectives. This set of non-dominated solutions is referred to as the Pareto optimal set or Pareto front. Pareto optimality implies that no other feasible solution exists that is at least as good as a member of the Pareto set in terms of all the objectives and strictly better in terms of at least one [8]. Thus, on the Pareto front, the value of one objective function can only be improved by worsening at least one of the other objective functions.
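
As an illustration of these two ideas, the following Python sketch scalarizes a two-objective minimization problem with the weighted sum method and extracts the non-dominated (Pareto) points by a brute-force dominance check. The two quadratic objectives are placeholders chosen for illustration, not taken from any cited laminate study.

```python
# Hedged sketch: weighted-sum scalarization and a brute-force Pareto filter
# for a two-objective minimization problem with placeholder objectives.
import numpy as np

def objectives(x):
    # Two conflicting placeholder objectives of a scalar design variable x.
    return np.array([x ** 2, (x - 2.0) ** 2])

def weighted_sum(x, w):
    # Aggregate the objective vector with weights w_i > 0, sum(w_i) = 1.
    return np.dot(w, objectives(x))

# Evaluate a grid of candidate designs.
candidates = np.linspace(-1.0, 3.0, 81)
F = np.array([objectives(x) for x in candidates])

def is_dominated(fi, F):
    # fi is dominated if another point is at least as good in every objective
    # and strictly better in at least one.
    return np.any(np.all(F <= fi, axis=1) & np.any(F < fi, axis=1))

pareto_mask = np.array([not is_dominated(fi, F) for fi in F])
print("Pareto-optimal designs:", candidates[pareto_mask])

# A weighted-sum solution with equal weights picks one compromise on that front.
w = np.array([0.5, 0.5])
best = min(candidates, key=lambda x: weighted_sum(x, w))
print("Weighted-sum optimum (w = 0.5/0.5):", best)
```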

2 State-of-the-Art in High-Fidelity Design Optimization of Composite Laminates

Excellent mechanical properties of composite laminates are mainly responsible for their widespread popularity in structural applications. However, to exploit the fullest potential of composite structures, optimal selection of shape, size, fiber angles, material etc. is essential, which makes it a complex design optimization problem. This complexity arises not only due to the number of design variables involved, but also due to the multimodal output response and the large design space with infeasible or expensive derivatives.

This section mainly categorizes and compares the various optimization methods employed in optimal lay-up selection of composite laminates. The goal of the comprehensive literature review presented in this section is to offer a ready reference for choosing suitable optimization techniques for a given problem. However, due to paucity of space, details of the adopted optimization algorithms are not explained here; only their applications in composite laminate optimization are focused on.

In the literature, several categorizations of composite laminate optimization have been suggested. For example, Fang and Springer [9] identified four groups of optimization approaches, i.e. (a) analytical procedures, (b) enumeration methods, (c) heuristic schemes and (d) non-linear programming. From a more structure-specific context, Abrate [10] categorized laminate optimization applications based on the objective function, which could be either one or a combination of in-plane properties, flexural rigidity, buckling load, natural frequency and thermal effects. Venkataraman and Haftka [11] recommended categorizing the design methods as (a) single laminate design and (b) stiffened plate design, whereas Setoodeh et al. [12] suggested classifying the literature on optimization of composite laminates into constant stiffness design and variable stiffness design. In the context of this paper, some prominent literature is briefly reviewed and the adopted optimization techniques are grouped into three broad classes, i.e. gradient-based methods, specialized algorithms and direct search methods.

2.1 Gradient-Based Methods

Gradient-based methods rely on the gradients of the objective function and constraints, which can be approximated when the corresponding closed-form mathematical expressions are not available. However, computing or approximating these gradients can be computationally expensive. Generally, these methods are unable to locate the global optimum, but they have a quicker convergence rate compared to direct and heuristic methods.

The most common approach to find a stationary point of an objective function is to set its first derivative (gradient) to zero. This approach was adopted by Sandhu [13] to predict the optimal layer angle of a composite lamina. Its main advantage is the speed with which all the stationary points of the objective function can be located in a single run. However, it requires the objective function to be expressed as a closed-form equation. Moreover, it works only for single-variable, unconstrained optimization problems, which imposes a serious bottleneck on its practical applications.

Another popular gradient-based method is the steepest descent technique, which performs, at each step, a line search in the direction opposite to the gradient of the objective function. For composite stacking sequence design problems, it may be used as a standalone technique [14] or as an aid to other optimization techniques [15]. Initially, the steepest descent technique converges quickly; however, as it approaches the global optimum, it becomes sluggish. Its tendency to get trapped in local optima and its inability to deal with discrete variables are its serious drawbacks.
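
A minimal sketch of the steepest descent idea follows. It uses a fixed step size instead of an exact line search, and the quadratic objective and its gradient are illustrative placeholders rather than a laminate response.

```python
# Hedged sketch of steepest descent with a fixed step size.
import numpy as np

def f(x):
    return x[0] ** 2 + 10.0 * x[1] ** 2          # placeholder objective

def grad_f(x):
    return np.array([2.0 * x[0], 20.0 * x[1]])   # its analytical gradient

x = np.array([3.0, 1.5])     # starting point
step = 0.05                  # fixed step length (a line search would adapt this)
for k in range(200):
    g = grad_f(x)
    if np.linalg.norm(g) < 1e-8:                 # stop when the gradient vanishes
        break
    x = x - step * g                             # move opposite to the gradient
print("approximate minimizer:", x)
```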

Hirano [16] employed Powell’s conjugate gradient (CG) method, which requires no gradient information but works only on unimodal functions, for maximizing the buckling load of laminated plate structures under axial compression.

Newton (or Newton–Raphson) methods require second-order gradient information and are seldom used for optimization of laminated composite design problems. Quasi-Newton (QN) methods, on the other hand, are frequently applied as they allow the Hessian to be approximated without computing second-order derivatives. Davidon, Fletcher and Powell (DFP) [17] applied QN techniques for predicting the optimal lay-up of laminated composites. The DFP-QN method, originally proposed by Fletcher and Powell [18], was adopted by Waddoups et al. [19] and Kicher and Chao [20] for the design of optimal composite cylindrical shells. A quadratic interpolation of the objective function, including strength and buckling failure, was considered in the one-dimensional minimization problem. Kim and Lee [21] also applied the DFP method for optimization of a curved actuator with piezoelectric fibers. QN methods generally have a higher convergence rate than the CG method, although their performance is problem dependent and may change from one case to another.
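
The following short sketch shows the quasi-Newton idea in practice: SciPy’s BFGS implementation builds an approximate Hessian from successive gradient estimates, so no second-order derivatives are supplied. The Rosenbrock-type test function is a generic placeholder, not a laminate model.

```python
# Hedged sketch of a quasi-Newton (BFGS) search; gradients are estimated
# numerically by SciPy, and the Hessian is approximated internally.
from scipy.optimize import minimize

def f(x):
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

result = minimize(f, x0=[-1.2, 1.0], method="BFGS")
print(result.x, result.nit)
```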

The method of feasible directions (MFD) attempts to move to a better point without violating any of the constraints. Since a composite lay-up design problem usually includes several inequality constraints, MFD has been a good candidate for solving this problem [22]. However, like other gradient-based methods, it is not always able to locate the global optimum. It has also been adapted for use in combination with finite element analyses [23].

2.2 Specialized Algorithms

These methods are explicitly developed for optimizing composite laminates while exploiting a number of their properties to simplify the optimization process. Often developed for a particular application, they generally simplify the problem by restricting the design space with respect to allowable lay-up, loading condition and/or objective function. Since they are tailored to a specific design problem, they occasionally lose robustness when applied to a general optimization problem. However, when designed for a particular problem, they can be much faster than other optimization techniques.

Using lamination parameters [24], which are trigonometric functions of the ply angles integrated through the laminate thickness, has the advantage of reducing the number of parameters required to express a laminate’s properties to a maximum of 12, regardless of the number of layers [25, 26].

Besides the promising advantage of using lamination parameters, the challenge in dealing with these parameters is that they are not independent and cannot be arbitrarily prescribed. Several authors, such as Fukunaga and Vanderplaats [27], and Grenestedt and Gudmundson [28], suggested necessary conditions for different combinations of lamination parameters, but the complete set of sufficient conditions for all 12 parameters is still unknown [29]. Miki [30] proposed a method to visualize the admissible range of lamination parameters and their corresponding lay-ups. Just like the in-plane lamination diagram, flexural lamination diagrams were also developed [31]. Fukunaga and Chou [32] adopted a similar graphical technique for laminated cylindrical pressure vessels. Lipton [33] developed an analytical method to find the configuration of a three-ply laminate under in-plane loading conditions. Autio [34], Kameyama and Fukunaga [35], and Herencia et al. [36] employed GA to solve the inverse problem of recovering a stacking sequence from target lamination parameters.
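
As a concrete illustration, the sketch below computes the four in-plane lamination parameters of an equal-ply-thickness laminate as thickness-averaged trigonometric functions of the ply angles; the flexural parameters would additionally weight each ply by its through-thickness position. The example stacking sequence is arbitrary and the routine does not check the feasibility conditions discussed above.

```python
# Hedged sketch: in-plane lamination parameters for equal-thickness plies.
import numpy as np

def inplane_lamination_parameters(ply_angles_deg):
    """V1..V4: thickness-averaged cos/sin of 2*theta and 4*theta."""
    theta = np.radians(np.asarray(ply_angles_deg, dtype=float))
    V1 = np.mean(np.cos(2.0 * theta))
    V2 = np.mean(np.sin(2.0 * theta))
    V3 = np.mean(np.cos(4.0 * theta))
    V4 = np.mean(np.sin(4.0 * theta))
    return V1, V2, V3, V4

# Example: an arbitrary [45/-45/0/90] lay-up (angles in degrees).
print(inplane_lamination_parameters([45, -45, 0, 90]))
```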

A layer-wise optimization technique optimizes the overall performance of a composite laminate by sequentially considering one or some of the layers within the laminate. This method works with one layer or a subset of layers at a time: it first requires selection of the best initial laminate and then addition of the layer that best improves the laminate performance, which is usually achieved by an enumeration search [37]. Lansing et al. [15] determined the initial laminate by assuming that the layers with ply angles of 0°, 90° and ± 45° carry all the longitudinal, transverse and shear stresses respectively. Starting with a one-layer laminate, Massard [38] determined the best fiber orientation for a single-ply laminate. Todoroki et al. [39] proposed two other approaches to find the initial laminate. Narita [40], and Narita and Hodgkinson [41] endeavored to solve this problem by starting with a laminate having hypothetical layers with no rigidity. Starting from the outermost layer, each layer was sequentially replaced by an orthotropic layer and the optimal fiber orientation angle was determined by enumeration. The first solution derived was subsequently applied as an initial approximation for the next cycle. Farshi and Rabiei [42] proposed a two-step method for minimum-thickness design: the first step introduces new layers into the laminate, while the second examines the possibility of replacing higher-quality layers with weaker materials. Ghiasi et al. [43] applied a layer separation technique to keep the locations of the different layers unchanged when a layer was added.

2.3 Direct Search Methods

While the analytical methods are known for their fast convergence rate, direct search methods have the advantage of requiring no gradient information about the objective function and constraints. This feature is a significant benefit because, in composite laminate design, derivative calculations or their approximations are often costly or impossible to obtain. Direct search methods systematically approach the optimal solution using only the function values from the preceding steps. As a result, several of these techniques have become popular for optimization of composite lay-up design, as described in the following paragraphs. Stochastic search algorithms, a sub-class of direct search methods, “[…] are better alternatives to traditional search techniques […] they have been used successfully in optimization problems having complex design spaces. However, their computational costs are very high in comparison to deterministic algorithms” [44].

One of the first attempts at optimal design of composite laminates was the application of enumeration search, which consists of trying all the possible combinations of the design variables and simply selecting the best one. Although cumbersome, this technique was adopted to find the lightest composite laminate during the 1970s [45]. The Nelder–Mead (NM) method was employed by Tsau et al. [46] for optimal stacking sequence design of a laminated composite loaded with tensile forces, while the evaluation of stresses was performed by FEM. Tsau and Liu [47] reported that the NM method is faster and more accurate than a QN method for lay-up selection problems with a small number of layers (i.e. fewer than 4). Foye [48] was the first researcher to employ a random search to determine the optimal ply orientation angles of a laminated composite plate. Graesser et al. [49] also adopted a random search, called improving hit and run (IHR), to find a laminate with the minimum number of plies that could safely sustain a given loading condition.
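
A minimal sketch of such an enumeration search over a discrete ply-angle set is given below. The merit function is a stand-in for an FEM or classical lamination theory analysis, and the angle set and laminate size are arbitrary assumptions.

```python
# Hedged sketch of an enumeration (exhaustive) search over discrete ply angles.
import itertools
import numpy as np

ALLOWED_ANGLES = (0, 45, -45, 90)   # common discrete ply-angle set
N_PLIES = 4                          # half-laminate size for a symmetric lay-up

def performance(stack):
    # Placeholder merit function; a real study would call an FEM/CLPT solver here.
    theta = np.radians(stack)
    return np.sum(np.cos(2 * theta)) + 0.5 * np.sum(np.cos(4 * theta))

best = max(itertools.product(ALLOWED_ANGLES, repeat=N_PLIES), key=performance)
print("best lay-up (half stack):", best, "merit:", performance(best))
```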

The SA technique, which mimics the annealing process in metallurgy, globalizes the greedy search process by permitting unfavorable solutions to be accepted with a probability governed by a parameter called ‘temperature’. The temperature is initially assigned a high value, which corresponds to a higher probability of accepting a bad solution, and is gradually reduced based on a user-defined cooling schedule. Retaining the best solution found so far is recommended in order to preserve good solutions [50]. SA is the most popular method, after GA, for stacking sequence optimization of composite laminates [51, 52]. Generation of a sequence of points that converges to a non-optimal solution is one of the major problems in SA. To overcome this shortcoming, several modifications of SA have been proposed, such as increasing the probability of sampling points far from the current point by Romeijn et al. [53] or employing a set of points at a time instead of only one point by Erdal and Sonmez [50]. To increase the convergence rate, Genovese et al. [54] proposed a two-level SA, including a ‘global annealing’ where all the design variables were perturbed simultaneously and a ‘local annealing’ where only one design variable was perturbed at a time. In order to prevent re-sampling of solutions, Rao and Arvind [55] embedded a Tabu search in SA, obtaining a method called Tabu-embedded simulated annealing (TSA). Although SA is a good choice for the general case of optimal lay-up selection, it cannot easily be programmed to take advantage of the particular properties of a given problem.
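
The following sketch illustrates the basic SA loop for a discrete stacking sequence: a random ply perturbation, the temperature-dependent acceptance rule, a geometric cooling schedule and retention of the best solution found. The merit function, cooling parameters and move rule are illustrative assumptions, not those of any cited study.

```python
# Hedged sketch of simulated annealing for a discrete stacking sequence.
import math
import random

ALLOWED = [0, 45, -45, 90]

def merit(stack):
    # Stand-in for an expensive laminate analysis (e.g. buckling load from FEM).
    return sum(math.cos(math.radians(2 * a)) for a in stack)

def simulated_annealing(n_plies=8, T0=5.0, cooling=0.95, iters=2000, seed=0):
    rng = random.Random(seed)
    current = [rng.choice(ALLOWED) for _ in range(n_plies)]
    best = list(current)
    T = T0
    for _ in range(iters):
        candidate = list(current)
        candidate[rng.randrange(n_plies)] = rng.choice(ALLOWED)  # perturb one ply
        delta = merit(candidate) - merit(current)
        # Accept improvements always; accept worse moves with probability exp(delta/T).
        if delta >= 0 or rng.random() < math.exp(delta / T):
            current = candidate
        if merit(current) > merit(best):
            best = list(current)      # retain the best solution seen so far
        T *= cooling                  # geometric cooling schedule
    return best, merit(best)

print(simulated_annealing())
```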

GA is more flexible in this respect, although it is often computationally more time consuming [51]. In the words of [56], “GAs are excellent all-purpose optimization algorithms because they can accommodate both discrete and continuous valued design variables and search through nonlinear or noisy search spaces by using payoff (objective function) information only”. Callahan and Weeks [57], Nagendra et al. [58], Le Riche and Haftka [59], and Ball et al. [60] were among the first researchers to adopt GA for stacking sequence optimization of composite laminates. It has been employed for different objective functions, such as strength [59], buckling loads [56], dimensional stability [61], strain energy absorption [62], weight (either as a constraint or as an objective function to be minimized) [63], bending/twisting coupling [56], stiffness [62], fundamental frequencies [63], deflection [64] or matching target lamination parameters [65]. It has also been applied to the design of a variety of composite structures ranging from simple rectangular plates to complex geometries, such as sandwich plates [66], stiffened plates [58], bolted composite lap joints [67], laminated cylindrical panels [64] etc. GA can also be combined with finite element packages to analyze the stress and strain characteristics of composite structures [64].
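
A compact sketch of a GA for discrete stacking-sequence design is shown below. The truncation selection, one-point crossover and per-ply mutation operators, as well as the placeholder fitness function, are simplifying assumptions chosen for brevity; a real study would call an FEM or laminate-theory solver inside the fitness evaluation.

```python
# Hedged sketch of a simple genetic algorithm for discrete ply-angle design.
import random

ALLOWED = [0, 45, -45, 90]

def merit(stack):
    # Stand-in fitness, e.g. a normalized buckling load from a laminate solver.
    return sum(1.0 if a in (45, -45) else 0.5 for a in stack)

def genetic_algorithm(n_plies=8, pop_size=30, generations=50, p_mut=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(ALLOWED) for _ in range(n_plies)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=merit, reverse=True)          # rank by fitness
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_plies)        # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_plies):               # ply-angle mutation
                if rng.random() < p_mut:
                    child[i] = rng.choice(ALLOWED)
            children.append(child)
        pop = parents + children
    return max(pop, key=merit)

best = genetic_algorithm()
print("best lay-up:", best, "fitness:", merit(best))
```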

One of the main drawbacks of GA is its high computational cost; another is premature convergence, which may occur if the initial population is not appropriately selected. Sargent et al. [51] compared GA with some other search algorithms (i.e. random search, greedy search and SA) and noticed that GA could provide better solutions than greedy searches, which in some instances were unable to determine an optimal solution.

The PSO technique was applied by Suresh et al. [68] for optimal design of a composite box-beam of a helicopter rotor blade. Kathiravan and Ganguli [69] compared PSO with a gradient-based method for maximization of the failure strength of a thin-walled composite box-beam, considering the ply orientation angles as the design variables. Lopez et al. [70] illustrated the application of PSO for weight minimization of composite plates.

GA [71], ACO [72], PSO [73] and ABC [74] are some of the most commonly used stochastic search algorithms in composite laminate optimization. However, there are only a few comparative studies on the performance of different stochastic search algorithms in composite laminate frequency parameter optimization. Apalak et al. [74] proposed the application of the ABC algorithm to maximize the fundamental frequency of composite plates considering fiber angles as the design variables. It was observed that, despite the ABC algorithm having a simpler structure than GA, it was as effective as GA. Ameri et al. [71] adopted a hybrid NM algorithm and a GA technique to find the optimal fiber angles maximizing the fundamental frequency. It was concluded that the hybrid NM algorithm was faster and more accurate than GA. However, it is hard to state whether the superior performance of the NM algorithm was genuinely due to algorithmic superiority or because the authors treated the design variables as continuous in the NM algorithm, whereas, in GA, their discrete values were considered. Similarly, Koide et al. [72] presented the application of an ACO algorithm to maximize the fundamental frequency of cylindrical shells and compared the optimal solutions with GA-based solutions derived from the literature. It was noted that the optimal solutions obtained using ACO were almost comparable to those of the GA technique. Tabakov and Moyo [75] compared the relative performance of GA, PSO and the Big Bang-Big Crunch (BB-BC) algorithm while considering a burst pressure maximization problem for a composite cylinder. Hemmatian et al. [76] applied ICA techniques along with GA and ACO to simultaneously optimize the weight and cost of a rectangular composite plate. It was reported that ICA outperformed both GA and ACO with respect to the magnitude of the objective function and constraint accuracy.

2.4 Discussions

Tables 1 and 2 provide a comprehensive list of research works on single-objective optimization of composite laminates, while some important works on multi-objective optimization of composite laminates are presented in Table 3. It can be observed from these tables that FEM has been the most preferred solver because of its ability to simulate laminates of various shapes and sizes. Additionally, various types of load conditions, discontinuities and boundary conditions can also be easily simulated in FEM to mimic real-world applications. It provides enormous flexibility in choosing from a wide array of elements. The degrees of freedom and order of elements can also be effortlessly adjusted.

Table 1 Literature on high-fidelity optimization of composite laminates
Table 2 Literature on high-fidelity optimization of composite laminates for frequency parameter maximization
Table 3 Literature on high fidelity multi-objective optimization of composite laminates

The FSDT has been observed to be the most popular plate theory among researchers for high-fidelity optimization of composite laminates. It is much more accurate than CLPT and far less complicated than HSDT. However, it requires a good guess of the shear correction factor, which is essential to account for the strain energy of shear deformation. Nevertheless, with a suitable value of the shear correction factor, FSDT can estimate plate solutions that are comparable to HSDT, especially for thin and moderately thick plates. The majority of the works in the literature (and real-world applications) are either on thin plates or moderately thick ones, which has made FSDT so popular.

Ply angles are the most preferred design variables in high-fidelity design and optimization of laminates. In most real-world applications, other parameters, like the length, width, thickness and curvature of the laminate, cannot be easily altered since changing their values may require significant modifications in the plate design as well as the associated components. Further, material variation may not always be feasible due to the specialized nature of composite applications. For example, the composite material suitable for a structural load-bearing laminate may be unsuitable for an acoustic absorbent application or a rotor-blade application. From a solution viewpoint, optimization of ply angles is an NP-hard problem. Further, the large design space of ply angles (± 90°) poses significant challenges during the optimization phase. These reasons have encouraged researchers to develop efficient strategies and algorithms to solve lay-up orientation optimization problems. For example, most researchers now treat lay-up orientation as a discrete optimization problem where only ply angles with specific increments (say 5°, 15° or 45°) are searched during the optimization phase. This is not only computationally efficient but also resonates well with traditional laminate manufacturing technologies that are unable to deal with arbitrary angles (say 19.21°). Lamination parameters are a convenient alternative to bypass discrete stacking sequence optimization. Moreover, lamination parameter optimization is a convex problem whose search space is a 12-dimensional hypercube with ± 1 bounds [26].

Weight reduction, buckling load maximization and frequency maximization have been the most common objective functions in high-fidelity optimization of laminates. It can also be noticed that the majority of the research has been conducted on rectangular composite plates. The GA technique has been the most popular metaheuristic applied to high-fidelity optimization of laminates. However, gradient-based approaches have also been quite popular among researchers. Research on multi-objective high-fidelity optimization of laminates is much scarcer, which may be due to the tremendous computational costs involved in such studies. Multi-objective GA has been the most popular optimizer employed for Pareto optimization of laminates.

3 State-of-the-Art in Metamodel-Based Design Optimization of Composite Laminates

High-fidelity design optimization is an important, accurate and powerful approach for determining the optimal parameters of a design problem. However, the finite element-based optimization strategy is quite time consuming and thus computationally expensive. Based on the observations of Venkataraman and Haftka [11], optimization-related computational costs depend on three indices, i.e. model complexity, analysis complexity and optimization complexity (see Fig. 2). For example, for a typical FEM run, say an 8-layer symmetric laminate using a 4 × 4 mesh, a 9-node isoparametric element-based Fortran program would require about 1/10th of a second for one function evaluation. However, an optimization trial of 50,000 function evaluations of the same FEM coupled with GA would take roughly 98 min, meaning that about 85–90% of the time would be consumed in objective function evaluations by the FEM core. The computation time becomes a serious problem when the probabilistic nature of metaheuristic algorithms is considered, since each such optimization trial must be repeated multiple times to develop sufficient confidence in the predicted solutions. It has been noticed that, despite continual advances in computing power, the complexity of analysis codes, such as finite element analysis (FEA) and computational fluid dynamics (CFD), seems to keep pace with the computing advancements [181]. In the past two decades, approximation methods and approximation-based optimization have attracted intensive attention from researchers. These approaches approximate computation-intensive functions with simple analytical models. Such a simple model is often called a metamodel and the process of developing a metamodel is known as metamodeling. Based on a developed metamodel, different optimization techniques can then be applied to search for the optimal solution, which is therefore referred to as metamodel-based design optimization (MBDO). The advantages of using a metamodel are manifold [182]:

  1. (a)

    Efficiency of optimization is greatly improved with metamodels.

  2. (b)

    Because the approximation is based on sample points, which can be obtained independently, parallel computation (of sample points) is supported.

  3. (c)

    It can deal with both continuous and discrete variables.

  4. (d)

    The approximation process can help study the sensitivity of design variables, thus providing engineers insights into the problem.

Fig. 2 Schematic showing types of complexity encountered in composite structures optimization [11]

Considering all these advantages, it is advisable to deploy MBDO instead of high-fidelity design optimization when a small sacrifice in accuracy does not pose a serious problem. In fact, MBDO is now being widely recommended and employed for different applications in composite laminate structures (see Fig. 3), and research on this topic has gained significant interest recently.

Fig. 3 Metamodeling and its role in support of engineering design optimization [182]

3.1 Metamodeling

A metamodel is a mathematical description developed based on a dataset of input and the corresponding output from a detailed simulation model, i.e. a model of a model (see Fig. 4). Once the model is developed, the approximate response (output) at any sample location can be evaluated and used in MBDO. The general form of a metamodel is provided as below:

$$y(x) = \hat{y}(x) + \varepsilon$$
(2)

where \(y(x)\) is the true response obtained from the detailed model, \(\hat{y}(x)\) is the approximate response from the metamodel and \(\varepsilon\) is the approximation error. Typically, the following steps are involved in metamodeling (see Fig. 5):

Fig. 4 Metamodel of a computational analysis for optimization applications produces approximations of the objective functions and constraints [183]

Fig. 5 Concept of building a metamodel of a response for two design variables; a design of experiments, b function evaluations and c metamodel [184]

  1. (a)

    Choosing an appropriate sampling method for generation of data.

  2. (b)

    Choosing a model to represent the data.

  3. (c)

    Fitting the model to the observed data and its validation.
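
A minimal end-to-end sketch of these three steps is given below: random sampling of a two-variable design space, least-squares fitting of a quadratic polynomial response surface, and validation on an independent test set. The "true" response is a cheap analytical stand-in for an expensive simulation, and all settings are illustrative assumptions.

```python
# Hedged sketch of the metamodeling workflow: sample, fit, validate.
import numpy as np

rng = np.random.default_rng(0)

def true_response(X):
    # Placeholder for an expensive FEM evaluation of two design variables.
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

def quadratic_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# (a) sampling: random points in [-1, 1]^2 (an LHS design could be used instead)
X_train = rng.uniform(-1.0, 1.0, size=(30, 2))
y_train = true_response(X_train)

# (b) model fitting: least-squares coefficients of the quadratic surface
beta, *_ = np.linalg.lstsq(quadratic_features(X_train), y_train, rcond=None)

# (c) validation on an independent test set
X_test = rng.uniform(-1.0, 1.0, size=(20, 2))
y_pred = quadratic_features(X_test) @ beta
rmse = np.sqrt(np.mean((true_response(X_test) - y_pred) ** 2))
print("test RMSE:", rmse)
```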

3.1.1 Sampling Strategy (Design of Experiments)

The process of identifying the desired sample points in a design space is often called the design of experiments (DOE). It can also be referred to as a sampling plan [185]. Any metamodel generation process starts with a DOE, i.e. a way to carefully plan experiments/simulations in advance so that the derived results are meaningful as well as valid. Ideally, any experimental design plan should describe how participants are allocated to experimental groups. A common method is a completely randomized design, where participants are assigned to groups at random. A second method is a randomized block design, where participants are divided into homogeneous blocks before being randomly assigned to groups. The experimental design should minimize or eliminate confounding variables, which may offer alternative explanations for the experimental results. It should allow the decision maker to draw inferences about the relationship between independent and dependent variables. DOE reduces the variability to make it easier to find differences in treatment outcomes. The most important principles in experimental design are mentioned below:

  1. (a)

    Randomization: The random process implies that every possible allotment of treatments has the same probability, i.e. the order in which samples are drawn must not have any effect on the outcome of the metamodel. The purpose of randomization is to remove bias and other sources of uncontrollable extraneous variation. Another advantage of randomization (accompanied by replication) is that it forms the basis of any valid statistical test. Thus, with the help of randomization, there is a chance for every individual in the sample to become a participant in the study. This contributes to distinguishing a ‘true and rigorous experiment’ from an observational study and quasi-experiment [186].

  2. (b)

    Replication: The second principle of an experimental design is replication, which is a repetition of the basic experiment. By repeating an experiment multiple times, a more accurate estimate of the experimental error can be obtained. However, in the context of in silico simulations, replication has no consequence on the overall outcome, since a deterministic FEM simulation produces exactly the same output each time it is repeated, and hence no experimental error arises.

  3. (c)

    Local control: It has been observed that all the extraneous sources of variation cannot be removed by randomization and replication alone. This necessitates a refinement of the experimental technique; in other words, a design needs to be chosen in such a manner that all the extraneous sources of variation are brought under control. The main purpose of local control is to increase the efficiency of an experimental design by decreasing the experimental error. Simply stated, local control means controlling the sources of variation in the experimental results. Again, in the context of in silico simulations, it has no effect.

The DOE starts by choosing a training dataset, which refers to a set of observations used by the computer algorithms to train themselves to predict the process behavior. The computer algorithms learn from this dataset, and thus find relationships, develop understanding, make decisions and evaluate their confidence based on the training data. Generally, the better the training data, the better the performance of a metamodel. In fact, the quality and quantity of the training data have as much to do with the success of a metamodel as the algorithms themselves. Kalita et al. [187] have shown how the quality of data becomes an important factor in achieving a robust metamodel. A comprehensive list of various sampling strategies is reported in Fig. 6. Widely used ‘classic’ experimental designs include factorial or fractional factorial design [188], central composite design (CCD) [189], Box-Behnken design [189], D-optimal design [190] and Plackett–Burman design [189].
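
As a small illustration of a space-filling sampling plan, the sketch below draws a Latin hypercube design with SciPy (version 1.7 or later provides scipy.stats.qmc) and scales it to a ± 90° ply-angle range; the sample size and the number of design variables are arbitrary assumptions.

```python
# Hedged sketch: Latin hypercube sampling of a ply-angle design space.
from scipy.stats import qmc

n_samples, n_vars = 20, 4                 # e.g. 4 ply angles as design variables
sampler = qmc.LatinHypercube(d=n_vars, seed=1)
unit_samples = sampler.random(n=n_samples)                        # points in [0, 1)^d
design = qmc.scale(unit_samples, [-90] * n_vars, [90] * n_vars)   # map to +/-90 degrees
print(design[:3])
```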

Fig. 6 Various sampling techniques

3.1.2 Metamodeling Strategy

The act of developing an approximate model to fit a set of training data is the core of any metamodeling strategy. Metamodeling evolved from the classical DOE theory, where polynomial functions are used as response surfaces or metamodels. Besides the commonly used polynomial functions, Sacks et al. [191] proposed the use of a stochastic model, called kriging [192], to treat the deterministic response as a realization of a random function with respect to the actual system response. Neural networks have also been applied for generating response surfaces for system approximation [193]. Other types of models include RBFs [194], MARS [195], least interpolating polynomials [196] and inductive learning [197]. A combination of polynomial functions and ANNs has also been reported in [198]. Giunta and Watson [199] compared the performance of a kriging model and a PR model on a test problem, but no conclusion could be drawn with respect to the superiority of one model over the other. A comprehensive list of various metamodeling strategies is presented in Fig. 7. Additionally, Fig. 8 depicts the suitability of each traditional sampling method for various metamodeling strategies.

Fig. 7 Various metamodeling techniques

Fig. 8 Surrogate modeling methods and corresponding sampling techniques [200]

3.1.3 Metamodel Validation

Validation of the accuracy of a metamodel with respect to the actual model or experiment is a prime task in completing the entire process of metamodeling. The objective of any metamodel is to represent the true model as accurately as possible. Any metamodel should exhaustively and precisely capture all the information in the training dataset. In general, the performance of a metamodel in representing the true model is validated based on the residuals. The difference between the true model value (\(y_{i}\)) and the metamodel value \((\hat{y}_{i})\) is termed the residual.

$$\varepsilon_{i} \, = \,y_{i} \, - \,\hat{y}_{i}$$
(3)

where i represents the sample point among a total of n sample points. The algebraic sum of squares of residuals for the entire set of sample points is called SSR (squared sum of residuals).

$$SS_{R} \, = \,\sum\limits_{i = 1}^{n} {(y_{i} \, - \,\hat{y}_{i} )^{2} }$$
(4)

Similarly, the total sum of squares (SST) is calculated using the following equation:

$$SS_{T} \, = \,\sum\limits_{i = 1}^{n} {(y_{i} \, - \,\overline{y}_{{}} )^{2} }$$
(5)

where \(\overline{y}\) represents the mean value of the sample points. The sum of squares for the model (SSM) can now be calculated as follows:

SSM = SST − SSR.

From the above equations, it is clear that the sum of squares of residuals is the fitting error. Thus, it is always desirable that it be close to zero. A zero value indicates that the metamodel perfectly fits the training data. However, it should always be kept in mind that a perfectly fitted model does not guarantee that it will perform with the same accuracy on unknown design samples.

  1. (a)

    Goodness-of-fit metrics

Goodness-of-fit or how well the metamodel fits the training data is a common approach among the researchers to validate the accuracy of metamodels. The coefficient of determination (R2) is a statistic that provides some information about the goodness-of-fit of a model. Its value can be estimated using the following equation:

$$R^{2} \, = \,1\, - \,\frac{{SS_{R} }}{{SS_{T} }}$$
(6)

As shown in Kalita et al. [187], the inherent assumption of R2 is that all the model terms are made up of independent parameters and have an influence on the dependent parameter, which is not necessarily true. The R2adj corrects this presumption to a certain extent by penalizing the model when insignificant terms are added to the model.

$$R_{adj}^{2} \, = \,1\, - \,\frac{n\, - \,\,1}{{n\, - \,k\, - \,\,1}}(1\, - \,R^{2} )$$
(7)

where k is the number of variables. The R2pred goes a step further by constructing the model using all the data except the one that it predicts:

$$R_{pred}^{2} \, = \,1\, - \,\frac{{\sum\nolimits_{i = 1}^{n} {(y_{i} \, - \,\hat{y}_{i/i} )^{2} } }}{{\sum\nolimits_{i = 1}^{n} {(y_{i} \, - \,\overline{y} )^{2} } }}$$
(8)

where \(\hat{y}_{i/i}\) is the predicted value of the ith response calculated by the model when the ith sample point is left out of the training set. This corresponds to leave-one-out cross-validation.

  1. (b)

    External validation metrics

All three model accuracy metrics, i.e. R2, R2adj and R2pred, are based on the use or reuse of the training data. In Kalita et al. [187], the drawbacks of using R2-based metrics, and the importance of using independent testing data to make informed decisions regarding selection of the metamodels and their predictive power, are discussed.

Thus, additional external validation metrics, like Q2F1 [201], Q2F2 [202] and Q2F3 [203] may also be used. The three metrics can be expressed as follows:

$$Q_{F1}^{2} \, = \,1\, - \,\frac{{\sum\nolimits_{i = 1}^{{n_{{{\text{test}}}} }} {(\hat{y}_{i} \, - \,y_{i} )^{2} } }}{{\sum\nolimits_{i = 1}^{{n_{{{\text{test}}}} }} {(y_{i} \, - \,\overline{y}_{{{\text{train}}}} )^{2} } }}$$
(9)
$$Q_{F2}^{2} \, = \,1\, - \,\frac{{\sum\nolimits_{i = 1}^{{n_{{{\text{test}}}} }} {(\hat{y}_{i} \, - \,y_{i} )^{2} } }}{{\sum\nolimits_{i = 1}^{{n_{test} }} {(y_{i} \, - \,\overline{y}_{{{\text{test}}}} )^{2} } }}$$
(10)
$$Q_{F3}^{2} \, = \,1\, - \,\frac{{\sum\nolimits_{i = 1}^{{n_{test} }} {(\hat{y}_{i} \, - \,y_{i} )^{2} /n_{test} } }}{{\sum\nolimits_{i = 1}^{{n_{train} }} {(y_{i} \, - \,\overline{y}_{train} )^{2} /n_{train} } }}$$
(11)

Equations (9) and (10) differ only in the treatment of the mean term. In Eq. (9), Q2F1 employs the mean value of the training data, whereas, mean value of the testing data is used in the calculation of Q2F2. This implies that Q2F2 contains no information regarding the training set since only testing dataset is used. On the other hand, Q2F3 attempts to remove any bias introduced in the estimations due to sample size, by dividing the total squared residual sum by the number of test samples and dividing the total squared sum of training data by the number of training samples. Consonni et al. [203] recently highlighted certain drawbacks of Q2F1 and Q2F2 in describing the predictive power of metamodels.

  1. (c)

    Error metrics

The R2-based metrics only provide an estimate of how much variation in a particular dataset is explained by the model. They render no information regarding the precision of the models. Precision, which determines, e.g. whether a model predicts frequencies with a standard error of 1 Hz or 10 Hz, is of great practical relevance in appraising quality of a metamodel. Root-mean-squared error (RMSE) is the standard deviation of residuals from the model [204]. It can be calculated from the test data using the following expression:

$${\text{RMSE}}_{{{\text{test}}}} \, = \,\sqrt {\frac{{\sum\nolimits_{i = 1}^{{n_{{{\text{test}}}} }} {(y_{i} \, - \,\hat{y}{}_{i})^{2} } }}{{n_{{{\text{test}}}} }}}$$
(12)

To calculate RMSE for the training dataset, the errors in Eq. (12) are calculated for the training data and their squared sum is divided by ntrain. The RMSE can be a useful metric for identifying an appropriate metamodel, since a far superior metamodel is needed to obtain an RMSE of 1 Hz for a lay-up metamodel spanning the full (± 90°) range than for one covering a very small domain, say (± 10°). Since the residuals are squared in Eq. (12), a large residual for a particular sample point has a greater influence on RMSE than a sample point with a small residual in the same dataset. Thus, the RMSE calculation gives more weight to the few samples with higher prediction error. This explains why researchers often tend to leave out 5% outliers in an effort to make better interpretations of the model. Due to this imbalanced nature of the information provided by RMSE, a number of researchers have insisted on using the mean absolute error (MAE) [205]. The MAE provides an absolute measure of the prediction error of metamodels. It can be calculated for the test data using the following equation:

$${\text{MAE}}_{{{\text{test}}}} \, = \,\frac{{\sum\nolimits_{i = 1}^{{n_{{{\text{test}}}} }} {\left| {y_{i} \, - \,\hat{y}_{i} } \right|} }}{{n_{{{\text{test}}}} }}$$
(13)

A series of structural engineering test problems is solved in Kalita et al. [187] to identify the appropriate criteria for accepting or rejecting a metamodel. Additional insight into the predictive power of all these metrics is also included in Kalita et al. [187]. However, as stated by Chai and Draxler [206] “Every statistical measure condenses a large number of data into a single value […], any single metric provides only one projection of the model errors and, therefore, only emphasizes a certain aspect of the error characteristics. A combination of metrics […] is often required to assess model performance”.
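
The sketch below gathers the validation metrics discussed in this subsection into small helper functions: R2 computed from training residuals, plus RMSE, MAE and Q2F1 on an independent test set. The numerical values are arbitrary illustrations.

```python
# Hedged sketch of the metamodel validation metrics (R2, RMSE, MAE, Q2_F1).
import numpy as np

def r_squared(y, y_hat):
    ss_r = np.sum((y - y_hat) ** 2)              # sum of squared residuals, Eq. (4)
    ss_t = np.sum((y - np.mean(y)) ** 2)         # total sum of squares, Eq. (5)
    return 1.0 - ss_r / ss_t                     # Eq. (6)

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))    # Eq. (12)

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))            # Eq. (13)

def q2_f1(y_test, y_hat_test, y_train):
    # Q^2_F1 uses the mean of the *training* responses in the denominator, Eq. (9).
    num = np.sum((y_hat_test - y_test) ** 2)
    den = np.sum((y_test - np.mean(y_train)) ** 2)
    return 1.0 - num / den

# Arbitrary illustrative data: observed vs. predicted responses.
y_train = np.array([10.0, 12.0, 11.5, 13.0])
y_hat_train = np.array([10.2, 11.8, 11.6, 12.9])
y_test = np.array([11.0, 12.5])
y_hat_test = np.array([11.3, 12.1])

print("R2 (train):", r_squared(y_train, y_hat_train))
print("RMSE (test):", rmse(y_test, y_hat_test), "MAE (test):", mae(y_test, y_hat_test))
print("Q2_F1:", q2_f1(y_test, y_hat_test, y_train))
```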

3.2 Metamodel-Based Design Optimization

Any optimization algorithm can be coupled with metamodels to form the basic MBDO framework. Once a metamodel is identified, selection of the optimization algorithm becomes trivial because even a less efficient algorithm becomes easily affordable. However, superior optimization algorithms would still outperform the inefficient ones.

Wang and Shan [182] classified the MBDO strategies into three types (see Fig. 9). The first strategy is the traditional sequential approach, i.e. fitting a global metamodel and then using it as a surrogate of the expensive function. This approach employs a relatively large number of sample points at the outset and may or may not include a systematic model validation stage; cross-validation is usually applied for validation purposes. Its application is found in [189]. The second approach involves validation and/or optimization in the loop when deciding the re-sampling and re-modeling strategies. In [207], samples were generated iteratively to update the approximation so as to maintain the model accuracy. Osio and Amon [208] developed a multi-stage kriging strategy to sequentially update and improve the accuracy of surrogate approximations as additional sample points were obtained. Trust regions were also employed in developing several other methods to deal with approximation models in optimization [209]. Schonlau et al. [210] described a sequential algorithm to balance local and global searches using approximations during constrained optimization. Sasena et al. [211] applied kriging models to disconnected feasible regions. Modeling knowledge was also incorporated in the identification of attractive design spaces [212].

Fig. 9 Metamodel-based design optimization strategies: a sequential approach, b adaptive MBDO and c direct sampling approach [182]

Wang and Simpson [213] developed a series of adaptive sampling and metamodeling methods for optimization, where both optimization and validation were employed in forming the new sample set. The third approach is quite recent; it directly generates new sample points towards the optimum under the guidance of a metamodel [214]. Different from the first two approaches, the metamodel is not used here as a surrogate in a typical optimization process. The optimization is realized by adaptive sampling alone and no formal optimization process is required. The metamodel is used as a guide for adaptive sampling and therefore the demand for model accuracy is reduced. Its application to high-dimensional problems still needs to be explored. If a metamodel is used instead of the true model, the optimization problem stated in Eq. (1) becomes:

$${\text{Minimize}}/{\text{maximize}}\tilde{f}(x)$$
(14)

subject to the constraint \(x_{i}^{\min } \, \le \,x_{i} \, \le \,x_{i}^{\max }\) where the tilde symbol denotes the metamodel for the corresponding function in Eq. (1). Often a local optimizer is applied to Eq. (14) to derive the optimal solution. A few methods have also been developed for metamodel-based global optimization.

One successful development can be found in [210], where the authors applied the Bayesian method to estimate a kriging model, and subsequently identified points in the space to update the model and perform the optimization. The proposed method, however, has to pre-assume a continuous objective function and a correlation structure among the sample points. A Voronoi diagram-based metamodeling method was also proposed, where the approximation was gradually refined over smaller Voronoi regions so that the global optimum could be obtained [215]. Since the Voronoi diagram arises from computational geometry, the extension of this idea to problems with more than three variables may not be efficient. Global optimization based on multipoint approximation and intervals was performed in [216]. Metamodeling was also employed to improve the efficiency of GAs [217, 218]. Wang et al. [219, 220] developed an adaptive response surface method (ARSM) for global optimization. A so-called Mode-Pursuing Sampling (MPS) method was developed in [214], where no existing optimization algorithm was applied; the optimization was realized through an iterative discriminative sampling process. The MPS method demonstrated high efficiency for optimization with expensive functions on a number of benchmark tests and low-dimensional design problems.
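
A minimal sketch of the adaptive metamodel-based loop described above (Eq. (14) optimized on a surrogate, followed by infill of new true-model evaluations) is given below. The one-dimensional "expensive" function and the cubic polynomial metamodel are illustrative stand-ins, not the methods of the cited works.

```python
# Hedged sketch of an adaptive MBDO loop: fit a surrogate, optimize it,
# evaluate the true model at the predicted optimum, and refit.
import numpy as np
from scipy.optimize import minimize_scalar

def expensive(x):
    # Placeholder for a costly FEM response of one design variable.
    return (x - 0.7) ** 2 + 0.1 * np.sin(8.0 * x)

X = list(np.linspace(0.0, 1.0, 5))        # small initial sample
Y = [expensive(x) for x in X]

for it in range(5):
    coeffs = np.polyfit(X, Y, deg=3)                 # cubic metamodel f~(x)
    surrogate = np.poly1d(coeffs)
    res = minimize_scalar(surrogate, bounds=(0.0, 1.0), method="bounded")
    x_new = float(res.x)
    X.append(x_new)                                  # infill: evaluate the true model
    Y.append(expensive(x_new))                       # and enrich the sample

best_idx = int(np.argmin(Y))
print("best sampled design:", X[best_idx], "true response:", Y[best_idx])
```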

Recent approaches to solve multi-objective optimization problems with black-box functions either approximate each individual objective function or directly approximate the Pareto optimal frontier [221]. Wilson et al. [222] adopted surrogate approximations in lieu of the computationally expensive analyses to explore the multi-objective design space and identify the Pareto optimal points, or the Pareto set, from the surrogate. Li et al. [223] applied a hyper-ellipse surrogate to approximate the Pareto optimal frontier for bi-criteria convex optimization problems. If the approximation is not sufficiently accurate, the Pareto optimal frontier obtained using the surrogate approximation would not be a good approximation of the actual frontier. Yang et al. [224] proposed the first framework dealing with approximation models in multi-objective optimization (MOO). In that framework, a GA-based method was employed with a sequentially updated approximation model. It differed from [222] by updating the approximation model during the optimization process. The fidelity of the identified frontier solutions, however, would still depend on the accuracy of the approximation model. The work in [224] also suffered from the problems of GA-based MOO algorithms, i.e. the algorithm had difficulty in finding the frontier points near the extreme points (the minima obtained by considering only one objective function at a time). Shan and Wang [225] recently developed a sampling-based MOO method where metamodels were employed only as a guide; new sample points were generated towards or directly on the Pareto frontier.

In all the MBDO methods, which are often presented as a viable alternative to high-fidelity optimization, developing accurate and reliable metamodels is the basic goal. This is because, once a metamodel is used, the computation cost becomes inconsequential and thus even a less efficient metaheuristic search algorithm becomes affordable. The estimation power of the metamodel determines the effectiveness of the optimization task, because if the design space is not accurately modeled, the metaheuristic may locate a false global optimum.

3.3 Discussions

Considering the above facts, the literature on composite laminate metamodeling beyond optimization applications, like stochastic analysis, reliability analysis, damage identification etc., is also reviewed here to better understand the metamodeling process. However, unlike a conventional literature review, this review is reported in tabulated form (see Tables 4 and 5).

Table 4 Literature on application of metamodels in various structural analysis of composite laminates
Table 5 Literature on application of metamodels in optimization of laminates

Over the last few years, metamodels have gained immense popularity in structural analysis of laminates. The low computational requirement and the abundance of machine learning algorithms to choose from have been the prime motivators for researchers. As observed from Table 4, a significant number of metamodel-based studies have been carried out on uncertainty quantification (UQ). The micromechanical properties (like elastic modulus, shear modulus, Poisson’s ratio etc.) and the ply angles of the laminates have generally been considered as the sources of stochasticity. Most of the works have used the Latin hypercube method for sampling the training data. Almost all the works have relied on FEM and FSDT to simulate the necessary data for training the metamodels. However, it should be pointed out that the metamodels for UQ studies are generally local in nature, i.e. they are trained for only a small section of the possible design space of the parameters. Thus, in most cases, remarkable accuracy (error < 1%) of the metamodels has been achieved. A handful of works on damage detection, predictive modelling and reliability analysis are also available in the literature.

Table 5 summarizes the works on metamodel-based optimization of laminates. In most cases, response surface methodology (RSM), i.e. polynomial regression, has been employed by past researchers. Traditional DOEs, like CCD, BBD and D-optimal designs, have been used in those works. The accuracy of such metamodels, especially those considering ply angles as the design variables, is bound to be low, primarily due to the small training datasets and the insufficient sampling capacity of the traditional DOEs to accurately map the complex landscape. However, it should be noted that most of those studies have reported excellent accuracy on the training data. Further, in most of those RSM-based metamodeling studies, no independent testing data have been provided, which makes it difficult to accurately gauge the overall accuracy of those metamodels. Some recent studies have adopted neural networks for metamodel-based laminate optimization. GA has been the most popular optimizer for single-objective optimization studies. A few studies on Pareto optimization are also available, mostly dealing with the multi-objective GA technique.

3.4 Limitations

Selection of an appropriate metamodeling algorithm is a key step in any MBDO process. Many comparative studies have been made over the years to guide the selection of metamodel types, e.g. Dey et al. [200], Jin et al. [287], Clarke et al. [288], Kim et al. [289], Li et al. [290] and Shi et al. [291]. Despite this, it is not possible to draw any decisive conclusion regarding the all-purpose superiority of any of the metamodel types. In fact, efficiency and generalization of metamodels for each application is constrained due to the inherent assumptions and algorithms used [292].

However, as noticed from the literature survey, in structural engineering applications, LR, PR (RSM) and ANN are commonly employed in MBDO studies. LR is simple to perform and a number of ready-to-use software platforms are available to implement it, making it extremely popular. However, it is not useful for modelling non-linear data [293]. Similarly, PR, despite its simplicity and widespread applicability, is often restricted in the literature to second order [292]. It is seldom preferred for higher-order polynomials, as the adequacy of the model is solely determined by systematic bias in deterministic situations [293]. ANNs are particularly suitable for deterministic applications and can be quickly deployed once trained. However, ANNs have relatively longer training times than LR and PR, and suffer from improper training if suitable hyperparameters are not selected [294]. For all metamodels, a trade-off between the desired accuracy and the time available to develop the metamodel needs to be decided. Thus, there is clearly no universally superior metamodel. In fact, each metamodel has its own advantages and disadvantages, which, coupled with the size, complexity and level of non-linearity of the problem (or phenomenon to be modelled), can pose a serious decision-making question to the user regarding which algorithm to choose.

Since metamodels depend on high-fidelity data from physical experiments or simulation models, selection of suitable sampling points is a critical task [295]. If the training data used in metamodeling are skewed or do not adequately represent the true nature of the system or phenomenon to be modelled, the result will be bias and hence inaccurate predictions. In general, for metamodeling, space-filling sampling methods, such as Latin hypercube sampling and Hammersley sampling, are found to be better than classical designs of experiments, such as factorial design, Box-Behnken and CCD [296, 297]. Moreover, the economic cost of physical experiments or the computational expense of generating high-fidelity data also needs to be addressed [298].
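One way to make the space-filling argument tangible is to compare the centred L2 discrepancy (a standard uniformity measure, lower being better) of a Latin hypercube plan against a two-level full factorial plan of the same size, as in the minimal sketch below; the dimension and sample size are arbitrary choices for illustration.

```python
# Illustrative comparison of space-filling quality: centred L2 discrepancy
# (lower = more uniform coverage of the unit hypercube) for a Latin hypercube
# plan versus a two-level full factorial plan with the same number of points.
import itertools
import numpy as np
from scipy.stats import qmc

d = 4
factorial = np.array(list(itertools.product([0.0, 1.0], repeat=d)))   # 2^4 = 16 corner points
lhs = qmc.LatinHypercube(d=d, seed=0).random(n=len(factorial))        # 16 space-filling points

print("factorial CD:", round(qmc.discrepancy(factorial), 3))
print("LHS       CD:", round(qmc.discrepancy(lhs), 3))
```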

Another challenge of metamodels lies in their approximate nature, which introduces an added element of uncertainty into the analysis [293]. This problem is more pronounced in complex use cases, such as structural engineering, where the design space to be modelled is often vast and complex. Any optimization search conducted on an ill-fitted metamodel will lead to erroneous prediction of the optimal parameters.

The lack of generalizability of metamodels is a serious hindrance to their real-world applicability. Most metamodels offer excellent interpolation but lack extrapolation capability [298]. In addition, there are often several parameters that must be tuned when a metamodel is developed. This means that the results can differ considerably depending on how well those parameters are tuned and, consequently, on the approach adopted to develop the metamodel. The lack of interpretability of many machine learning-based metamodels is a further hindrance in MBDO [299].
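A quick synthetic demonstration of the interpolation-versus-extrapolation gap: a quadratic surrogate trained on ply angles within ±45° predicts well inside that window but degrades sharply outside it. The response function and ranges below are hypothetical and chosen only to illustrate the point.

```python
# Interpolation vs extrapolation for a quadratic surrogate on synthetic data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

f = lambda t: np.cos(2 * np.radians(t))                  # stand-in response
t_train = np.linspace(-45, 45, 40).reshape(-1, 1)        # training window only
model = make_pipeline(PolynomialFeatures(2),
                      LinearRegression()).fit(t_train, f(t_train).ravel())

t_in, t_out = np.array([[30.0]]), np.array([[80.0]])     # inside vs outside window
print("interpolation error :", abs(model.predict(t_in)[0] - f(30.0)))
print("extrapolation error :", abs(model.predict(t_out)[0] - f(80.0)))
```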

4 Conclusions

Optimizing composite structures to exploit their maximum potential is a realistic application with promising returns. In this paper, the majority of publications on the optimization of composite laminated structures are reviewed and compiled. Based on the application of optimization techniques, the reviewed research papers are primarily classified into high-fidelity optimization and metamodel-based optimization. While high-fidelity optimization is characterized by excellent accuracy of the numerical solutions but is generally time-consuming, metamodel-based optimization can be quickly deployed and is cost-efficient, although it sacrifices some numerical accuracy. Overall, from the comprehensive review of the literature, it can be concluded that:

  1. (a)

    FEM is by far the most popular numerical solver for modeling composite structures, primarily due to its ability to handle various complex geometries and boundary conditions. The liberty to choose from a plethora of elements with adjustable degrees of freedom according to the requirements also makes FEM extremely versatile.

  2. (b)

    FSDT is the most widely employed shear deformation theory in the optimization of laminated structures, as it is less complex than HSDT while offering comparable accuracy for thin and moderately thick plates.

  3. (c)

    Ply angle or stacking sequence is the most favored design variable for the custom design of laminates. This is perhaps because, for a given application, the other parameters, such as geometry, thickness and material, are hard-to-change variables, i.e. changing their values may require extensive design changes in the structure and associated components. Moreover, lay-up orientation optimization is an NP-hard problem and the ply angles range over ±90°, which makes the search space very large. Thus, most likely, any design methodology that succeeds in optimizing lay-up orientations should also succeed on material-as-design-variable and geometry-as-design-variable problems.

  4. (d)

    For high-fidelity design optimization, most of the pioneering works were carried out using gradient-based or mathematical direct search methods. However, subsequent research has mostly used metaheuristics (90% of them being GA) to find superior results and, in some cases, has exposed the shortcomings of gradient-based approaches in escaping local optima.

  5. (e)

    Metamodels for laminate modeling have become extremely popular over the last decade, with the majority of works concentrated in UQ and optimization. The computational cost of UQ-based studies involving multiple geometric, material and ply angle parameters is astronomical, and thus metamodels are the most promising option. However, the majority of UQ-based studies have employed very narrow design parameter ranges, making the metamodels local in scope, albeit extremely accurate.

This review suggests the following directions for future work:

  1. (a)

    In allied fields, several recent metaheuristics, such as GWO and WOA, have appeared to be more efficient than older-generation metaheuristics. High-fidelity optimization studies involving these metaheuristics may yield better results and savings in computational cost.

  2. (b)

    Despite their significant practical relevance, studies involving laminated structures with holes, discontinuities or cut-outs are non-existent. This may be due to the astronomical cost of high-fidelity optimization or the inability to build high-accuracy global metamodels when such discontinuities are considered. Work towards using machine learning techniques to develop global metamodels for such cases may lead to promising results.

  3. (c)

    Optimization of laminated structures under uncertainties has received limited attention. Probabilistic and non-probabilistic optimization studies on laminated plates and shells are the need of the hour.

  4. (d)

    Further research is also required on designing better sampling strategies that can more accurately represent the complexity of the design landscape in stacking sequence optimization problems.

  5. (e)

    Detailed research on the impact of assumptions made during metamodeling and on the effectiveness of hybrid and ensemble metamodels is also lacking in the literature. Owing to the curse of dimensionality, most machine learning-based metamodels are complex for high-dimensional problems and are still treated as black-box approaches. By integrating the designer's domain knowledge and leveraging insights derived from the mechanics of the problem, black-box MBDO problems can perhaps be transformed into grey-box or white-box problems.

In essence, while the high-fidelity design optimization methodology offers overwhelming accuracy, the metamodel-based design optimization methodology requires only trifling computational time. As such, it is difficult to recommend one approach over the other. The final decision lies with the design engineer who, after carefully considering the application and its possible ramifications, must answer the question: what is more important, accuracy or computational time?