Abstract
In this paper, the application of the bees algorithm (BA) to the finite element (FE) model updating of structures is investigated. BA is an optimization algorithm inspired by the natural foraging behavior of honeybees in finding food sources. The weighted sum of the squared error between the measured and computed modal parameters is used as the objective function. To demonstrate the effectiveness of the proposed method, BA is applied to a piping system to update several physical parameters of its FE model. To this end, the modal parameters of the numerical model are compared with the experimental ones obtained through modal testing. Moreover, to verify the performance of BA, it is compared with the genetic algorithm, particle swarm optimization and the inverse eigensensitivity method. Comparison of the results indicates that BA is a simple and robust approach that can be effectively applied to FE model updating problems.
1 Introduction
Finite element (FE) model updating, the process of reducing the difference between the FE and experimental models, can be classified as an inverse problem. Gradient-based techniques are widely used to update FE models of structures. Among them, the inverse eigensensitivity method (IEM) introduced by Collins et al. (1974) is popular in FE model updating. Bakir et al. (2007) developed a sensitivity-based FE model updating procedure using a trust region algorithm to detect, locate and quantify simulated damage in a frame-type structure. Jaishi and Ren (2007) used eigenfrequency and modal strain energy residuals to update FE models of a simulated simply supported beam and a precast continuous box girder bridge. Despite their advantages, gradient-based techniques have drawbacks in finding the global optimum. When the objective function has many local optima (e.g. highly nonlinear, non-smooth objective functions), it is difficult for these methods to find the global optimum, and the risk of the solution being trapped in a local optimum is high. Therefore, the application of global search techniques in FE model updating has gained popularity.
Algorithms based on random search are more effective in finding the global optimum. Evolutionary algorithms (EAs; Ashlock 2006) are random search techniques that use global search methods inspired by natural evolution. Among them, biology-inspired and swarm-based optimization algorithms are the most popular. These methods search from a population of solutions instead of relying on a single point. No derivatives are required, and the objective function may be non-differentiable. Moreover, their ability to find the global optimum makes them insensitive to the initial guess. However, they can be time-consuming in terms of the number of iterations and function evaluations.
The genetic algorithm (GA) introduced by Holland (1975) is one of the most popular biology-inspired EAs. Several studies have applied GA to FE model updating. Levin and Lieven (1998) compared the genetic algorithm and simulated annealing for FE model updating using simulated data. Marwala (2002) used the genetic algorithm and wavelet data to update the FE model of a beam. Dunn et al. (2005) used the genetic algorithm to create a more reliable FE model of a complete F/A-18 aircraft based on experimental data. Feng et al. (2006) proposed a combination of the genetic algorithm and simulated annealing to enhance the FE model updating procedure and applied this approach to two typical rotors.
Among the swarm-based EAs, ant colony optimization (ACO), initially proposed by Dorigo (1992), and particle swarm optimization (PSO), introduced by Kennedy and Eberhart (1995), are widely used to solve optimization problems. Marwala (2005) used PSO to update the FE models of a simple beam and an unsymmetrical H-shaped structure; the results were compared with those of the genetic algorithm and simulated annealing, and the comparison showed the superiority of PSO. Several modified versions of these approaches have also been proposed to improve the efficiency of the original algorithms (Wang et al. 2010; Zhan et al. 2009).
During the past few years, several swarm intelligence algorithms based on the behavior of bees have been presented (Wedde et al. 2004; Teodorovic and Dell'Orco 2005; Yang 2005; Drias et al. 2005; Pham et al. 2005; Karaboga and Basturk 2007). The bees algorithm (BA) is a population-based search algorithm introduced by Pham et al. (2005, 2006). The algorithm is based on the food foraging behavior of swarms of honeybees. It performs a neighborhood search combined with random search and can be used for engineering optimization. To the authors' best knowledge, the validity of BA in FE model updating has not yet been explored. Hence, this work investigates the application of BA to FE model updating. For this purpose, the FE model of a piping system is updated using BA. To verify the performance of this approach, the results are compared with those of GA, PSO and IEM.
The rest of the paper is organized as follows. Description of BA along with a brief introduction to GA, PSO and IEM are presented in Section 2. Section 3 presents details of the experimental study and the corresponding FE model of the piping system as well as the selection of the design parameters and objective function. Computational results are presented in Section 4, while conclusions are drawn in Section 5.
2 Theoretical background
2.1 Bees algorithm
Organized societies such as honeybee swarms exhibit social behaviors that can be exploited to solve optimization problems more efficiently. Each bee swarm has scout bees that randomly explore for food sources. When scout bees find food sources, they return to the hive and evaluate the different patches against a quality threshold. They then perform a special dance, called the waggle dance, to inform the other bees of the direction, distance and nectar amount of the food sources. Next, follower bees fly toward the identified food sources; more bees are sent to the food sources with higher nectar amounts. This enables the colony to gather food efficiently.
From N random solutions, the N1 solutions with higher fitness values are considered the best solutions. Among the best solutions, the N2 solutions with the highest fitness values are chosen as the elite ones. In the hope of finding better solutions, a neighborhood search is performed around the best and elite solutions; n1 and n2 (with n2 greater than n1) are the numbers of neighborhood searches around the best and elite solutions, respectively. The remaining N − N1 solutions are selected randomly from the search space. After each iteration, the new population consists of two parts: the representatives of the neighborhood searches, each having the best fitness value in its neighborhood, and the randomly selected solutions. The iterations continue until convergence occurs, yielding a set of optimum fitness values, the best of which is hoped to represent the global optimum. Since BA does not require gradient information, it can easily escape from local optima (Pham et al. 2005). Figure 1 shows the flowchart of BA. Note that the random solution \(\mathbf{x}_{\mathrm{rand}}\) is calculated by

\(\mathbf{x}_{\mathrm{rand}} = \mathbf{x}_{\min} + \boldsymbol{\alpha} \circ (\mathbf{x}_{\max} - \mathbf{x}_{\min})\)  (1)

where \(\boldsymbol{\alpha}\) is a random vector whose elements lie between 0 and 1, \(\circ\) denotes the element-wise product, and \(\mathbf{x}_{\min}\) and \(\mathbf{x}_{\max}\) are the lower and upper bounds of the solution vector, respectively.
The neighborhood search around each element of the solution vector (i.e. \(x_i\)) is performed using (2):

\(x_{p,i} = x_i + r\,(2\alpha_i - 1)\)  (2)

where \(\alpha_i\) is a random number between 0 and 1, and \(x_{p,i}\) is the ith element of the new solution vector obtained from a neighborhood search of radius r around \(x_i\).
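The search loop described above can be sketched in a few lines. The sketch below is illustrative only: the default control parameters and the fixed neighborhood radius `r` (taken as a fraction of the parameter range) are assumptions for the example, not the settings used in this study.

```python
import numpy as np

def bees_algorithm(f, x_min, x_max, N=30, N1=10, N2=4, n1=2, n2=4,
                   r=0.1, iterations=15, rng=None):
    """Minimize f over the box [x_min, x_max] with a basic bees algorithm:
    N scout bees, N1 best sites (the first N2 of them elite), n1/n2
    recruited bees per best/elite site, neighborhood radius r (as a
    fraction of the parameter range)."""
    rng = np.random.default_rng(rng)
    x_min = np.asarray(x_min, float)
    x_max = np.asarray(x_max, float)
    span = x_max - x_min
    # initial scouts: x_rand = x_min + alpha * (x_max - x_min), eq. (1)
    pop = x_min + rng.random((N, x_min.size)) * span
    for _ in range(iterations):
        # rank the population, best (lowest objective) first
        pop = pop[np.argsort([f(x) for x in pop])]
        new_pop = []
        for i in range(N1):
            n_rec = n2 if i < N2 else n1          # elite sites get more bees
            # neighborhood search, eq. (2): perturb within +/- r * span
            patch = pop[i] + r * span * (2.0 * rng.random((n_rec, x_min.size)) - 1.0)
            patch = np.clip(patch, x_min, x_max)
            site = np.vstack([pop[i], patch])
            new_pop.append(min(site, key=f))      # fittest bee of each site
        # the remaining N - N1 bees scout randomly
        scouts = x_min + rng.random((N - N1, x_min.size)) * span
        pop = np.vstack([np.array(new_pop), scouts])
    return min(pop, key=f)
```

For instance, `bees_algorithm(lambda x: float(np.sum(x**2)), [-5, -5], [5, 5])` returns a point near the origin; in an updating context, `f` would be the modal-residual objective evaluated by an FE solve.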
2.2 Genetic algorithm
The genetic algorithm is one of the most popular evolutionary algorithms, developed from the principles of genetics and natural selection (Holland 1975). In its standard version, the algorithm acts on a population of individuals, each representing a possible solution to the problem. Each individual, generated randomly, has a fitness value. Using selection operators, a breeding population is created from the current population; the individuals with better fitness values are selected for breeding. Then, by applying the reproduction operators, crossover and mutation, a new generation is produced.
In this study, uniform stochastic, Gaussian and scattered functions are used as the selection, mutation and crossover functions, respectively. By utilizing the best individuals from the previous generation, the new generation attains a better set of fitness values. Generations are produced until the population converges and the desired fitness, hopefully the global optimum, is obtained. The flowchart of GA is shown in Fig. 2. The algorithm has two representation schemes: binary digits and real-valued numbers. Since real-valued GA has been shown to produce more consistent results across replications (Michalewicz 1974), it is used for comparison purposes in this work.
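A minimal real-coded GA with the scattered (uniform-mask) crossover and Gaussian mutation described above can be sketched as follows. Note the simplifications: truncation selection stands in for the paper's uniform stochastic selection, and all parameter values are illustrative assumptions.

```python
import numpy as np

def real_ga(f, x_min, x_max, pop_size=30, gens=50, p_mut=0.1,
            sigma=0.1, rng=None):
    """Minimize f with a real-valued GA using scattered (uniform-mask)
    crossover and Gaussian mutation; truncation selection replaces the
    paper's uniform stochastic selection for brevity."""
    rng = np.random.default_rng(rng)
    lo = np.asarray(x_min, float)
    hi = np.asarray(x_max, float)
    pop = lo + rng.random((pop_size, lo.size)) * (hi - lo)
    for _ in range(gens):
        fitness = np.array([f(x) for x in pop])
        parents = pop[np.argsort(fitness)[:pop_size // 2]]  # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(lo.size) < 0.5                # scattered crossover
            child = np.where(mask, a, b)
            mutate = rng.random(lo.size) < p_mut            # Gaussian mutation
            child = child + mutate * rng.normal(0.0, sigma * (hi - lo))
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, np.array(children)])
    return min(pop, key=f)
```

Keeping the parents in the next generation makes the scheme elitist, so the best fitness value never deteriorates between generations.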
2.3 Particle swarm optimization
Particle swarm optimization is an evolutionary algorithm designed and developed by Kennedy and Eberhart (1995). PSO is inspired by social behaviors such as fish schooling and bird flocking. Members of a swarm communicate good positions to each other and adjust their own positions and velocities accordingly. The best previous position of each member and the position of the best member of the whole swarm are recorded and used to compute the new velocities (i.e. the rates of position change) and positions. Figure 3 shows the flowchart of PSO. Newer versions of PSO use several parameters, including an inertia weight, to improve the performance of the original algorithm. Although there are only a few parameters to adjust, their selection appears to be problem-dependent (Shi and Eberhart 1998). In this study, after some trials, the inertia weight was set to 0.2.
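The velocity and position updates described above can be sketched as follows. The inertia weight w = 0.2 follows the value tuned in this study, while the acceleration coefficients c1 and c2 are conventional values assumed for the example.

```python
import numpy as np

def pso(f, x_min, x_max, n_particles=30, iterations=50,
        w=0.2, c1=1.5, c2=1.5, rng=None):
    """Minimize f with a basic inertia-weight PSO; w = 0.2 follows the
    paper, while c1 and c2 are conventional values assumed here."""
    rng = np.random.default_rng(rng)
    lo = np.asarray(x_min, float)
    hi = np.asarray(x_max, float)
    x = lo + rng.random((n_particles, lo.size)) * (hi - lo)
    v = np.zeros_like(x)
    pbest = x.copy()                              # best position of each particle
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()      # best position of the swarm
    for _ in range(iterations):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        # velocity update: inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest
```

The cognitive term pulls each particle toward its own best position and the social term toward the swarm's best, which is exactly the "communicating good positions" behavior described above.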
2.4 Inverse eigensensitivity method
Collins et al. (1974) proposed the inverse eigensensitivity method for FE model updating. To minimize the sum of the squared modal residuals, which is a nonlinear least-squares problem, IEM approximates the nonlinear problem by a linear one and then uses sensitivity information to find the solution iteratively (see Fig. 4 for the flowchart of IEM).
One drawback of this approach is that its convergence depends strongly on the initial model. In other words, if there is a large difference between the initial FE model and the experimental model, IEM is prone to divergence. Moreover, as mentioned before, the solution may be trapped in a local optimum. Nevertheless, as a popular traditional FE model updating method, it is used for comparison in this work. Note that several trials were made to find an initial guess that guarantees convergence.
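The linearize-and-solve iteration behind IEM can be sketched generically. This is only a Gauss-Newton-style illustration: the sensitivity matrix is estimated here by finite differences, whereas the eigensensitivity method proper works with sensitivities of the modal parameters themselves.

```python
import numpy as np

def iem_update(residual, p0, steps=10, delta=1e-6, tol=1e-10):
    """Iteratively update parameters p by linearizing the modal residual
    pi(p) as S dp = -pi, where S is the sensitivity (Jacobian) matrix,
    and applying the least-squares solution dp. S is estimated by finite
    differences in this sketch."""
    p = np.asarray(p0, float).copy()
    for _ in range(steps):
        r = np.asarray(residual(p), float)
        S = np.empty((r.size, p.size))          # sensitivity matrix d(pi)/dp
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = delta
            S[:, j] = (np.asarray(residual(p + dp), float) - r) / delta
        # least-squares solution of the linearized subproblem
        step, *_ = np.linalg.lstsq(S, -r, rcond=None)
        p += step
        if np.linalg.norm(step) < tol:          # converged
            break
    return p
```

Because each step solves only a linearized subproblem, the iteration inherits the sensitivity to the starting point discussed above: a poor initial model can send the updates in the wrong direction.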
3 Model updating of the piping system
3.1 Experimental study
An experimental test on a piping system was performed to investigate the effectiveness of BA in FE model updating. The piping system was assembled from straight pipes connected by elbows, as shown in Fig. 5a. Four supports held the piping system in place. Modal testing was performed to extract the experimental model of the system required in the updating process; out-of-plane vibration of the system was considered in this work. A hammer (Impulse Hammer AU02 2009) was used to excite the system, and an accelerometer (Piezo-tronic Voltage Source Accelerometer 2009) along with a signal analyzer (B&K type 3032A) and the Pulse LabShop software (Pulse Analyzer Platform 2009) were used to capture the frequency response functions (FRFs) of the system. Then, using the ICATS software (Imperial College 2000), the first six experimental modes of the system (i.e. natural frequencies, mode shapes and damping ratios) were obtained from the FRF data by the global curve fitting method. A typical measured FRF of the system is shown in Fig. 6.
3.2 FE model updating of the piping system
The weighted sum of the squared error between the measured and computed modal parameters (i.e. natural frequencies, mode shapes and damping ratios) is considered as the objective function. The design parameters are physical properties of the structure that need to be modified to bring the dynamic response of the FE model closer to the measured one. The selection of design parameters is of great importance: only parameters to which the response is sensitive are selected.
As shown in Fig. 5b, the FE model of the piping system is mainly composed of two-noded straight and curved pipe elements. Nine physical parameters of the FE model are considered as the design parameters: Young's modulus E, density ρ, the concentrated mass at the joints M_c, the four vertical stiffnesses of the supports K_1, K_2, K_3 and K_4, and the two proportional damping constants β and γ, which relate the damping matrix to the mass and stiffness matrices. To prevent numerical problems, dimensionless parameters are used. The goal is to minimize the objective function, which is given by
\(J = \sum_{i} w_i\, \pi_i^2\)  (3)

where \(w_i\) is the ith weighting factor. Since the mode shapes obtained from modal testing are usually less accurate than the natural frequencies, different weighting factors can be used to reflect the measurement accuracy and the relative importance of the modal parameters; however, all the weighting factors are assumed to be unity in this work. The vector of modal parameter residuals, \(\boldsymbol{\pi}\), can be written as

\(\boldsymbol{\pi} = \left\{ \dfrac{\boldsymbol{\omega}_X - \boldsymbol{\omega}_A}{\boldsymbol{\omega}_X};\;\; \boldsymbol{\varphi}_X^{i} - \boldsymbol{\varphi}_A^{i}\ (i = 1,\dots,m);\;\; \dfrac{\boldsymbol{\eta}_X - \boldsymbol{\eta}_A}{\boldsymbol{\eta}_X} \right\}\)  (4)

where the divisions are element-wise, subscripts X and A refer to the experimental and analytical values, respectively, \(\boldsymbol{\omega}\) and \(\boldsymbol{\eta}\) are the vectors of natural frequencies and damping ratios, and \(\boldsymbol{\varphi}^{i}\) is the ith mode shape, normalized with respect to one of the degrees of freedom. m and n are the numbers of experimental modes and experimental degrees of freedom, respectively.
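The objective can be assembled directly from the measured and computed modal data. The sketch below uses relative errors for frequencies and damping ratios and direct differences for the mode shapes; the exact residual normalization used in the paper may differ.

```python
import numpy as np

def modal_objective(freq_x, freq_a, shapes_x, shapes_a, damp_x, damp_a,
                    weights=None):
    """Weighted sum of squared modal residuals: relative errors for
    natural frequencies and damping ratios, direct differences for the
    (commonly normalized) mode shapes. Subscripts x and a denote the
    experimental and analytical values."""
    residuals = np.concatenate([
        (freq_x - freq_a) / freq_x,      # natural frequency residuals
        (damp_x - damp_a) / damp_x,      # damping ratio residuals
        (shapes_x - shapes_a).ravel(),   # mode shape residuals (n DOFs x m modes)
    ])
    w = np.ones_like(residuals) if weights is None else np.asarray(weights, float)
    return float(np.sum(w * residuals ** 2))
```

With all weights equal to one, as assumed in this work, the function reduces to the plain sum of squared residuals; passing a `weights` vector lets the frequency residuals be trusted more than the mode-shape residuals.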
4 Results and discussion
Since BA has several tunable parameters, their effects were investigated by conducting a small number of trials (Pham et al. 2006). The solution converged after 15 iterations; therefore, 15 was used as the number of iterations in this work. Moreover, to account for the stochastic nature of the method, the results reported here are averages over 10 runs.
By varying the numbers of scout and recruited bees as well as the numbers of best and elite sites, four sets of control parameters were selected (see Table 1 for details). The updated values of the design parameters corresponding to each set are listed in Table 2. Among the four sets, the lowest value of the objective function is obtained with set D; the corresponding FE model therefore agrees best with the experimental model. To verify the performance of BA, the results were compared with those of PSO, GA and IEM. The updated values of the design parameters obtained from these methods are tabulated in Table 3. Note that the same settings as in sets A to D were applied in PSO and GA (i.e. the same population size, number of function evaluations and number of averaged runs).
As can be seen from Tables 2 and 3, BA results in lower objective function values than GA and IEM; however, it has no superiority over PSO in terms of accuracy. Moreover, as shown in Fig. 7, BA converges faster, reaching the optimal solution in fewer iterations than PSO and GA.
To better evaluate the solution quality of the stochastic methods, an indicator called the practical reliability is defined as the probability of the solution reaching a practical optimum, where a practical optimum is an optimum solution within 0.1% of the global optimum (Kogiso et al. 1994). Another indicator, the normalized price, is defined as the average number of function evaluations divided by the practical reliability. In this study, the best solution found after a certain number of function evaluations is considered the global optimum; here, the best fitness value after 10,000 function evaluations was 0.055, so this value is taken as the global optimum. Table 4 presents the practical reliabilities and normalized prices of the BA, PSO and GA results. For example, if 9 practical solutions are obtained in 10 runs, the practical reliability is 0.9. The practical reliabilities of the BA results are higher than those of GA and PSO, except for sets A and C, which are similar to the PSO results; the same conclusion holds for the normalized prices. The highest practical reliability and the lowest normalized price belong to BA with set D.
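The two indicators are simple to compute from the per-run best objective values; the following sketch implements the definitions above.

```python
def practical_reliability(run_bests, global_opt, tol=0.001):
    """Fraction of runs whose best objective value lies within tol
    (0.1 % by default) of the global optimum."""
    hits = sum(abs(f - global_opt) <= tol * abs(global_opt) for f in run_bests)
    return hits / len(run_bests)

def normalized_price(avg_evaluations, reliability):
    """Average number of function evaluations divided by the practical
    reliability; lower is better, infinite if no run was practical."""
    return float('inf') if reliability == 0 else avg_evaluations / reliability
```

For instance, 9 practical solutions in 10 runs give a practical reliability of 0.9, and an algorithm averaging 1,500 evaluations then has a normalized price of 1500/0.9 ≈ 1,667.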
The natural frequencies and damping ratios of the best updated FE model (i.e. with set D of the control parameters) obtained from BA, PSO, GA and IEM are compared with the experimental ones in Tables 5 and 6. Note that the IEM results are based on good initial guesses obtained after several trials.
Moreover, the correlation between the mode shapes of the experimental and updated FE models is investigated using MAC values (Ewins 2000):

\(\mathrm{MAC}(\boldsymbol{\varphi}_X^{i}, \boldsymbol{\varphi}_A^{j}) = \dfrac{\left|(\boldsymbol{\varphi}_X^{i})^{T}\,\boldsymbol{\varphi}_A^{j}\right|^{2}}{\left((\boldsymbol{\varphi}_X^{i})^{T}\,\boldsymbol{\varphi}_X^{i}\right)\left((\boldsymbol{\varphi}_A^{j})^{T}\,\boldsymbol{\varphi}_A^{j}\right)}\)  (5)
The results are tabulated in Table 7. All the MAC values are greater than 0.7, which indicates well-correlated mode pairs (Saarenheimo et al. 2003). Moreover, the MAC values obtained from BA are slightly better than those of the other algorithms.
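For a pair of mode-shape vectors, the MAC definition above reduces to a one-line computation (real-valued shapes assumed here; complex shapes would use the conjugate transpose):

```python
import numpy as np

def mac(phi_x, phi_a):
    """Modal assurance criterion between two real-valued mode-shape
    vectors: 1 means perfectly correlated, 0 means orthogonal."""
    numerator = abs(phi_x @ phi_a) ** 2
    return numerator / ((phi_x @ phi_x) * (phi_a @ phi_a))
```

Because the numerator and denominator scale identically, MAC is insensitive to how each mode shape is normalized, which is why it is a convenient correlation measure between experimental and updated FE mode shapes.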
5 Conclusions
The application of the bees algorithm to the finite element model updating of structures was investigated. The method follows a simple methodology patterned on the foraging behavior of honeybees. BA does not require gradient information and can escape from local optima. Moreover, its convergence does not depend on the initial model, which is randomly selected within the algorithm. The FE model of a piping system was updated to investigate the effectiveness of the proposed approach. The results showed that BA successfully updated the FE model of the system and brought it closer to the experimental model. The performance of BA was examined by comparing its results with those of PSO, GA and IEM. Although BA produced more accurate results than GA and IEM, its accuracy was similar to that of PSO. Moreover, the practical reliabilities and normalized prices obtained from BA were consistently better than those of GA and partly better than those of PSO. The study showed that the proposed approach is robust, accurate and easy to implement, and hence could be effectively applied to FE model updating problems.
References
Ashlock D (2006) Evolutionary computation for modeling and optimization. Springer, New York
Bakir PG, Reynders E, Roeck GD (2007) Sensitivity-based finite element model updating using constrained optimization with a trust region algorithm. J Sound Vib 305(1–2):211–225
Collins JD, Hart GC, Hasselman TK, Kennedy B (1974) Statistical identification of structures. AIAA J 12(2):185–190
Dorigo M (1992) Optimization, learning and natural algorithms. Dissertation, Politecnico di Milano, Italy
Drias H, Sadeg S, Yahi S (2005) Cooperative bees swarm for solving the maximum weighted satisfiability problem. In: Proceeding of the 8th international work conference on artificial and natural neural networks. Lecture notes in computer science, vol 3512. Springer, Berlin, pp 318–325
Dunn S, Peucker D, Perry J (2005) Genetic algorithm optimisation of mathematical models using distributed computing. Appl Intell 23(1):21–32
Ewins DJ (2000) Modal testing: theory, practice and application, 2nd edn. Research Studies Press Ltd, Hertfordshire, England
Feng FZ, Kim YH, Yang BS (2006) Applications of hybrid optimization techniques for model updating of rotor shafts. Struct Multidisc Optim 32(1):65–75
Holland JH (1975) Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor
Imperial College (2000) ICATS manual. Imperial College, London
Impulse Hammer AU02 (2009) GlobalTest Company. http://www.globaltest.ru/eng/udmol_au01-02_en.htm. Accessed 1 Jul 2009
Jaishi B, Ren WX (2007) Finite element model updating based on eigenvalue and strain energy residuals using multiobjective optimisation technique. Mech Syst Signal Process 21(5):2295–2317
Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Glob Optim 39(3):459–471
Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, vol 4, pp 1942–1948. Perth, Australia
Kogiso N, Watson LT, Gurdal Z, Haftka RT (1994) Genetic algorithms with local improvement for composite laminate design. Struct Optim 7(4):207–218
Levin RI, Lieven NAJ (1998) Dynamic finite element model updating using simulated annealing and genetic algorithms. Mech Syst Signal Process 12(1):91–120
Marwala T (2002) Finite element model updating using wavelet data and genetic algorithm. J Aircraft 39(4):709–711
Marwala T (2005) Finite element model updating using particle swarm optimization. Int J Eng Simul 6(2):25–30
Michalewicz Z (1974) Genetic algorithms + data structures = evolution programs. AI series. Springer, New York
Pham DT, Ghanbarzadeh A, Koc E, Otri S, Rahim S, Zaidi M (2005) The bees algorithm. Technical note, Manufacturing Engineering Centre, Cardiff University, UK
Pham DT, Ghanbarzadeh A, Koc E, Otri S, Rahim S, Zaidi M (2006) The bees algorithm—a novel tool for complex optimisation problems. In: Proceeding of the 2nd international virtual conference on intelligent production machines and systems. Elsevier, Oxford
Piezo-tronic Voltage Source Accelerometer (2009) DJB instruments. http://www.djb-instruments.com/DJB_UK/pdf/a120a.pdf. Accessed 1 July 2009
Pulse Analyzer Platform (2009) B&K company. http://www.bksv.com/Products/PULSEAnalyzerPlatform.aspx. Accessed 1 July 2009
Saarenheimo A, Haapaniemi H, Luukkanen P, Nurkkala P, Rostedt J (2003) Testing for modal analysis of a feed water pipeline. In: Transaction of the 17th international conference on structural mechanics in reactor technology. Prague, Czech Republic
Shi Y, Eberhart RC (1998) Parameter selection in particle swarm optimization. In: Porto VW, Saravanan N, Waagen DE, Eiben AE (eds) The 7th annual conference on evolutionary programming. Lecture notes in computer science, vol 1447. Springer, New York, pp 591–600
Teodorovic D, Dell’Orco M (2005) Bee colony optimization—a cooperative learning approach to complex transportation problems. In: Jaszkiewicz A, Kaczmarek M, Zak J (eds) Advanced OR and AI methods in transportation. Publishing House of Poznan University of Technology, Poznan, pp 51–60
Wang W, Shijun G, Nan C, Feng Z, Wei Y (2010) A modified ant colony algorithm for the stacking sequence optimisation of a rectangular laminate. Struct Multidisc Optim. doi:10.1007/s00158-009-0447-4
Wedde HF, Farooq M, Zhang Y (2004) BeeHive: an efficient fault-tolerant routing algorithm inspired by honey bee behavior. In: Dorigo M (ed) Ant colony optimization and swarm intelligence. Lecture notes in computer science, vol 3172. Springer, Berlin, pp 83–94
Yang XS (2005) Engineering optimizations via nature-inspired virtual bee algorithms. In: Yang JM, Alvarez JR (eds) International work conference on artificial and natural neural networks. Lecture notes in computer science, vol 3562. Springer, Berlin, pp 317–323
Zhan Z, Zhang J, Li Y, Chung H (2009) Adaptive particle swarm optimization. IEEE Trans Syst Man Cybern, Part B, Cybern 39(6):1362–1381
Moradi, S., Fatahi, L. & Razi, P. Finite element model updating using bees algorithm. Struct Multidisc Optim 42, 283–291 (2010). https://doi.org/10.1007/s00158-010-0492-z