1 Introduction

Cosmological theory is an exciting subject because it describes how the universe began, how it evolves, and how it may end. One of its most fascinating questions is where all of the stars and galaxies came from (and how, and why). This question has long been explored by physicists. For example, two famous physicists, Sir Isaac Newton and Albert Einstein, believed that the universe is unchanging; Einstein even introduced a term, called the cosmological constant, to keep his equations consistent with a static universe. However, this would prove to be a mistake. In 1929, the astronomer Edwin Hubble discovered that the universe is expanding. Running that expansion backwards implies that in the early universe gravity was very strong, as a result of the concentration of all matter in a very small space, so small, in fact, that it was compressed down to a single point. From this state of incredible pressure the universe has expanded ever since, an event known as the big bang. The theory remained controversial until 1965, when an accidental discovery (the cosmic microwave background radiation) supported it. Today, the most advanced astronomical observations indicate that the big bang theory is likely true.

Scientists were originally very uncomfortable with the big bang theory, because they believed in an eternal universe (i.e., a universe that does not change over time). Soon, however, they turned to another question: what is the ultimate fate of the universe? One popular idea was that the universe would expand until gravity began to pull it back, resulting in a big crunch, in which all matter returned to a single unified point, after which the cycle of expansion would start all over again. This hypothesis is known as the closed universe. What happens after that? We cannot tell exactly for now.

1.1 Big Bang

Literally every bit of matter and energy in our universe was created through a singular event, an unimaginable crucible of heat and light, that we call the big bang (Bauer and Westfall 2011). Just for the record, it was neither big (in fact, it was very small, small enough to fit on the head of a pin), nor was there a bang. This event happened \( \left( {13.73 \pm 0.12} \right) \times 10^{9} \) years ago. Although the theory is not perfect, physicists have made efforts over time to make it more consistent. One thing is certain: the universe was well on its way to becoming what we observe today, filled with galaxies, stars, planets, and all other sorts of strange and exotic things.

1.2 Big Crunch

One hypothesis about the future of the universe is called the big crunch model, in which the universe contracts back into a point of mass. This would proceed almost exactly like the big bang, except in reverse. Whether the expansion of the universe will continue forever or will stop some day depends on the quantity of matter the universe contains (Scalzi 2008).

2 Big Bang–Big Crunch Algorithm

2.1 Fundamentals of the Big Bang–Big Crunch Algorithm

Inspired by one of the cosmological theories, namely that the universe was “born” in the big bang and might “die” in the big crunch, Erol and Eksin (2006) proposed a new algorithm, the big bang–big crunch (BB–BC) algorithm. In general, the proposed algorithm includes two successive phases. In the big bang phase (corresponding to the disorder caused by energy dissipation in nature), random points are generated; in the big crunch phase (corresponding to the order due to gravitational attraction), those points are contracted to a single representative point via a centre-of-mass or minimal-cost approach.

2.1.1 Big Bang Phase

Just like the expansion of the universe, the main purpose of the big bang phase in BB–BC is to create the initial population. The initial position of each candidate is generated randomly over the entire search space. Once the population pool is created, the fitness values of the individuals are calculated (Genç et al. 2010).

2.1.2 Big Crunch Phase

If enough mass and energy (i.e., inputs) exist in the universe (i.e., the search space), then their mutual attraction may halt the expansion of the universe and reverse it, bringing the entire universe back to a single point. Similarly, in the big crunch phase, a contraction procedure is applied to form a centre, or representative point, for further big bang operations. In other words, the big crunch phase is a convergence operator that has many inputs but only one output, which is called the “centre of mass”. Here, the term mass refers to the inverse of the fitness function value (\( f^{i} \)). The centre of mass, denoted by \( \vec{x}^{c} \), is calculated according to Eq. 18.1 (Erol and Eksin 2006):

$$ \vec{x}^{c} = \frac{{\sum\nolimits_{i = 1}^{N} {\frac{1}{{f^{i} }}\vec{x}^{i} } }}{{\sum\nolimits_{i = 1}^{N} {\frac{1}{{f^{i} }}} }}, $$
(18.1)

where \( \vec{x}^{i} \) is a point generated within the n-dimensional search space, \( f^{i} \) is the fitness function value of this point (such as a cost function), and N is the population size in the big bang phase.

Instead of the centre of mass, the best-fit individual (i.e., the one with the lowest \( f^{i} \) value) can also be chosen as the starting point for the next big bang phase.
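
As an illustration, Eq. 18.1 and the best-fit alternative can be sketched in a few lines of NumPy. This is a minimal sketch for this presentation, assuming minimization and strictly positive fitness values; it is not code from Erol and Eksin (2006):

```python
import numpy as np

def centre_of_mass(population, fitness):
    """Eq. 18.1: fitness-weighted centroid of the population.

    population : (N, n) array of points in the n-dimensional search space.
    fitness    : (N,) array of fitness values f^i (assumed positive;
                 lower is better, so each point's "mass" is 1 / f^i).
    """
    weights = 1.0 / fitness
    return weights @ population / weights.sum()

def best_fit_point(population, fitness):
    """Alternative mentioned above: use the lowest-f individual instead."""
    return population[np.argmin(fitness)]
```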

The new generation for the next iteration in the big bang phase is normally distributed around the centre of mass, using Eq. 18.2 (Erol and Eksin 2006):

$$ x^{new} = x^{c} + \frac{lr}{k}, $$
(18.2)

where \( x^{c} \) stands for the centre of mass, l is the upper limit of the parameter, r [or N(0, 1)] is a random number drawn from a standard normal distribution with mean (\( \mu \)) zero and standard deviation (\( \sigma \)) equal to one, and \( k \) is the iteration step. The new point (\( x^{new} \)) is then clipped to the upper and lower bounds.

The steps of the standard BB–BC algorithm can be summarized as follows (Erol and Eksin 2006); a code sketch is given after the list:

  • Step 1: An initial population of \( N \) candidate solutions is randomly generated over the entire search space.

  • Step 2: The fitness function value \( \left( {f^{i} } \right) \) corresponding to each candidate solution is calculated.

  • Step 3: The \( N \) candidate solutions are contracted into the centre of mass \( \left( {x^{c} } \right) \), either by using Eq. 18.1 or by choosing the point with the lowest fitness value found in Step 2.

  • Step 4: A new population of solutions is generated around \( x^{c} \) by adding or subtracting a random number whose magnitude decreases as the number of elapsed iterations increases.

  • Step 5: If a specified termination criterion (e.g., the maximum number of iterations) is satisfied, stop and return the best solution; otherwise, go to Step 2 and begin anew.
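
Putting Steps 1–5 together, a minimal Python sketch of the standard BB–BC loop might look as follows. It is an illustration under stated assumptions (minimization, strictly positive costs, a fixed iteration budget, and the parameter range standing in for the spread l of Eq. 18.2), not the authors' reference implementation:

```python
import numpy as np

def bb_bc(objective, x_min, x_max, pop_size=50, max_iter=200, rng=None):
    """Minimal sketch of standard BB-BC (Erol and Eksin 2006), Steps 1-5."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x_min)
    # Step 1: big bang -- random initial population over the whole space.
    pop = rng.uniform(x_min, x_max, size=(pop_size, n))
    best_x, best_f = None, np.inf
    for k in range(1, max_iter + 1):
        # Step 2: evaluate the fitness of every candidate.
        fit = np.array([objective(x) for x in pop])
        if fit.min() < best_f:
            best_f, best_x = float(fit.min()), pop[np.argmin(fit)].copy()
        # Step 3: big crunch -- contract to the centre of mass (Eq. 18.1).
        w = 1.0 / (fit + 1e-12)          # small constant guards against f = 0
        xc = w @ pop / w.sum()
        # Step 4: respawn around xc with a spread shrinking as 1/k (Eq. 18.2);
        # here the parameter range (x_max - x_min) stands in for the limit l.
        noise = (x_max - x_min) * rng.standard_normal((pop_size, n)) / k
        pop = np.clip(xc + noise, x_min, x_max)
    # Step 5: fixed iteration budget reached; return the best solution seen.
    return best_x, best_f

# Usage: minimize the 5-dimensional sphere function, one of the six
# benchmark functions mentioned in the next subsection.
x_best, f_best = bb_bc(lambda x: float(np.sum(x * x)),
                       np.full(5, -10.0), np.full(5, 10.0))
```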

2.2 Performance of BB–BC

To evaluate the performance of the BB–BC algorithm, Erol and Eksin (2006) used six test functions, namely the Sphere, Rosenbrock, Step, Ellipsoid, Rastrigin, and Ackley functions. Compared with the combat genetic algorithm (CGA), the BB–BC algorithm presented better results in finding the global best solution.

2.3 Selected BB–BC Variants

Although the BB–BC algorithm is a relatively new member of the computational intelligence (CI) family, a number of BB–BC variants have already been proposed in the literature for the purpose of further improving its performance. This section gives an overview of some of these BB–BC variants, which have been demonstrated to be very efficient and robust.

2.3.1 Hybrid BB–BC Algorithm

In Kaveh and Talatahari (2009, 2010b), the authors developed one of the first BB–BC hybrids, called the hybrid BB–BC (HBB–BC). Overall, HBB–BC introduced two improvements: using particle swarm optimization (PSO) capacities to improve the exploration ability of the BB–BC algorithm, and using a sub-optimization mechanism (SOM) to update the search space of the BB–BC algorithm. Compared with the standard BB–BC and other conventional CI optimization methods such as the genetic algorithm (GA), ant colony optimization (ACO), PSO, and harmony search (HS), HBB–BC performed better.

In general, there are also two phases involved in HBB–BC: a big bang phase, where candidate solutions are randomly distributed over the search space, and a big crunch phase, working as a convergence operator, where the centre of mass is generated. Compared with the standard BB–BC algorithm, the main difference is that HBB–BC employs PSO capacities to improve the exploration ability. Kaveh and Talatahari (2009) pointed out that PSO was selected as the first reformation because, at each iteration, a particle moves towards both its local best (i.e., a direction computed from its own best visited position) and the global best (i.e., the best visited position of all particles in its neighbourhood). Inspired by this, the HBB–BC approach uses not only the centre of mass but also the best position of each candidate and the best global position to generate a new solution. The calculation formulas in the big crunch phase are as follows:

  • The centre of mass can be computed via Eq. 18.3 (Kaveh and Talatahari 2009): 

    $$ x_{i}^{c(k)} = \frac{{\sum\nolimits_{j = 1}^{N} {\frac{1}{{f^{{\left( {k,j} \right)}} }}x_{i}^{{\left( {k,j} \right)}} } }}{{\sum\nolimits_{j = 1}^{N} {\frac{1}{{f^{{\left( {k,j} \right)}} }}} }},\quad \, i = 1,2, \ldots ,ng, $$
    (18.3)

    where \( x_{i}^{c(k)} \) is the ith component of the centre of mass in the kth iteration; \( x_{i}^{{\left( {k,j} \right)}} \) is the ith component of the jth solution generated in the kth iteration; \( f^{{\left( {k,j} \right)}} \) is the fitness function value of that solution (such as a cost function); N is the population size in the big bang phase; and ng is the number of design variables.

  • The new generation for the next iteration in the big bang phase is normally distributed around \( x_{i}^{c(k)} \) and can be computed via Eq. 18.4 (Kaveh and Talatahari 2009), with a code sketch given after this list:

    $$ \begin{array}{*{20}c} {x_{i}^{{\left( {k + 1,j} \right)}} = \alpha_{2} x_{i}^{c(k)} + \left( {1 - \alpha_{2} } \right)\left( {\alpha_{3} x_{i}^{gbest(k)} + \left( {1 - \alpha_{3} } \right)x_{i}^{{lbest\left( {k,j} \right)}} } \right) + \frac{{r_{j} \alpha_{1} \left( {x_{\max} - x_{\min } } \right)}}{k + 1},} \\ \begin{gathered} i = 1,2, \ldots ,ng \hfill \\ j = 1,2, \ldots ,N, \hfill \\ \end{gathered} \\ \end{array} $$
    (18.4)

    where \( r_{j} \) is a random number from a standard normal distribution, which changes for each candidate; \( x_{\max} \) and \( x_{\min } \) are the upper and lower limits; \( \alpha_{1} \) is a parameter for limiting the size of the search space; \( x_{i}^{{lbest\left( {k,j} \right)}} \) is the best position of the jth particle up to iteration k; \( x_{i}^{gbest(k)} \) is the best position among all candidates up to iteration k; and \( \alpha_{2} \) and \( \alpha_{3} \) are adjustable parameters that control, respectively, the influence of the centre of mass relative to the best positions, and the influence of the global best relative to the local best.
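
As a hedged sketch of Eq. 18.4 for a single candidate, the update can be written as below; the \( \alpha \) values are illustrative assumptions, not the calibrated values of Kaveh and Talatahari (2009):

```python
import numpy as np

def hbbbc_position(xc, gbest, lbest, k, x_min, x_max,
                   alpha1=1.0, alpha2=0.4, alpha3=0.8, rng=None):
    """Eq. 18.4: blend the centre of mass with the global and local bests,
    then add normally distributed noise that shrinks as 1 / (k + 1).

    All position arguments are arrays of length ng (one entry per design
    variable); alpha2/alpha3 are illustrative weights, not paper values.
    """
    rng = np.random.default_rng() if rng is None else rng
    attractor = (alpha2 * xc
                 + (1.0 - alpha2) * (alpha3 * gbest + (1.0 - alpha3) * lbest))
    r_j = rng.standard_normal()   # one draw per candidate, as r_j is defined
    noise = r_j * alpha1 * (x_max - x_min) / (k + 1)
    # Bound the new point to the limits, as in the standard algorithm.
    return np.clip(attractor + noise, x_min, x_max)
```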

Another reformation in HBB–BC is that the SOM is employed as an auxiliary tool to update the search space. Based on the principle of the finite element method, SOM was introduced by Kaveh et al. (2008). The working principle of SOM is to repeatedly divide the search space into sub-domains and apply the optimization process to these sub-domains until a specified termination criterion (such as a required accuracy) is satisfied, returning the best solution.

The SOM mechanism can be described as the repetition of the following steps a definite number of times, nc (at stage \( k_{SOM} \) of the repetition); a sketch of a single stage is given after the list:

  • Calculating cross-sectional area bounds for each group. If \( x_{i}^{{gbest\left( {k_{SOM - 1} } \right)}} \) is the global best solution obtained from the previous stage for design variable i, then we have Eq. 18.5 (Kaveh and Talatahari 2009):

    $$ \begin{array}{*{20}c} {\left\{ \begin{gathered} x_{{\min} ,{i}}^{{\left( {k_{SOM} } \right)}} = x_{i}^{{gbest\left( {k_{SOM - 1} } \right)}} - \beta_{1} \cdot \left( {x_{{\max} ,{i}}^{{\left( {k_{SOM - 1} } \right)}} - x_{{\min} ,{i}}^{{\left( {k_{SOM - 1} } \right)}} } \right) \ge x_{{\min},{i}}^{{\left( {k_{SOM - 1} } \right)}} \hfill \\ x_{{\max} ,{i}}^{{\left( {k_{SOM} } \right)}} = x_{i}^{{gbest\left( {k_{SOM - 1} } \right)}} + \beta_{1} \cdot \left( {x_{{\max} ,{i}}^{{\left( {k_{SOM - 1} } \right)}} - x_{{\min},{i}}^{{\left( {k_{SOM - 1} } \right)}} } \right) \le x_{{\max} ,{i}}^{{\left( {k_{SOM - 1} } \right)}} \hfill \\ \end{gathered} \right.,} \\ \begin{aligned} & i = 1,2, \ldots ,ng \hfill \\ & k_{SOM} = 2, \ldots ,nc, \hfill \\ \end{aligned} \\ \end{array} $$
    (18.5)

    where \( \beta_{1} \) is an adjustable factor which determines the amount of the remaining search space; and \( x_{{\min},{i}}^{{\left( {k_{SOM} } \right)}} \), \( x_{{\max} ,{i}}^{{\left( {k_{SOM} } \right)}} \) are the minimum and the maximum allowable cross-sectional areas at stage \( k_{SOM} \), respectively. In stage 1, the amounts of \( x_{{\min} ,{i}}^{(1)} \) and \( x_{{\max} ,{i}}^{(1)} \) are set according to Eq. 18.6 (Kaveh and Talatahari 2009):

    $$ x_{{\min} ,{i}}^{(1)} = x_{\min} ,\quad x_{{\max} ,{i}}^{(1)} = x_{\max} ,\quad i = 1,2, \ldots ,ng, $$
    (18.6)

    where \( x_{\min} \) and \( x_{\max} \) are the lower and upper bounds of the design variables in the primary problem.

  • Determining the amount of increase in allowable cross-sectional areas via Eq. 18.7 (Kaveh and Talatahari 2009). 

    $$ x_{i}^{{ * \left( {k_{SOM} } \right)}} = \frac{{\left( {x_{{\max} ,{i}}^{{\left( {k_{SOM} } \right)}} - x_{{\min} ,{i}}^{{\left( {k_{SOM} } \right)}} } \right)}}{{\beta_{2} - 1}},\quad \, i = 1,2, \ldots ,ng, $$
    (18.7)

    where \( x_{i}^{{ * \left( {k_{SOM} } \right)}} \) is the amount of increase in the allowable cross-sectional areas; and \( \beta_{2} \) is the number of permissible values for each group.

  • Creating the series of the allowable cross-sectional areas.

    The set of allowable cross-sectional areas for group i can be defined as Eq. 18.8 (Kaveh and Talatahari 2009):

    $$ x_{{\min},{i}}^{{\left( {k_{SOM} } \right)}} ,\;x_{{\min},{i}}^{{\left( {k_{SOM} } \right)}} + x_{i}^{{ * \left( {k_{SOM} } \right)}} ,\; \ldots ,\;x_{{\min},{i}}^{{\left( {k_{SOM} } \right)}} + \left( {\beta_{2} - 1} \right) \cdot x_{i}^{{ * \left( {k_{SOM} } \right)}} = x_{{\max},{i}}^{{\left( {k_{SOM} } \right)}} ,\quad \, i = 1,2, \ldots ,ng. $$
    (18.8)

  • Determining the optimum solution of stage \( k_{SOM} \).

    This is the last step, and the stopping criterion for SOM can be defined as Eq. 18.9 (Kaveh and Talatahari 2009):

    $$ x_{i}^{{ * ({nc})}} \le x^{*},\quad i = 1,2, \ldots ,ng, $$
    (18.9)

    where \( x_{i}^{{ * ( {nc})}} \) is the accuracy attained at the last stage; and \( x^{ * } \) is the accuracy required for the primary problem.
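
A single SOM stage (Eqs. 18.5–18.8) can be sketched as follows; the \( \beta_{1} \) and \( \beta_{2} \) values below are illustrative assumptions rather than the calibrated values of Kaveh et al. (2008):

```python
import numpy as np

def som_stage(gbest, lo, hi, beta1=0.3, beta2=10):
    """One SOM stage: shrink the bounds around the previous stage's global
    best (Eq. 18.5), compute the step between permissible values (Eq. 18.7),
    and build the series of allowable cross-sectional areas (Eq. 18.8).

    gbest, lo, hi : length-ng arrays (previous-stage best and bounds).
    beta1, beta2  : illustrative shrink factor and number of permissible
                    values per group (not the paper's calibrated values).
    """
    span = hi - lo
    new_lo = np.maximum(gbest - beta1 * span, lo)   # Eq. 18.5, lower bound
    new_hi = np.minimum(gbest + beta1 * span, hi)   # Eq. 18.5, upper bound
    step = (new_hi - new_lo) / (beta2 - 1)          # Eq. 18.7
    # Eq. 18.8: new_lo, new_lo + step, ..., new_lo + (beta2 - 1) * step.
    levels = new_lo[:, None] + step[:, None] * np.arange(beta2)
    return new_lo, new_hi, step, levels
```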

In addition to HBB–BC, another hybridization, between the BB–BC algorithm and the simulated annealing (SA) technique, was recently proposed in Altomare et al. (2013). In this approach, the value of the fitness function is further submitted to a local optimization by SA with a fast annealing schedule. The hybrid method has been implemented to solve crystal structure problems. Compared with the traditional SA algorithm, the hybridized algorithm showed better results in terms of computation time.

2.3.2 Improved BB–BC Algorithm

To improve the BB–BC performance, Hasançebi and Azad (2012) proposed two enhanced variants of the BB–BC algorithm, called the modified BB–BC (MBB–BC) and the exponential BB–BC (EBB–BC) algorithms. In the new formulations, the normal random number (r) is replaced using an appropriate statistical distribution in order to eliminate shortcomings of the standard formulation (e.g., in problems of large search dimensionality). Furthermore, to meet discrete design requirements, the improved algorithms round off the real values to the nearest integers representing sequence numbers. As a result, the new generation for the next iteration in the big bang phase can be formulated as Eq. 18.10 (Hasançebi and Azad 2012):

$$ x^{new} = x^{c} + round\left[ {\alpha \cdot N\left( {0,1} \right)_{i}^{3} \frac{{\left( {x_{\max} - x_{\min } } \right)}}{k}} \right], $$
(18.10)

where \( x^{c} \) is the value of the discrete design variable, and \( x_{\max} \) and \( x_{\min } \) are its upper and lower bounds, respectively. The power of the random number is set to 3 based on extensive numerical experiments. This reformulation is referred to as MBB–BC.
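
A hedged one-function sketch of the MBB–BC step of Eq. 18.10, assuming the design variables are integer sequence numbers indexing a discrete profile list:

```python
import numpy as np

def mbbbc_step(xc, k, x_min, x_max, alpha=1.0, rng=None):
    """Eq. 18.10: cubing a standard normal number keeps most moves small
    while preserving occasional large jumps; round() maps the perturbation
    back onto the integer sequence numbers of the discrete profile list."""
    rng = np.random.default_rng() if rng is None else rng
    r3 = rng.standard_normal(np.shape(xc)) ** 3
    x_new = xc + np.round(alpha * r3 * (x_max - x_min) / k)
    return np.clip(x_new, x_min, x_max)
```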

In a similar vein, Hasançebi and Azad (2012) also proposed an alternative approach, called EBB–BC, to deal with discrete design problems, in which an exponential distribution (E) is used in conjunction with the third power of the random number, as shown in Eq. 18.11:

$$ x^{new} = x^{c} + round\left[ {\alpha \cdot E\left( {\lambda = 1} \right)_{i}^{3} \frac{{\left( {x_{\max} - x_{\min} } \right)}}{k}} \right]. $$
(18.11)

The probability density function for an exponential distribution is given as Eq. 18.12 (Hasançebi and Azad 2012):

$$ f(x) = \left\{ {\begin{array}{*{20}l} {\lambda e^{ - \lambda x} } \hfill & {x \ge 0} \hfill \\ 0 \hfill & {x < 0} \hfill \\ \end{array} } \right., $$
(18.12)

where \( \lambda \) is a real, positive constant. The mean and variance of the exponential distribution are given as \( 1/\lambda \) and \( 1/\lambda^{2} \), respectively.

Accordingly, if all the design variables in a new solution remain unchanged after applying Eq. 18.11, i.e., \( x^{new} = x^{c} \), the generation process is repeated with the \( \lambda \) parameter of the exponential distribution halved each time, until a different solution is produced, i.e., \( x^{new} \ne x^{c} \).
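
The EBB–BC step of Eqs. 18.11–18.12, including the \( \lambda \)-halving retry just described, might be sketched as follows. Note that the sign of the step and the retry cap are assumptions of this sketch, since the exponential distribution itself is one-sided:

```python
import numpy as np

def ebbbc_step(xc, k, x_min, x_max, alpha=1.0, lam=1.0,
               max_retries=32, rng=None):
    """Eq. 18.11 with the retry rule: if rounding leaves every variable
    unchanged (x_new == x_c), halve lambda (widening the exponential
    distribution of Eq. 18.12) and draw again until the solution differs."""
    rng = np.random.default_rng() if rng is None else rng
    xc = np.asarray(xc, dtype=float)
    for _ in range(max_retries):                       # cap retries in a sketch
        e3 = rng.exponential(scale=1.0 / lam, size=xc.shape) ** 3
        sign = rng.choice([-1.0, 1.0], size=xc.shape)  # assumed signed step
        step = np.round(alpha * sign * e3 * (x_max - x_min) / k)
        x_new = np.clip(xc + step, x_min, x_max)
        if not np.array_equal(x_new, xc):
            return x_new
        lam *= 0.5                                     # widen and retry
    return x_new
```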

Hasançebi and Azad (2012) presented two numerical examples to investigate the performance of EBB–BC and MBB–BC. Compared with the standard BB–BC, the improved variants gave better results in terms of balancing the exploration and exploitation characteristics of the algorithms. More recently, an upper bound strategy (UBS) was integrated with MBB–BC and EBB–BC in Azad et al. (2013) for the optimum design of steel frame structures. Computational results showed that this addition significantly reduced the number of structural analyses.

Furthermore, to improve the convergence properties of BB–BC, Alatas (2011) proposed a new method called the uniform big bang–chaotic big crunch (UBB–CBC) algorithm, which involves two reformulations: a uniform population method that generates uniformly distributed random points in the big bang phase (UBB), and chaotic maps that rapidly shrink those points to a single representative point in the big crunch phase (CBC). On benchmark functions, the performance of UBB–CBC showed superiority over the standard BB–BC algorithm.

2.3.3 Local Search-Based BB–BC Algorithm

As Kaveh and Talatahari (2009) concluded at the end of their study, HBB–BC is outperformed by improved algorithms that possess an extra local search ability. To fill this gap, Genç et al. (2010) introduced a local search move mechanism into the BB–BC algorithm, based on defining a possible improving direction by checking neighbouring points.

In detail, Genç et al. (2010) put the local search methods (i.e., expansion and contraction) between the original “banging” and “crunching” phases. The main objective is to modify the representative point with local directional moves, in order to more easily attack the path leading to the optima and decrease the processing time for reaching the global minima. The direction vector can be formulated as Eq. 18.13 (Genç et al. 2010):

$$ IV_{1} = P\left( n \right) - P\left( {n - 1} \right), $$
(18.13)

where \( IV_{1} \) stands for the improvement vector of the single-step regression BB–BC; \( P(n) \) is the current best or fittest point; and \( P\left( {n - 1} \right) \) is the previously stored best or fittest point.
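
A small sketch of Eq. 18.13 with an illustrative trial move along the improvement vector follows; the step factor gamma and the accept test are assumptions, and the full expansion/contraction schedule of Genç et al. (2010) is not reproduced:

```python
import numpy as np

def directional_trial(p_curr, p_prev, gamma=0.5):
    """Eq. 18.13: IV1 = P(n) - P(n-1); then try a point further along the
    improving direction. gamma is a hypothetical step-size factor."""
    iv1 = p_curr - p_prev            # improvement vector of Eq. 18.13
    return p_curr + gamma * iv1      # trial point along that direction

# Usage: accept the trial point only if its fitness improves on p_curr's.
```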

To test the performance of the new algorithm, Genç et al. (2010) implemented it on the target tracking problem. The simulation results showed that the local search-based BB–BC algorithm outperformed the standard BB–BC algorithm in terms of data accuracy.

2.4 Representative BB–BC Application

According to the literature, the main application of the BB–BC algorithm is in structural optimization. In general, there are three main groups of structural optimization applications (Sadollah et al. 2012; Hasançebi and Azad 2012): (1) sizing optimization; (2) shape optimization; and (3) topology optimization. Sizing optimization can be further divided into two subcategories: discrete and continuous. Hasançebi and Azad (2012) used MBB–BC and EBB–BC to solve discrete sizing optimization problems, whereas Kaveh and Talatahari (2009, 2010a) proposed the HBB–BC algorithm to solve problems with continuous domains.

2.4.1 Truss Optimization

Truss optimization is one of the most active branches of continuous sizing optimization. The main objective in designing truss structures is to determine the optimum values of the member cross-sectional areas (\( A_{i} \)) that minimize the structural weight (W) while satisfying the inequality constraints that limit design variable sizes and structural responses.
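
A typical mathematical statement of this design problem, given here as a generic illustration rather than the exact constraint set of any one study, is

$$ \mathop {\min }\limits_{{A_{1} , \ldots ,A_{ng} }} W = \sum\nolimits_{i = 1}^{nm} {\rho_{i} A_{i} L_{i} } \quad {\text{subject to}}\quad \sigma_{\min } \le \sigma_{i} \le \sigma_{\max } ,\quad \delta_{\min } \le \delta_{j} \le \delta_{\max } ,\quad A_{\min } \le A_{i} \le A_{\max } , $$

where \( \rho_{i} \) and \( L_{i} \) are the material density and the length of member i, \( \sigma_{i} \) are the member stresses, \( \delta_{j} \) are the nodal displacements, and nm is the number of members.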

In Kaveh and Talatahari (2009), the authors employed the HBB–BC method to address the above-mentioned truss optimization problem. In their work, five truss structure optimization examples were presented, namely a 25-bar spatial truss, a 72-bar spatial truss, a 120-bar dome-shaped truss, a square-on-diagonal double-layer grid, and a 26-story spatial tower truss. Compared with other CI techniques (e.g., GA, ACO, and PSO), HBB–BC performed well on large structures, which are characterized by convergence difficulty or a tendency to become trapped at a local optimum.

3 Conclusions

In summary, the BB–BC algorithm is a population-based CI algorithm that shares some similarities with evolutionary algorithms (Erol and Eksin 2006), such as random initialization and the refinement of fitness values according to the best-fitted answers of the previous loop or loops (Kaveh and Farhoudi 2011). The core working principle of BB–BC is to transform a convergent solution into a chaotic state, which yields a new set of solutions (Erol and Eksin 2006). The leading advantages of BB–BC are its high convergence speed and low computation time, together with its simplicity and ease of implementation (Desai and Prasad 2013).

With the rapid spread of BB–BC, in addition to the representative applications detailed in this chapter, the algorithm has also been successfully applied to a variety of other optimization problems, as outlined below:

  • Automatic target tracking (Genç et al. 2010; Genç and Hocaoğlu 2008).

  • Fuzzy system control (Kumbasar et al. 2008, 2011; Aliasghary et al. 2011).

  • Layout optimization (Kaveh and Farhoudi 2011).

  • Linear time invariant systems (Desai and Prasad 2013).

  • Course timetabling (Jaradat and Ayob 2010).

  • Power system (Sedighizadeh and Arzaghi-Haris 2011; Dincel and Genc 2012; Kucuktezcan and Gen 2012; Zandi et al. 2012).

  • Structural engineering (Altomare et al. 2013; Azad et al. 2013; Tang et al. 2010; Camp 2007; Camp and Huq 2013).

Interested readers are referred to these works as a starting point for further exploration and exploitation of the BB–BC algorithm.