1 Recent Works and Advances of PSO

Presently, there are many variants of Particle Swarm Optimization (PSO), and their number is expected to grow rapidly. Figure 1 describes the basic variants of and modifications to PSO over the years. Various modifications to the original PSO have been proposed so far [43]. Also, novel ideas from other disciplines, such as evolutionary algorithms, have been imported into the framework of PSO. PSO algorithms can be divided into the global version (gbest model) and the local version (lbest model), the lbest model having the ability to prevent a solution from being trapped in local minima. The gbest model, on the other hand, has a greater chance of getting trapped in a local optimum. However, the global version is superior to the local version in terms of the speed of convergence to the optimum solution and the computation time.

Fig. 1 Basic variants of PSO

To reduce the possibility of particles flying out of the problem space, Eberhart et al. [42] put forward a clamping scheme that limited the speed of each particle to a range [−Vmax, Vmax]. To assist with the balance between exploration and exploitation, a modified PSO incorporating an inertia weight, w, was introduced [128]. The initial experiments suggested that a value between 0.8 and 1.2 provided good results, although in later work Eberhart and Shi [40] indicated that the value is typically set to 0.9 (reducing the stepwise movement of each particle, allowing greater initial exploration), reducing linearly to 0.4 (speeding convergence to the global optimum) during an optimization run. A summary of various existing inertia weight strategies is given in Table 1 [94]. Constriction is an alternative method for controlling the behaviour of particles in the swarm. Rather than applying inertia to the velocity memory, Clerc and Kennedy (developed 1999, published [20]) applied a constriction factor to the new velocity. Eberhart and Shi [40] showed that with judicious parameter settings the two approaches were algebraically equivalent and improved performance could be achieved across a wide range of problems.
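To make the roles of these parameters concrete, the following minimal sketch (our own, in Python, with illustrative default values for w, c1, c2, and Vmax) performs one inertia-weighted, velocity-clamped update for a single particle:

import random

def update_particle(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0, v_max=1.0):
    # One velocity-and-position update for a single particle.
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = (w * v[d]                        # inertia: memory of the previous velocity
              + c1 * r1 * (pbest[d] - x[d])   # cognitive pull towards the personal best
              + c2 * r2 * (gbest[d] - x[d]))  # social pull towards the global best
        vd = max(-v_max, min(v_max, vd))      # clamp the speed to [-Vmax, Vmax]
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v

Constriction instead multiplies the whole bracketed update by a constriction factor; with suitably chosen coefficients the two forms coincide, as noted above.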

Table 1 Description of different inertia weight strategies

PSO has also been successfully applied to solve constrained optimization problems. A variety of approaches [4, 15, 56, 103, 134, 137] have been developed to handle constrained optimization problems.

Although the basic PSO was developed to find single solutions to optimization problems, it was later observed that PSO has an inherent ability to find multiple solutions. Niching is an important technique for multimodal optimization, and PSO can be used as an effective niching method for maintaining stable subpopulations (or niches) [44, 76, 95, 135, 144].

PSO was originally developed for and applied to static problems, where the objective function does not change. Later, it was realized that PSO can be adapted to solve dynamic problems as well. Several simple modifications [39, 41, 54, 57, 80, 105, 157] have been applied to improve its performance in dynamic environments. One of the first studies on the application of PSO to dynamic environments came from Carlisle and Dozier [16], who evaluated the efficiency of different velocity models. Coelho et al. [22] successfully applied the split adaptive PSO to design proportional integral (PI) controllers for nonlinear time-varying processes in the discrete time domain.

The main body of PSO development work has concentrated on optimization in continuous search spaces, although some research has also been conducted into the application of the algorithm to discrete problems. Kennedy and Eberhart [61] developed the first discrete version of PSO for binary problems. Later, Al-kazemi and Mohan [3] compared this with an alternative velocity update technique called Multiphase Discrete PSO (M-DiPSO). Laskari et al. [71] presented an implementation of PSO that operated in continuous space but truncated the real number values to integers. Afshinmanesh et al. [1] developed a hybrid approach, combining PSO and Artificial Immune Systems (AIS), for the optimization of binary problems.
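In the binary PSO of Kennedy and Eberhart [61], the velocity is passed through a sigmoid and interpreted as the probability that a bit takes the value 1. The sketch below (our own minimal rendering of that rule, not the authors' code) shows the resulting position update:

import math, random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def binary_position_update(v):
    # Binary PSO rule: bit d is set to 1 with probability sigmoid(v_d),
    # so the velocity acts as a propensity rather than a displacement.
    return [1 if random.random() < sigmoid(vd) else 0 for vd in v]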

For more developments in PSO, one can refer to the review papers by Parsopoulos and Vrahatis [104], Banks et al. [10, 11], and Dian et al. [38].

A number of approaches have been proposed to extend PSO to multiple objective problems. One of the earliest proposals for a PSO strategy for solving MOOP was made by Moore and Chapman [91] in an unpublished manuscript from 1999. Since then there have been several attempts to use PSO for multi-objective optimization problems. Some of these concepts are surveyed briefly in this section.

Dynamic Neighbourhood PSO

Hu and Eberhart [55] proposed this method in which, in each generation, after calculating distances to every other particle, each particle finds its new neighbourhood. Among the new neighbours, each particle finds the local best particle.

The Multi-objective Particle Swarm Optimizer (MOPSO)

Coello and Lechuga [24] found PSO particularly suitable for MOOP, mainly because of the high speed of convergence that PSO presents for single objective optimization problems, and proposed multi-objective particle swarm optimization (MOPSO). MOPSO uses two archives: one for storing the globally non-dominated solutions found so far by the search process, and the other for storing the individual best solutions attained by each particle. To maintain diversity, they used a method inspired by Knowles and Corne [65]: an adaptive grid, based upon the objective function values of archive members, is applied to the archive with the goal of producing a well-distributed Pareto optimal front. In this method, the hyperspace is first divided into small hypercubes, and each cube is assigned a weight inversely proportional to the number of non-dominated solutions inside it. Roulette wheel selection is then used to select one of the hypercubes, from which the gbest is picked.
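A minimal sketch of the grid-based leader selection just described (our own illustrative implementation, which assumes the archive members have already been binned into hypercubes):

import random

def select_gbest(hypercubes):
    # hypercubes: dict mapping a cube index to the list of archive
    # members falling inside that cube.
    occupied = [members for members in hypercubes.values() if members]
    weights = [1.0 / len(members) for members in occupied]  # sparser cube -> larger weight
    r, acc = random.uniform(0.0, sum(weights)), 0.0
    for members, wgt in zip(occupied, weights):
        acc += wgt
        if r <= acc:
            return random.choice(members)  # gbest drawn from the selected cube
    return random.choice(occupied[-1])     # numerical safety fallback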

An Improved MOPSO

Coello et al. [25] improved the aforementioned MOPSO by incorporating a genetic mutation operator. The mutation operator boosts the exploration capability of the MOPSO presented in Coello and Lechuga [24].

The Swarm Metaphor

Ray and Liew [117] proposed an approach combining Pareto dominance with concepts from evolutionary techniques in PSO. All non-dominated individuals which perform better and have fewer constraint violations are ranked highly on the basis of Pareto ranking and saved as a set of leaders (SOL). The selection of a group leader as gbest from the SOL is based on roulette wheel selection, which ensures that SOL members with a large crowding radius have a higher probability of being selected as a leader.

The Approach of Mostaghim and Teich [92]

They proposed the sigma method for finding a suitable gbest for each particle; they also used a turbulence factor to enhance the exploration capability.

The Algorithm of Fieldsend and Singh [48]

They suggested an approach in which an unconstrained elite archive (employing a special data structure called the dominated tree) is used to store the non-dominated individuals found along the search process. The concept of turbulence was also incorporated.

Bartz-Beielstein et al. [12] proposed the idea of introducing elitism (through the use of an external archive) into PSO. They analysed different methods of selecting and deleting particles from the archive to generate a satisfactory approximation of the Pareto optimal front.

The Non-dominated Sorting PSO

Li [77] developed Non-dominated Sorting Particle Swarm Optimization (NSPSO), which incorporated the main mechanism of the Non-dominated Sorting Genetic Algorithm (NSGA-II) [34]. In this algorithm, the population of particles is combined with the particles' personal best positions, and the best solutions are selected from this combined population to compose the next population.

Another Multi-objective Particle Swarm Optimization (AMOPSO)

Pulido and Coello [109] further improved the performance of PSO and proposed an MOPSO algorithm called AMOPSO. Their algorithm subdivides the decision space into multiple sub-swarms via clustering techniques, with the goal of improving the diversity of solutions on the Pareto optimal front. At some point during the search process, the different sub-swarms exchange information: each sub-swarm chooses a leader other than its own in order to preserve diversity.

The Algorithm of Parsopoulos et al. [106]

They developed parallel Vector Evaluated Particle Swarm Optimization (VEPSO), a multi-swarm variant of PSO inspired by the Vector Evaluated Genetic Algorithm (VEGA). In VEPSO, each swarm is evaluated using only one of the objective functions of the problem under consideration, and the information it possesses about this objective function is communicated to the other swarms through the exchange of their best experience.
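The communication scheme can be pictured with a small sketch (ours; the ring-style exchange shown here is only one of several migration topologies used in the literature), in which each swarm's best experience becomes the social attractor of a neighbouring swarm:

def vepso_exchange(swarms, objectives):
    # swarms: list of lists of particle positions; objectives: one
    # objective function per swarm. Each swarm is evaluated only on its
    # own objective, and the best position it has found is handed to the
    # next swarm in a ring, where it serves as that swarm's gbest.
    bests = [min(swarm, key=f) for swarm, f in zip(swarms, objectives)]
    return {i: bests[(i + 1) % len(swarms)] for i in range(len(swarms))}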

The Algorithm of Sierra and Coello [133]

They suggested a new MOPSO, also known as OMOPSO. In their design, the population is divided into three sub-swarms of equal size, each using a different mutation operator. In doing so, the abilities of exploration and exploitation are enhanced during the search process.

Multi-Objective Particle Swarm Optimization with Crowding Distance (MOPSO-CD)

Raquel and Naval [112] developed another PSO based approach, called MOPSO-CD, which incorporates the crowding distance into PSO to improve the distribution of non-dominated solutions along the Pareto optimal front. The crowding distance mechanism, together with a mutation operator, maintains the diversity of the non-dominated solutions in the external archive. Raquel and Naval [112] also showed that MOPSO-CD is highly competitive in converging towards the Pareto optimal front and generates a well-distributed set of non-dominated solutions. We discuss this approach in detail in the next chapter.
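The crowding distance itself is the NSGA-II density measure [34]; a compact sketch of its computation (our own implementation over a list of objective vectors) is as follows:

def crowding_distances(points):
    # points: list of objective vectors of equal length; returns one
    # crowding distance per point, with boundary points set to infinity.
    n, m = len(points), len(points[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: points[i][k])
        dist[order[0]] = dist[order[-1]] = float('inf')  # preserve extremes
        span = points[order[-1]][k] - points[order[0]][k] or 1.0
        for j in range(1, n - 1):
            dist[order[j]] += (points[order[j + 1]][k]
                               - points[order[j - 1]][k]) / span
    return dist

Solutions with a larger crowding distance lie in less crowded regions of the front and are therefore preferred when the archive overflows.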

A Hybrid PSO

Liu et al. [79] proposed a hybrid PSO which combines the global search ability of PSO with synchronous local fine-tuning and uses a fuzzy global best to handle premature convergence.

Time Variant MOPSO (TV-MOPSO)

Tripathi et al. [141] adapted the vital parameters of PSO, namely the inertia weight and the acceleration coefficients, during the iterations.
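A typical instantiation of such schedules (one plausible choice; the exact TV-MOPSO settings are given in [141]) varies the parameters linearly with the iteration counter t over a run of T iterations:

$$w(t) = w_{\max } - \left( {w_{\max } - w_{\min } } \right)\frac{t}{T},\quad c_{1} (t) = c_{1i} + \left( {c_{1f} - c_{1i} } \right)\frac{t}{T},\quad c_{2} (t) = c_{2i} + \left( {c_{2f} - c_{2i} } \right)\frac{t}{T}$$

with, for example, \(w_{\max } = 0.9\) and \(w_{\min } = 0.4\), in line with the values quoted earlier.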

Elitist Mutated (EM)-MOPSO

Reddy and Kumar [118] proposed EM-MOPSO, which incorporates an efficient mutation strategy, called elitist mutation, to enhance exploration and exploitation in the search space.

Dynamic Population Multiple Swarm MOPSO (DMOPSO)

Most recently, Leong and Yen [73] proposed the algorithm DMOPSO, inspired by Pulido and Coello [109], which incorporates the following four strategies: (1) a cell-based rank density estimation scheme to keep track of the rank and density values of the particles; (2) a population growing strategy to increase the population size and promote exploration capability; (3) a population declining strategy to prevent the population size from growing excessively; and (4) adaptive local archives designed to improve the distribution of solutions along the sections of the Pareto optimal front associated with each sub-swarm.

Other than the aforementioned methods, several other MOPSO methods [2, 13, 14, 33, 50] have been proposed to date. For further surveys of the various PSO proposals, the reader can refer to Reyes-Sierra and Coello [121], Fieldsend [49], and Padhye et al. [97].

2 Reliability Optimization (An Overview)

Almost every one of us is acquainted with the term reliability in day-to-day life. When we assign the attribute 'reliable' to a component or a system (a system may consist of a collection of several components), we mean that it will render service for a good, or at least reasonable, period of time. In the modern age of science, a high degree of reliability is demanded by all kinds of users, whether in the general public, industry, defense, or space research programmes. There is too much at stake in terms of cost, human life, and national security to take any risks with equipment which might not function properly when required. Moreover, the present-day weapons used for military purposes consist of thousands of small parts, each interwoven into the complex web which constitutes the weapon, and the failure of any one of these could adversely affect its operation. It is therefore important that each part of a complex piece of equipment be highly reliable so that the equipment as a whole is reliable. The concept of high reliability is equally important outside the military field. Computers, which are complex as well as expensive, play a major role in industrial and scientific activities. If a computer does not operate even for a single day, for example due to a software failure, it not only causes inconvenience but also financial loss; hence software reliability testing is very important.

Thus, the need for highly reliable systems and components has acquired special importance with the development of present-day technology.

The theory of reliability is not very old; the start of World War II in 1939 is usually regarded as the starting point of the reliability discipline. Before the war, in the first quarter of the twentieth century, a team of workers at Bell Telephone Laboratories developed statistical methods for solving their quality control problems, methods which provided the basis for the development of statistical quality control. The American Society for Testing and Materials, the American Standards Association, and the American Society of Mechanical Engineers also worked on quality control techniques. However, these techniques were not widely used until World War II. The complexity and automation of the equipment used in the war resulted in severe problems of maintenance and repair, and equipment and components failed beyond expectation. During the war, the army and navy in the USA set up a joint committee, known as the Vacuum Tube Development Committee, to study failures in vacuum tubes, which were considered to be one of the root causes of the trouble. A major committee on reliability was set up by the U.S. Defense Department in 1950; it was later called the Advisory Group on Reliability of Electronic Equipment (AGREE). During the 1950s, Germany, Japan, and Britain also took an interest in this type of study. The last 20 years have seen remarkable progress in the application of reliability in industry and in other sectors of all developed and developing countries.

The theory of reliability is the scientific discipline that studies the general regularities that must be maintained in the design, experimentation, manufacture, acceptance, and use of units/components in order to obtain maximal effectiveness from their use.

The problem of increasing the reliability of equipment/components becomes ever more important and urgent with the complex mechanization, modern sophistication, and automation of industrial processes in many fields: industry, transportation, communication, space technology, etc. Complex systems, equipment, machines, etc., are not of much use if they cannot adequately perform the work for which they are intended.

A system is a combination of elements forming a unified whole, i.e. there is a functional relationship between its components. The properties and behavior of each component ultimately affect the properties of the system. Any system has a hierarchy of components that pass through different stages of operation: operational, failed, degraded, or under repair. Failure need not be complete; it can be partial as well, but both types affect the performance of the system and hence its reliability. The majority of systems in industry are repairable, and the performance of these systems can directly influence the quality of the product, the cost of business, the service to customers, and thereby the profit of the enterprise. Modern repairable systems tend to be highly complex due to increasing complexity and automation. During the last 45 years, reliability concepts have been applied in various manufacturing and technological fields. Earlier researchers discussed the reliability and steady-state analysis of some realistic engineering systems using different approaches. Reliability techniques have also been applied to a number of industrial and transportation problems, including the automobile industry; here the study is focused on the engine assembly process of automobiles.

A high degree of reliability is also desirable from the economic point of view, so as to reduce overall costs. Sometimes the annual cost of maintaining a system in an operable state is much higher than its original cost. Insufficient reliability of units engenders great losses in servicing and partial stoppage of equipment, and there may be accidents with considerable damage to the equipment; the cost may be even more serious in terms of human life, national prestige, and security. All these factors, and many more, demand high reliability in the design and operation of components/systems/equipment. In various reliability models of practical utility, we often come across situations of maximizing profit. The profit earned by an operable system depends, among other parameters, upon the cost incurred for the repairmen needed to repair the failed stages of the system.

The present-day theory of reliability has been developed during the last two decades by engineers and mathematicians of various countries.

Every science is based on some fundamental concepts and definitions, and the same is true for the theory of reliability. Some of the basic concepts on which the theory of reliability is formulated are given below.

System

A system is an arbitrary device consisting of different parts, components, or units.

Time

It is the period during which one can expect the satisfactory performance of a system.

Adequate

It indicates the criteria for the operation of the device to be satisfactory.

Failure Mode

It is the effect by which a failure is observed.

Failure Rate

It is the incremental change in the number of failures per associated incremental change in time; equivalently, the expected rate of occurrence of failure, or the number of failures in a specified time period. Failure rate is typically expressed in failures per million or billion hours. For example, if your television has a failure rate of five failures per million hours, you could watch a million hour-long television shows and expect to experience a failure during only five of them.
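Assuming a constant failure rate, the television example corresponds to

$$\lambda = \frac{{5\;{\text{failures}}}}{{10^{6} \;{\text{h}}}} = 5 \times 10^{ - 6} \;{\text{h}}^{ - 1} ,\quad {\text{MTBF}} = \frac{1}{\lambda } = 2 \times 10^{5} \;{\text{h}}$$

i.e. a mean time between failures of 200,000 hours.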

Uptime

It is the total time during which the system is in acceptable operating condition.

Downtime

The total time during which the system is not in acceptable operating condition.

The reliability of a system is usually defined in straightforward terms as "the probability that the system will not fail during delivery of service" [119], or alternatively, that the overall system performance figure of merit will not enter a failure mode between the time a service is requested and the time that service is delivered [136]. A system can be designed for optimal reliability either by adding redundant components or by increasing the reliability of components [68].

There are several ways of improving a system's reliability. One way is to duplex some of the units or the whole system; another is to provide repair and maintenance to the system at the time of need. Some important techniques for reliability improvement are given below.

Redundancy

In a redundant system, additional paths are created for the proper functioning of the system. Even though one component is sufficient for the successful operation of the system, more components are deliberately used to increase the probability of success, thus making the system redundant. There are three types of redundancy.

Active Redundancy

An active redundant system with n units is one in which all units operate simultaneously. Failure of the system occurs only when all the units fail, as captured by the formula below.
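If the n units fail independently and unit j has reliability \(r_{j}\), the reliability of an active redundant system is therefore

$$R_{s} = 1 - \prod\limits_{j = 1}^{n} {\left( {1 - r_{j} } \right)}$$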

Standby Redundancy

A standby redundant system is one in which one unit operates on line, backed by a number of spare units called standbys. On failure of the operating unit, a standby unit, if operable, is switched onto the line by a perfect or imperfect switching device. Standbys can be classified as hot, warm, or cold depending on how they are loaded in the standby state. Hot standbys are loaded in exactly the same way as the operating unit; warm standbys carry a diminished load; and cold standbys are completely unloaded, never lose their operational ability, and cannot fail in the standby state.
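As an illustration, for n identical units with constant failure rate \(\lambda\), perfect switching, and cold standbys, system failure requires n successive failures, so the reliability at time t is the probability of at most \(n - 1\) events of a Poisson process:

$$R_{s} (t) = \sum\limits_{k = 0}^{n - 1} {e^{ - \lambda t} \frac{{\left( {\lambda t} \right)^{k} }}{k!}}$$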

Partial Redundancy

Partial redundancy is redundancy wherein two or more redundant items are required to perform a function, as in a k-out-of-m system: the system is good if and only if at least k of its m items are good.
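For m identical and independent items, each of reliability r, this gives

$$R_{s} = \sum\limits_{i = k}^{m} {\binom{m}{i}r^{i} \left( {1 - r} \right)^{m - i} }$$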

Maintenance

All recoverable systems which are used for continuous or intermittent service over some period of time are subject to maintenance. Maintenance actions can be classified into several categories, e.g. preventive, corrective, and priority maintenance.

Preventive Maintenance

Preventive maintenance is the type of check which keeps the system in a condition consistent with its built-in level of performance, reliability, and safety.

Corrective Maintenance

It deals with the system when the system gives wrong results. Repair/maintenance is concerned with increasing system availability: in order to increase availability, failed units are repaired to put them back into operation.

Priority Maintenance

A priority-maintained redundant system consists of n ≥ 2 units, one of which is called the priority unit (P-unit) while the others are termed non-priority units (O-units). The P-unit is the "preferred unit" for operating on line and is never used in the status of a standby. The O-units are allowed to operate on the line only when the P-unit has failed.

Pre-emptive Priority

On failure of the P-unit, the repair of the O-unit is interrupted, and it is continued as soon as the repair of the P-unit is completed. The resumed repair of the O-unit can follow either of the following rules.

Pre-emptive Resume

The repair of the O-unit is continued from the point where it was left earlier.

Pre-emptive Repeat

The repair of the O-unit is started afresh; this implies that the time the O-unit spent in the repair facility before the interruption has no influence on its repair time now.

Non Pre-emptive Priority

The repair of the O-unit is continued, and the repair of the P-unit is taken up only when the repair of the O-unit is completed. This is also called the head-of-line repair policy.

Inspection

A system requires inspection at random epochs in order to trace faults in redundant units, particularly in deteriorating standby systems.

Reliability optimization problems can be categorized in two ways: single objective reliability optimization problems and multi-objective reliability optimization problems.

  • Single Objective Reliability Optimization Problems

Let us consider a reliability optimization problem as follows:

Max \(f_{0} \left( {r_{1} , r_{2} , \ldots , r_{n} , x_{1} , x_{2} , \ldots , x_{n} } \right)\)

subject to

$$\begin{array}{*{20}l} {f_{i}^{c} \left( {r_{1} , r_{2} , \ldots , r_{n} , x_{1} , x_{2} , \ldots , x_{n} } \right) \le b_{i} ,} \hfill & {{\text{for}}\quad i = 1,2, \ldots ,m} \hfill \\ {l_{j} \le x_{j} \le u_{j} , x_{j} \in Z^{ + } , } \hfill & { {\text{for}}\quad j = 1,2, \ldots ,n} \hfill \\ {r_{j} \in \left( {0,1} \right) \subset R, } \hfill & {{\text{for}}\quad j = 1,2, \ldots , n} \hfill \\ \end{array}$$

where n is the number of components and m the number of constraints. The reliability of the jth component is denoted by \(r_{j}\); \(x_{j}\) is the number of identical redundant components, i.e. the number of redundancies, at the jth component; \(f_{i}^{c}\) is the ith constraint function; \(b_{i}\) is the maximum allowable amount of the ith resource; \(f_{0}\) is the objective function of the problem; \(Z^{ + }\) is the set of non-negative integers, while R denotes the set of real numbers. The objective of the reliability optimization problem is to find the component reliabilities in such a way that they maximize the overall system reliability under the given resource constraints, or minimize the total cost subject to a minimum system reliability and other resource limitations.
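As a concrete instance of this formulation, the sketch below (our own; the cost model and its coefficients are illustrative assumptions, not taken from any cited source) evaluates \(f_{0}\) and one resource constraint \(f_{1}^{c}\) for a series system whose jth stage carries \(x_{j}\) active redundant copies of a component with reliability \(r_{j}\):

def system_reliability(r, x):
    # Series system: every stage must work; stage j works unless all
    # of its x[j] parallel copies fail, i.e. R_j = 1 - (1 - r_j)**x[j].
    R = 1.0
    for rj, xj in zip(r, x):
        R *= 1.0 - (1.0 - rj) ** xj
    return R

def resource_cost(r, x, c=(2.0, 3.0, 1.5), beta=1.5):
    # Illustrative constraint function f_1^c: the cost grows with both
    # the component reliability chosen and the number of redundancies.
    return sum(cj * (rj ** beta) * xj for cj, rj, xj in zip(c, r, x))

# Feasibility with respect to an assumed budget b_1:
# the design (r, x) is feasible if resource_cost(r, x) <= b_1.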

  • Multi-Objective Reliability Optimization Problems

$$\begin{aligned} {\text{Max}}\;F & = (f_{1} \left( {r_{1} , r_{2} , \ldots , r_{n} , x_{1} , x_{2} , \ldots , x_{n} } \right),\;f_{2} \left( {r_{1} , r_{2} , \ldots , r_{n} , x_{1} , x_{2} , \ldots , x_{n} } \right), \ldots , \\ & \quad f_{K} \left( {r_{1} , r_{2} , \ldots , r_{n} , x_{1} , x_{2} , \ldots , x_{n} } \right)) \\ \end{aligned}$$

subject to

$$\begin{array}{*{20}l} {f_{i}^{c} \left( {r_{1} , r_{2} , \ldots , r_{n} , x_{1} , x_{2} , \ldots , x_{n} } \right) \le b_{i} , } \hfill & {{\text{for}}\quad i = 1, 2, \ldots ,m} \hfill \\ {l_{j} \le x_{j} \le u_{j} , x_{j} \in Z^{ + } ,} \hfill & {{\text{for}}\quad j = 1, 2, \ldots ,n} \hfill \\ {r_{j} \in \left( {0,1} \right) \subset R,} \hfill & {{\text{for}}\quad j = 1, 2, \ldots ,n} \hfill \\ \end{array}$$

Here \(f_{k}\), for \(k = 1,2, \ldots ,K\), is the kth objective function of the problem, and K is the total number of objective functions. In most practical situations involving reliability optimization, there are several mutually conflicting goals, such as maximizing the system reliability and minimizing the cost, weight, and volume, which are required to be addressed simultaneously under the given constraints. Some of the main objectives can be expressed as follows.

Objective 1:

The most important objective is the maximization of the system reliability \((R_{s} )\), which enables the system to function satisfactorily throughout its intended service period

$${\text{Max}}\;R_{s}$$

Since our approach considers all objectives as minimization problems, the above objective is equivalent to the minimization of the system unreliability \((Q_{s} = 1 - R_{s} )\), which can be expressed as follows:

$${\text{Min}}\;Q_{S}$$

Objective 2:

The addition of redundant components increases not only the system reliability but also the overall cost \((C_{S} )\). A manufacturer has to balance these conflicting objectives, keeping in view the importance of reducing the overall cost

$${\text{Min}}\;C_{S}$$

Objective 3:

As with cost, every added redundant component increases the weight of the system. Usually, the overall weight of a system needs to be minimized along with its cost even as reliability is maximized (or unreliability is minimized)

$${\text{Min}}\;W_{S}$$

Reliability optimization problems can be categorized as redundancy allocation, reliability allocation, and reliability–redundancy allocation problems according to the type of their decision variables. If the numbers of redundancies, \(x_{j}\) for all j, are the only variables, the problem is called a redundancy allocation problem. If the component reliabilities, \(r_{j}\) for all j, are the only variables, the problem is termed reliability allocation; and if the decision variables include both the component reliabilities and the redundancies, the problem is called a reliability–redundancy allocation problem. From the mathematical programming point of view, redundancy allocation is a pure integer nonlinear programming (INLP) problem, reliability allocation can be viewed as a continuous nonlinear programming (NLP) problem, and reliability–redundancy allocation is a mixed integer nonlinear programming (MINLP) problem.

The suitability of a metaheuristic varies from problem to problem; in other words, a metaheuristic which gives promising results on a particular set of problems may show poor performance on different problems. Optimization of the reliability of complex systems is an extremely important issue in the field of reliability engineering. Over the past three decades, reliability optimization problems have been formulated as nonlinear programming problems in either a single objective or a multi-objective setting.

As discussed above, reliability optimization problems are categorized into three typical problems according to the types of their decision variables: reliability allocation, redundancy allocation, and reliability–redundancy allocation. A number of algorithms, categorized as approximate, exact, or heuristic/metaheuristic, have been used to find optimal solutions to these problems. Algorithms such as the surrogate worth trade-off, Lagrange multiplier, and geometric programming methods and their variants, which are efficient for the exact solution of continuous problems of the type posed by reliability allocation, can only approximate the solution in the case of redundancy or reliability–redundancy allocation [93, 153]. The approximation techniques involve the use of trial and error approaches to obtain integer solutions [148, 153]; they were popular when exact solution algorithms were not well developed. The advent of exact algorithms, such as integer programming (IP), branch and bound, and dynamic programming (DP) [78], has made the approximation techniques less popular for solving redundancy allocation problems. The approximation and exact algorithms, though efficient for small-to-moderate sized problems having desirable properties such as convexity or monotonicity, are deficient for complex large scale ones, such as real-life network reliability and redundancy allocation optimization problems [7, 8]. Although the heuristic/metaheuristic approaches (e.g. GA, SA, ACO, PSO, and TS) yield solutions which are not exact, they do have the ability to handle complexity efficiently [5] and have thus become increasingly popular in the reliability optimization field. The redundancy and reliability–redundancy allocation problems are generally more difficult to solve than the reliability allocation ones, because the former belong to the class of NP-hard problems (as demonstrated by Chern [19], Coit et al. [32], and Coit and Konak [27]), involve non-convex and combinatorial search spaces, and require a considerable amount of computational effort to find exact optimal solutions [62]. The reliability allocation problems, on the other hand, involve continuous optimization, with a number of classical solution algorithms based on gradient and direct search methods at their disposal; they are thus relatively easier to solve. Examples of the solution algorithms which have been applied in the context of the three optimization problem types are presented in Tables 2 and 3 [142].

Table 2 Different optimization techniques used in reliability optimization of SOOP category
Table 3 Different optimization techniques used in reliability optimization of MOOP category

Tillman et al. [140] extensively reviewed several optimization techniques for system reliability design. However, they reviewed the application of only derivative-based optimization techniques, as metaheuristics had not been applied to reliability optimization problems by that time. Mohan and Shanker [90] applied a random search technique to optimize complex systems. Luus [81] optimized such problems by a nonlinear integer programming procedure. Over the last decade, metaheuristics have also been applied to solve reliability optimization problems. To list a few, Coit and Smith [31, 30] were the first to employ a GA to solve reliability optimization problems. Ravi et al. [114] developed an improved version of nonequilibrium simulated annealing, called INESA, and applied it to a variety of reliability optimization problems. Further, Ravi et al. [115, 116] first formulated various complex system reliability optimization problems, with single and multiple objectives, as fuzzy global optimization problems; they also developed and applied the non-combinatorial version of another metaheuristic, viz. threshold accepting, to solve these problems. Recently, Shelokar et al. [127] applied the ant colony optimization (ACO) algorithm to these problems and obtained results comparable to those reported by Ravi et al. [114]. Vinod et al. [143] applied GAs to Risk Informed In-Service Inspection (RI-ISI), which aims at prioritizing the components for inspection within the permissible risk level, thereby avoiding unnecessary inspections. A new fuzzy MOO method for the optimization decision-making of series and complex system reliability with two objectives was presented by Mahapatra and Roy [82]. Mahapatra [83] considered a series-parallel system to find the optimum system reliability with an additional entropy objective function. Marseguerra et al. [86] applied a GA to solve the reliability problem. Salazar et al. [125, 126] solved the system reliability optimization problem using several EAs and MOEAs. Ravi [113] developed an extended version of the great deluge algorithm and demonstrated its effectiveness in solving reliability optimization problems. Deep and Deepti [35] applied the self-organizing migrating genetic algorithm (C-SOMGA) to optimize such problems. Furthermore, Kuo and Prasad [70] and Kuo and Wan [69] reviewed different reliability optimization and allocation techniques. More recently, Pant et al. [100, 102] and Kumar et al. [67] applied PSO and the cuckoo search algorithm (CSA) to solve reliability optimization problems.

3 Why Particle Swarm Approach to Reliability Optimization?

Reliability optimization problems are NP-hard in nature, so it is quite difficult to achieve an optimal reliability design [19]. The solution of such NP-hard optimization problems using heuristics or exact algorithms is difficult because these problems generate a very large search space, and searching for optimal solutions with exact methods or heuristics is necessarily extremely time consuming; such methods are advantageous mainly when the problem is not large. Therefore, metaheuristic algorithms, such as the cuckoo search algorithm (CSA), the grey wolf optimization algorithm (GWO), ant colony optimization (ACO), the genetic algorithm (GA), differential evolution (DE), particle swarm optimization (PSO), etc., are suitable for solving reliability optimization problems. The main concept of PSO is based on the food searching behavior of bird flocking or fish schooling. When PSO is adopted to solve problems, each particle has its own location and velocity, which determine the flying direction and distance, respectively. Compared with other evolutionary approaches, PSO has the following advantages [21, 53, 58, 120]:

  (i) It has fewer parameters.
  (ii) It is easy to implement.
  (iii) It has fast convergence.

These advantages are well suited to solving reliability optimization problems, because the population of particles in PSO operates simultaneously, so the possibility of paralysis of the whole process is reduced. Different PSO methods have already been successfully applied to reliability optimization problems by Zavala et al. [155], Chen [18], Pandey et al. [98], Levitin et al. [74], Yeh [152], Coelho [23], Zou et al. [160], Pant and Singh [101], Pant et al. [100, 102], etc.
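To tie these points together, the following self-contained sketch (our own minimal gbest PSO with assumed parameter values, not any of the cited implementations) minimizes the unreliability of a small series system by searching over the component reliabilities \(r_{j} \in (0,1)\) for fixed redundancy levels:

import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=0.5, hi=0.999):
    # Minimal gbest PSO minimizing f over the box [lo, hi]^dim.
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]        # personal best positions
    pf = [f(x) for x in X]       # personal best values
    gf = min(pf)
    g = P[pf.index(gf)][:]       # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # stay in bounds
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    g, gf = X[i][:], fx
    return g, gf

# Toy 3-stage series system with fixed (assumed) redundancy levels.
x_fixed = [2, 3, 2]

def unreliability(r):
    # Q_s = 1 - R_s, with stage reliability 1 - (1 - r_j)**x_j.
    R = 1.0
    for rj, xj in zip(r, x_fixed):
        R *= 1.0 - (1.0 - rj) ** xj
    return 1.0 - R

best_r, best_Q = pso(unreliability, dim=3)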