
This book presented detailed, state-of-the-art developments of the emerging socio-inspired metaheuristic technique of Cohort Intelligence (CI). The motivation behind the methodology was also discussed in detail. The methodology was successfully tested and validated by solving several unconstrained problems of varying modality and dimension. The solution quality was quite promising and encouraging in terms of objective function value, robustness, avoidance of local minima, computational time and number of function evaluations. The effect of each individual parameter, such as the sampling-interval reduction factor, the number of candidates and the number of variations, on computational performance was also examined.
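The core CI loop for an unconstrained problem can be illustrated with a minimal sketch: each candidate follows a behavior chosen by roulette wheel and samples fresh qualities in a sampling interval that shrinks by the reduction factor. Parameter names and values here are illustrative, not the book's exact formulation.

```python
import random

def cohort_intelligence(f, bounds, candidates=5, variations=10,
                        reduction=0.95, attempts=200, seed=0):
    """Minimal CI sketch: candidates follow roulette-selected behaviors
    and sample qualities in a shrinking interval (illustrative only)."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(candidates)]
    width = hi - lo
    for _ in range(attempts):
        fs = [f(x) for x in xs]
        # roulette-wheel probabilities favour lower objective values (minimisation)
        inv = [1.0 / (v - min(fs) + 1.0) for v in fs]
        total = sum(inv)
        probs = [v / total for v in inv]
        new_xs = []
        for _ in range(candidates):
            # each candidate decides which behavior to follow
            followed = rng.choices(xs, weights=probs)[0]
            # sample `variations` qualities around the followed behavior
            half = width / 2
            trials = [min(hi, max(lo, followed + rng.uniform(-half, half)))
                      for _ in range(variations)]
            new_xs.append(min(trials, key=f))
        xs = new_xs
        width *= reduction  # sampling-interval reduction factor
    best = min(xs, key=f)
    return best, f(best)

x, fx = cohort_intelligence(lambda x: (x - 2.0) ** 2, (-10.0, 10.0))
```

Shrinking the sampling interval trades exploration for exploitation: a reduction factor close to 1 keeps the search wide for longer, while a smaller factor saturates the cohort faster.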

The book also validated the constraint-handling ability of the CI methodology by solving a variety of well-known test problems, including three mechanical engineering design problems. The objective functions were polynomial, quadratic, cubic and nonlinear. A penalty function approach was incorporated to handle the constraints. In all the problem solutions, the implemented CI methodology produced sufficiently robust results at reasonable computational cost. This also justifies the potential application of CI to a variety of real-world problems.
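A penalty function approach of the kind mentioned above can be sketched as a wrapper that adds a cost proportional to constraint violation, so the search is steered back toward the feasible region. This is the common static-penalty textbook form; the book's exact penalty scheme and coefficient choice may differ.

```python
def penalized(f, constraints, penalty=1e6):
    """Static-penalty wrapper: each constraint is expressed as g(x) <= 0.
    Violations are added to the objective so infeasible points look bad
    to the minimiser (a generic sketch, not the book's exact scheme)."""
    def fp(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + penalty * violation
    return fp

# toy example: minimise x^2 subject to x >= 1, written as 1 - x <= 0
fp = penalized(lambda x: x * x, [lambda x: 1.0 - x])
```

Any unconstrained optimizer, CI included, can then be run on `fp` directly instead of `f`.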

The CI algorithm has been applied to several cases of five combinatorial NP-hard problems. The first was the 0–1 Knapsack Problem (KP), with the number of objects varying from 4 to 75. In all the associated cases, the implemented CI methodology produced satisfactory results at reasonable computational cost. Furthermore, comparison with other contemporary methods showed that the CI solutions are comparable and, for some problems, even better. In addition, to avoid saturation of the cohort at a suboptimal solution and to drive it toward the optimum solution, a generic mechanism of occasionally accepting random behavior was incorporated.
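The random-behavior mechanism for the 0–1 KP can be sketched as follows: candidates are bit vectors, each candidate adopts some qualities of a roulette-selected behavior, and with a small probability it adopts a random behavior instead, so the cohort does not saturate at a suboptimal solution. The parameter names and the bit-mixing rule are illustrative, not the book's exact operators.

```python
import random

def ci_knapsack(values, weights, capacity, candidates=8, attempts=150,
                p_random=0.1, seed=1):
    """CI-flavoured 0-1 knapsack sketch with occasional random behavior
    to escape saturation (illustrative operators, not the book's exact ones)."""
    rng = random.Random(seed)
    n = len(values)

    def value(sol):
        w = sum(wi for wi, s in zip(weights, sol) if s)
        if w > capacity:
            return 0  # infeasible selections score nothing
        return sum(vi for vi, s in zip(values, sol) if s)

    cohort = [[rng.randint(0, 1) for _ in range(n)] for _ in range(candidates)]
    best = max(cohort, key=value)
    for _ in range(attempts):
        vals = [value(c) for c in cohort]
        new_cohort = []
        for c in cohort:
            if rng.random() < p_random:
                # accept a random behavior to jump away from saturation
                followed = [rng.randint(0, 1) for _ in range(n)]
            else:
                followed = rng.choices(cohort, weights=[v + 1 for v in vals])[0]
            # adopt roughly half the qualities of the followed behavior
            child = [fi if rng.random() < 0.5 else ci
                     for fi, ci in zip(followed, c)]
            new_cohort.append(child)
        cohort = new_cohort
        best = max(cohort + [best], key=value)
    return best, value(best)

sol, val = ci_knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50)
```

On this three-object instance the optimum is to take the second and third objects, for a total value of 220.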

Furthermore, the CI algorithm has been applied to the combinatorial NP-hard Traveling Salesman Problem (TSP), with the number of cities varying from 14 to 29. The application of the CI methodology to such combinatorial NP-hard problems was successfully demonstrated. CI incorporating the roulette-wheel approach, as well as best-behavior and random-behavior selection approaches, was successfully proposed. It was demonstrated that always following the best behavior/solution may make the cohort saturate faster, but may also leave it stuck in local minima. In addition, to jump out of possible local minima and drive the cohort toward the global minimum, a generic mechanism of occasionally accepting worse behaviors was incorporated. The encouraging results may help in solving real-world problems of increasing complexity, as the TSP can be generalized to a wide variety of routing and scheduling problems [1]. In addition, the CI approach could be modified to solve the Multiple TSP (MTSP) and the Vehicle Routing Problem (VRP). In this context, the authors see potential real-world applications in distributed communication systems, such as path planning of Unmanned Aerial Vehicles (UAVs) and addressing the ever-growing traffic control problem using Vehicular Ad hoc Networks (VANETs).
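The interplay of best-behavior following, roulette-wheel selection and occasional acceptance of worse behaviors on the TSP can be sketched as below. Each candidate starts from a followed tour, perturbs it with a random swap, and sometimes keeps a worse tour to jump out of local minima. The probabilities and the swap perturbation are illustrative assumptions, not the book's exact operators.

```python
import random

def tour_length(tour, dist):
    """Length of a closed tour under a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def ci_tsp(dist, candidates=6, attempts=300, p_best=0.5, p_worst=0.05, seed=2):
    """CI-flavoured TSP sketch: follow the best tour with probability
    `p_best`, otherwise a roulette-selected one; occasionally accept a
    worse tour (probability `p_worst`) to escape local minima."""
    rng = random.Random(seed)
    n = len(dist)
    cohort = [rng.sample(range(n), n) for _ in range(candidates)]
    best = min(cohort, key=lambda t: tour_length(t, dist))
    for _ in range(attempts):
        lengths = [tour_length(t, dist) for t in cohort]
        weights = [1.0 / L for L in lengths]  # shorter tours are more attractive
        new_cohort = []
        for t in cohort:
            followed = (best if rng.random() < p_best
                        else rng.choices(cohort, weights=weights)[0])
            trial = list(followed)
            i, j = rng.sample(range(n), 2)
            trial[i], trial[j] = trial[j], trial[i]  # random swap perturbation
            if (tour_length(trial, dist) < tour_length(t, dist)
                    or rng.random() < p_worst):  # occasionally accept worse
                new_cohort.append(trial)
            else:
                new_cohort.append(list(t))
        cohort = new_cohort
        best = min(cohort + [best], key=lambda t: tour_length(t, dist))
    return best, tour_length(best, dist)

# toy instance: 5 cities on a line; the optimal round trip has length 8
dist = [[abs(i - j) for j in range(5)] for i in range(5)]
tour, length = ci_tsp(dist)
```

Setting `p_best = 1` makes every candidate chase the current best tour, which saturates quickly but illustrates exactly the premature-convergence risk discussed above.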

In addition to the above NP-hard problems, CI was successfully applied to a new variant of the assignment problem, which has applications in healthcare and supply chain management. The results indicate that the accuracy of the CI solutions is fairly robust and the computational time quite reasonable; the results were compared with the multi-random-start local search (MRSLS) method. Moreover, several cases of a complex combinatorial problem, the sea cargo mix problem, were also successfully solved and likewise compared with the implemented MRSLS. The performance of CI was clearly superior to that of Integer Programming (IP), the specially developed heuristics referred to as HAM and MHA, and MRSLS for most of the problem instances solved. Furthermore, CI was successfully applied to a large-sized cross-border shippers' problem, again with fairly robust solution accuracy and quite reasonable computational time. The usefulness of CI in satisfactorily solving goal programming problems was also demonstrated. It is important to mention that, while solving the combinatorial problems, an inbuilt probability-based constraint-handling approach was devised and deployed to drive the solution toward the feasible region and improve it further.

The book also described in detail the MRSLS applied to the above three problems. The MRSLS implemented here is based on the interchange argument, a valuable technique often used in sequencing, whereby two adjacent elements of a solution are randomly interchanged in the search for better solutions. Our findings are that the performance of CI is clearly superior to that of MRSLS for many of the problem instances solved.
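A minimal MRSLS sketch, restarting local search from several random permutations and keeping only improving adjacent interchanges, looks like this. The toy sequencing objective at the end is an illustrative assumption, not one of the book's problems.

```python
import random

def mrsls(cost, n, starts=20, moves=500, seed=3):
    """Multi-random-start local search sketch: from each random
    permutation, randomly interchange two adjacent elements and keep
    the move only when it lowers `cost` (the interchange argument)."""
    rng = random.Random(seed)
    best_perm, best_cost = None, float("inf")
    for _ in range(starts):
        perm = rng.sample(range(n), n)  # a fresh random start
        c = cost(perm)
        for _ in range(moves):
            i = rng.randrange(n - 1)
            perm[i], perm[i + 1] = perm[i + 1], perm[i]  # adjacent interchange
            c_new = cost(perm)
            if c_new < c:
                c = c_new          # keep the improving move
            else:
                perm[i], perm[i + 1] = perm[i + 1], perm[i]  # undo it
        if c < best_cost:
            best_perm, best_cost = list(perm), c
    return best_perm, best_cost

# toy objective: sum of position * value, minimised by sequencing
# larger values first (illustrative, not one of the book's problems)
vals = [3, 1, 4, 1, 5]
perm, c = mrsls(lambda p: sum(i * vals[j] for i, j in enumerate(p)), 5)
```

For this objective an adjacent interchange improves the cost exactly when it moves the larger value earlier, so the local optimum coincides with the global one, at cost 17.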

CI exhibited great potential for solving a variety of optimization problems, including data clustering. However, preliminary experiments on unconstrained test problems showed that, as the problem size increased, CI could converge slowly and prematurely to local optima. To alleviate these drawbacks, a modified CI (MCI) was proposed; it outperformed CI in terms of both solution quality and convergence speed. In addition, a novel hybrid K-MCI algorithm for data clustering was proposed. This new algorithm exploits the merits of both components: combining K-means with MCI allows it to converge more quickly and prevents it from falling into local optima. The proposed method can be considered an efficient and reliable way to find optimal solutions to clustering problems. In this research, the number of clusters was assumed to be known a priori when solving the clustering problems; the algorithm could therefore be further modified to perform automatic clustering without any prior knowledge of the number of clusters. Combining MCI with other heuristic algorithms to solve clustering problems can be seen as another research direction. Finally, the proposed algorithm may be applied to other practically important problems such as image segmentation [2] and power system dispatch [3].
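The K-means component of such a hybrid can be sketched as a single Lloyd iteration: assign points to their nearest centroid, then move each centroid to the mean of its cluster. In a K-MCI-style hybrid this refinement would be applied to each candidate's centroid set inside the metaheuristic loop; the one-dimensional example below is a simplified illustration of that idea, not the book's implementation.

```python
def kmeans_step(points, centroids):
    """One Lloyd iteration on 1-D data: assign each point to its nearest
    centroid, then move each centroid to its cluster mean (empty clusters
    keep their old centroid)."""
    clusters = [[] for _ in centroids]
    for p in points:
        idx = min(range(len(centroids)), key=lambda k: (p - centroids[k]) ** 2)
        clusters[idx].append(p)
    return [sum(c) / len(c) if c else centroids[k]
            for k, c in enumerate(clusters)]

# two well-separated 1-D clusters; centroids settle near 1.0 and 9.0
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
cents = [0.0, 10.0]
for _ in range(5):
    cents = kmeans_step(points, cents)
```

Because Lloyd iterations only descend to the nearest local optimum of the clustering objective, wrapping them inside a population method such as MCI is what supplies the global search.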

It is important to mention that the guiding principles of CI as an optimization procedure are grounded in Artificial Intelligence (AI) concepts. CI models the self-supervising behavior of a group of people pursuing approximately the same goal. The self-supervising nature and rational behavior of the candidates in the cohort were illustrated, along with the learning process that takes place among the candidates to further improve their individual characteristics/qualities. Furthermore, the inherent ability of the CI algorithm to handle complicated constraints contributes to its applicability to complex real-world problems. In addition, it is evident from the results that the variability, as measured by the standard deviation (SD), in the quality of solutions obtained using CI is commendable and remains almost stable as the problem size increases. This is because, even though the search space grows with problem size, the number of characteristics that a candidate following another candidate's behavior must learn in a single learning attempt does not change; instead, the number of learning attempts needed to improve the candidates' individual solutions and finally reach the cohort's global solution increases.

Some limitations of the CI method should also be identified. The rate of convergence and the quality of the solution depend on parameters such as the number of candidates, the number of variations and the reduction factor. These parameters are derived empirically over numerous experiments, and their calibration requires some preliminary trials. The number of characteristics a candidate attempts to adopt/learn is also an important parameter when dealing with combinatorial optimization problems: if too few characteristics are considered during the learning stage, the method's convergence rate may slow significantly, and the procedure may get stuck in the neighborhood of a local minimum, resulting in premature convergence. Fine-tuning the CI parameters and deciding the number of characteristics a candidate should learn in every learning attempt could be done in an evolutionary and adaptive way, as discussed in [4]. This may also increase solution accuracy, reduce the SD and improve the overall performance of the algorithm. Finally, it should be observed that the initial guess of the candidate solutions can affect the computational time of the algorithm: if the initial candidate solutions are closer to the feasible region, the chances of achieving saturation/convergence and reaching the optimal solution faster are higher.