
The methodology of Cohort Intelligence (CI) [14] has been applied successfully to combinatorial problems such as the Knapsack Problem, the Traveling Salesman Problem, and a new variant of the assignment problem referred to as the Cyclic Bottleneck Assignment Problem (CBAP). This chapter discusses a CI solution to the Sea Cargo Mix (SCM) problem, which was originally proposed in [5]. The performance of CI in solving the SCM problem is compared with the Integer Programming (IP) solution as well as a multi-random-start local search (MRSLS) method. In addition, the solution is compared with the Heuristic algorithm for MDMKP (HAM) and the Modified Heuristic algorithm for MDMKP (MHA) [5].

8.1 Sea Cargo Mix Problem

As mentioned before, the Sea Cargo Mix (SCM) problem was originally proposed in [5]. The decision problem consists of choosing a sea cargo shipping schedule of accepted freight bookings over a multi-period planning horizon. The goal is to maximize profit subject to constraints such as the limited available volume capacity, weight capacity, and the number of empty containers available at the port of origin. The mathematical formulation of this problem, which can be viewed as a multi-dimension multiple knapsack problem (MDMKP), is discussed below.

$$ \begin{aligned} & Maximize\,Z = \sum\limits_{1 \le k \le K} {\sum\limits_{{\upeta_{k} \le t \le \tau_{k} }} {v_{k} r_{kt} x_{kt} } } \\ & Subject\,to \\ \end{aligned} $$
(8.1)
$$ \sum\limits_{{k \in \tilde{K}_{t} }} {v_{k} x_{kt} } \le E_{t} , \quad \forall t \in \tilde{T} $$
(8.2)
$$ \sum\limits_{{k \in \tilde{K}_{tj} }} {v_{k} x_{kt} } \le V_{tj} ,\quad \forall t \in \tilde{T} ,\;\; \forall j \in \tilde{J} $$
(8.3)
$$ \sum\limits_{{k \in \tilde{K}_{tj} }} {w_{k} x_{kt} } \le W_{tj} ,\quad \forall t \in \tilde{T},\;\;\forall j \in \tilde{J} $$
(8.4)
$$ \sum\limits_{{\upeta_{k} \le t \le \tau_{k} }} {x_{kt} } \le 1,\quad \forall k \in \tilde{K} $$
(8.5)
$$ x_{kt} \in \left\{ {0,1} \right\},\quad \forall k \in \tilde{K},\quad t \in \left\{ {\upeta_{k} ,\upeta_{k} + 1, \ldots ,\tau_{k} } \right\} $$
(8.6)

where

$$ \begin{aligned} \tilde{K}_{t} & = \left\{ {k:k \in \tilde{K},\quad\upeta_{k} \le t \le \tau_{k} } \right\},\quad \forall t \in \tilde{T}, \\ \tilde{K}_{tj} & = \left\{ {k:k \in \tilde{K},\quad\upeta_{k} \le t \le \tau_{k} ,\upxi_{k} = j} \right\},\quad \forall t \in \tilde{T},\;j \in \tilde{J} \\ \end{aligned} $$

The objective function (8.1) maximizes the total profit generated by all freight bookings accepted over the multi-period planning horizon T. Constraint (8.2) ensures that, in each period, the demand for empty containers at the port of origin does not exceed the number of available empty containers there. Constraint (8.3) ensures that the total volume of cargoes carried to port j in period t does not exceed the total available volume capacity of shipment to port j in period t. Constraint (8.4) likewise ensures that the total weight of cargoes carried to port j in period t does not exceed the total available weight capacity of shipment to port j in period t. Constraint (8.5) stipulates that each cargo is either carried in some period on or before its due date or refused within the time horizon T. Constraint (8.6) states that each cargo is either accepted in its entirety or turned down.

There are J destination ports and T periods in the problem, and each cargo is either to be delivered within its due date or refused to be carried in the planning horizon. Thus, the total number of knapsacks is \( T \times J \). Moreover, for each knapsack, there are three constraint sets, i.e., the set associated with the number of available empty containers, amount of available volume capacity and amount of available weight capacity.
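To make the formulation concrete, the following sketch evaluates objective (8.1) and checks constraints (8.2)-(8.5) for a candidate 0-1 assignment. The chapter's own implementation is in MATLAB; this Python illustration, including all names and the list-based data layout, is an assumption rather than code from [5].

```python
# Illustrative sketch of the SCM/MDMKP model (8.1)-(8.5); all names are assumed.

def profit(x, v, r):
    """Objective (8.1): total profit of the accepted bookings.
    x[k][t] in {0, 1}; v[k]: volume of cargo k; r[k][t]: per-volume profit.
    The shipping window eta_k <= t <= tau_k is assumed to be enforced by
    keeping x[k][t] = 0 outside the window."""
    return sum(v[k] * r[k][t] * x[k][t]
               for k in range(len(x)) for t in range(len(x[k])))

def is_feasible(x, v, w, xi, E, V, W):
    """Constraints (8.2)-(8.5). xi[k]: destination port of cargo k (0-based
    here); E[t]: available empty containers; V[t][j], W[t][j]: capacities."""
    K, T, J = len(x), len(E), len(V[0])
    for t in range(T):
        # (8.2): container demand at the port of origin in period t
        if sum(v[k] * x[k][t] for k in range(K)) > E[t]:
            return False
        for j in range(J):
            to_j = [k for k in range(K) if xi[k] == j]
            # (8.3): volume capacity of shipment to port j in period t
            if sum(v[k] * x[k][t] for k in to_j) > V[t][j]:
                return False
            # (8.4): weight capacity of shipment to port j in period t
            if sum(w[k] * x[k][t] for k in to_j) > W[t][j]:
                return False
    # (8.5): each cargo is shipped at most once, or refused
    return all(sum(row) <= 1 for row in x)
```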

8.2 Cohort Intelligence for Solving Sea Cargo Mix (SCM) Problem

In the context of the CI algorithm presented in Chap. 2, the elements of the cargo assignment set \( C = k_{t}^{{\upxi_{k} }} \), formed by assigning every cargo \( k,\;k \in \left\{ {1,2, \ldots ,K} \right\} \) to a period \( t \in \left\{ {1,2, \ldots ,T} \right\} \) in which it is shipped to its port of destination \( \upxi_{k} \), are considered the characteristics/attributes/qualities of a cohort candidate. The port of destination \( \upxi_{k} \) for every cargo \( k \in \left\{ {1,2, \ldots ,K} \right\} \) is selected based on the condition below.

$$ \begin{array}{*{20}l} {\upxi_{k} = j , \quad if\;[K/J] \times (j - 1) < k \le [K/J] \times j,} \hfill & {for\;j = 1,2, \ldots ,J - 1} \hfill \\ {\upxi_{k} = J, \quad if\;[K/J] \times (J - 1) < k \le K,} \hfill & {\text{otherwise}} \hfill \\ \end{array} $$
(8.7)
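As an illustration, rule (8.7) transcribes directly into code; the following is a sketch (with 1-based cargo indices as in the text), not the chapter's MATLAB implementation:

```python
def destination(k, K, J):
    """Rule (8.7): block-assign cargo k (1-based) to a destination port.
    The first J-1 ports each receive floor(K/J) consecutive cargoes;
    port J takes the remainder."""
    block = K // J                      # [K/J] in the text
    for j in range(1, J):               # j = 1, ..., J-1
        if block * (j - 1) < k <= block * j:
            return j
    return J                            # otherwise

# e.g. K = 10 cargoes, J = 3 ports -> ports 1,1,1,2,2,2,3,3,3,3
ports = [destination(k, 10, 3) for k in range(1, 11)]
```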

The CI algorithm begins with the initialization of the number of cohort candidates S, the number of variations Y, the cargo assignment set \( C^{s} \) of every candidate \( s,\;\left( {s = 1, \ldots ,S} \right) \), the convergence parameter ε, and the maximum number of allowable learning attempts \( L_{max} \).

In the cohort, every candidate \( s,\left( {s = 1, \ldots ,S} \right) \) randomly assigns every cargo \( c_{k} ,\;k \in \left\{ {1,2, \ldots ,K} \right\} \) to a period \( t \in \left\{ {1,2, \ldots ,T} \right\} \) in which it is shipped to destination \( \upxi_{k} \), forming a cargo assignment set (behavior) \( C^{s} = k_{t}^{{s,\upxi_{k} }} \); the associated per-volume profit is calculated as \( R^{s} = \sum\nolimits_{k = 1}^{K} {\sum\nolimits_{t = 1}^{T} {r_{k,t}^{s} } } \).

Step 1:

(Constraint handling) Since this is a maximization problem, the probability associated with the per-volume profit \( R^{s} \) of every candidate is calculated as follows:

$$ p_{R}^{s} = \frac{{R^{s} }}{{\mathop \sum \nolimits_{s = 1}^{S} R^{s} }}, \quad \left( {s = 1, \ldots ,S} \right) $$
(8.8)

The following constraints are involved:

  1. the demand for empty containers \( \sum\nolimits_{k} {v_{k,t}^{s} } \) at the port of origin should be less than or equal to the number of available empty containers \( E_{t} \) at the port of origin in each period \( t \in \left\{ {1,2, \ldots ,T} \right\} \),

  2. the total volume of cargoes \( \sum\nolimits_{k} {v_{k,j}^{s} } \) carried to port \( j \in \left\{ {1,2, \ldots ,J} \right\} \) in period \( t \in \left\{ {1,2, \ldots ,T} \right\} \) should be less than or equal to the total available volume capacity \( V_{t,j} \), and

  3. the total weight of cargoes \( \sum\nolimits_{k} {w_{k,j}^{s} } \) carried to port \( j \in \left\{ {1,2, \ldots ,J} \right\} \) in period \( t \in \left\{ {1,2, \ldots ,T} \right\} \) should be less than or equal to the corresponding total available weight capacity \( W_{t,j} \).

Kulkarni and Shabir [3] propose a modified CI approach for solving knapsack problems that uses probability distributions to handle constraints; that approach is also adopted here. For each of the three constraint types described above, a probability distribution is developed (refer to Fig. 8.1) and the probability is calculated based on the following rules:

Fig. 8.1 Probability distributions for constraint handling

  1. If \( 0 \le \sum\nolimits_{k} {v_{k,t}^{s} } \le E_{t} ,\;\forall t \), then, based on the probability distribution presented in Fig. 8.1a, \( p_{{E_{t} }}^{s} = slope_{1,E_{t}} \times \left( {\sum\nolimits_{k} {v_{k,t}^{s} } - E_{t} } \right) \); else \( p_{{E_{t} }}^{s} = slope_{1,E_{t}} \times \left( {0.001\;\% E_{t} } \right) \).

  2. If \( 0 \le \sum\nolimits_{k} {v_{k,j}^{s} } \le V_{t,j} ,\;\forall t, \forall j \), then, based on the probability distribution presented in Fig. 8.1b, \( p_{{V_{t,j} }}^{s} = slope_{{1,V_{t,j} }} \times \left( {\sum\nolimits_{k} {v_{k,j}^{s} } - V_{t,j} } \right) \); else \( p_{{V_{t,j} }}^{s} = slope_{{1,V_{t,j} }} \times \left( {0.001\;\% V_{t,j} } \right) \).

  3. If \( 0 \le \sum\nolimits_{k} {w_{k,j}^{s} } \le W_{t,j} ,\;\forall t, \forall j \), then, based on the probability distribution presented in Fig. 8.1c, \( p_{{W_{t,j} }}^{s} = slope_{{1,W_{t,j} }} \times \left( {\sum\nolimits_{k} {w_{k,j}^{s} } - W_{t,j} } \right) \); else \( p_{{W_{t,j} }}^{s} = slope_{{1,W_{t,j} }} \times \left( {0.001\;\% W_{t,j} } \right) \).

As represented in Fig. 8.1, \( slope_{1,E_{t}} \), \( slope_{{1,V_{t,j} }} \) and \( slope_{{1,W_{t,j} }} \) represent the slopes of the lines passing through the points \( \left( {0,1} \right) \) and \( \left( {E_{t} ,0} \right) \), \( \left( {0,1} \right) \) and \( \left( {V_{t,j} ,0} \right) \), and \( \left( {0,1} \right) \) and \( \left( {W_{t,j} ,0} \right) \), respectively. The overall (total) probability of candidate \( s,\left( {s = 1, \ldots ,S} \right) \) being followed is calculated as follows:

$$ p^{s} = \left( {p_{R}^{s} + \sum\nolimits_{t} {p_{{E_{t} }}^{s} } + \sum\nolimits_{t} {\sum\nolimits_{j} {p_{{V_{t,j} }}^{s} } } + \sum\nolimits_{t} {\sum\nolimits_{j} {p_{{W_{t,j} }}^{s} } } } \right) $$
(8.9)

It is clear from the above probability rules that a candidate whose behavior/solution/cargo assignment has a better objective value and constraint values closer to the boundaries will have a higher probability of being followed.
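Combining Eq. (8.8), the three slope rules and Eq. (8.9), a hedged Python sketch of the probability computation might read as follows; the aggregate-load arrays and helper names are assumptions:

```python
def line_prob(usage, cap):
    """Probability from a line through (0, 1) and (cap, 0) (Fig. 8.1).
    Within [0, cap] the rules give slope * (usage - cap) = 1 - usage/cap;
    beyond cap a small negative penalty slope * (0.001 % of cap) is used."""
    slope = -1.0 / cap
    if 0 <= usage <= cap:
        return slope * (usage - cap)
    return slope * (0.00001 * cap)      # 0.001 % of cap, as in the rules

def total_prob(R, R_all, usage_E, E, usage_V, V, usage_W, W):
    """Eq. (8.8) plus constraint terms, summed as in Eq. (8.9).
    R: this candidate's per-volume profit; R_all: profits of all S candidates;
    usage_E[t], usage_V[t][j], usage_W[t][j]: this candidate's aggregate loads."""
    p = R / sum(R_all)                                  # Eq. (8.8)
    p += sum(line_prob(usage_E[t], E[t]) for t in range(len(E)))
    p += sum(line_prob(usage_V[t][j], V[t][j])
             for t in range(len(V)) for j in range(len(V[0])))
    p += sum(line_prob(usage_W[t][j], W[t][j])
             for t in range(len(W)) for j in range(len(W[0])))
    return p                                            # Eq. (8.9)
```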

Step 2:

Every candidate generates Y new variations of its cargo assignment using two operators, referred to as ‘learning from others’ and ‘introspection’ (a sketch of both follows the list):

  1. Learning from others: every candidate \( s,\;\left( {s = 1, \ldots ,S} \right) \), using the roulette wheel approach [14], selects a candidate \( \hat{s} \in \left( {1, \ldots ,S} \right) \) (not known in advance) in the cohort to follow, i.e. it incorporates an element from within \( c^{\hat{s}} \) into its existing cargo assignment \( c^{s} \). More specifically, a quality from within \( c^{\hat{s}} \) is selected randomly, and the selected element is identified in \( c^{s} \) along with its location. The candidate then swaps this element with the element at the location in \( c^{s} \) corresponding to its current location in \( c^{\hat{s}} \). This way every candidate \( s,\;\left( {s = 1, \ldots ,S} \right) \) generates \( Y/2 \) cargo assignments.

  2. Introspection: in addition, every candidate \( s,\;\left( {s = 1, \ldots ,S} \right) \) randomly selects an element from within one of its periods \( t,\;\left( {t = 1, \ldots ,T} \right) \) and relocates it to another period. This way every candidate \( s,\;\left( {s = 1, \ldots ,S} \right) \) generates a further \( Y/2 \) cargo assignments.

This way every candidate forms a total of Y new variations \( C^{s,Y} = \left\{ {c^{s,1} , \ldots ,c^{s,y} , \ldots ,c^{s,Y} } \right\} \) and computes the associated per-volume profits and constraint functions.
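A minimal sketch of the two operators, assuming a cargo assignment is stored as a list mapping each cargo to its period; the swap in ‘learning from others’ is one plausible reading of the description above, not necessarily the chapter's exact move:

```python
import random

def learn_from_others(c_s, c_hat):
    """'Learning from others' (one plausible reading): adopt the period that a
    randomly chosen cargo k occupies in the followed assignment c_hat, by
    swapping k with a cargo that currently sits in that period in c_s.
    c_s, c_hat: lists mapping cargo index -> assigned period (1..T)."""
    new = list(c_s)
    k = random.randrange(len(c_hat))            # randomly chosen quality of c_hat
    target = c_hat[k]                           # its location (period) in c_hat
    peers = [m for m, t in enumerate(new) if t == target and m != k]
    if peers:                                   # swap keeps period loads intact
        m = random.choice(peers)
        new[k], new[m] = new[m], new[k]
    else:
        new[k] = target                         # nothing to swap with: relocate
    return new

def introspection(c_s, T):
    """'Introspection': relocate one randomly chosen cargo to another period."""
    new = list(c_s)
    k = random.randrange(len(new))
    new[k] = random.choice([t for t in range(1, T + 1) if t != new[k]])
    return new
```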

Step 3:

As discussed in Step 1, every candidate \( s,\;\left( {s = 1, \ldots ,S} \right) \) calculates its corresponding probability vector \( P^{s,Y} = \left\{ {p^{s,1} , \ldots ,p^{s,y} , \ldots ,p^{s,Y} } \right\} \). Then, based on the feasibility-based rules shown below, the candidate accepts or rejects the solution associated with the maximum total probability value, i.e. \( max\left\{ {p^{s,1} , \ldots ,p^{s,y} , \ldots ,p^{s,Y} } \right\} \).

The feasibility-based rules are as follows (a sketch of the resulting accept/reject test follows the two lists):

Accept the current behavior/solution if:

  1. the cargo assignment in the previous learning attempt is feasible and the current behavior/cargo assignment is also feasible with improved per-volume profit,

  2. the cargo assignment in the previous learning attempt is infeasible and the current behavior/cargo assignment is feasible, or

  3. the cargo assignment in the previous learning attempt is infeasible and the current behavior/cargo assignment is also infeasible with an improved maximum total probability value.

Otherwise, reject the current behavior/solution and retain the previous one if:

  1. the cargo assignment in the previous learning attempt is feasible and the current cargo assignment is infeasible,

  2. the cargo assignment in the previous learning attempt is feasible and the current cargo assignment is also feasible with worse per-volume profit, or

  3. the cargo assignments in the previous as well as the current learning attempt are infeasible and the total probability value is lower than in the previous learning attempt.
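The accept/reject test implied by these rules can be condensed as follows; this sketch assumes ‘value’ denotes the per-volume profit when both assignments are feasible and the total probability (8.9) when both are infeasible:

```python
def accept(prev_feasible, prev_value, curr_feasible, curr_value):
    """Feasibility-based rules of Step 3. Returns True to accept the
    current behavior/solution, False to retain the previous one."""
    if curr_feasible and not prev_feasible:
        return True                      # accept rule 2: regained feasibility
    if curr_feasible and prev_feasible:
        return curr_value > prev_value   # accept rule 1 / reject rule 2
    if prev_feasible and not curr_feasible:
        return False                     # reject rule 1: lost feasibility
    return curr_value > prev_value       # accept rule 3 / reject rule 3
```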

After the completion of Step 3, a cohort with S updated cargo assignments \( \left\{ {c^{1} , \ldots ,c^{s} , \ldots ,c^{S} } \right\} \) is available.

Step 4:

If either of the two criteria listed below is met, accept the best available cargo assignment from within \( \left\{ {c^{1} , \ldots ,c^{s} , \ldots ,c^{S} } \right\} \) in the cohort as the final solution \( c^{*} \) and stop; otherwise continue to Step 1 (a sketch of this stopping test follows the list):

  (a) the maximum number of learning attempts \( L_{max} \) is exceeded, or

  (b) the cohort is saturated, i.e. the cohort candidates converge to the same cargo assignment for a predefined number of successive learning attempts.
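A possible form of this stopping test, where the saturation window `window` is an assumed parameter and ε is the convergence parameter initialized earlier:

```python
def should_stop(attempt, L_max, best_profits, window=10, eps=1e-6):
    """(a) learning attempts exhausted, or (b) cohort saturated: the best
    profits of the last `window` attempts are essentially identical."""
    if attempt >= L_max:
        return True
    recent = best_profits[-window:]
    return len(recent) == window and max(recent) - min(recent) <= eps
```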

8.3 Numerical Experiments and Results

The following notation is used to describe the results of our numerical experiments:

\( N_{v} \): Number of decision variables in the problem

\( N_{c} \): Number of constraints in the problem

\( N_{in} \): Number of tested instances

\( U \): Upper bound

\( I \): Integer programming (IP) solution (branch-and-bound method)

\( L \): LP relaxation

\( H \): Heuristic algorithm for MDMKP (HAM) (refer to [5])

\( M \): Modified heuristic algorithm for MDMKP (MHA) (refer to [5])

\( CI \): Cohort intelligence (CI) method

\( MRSLS \): Multi-random-start local search

\( g_{XZ} \): Average percentage gap between the best objective values of the solutions obtained using methods X and \( Z \)

\( g_{XZ}^{\sim} \): Average percentage gap between the average objective values of the solutions obtained using methods X and \( Z \)

\( g_{XZ}^{\wedge} \): Worst percentage gap between the worst objective values of the solutions obtained using methods X and \( Z \)

\( t_{X} \): Average computational time (in seconds) of algorithm \( X \)

\( S_{{t_{CI} }} \): Standard deviation of the CPU time for the CI method

\( S_{XZ} \): Standard deviation of the percentage gap between the objective values of the solutions obtained using methods X and \( Z \)

The CI approach for solving the SCM problem discussed in Sect. 8.1 is coded in MATLAB 7.7.0 (R2008b). The simulations are run on a Windows platform with an Intel Core 2 Quad CPU (2.6 GHz) and 4 GB of memory. For this model, we solve 18 distinct cases. These cases, originally proposed in [5], are presented in Tables 8.1, 8.2 and 8.3. For every case, 10 instances are generated, and every instance is solved 10 times using the CI method. The instances are generated as suggested in [5] (a generator sketch follows). The per-volume profit \( r_{k,t} \) for cargo \( k,\, \left( {k = 1, \ldots ,K} \right) \) shipped in period \( t, \,\left( {t = 1, \ldots ,T} \right) \) is uniformly generated in the interval \( \left[ {0.01,1.01} \right] \). The volume \( v_{k} \) and weight \( w_{k} \) of every cargo \( c_{k} , \,\left( {k = 1, \ldots ,K} \right) \) are uniformly generated from the interval \( \left[ {100, 200} \right] \). The number of available empty containers \( E_{t} \) at the port of origin in each period \( t \in \left\{ {1,2, \ldots ,T} \right\} \) is uniformly generated from the interval \( \left[ {100 \times \left( {K/T} \right), 200 \times \left( {K/T} \right)} \right] \), and the total volume \( V_{t,j} \) and weight \( W_{t,j} \) capacities for cargoes carried to port \( j \in \left\{ {1,2, \ldots ,J} \right\} \) in period \( t \in \left\{ {1,2, \ldots ,T} \right\} \) are uniformly generated from the same interval \( \left[ {100 \times \left( {K/T} \right), 200 \times \left( {K/T} \right)} \right] \).
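For reference, the sketch below mirrors the instance generation just described (in Python rather than the chapter's MATLAB; the function name, seeding and return layout are assumptions):

```python
import random

def generate_instance(T, J, K, seed=0):
    """Random SCM instance as described in the text (after [5])."""
    rng = random.Random(seed)
    r = [[rng.uniform(0.01, 1.01) for _ in range(T)] for _ in range(K)]  # r[k][t]
    v = [rng.uniform(100, 200) for _ in range(K)]                        # volumes
    w = [rng.uniform(100, 200) for _ in range(K)]                        # weights
    lo, hi = 100 * K / T, 200 * K / T
    E = [rng.uniform(lo, hi) for _ in range(T)]                          # containers
    V = [[rng.uniform(lo, hi) for _ in range(J)] for _ in range(T)]      # V[t][j]
    W = [[rng.uniform(lo, hi) for _ in range(J)] for _ in range(T)]      # W[t][j]
    return r, v, w, E, V, W

# e.g. one small-scale case
r, v, w, E, V, W = generate_instance(T=2, J=2, K=40)
```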

Table 8.1 Results for small scale test problems
Table 8.2 Results for medium scale test problems
Table 8.3 Results for large scale test problems

The CI parameters, namely the number of candidates S and the number of variations Y, are chosen to be 3 and 15, respectively. The CI saturation/convergence plot for one problem instance, given by \( (T, J, K) = (4, 13, 5479) \), is presented in Fig. 8.2. The plot exhibits the self-adaptive learning behavior of every candidate in the cohort. Initially, the distinct behavior/solution of every individual candidate can be easily distinguished. The behavior/solution here refers to the total profit generated by all freight bookings accepted in the multi-period planning horizon T. As each candidate adopts the qualities of other candidates to improve its own behavior/solution, the behavior of the entire cohort saturates/converges to an improved solution.

Fig. 8.2 Saturation/convergence of the cohort for an instance of the SCM problem

The best and average CI objective function values for every case are compared with the associated upper bound (UB) obtained by solving the LP relaxation of the problem, with the integer programming (IP) solution, and with the problem-specific heuristic algorithm for MDMKP (HAM) and modified heuristic algorithm for MDMKP (MHA) developed in [5]. The numerical results are presented in Tables 8.1, 8.2 and 8.3, along with a graphical illustration in Fig. 8.3. It is important to mention here that IP is not able to solve large-scale SCM problems.

Fig. 8.3 Illustration of CI, IP, MRSLS and UB solution comparison

It is evident from the results in Tables 8.1, 8.2 and 8.3 and the plots in Fig. 8.3a, d that, for small-scale SCM problems, the CI method produces a solution that is fairly close to the IP and UB solutions. The gap gradually increases as the problem size grows; however, the worst gap between the best CI solution and the corresponding IP solution \( \left( {g_{ICI} } \right) \) and UB solution \( \left( {g_{UCI} } \right) \) is within 1.0459 % of the reported IP solution and 4.0405 % of the reported UB solution, respectively. Similarly, the worst gap between the average CI solution and the corresponding IP \( \left( {g_{ICI}^{\sim} } \right) \) and UB \( \left( {g_{UCI}^{\sim} } \right) \) solutions is within 2.2682 % of the reported IP solution and 5.5827 % of the reported UB solution, respectively. Also, the percent gap between the worst CI solution and the corresponding IP solution \( \left( {g_{ICI}^{\wedge} } \right) \) is within 3.0198 % of the reported IP solution, and that with the corresponding UB solution \( \left( {g_{UCI}^{\wedge} } \right) \) is within 7.1465 % of the reported UB solution.

Furthermore, as shown in Tables 8.1, 8.2 and 8.3 and Fig. 8.3i, j, even though the standard deviation (SD) of the percent gap between the CI solution and the corresponding IP \( \left( {S_{ICI} } \right) \) and UB \( \left( {S_{UCI} } \right) \) solutions increases with the problem size, the worst SD is 0.917. Moreover, Tables 8.1, 8.2 and 8.3 and Fig. 8.3f also show that the SD \( \left( {S_{{t_{CI} }} } \right) \) of the CPU time for solving small- and medium-scale problems is within 0.046 and 0.708, respectively; for large-scale problems it is within 4.940. This is because the search space increases with problem size.

For every candidate, the number of characteristics to be learnt in a learning attempt from the candidate being followed does not change. This results in a different number of learning attempts being required for individual candidates to improve their behavior/solution and eventually reach the saturation/convergence state. However, it is important to mention that the overall SD obtained by solving the entire problem set is quite reasonable, which lends support to the robustness of the algorithm.

This demonstrates that, even though the magnitudes of \( S_{ICI} \), \( S_{UCI} \) and \( S_{{t_{CI} }} \) increase with problem size, CI is able to produce solutions with reasonable accuracy for every case of the problem. In addition, the CI method achieves the optimum solution for medium- and large-scale problems in significantly less CPU time (refer to Fig. 8.3h). This demonstrates the ability of CI to solve large problems efficiently and highlights its competitiveness with the IP approach as well as the heuristics HAM and MHA discussed in [5].

In addition to the above, CI's performance is compared with that of a multi-random-start local search (MRSLS) applied to the SCM problem. The proposed MRSLS follows a pairwise interchange approach similar to the one used for the CBAP discussed in Chap. 7. For each of the problem instances suggested in [5], a solution is first constructed. Then, in every successive learning attempt, two time periods are selected randomly, a set of containers associated with each period is randomly chosen, and the positions of these two sets are interchanged (swapped) (a sketch of this move follows). The MRSLS for every individual case of the SCM problem is run 50 times with different initializations. For a meaningful comparison, every MRSLS run is initialized in the neighborhood of the CI's starting point and is run for exactly the same time as the corresponding average CPU time the CI method takes to solve that case. The acceptance of the resulting solution in every learning attempt depends on the following feasibility-based rules (see [6] for a detailed discussion): (1) if the existing solution is infeasible and the resulting solution has a smaller constraint violation, the resulting solution is accepted; (2) if the existing solution is infeasible and the resulting solution is feasible, the resulting solution is accepted; (3) if the existing solution is feasible and the resulting solution is also feasible with an improved objective function value Z, the resulting solution is accepted. If none of these conditions is satisfied, the existing solution is retained and the resulting solution is discarded.
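A sketch of this interchange move, assuming each period's accepted cargoes are stored as a set of ids (all names are illustrative):

```python
import random

def mrsls_step(assign, T, rng=random):
    """Pairwise interchange: pick two periods at random, randomly choose a
    set of cargoes in each, and swap the two sets between the periods.
    assign: dict mapping period (1..T) -> set of cargo ids."""
    t1, t2 = rng.sample(range(1, T + 1), 2)
    def pick(s):
        return set(rng.sample(sorted(s), rng.randint(0, len(s))))
    a, b = pick(assign[t1]), pick(assign[t2])
    assign[t1] = (assign[t1] - a) | b       # interchange the two sets
    assign[t2] = (assign[t2] - b) | a
    return assign
```

The resulting assignment is then accepted or rejected according to the feasibility-based rules (1)-(3) above.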

It is important to mention that, of the 50 MRSLS runs for each of the SCM problems under study, only a few of the solutions obtained are feasible; most lie outside the feasible region. This is because the starting solution of every MRSLS run is chosen randomly and may be infeasible, and the MRSLS may not discover a feasible solution during the entire run. Therefore, only the best of the feasible solutions are considered for a meaningful comparison with the CI approach. From Tables 8.4, 8.5 and 8.6 as well as Fig. 8.3a, k, it is clear that the percentage gap between the solutions obtained using MRSLS and CPLEX grows significantly faster with problem size than the gap between CPLEX and CI. In addition, the percentage gap between the MRSLS solution and the LP relaxation for each case is considerably larger than that between CI and the LP relaxation. In short, for the SCM problem, CI achieves better performance than the MRSLS implemented for this model, especially when the problem size is large.

Table 8.4 MRSLS results comparison for small scale test problems
Table 8.5 MRSLS results comparison for medium scale test problems
Table 8.6 MRSLS results comparison for large scale test problems

8.4 Conclusions

The emerging optimization technique of cohort intelligence (CI) is successfully applied to a complex combinatorial problem, the sea cargo mix (SCM) problem, for which a problem-specific CI algorithm is developed. The results indicate that the accuracy of the solutions obtained using CI is fairly robust and the computational time is quite reasonable. The chapter also describes the application of an MRSLS to several cases of the problem. The MRSLS implemented here is based on the interchange argument, a valuable technique often used in sequencing, whereby the elements of two adjacent solutions are randomly interchanged in the process of searching for better solutions. Our findings are that the performance of CI is clearly superior to that of IP, HAM and MHA as well as the MRSLS for most of the problem instances solved.

In agreement with the no-free-lunch theorem [7], no algorithm can be expected to be directly applicable to all problem types unless it is enhanced by incorporating suitable techniques or heuristics. The CI method may likewise benefit from certain performance-enhancing techniques when applied to different classes of problems. A mechanism for solving multi-objective problems is currently being developed, which may prove helpful in transforming the model's constraints into objectives/criteria (see [7] for new developments in this area). This can help reduce the dependency on the quality of the candidates' initial guess.