
1 Introduction

Immunologically inspired computation is an established and rich family of algorithms modelled on the dynamics and information-processing mechanisms that the immune system uses to detect, recognise, learn and remember entities foreign to the living organism [11]. Thanks to these features, immunological algorithms have proven to be successful computational methodologies in search and optimization tasks [4, 20]. Although several immune theories underlie immunologically inspired computation and naturally suggest applications to anomaly detection and classification tasks, one that has proven particularly effective and robust is based on the clonal selection principle. Algorithms based on the clonal selection principle work on a population of immune cells, known as antibodies, that proliferate, i.e. clone themselves (the number of copies depends on how well they detect foreign entities), and then undergo a mutation process, usually at a high rate. This process, biologically called affinity maturation, makes these algorithms well suited to function and combinatorial optimization problems. This statement is supported by several experimental applications [6,7,8, 18], as well as by theoretical analyses that prove their efficiency with respect to several randomized search heuristics [14, 15, 23, 25].

In light of the above, we have designed and developed an immune-inspired hypermutation algorithm, based precisely on the clonal selection principle, to solve a classical combinatorial optimization problem, namely the Weighted Feedback Vertex Set (WFVS) problem.

In addition, we take into account what clearly emerges from the evolutionary computation literature: to obtain good performance and solve hard combinatorial optimization problems, it is necessary to combine metaheuristics with other classical optimization techniques, such as local search, dynamic programming and exact methods. For this reason, we have designed a hybrid immunological algorithm that includes deterministic criteria to refine the solutions found and to help the algorithm converge towards the globally optimal solution.

The hybrid immunological algorithm proposed in this paper, hereafter simply called Hybrid-IA, takes advantage of the immunological operators of cloning, hypermutation and aging to carefully explore the search space and properly exploit the information learned, but it also makes use of local search to improve the quality of the solutions, deterministically trying to remove the vertex that breaks one or more cycles in the graph and has the lowest weight-degree ratio. The many experiments performed have proved the fruitful impact of this greedy idea, since it was almost always able to improve the solutions, leading Hybrid-IA towards the globally optimal solution. To evaluate the reliability and efficiency of Hybrid-IA, many experiments on several different instances have been performed, taken from [2] and following the experimental protocol proposed there. Hybrid-IA was also compared with three other metaheuristics: Iterated Tabu Search (ITS) [3]; eXploring Tabu Search (XTS) [1, 9]; and a Memetic Algorithm (MA) [2]. In all comparisons performed and analysed, Hybrid-IA proved to be competitive and comparable with the best of the three metaheuristics. Furthermore, it is worth emphasizing that Hybrid-IA outperforms the three algorithms on some instances, since, unlike them, it is able to find the global best solution.

2 The Weighted Feedback Vertex Set Problem

Given a directed or undirected graph \(G=(V,E)\), a feedback vertex set of G is a subset \(S \subset V\) of vertices whose removal makes G acyclic. More formally, if \(S \subset V\) we can define the subgraph \(G[S]=(V \backslash S, E_{V\backslash S})\), where \(E_{V\backslash S}=\{(u,v) \in E : u, v \in V \backslash S \}.\) If G[S] is acyclic, then S is a feedback vertex set. The Minimum Feedback Vertex Set Problem (MFVS) is the problem of finding a feedback vertex set of minimum cardinality. If S is a feedback vertex set, we say that a vertex \(v \in S\) is redundant if the induced subgraph \(G[S \backslash \{v\}]=( (V \backslash S) \cup \{v\}, E_{(V \backslash S) \cup \{v\}})\) is still acyclic. It follows that S is a minimal FVS if it does not contain any redundant vertex. It is well known that the decision version of the MFVS is \(\mathcal{NP}\)-complete for general graphs [12, 17] and even for bipartite graphs [24].

If we associate a positive weight w(v) with each vertex \(v \in V\), then the weight of any subset \(S \subseteq V\) is the sum of the weights of its vertices, i.e. \(\sum _{v \in S}w(v).\) The Minimum Weighted Feedback Vertex Set Problem (MWFVS) is the problem of finding a feedback vertex set of minimum weight.

The MWFVS problem is an interesting and challenging task, as it finds application in many fields and real-life tasks, such as (i) operating systems, for preventing or removing deadlocks [22]; (ii) combinatorial circuit design [16]; (iii) information security [13]; and (iv) the study of monopolies in synchronous distributed systems [19]. Many heuristics and metaheuristics have been developed for the unweighted MFVS problem, whilst very few have been developed for the weighted variant.

3 Hybrid-IA: A Hybrid Immunological Algorithm

Hybrid-IA is an immunological algorithm inspired by the clonal selection principle, which represents an excellent example of a bottom-up intelligent strategy in which adaptation operates at the local level, whilst complex and useful behaviour emerges at the global level. The basic idea of this metaphor is how immune cells adapt to bind and eliminate foreign entities, known as antigens (Ag). The key features of Hybrid-IA are the cloning, hypermutation and aging operators, which, respectively, generate new populations centred on higher affinity values; explore the neighbourhood of each solution in the search space; and eliminate old and less promising cells, in order to keep the algorithm from getting trapped in local optima. To these immunological operators a local search strategy is added: by restoring one or more cycles through the re-insertion of a previously removed vertex, it tries to break the cycle, or cycles, again by removing a new vertex that improves the fitness of the solution. Usually the best choice is to select the node with the minimum weight-degree ratio. The existence of one or more cycles is detected via the well-known Depth First Search (DFS) procedure [5]. The aim of the local search is thus to repair the random choices made by the stochastic operators with more locally appropriate ones.

Hybrid-IA is based on two main entities: the antigen, which represents the problem to tackle, and the B cell receptor, which represents a point (configuration) of the search space. Each B cell, in particular, represents a permutation of vertices that determines the order in which nodes are removed: starting from the first position of the permutation, the corresponding node is selected and removed from the graph, and this is repeated iteratively, following the order of the permutation, until an acyclic graph is obtained or, in general, there are no more vertices to remove. Once an acyclic graph is obtained, all redundant vertices are restored to V. The removed vertices then form the set S, i.e. a solution to the problem. This process is outlined in Algorithm 1. Some clarifications about Algorithm 1:

  • the input to the algorithm is an undirected graph \(G=(V,E)\);

  • given any \(X \subseteq V\), by \(d_{X}(v)\) we denote the degree of vertex v in the graph induced by X, i.e. the graph G(X) obtained from G by removing all vertices not in X;

  • if, given any \(X \subseteq V\), a vertex v has degree \(d_{X}(v) \le 1,\) it cannot be involved in any cycle; so, when searching for a feedback vertex set, any such vertex can simply be ignored, or removed from the graph;

  • the algorithm deterministically associates a subset S of the graph vertices with any given permutation of the vertices in V. The sum of the weights of the vertices in S is the fitness value associated with the permutation.

[Algorithm 1: pseudocode figure]
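The decoding described above can be sketched in Python. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the graph is an adjacency-set dictionary, and `is_acyclic` relies on the fact that an undirected graph is acyclic iff repeatedly peeling vertices of degree at most 1 empties it.

```python
def is_acyclic(adj, X):
    """True iff the subgraph of adj induced by the vertex set X is acyclic.
    Peel vertices of induced degree <= 1; a forest peels down to nothing."""
    X = set(X)
    changed = True
    while changed:
        changed = False
        for v in list(X):
            if len(adj[v] & X) <= 1:
                X.discard(v)
                changed = True
    return not X  # any leftover vertices all lie on cycles

def decode(adj, weights, perm):
    """Map a permutation of V to a feedback vertex set S (Algorithm 1 sketch).
    Remove vertices in permutation order (skipping degree <= 1 vertices,
    which cannot lie on a cycle) until the remainder is acyclic, then
    restore redundant vertices. Returns (S, fitness)."""
    remaining = set(adj)
    S = []
    for v in perm:
        if is_acyclic(adj, remaining):
            break
        if len(adj[v] & remaining) <= 1:
            continue  # cannot be involved in any cycle
        remaining.discard(v)
        S.append(v)
    for v in list(S):  # drop redundant vertices
        if is_acyclic(adj, remaining | {v}):
            S.remove(v)
            remaining.add(v)
    return set(S), sum(weights[v] for v in S)
```

For instance, on a triangle {0, 1, 2} with a pendant vertex 3 attached to 0, any permutation yields a one-vertex solution, and the fitness is simply that vertex's weight.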

At each time step t, we maintain a population of d B cells, labelled \(P^{(t)}\). Each permutation, i.e. B cell, is randomly generated in the initialization phase (\(t=0\)) using a uniform distribution. A summary of the proposed algorithm is shown below. Hybrid-IA terminates its execution when the fixed termination criterion is satisfied; for all the experiments presented in this paper, a maximum number of generations has been used.

[Hybrid-IA pseudocode figure]

Cloning Operator: this is the first immunological operator to be applied, and its aim is to reproduce the proliferation mechanism of the immune system. It simply copies each B cell dup times, producing an intermediate population \(P^{(clo)}\) of size \(d \times dup\). When a B cell is cloned, the copy is assigned an age that determines its lifespan, during which it can mature, evolve and improve: from the assigned age it evolves in the population, producing robust offspring, until it reaches the maximum age (a user-defined parameter), i.e. a prefixed maximum number of generations. It is important to highlight that the age assignment, together with the aging operator, plays a key role in the performance of Hybrid-IA [10, 21], since their combination reduces premature convergence and keeps an appropriate diversity among the B cells.
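The cloning step can be sketched as follows; note that the concrete age-assignment scheme is a design choice, and starting clones at age 0 is an assumption of this sketch, not something the paper fixes.

```python
def cloning(population, dup):
    """Cloning operator sketch: copy each B cell dup times, producing
    P^(clo) of size d * dup. Each clone gets its own age counter
    (assumed here to start at 0). Clones are deep enough copies that
    mutating one does not touch its parent."""
    clones = []
    for cell in population:
        for _ in range(dup):
            clones.append({"perm": list(cell["perm"]),
                           "fitness": cell["fitness"],
                           "age": 0})
    return clones
```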

The cloning operator, coupled with the hypermutation operator, performs a local search around the cloned solutions; indeed, introducing a high number of blind mutations produces individuals with better fitness values, which will then be selected to form ever better progenies.

Inversely Proportional Hypermutation Operator: this operator explores the neighbourhood of each clone, taking into account the quality of the fitness of the clone it is working on. It acts on each element of the population \(P^{(clo)}\), performing M mutations on each clone, where M is determined by a law inversely proportional to the clone's fitness value: the higher the fitness value, the lower the number of mutations performed on the B cell. Unlike many evolutionary algorithms, Hybrid-IA uses no fixed mutation probability.

Given a clone \(\varvec{x}\), the number of mutations M that it will undergo is determined by the following mutation potential:

$$\begin{aligned} \alpha = e^{-\rho \hat{f}(\varvec{x})}, \end{aligned}$$
(1)

where \(\alpha \) represents the mutation rate, and \(\hat{f}(\varvec{x})\) is the fitness value normalized in [0, 1]. The number of mutations M is then given by

$$\begin{aligned} M=\lfloor (\alpha \times \ell ) + 1 \rfloor , \end{aligned}$$
(2)

where \(\ell \) is the length of the B cell. From Eq. 2 it is possible to note that at least one mutation occurs on each cloned B cell, even for solutions very close to the optimal one. Since any B cell is represented as a permutation of vertices that determines their removal order, the position occupied by each vertex in the permutation becomes crucial for reaching the global best solution. Consequently, the hypermutation operator adopted is the well-known swap mutation, through which the right permutation, i.e. the right removal order, is sought: for any B cell \(\varvec{x}\), two positions i and j are chosen, and the vertices \(x_i\) and \(x_j\) exchange positions, becoming \(x'_i=x_j\) and \(x'_j=x_i\), respectively.
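Equations 1 and 2, together with the swap mutation, translate directly into code. This is a minimal sketch: the RNG and the default \(\rho\) are illustrative choices.

```python
import math
import random

def num_mutations(ell, f_norm, rho=0.5):
    """Eqs. 1-2: alpha = exp(-rho * f_hat), M = floor(alpha * ell) + 1,
    so every clone undergoes at least one mutation."""
    alpha = math.exp(-rho * f_norm)
    return int(alpha * ell) + 1

def swap_hypermutation(perm, f_norm, rho=0.5, rng=random):
    """Apply M random swap mutations to a copy of the permutation."""
    out = list(perm)
    for _ in range(num_mutations(len(out), f_norm, rho)):
        i, j = rng.randrange(len(out)), rng.randrange(len(out))
        out[i], out[j] = out[j], out[i]
    return out
```

For a B cell of length 10 with \(\rho = 0.5\), a clone with normalized fitness 1 receives \(\lfloor 10\,e^{-0.5}\rfloor + 1 = 7\) swaps, while one with normalized fitness 0 receives the maximum of 11.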

Aging Operator: the main goal of this operator is to help the algorithm jump out of local optima. It simply eliminates the old B cells from the populations \(P^{(t)}\) and \(P^{(hyp)}\): once the age of a B cell exceeds the maximum age allowed (\(\tau _B\)), it is removed from its population independently of its fitness value. As written above, the parameter \(\tau _B\) represents the maximum number of generations a B cell is allowed to remain in the population. In this way the operator maintains a proper turnover of B cells, producing high diversity in the population, which helps the algorithm avoid premature convergence and, consequently, getting trapped in local optima.

An exception may be made for the current best solution, which is kept in the population even if its age exceeds \(\tau _B\). This variant of the aging operator is called the elitist aging operator.

\((\mu + \lambda )\)-Selection Operator: once the aging operator has finished, the best d survivors from the populations \(P_a^{(t)}\) and \(P_a^{(hyp)}\) are selected to generate a temporary population \(P^{(select)}\) that will afterwards undergo local search. For this step the classical \((\mu + \lambda )\)-selection operator has been considered, where, in our case, \(\mu = d\) and \(\lambda = (d \times dup)\). In a nutshell, this operator reduces the offspring B cell population (\(P_a^{(hyp)}\)) of size \(\lambda \ge \mu \) to a new population (\(P^{(select)}\)) of size \(\mu = d.\) Since this selection mechanism identifies the d best elements among the offspring and the old parent B cells, it guarantees monotonicity of the evolutionary dynamics. Nevertheless, it may happen that fewer than the required d B cells survive, i.e. \(d_a < d\); this can easily happen depending on the chosen age assignment and the fixed value of \(\tau _B.\) In this case, the selection mechanism randomly generates \(d-d_a\) new B cells.
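The aging and selection steps above can be sketched together. The dictionary-based B cell representation and the `new_cell` factory for padding are assumptions of this sketch; fitness is minimized, as in WFVS.

```python
def aging(population, tau_b, elitist=True):
    """Aging sketch: drop every B cell whose age exceeds tau_B; in the
    elitist variant the current best survives regardless of its age."""
    if not population:
        return []
    best = min(population, key=lambda c: c["fitness"])  # minimization
    return [c for c in population
            if c["age"] <= tau_b or (elitist and c is best)]

def mu_plus_lambda_selection(parents, offspring, d, new_cell):
    """(mu+lambda)-selection sketch: keep the d best cells from parents
    and offspring together; if fewer than d survived aging, pad with
    fresh random cells produced by the new_cell factory."""
    pool = sorted(parents + offspring, key=lambda c: c["fitness"])
    selected = pool[:d]
    while len(selected) < d:
        selected.append(new_cell())
    return selected
```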

Local Search: the main idea behind this local search is to repair, in a proper and deterministic way, the solutions produced by the stochastic mutation operator. Whilst the hypermutation operator determines the vertices to be removed in a blind and random way, i.e. choosing them independently of their weights, the local search tries to replace one or more of them with a node of smaller weight, thus improving the fitness of that B cell. Given a solution \(\varvec{x}\), all vertices in \(\varvec{x}\) are sorted in decreasing order of weight. Then, starting from the vertex u with the largest weight, the local search iteratively works as follows: u is re-inserted into \(V \setminus S\), generating one or more cycles; via the classical DFS procedure the produced cycles are computed, and one of them (if there is more than one) is taken into account, together with all the vertices involved in it. These vertices are sorted in increasing order of their weight-degree ratio, and the vertex v with the smallest ratio is selected to break the cycle. Of course, v may break several cycles at once. If, after the removal of v, the subgraph becomes acyclic and the fitness improves, then v enters the solution in place of u; otherwise the process is repeated on a new cycle. At the end of the iterations, if the total weight of the newly removed vertices is smaller than before, these vertices replace u in the new solution.
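Two building blocks of this local search can be sketched: the DFS-based detection of a cycle and the choice of the breaking vertex by weight-degree ratio. The full iteration around them is omitted; this is an illustrative reconstruction, not the authors' code.

```python
def find_cycle(adj, X):
    """DFS cycle detection sketch: return the vertices of one cycle in
    the subgraph induced by X, or None if it is acyclic. In an undirected
    DFS every non-tree edge is a back edge to an ancestor, so the cycle
    can be read off the parent pointers."""
    parent = {}
    for start in X:
        if start in parent:
            continue
        parent[start] = None
        stack = [(start, iter(adj[start] & X))]
        while stack:
            v, neighbours = stack[-1]
            for u in neighbours:
                if u == parent[v]:
                    continue            # skip the edge back to the parent
                if u in parent:         # back edge: reconstruct the cycle
                    cycle = [v]
                    while cycle[-1] != u:
                        cycle.append(parent[cycle[-1]])
                    return cycle
                parent[u] = v
                stack.append((u, iter(adj[u] & X)))
                break
            else:
                stack.pop()             # all neighbours of v explored
    return None

def best_breaker(adj, X, cycle, weights):
    """Choose the cycle vertex with the smallest weight-degree ratio."""
    return min(cycle, key=lambda v: weights[v] / len(adj[v] & X))
```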

4 Results

In this section the analysis, experiments and comparisons of Hybrid-IA are presented in order to measure the goodness of the proposed approach. The main goal of these experiments is obviously to prove the reliability and competitiveness of Hybrid-IA with respect to the state-of-the-art, but also to test the efficiency and computational impact of the designed local search. To properly evaluate the performance of the proposed algorithm, a set of benchmark instances proposed in [3] has been considered, including grid, toroidal, hypercube, and random graphs. Besides topology, the instances differ in the number of vertices, the number of edges, and the range of vertex weights. In this way, as suggested in [2], it is possible to inspect the computational performance of Hybrid-IA with respect to graph density and weight range. Further, Hybrid-IA has also been compared with three different algorithms, which today represent the state-of-the-art for the WFVS problem: Iterated Tabu Search (ITS) [3]; eXploring Tabu Search (XTS) [1, 9]; and a Memetic Algorithm (MA) [2]. All three algorithms, and their results, are taken from [2].

Each experiment was performed by Hybrid-IA with population size \(d=100\), duplication parameter \(dup=2\), maximum age \(\tau _B=20\), mutation rate parameter \(\rho =0.5\), and maximum number of generations \(maxgen=300\). Further, each result reported in the tables below is the average over five instances with the same characteristics but different vertex weight assignments. The experimental protocol, where not specified above, has been taken from [2]. It is important to highlight that [2] does not fix a maximum number of generations; instead the authors use a stopping criterion based on a formula (MaxIt) that depends on the density of the graph: the algorithm ends its evolutionary process when it reaches MaxIt consecutive iterations without improvement. Note that this threshold is reset every time an improvement occurs. A simple calculation shows that the 300 generations used in this work are almost always fewer than or equal to the minimum performed by the algorithms presented in [2], considering primarily how easily the fitness improves in the first generations.

4.1 Dynamic Behaviour

Before presenting the experiments and comparisons, the dynamic behavior and learning ability of Hybrid-IA are analysed, together with the computational impact and convergence advantages provided by the developed local search.

The left plot of Fig. 1 shows the dynamic behavior of Hybrid-IA, displaying the curves of (i) the best fitness, (ii) the average fitness of the population, and (iii) the average fitness of the cloned hypermutated population over the generations. For this analysis the squared grid instance S_SG9 has been considered, whose features and weight range are shown in Table 1. From this plot it is possible to see how Hybrid-IA quickly descends, within very few generations, to acceptable solutions, and then oscillates between values close to each other. This oscillatory behavior, mainly in the curve of the cloned hypermutated population, proves that Hybrid-IA maintains good solution diversity, which is surely helpful for exploring the search space. These fluctuations are less pronounced in the curve of the average fitness of the population, thanks to the local search, which provides greater convergence stability. The best fitness curve indicates how the algorithm takes full advantage of the generations, still managing to improve in the last ones. Since this is one of the few instances where Hybrid-IA did not reach the optimal solution, inspecting this curve suggests that by increasing the number of generations, even just a little, Hybrid-IA would likely find the global best. The right plot of Fig. 1 compares the average fitness of the survivors' population (\(P^{(select)}\)) with the one produced by the local search. Although the two curves are nearly parallel, the plot proves the usefulness and benefits of the local search, which is always able to improve the solutions generated by the stochastic immune operators: the curve of the average fitness produced by the local search always lies below the survivors' one.

Fig. 1.

Convergence behavior of the average fitness function values of \(P^{(t)}\), \(P^{(hyp)}\), and the best B cell versus generations on the grid S_SG9 instance (left plot). Average fitness function of \(P^{(select)}\) vs. average fitness function \(P^{(t)}\) on the random S_R23 instance (right plot).

Besides the convergence analysis, it is also important to understand the learning ability of the algorithm, i.e. how much information it gains during the evolutionary process, which affects the performance of any evolutionary algorithm. To analyse the learning process we used the well-known entropic measure Information Gain, which quantifies the information the system discovers during the learning phase [6, 7, 18]. Let \(B_m^t\) be the number of B cells that at time step t have fitness value m; we define the candidate solution distribution function \(f^{(t)}_m\) as the ratio between \(B_m^t\) and the total number of candidate solutions:

$$\begin{aligned} f^{(t)}_m=\frac{B_m^t}{\sum _{m=0}^h B_m^t}=\frac{B_m^t}{d}. \end{aligned}$$
(3)

It follows that the information gain \(K(t, t_0)\) and entropy E(t) can be defined as:

$$\begin{aligned} K(t, t_0) = \sum _m f_m^{(t)} \log ( f^{(t)}_m / f^{(t_0)}_m), \end{aligned}$$
(4)
$$\begin{aligned} E(t) = \sum _{m} f_m^{(t)} \log f_m^{(t)}. \end{aligned}$$
(5)

The gain is the amount of information the system has learned compared to the randomly generated initial population \(P^{(t=0)}\). Once the learning process begins, the information gain increases until it reaches a peak.
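Equations 3 to 5 can be computed directly from the list of fitness values of the current population; the code below is a minimal sketch that mirrors the formulas as stated in the text.

```python
import math
from collections import Counter

def distribution(fitnesses):
    """f_m^(t) of Eq. 3: fraction of the d B cells with fitness value m."""
    d = len(fitnesses)
    return {m: c / d for m, c in Counter(fitnesses).items()}

def information_gain(fit_t, fit_t0):
    """K(t, t0) of Eq. 4, summed over fitness values present at time t.
    Assumes every value seen at t also occurred at t0; a real
    implementation must smooth or skip terms where f_m^(t0) = 0."""
    f_t, f_t0 = distribution(fit_t), distribution(fit_t0)
    return sum(p * math.log(p / f_t0[m]) for m, p in f_t.items())

def entropy(fitnesses):
    """E(t) of Eq. 5 as stated in the text."""
    return sum(p * math.log(p) for p in distribution(fitnesses).values())
```

For example, if the initial population splits evenly between two fitness values and the current population has collapsed onto one of them, the gain is \(\log 2\).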

Fig. 2.

Learning during the evolutionary process. Information gain curves of Hybrid-IA with local search strategy (left plot) and without it (right plot). The inset plots, in both figures, display the relative standard deviations.

Figure 2 shows the information gain obtained with the local search mechanism (left plot) and without it (right plot) when Hybrid-IA is applied to the random S_R23 instance. In both plots the standard deviation is also reported, shown as an inset. Inspecting the plots, it is possible to note that with the local search the algorithm quickly gains information, reaching the highest peak within the first generations, which corresponds exactly to the achievement of the optimal solution. It is important to highlight that the highest peak of the information gain corresponds to the lowest point of the standard deviation, confirming that Hybrid-IA reaches its maximum learning exactly when it shows the least uncertainty. Due to the deterministic approach of the local search, once the highest peak is reached and the globally optimal solution found, the algorithm begins to lose information and shows an oscillatory behavior; the same happens for the standard deviation. The right plot shows that, without the local search, Hybrid-IA gains information more slowly. For this version too, the highest peak of learned information corresponds to the lowest degree of uncertainty, and in this temporal window Hybrid-IA reaches the best solution. Unlike the curve produced with the local search, once the optimal solution is found and the highest information peak is reached, the algorithm starts to lose information as well, but its behavior seems more steady.

Fig. 3.

Information Gain and standard deviation.

The two information gain curves are compared in Fig. 3 (left plot) over all generations; the inset plot zooms in on their behavior over the first 25 generations. From this plot it is clear that the developed local search helps the algorithm learn more information, and more quickly, already from the first generations; the inset plot, in particular, highlights the considerable distance between the two curves. Over all 300 generations, it is possible to see how the local search gives a steadier behavior even after the global optimum is reached. The right plot of Fig. 3 compares the two standard deviations produced by Hybrid-IA with and without the local search. As expected, the standard deviation curve with the local search is taller, as the algorithm gains a higher amount of information than the version without it.

Finally, from the analysis of all these figures, from convergence speed to learning ability, it clearly emerges that the designed local search helps Hybrid-IA achieve an appropriate and correct convergence towards the global optimum, and a greater amount of gained information.

4.2 Experiments and Comparisons

In this subsection all experimental results and comparisons with the state-of-the-art are presented, in order to evaluate the efficiency, reliability and robustness of Hybrid-IA. Many experiments have been performed, and the benchmark instances proposed in [2] have been considered for our tests and comparisons. In particular, Hybrid-IA has been compared with three different metaheuristics: ITS [3], XTS [1, 9], and MA [2]. As written in Sect. 4, the same experimental protocol proposed in [2] has been used. Tables 1 and 2 show the results obtained by Hybrid-IA on different sets of instances: squared grid, non-squared grid, and toroidal graphs in Table 1; hypercube and random graphs in Table 2. Both tables report: the name of the instance (1st column); the number of vertices (2nd); the number of edges (3rd); the lower and upper bounds of the vertex weights (4th and 5th); and the optimal solution \(K^*\) (6th). The following columns report the results of Hybrid-IA (7th) and of the three compared algorithms. It is important to clarify that for the grid graphs (squared and not) n and m indicate the numbers of rows and columns of the grid. The last column of both tables shows the difference between the result obtained by Hybrid-IA and the best result among the three compared algorithms. Further, in each row the best results are reported in boldface.

Table 1. Hybrid-IA versus MA, ITS and XTS on the set of the instances: squared grid; not squared grid and toroidal graphs.

On the squared grid instances (top of Table 1), Hybrid-IA reaches the optimal solution in 8 instances out of 9, unlike MA, which reaches it in all of them. However, on the remaining instance (S_SG7) the solution found by Hybrid-IA is very close to the optimum (\(+0.2\)), and in any case better than those of the other two compared algorithms. On the non-squared grid graphs (middle of Table 1), Hybrid-IA reaches the optimal solution on all 9 instances, outperforming all three algorithms on instance S_NG7, where none of them finds the optimum. On the toroidal graphs (bottom of Table 1), Hybrid-IA again reaches the globally optimal solution in 9 out of 9 instances. Overall, inspecting Table 1, Hybrid-IA shows performance competitive with and comparable to MA, except on instance S_SG7, where it is slightly worse, while it outperforms MA on instance S_NG7, where it reaches the optimum unlike MA. With respect to the other two algorithms, Hybrid-IA clearly outperforms ITS and XTS on all instances.

Table 2. Hybrid-IA versus MA, ITS and XTS on the set of the instances: hypercube and random graphs.

Table 2 presents the comparisons on the hypercube and random graphs, which have larger problem dimensions than the previous ones. Analysing the results on the hypercube instances, it is very clear that Hybrid-IA outperforms all three algorithms on all 9 instances, even reaching the optimal solution on the S_H7 instance, where the three compared algorithms fail. On the random graphs Hybrid-IA again shows results comparable to MA on all instances, even reaching the optimum on instances S_R20 and S_R23, where MA and the other two algorithms fail. Overall, from this table it is easy to assert that Hybrid-IA is comparable with, and sometimes better than, MA on this set of instances as well, beating it on three instances. Extending the analysis to ITS and XTS, it is quite clear that Hybrid-IA outperforms them on all instances, always finding better solutions than these two algorithms.

Finally, from the analysis of the convergence behavior, the learning ability, and the comparisons performed, it is possible to assert that Hybrid-IA is comparable with the state-of-the-art for the MWFVS problem, showing reliable and robust performance and good information-learning ability. This efficiency is due to the combination of the immunological operators, which introduce enough diversity in the search phase, and the designed local search, which refines the solutions via more appropriate and deterministic choices.

5 Conclusion

In this paper we introduced a hybrid immunological algorithm, simply called Hybrid-IA, which takes advantage of the immunological operators (cloning, hypermutation and aging) to carefully explore the search space, introduce diversity, and avoid getting trapped in local optima, and of a local search whose aim is to refine the solutions found through appropriate, deterministic choices.

The algorithm was designed and developed to solve one of the most challenging combinatorial optimization problems, the Weighted Feedback Vertex Set, which consists, given an undirected graph, in finding the subset of vertices of minimum weight whose removal produces an acyclic graph. To evaluate the goodness and reliability of Hybrid-IA, many experiments have been performed, and the results obtained have been compared with three different metaheuristics (ITS, XTS, and MA), which today represent the state-of-the-art. For these experiments a set of benchmark graph instances has been considered, based on different topologies: grid (squared and non-squared), toroidal, hypercube, and random graphs.

An analysis of the convergence speed and learning ability of Hybrid-IA has been performed in order to evaluate its efficiency, as well as the computational impact and reliability provided by the developed local search. From this analysis it clearly emerges that the local search helps the algorithm achieve a correct convergence towards the global optimum, as well as a high amount of information gained during the learning process.

Finally, from the results obtained and the comparisons made, it is possible to assert that Hybrid-IA is competitive and comparable with the WFVS state-of-the-art, showing efficient and robust performance. It reached the optimal solution on all tested instances except one (S_SG7). It is important to point out that Hybrid-IA was the only algorithm to reach the globally optimal solution on 4 instances, unlike the three compared algorithms.