1 Introduction

The SCP is one of Karp's 21 NP-complete problems and models a wide variety of optimization scenarios in different fields. Since its formulation in the 1970s it has been used, for example, to minimize material losses in the metallurgical industry [1], to schedule crews in urban transportation planning [2], for safety and robustness of data networks [3], for targeting of public policies [4], and in structural calculations for construction [5].

Consider a binary matrix A of m rows and n columns, with elements \(a_{ij}\), and a cost vector C (\(c_j\)) of n elements containing the cost assigned to each column. The SCP can then be defined as:

$$\begin{aligned} \text{ Minimize } \sum _{j = 1}^{n} c_j x_j \end{aligned}$$
(1)

subject to:

$$\begin{aligned}&\sum \nolimits _{j = 1}^{n}\ a_{ij} x_{j} \ge 1, \ \forall \ i \in \{1,...,m\}\\&\quad \, x_{j} \in \{0,1\}, \ j \in \{1,...,n\} \end{aligned}$$
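To make the definition concrete, a minimal sketch of evaluating feasibility and cost for a toy SCP instance (the instance and function names are illustrative, not from the paper):

```python
import numpy as np

# Toy instance: 4 rows (elements to cover), 5 columns (candidate sets).
A = np.array([
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 1, 1],
])
c = np.array([3, 2, 4, 1, 5])  # cost of each column

def is_feasible(x):
    """Every row must be covered by at least one selected column."""
    return bool(np.all(A @ x >= 1))

def cost(x):
    """Objective (1): total cost of the selected columns."""
    return int(c @ x)

x = np.array([0, 1, 1, 0, 0])  # select columns 2 and 3 (1-indexed)
```

Here `A @ x` counts, for each row, how many selected columns cover it, so the constraint check is a single vectorized comparison.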

This problem was introduced in 1972 by Karp [6] and is used to optimize the placement of elements that must provide spatial coverage, such as community services, telecommunication antennas, and others.

The present work applies a strategy based on a binary algorithm inspired by black holes to solve the SCP, developing operators that implement analogues of some characteristics of these celestial bodies to support the behavior of the algorithm and improve the search for the optimum. This type of algorithm was first presented by Abdolreza Hatamlou in September 2012 [7], with some later publications covering applications and improvements. This paper details the methodology, the developed operators, the experimental results, and the execution parameters, and offers brief conclusions about both the original version and the proposed improvements.

2 Black Holes

Black holes are the result of the collapse of a massive star that, after passing through several intermediate stages, is transformed into a body so dense that it bends the surrounding space with its immense gravity. They are called “black holes” because not even light escapes their attraction, which makes them undetectable in the visible spectrum; they are also known as “singularities”, since traditional physics loses meaning inside them. Because of their immense gravity, they tend to be orbited by other stars in binary or multiple systems, consuming a small part of the mass of the bodies in their orbit [8]. When a star or any other body approaches a black hole past what is called the “event horizon”, it collapses into the interior and is completely absorbed, without any possibility of escape, since all its mass and energy become part of the singularity (Fig. 1). This is because at that point the escape velocity equals the speed of light [8].

Fig. 1.
figure 1

Event horizon in a black hole

On the other hand, black holes also generate a type of radiation called “Hawking radiation”, in honor of its discoverer. This radiation has a quantum origin and implies a transfer of energy from the event horizon of the black hole to its immediate surroundings, causing a slight loss of mass of the dark body and an emission of additional energy toward nearby objects [9].

3 Algorithm

The algorithm presented in September 2012 by Hatamlou [7] approaches the problem by evolving a set of stars called the “universe”, using a population-based algorithm similar to those used in genetic techniques or particle swarms. It proposes rotating the universe around the star with the best fitness, i.e., the one with the lowest value of a defined “objective function”. A rotation operator moves all stars in each iteration of the algorithm and determines in each cycle whether there is a new black hole that should replace the previous one. This operation is repeated until the stopping criterion is met, with the last black hole found being the proposed solution. Eventually, a star may cross the boundary defined by the event-horizon radius [8]. In that case, the star collapses into the black hole and is removed from the universe, being replaced by a new star. This stimulates exploration of the solution space. The proposed flow and the corresponding operators of the initial version of the method follow (Fig. 2).

Fig. 2.
figure 2

Original black hole algorithm

3.1 Big Bang

It consists of the creation of the initial random universe for the algorithm. The number of stars generated remains fixed during the iterations, notwithstanding that many of the vectors (or stars) are replaced. The mechanism for creating the vectors is as follows and also applies in the intermediate steps that require the generation of new stars:

figure a

where StarGeneration is the creation of a binary vector of n elements that complies with the restrictions of matrix A.
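The original pseudocode is not reproduced here; a minimal sketch of one plausible generation scheme, assuming a random vector patched up greedily so that every row of A is covered, could look like:

```python
import random

def star_generation(A):
    """Build a random binary vector, then cover any rows of A that are
    still uncovered by switching on a random covering column.
    (Illustrative reconstruction; the paper does not detail this step.)"""
    m, n = len(A), len(A[0])
    x = [random.randint(0, 1) for _ in range(n)]
    for i in range(m):
        if not any(A[i][j] and x[j] for j in range(n)):
            cols = [j for j in range(n) if A[i][j]]  # columns covering row i
            x[random.choice(cols)] = 1
    return x

def big_bang(A, num_stars):
    """Create the initial universe: a fixed-size population of feasible stars."""
    return [star_generation(A) for _ in range(num_stars)]
```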

3.2 Fitness Evaluation

The fitness of each star \(x_{i}\) is calculated by evaluating the objective function, according to the initial definition of the problem. In algorithmic terms it is described as follows:

figure b

It should be remembered that \(c_j\) corresponds to the cost of column j in the cost vector. In other words, the fitness of a star is the sum, over all columns selected by that star, of the corresponding costs. The black hole is the star with the lowest fitness among all stars existing at the time of the evaluation.
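A sketch of this evaluation (function names are ours):

```python
def fitness(x, costs):
    """Objective value of a star: total cost of the columns it selects."""
    return sum(c for c, bit in zip(costs, x) if bit)

def find_black_hole(universe, costs):
    """The black hole is the star with the lowest fitness in the universe."""
    return min(universe, key=lambda x: fitness(x, costs))
```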

3.3 Rotation Operator

The rotation operation is applied to all stars \(x_{i}\) of the universe at iteration t, with the exception of the black hole, which remains fixed in its position. The operation sets the new position at iteration t+1 as follows:

$$\begin{aligned} X_i(t+1) = X_i(t) + random \times (X_{BH} - X_i(t)), \quad i=1,2,...,N \end{aligned}$$
(2)

where \(x_i\)(t) and \(x_i\)(t+1) are the positions of star \(x_{i}\) at iterations t and t+1 respectively, \(x_{BH}\) is the location of the black hole in the search space, random is a random number in the range [0,1] drawn anew in each iteration, and N is the number of stars that make up the universe (candidate solutions). The only exception to the rotation is the star designated as the black hole, which retains its position.
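A minimal sketch of the rotation step for one star, per Eq. (2). Positions are treated as real-valued here; mapping them back to binary is the job of the transfer and binarization functions of Sect. 3.6 (all names below are ours):

```python
import random

def rotate(position, bh_position):
    """Move a star toward the black hole by a random fraction of the gap
    in every dimension (Eq. 2). The random factor is redrawn per call."""
    r = random.random()  # uniform in [0, 1]
    return [xi + r * (bh - xi) for xi, bh in zip(position, bh_position)]
```

Because r lies in [0,1], each coordinate of the moved star stays between its old value and the black hole's coordinate, so rotation always contracts the universe toward the current best solution.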

3.4 Collapse into the Black Hole

When a star approaches a black hole within a distance called the event horizon, it is captured and permanently absorbed by the black hole, being replaced by a new randomly generated star. In other words, a star is considered to collapse when it crosses the Schwarzschild radius (R), defined as:

$$\begin{aligned} \text{ R } \text{= } \frac{f_{BH}}{\sum _{i = 1}^{N}f_{i}} \end{aligned}$$
(3)

where \(f_{BH}\) is the fitness value of the black hole, \(f_{i}\) is the fitness of the i-th star, and N is the number of stars in the universe.

3.5 Algorithm Implementation

The algorithm was implemented with an I-CASE tool, generating Java programs and using a relational database as the repository for the input data and the information gathered during executions. The parameters presented below result both from the needs of the original design and from improvements made as a result of the tests performed. In particular, we attempted to improve the exploration capacity of the metaheuristic. The findings are contrasted with tables of known optimal values [10, 11] in order to quantitatively estimate the effectiveness of the presented metaheuristic. We found no publications presenting algorithmic specifications or an implementation of the original version of the algorithm, so there are various aspects for which the authors indicate no solution; we can only speculate that they are handled similarly to other metaheuristic algorithms for which information from other authors does exist:

The process begins with the random generation of a population of binary vectors (stars) in a step that we will call the “big bang”. With a universe of m stars formed by binary vectors of n digits, the one with the best fitness must be identified, i.e., the one for which the objective function returns the lowest value among all stars in the generated universe. The next step is to rotate the other stars around the detected black hole until some other star presents a better fitness and takes its place.

The number of stars generated remains fixed during the iterations, notwithstanding that many vectors (or stars) will be replaced by one of the operators.
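Putting the pieces together, the flow described above can be sketched end to end. Everything below is our illustrative reconstruction, not the paper's exact routines: the greedy repair, the stochastic binarization, and all names are assumptions.

```python
import math
import random

def solve_scp_black_hole(A, costs, num_stars=20, max_iter=100):
    """Sketch of the full flow: big bang, fitness, rotation with
    transfer/binarization, and collapse with replacement."""
    m, n = len(A), len(A[0])

    def repair(x):
        # Ensure feasibility: switch on a covering column for every uncovered row.
        for i in range(m):
            if not any(A[i][j] and x[j] for j in range(n)):
                x[random.choice([j for j in range(n) if A[i][j]])] = 1
        return x

    def fit(x):
        return sum(c for c, b in zip(costs, x) if b)

    # Big bang: fixed-size universe of feasible random stars.
    universe = [repair([random.randint(0, 1) for _ in range(n)])
                for _ in range(num_stars)]
    for _ in range(max_iter):
        bh = min(universe, key=fit)                      # current black hole
        radius = fit(bh) / sum(fit(x) for x in universe)  # Eq. (3)
        new_universe = []
        for x in universe:
            if x == bh:                                   # black hole stays put
                new_universe.append(x)
                continue
            r = random.random()
            moved = [xi + r * (b - xi) for xi, b in zip(x, bh)]   # Eq. (2)
            # Transfer (Eq. 4) + stochastic binarization, then repair.
            y = repair([1 if random.random() < 1 / (1 + math.exp(-v / 3)) else 0
                        for v in moved])
            if abs(fit(bh) - fit(y)) < radius:            # collapse, Eq. (6)
                y = repair([random.randint(0, 1) for _ in range(n)])
            new_universe.append(y)
        universe = new_universe
    return min(universe, key=fit)
```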

3.6 Transfer Functions and Binarization

A transfer function aims to map values from the real domain into the range [0,1]. Several functions were tested, with the inverse exponential function being used for the definitive benchmarks:

$$\begin{aligned} T(x) = \frac{1}{1+e^{-x/3}} \end{aligned}$$
(4)

In addition, the binarization function converts the value obtained in the previous transformation into a binary digit. The following routines were tested, where random is a random value between 0 and 1 inclusive.

figure c

The binarization that achieved the best results became the standard applied in the subsequent benchmarks.
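A sketch of the transfer function of Eq. (4) together with one candidate binarization routine. The stochastic rule below is a common choice in binary metaheuristics and is our assumption, since the winning routine is not named above:

```python
import math
import random

def transfer(x):
    """Inverse exponential transfer function, Eq. (4): maps any real
    value into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x / 3.0))

def binarize(t):
    """Stochastic binarization: emit 1 with probability t.
    (One of several routines that could have been tested; illustrative.)"""
    return 1 if random.random() < t else 0
```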

3.7 Feasibility and Repair Operators

The feasibility of a star is given by whether it meets each of the constraints defined in matrix A. In those cases where infeasibility was detected, we opted to repair the vector so that it complies with the constraints. We implemented a repair function in two phases, ADD and DROP, as a way to optimize the vector in terms of coverage and cost. The first phase adds to the vector the column that provides the missing coverage at the lowest cost, while the second removes those columns that only add cost and do not provide needed coverage.
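A sketch of the two-phase repair under these assumptions (cheapest-column ADD, redundancy-based DROP scanning costlier columns first; the exact tie-breaking rules are ours):

```python
def repair(x, A, costs):
    """ADD: cover every uncovered row with its cheapest covering column.
    DROP: remove selected columns whose covered rows are all covered
    by some other selected column (they only add cost)."""
    m, n = len(A), len(A[0])
    cover_count = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
    # ADD phase
    for i in range(m):
        if cover_count[i] == 0:
            j = min((k for k in range(n) if A[i][k]), key=lambda k: costs[k])
            x[j] = 1
            for r in range(m):
                cover_count[r] += A[r][j]
    # DROP phase: try to discard the costliest columns first.
    for j in sorted((j for j in range(n) if x[j]), key=lambda k: -costs[k]):
        if all(cover_count[i] >= 2 for i in range(m) if A[i][j]):
            x[j] = 0
            for r in range(m):
                cover_count[r] -= A[r][j]
    return x
```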

3.8 Collapse into the Black Hole

One of the main problems in implementing this operator is that the authors do not specify whether the distance should be determined as a vector distance or by some other method. However, in a 2015 publication, Farahmandian and Hatamlou [12] propose determining the distance of a star \(x_{i}\) from the black hole as:

$$\begin{aligned} \left| f(x_{BH}) - f(x_{i}) \right| \end{aligned}$$
(5)

That is, a star \(x_{i}\) will collapse if the absolute difference between the black hole's fitness and its own fitness is less than the value of the radius R:

$$\begin{aligned} \left| f(x_{BH}) - f(x_{i}) \right| < R \end{aligned}$$
(6)
Table 1. Execution parameters
Table 2. Experimental results

3.9 Parameters

For the purpose of exercising all the features and operators detailed in this document in multiple configurations, a parameter table was built for the algorithm. Below are the parameter values used, indicating which values gave the best results (Table 1).

The first parameter refers to the fixed number of stars that comprise the full universe to be processed, while the second specifies the maximum number of iterations to perform in total. The third and fourth parameters refer to the functions that will be used in the transfer and binarization of the variables in each iteration.

Fig. 3.
figure 3

SCP41 results

Fig. 4.
figure 4

NRH1 results

4 Experimental Results

The original algorithm was tested by running benchmark sets 4, 5, 6, A, B, C, D, NRE, NRF, NRG, and NRH from the OR-Library [13]. Each of these data sets was run 30 times with the same parameters [14], yielding the following results (Table 2).

Fig. 5.
figure 5

Evolution of maxs and mins (Color figure online)

5 Experimental Analysis and Conclusions

Comparing the experimental results with the optimal values reported in the literature, we can see that the results obtained are acceptably close to the best known optima for benchmarks 4, 5, and 6, and very far from them for the remaining ones (A, B, C, D, E, and NR). For the former, deviations range between 6.06% and 61.54%, while for the latter they reach 3,301.59%, in both cases despite many executions of the algorithm. A rapid initial convergence is achieved, with very significant improvements in the early iterations; subsequent progress is much more gradual and requires the operators that stimulate exploration, such as collapse and Hawking radiation. This suggests that the algorithm tends to fall into local optima, which it cannot leave without the help of the exploration components. To illustrate these trends, performance graphs for some benchmarks are presented (Figs. 3 and 4).

While the benchmarks that gave poor results show significant percentage deviations from the known optima, in absolute terms the differences are low considering the values from which the algorithm started iterating. These tests probably require a larger number of iterations to improve their results, since the values clearly indicate a consistent downward trend, the number of variables is higher, and the gap between the optimum and the starting values is wider. An interesting element of the analysis is that the gap between the best and the worst outcome is small and relatively constant in practically all benchmarks, indicating that the algorithm tends continuously toward improved results and that the minima are not merely a product of fortunate random values. The following chart illustrates this element (Fig. 5).

On the other hand, it is also important to note that in initial tests in which the stochastic component was greater than the value eventually adopted as optimal, the algorithm showed lower performance, producing much higher minima, probably because of an inability to exploit areas with better potential solutions. From all this it follows that the parameters defining the roulette for decision-making should have quite small ranges, so that the random component remains moderate. Another notable element is the large difference in the results obtained with different transfer and binarization methods; some simply conspired against acceptable results, and the possibilities already described were explored in order to find a satisfactory combination. Some interesting lines of investigation for a possible improvement of results would be to develop a better way of defining the concept of distance, with criteria better tailored to the nature of the algorithm, as well as a more sophisticated mutation method for the stars subjected to Hawking radiation. Additionally, some authors extend the rotation operator with additional elements such as the mass and electric charge of the black hole [16], which was not considered in this work because of the scant existing documentation.