
1 Introduction

The aim of the ARGO-YBJ project is to study cosmic gamma radiation, identifying transient emissions and performing a systematic search for steady sources [1]. Reaching this goal requires the detection of very small air showers (at low energy, \(< \textit{TeV}\)), since standard sampling arrays would detect only a small percentage of the shower particles. The task is entrusted to a new instrument located at the Yangbajing Laboratory (Tibet, China), at very high altitude (4,000 m a.s.l.), in order to be close to the maximum development of low-energy showers. The detector uses a full-coverage layer of Resistive Plate Counters (RPCs) that provides a high-granularity sampling of the shower particles. It covers an area of about 6,700\(\,\mathrm {m^2}\) and yields a detailed space-time picture of the shower front.

This work concerns the optimization of the RPC locations on the layer, so as to capture a uniform cosmic source distribution with a limited number of receivers, imposed by budget constraints.

If the capture surface of a single receiver is modeled as a circular area, the problem has many points in common with the classic sphere packing problem [5, 13, 18]. The problem of packing circles into different geometrical shapes in the plane has long attracted researchers because of the large number of fields to which it can be applied. In the last decades many results, mainly for small packings, have been obtained, and the increasing performance of computing systems, together with new optimization algorithms for large problems, has recently brought this class of problems back to the forefront. The circle packing problem is usually stated as a point-spreading problem: find a configuration of points in the given region such that the minimum mutual distance between the points is as large as possible. The packing problem is dual to the covering problem, in which the points must be located so as to cover as much of the area of interest as possible. Applications arise in several fields and are addressed using a variety of algorithmic optimization procedures [3, 11, 14, 16, 19].

In this paper we are interested in finding the optimal location of a limited number of receivers so as to maximize the total detection area. This experimental design problem can be cast as a Nash equilibrium problem in the sense of game theory: the choice of the variables in the \(n\) experiments is made by \(n\) players, each of whom places his location as far as possible from the opponents and from the border of the region. For this model the equilibria can be computed by a numerical procedure based on a genetic algorithm [4, 6, 10, 15, 17, 20]. In Sect. 2 the constrained location problem is introduced and the procedure for solving it via a Nash game is described; in Sect. 3 the Nash genetic algorithm for the facility location game is presented together with several test cases. In Sect. 4 concluding remarks and some further developments are discussed.

2 Constrained Location Problem

2.1 Preliminaries

Let us consider an \(n\)-player normal form game \(\varGamma \) (\(n\in {\fancyscript{N}}\), where \({\fancyscript{N}}\) is the set of natural numbers), that consists of a tuple

$$ \varGamma = \langle N; X_1,\ldots ,X_n;\, f_1,\ldots ,f_n \rangle $$

where \(N=\{1,2,\ldots ,n\}\) is the finite player set, for each \(i\in N\) the set of player \(i\)’s strategies is \(X_i\) (i.e. the set of player \(i\)’s admissible choices) and \(f_i\): \(X_1\times \cdots \times X_n \rightarrow {\fancyscript{R}}\) is player \(i\)’s payoff function (\({\fancyscript{R}}\)  is the set of real numbers). We suppose here that players are cost minimizing, so that player \(i\) has a cost \(f_i(x_1, x_2,\ldots ,x_n)\) when player \(1\) chooses \(x_1\in X_1\), player \(2\) chooses \(x_2\in X_2\), ..., player \(n\) chooses \(x_n\in X_n\).

We define \(X=X_1\times \cdots \times X_n\) and for \(i\in N\): \(X_{-i}= \varPi _{j\in N\backslash \{i\}} X_j\). Let \(\mathbf{x}=(x_1, x_2,\ldots ,x_n)\in X\) and \(i\in N\). Sometimes we denote \(\mathbf{x}=(x_i, \mathbf{x}_{-i})\), where \(\mathbf{x}_{-i}=(x_1,\ldots ,x_{i-1}, x_{i+1},\ldots ,x_n)\).

A Nash equilibrium [2, 12] for \(\varGamma \) is a strategy profile \(\hat{\mathbf{x}}=(\hat{x}_1, \hat{x}_2,\ldots ,\hat{x}_n)\in X\) such that for any \(i\in N\) and for any \(x_i\in X_i\) we have that

$$ f_i(\hat{\mathbf{x}})\le f_i(x_i, \hat{\mathbf{x}}_{-i}). $$

Such a solution is self-enforcing in the sense that once the players are playing such a solution, it is in every player’s best interest to remain in his strategy. We denote by \(NE(\varGamma )\) the set of the Nash equilibrium strategy profiles.

Any \(\hat{\mathbf{x}}=(\hat{x}_1,\ldots ,\hat{x}_n)\in NE(\varGamma )\) is a vector such that, for any \(i\in N\), \(\hat{x}_i\) is a solution to the optimization problem

$$ \displaystyle \min _{x_i\in X_i} f_i(x_i,\hat{\mathbf{x}}_{-i}). $$
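To make the definition concrete, the following toy sketch (purely illustrative and not part of the model above; the payoffs are hypothetical) checks the equilibrium condition by enumeration for a two-player game with finite strategy sets: a profile is an equilibrium exactly when no unilateral deviation lowers the deviating player's cost.

```python
from itertools import product

# Toy 2-player game: both players choose from X = {0, 1, 2} and minimize
# hypothetical costs f_1, f_2 (illustrative payoffs only).
X = [0, 1, 2]
f = [lambda x1, x2: (x1 - x2) ** 2,      # player 1 wants to match player 2
     lambda x1, x2: (x2 - 1) ** 2]       # player 2 wants x2 = 1

def is_nash(profile):
    """True if no player can lower his cost by a unilateral deviation."""
    for i in range(2):
        for deviation in X:
            candidate = list(profile)
            candidate[i] = deviation
            if f[i](*candidate) < f[i](*profile):
                return False
    return True

print([p for p in product(X, X) if is_nash(p)])   # -> [(1, 1)]
```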

2.2 The Facility Location Game

We consider the unit square \(\varOmega =[0,1]^2\): the problem is to decide the values of the two variables \(x\) and \(y\) for each of the \(n\) available experiments (\(n\in {\fancyscript{N}}\) given).

Problem 1

Experimental Design (ED)

The problem is to place \(n\) points \(P_1, P_2,\ldots , P_n\) in the square \(\varOmega \) in such a way that they are as far as possible from each other and from the boundary of the square.

This amounts to maximizing the dispersion of the points in the interior and their distance from the boundary of \(\varOmega \), as in experimental design [9]. Various concrete situations fit this requirement, for example the location of sensor devices to capture cosmic rays in a region, which will be discussed in the next section.

There is competition among the points in the square, because the dispersion depends on the mutual positions of all the points and on their positions relative to the boundary of \(\varOmega \). We therefore use a game-theoretic model and assign each point to a virtual player, whose decision variables are the coordinates of the point and whose payoff function expresses the dispersion in terms of distances.

As often happens in applications, forbidden places may be present inside the square. We therefore consider the location problem in the constrained case, where the admissible subregion of \(\varOmega \) is denoted by \(\varOmega _c\subset \varOmega \).

In the constrained case we define the following \(n\)-player normal form game \(\varGamma _c= \langle N; \varOmega _c,\ldots , \varOmega _c; f_1,\ldots ,f_n \rangle \), where each player \(i\in N=\{1,2,\ldots ,n\}\) minimizes the cost \(f_i : A_c \rightarrow {\fancyscript{R}}\) defined by

$$ f_i(P_1,\ldots , P_n)= \sum _{1\le j\le n, j\ne i} \frac{1}{d(P_i, P_j)} + \frac{1}{\sqrt{2 d(P_i, \partial \varOmega )}} $$

where \(A_c=\left\{ (P_1,\dots ,P_n)\in \varOmega _c^n:P_i\in (]0,1[)^2, P_i\ne P_j\;\forall i, j=1,\dots ,n,j\ne i\right\} \) and \(d\) is the Euclidean metric in \({\fancyscript{R}}^2\). The first \(n-1\) terms in the definition of \(f_i\) penalize the proximity of the point \(P_i\) to the other points, while the last term is a decreasing function of the distance of \(P_i\) from the boundary of the square.

Definition 1

Any \((\hat{x}_1, \hat{y}_1,\ldots ,\hat{x}_n, \hat{y}_n)\in A_c\) that is a Nash equilibrium of the game \(\varGamma _c\) is an optimal solution of the problem \((\textit{ED})\). For any \(i\in N\), \((\hat{x}_i, \hat{y}_i)\) is a solution to the optimization problem

$$ \displaystyle \min _{(x_i, y_i)\in \varOmega _c} f_i(\hat{x}_1, \hat{y}_1,\ldots , \hat{x}_{i-1}, \hat{y}_{i-1}, x_i, y_i, \hat{x}_{i+1}, \hat{y}_{i+1},\ldots , \hat{x}_n, \hat{y}_n) $$

with \((x_1,y_1,\ldots ,x_n,y_n)\in A_c\).

A very common situation is \(\varOmega _c=\varOmega \setminus T\), with \(T\) a closed subset of \(\varOmega \) (a triangle, a circle, etc.), which corresponds to a facility location problem with an obstacle (a lake, a mountain, etc.). Other concrete cases for the admissible region \(\varOmega _c\) can be considered: in the following section we examine the location problem when the admissible region is given by a set of segments.

2.3 Location of Sensor Devices on a Grid

Given the set \(\{h_1,\ldots ,h_k\}\) (\(h_i\in \ ]0,1[\), \(i=1,\ldots ,k\)), we consider the set of possible locations of \(n\) sensor devices able to capture cosmic particles

$$ \varOmega _c=\bigcup _{j=1}^{k}\,[0,1]\times \{h_j\}. $$

The sensors must be located on the given \(k\) segments in the square, for example because of power supply constraints.

In terms of coordinates, if \(P_i=(x_i, y_i)\), \(i\in N\), the distance of a point \(P=(x,y)\) from the boundary \(\partial \varOmega \) of \(\varOmega \) is

$$ d(P, \partial \varOmega )= \min _{Q\in \partial \varOmega } d(P, Q)= \min \{x, y, 1-x, 1-y \} $$

and we have for \((x_1, y_1,\ldots ,x_n, y_n)\in A_c\)

$$\begin{aligned} f_i(x_1, y_1,\ldots , x_n, y_n)&= \sum _{1\le j\le n, j\ne i} \frac{1}{ \sqrt{ (x_i-x_j)^2 + (y_i-y_j)^2 }}\\&\quad +\, \frac{1}{\sqrt{2 \min \{x_i, y_i, 1-x_i, 1-y_i \}}} \end{aligned}$$

for \((x_1,y_1,\ldots ,x_n,y_n)\in \varOmega _c^n\cap A_c\) in the constrained case.
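As a purely illustrative sketch (not the authors' code), the cost \(f_i\) can be evaluated directly from the coordinates; the helper names below are hypothetical, and the points are assumed to be pairwise distinct and strictly inside the unit square.

```python
import math

def boundary_distance(x, y):
    """Distance of (x, y) from the boundary of the unit square."""
    return min(x, y, 1.0 - x, 1.0 - y)

def cost(i, points):
    """Cost f_i of player i for the profile points = [(x_1, y_1), ...]."""
    xi, yi = points[i]
    repulsion = sum(1.0 / math.hypot(xi - xj, yi - yj)
                    for j, (xj, yj) in enumerate(points) if j != i)
    return repulsion + 1.0 / math.sqrt(2.0 * boundary_distance(xi, yi))

# Example: three sensors on the band y = 0.5
profile = [(0.25, 0.5), (0.5, 0.5), (0.75, 0.5)]
print([cost(i, profile) for i in range(len(profile))])
```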

The optimal locations of the sensors are the Nash equilibrium solutions of the game \(\varGamma _c= \langle N; \varOmega _c,\ldots , \varOmega _c; f_1,\ldots ,f_n \rangle \), where each player \(i\in N=\{1,2,\ldots ,n\}\) minimizes the cost \(f_i : A_c \rightarrow {\fancyscript{R}}\) for \((x_1,y_1,\ldots ,x_n,y_n)\in \varOmega _c^n\cap A_c\).

3 Nash Genetic Algorithm for the Location Problem

3.1 Genetic Algorithm

Let \(X_1, X_2,\ldots , X_n\) be compact subsets of Euclidean spaces, forming the search space. Let \(f_1, f_2,\ldots , f_n\) be real-valued functions defined on \(X_1 \times X_2 \times \cdots \times X_n\), representing the objective functions to be optimized.

Let \(s=(x_1, x_2,\ldots , x_n)\) be an individual (or chromosome) representing a feasible solution in the search space. A finite set of individuals makes up a population, which can be viewed as a sampling of the problem domain that, generation by generation, concentrates on zones with a higher probability of containing the optimum [10].

A typical genetic algorithm consists of several steps:

  • Population initialization: at the first step, a random population is generated to sample the search domain.

  • Selection: on the sorted population, a probabilistic selection of parents is performed, favoring the coupling of the best individuals without discarding the worst chromosomes, which may be useful for moving towards unexplored zones of the search space.

  • Crossover: a crossover operator is applied to the selected parents to create two new individuals. This operator can take several forms.

  • Mutation: to avoid premature stagnation of the algorithm, a mutation operator is applied, randomly changing a bit of the newly created chromosomes.

  • Fitness computation: the objective function and the constraints are evaluated in order to sort the individuals in the population.

  • Termination criterion: usually two criteria are defined in a GA, one on the maximum number of generations and one on the maximum number of consecutive generations without improvement of the best chromosome (a minimal sketch of these steps is given below).
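By way of illustration, a minimal real-coded GA implementing these steps, here minimizing a cost as the players do, might look as follows. Tournament selection, arithmetic crossover and Gaussian mutation are assumptions made for the sketch, since the paper does not specify the operators, and the stagnation-based stopping criterion is omitted for brevity.

```python
import random

def genetic_algorithm(fitness, dim, pop_size=50, generations=200,
                      crossover_rate=0.9, mutation_rate=0.1):
    """Minimal real-coded GA minimizing 'fitness' over [0, 1]^dim (sketch)."""
    # Population initialization: random sampling of the search domain
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Selection: binary tournament on the current population
            p1 = min(random.sample(pop, 2), key=fitness)
            p2 = min(random.sample(pop, 2), key=fitness)
            # Crossover: arithmetic blend of the two parents
            if random.random() < crossover_rate:
                a = random.random()
                c1 = [a * u + (1 - a) * v for u, v in zip(p1, p2)]
                c2 = [(1 - a) * u + a * v for u, v in zip(p1, p2)]
            else:
                c1, c2 = p1[:], p2[:]
            # Mutation: small Gaussian perturbation, clipped to [0, 1]
            for child in (c1, c2):
                if random.random() < mutation_rate:
                    k = random.randrange(dim)
                    child[k] = min(1.0, max(0.0, child[k] + random.gauss(0, 0.1)))
                new_pop.append(child)
        pop = new_pop[:pop_size]
        # Fitness computation and elitism on the best chromosome
        best = min(pop + [best], key=fitness)
    return best

# Example: minimize the distance from the centre of the unit square
print(genetic_algorithm(lambda s: (s[0] - 0.5) ** 2 + (s[1] - 0.5) ** 2, dim=2))
```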

3.2 Nash Equilibrium Game

According to the definition of Nash equilibrium applied to the game of Sect. 2.3, the algorithm for an \(n\)-player Nash equilibrium game is presented [6–8, 15].

The algorithm is based on the Nash adjustment process [12], where players take turns setting their outputs, and each player's chosen output is a best response to the outputs that his opponents chose in the previous period. If the process converges, the resulting solution is an optimal location of the \(n\) sensor devices.

Let \(\underline{x}=(\underline{x}_1,\ldots , \underline{x}_n)\) be a feasible solution of the \(n\)-player Nash problem. Here \(\underline{x}_i\) denotes the subset of variables handled by player \(i\), belonging to a metric space \(X_i\) and associated with the objective function \(f_i\). Player \(i\) searches for the optimal solution with respect to his objective function by modifying \(\underline{x}_i\).

At each step \(k\) of the optimization algorithm, player \(i\) optimizes \(\underline{x}_i^k\) using \(\underline{x}_{-i}^{k-1}=(\underline{x}_1^{k-1},\ldots , \underline{x}_{i-1}^{k-1}, \underline{x}_{i+1}^{k-1},\ldots ,\underline{x}_n^{k-1})\).

The first step of the algorithm consists in creating \(n\) different populations, one for each player; player \(i\)'s optimization task is performed by population \(i\). Let \(\underline{x}_i^{k-1}\) be the best value found by player \(i\) at era \(k-1\). At era \(k\), player \(i\) optimizes \(\underline{x}_i^{k}\) using \(\underline{x}_{-i}^{k-1}\) to evaluate each chromosome. At the end of the \(k\)th era the other players communicate their best values \(\underline{x}_{-i}^{k}\) to player \(i\), who will use them at era \(k+1\) to complete the chromosomes, while only \(\underline{x}_i^{k}\) undergoes the usual GA crossover and mutation procedures. A Nash equilibrium is reached when no player can further improve his objective function, or when a generation limit is reached.
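The era-by-era exchange described above can be organized as in the following sketch. It is an illustration only: a crude random search stands in for each player's GA population, and the cost is the one of Sect. 2.3.

```python
import math
import random

def cost(i, points):
    """Cost f_i of player i, same formula as in Sect. 2.3 (sketch)."""
    xi, yi = points[i]
    repulsion = sum(1.0 / math.hypot(xi - xj, yi - yj)
                    for j, (xj, yj) in enumerate(points) if j != i)
    return repulsion + 1.0 / math.sqrt(2.0 * min(xi, yi, 1.0 - xi, 1.0 - yi))

def nash_adjustment(n, eras=50, trials=300):
    """Era-by-era Nash adjustment: each player improves his own point
    against the opponents' best values of the previous era."""
    best = [(random.uniform(0.05, 0.95), random.uniform(0.05, 0.95))
            for _ in range(n)]
    for _ in range(eras):
        new_best = []
        for i in range(n):
            profile = list(best)               # opponents frozen at era k-1
            best_point, best_cost = best[i], cost(i, best)
            for _ in range(trials):            # stand-in for player i's GA
                candidate = (random.uniform(0.01, 0.99), random.uniform(0.01, 0.99))
                profile[i] = candidate
                c = cost(i, profile)
                if c < best_cost:
                    best_point, best_cost = candidate, c
            new_best.append(best_point)
        best = new_best                        # players exchange era-k best values
    return best

print(nash_adjustment(4))   # approximate equilibrium locations of 4 sensors
```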

3.3 Test Cases

In this section, numerical results for the constrained location model are shown. They have been obtained using the Nash Genetic Algorithm presented above, with parameters summarized in Table 3.1.

Table 3.1 Genetic algorithms characteristics
Fig. 3.1 Cases \(n=4\) and \(Y=\{0.3, 0.7\}\); \(n=5\) and \(Y=\{0.3, 0.5, 0.7\}\)

The first results refer to the grid-constrained case, in which the RPCs can be located only at given values \(h_1,\ldots ,h_k\) of the second coordinate. In this case the genetic algorithm is modified to handle a discrete variable \(y \in Y\), where \(Y=\{h_1,\ldots ,h_k\}\) is the set of feasible bands.
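The paper does not detail how the discrete coordinate is encoded; one simple possibility, sketched below, is to keep a continuous gene for \(y\) and snap it to the nearest feasible band when the chromosome is decoded.

```python
def snap_to_band(y, bands):
    """Map a continuous gene y in [0, 1] to the nearest feasible band h_j."""
    return min(bands, key=lambda h: abs(h - y))

# Example with the bands of the n = 5 test case
Y = [0.3, 0.5, 0.7]
print(snap_to_band(0.42, Y))   # -> 0.5
```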

In Figs. 3.1, 3.2 and 3.3 the comparison between the unconstrained and constrained cases is shown, with the number of rows on which the RPCs are constrained chosen case by case according to the results of the unconstrained case. The optimal location points are denoted by blue circles in the unconstrained case and by red squares in the constrained case.

Fig. 3.2 Cases \(n=6,7\) and \(Y=\{0.2, 0.4, 0.6, 0.8\}\)

Fig. 3.3 Cases \(n=8\) and \(Y=\{0.2, 0.4, 0.6, 0.8\}\); \(n=10\) and \(Y=\{0.15, 0.35, 0.5, 0.65, 0.85\}\)

4 Conclusions

In this paper the problem of locating a given number of sensor devices has been solved by means of a facility location problem whose solutions are the Nash equilibrium profiles of a suitable normal form game. The objective functions are chosen according to physical requirements. For such a problem a numerical procedure based on a genetic-type algorithm has been used to compute the final configurations. We considered the special case where the admissible region is made up of a set of parallel segments, due to operational constraints (for example, power lines).

Other cases could be examined, for example the case in which a convex obstacle is present in the admissible region. In this case the optimal locations of the sensors are the Nash equilibrium solutions of the game \(\varGamma _c= \langle N; \varOmega _c,\ldots , \varOmega _c; f_1,\ldots ,f_n \rangle \), where each player \(i\in N=\{1,2,\ldots ,n\}\) minimizes the cost \(f_i : A_c \rightarrow {\fancyscript{R}}\) for \((x_1,y_1,\ldots ,x_n,y_n)\in \varOmega _c^n\cap A_c\), with \(\varOmega _c=\varOmega \setminus T\) and \(T\) a closed subset of \(\varOmega \) (a triangle, a circle, etc.). In the numerical procedure the objective functions can be modified to handle obstacles through penalty functions applied to the principal objective. In particular, the objective function \(f_i\) of the \(i\)th player is penalized by:

$$ f_i=f_i/f_{pen} $$

where \(f_{pen} \in [0,1]\) is a suitable penalty function.

For example, for a circular obstacle one can take \(f_{pen}=d(x,y)/r_c\), where \(d(x,y)\) is the distance between the sensor \((x,y)\) and the center of the circular obstacle and \(r_c\) is the radius of the circle. Two test cases are shown in Fig. 3.4, with \(T\) given by the circle centered at \((0.5, 0.5)\) with radius \(0.25\).

Fig. 3.4 Cases for \(n=5,10\) with circle-shaped obstacle

In other cases, for example for a rectangular obstacle, a constant penalty (\(f_{pen}=0.1\)) can be applied to each sensor located in the unfeasible region. Two test cases are shown in Fig. 3.5, and a sketch of both penalties is given after the figure.

Fig. 3.5 Cases for \(n=5,10\) with box-shaped obstacle \(T=[0, 0.5]^2\)
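Both penalties could be implemented, for instance, as in the following sketch. Capping the circular penalty at 1 outside the obstacle is an assumption added here to keep \(f_{pen}\in [0,1]\), since the text only gives the formula for the obstacle itself.

```python
import math

def circle_penalty(x, y, center=(0.5, 0.5), r_c=0.25):
    """f_pen = d(x, y)/r_c inside the circular obstacle; capped at 1 outside
    it (assumption) so that feasible sensors are not penalized."""
    d = math.hypot(x - center[0], y - center[1])
    return min(1.0, max(d / r_c, 1e-6))        # small floor avoids division by zero

def box_penalty(x, y, box=(0.0, 0.5, 0.0, 0.5), constant=0.1):
    """Constant penalty f_pen = 0.1 for a sensor inside the box T = [0, 0.5]^2."""
    x_min, x_max, y_min, y_max = box
    return constant if (x_min <= x <= x_max and y_min <= y <= y_max) else 1.0

def penalized_cost(f_value, x, y, penalty):
    """Penalized objective f_i / f_pen of the conclusions."""
    return f_value / penalty(x, y)

print(penalized_cost(5.0, 0.55, 0.50, circle_penalty))  # inside the circle: cost grows
print(penalized_cost(5.0, 0.90, 0.90, box_penalty))     # outside the box: unchanged
```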

A more systematic study of the constrained case from a theoretical as well as from a numerical point of view will be the object of future research.