1 Introduction

Since the invention of the camera, visual sensor networks have enjoyed ever-growing success within the scientific and industrial communities. Thanks to their various advantages, this technology has established itself as a key player in modern camera placement applications.

Meta-heuristic optimization algorithms have been of great interest to researchers in several fields such as operations research, computer engineering, bioinformatics, and automation. This interest is due to their flexibility, robustness, and simplicity, and to their ability to ensure good results: algorithms such as NBPSO can solve a variety of optimization problems in reasonable computation time.

Despite the research carried out in the field of exact methods, the NP-hardness of the underlying problem limits their use in practice. Moreover, as it becomes possible to monitor ever larger areas with cameras, the requirements of this problem have gone beyond the capacity of basic exhaustive search processes. Following this logic, the field of metaheuristics has devoted considerable attention to optimal camera placement.

Indeed, the quality of the monitoring in such applications depends on the positions and poses of the cameras. The main driving force of this work is to improve off-line camera placement for surveillance applications. Camera placement depends on the feasible locations of the cameras, the obstacles present in sensitive areas, and the assigned priority of each area. Hence, the placement problem becomes an optimization problem with interrelated and competing constraints. Since constrained discrete optimization problems do not have efficient exact algorithmic solutions (time complexity, memory saturation), evolutionary algorithms are used.

In this article, we examine the deployment and adaptation of evolutionary techniques such as the new binary particle swarm optimization (NBPSO) [1,2,3,4,5,6,7,8], which are widely used and adapted to binary integer linear programming (BILP) problems at very large scale and approximate the global solution with the fewest errors. On the other hand, particle swarm optimization (PSO) methods still require improvements to theoretically provide the global optimum. In order to increase the probability of reaching the improved global optimum, we present an efficient approach based on the inspired probability of BPSO (BPSO-IP) to maximize the spatial visual coverage of a 2-D surveillance environment.

We present in this paper the NBPSO approach to optimize the spatial visual coverage of a 2-D environment. This paper is organized as follows: Sect. 1.1 reviews related work on the optimal camera placement (OCP) problem. Sect. 2 models the placement problem in the presence of obstacles and formulates the optimization problem. Sect. 3 presents the NBPSO evolutionary method. Sect. 4 assesses and discusses the results obtained for diverse scenarios, and Sect. 5 concludes the paper.

1.1 Background on Camera Placement

Optimal camera placement is a very challenging problem that has been proven to be NP-hard for most formulations of sensor deployment [7]. Indeed, placement problems are often represented as an integer linear programming (ILP) model [9, 10]. However, when using deterministic ILP methods on large-scale problems, most solvers fail to obtain a solution and get stuck due to the search-space requirements and memory issues [10,11,12,13,14,15].

The genetic algorithm (GA) [16], introduced by Holland, is regarded as the best-known evolutionary algorithm. Other popular evolutionary algorithms include biogeography-based optimization (BBO) [17, 18], evolutionary programming (EP) [19], evolution strategy (ES) [20], differential evolution (DE) [21], population-based incremental learning (PBIL) [22], and genetic programming (GP) [23].


In contrast to traditional sensor networks, which assume omnidirectional sensors, camera networks are considered directional sensor networks, which introduces additional complexity into the camera placement problem [24, 25]. Similar to the omnidirectional case, this problem is often described as an ILP [24]. To tackle such complexity, several heuristics have been proposed to find suboptimal solutions and good approximations [26,27,28].

However, the exact solution is proven to be NP-hard [29].

In computational geometry, very good progress has been made in solving optimal guard location problems for a polygonal area, e.g., the Art Gallery Problem (AGP) and its variants. In this problem, the main task is to determine a minimal number of guards and their static positions such that all points in a polygon are observed [31,32,33,34]. However, the exact solution is proven to be NP-hard [36]. Similarly, Floodlight Illumination Problems deal with the illumination of planar regions by light sources [30]. A variant of the AGP is known as Watchmen Tours, where guards are allowed to move inside the polygon [37,38,39]. The objective is to find an optimal number and route of guards guaranteeing the detection of an intruder with an unknown initial position and unlimited speed. The authors in [Suzuki 2001] introduce another variant of the watchman problem, termed boundary search, where guards are allowed to move only along the boundary of the polygon.

Nevertheless, all the above works on the AGP employ unrealistic assumptions about the camera's capabilities, such as unlimited field of view, infinite depth of field, and/or infinite servo precision and speed. In order to avoid such assumptions, some works take camera placement limitations into account by proposing algorithms and methods suitable for solving the camera placement problem [40,41,42]. The drawbacks of these algorithms are that they can only handle orthogonal polygons, have high time complexity, produce no mathematically optimal solution, and assume that all cameras have similar capabilities. This assumption makes the above algorithms unsuitable for most real-world computer vision applications. In fact, one objective of our work is to bridge the gap between the highly theoretical, well-established computational geometry and the more realistic requirements of computer vision, since the aim here is to ensure maximum coverage of a given area to be monitored, formed by different priority zones (coverage thresholds), with optimal parameters such as the locations and number of cameras. We have also added some realistic requirements, such as:

  • The case of heterogeneous cameras with different fields-of-views at different levels of cost;

  • The limited effective range of cameras;

  • The possibility that the area contains obstacles, which limit the line of sight of the different cameras to be placed.

According to the literature review presented above, the placement problem can be assimilated to an optimization problem with interrelated and competing constraints. However, it is known that constrained discrete optimization problems do not have efficient algorithmic solutions at large scale. These problems are usually solved using evolutionary algorithms. Among the different evolutionary techniques, genetic algorithms (GA) and standard particle swarm optimization (PSO) are found to be the most used in the context of sensor and camera placement: [43] presented a brief survey explaining how PSO is tailored to address wireless sensor network issues such as optimal deployment, node localization, clustering, and data aggregation, while [44] used standard PSO to solve coverage problems with randomly deployed sensors. [45] presented a sequential PSO technique to address the sonar sensor placement problem; their algorithm consists in selecting a sensor and subsequently optimizing its placement with respect to the other sensors (assumed fixed) using the standard PSO algorithm. [46] proposed a framework based on a hybrid GA to deploy and configure a set of given sensors. [47] presented three dynamic PSO-based techniques combining the PSO algorithm and learning automata to solve the deployment problem. It is shown from these reviews that PSO is the most suitable and appropriate algorithm to solve placement problems at large scale [48,49,50,51,52,53]. However, most of the cited papers differ deeply from our problem formulation and its solving using a more realistic sensing model. In addition, the PSO algorithm still requires improvement to theoretically provide global optima when dealing with binary or discrete variables. In order to increase the probability of reaching the right and best solution (global optimum), we propose in this chapter an efficient BPSO inspired probability (BPSO-IP) based approach to optimize the spatial visual coverage of a 2-D and 3-D security monitoring environment.

Modeling a Camera’s FoV in Space: Among these factors, the overlap rate and the distance to the subject seem to us to be the factors most sensitive to the placement of the cameras. To do this, we modeled a "reverse virtual camera" using a light source created with the "point light" attribute in an Autodesk Maya scene. There is a range of specific modifiers for this, including the GOBO function, which shapes the light frustum using a specific texture mask. We also use the linear dimming attribute to approximate, as shown in Fig. 5, the effect produced by the distance from the camera to the subject. This very simple model makes a visual and qualitative appreciation of the desired effect possible as long as the relative quantities mobilized in the model remain homogeneous with each other.

For our work, we used an experimental architectural model chosen for its restrictive topological characteristics (developed, masking, self-occlusion) and then put to the test by exploiting four of the camera placement protocols most used during survey campaigns: they are mostly recommended by the community of researchers interested in this field of study and put into practice in the field by end users, experts or non-experts:

  • Terrestrial circular campaign,

  • Horizontal oblique aerial circular campaign,

  • Oblique aerial circular campaign plunging at 45°,

  • Nadir shooting while respecting the most commonly applied intra- and inter-band overlap factors (80% and 60%).

For this "reverse virtual camera", which projects a light beam matching the characteristics of the camera on board the drone used for the test sessions (the DJI MAVIC PRO), we retained the parameters given in Table 1.

The field of view (FOV) is calculated by the following formulas:

$$ \mathrm{FOV}_{horizontal} = 2\arctan\!\left(\frac{x_c}{2f}\right), \quad \mathrm{FOV}_{vertical} = 2\arctan\!\left(\frac{y_c}{2f}\right), \quad \mathrm{FOV}_{diagonal} = 2\arctan\!\left(\frac{d}{2f}\right) $$

where \(x_c\), \(y_c\), and \(d\) are the sensor width, height, and diagonal, and \(f\) is the focal length in mm.

Figure 1 summarizes all of the factors involved in the calculation of the camera parameters. This model was put to the test by a real measurement of the field of view of the on-board camera.

Fig. 1

Optical characteristics of the MAVIC Pro on-board camera with d = 8.052849 mm, f = 4.73 mm, h = 10.00 m, and the diagonal α = 80.82°

This allows us to deduce the theoretical detection angle (angle of coverage), taking into account the parameters announced by the manufacturer, which is equal to approximately 84.03°. We can thus calculate the area covered when standing at a distance \(h = 10\) m from the subject in nadir shooting: horizontal coverage \(= 2h\tan(\mathrm{FOV}_{hz}/2)\) and vertical coverage \(= 2h\tan(\mathrm{FOV}_{vt}/2)\).

The angle of view is a measure of the part of the scene that appears in the image in photography and comparable techniques; an angle of view of 180° means that half of the surroundings is visible. In practical applications, the image is usually a rectangular plane, so the angle of view should also indicate how it was measured. The viewing angle is generally expressed in degrees, measured on the diagonal of the image: \(\alpha = 2\arctan\left(\frac{d}{2f}\right)\), where \(d\) is the length of the diagonal of the image, image sensor, or film, and \(f\) is the focal length of the lens. The diagonal of the image is the distance from one corner of the image to the opposite corner and can be calculated with the Pythagorean theorem: \(d = \sqrt{h^{2} + v^{2}}\), where \(h\) is the width and \(v\) is the height of the image.
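As a quick sanity check of these formulas, the following Python sketch recomputes the angles of view and the ground footprint at a distance of 10 m using the MAVIC Pro values from Fig. 1; the 4:3 sensor width/height split is an assumption made only for illustration, not manufacturer data.

```python
import math

def fov_deg(extent_mm: float, focal_mm: float) -> float:
    """Angle of view in degrees for a sensor extent and focal length: 2*arctan(extent / (2*f))."""
    return math.degrees(2.0 * math.atan(extent_mm / (2.0 * focal_mm)))

# Parameters of the MAVIC Pro on-board camera (Fig. 1): sensor diagonal d and focal length f.
d = 8.052849   # sensor diagonal in mm
f = 4.73       # focal length in mm
h = 10.0       # camera-to-subject distance in m (nadir shooting)

print(f"diagonal FOV = {fov_deg(d, f):.2f} deg")   # ~80.82 deg, as in Fig. 1

# Assumed 4:3 split of the diagonal into width xc and height yc (sqrt(xc^2 + yc^2) = d).
xc = d * 4.0 / 5.0
yc = d * 3.0 / 5.0
fov_h = fov_deg(xc, f)
fov_v = fov_deg(yc, f)

# Ground footprint at distance h (nadir): extent = 2 * h * tan(FOV / 2).
footprint_w = 2.0 * h * math.tan(math.radians(fov_h) / 2.0)
footprint_h = 2.0 * h * math.tan(math.radians(fov_v) / 2.0)
print(f"horizontal FOV = {fov_h:.2f} deg, vertical FOV = {fov_v:.2f} deg")
print(f"ground footprint at {h:.0f} m: {footprint_w:.1f} m x {footprint_h:.1f} m")
```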

2 Problem Definition

In this section, we consider the case of the presence of obstacles, which could affect the placement of the cameras. Indeed, some points may not be observed because of the geometry of the obstacles, which occlude the fields of view entering their areas. Figure 2 presents a simplified example of this type of placement, where we regard the perimeter of the obstacles as the allowed positions of the cameras. This hypothesis allows us to represent enclosures or buildings present in a monitoring area.

Fig. 2

Field-of-view \({\Gamma }_{i}\) of camera \({S}_{i}\) in 2D space with the presence of obstacle in placement problem

Each obstacle is represented by a polygonal shape, where each perimeter edge of the obstacle is subdivided into a set of multiple edges of equal length depending on a preset threshold. We then recover the new points representing the vertices of each sub-edge of the perimeter. These points form a new grid representing the allowed positions of the cameras. This finally results in two grids: one for the points to be covered and one for the allowed placement of the cameras.

The placement model remains almost the same as stated above. The only change is in the manner of defining the two following binary variables \({S}_{i_1 j_1 \varphi nc}\) and \({V}_{cp}^{nc}({i}_{1}, {j}_{1}, {\varphi}, {i}_{2}, {j}_{2})\). These latter are expressed as follows:

$$ S_{i_1 j_1 \varphi nc} = \left\{ \begin{array}{ll} 1 & \text{if a camera of type } nc \text{ with orientation } \varphi \text{ is placed at point } (i_1, j_1) \text{ of grid 1} \\ 0 & \text{otherwise} \\ \end{array} \right. $$
(1)
$$ V_{cp}^{nc}\left( i_1, j_1, \varphi, i_2, j_2 \right) = \left\{ \begin{array}{ll} 1 & \text{if a camera of type } nc \text{ with orientation } \varphi \text{ placed at point } (i_1, j_1) \text{ of grid 1 covers the point } (i_2, j_2) \text{ of grid 2} \\ 0 & \text{otherwise} \\ \end{array} \right. $$
(2)

However, solving this type of camera placement model can be infeasible. This can happen when the cameras, constrained to the allowed positions of one grid, cannot cover some points of the other grid. To overcome this problem, we relaxed the problem stated above: instead of minimizing the number of cameras while ensuring total point coverage, the objective is to minimize the number of cameras while maximizing the number of covered points. This optimization problem is formulated as follows:

Let a new variable \(w1_{i_2 j_2}\) be defined as:

$$ w1_{i_2 j_2} = \left\{ \begin{array}{ll} 1 & \text{if the point } (i_2, j_2) \text{ is covered} \\ 0 & \text{otherwise} \\ \end{array} \right. $$
(3)

The objective function becomes as follows:

$$ G1 = \sum\limits_{nc = 1}^{NC} k_{nc} \left( \sum\limits_{\varphi = 1}^{f_{\varphi}} \sum\limits_{i_1 = 1}^{f_{x1}} \sum\limits_{j_1 = 1}^{f_{y1}} S_{i_1 j_1 \varphi nc} \right) - \sum\limits_{i_2 = 1}^{f_{x2}} \sum\limits_{j_2 = 1}^{f_{y2}} w1_{i_2 j_2} $$
(4)

Subject to the following constraints:

$$ \left\{ \begin{array}{l} - w1_{i_2 j_2} + \sum\limits_{nc = 1}^{NC} k_{nc} \left( \sum\limits_{\varphi = 1}^{f_{\varphi}} \sum\limits_{i_1 = 1}^{f_{x1}} \sum\limits_{j_1 = 1}^{f_{y1}} S_{i_1 j_1 \varphi nc} \cdot V_{cp}^{nc}\left( i_1, j_1, \varphi, i_2, j_2 \right) \right) \ge 0 \\ \sum\limits_{nc = 1}^{NC} \sum\limits_{\varphi = 1}^{f_{\varphi}} S_{i_1 j_1 \varphi nc} \le L, \quad 0 \le i_2 \le f_{x2}; \; 0 \le j_2 \le f_{y2} \\ \end{array} \right. $$
(5)
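To make the relaxed formulation concrete, here is a minimal Python sketch that evaluates the objective \(G1\) of Eq. (4) for a candidate placement; it assumes the binary arrays S (placement decisions of Eq. 1) and V (coverage indicators of Eq. 2) are already available, and the array shapes and random test data are illustrative only.

```python
import numpy as np

def relaxed_objective(S, V, k):
    """Evaluate G1 = camera cost - number of covered grid-2 points (Eq. 4).

    S[nc, phi, i1, j1]          in {0,1}: camera of type nc with orientation phi at (i1, j1) of grid 1.
    V[nc, phi, i1, j1, i2, j2]  in {0,1}: such a camera covers point (i2, j2) of grid 2.
    k[nc]                       : cost of a camera of type nc.
    """
    camera_cost = float(np.sum(k * S.sum(axis=(1, 2, 3))))
    # A point (i2, j2) is covered (w1 = 1) if at least one placed camera sees it (constraint 5).
    hits = np.einsum('npij,npijkl->kl', S, V)
    w1 = (hits >= 1).astype(int)
    return camera_cost - int(w1.sum()), w1

# Tiny illustrative instance: 1 camera type, 2 orientations, 3x3 placement grid, 4x4 coverage grid.
rng = np.random.default_rng(0)
S = rng.integers(0, 2, size=(1, 2, 3, 3))
V = rng.integers(0, 2, size=(1, 2, 3, 3, 4, 4))
k = np.array([1.0])
g1, w1 = relaxed_objective(S, V, k)
print("G1 =", g1, "covered points =", int(w1.sum()))
```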

The area covered by or visible from a camera relates to research known as visibility analysis. The observable points or areas of a camera are restricted by geographical features, such as buildings and trees, as well as by the characteristics of the camera. Such characteristics include working distance and horizontal and vertical angle limits. Therefore, the visibility between each camera to be placed and the point to be viewed is analyzed. The visibility is decided by the line segment connecting each camera to the grid points within its field of view.

As shown in Fig. 3, the line segment AB crosses no block. Therefore, point B is visible from camera A and vice versa. The line segment AC crosses a block. Therefore, point C is invisible from camera A and vice versa.
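The visibility test described above can be sketched with a standard 2-D segment-intersection routine: a grid point is visible from a camera if the segment between them crosses no obstacle edge. The function names and the orientation-based intersection test below are a common textbook construction, given as an assumption-laden illustration rather than the exact implementation used in this work.

```python
def _orient(p, q, r):
    """Signed orientation of (p, q, r): > 0 counter-clockwise, < 0 clockwise, 0 collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_intersect(a, b, c, d):
    """True if segment ab intersects segment cd (touching counts as blocking)."""
    d1, d2 = _orient(c, d, a), _orient(c, d, b)
    d3, d4 = _orient(a, b, c), _orient(a, b, d)
    if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)):
        return True
    def on_seg(p, q, r):  # collinear point r lies on segment pq
        return min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and min(p[1], q[1]) <= r[1] <= max(p[1], q[1])
    return (d1 == 0 and on_seg(c, d, a)) or (d2 == 0 and on_seg(c, d, b)) \
        or (d3 == 0 and on_seg(a, b, c)) or (d4 == 0 and on_seg(a, b, d))

def visible(camera, point, obstacle_edges):
    """A point is visible from the camera if the line of sight crosses no obstacle edge (Fig. 3)."""
    return not any(segments_intersect(camera, point, e0, e1) for (e0, e1) in obstacle_edges)

# Toy configuration mirroring Fig. 3: segment AB crosses no block, AC crosses the block.
block = [((2, 1), (3, 1)), ((3, 1), (3, 3)), ((3, 3), (2, 3)), ((2, 3), (2, 1))]
A, B, C = (0, 0), (1, 4), (5, 2)
print(visible(A, B, block))   # True:  B is visible from A
print(visible(A, C, block))   # False: C is hidden behind the block
```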

(a) Placement problem with the possibility of selecting essential zones

Fig. 3

Test illustrating the influence of the obstacle on the range of vision

A zone of interest (or essential zone) is a critical and very important zone which deserves more attention during the monitoring operation, such as the entrance of a company or the armory in a military barracks. In our work, a zone of interest is equivalent to a selected set of points, since the coverage area is approximated by a grid of points. Selecting essential zones can be beneficial in large-scale coverage, where a large number of cameras are required to cover the entire area to be monitored. The cameras should observe all essential regions and as many other points as possible. Here, we introduce a new parameter \(w2\), defined as follows:

$$ w2_{i_2 j_2} = \left\{ \begin{array}{ll} 1 & \text{if the point } (i_2, j_2) \text{ is an essential point} \\ 0 & \text{otherwise} \\ \end{array} \right. $$
(6)

The optimization problem is formulated as follows:

$$ G2 = \sum\limits_{nc = 1}^{NC} k_{nc} \left( \sum\limits_{\varphi = 1}^{f_{\varphi}} \sum\limits_{i_1 = 1}^{f_{x1}} \sum\limits_{j_1 = 1}^{f_{y1}} S_{i_1 j_1 \varphi nc} \right) - \sum\limits_{i_2 = 1}^{f_{x2}} \sum\limits_{j_2 = 1}^{f_{y2}} \left( 1 - w2_{i_2 j_2} \right) w1_{i_2 j_2} $$
(7)

Subject to:

$$ \left\{ \begin{array}{ll} \sum\limits_{nc = 1}^{NC} \sum\limits_{\varphi = 1}^{f_{\varphi}} \sum\limits_{i_1 = 1}^{f_{x1}} \sum\limits_{j_1 = 1}^{f_{y1}} S_{i_1 j_1 \varphi nc} \cdot V_{cp}^{nc}\left( i_1, j_1, \varphi, i_2, j_2 \right) \ge M, & \text{if } w2_{i_2 j_2} = 1 \\ - w1_{i_2 j_2} + \sum\limits_{nc = 1}^{NC} \sum\limits_{\varphi = 1}^{f_{\varphi}} \sum\limits_{i_1 = 1}^{f_{x1}} \sum\limits_{j_1 = 1}^{f_{y1}} S_{i_1 j_1 \varphi nc} \cdot V_{cp}^{nc}\left( i_1, j_1, \varphi, i_2, j_2 \right) \ge 0, & \text{if } w2_{i_2 j_2} = 0 \\ 0 \le i_2 \le f_{x2}; \quad 0 \le j_2 \le f_{y2} & \\ \end{array} \right. $$
(8)
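Following Eq. (7), the only change with respect to the sketch after Eq. (5) is that essential points (\(w2 = 1\)) are excluded from the coverage reward and must instead be covered by at least M cameras; the array names below follow the earlier sketch and are illustrative assumptions.

```python
import numpy as np

def essential_objective(S, V, k, w2, M=1):
    """Evaluate G2 of Eq. (7): camera cost minus coverage of non-essential points only.

    Returns (G2, feasible), where feasible is False if some essential point (w2 = 1)
    is seen by fewer than M cameras (first constraint of Eq. 8).
    """
    camera_cost = float(np.sum(k * S.sum(axis=(1, 2, 3))))
    hits = np.einsum('npij,npijkl->kl', S, V)          # number of cameras covering each point
    w1 = (hits >= 1).astype(int)
    g2 = camera_cost - float(np.sum((1 - w2) * w1))    # reward only non-essential covered points
    feasible = bool(np.all(hits[w2 == 1] >= M))        # essential points must be covered M times
    return g2, feasible
```

In a swarm-based search, particles violating the essential-coverage condition can then either be repaired or penalized, as discussed in the experimental section.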

3 Novel Binary Particle Swarm Optimization (NBPSO)

As our camera network placement problem is treated in a discrete optimization space, where the variables can only take the value 0 or 1, we first focus our study on the case of PSO. The binary PSO algorithm poses two problems when placing cameras in an unstructured network: the first concerns the parameters of the binary PSO and the second the memory capacity required by the binary PSO.

In this section, the camera positions of the swarm are updated as in the continuous or binary variant. The primary difference between this algorithm and other versions of the binary PSO algorithm is the interpretation of the velocity. As in the continuous variant of PSO, the velocity of a particle position is the rate at which the bits of the particle position change their value. We introduce two vectors for each particle position, \({\overrightarrow{V}}_{i}^{0}\) and \({\overrightarrow{V}}_{i}^{1}\): \({\overrightarrow{V}}_{i}^{0}\) gives the probability that the bits of the particle change to zero, whereas \({\overrightarrow{V}}_{i}^{1}\) gives the probability that they change to one. Since the inertia term is used in the equations updating these velocities, shown below, the two velocities are not complementary. The probability of change of the j-th bit of the i-th particle is then given as follows:

$$ V_{ij}^{c} = \left\{ \begin{array}{ll} V_{ij}^{1}, & \text{if } x_{ij} = 0 \\ V_{ij}^{0}, & \text{if } x_{ij} = 1 \\ \end{array} \right. $$
(9)

In this way, the velocity of the particles toward an improved placement is simply calculated. The proposed update for \({\overrightarrow{V}}_{i}^{0}\) and \({\overrightarrow{V}}_{i}^{1}\) is given as follows.

Consider that the best position visited by a particle is \({P}_{ibest}\) and that the best global camera position for the swarm is \({P}_{gbest}\).

Suppose that the j-th bit of the i-th particle's best position is equal to one. Then, in order to guide the j-th bit of the i-th particle toward its best position and camera orientation, the velocity of change to one (\({\overrightarrow{V}}_{i}^{1}\)) for that particle is increased and the velocity of change to zero (\({\overrightarrow{V}}_{i}^{0}\)) is reduced. Using this concept, the following update rules are extracted:

$$ \left\{ {\begin{array}{*{20}c} {if \quad P_{ibest}^{j} = 1 \quad then \quad d_{ij,1}^{1} = c_{1} r_{1} \quad and \quad d_{ij,1}^{0} = - c_{1} r_{1} } \\ {if \quad P_{ibest}^{j} = 0 \quad then \quad d_{ij,1}^{0} = c_{1} r_{1} \quad and \quad d_{ij,1}^{1} = - c_{1} r_{1} } \\ {if \quad P_{gbest}^{j} = 1 \quad then \quad d_{ij,2}^{1} = c_{2} r_{2} \quad and \quad d_{ij,2}^{0} = - c_{2} r_{2} } \\ {if \quad P_{gbest}^{j} = 0 \quad then \quad d_{ij,2}^{0} = c_{2} r_{2} \quad and \quad d_{ij,2}^{1} = - c_{2} r_{2} } \\ \end{array} } \right. $$
(10)

where \({d}_{ij}^{1}\) and \({d}_{ij}^{0}\) are two temporary values, \({r}_{1}\) and \({r}_{2}\) are random values uniformly drawn in (0,1) at each iteration, and \({c}_{1}\) and \({c}_{2}\) are two fixed parameters determined by the user.

$$ \left\{ {\begin{array}{*{20}c} {V_{ij}^{1} = wV_{ij}^{1} + d_{ij,1}^{1} + d_{ij ,2 }^{1} } \\ {V_{ij}^{0} = wV_{ij}^{0} + d_{ij,1}^{0} + d_{ij ,2 }^{0} } \\ \end{array} } \right. $$
(11)

where w represents the inertia term. Similarly, if the j-th bit of the best position is zero, the velocity of change to zero (\({V}_{ij}^{0}\)) is increased, and the probability of change to one decreases in the same proportion.

After the velocities \({V}_{ij}^{0}\) and \({V}_{ij}^{1}\) are updated according to Eq. (11), we transform them to normalized values using the sigmoid function \({V{^{\prime}}}_{ij}=sig\left({V}_{ij}\left(t\right)\right)=\frac{1}{1+{e}^{-{V}_{ij}(t)}}\).

$$ x_{ij} \left( {t + 1} \right) = \left\{ \begin{array}{ll} \overline{x}_{ij} \left( t \right), & \text{if } r_{ij} < V^{\prime}_{ij} \\ x_{ij} \left( t \right), & \text{if } r_{ij} \ge V^{\prime}_{ij} \\ \end{array} \right. $$
(12)

where \({\overline{x} }_{ij}\) is the complement of \({x}_{ij}\), that is, its binary value flipped, and \({r}_{ij}\) is a uniform random value between 0 and 1.

The meaning of the terms used in the velocity relation is exactly the same as for continuous PSO. The inertia weight now preserves the previous tendency of the particle bits to change toward the improved personal bit or the improved global bit, whether to 1 or to 0. Using Eq. (12), the previous particle value is taken into account, as in BPSO, through a single change probability per bit. The complete steps of the proposed algorithm are detailed below:

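Since the original algorithm figure is not reproduced here, the following Python sketch assembles the update rules of Eqs. (9)–(12) into a minimal NBPSO loop for a generic binary minimization problem; the swarm size, parameter values, and the toy fitness function are illustrative placeholders to be replaced by the camera-placement objective of Sect. 2.

```python
import numpy as np

def nbpso(fitness, n_bits, n_particles=30, n_iter=100, c1=2.0, c2=2.0, w=1.0, rng=None):
    """Minimal NBPSO sketch (minimization) following Eqs. (9)-(12)."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.integers(0, 2, size=(n_particles, n_bits))       # particle positions (bits)
    V1 = np.zeros((n_particles, n_bits))                     # velocity of change to 1
    V0 = np.zeros((n_particles, n_bits))                     # velocity of change to 0
    pbest = X.copy()
    pbest_f = np.array([fitness(x) for x in X])
    g = pbest_f.argmin()
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]

    for _ in range(n_iter):
        r1 = rng.random((n_particles, n_bits))
        r2 = rng.random((n_particles, n_bits))
        d1_1 = np.where(pbest == 1, c1 * r1, -c1 * r1)       # Eq. (10): pbest terms for change to 1
        d2_1 = np.where(gbest == 1, c2 * r2, -c2 * r2)       # Eq. (10): gbest terms for change to 1
        V1 = w * V1 + d1_1 + d2_1                            # Eq. (11)
        V0 = w * V0 - d1_1 - d2_1                            # Eq. (11): change-to-0 terms are the negatives
        Vc = np.where(X == 0, V1, V0)                        # Eq. (9): pick velocity by current bit value
        prob = 1.0 / (1.0 + np.exp(-Vc))                     # sigmoid normalization
        flip = rng.random((n_particles, n_bits)) < prob
        X = np.where(flip, 1 - X, X)                         # Eq. (12): flip the bit if r < V'
        f = np.array([fitness(x) for x in X])
        better = f < pbest_f
        pbest = np.where(better[:, None], X, pbest)
        pbest_f = np.where(better, f, pbest_f)
        if f.min() < gbest_f:
            gbest_f, gbest = f.min(), X[f.argmin()].copy()
    return gbest, gbest_f

# Toy usage: minimize the number of set bits (placeholder for the placement objective G1 or G2).
best, best_cost = nbpso(lambda x: int(x.sum()), n_bits=16, n_iter=60, rng=np.random.default_rng(1))
print(best, best_cost)
```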

4 Experimental Results

In this section, we present the results of NBPSO for the problem of placing cameras in a uniform network with constraints (obstacles, unstructured areas, etc.). One of the difficulties in designing an NBPSO algorithm is the representation, that is, the mapping between a solution of the problem and the state of the NBPSO particles. In NBPSO, the swarm is a network of particles in which each binary entry indicates the position and orientation of a camera to be deployed.
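As an illustration of one possible mapping (an assumption for this sketch, not necessarily the exact encoding used here), each bit of a particle can index a (position, orientation, camera-type) triple of grid 1:

```python
def bit_index(i1, j1, phi, nc, fx1, fy1, f_phi):
    """Flatten the decision S[i1, j1, phi, nc] into a single bit position of the particle."""
    return ((nc * f_phi + phi) * fx1 + i1) * fy1 + j1

def decode(particle, fx1, fy1, f_phi, n_types):
    """Return the list of (i1, j1, phi, nc) for every bit set to 1 in the particle."""
    placements = []
    for nc in range(n_types):
        for phi in range(f_phi):
            for i1 in range(fx1):
                for j1 in range(fy1):
                    if particle[bit_index(i1, j1, phi, nc, fx1, fy1, f_phi)]:
                        placements.append((i1, j1, phi, nc))
    return placements
```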

In this work, the algorithm is simulated to solve the problem of placing the camera network in a uniform system with a 2-D grid of 2 × 2 points and two position and orientation values. Several studies in the literature extend PSO to handle constrained optimization problems. In this work, an efficient framework for solving the constrained problem described in the previous section is introduced for particle-swarm-based methods. A feasibility-preserving strategy is used in NBPSO to handle the time and memory constraints and to satisfy all the imposed constraints.

Two changes have been introduced to handle the constraints in the NBPSO algorithm: the first concerns the parameters of the binary PSO and the second the memory capacity of the binary PSO. The idea behind these changes is to transform the constrained problem into an unconstrained one handled by NBPSO.
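One common way of bringing the problem to an unconstrained one, shown here only as an illustrative sketch rather than the exact scheme used in this work, is to add a penalty term to the fitness evaluated by NBPSO; the weight value below is an arbitrary assumption.

```python
def penalized_fitness(particle, objective, constraint_violations, penalty=1e3):
    """Unconstrained fitness = objective + penalty * (amount of constraint violation).

    `objective(particle)` returns G1 or G2; `constraint_violations(particle)` returns a
    non-negative number measuring how far the particle is from feasibility (0 if feasible).
    """
    return objective(particle) + penalty * constraint_violations(particle)

# Example: the fitness passed to nbpso() could be
#   lambda x: penalized_fitness(x, objective=g1_of, constraint_violations=violations_of)
# where g1_of and violations_of are hypothetical wrappers around Eqs. (4) and (5).
```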

Experimental tests have been completed using several approaches based on the PSO algorithm, whose results are presented in Figs. 4 and 5: NBPSO, BPSO, improved BPSO (I-BPSO), BPSO using an artificial immune system based on negative selection (BPSO-AIS-NS), BPSO-AIS based on compatibility degree (BPSO-AIS-CD), and BPSO inspired probability (BPSO-IP).

Fig. 4

Several tests of optimal placement of surveillance cameras in a uniform environment with obstacles

Before presenting the comparison results, we show some results obtained by the NBPSO algorithm in a case of random camera deployment. We evaluated the NBPSO algorithm with several area dimensions and camera types; the results are given in Fig. 4e–f. Subsequently, we evaluated the NBPSO algorithm in an environment with obstacles (see Fig. 4a–d).

In all panels of Fig. 4a–f, the squares represent the borders of the area to be covered, while the light green triangles represent the portion of the grid detected by a camera using our NBPSO approach. The grid nodes to be covered are the points of interest. The large black dots represent the optimal positions of the cameras deployed using the proposed NBPSO algorithm. Figure 4 illustrates examples where the majority of the observed grid points are covered with an optimal placement of the cameras, while the cameras are constrained to be located only on the allowed black grid.

The results illustrating the cases of a uniform or complex coverage area are presented in Fig. 4. In this figure, we can observe the effect of the presence of an obstacle on the visual field. All grid points are covered and each obstacle is taken into consideration. However, in some cases, small areas are not covered. This can be avoided by increasing the sampling frequency of the grid.

Figure 5 shows the results obtained on the 2D surveillance benchmark for all the compared algorithms. Each technique was run 80 times. For each run, the optimal configuration and the number of iterations needed to reach it were recorded. The success rate and the median number of iterations were then computed for each method. We can see in Fig. 5 that our proposed algorithm clearly stands out among the compared algorithms and most reliably reaches the global optimum.

Fig. 5

Evaluation results of the proposed method for optimal placement of a surveillance camera network in a uniform environment with obstacles. a Success rate of the methods, b Average and maximum number of iterations, c Influence of the number of cameras on average computation time, d Influence of the number of cameras on the coverage per camera

The goal is to minimize the total number of cameras in a placement, and therefore the cost of the cameras to install, while ensuring that all points on the grid are covered by the deployed cameras. In our work, full area coverage is achieved, meaning that all the points of the grid are covered. Moreover, varying the spatial sampling frequency improved the percentage of the coverage area (see Fig. 5a).

On the other hand, Fig. 5b presents the average and maximum number of iterations obtained for each algorithm. Note that the proposed method converges in fewer iterations than the other methods. Figure 5c presents the comparison in terms of computation time of the different algorithms for various numbers of optimization variables. The NBPSO algorithm vastly outperforms the rest of the methods. Indeed, the computation-time gap between NBPSO and the rest of the algorithms increases with the number of cameras in the placement problem. Figure 5d presents the comparison between PSO and NBPSO in terms of the coverage rate of the surveillance cameras.

We compared our method with other techniques (BPSO, improved BPSO (I-BPSO), BPSO using an artificial immune system based on negative selection (BPSO-AIS-NS), BPSO-AIS based on compatibility degree (BPSO-AIS-CD), and BPSO inspired probability (BPSO-IP)), and it gave good and competitive results.

5 Conclusion

In this article, we have addressed the problem of automatic camera network placement using evolutionary techniques. Simulation results demonstrated the advantage of the proposed NBPSO algorithm in solving coverage problems in uniform areas with obstacles, in particular at large scale, where memory and computing time become critical. Since all the proposed techniques are stochastic, the major difficulty was ensuring a global optimum for a uniform structure; for this reason, particular attention was paid to the probability of the optimization methods reaching the global optimum. In future work, we plan to improve the robustness of the NBPSO method, study the possibility of combining NBPSO with other algorithms to speed up processing in uniform environments and guarantee the global optimum, and apply this technique to robotic navigation applications.

Table 1 Camera specifications for the comparison test