
1 Introduction

Search and rescue techniques are well established and have been used extensively and successfully across many different missions. However, such operations usually involve only a small number of agents because of limited available assets. This creates a problem, as a considerable amount of information is required to complete the mission in the shortest possible time and thus maximise survivability.

With the introduction of swarm robotics into search and rescue missions, a much larger number of search assets can now be deployed. However, the impact of this increased availability of assets needs to be investigated to ensure that the maximum benefit is obtained from the larger number of agents. This article therefore examines the effect that the number of agents in a swarm has on its performance. This is important because swarms consist of distributed, relatively simple intelligent agents from which a group intelligence emerges when they are combined. This emergent intelligence is often unpredictable, and the process is often disruptive. In this article, a swarm of unmanned aerial vehicles is used to conduct a search and rescue mission. Drones can undertake various standard searches but may be limited by the available intelligence and by restricted sensor capability. As such, it is not yet determined what advantages this approach will generate. Given that swarms generate a collective or emergent intelligence from relatively limited distributed intelligence, some obvious questions that need to be investigated are: How smart do agents need to be if there are enough of them? Is there a point beyond which agents no longer need to be so smart? How intelligent does the overall rescue operation need to be for the required information to be generated, and how does this relate to the numbers deployed? To answer these questions, several investigations are presented in this article to examine the relationship between intelligence levels and the number of agents.

The aim of this paper is to examine what effect the number of participating agents has on a search and rescue mission. This is done through simulation, which provides a safe way to investigate swarms across a variety of potential search and rescue operations.

2 Search Methods

There are a number of search strategies outlined in the National Search and Rescue Manual [4]. These methods are the standard for search strategies around the world and have, by and large, remained unchanged for years. Six were considered for this paper: Creep, Parallel, Square, Random, Weighted Random and Greedy.

2.1 Creep and Parallel Search

This is a fairly common method of search, which can be modified as shown in Fig. 1. If the path of the lost vehicle is well known, then a creep search is preferred. If it is believed the track may have deviated, a parallel search may be used.

Fig. 1. Creep (left) and parallel (right) search with 2 agents
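To make the pattern concrete, the following minimal sketch (in Python, not part of the original work) generates lawnmower-style waypoints for a given number of agents; the area size, sweep width and agent count are illustrative assumptions, and swapping the x and y axes turns a creep search into a parallel search.

# Illustrative sketch only: lawnmower (creep/parallel) waypoints for n agents.
# Each agent sweeps back and forth across its own horizontal band of the area.
def lawnmower_waypoints(width, height, sweep_width, n_agents):
    band_height = height / n_agents
    paths = []
    for a in range(n_agents):
        y = a * band_height + sweep_width / 2.0
        band_top = (a + 1) * band_height
        path, going_right = [], True
        while y < band_top:
            start_x, end_x = (0.0, width) if going_right else (width, 0.0)
            path.append((start_x, y))
            path.append((end_x, y))
            y += sweep_width
            going_right = not going_right
        paths.append(path)
    return paths

if __name__ == "__main__":
    # Example: a 1000 m x 500 m area, 50 m sweep width, 2 agents (cf. Fig. 1).
    for i, path in enumerate(lawnmower_waypoints(1000.0, 500.0, 50.0, 2)):
        print(f"agent {i}: {len(path)} waypoints, first {path[0]}, last {path[-1]}")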

2.2 Square Search

If the initial location of the event is well defined, but there is little additional information, then a square search is often the best strategy. The search starts at the point of highest probability and then follows an expanding square-shaped track to cover the area around that point. The expanding square pattern is used to search for victims who have moved from the initial location (Fig. 2).

Fig. 2. A square search
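As an illustration of how such a track can be computed, the sketch below (an assumption-laden example, not taken from [4] or the simulation itself) generates the turn points of an expanding square about a datum point, with leg lengths growing by one sweep width every second turn.

# Illustrative sketch only: waypoints for an expanding square search about a datum.
# Leg lengths grow as 1, 1, 2, 2, 3, 3, ... sweep widths so coverage expands outward.
def expanding_square(datum, sweep_width, n_legs):
    x, y = datum
    waypoints = [(x, y)]
    directions = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # east, north, west, south
    for leg in range(1, n_legs + 1):
        length = ((leg + 1) // 2) * sweep_width
        dx, dy = directions[(leg - 1) % 4]
        x += dx * length
        y += dy * length
        waypoints.append((x, y))
    return waypoints

if __name__ == "__main__":
    # Example: datum at (250, 250) m, 50 m sweep width, 8 legs.
    for wp in expanding_square((250.0, 250.0), 50.0, 8):
        print(wp)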

2.3 Random Search

The random search simply searches the space at random, taking no account of the areas already searched.

2.4 Weighted Random Search

A weighted random search is an improvement on the random search that biases the selected location using whatever other information may be available, such as wind, currents and the most likely location of the initial incident.
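A minimal sketch of the difference between the two random strategies is given below; the cell list and weight map (for example, derived from wind, current and datum information) are assumed purely for illustration.

# Illustrative sketch only: choosing the next cell to visit.
import random

def next_cell_random(all_cells, rng=random):
    # Plain random search: uniform choice over the whole area,
    # taking no account of which cells have already been searched.
    return rng.choice(all_cells)

def next_cell_weighted(all_cells, weight, rng=random):
    # Weighted random search: the same choice, biased towards cells
    # with higher weight in the assumed information map.
    weights = [weight[c] for c in all_cells]
    return rng.choices(all_cells, weights=weights, k=1)[0]

if __name__ == "__main__":
    cells = [(x, y) for x in range(10) for y in range(10)]
    # Hypothetical weight map favouring one corner of the area.
    weight = {(x, y): 1.0 + x + y for (x, y) in cells}
    print(next_cell_random(cells), next_cell_weighted(cells, weight))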

2.5 Greedy Search

The greedy search uses probability to maximise the chance of locating the victim. A Bayesian-type probability map is constructed from experience or a selected prior distribution, and the agents work through the distribution from the highest-probability point towards the lower-probability areas. Theoretically, this should provide the best outcome because it takes into account all the available data from past and current events. In practice, however, it is very sensitive to data errors and to an ill-chosen prior probability. Note that in this simulation the greedy search is given accurate information so that it can be tested as a well-informed search; the only searches that are incorrectly informed are parallel, creep and square (Fig. 3).

Fig. 3. Typical result of a greedy search [5]
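The following sketch illustrates one plausible way to drive a greedy search from a Bayesian-type probability map: each step visits the most probable remaining cell and then applies a Bayes update for an unsuccessful look before renormalising. The grid size, prior shape and detection probability are assumptions for illustration, not parameters taken from this study.

# Illustrative sketch only: greedy search driven by a Bayesian-type probability map.
import numpy as np

def greedy_search_order(prior, p_detect=0.9, max_steps=None):
    # Repeatedly visit the most probable cell, then apply a Bayes update for an
    # unsuccessful look in that cell and renormalise the map.
    prob = prior / prior.sum()
    order = []
    for _ in range(max_steps or prob.size):
        cell = np.unravel_index(np.argmax(prob), prob.shape)
        order.append((int(cell[0]), int(cell[1])))
        prob[cell] *= 1.0 - p_detect   # posterior after not finding the person there
        prob /= prob.sum()
    return order

if __name__ == "__main__":
    # Hypothetical Gaussian-like prior centred in a 20 x 20 m grid.
    y, x = np.mgrid[0:20, 0:20]
    prior = np.exp(-((x - 10.0) ** 2 + (y - 10.0) ** 2) / 20.0)
    print(greedy_search_order(prior, max_steps=5))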

3 Simulation

In order to investigate and compare the six methods, simulation was used. A number of simulations were carried out with each method using various numbers of agents, search widths and placement distributions for the person. AnyLogic© was used as the simulation platform; it is a commercial product able to model using dynamic modelling, event modelling and particle modelling [6], and particle modelling was used for this work. For a set number of agents, the time before the person was detected was averaged over thirty runs. This was repeated with different numbers of agents for a total of thirty cases.
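The structure of this experiment can be summarised by the sketch below; run_simulation is a hypothetical stand-in for a single AnyLogic run, and its dummy return value is purely illustrative.

# Illustrative sketch only: mean detection time over thirty runs per swarm size.
import random
import statistics

def run_simulation(strategy, n_agents, seed):
    # Placeholder for one simulation run; returns time to detection in seconds.
    rng = random.Random(seed)
    return rng.expovariate(n_agents / 3600.0)  # dummy model: more agents, faster detection

def average_detection_times(strategy, agent_counts, runs=30):
    results = {}
    for n_agents in agent_counts:
        times = [run_simulation(strategy, n_agents, seed) for seed in range(runs)]
        results[n_agents] = statistics.mean(times)
    return results

if __name__ == "__main__":
    for n, t in average_detection_times("greedy", range(1, 31)).items():
        print(f"{n:2d} agents: mean detection time {t:7.1f} s")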

3.1 Beach

The area in which the simulation was run is defined as the beach. For this simulation the scale was set so that one pixel is equivalent to one metre.

Figure 4 shows the beach, which is the working area of the simulation, during a run of the random strategy. The red area has already been explored, while the light blue area is still waiting to be explored.

Fig. 4. The beach during a random search run [7] (Color figure online)
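A minimal sketch of the coverage bookkeeping implied by this setup is shown below, with the beach held as a boolean grid at one cell per metre; the sensor footprint used here is an assumption for illustration only.

# Illustrative sketch only: coverage bookkeeping for the beach grid of Sect. 3.1.
import numpy as np

class Beach:
    def __init__(self, width_m, height_m):
        # False = unexplored (light blue in Fig. 4), True = explored (red); one cell per metre.
        self.explored = np.zeros((height_m, width_m), dtype=bool)

    def mark_explored(self, x, y, sensor_radius_m=1):
        # Mark the cells within an agent's (assumed) square sensor footprint as explored.
        h, w = self.explored.shape
        y0, y1 = max(0, y - sensor_radius_m), min(h, y + sensor_radius_m + 1)
        x0, x1 = max(0, x - sensor_radius_m), min(w, x + sensor_radius_m + 1)
        self.explored[y0:y1, x0:x1] = True

    def coverage(self):
        return self.explored.mean()

if __name__ == "__main__":
    beach = Beach(1000, 500)
    beach.mark_explored(100, 50, sensor_radius_m=25)
    print(f"coverage: {beach.coverage():.4%}")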

4 Results

The results were plotted in a number of ways, and for each run the results followed a similar form. From Fig. 5 we can see that the greedy search reached a high level of success quickly and continued to improve until about nine agents were deployed. The weighted random search followed a similar trajectory but did not perform as well as the greedy search. That the weighted random search tracked the greedy search so closely is unexpected; it was not expected to approach the success of the greedy search as quickly as it did.

Fig. 5. Typical results from simulation with a random distribution for the person [7]

The parallel and creep strategies perform about as well as each other. These two search methods are the most effective for searching for a person whose placement follows a random distribution.

The square search is the least effective of the methods graphed. Again, this is as one might expect, since it is incorrectly told to expect the person to be at the centre of the area and so tends to spend most of its time near the centre rather than searching the entire area evenly.

The plot of the random search is not visible in this result because it is indistinguishable from the weighted random search in this case.

What this shows is that, for the size of the search area defined, once about 11 drones are deployed the amount of information becomes irrelevant.

Another feature of this graph is that the greedy swarm seems to improve up to about nine agents, which agrees closely with some in-house research. The weighted random swarm may improve a little with a further two agents before the noise takes over. The square search does not seem to improve much beyond a swarm of three agents; this, however, is a limitation of the simulation caused by the size of the search area. The complexity of the search algorithm begins to increase rapidly at this agent count, so the greedy search is used as the intelligent search at higher numbers of agents.

When the area to be searched is increased, the number of agents needed to minimise the search time does increase, but not by as much as one might expect. The basic pattern, however, remains the same. This introduces the idea that it is more efficient to search large areas with higher drone counts. This could lead to a massive change in search and rescue methods, as the requirement to gather large amounts of information to reduce the search area may no longer be necessary (Fig. 6).

Fig. 6. Effect of a significantly increased search area for a random distribution of the person [7]

5 Conclusion

The simulation results follow the expected trend; however, they show that the speed and efficiency may not match expectations. One point of interest is the possibility that search and rescue could be split between over-saturated search strategies, in which the number of agents deployed in an area exceeds the point of convergence, and under-saturated search strategies, which correspond to the more traditional search methods. Minimising the time taken to locate the lost person, particularly in the ocean or another exposed location, is vital to their survival. The results indicate that, for small numbers of agents, the more information available the better the outcome. However, this assumes the data is sound, and this limitation seems to disappear once the number of agents at which the results converge is reached. In the case of the greedy search, for example, the probability is built into a Bayesian map, and the location of the victim was then selected based on a stochastic search of the space. This implies that the information on which the probability map is based is completely accurate, both in terms of the variables selected and the values attached to them. In a real-world situation this is unlikely to be the case, and any errors might be very significant. To overcome this, it would make sense to use a heterogeneous rather than a homogeneous swarm, with individual agents adopting different strategies when the number of agents is limited; this would be a useful area of future research.

Another area currently being researched is having one or more of the agents human-controlled. These humans would not direct the swarm but would act as an element of it. The advantage of such a system is that, while computer-based systems by and large outperform humans when sufficient data is available, they find it hard to make a decision on limited data, where humans do rather better. Many successful rescues have resulted from human intuition, and it is hard to build hunches into AI.

Another area of interest is the possibility and effectiveness of search methods in which the swarm follows a normal search method at the overall level, using the information provided, but directs the agents to smaller search areas where they are oversaturated and can therefore search randomly.