1 Introduction

The recent developments in artificial intelligence, spatio-temporal mapping, sensing technologies, and computing have resulted in rapid improvements in the ability of robots to act independently and to deal with unexpected situations [2]. However, similar to other fields of science, robotics research is currently facing a crisis of reproducibility, which has several causes. The first is complexity: each robotic system comprises a large number of individual submodules, which are often evaluated in isolation, while their impact on the efficiency of the entire system is neglected. The second reason is experimental cost: performing a field experiment requires one to solve several technical and logistical issues, which are not scientifically appealing. Thus, evaluations are often based on datasets which do not reflect the complexity and unpredictability of real environments. The third reason is the evaluation criteria: failures of the methods are often attributed to technical issues, and the failed cases are not accounted for in experimental results. Therefore, the reliability and robustness of the methods are often not included in experimental evaluations, which typically focus on other aspects, such as accuracy or computational complexity [7, 13].

All of the aforementioned issues contribute to sub-optimal performance of robotic systems when deployed in real-world situations. To alleviate this problem, several experts have proposed comparing the performance of robotic systems by means of robotic contests, such as MIROSOT [35], Eurobot [20], RoboTour [10], RockIn [31] or MBZIRC [32]. Each of these contests aims at different abilities of the robots, and they evaluate the complete system performance in a limited number of experimental trials. Thus, in these contests, system reliability is more important than the performance of the individual modules (Fig. 1).

Fig. 1. Team of our ground and aerial robots deployed in the DARPA contest mine operated by the Federal Centre for Disease Control.

One of the long-researched topics in the robotics community is autonomous exploration, where a robot is supposed to build a detailed spatial [4] or spatio-temporal [28] map of its operational environment. Apart from the mapping itself, exploration requires the robot to estimate its own position accurately, decide where to move to refine and complete its map, traverse difficult terrain, avoid dangerous situations, and recover from mapping or self-localisation errors. While the aforementioned problems have been thoroughly studied separately, there are not many systems capable of performing exploration reliably without supporting infrastructure that allows for their supervision [28].

The ability to rapidly explore the operational environment is desirable especially in security and search-and-rescue scenarios, where the robots are supposed to create a map of the affected area and identify locations of potential victims or other relevant objects. It is assumed that knowledge provided by the robots, i.e., detailed maps of the environment, improves the safety and efficiency of search-and-rescue teams. Moreover, robots can provide information from non-accessible or otherwise dangerous areas, such as gas-filled tunnels, unstable ground, damaged structures, large crevices, or contaminated locations.

The robots for search-and-rescue missions are often teleoperated by radio, and their ability to act independently is limited to ensure efficiency and safety during the mission. However, large-scale underground disaster response scenarios with no prior communication infrastructure, such as those illustrated by the DARPA Subterranean Challenge, require a certain level of autonomy, since the connection with the robots might be lost. Thus, the robots deployed in these scenarios have to be able to act autonomously in order to perform the exploration.

The DARPA SubT challenge aims to boost the development of reliable autonomous robotic exploration in adverse underground environments without any supporting infrastructure. The common practice in robotics research is to evaluate the individual capabilities of robotic systems. During the contest, however, the performance of robotic teams is evaluated by their ability to quickly and accurately locate relevant objects in underground sites with a variable degree of complexity. Such an evaluation puts the whole robotic system to the test.

In this paper, we describe the multi-robot system developed for the DARPA SubT challenge by the team CTU-CRAS of the Czech Technical University in Prague. The description contains details about the hardware of the robots, localization systems, mapping, navigation, as well as our approach to multi-robot coordination.

2 Related Work

Despite the advances in mapping and localisation, robotic exploration of hazardous environments in search-and-rescue scenarios is typically performed by means of teleoperation. However, limitations in bandwidth, data delays and outages, sensor noise, sensor deficiencies and adverse environmental factors impose a high cognitive load on the robot operator. This, along with the need for fast mission execution, results in significant stress, which often leads to performance deterioration and frequent errors with potentially serious consequences. All of these factors are multiplied if the operator has to supervise and direct multiple robots, see Fig. 2. These factors, along with the ones mentioned in Sect. 1, motivated the research in autonomous exploration of adverse environments, such as underground facilities.

Fig. 2. Setup of the operator station and signs of stress when supervising a team of robots with limited autonomous capabilities

One of the first attempts to acquire a 3D map of underground mines was reported in [33], followed by [9]. While these works were successful, they focused on 3D mapping itself and did not consider autonomous exploration or object detection. Rather, these mapping systems consisted of a sensor-equipped cart, which was guided through the mine by a human operator. After this pioneering effort, more authors addressed the same problem. Some of these efforts are summarized in [16], which compares the performance of different subterranean mapping methods. In [16], the authors conclude that the sensor technologies, data processing algorithms and mapping methods are not ready to compose a reliable system for autonomous exploration. The importance of using robots in mine search-and-rescue operations was stressed by the authors of [18], who summarized the impact of mine accidents and their experiences with using robots in underground search-and-rescue missions.

Since then, new sensors like 3D lidars and high-dynamic-range cameras, as well as better localisation and mapping algorithms, have emerged, leading to new solutions to the problem. For example, the authors of [37] propose a marsupial setup consisting of a “mother” and a “baby” robot, where the latter is released to map hard-to-access areas. The authors of [19] investigated the performance of different mapping strategies using a fast 3D laser scanner. However, the need for reliability still favors robust solutions, which can withstand explosions and fire [34] or can operate in flooded mines [15]. These often exploit installed infrastructure, like magnetic wires, ceiling lights or radio beacons [27], and use reactive behaviors for navigation. Still, the advances in 3D mapping, traversability analysis and autonomous navigation have allowed for the design of systems that perform the exploration in an autonomous manner while reasoning about the mine structure [21]. Similar systems were reported to autonomously venture hundreds of meters into abandoned mines [14].

In this paper, we report on the hardware, software and communication design and performance of a multi-robot system, which builds upon the experience of the aforementioned works as well as lessons learned during projects aimed at robotic search-and-rescue missions [12].

3 Contest Specification

DARPA's Subterranean Challenge (SubT) is one of the contests organized by the Defense Advanced Research Projects Agency (DARPA) to test and push the limits of current technology. The SubT challenge is focused on the exploration of unknown, large subterranean environments by teams of ground and aerial mobile robots. In particular, the task of the robotic teams is to actively search for people and other objects of interest. The efficiency of a team is assessed by the number of objects found and the accuracy of their position estimates [6].

The contest takes place at several different underground courses with a structure unknown to the participating teams. A typical course is expected to span over 8 km in length and contain about 20 objects of interest. The contesting robots are expected to operate in adverse conditions such as fog, rubble, dripping water, or mud, and they have to be able to traverse inclined, declined and constrained passages. A contest team can set up its control station close to the course entrance, but only the robots are allowed to enter the course itself. The organizers provide basic information about the type of course and indicate which types of objects are to be located. After that, the robots are sent inside, and only one operator is permitted to supervise their status from the control station located at the course entrance. The robots are then required to locate the specified objects and report their positions to the supervising team. The team score increases each time a robot locates an object and reports its position within 5 m of its true position. However, the team score is decremented in the case of a false positive detection or mislocalisation.
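To make the scoring rule concrete, the following sketch (a hypothetical helper, not DARPA's actual scoring software) encodes the rule as described above: a report earns a point only if the artifact type is correct and the reported position lies within 5 m of the true position; otherwise a point is deducted.

```python
import math

def score_report(reported_type, reported_xyz, true_type, true_xyz, tolerance=5.0):
    """Return +1 for a correct report within the 5 m bound, -1 otherwise.

    Illustrative only: the real scoring is performed by DARPA's systems;
    this merely encodes the rule described in the text.
    """
    distance = math.dist(reported_xyz, true_xyz)  # Euclidean distance in metres
    if reported_type == true_type and distance <= tolerance:
        return 1   # correct type reported within the error bound
    return -1      # false positive or mislocalised report
```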

3.1 Contest Environment

The challenge is divided into four rounds, where the first three take place in specific environments (“Tunnels”, “Urban” and “Caves”), each with a different set of objects to be located. The “Tunnel” round takes place in a mine, the “Urban” round in an underground parking lot or a subway station, and the “Caves” round in naturally formed caves. The final, fourth round will combine all three environments.

All environments are expected to have little or no GPS coverage and a very limited range of radio communication. The absence of communication imposes strong requirements on the robots’ ability to operate autonomously. The robots will have to autonomously locate the objects and return to the control station to report their positions.

Each environment brings its own challenges in terms of physical barriers and dynamics.

The “Tunnel” environment will comprise long tunnels of similar width, with ground varying from dry concrete to debris, mud, shallow water and train tracks. Moreover, the robots are expected to encounter fog, smoke and dripping water.

The second, “Urban” track resembles urban underground areas, such as subway stations or basements. Thus, one can expect better lighting conditions, and open, large areas interconnected with narrow tunnels accompanied by stairs, ladders and inclines. We also expect perceptual problems caused by glossy, reflective surfaces, which pose a problem both for cameras and laser rangefinders.

The last type of environment – the “Caves” – is most likely going to be set in larger caverns with smaller constrained passages in between. This environment is going to be challenging due to the vastness of the caverns and the difficult traversability of the terrain, which is expected to be uneven and slippery.

3.2 Artifacts and Scoring

The scoring of teams is performed for each run of each track. Points are earned for messages sent to the DARPA-provided interface – these messages contain the identification of the detected object (artifact) along with its position. The score is increased if the correct type is reported within the correct error bound, which is illustrated in Fig. 3. Each of the artifact types is specified by DARPA, which provides the exact item appearance and a link to where it can be obtained. For the first track, several artifacts were published, comprising backpacks, fire extinguishers, small drills, human survivors (dressed-up mannequins) and cell phones (Fig. 3). For the later tracks, the robots might have to perform tasks other than pure object search, e.g. they might need to measure the presence of hazardous gasses or identify specific locations of the infrastructure (e.g. the position of ingress).

Fig. 3. (Left) Artifact detection error boundaries, (Right) DARPA-provided artifact types

3.3 Competition Timeline and Rounds

The competition is held from September 2018 until August 2021.

Between these dates, there are three competition tracks followed by a final event, see Table 1. Apart from these tracks, there are two exercise events that provide an opportunity to further test the systems in close-to-real conditions. One, called STIX, was already held in April 2019, and the other one will probably occur in April 2021. In this paper, we describe the results achieved during the STIX and Tunnel circuit rounds.

Table 1. Timeline of the whole competition [6]

4 Robots and Sensory Equipment

The team comprises two tracked UGVs, one wheeled UGV, two six-legged crawling robots and two quadrotor UAVs. The capabilities of the individual robots of the team are complementary. The tracked robots can traverse more difficult terrain, while the wheeled robot is faster. The crawling robots, which are rather slow, are capable of navigating adverse terrain, narrow tunnels and other spatially constrained areas. The quadrotor robots are not constrained by the terrain at all, but their operation time is short and they do not carry all the sensors of their ground counterparts because of their limited payload.

4.1 Wheeled Robot - Husky A200

The Husky is a wheeled platform produced by Clearpath, capable of speeds of up to 3 m/s, with a rugged chassis capable of traversing mud, light rocks, and steep inclines. The platform is powered by lead-acid batteries capable of sustaining several hours of operation with payloads of up to 50 kg. The onboard sensors consist of a RoboSense 3D lidar with a \(30^\circ \,\times \,360^\circ \) field of view and a range of up to 200 m, which is the primary sensor for localization and mapping. To detect the objects of interest, the robot is equipped with five Bluefox RGB cameras positioned to achieve a 360\(^\circ \) field of view. To cope with the low-light conditions, the cameras run at 15 FPS and the robot is equipped with LED strips attached to all sides of its body. Object detection itself is performed on an NVidia Jetson TX2 board, which achieves detection rates of around 2 frames per second. A NUC-i5 computer performs the calculations for localization, mapping and control.

Since the payload capacity far exceeds the weight of the sensory and computational equipment, the robot carries some extra devices. Currently, it is equipped with two eight-slot containers for communication relays. The robot can deploy the individual relays in order to keep a low-bandwidth communication link with the command station [5]. In the future, the robot will be fitted with docks for hexapods and UAVs, and deploy them at locations suitable for these platforms similar to [30].

Fig. 4. The Husky platform after leaving a muddy mine. Note the RoboSense 3D lidar on the top and the gray carousel between the wheels. The carousel stores and releases the communication modules described in Subsect. 4.5.

4.2 Tracked Robots - Absolem

The two tracked Absolem platforms produced by BlueBotics SA are designed to traverse terrain common in disaster sites. Their size and weight are slightly lower than those of the Husky robot, and while their payload capacity is sufficient to carry all the desired sensors, they would not be able to carry other robots. Each Absolem robot has two base tracks and four helper tracks (called flippers), which are controlled independently by a method based on [22, 23]. The primary sensor for navigation is a SICK LMS151 lidar attached to a pivot on the front of the robot. The pivot rotates along the robot’s longitudinal axis, rotating the lidar’s scanning plane so that the robot obtains a 3D point cloud of the environment in front of it. The 3D point cloud is processed by an embedded Intel Core i7-based PC, which runs the navigation, mapping, localization and exploration algorithms. Similar to the Husky robot, object detection is performed by a Jetson TX2 board, which processes images provided by the PointGrey Ladybug3 omnicamera. Both Absolem robots are supposed to thoroughly search areas not accessible by the Husky robot. Since their sensory and computational capabilities are similar, the Absolem and Husky robots use the same localization, mapping and exploration methods, which are described in Sect. 5.

4.3 Crawling Robots - Hexapods

The crawling robots are based on the six-legged PhantomX Mark II platform, which can carry over 2 kg of equipment. The robots can move with velocities of around 0.2 m/s, and their batteries allow for 1 h of autonomous operation. Both navigation and object detection are based on a rig composed of a 10 W directional LED illumination and two Intel RealSense cameras. The T265 camera, which performs visual SLAM onboard, provides the main computer with an estimate of its position, relieving it of computationally costly localization algorithms [3]. The D435 RGBD camera provides data for map building, exploration and object detection, performed by an NVidia Jetson TX2 board. Due to their excellent terrain-handling capabilities, these robots are supposed to explore areas inaccessible to the other ground robots [8]. However, their movement speed is rather low, so in future missions they will be carried and deployed by the Husky robot (Fig. 5).

Fig. 5. Absolem (back) and hexapod (front) platforms after a run in the mine

4.4 Aerial Robots - Quadrotors

The aerial robots are based on the F450 kit by DJI, controlled by the PixHawk board. Their primary sensor for localization and mapping is an RPLidar A3, which provides 16000 distance measurements per second in a 360\(^\circ \) circle in one plane. The sensor range is 25 m, and its (adjustable) rotation rate is set to 10 RPS, which provides a sufficient overview to safely move in narrow tunnels at speeds of up to 1 m/s. Laser-based localization is based on the Hector SLAM [11] package, which is supposed to work well in the conditions that we anticipate in the contest environment [29]. For object detection, each UAV carries a Bluefox RGB camera with an LED illumination strip. The detection itself is performed by a modified YOLOv2 CNN on the CPU of the onboard i7 NUC PC, which also performs localization and mapping. With this setup, the processing of one image takes approximately 2 s, so at the standard flight speed of 1 m/s the UAV advances only about 2 m between processed frames, which is just sufficient not to miss an object. Both UAVs are set up to run the exploration process automatically after launch, so they can be deployed easily by any member of the team (Fig. 6).

Fig. 6. One drone platform entering the mine

4.5 Communication

As mentioned before, one of the limiting factors for search-and-rescue robots is the constrained radio communication in the winding underground tunnels. The lack of a radio link with sufficient bandwidth not only prevents direct teleoperation, but also makes mission supervision and monitoring difficult. Ideally, robots should provide the mission supervisor with a live video feed and share information about their environment with each other. This would require them to maintain a high-bandwidth radio link during the entire mission, which is unrealistic. While autonomously operating robots do not need to provide live video and sensor data streams, they still need to share their positions and report the locations of objects to the command station. This means that it is desirable to maintain a reliable, low-bandwidth radio link. To deal with these requirements, our system uses three different kinds of communication links with different levels of reliability, bandwidth and usage.

Short-Range Link: WiFi. During the system setup and initial deployment, all robots and the command station are close to each other. Therefore, one can use standard, off-the-shelf WiFi, which has sufficient bandwidth to transfer data-intensive sensor measurements and videos in real time. This is necessary during system setup, because one needs to verify the functionality of the individual software modules. Moreover, the robots can use the WiFi to exchange the information they gathered about their environment and share their 3D maps.

Mid-range Link: Mobilicom. For mission monitoring and teleoperation, we use high-power, mesh-enabled communication systems produced by Mobilicom. While this system is able to transmit signals over large distances, the signal power drops significantly when line of sight is not maintained. To deal with this issue, we coordinate the mission so that the robots which carry the Mobilicom modules try to maintain line of sight. This allows them to re-transmit the signals from any of them to the base station, located at the mission control site. The system achieves a throughput of about 1 Mbit/s with the individual stations being 100 m apart from each other. This allows the robots to share low-quality video feeds for teleoperation, elevation maps to coordinate exploration, and pictures to confirm object detections. However, the size of the modules allows their use on the wheeled and tracked robots only, which limits the system range to \(\sim \)300 m.

Long-Range Link: Motes. To overcome the aforementioned limitation, we employ another communication system, which has a limited throughput sufficient to share only the robot and object positions. These “Mote” modules are small enough to be carried by all of our robots. The Husky robot, which is always deployed at the beginning of the exploration mission, carries up to 16 of these modules (see Fig. 4) and can drop them to create a network of relays, providing communication infrastructure for the other robots. While the bandwidth of this network is low, it is still sufficient to share the positions of the robots and artifacts amongst the team and transmit this data to the base station.
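To illustrate how little bandwidth such status sharing requires, the sketch below packs a robot pose or artifact report into a fixed 12-byte frame. The message layout and field names are assumptions made for illustration; they do not describe the actual Mote protocol.

```python
import struct
import time

# Illustrative compact message for a low-bandwidth relay network (assumed
# format): robot id, message type, position in decimetres and a timestamp
# fit into 12 bytes per report.
MSG_FORMAT = "<BBhhhI"   # uint8 id, uint8 type, int16 x/y/z (dm), uint32 unix time

MSG_POSE = 0
MSG_ARTIFACT_BACKPACK = 1  # one code per artifact class (assumed encoding)

def pack_report(robot_id, msg_type, x, y, z):
    """Pack a position report (metres) into a 12-byte frame for the relay network."""
    return struct.pack(MSG_FORMAT, robot_id, msg_type,
                       int(x * 10), int(y * 10), int(z * 10),
                       int(time.time()))

def unpack_report(frame):
    """Inverse of pack_report; returns positions back in metres."""
    robot_id, msg_type, x, y, z, stamp = struct.unpack(MSG_FORMAT, frame)
    return robot_id, msg_type, x / 10.0, y / 10.0, z / 10.0, stamp
```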

5 Software

While appropriate hardware is necessary to perform the challenge, the software influences the performance of the system in a significant way. To address the importance of the software solutions, DARPA also organized a “virtual track” where the teams provide software solutions only, and the contest occurs in realistic simulations of the deployment sites. As expected, most teams regularly test their software in simulation as part of the development process, as simulation of the systems is much less tedious compared to real-world testing. Moreover, simulation allows us to test the individual software modules in isolation as well as in a holistic manner. The software used on all of our robots had to solve the problems of localization, mapping, navigation, object detection and exploration. Moreover, the robots had to coordinate their exploration efforts.

5.1 Object Detection

The performance of the exploration system is evaluated in terms of its ability to detect and locate potential victims as well as objects that provide a cue to their location. Object detection is performed by every robot of our team by processing camera data with a customized version of the YOLOv3 [25, 26] object detector. If the robot has a 3D map of its surroundings, the bounding box provided by the YOLOv3 detector is projected into the 3D map. Then, the final position of the detected object is established by applying Kalman filtering over the detections that are temporally consistent. Only if the detection is certain does the robot send the position and an RGB snapshot of the detected object. This prevents flooding of the communication network with image data, which could prevent other robots from reporting the objects they detected.
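The fusion step of this pipeline can be sketched as follows (class and variable names as well as thresholds are illustrative assumptions, not our actual implementation): repeated 3D positions obtained by projecting the YOLOv3 bounding box into the map are fused by a constant-position Kalman filter, and a report is sent only once the track is sufficiently consistent.

```python
import numpy as np

class ArtifactTrack:
    """Minimal constant-position Kalman filter over repeated 3D detections.

    Sketch of the fusion step described in the text; projecting the YOLO
    bounding box into the map and associating detections to tracks are
    assumed to happen elsewhere.
    """

    def __init__(self, first_position, position_var=1.0, meas_var=0.5):
        self.x = np.asarray(first_position, dtype=float)  # estimated position
        self.P = np.eye(3) * position_var                  # estimate covariance
        self.R = np.eye(3) * meas_var                      # measurement noise
        self.hits = 1

    def update(self, measured_position):
        z = np.asarray(measured_position, dtype=float)
        S = self.P + self.R                # innovation covariance
        K = self.P @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(3) - K) @ self.P
        self.hits += 1

    def is_confident(self, min_hits=5, max_var=0.2):
        """Report only temporally consistent, low-uncertainty detections."""
        return self.hits >= min_hits and np.trace(self.P) / 3.0 <= max_var
```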

Since the rules explicitly prohibit training the neural network at the mine where the challenge takes place, we trained the YOLO detector on imagery gathered in other mines. The good performance of the detection method during the contest indicated a good ability of the neural network to generalize beyond the training data.

5.2 Localization and Mapping

Apart from detecting the objects of interest, one has to determine their position precisely. For that, the robot which detects the object has to know its own position with sufficient accuracy. The problem of self-localization, i.e. reliable estimation of the robot position, is tackled differently depending on the given platform. The wheeled and tracked robots exploit the richness of the 3D data provided by their 3D rangefinders. This allows them to combine their odometry with a 3D version of the iterative closest point (ICP) method, which performs simultaneous localization and mapping (SLAM) [24]. The UAVs combine the Hector SLAM [11] method, based on their 2D lidars, with visual localization based on the ORB-SLAM2 method [17] – both methods create a map of the environment as part of the ego-motion estimation process. Localization of the hexapods is based on the Intel T265 camera module, which provides a position estimate based on a proprietary gray-scale visual-inertial SLAM algorithm. These robots are equipped with a secondary RGBD camera (Intel D435), which builds a 3D map [3] and uses the map to guide the robot to unexplored areas.
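As a rough illustration of the scan-to-map alignment underlying the ICP-based SLAM of the ground robots, the following is a textbook point-to-point ICP step; it is a didactic sketch, not the deployed pipeline, which additionally uses odometry priors, outlier rejection and incremental map management.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Align `source` (Nx3 scan) to `target` (Mx3 map) by point-to-point ICP."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    tree = cKDTree(tgt)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)               # nearest-neighbour association
        matched = tgt[idx]
        src_mean, tgt_mean = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_mean).T @ (matched - tgt_mean)
        U, _, Vt = np.linalg.svd(H)            # closed-form rigid transform (Kabsch)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tgt_mean - R @ src_mean
        src = (R @ src.T).T + t                # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total                    # maps the source into the target frame
```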

5.3 Navigation

The maps built in the previous steps are first transformed from full 3D to a 2.5D elevation map, which is subsequently analyzed for traversability. Then, a motion plan is generated and executed by running the A\(^*\) pathfinding algorithm over the elevation map. To improve their ability to overcome adverse terrain, the tracked robots incorporate the information from their RGBD cameras and position their auxiliary tracks accordingly [22, 23]. The UAVs implement a guided random walk algorithm, which forces the vehicle to move in a specified direction while avoiding obstacles on the way.

The method proved to be universally applicable and works well on our crawling, wheeled and tracked robots. As the aerial robots do not move on the ground, they plan their path simply to avoid obstacles detected by the 2D lidar.
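The planning step for the ground robots can be sketched as A\(^*\) search over a traversability grid derived from the 2.5D elevation map. The grid encoding and the step-height threshold below are assumptions made for illustration, not the parameters of the deployed planner.

```python
import heapq
import numpy as np

def plan_on_elevation_map(elev, start, goal, max_step=0.15):
    """A* over a 2.5D elevation grid (metres per cell value).

    Cells reachable only via a step larger than `max_step` metres are treated
    as non-traversable; flatter routes get a slightly lower cost. Diagonal
    moves are omitted for brevity.
    """
    rows, cols = elev.shape
    open_set = [(0.0, start)]
    g = {start: 0.0}
    came_from = {}

    def h(cell):                               # Euclidean heuristic in cells
        return np.hypot(cell[0] - goal[0], cell[1] - goal[1])

    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:                    # reconstruct the path
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            step = abs(elev[nr, nc] - elev[r, c])
            if step > max_step:                # too high a step to climb
                continue
            cost = g[current] + 1.0 + step     # prefer flatter terrain
            if cost < g.get((nr, nc), float("inf")):
                g[(nr, nc)] = cost
                came_from[(nr, nc)] = current
                heapq.heappush(open_set, (cost + h((nr, nc)), (nr, nc)))
    return None                                # no traversable path found
```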

5.4 Exploration

The ground robots use two modes of exploration, both based on the frontier paradigm [36]. The tracked UGVs use frontiers generated from the RGBD cameras, since we established that we cannot reliably recognize objects at greater distances. The entropy of these frontiers is then calculated using position data from all of the robots, which causes each robot to prefer unvisited frontiers over those already seen by itself or by the other robots.

On the Husky platform, however, the frontiers are extracted from the 3D lidar data, so they appear at a greater distance from the platform, which allows the robot to exploit its higher speed when moving towards them. The robot assesses the accessibility of all unexplored cells in the elevation map, marks them as possible frontiers, and then tries to move towards the closest one.
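The frontier idea can be illustrated on a simple occupancy grid as follows. The grid encoding, distances and penalty values are assumptions for illustration; the deployed system scores frontiers by an entropy measure as described above.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1   # assumed grid encoding

def find_frontiers(grid):
    """Return (row, col) cells that are free and border unknown space."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neighbours = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neighbours == UNKNOWN).any():
                frontiers.append((r, c))
    return frontiers

def pick_frontier(frontiers, own_pos, other_robot_positions, penalty=20.0):
    """Choose the nearest frontier, penalising those close to other robots."""
    def cost(f):
        d_own = np.hypot(f[0] - own_pos[0], f[1] - own_pos[1])
        d_others = min((np.hypot(f[0] - p[0], f[1] - p[1])
                        for p in other_robot_positions), default=np.inf)
        return d_own + (penalty if d_others < 10.0 else 0.0)
    return min(frontiers, key=cost) if frontiers else None
```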

5.5 Coordination

Coordination of the robots is done using the powerful data link made available by the 4G Mobilicom mesh communicators in the tunnel. This allows the operator to guide the robots towards the locations of interest. The robots also share their positions with the rest of the robot crew using the low-bandwidth link established by the droppable “Motes” described in Subsect. 4.5. Sharing this information allows better coordination of the exploration efforts by avoiding exploration of areas already visited by other robots. Knowledge of the other robots’ positions is also useful for detecting situations where a robot is stuck or lost.

5.6 Interface

Since the rules of the competition allow only one human to communicate with the robots (including reporting artifacts and teleoperation), it was necessary to design a user-friendly interface for the robot team as well as for each of the larger robots. The large UGVs stream their data to a control PC at the operation base, on which all of the information from each robot is displayed. Detections, positions, maps and other useful information can be shown to the operator, who can then guide the robots out of adverse situations by full teleoperation or by providing them with intermediate goals in the elevation map. The UAVs, on the other hand, have only a fully autonomous mode, and while they provide the positions of the detected objects, they do not allow the operator to interfere with their operation. The artifact detections reported by the ground and aerial robots are provided via a secondary computer to the team supervisor, who can then decide to accept or reject a given object detection.

6 Experiments and Deployments at Mines

So far, our robotic team has been deployed at three experimental sites: “URC Josef” – a CTU experimental mine, the “Edgar” Coal Mine in Idaho Springs and the “NIOSH” mine facility near Pittsburgh.

The “Edgar” mine deployment, which was organized by DARPA, consisted of four independent exploration runs. During these runs, the robots were able to detect three objects at correct positions and report them to the fleet operator. However, due to technical difficulties with inter-robot communication, only one of the aforementioned runs was performed autonomously. After the runs, we were invited to enter the mine in person, and the organizers pointed out difficult situations which we could encounter in future runs, like a fog-generating machine, which filled the corridors with thick haze, or difficult-to-find objects, such as a cellphone located underneath a grilled floor.

The second round of experiments was conducted at “Josef” experimental mine owned by the Czech Technical University in Prague. These were focused on evaluations of the system robustness, dataset gathering, and verification of particular components and approaches. During these tests, we were capable of exploring substantial parts of the mine, going over gravel and uneven terrain, while detecting and localising objects. Moreover, we gradually improved the efficiency of our team, which was reflected in a decrease of the deployment and setup time as well as an increase of the run lengths and numbers of objects detected.

The final four experiments were conducted as part of the DARPA Subterranean “Tunnel” challenge. Two experiments were performed at the “Safety research mine” and the other two at the “Experimental mine” of the National Institute for Occupational Safety and Health (NIOSH). During the first run, our system suffered from radio interference from the emergency stop system, which caused our robots to halt every few meters and prevented smooth operation. The problem was fixed in software, and during the subsequent runs we were able to find and locate five objects per trial. This score placed us one point behind the team CoStar, which was a joint team of researchers from MIT, JPL/NASA, Caltech and KAIST.

The most difficult problem encountered was the operator’s cognitive load when coordinating several robots in the exploration mission. The second problem was related to the lack of perceptual capability to distinguish between shallow, passable puddles and deeper pits filled with water. Other problems were related to communication – the fleet had to be carefully coordinated so that the robots closer to the base station acted as communication relays, ensuring that the radio link to the most distant robot was maintained. On the other hand, the localisation, mapping and object detection algorithms all worked in a satisfactory manner.

7 Conclusion

In this paper, we described a multi-robot system for search-and-rescue operations in underground areas. The team consisted of three heavy-duty platforms with advanced sensors and strong computational capabilities, two crawling robots and two aerial vehicles with limited payload and action radius. We provided basic information about the hardware, software and communication methods, which should be sufficient to give a basic insight into the workings of our system.

The system described here was not implemented from scratch for the purposes of the contest. Rather, it is the result of the integration and refinement of several software components used in other scientific projects at the CTU’s Center for Robotics and Autonomous Systems. The integration efforts started in December 2018, with some of the platforms (e.g. the Husky and its sensors) arriving in May 2019, making it rather challenging to perform the integration and testing prior to the contest.

The performance of the described system was thoroughly tested during the contest organised by the Defense Advanced Research Projects Agency (DARPA). During the “Tunnel” round, where our robots were supposed to search for specific objects in abandoned mines, our team achieved third place out of 11 teams, and first place in the category of teams not funded by DARPA [1].

For the future rounds, we will focus on improving the autonomous behaviors, implementing coordination methods under restricted communication, reducing the operator load and improving the handling of hard terrain. Since the aforementioned issues are not only technical, we will approach them as scientific questions. Thus, participation in the DARPA contest will help us to identify new problems which are worth investigating.