1 INTRODUCTION

The successful use of mobile robots in different fields of civil industry brings researchers to the task of effective robotic group control. Examples include object detection in different types of environment, the use of a group as a distributed sensor system for chemical analysis, etc.

Generalizing these tasks, we can say that robotic group collaboration in densely cluttered terrain is one of the core problems of swarm robotics. In such surroundings the group is usually distributed over the area. This distribution allows the robots to detect dynamic events in real time, search for objects of interest, etc., better than, for example, a static sensor network, because each individual robot can cover a larger part of the area by patrolling rather than staying in place. Consequently, the group can monitor an environment with fewer sensors. To achieve automation within the group while performing such tasks, the robots must operate under predefined rules of collective behavior.

Each individual robot within the group is a device with a specific set of features (sensing, communication, movement and computational processing) and factors limiting them. We consider a homogeneous group of robots ([1], [2]) that are able to perform different simple tasks independently. In addition, the robots must be able to move toward their goal without losing contact with the other group members along the way. Accordingly, three types of problems must be solved simultaneously in real time:

1. Obstacle detection and localization. The success of a mobile group of robots in an unstructured environment is possible only if the robots are able to adjust to variable factors in the environment. These changes, which include the presence of new obstacles, can be detected using sensory systems such as laser technical vision systems, cameras, etc. Obstacle detection serves as the initial stage of motion planning, used to predict possible routes.

2. Group communication ([3], [4]). The need for data transfer arises in robotic teamwork when a collective decision must be made, when a robot lacks its own data to make an individual decision, or when other robots must be informed of abrupt changes in the environment or conditions.

3. Motion planning, or collective navigation ([5], [6], [7], [8]), for a robotic group in cluttered terrain: the task of developing potential trajectories through an unknown obstacle field from a given initial position to a desired goal position.

The main contribution of the present paper is to review possible solutions to these problems and, by implementing them in a model, to understand the influence of data exchange on the interaction of a robotic group with its environment during mapping [9] and goal localization.

This paper is organized as follows: Section 2 describes obstacle detection and digitalization. Section 3 reveals the principles of forming a dynamic data-transfer network. Section 4 covers the aspects and methods of motion planning optimization. Section 5 describes basic modeling results for a group of three robots.

2 TECHNICAL VISION SYSTEM

The most common vision systems use approaches based on CCD or CMOS sensors ([10], [11], [12], [13]), more expensive equipment such as Time-of-Flight cameras [14], or laser-based devices [15]. Most of them are adequate for object recognition, path planning, etc. However, the vision system must also cope with more complicated conditions, such as low light or a sharp landscape change. The approach reviewed here is based on a laser technical vision system (TVS) [16] (Fig. 1). Compared to the others, this system has a wide field-of-view opening angle (up to 160°) and is well suited for operation in completely dark environments, which is impossible for camera systems.

Fig. 1.
figure 1

Technical Vision System.

It uses a method called dynamic triangulation [17]. The system calculates Cartesian coordinates from two detected angles B_ij and C_ij (here i and j are the horizontal and vertical scanning steps) of the instantaneously highlighted laser point and the fixed distance a between the projector and the receptor. In the obtained triangle (Fig. 2), angle B_ij is calculated as a simple ratio of two counter codes: the number of clock pulses between two home pulses and the number in the interval "home pulse – spot pulse" (1).

$${{B}_{{ij}}} = \frac{{2\pi {{N}_{A}}}}{{{{N}_{{2\pi }}}}},$$
((1))

where N_A is the number of clock reference pulses counted until the laser ray is detected by the stop sensor, and N_{2π} is the number of clock reference pulses counted while the 45° mirror completes a 360° turn, as detected by the zero sensor.

Fig. 2.
figure 2

Dynamic triangulation Principle of laser scanning TVS.

To calculate the x, y and z coordinates, the following equations (2)–(5) are used:

$${{x}_{{ij}}} = a\frac{{\sin {{B}_{{ij}}} \cdot \sin {{C}_{{ij}}} \cdot \cos \sum\limits_{j = 1}^j {{{\beta }_{j}}} }}{{\sin [180^\circ - ({{B}_{{ij}}} + {{C}_{{ij}}})]}},$$
((2))
$${{y}_{{ij}}}\, = \,a\left( {\frac{1}{2} - \frac{{\sin {{B}_{{ij}}} \cdot \sin {{C}_{{ij}}}}}{{{\text{sin}}[180^\circ \, - \,({{B}_{{ij}}}\, + \,{{C}_{{ij}}})]}}} \right)\quad {\text{at}}\quad {{B}_{{ij}}} \leqslant 90^\circ ,$$
((3))
$$\begin{gathered} {{y}_{{ij}}} = - a\left( {\frac{1}{2} + \frac{{\sin {{B}_{{ij}}} \cdot \sin {{C}_{{ij}}}}}{{\sin [180^\circ - ({{B}_{{ij}}} + {{C}_{{ij}}})]}}} \right) \\ {\text{at}}\quad {{B}_{{ij}}} \geqslant 90^\circ , \\ \end{gathered} $$
((4))
$${{z}_{{ij}}} = a\frac{{\sin {{B}_{{ij}}} \cdot \sin {{C}_{{ij}}} \cdot \sin \sum\limits_{j = 1}^j {{{\beta }_{j}}} }}{{\sin [180^\circ - ({{B}_{{ij}}} + {{C}_{{ij}}})]}},$$
((5))
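The angle and coordinate computations of Eqs. (1), (3) and (4) can be sketched as follows. This is a minimal illustration; the function and variable names are ours, not from the original TVS implementation:

```python
import math

def angle_b(n_a, n_2pi):
    """Eq. (1): B_ij as the ratio of two counter codes,
    scaled to the full 2*pi mirror turn."""
    return 2.0 * math.pi * n_a / n_2pi

def y_coord(a, b_ij, c_ij):
    """Eqs. (3)-(4): y_ij in meters; the sign branch depends on
    whether B_ij is below or above 90 degrees (angles in radians)."""
    common = (math.sin(b_ij) * math.sin(c_ij)
              / math.sin(math.pi - (b_ij + c_ij)))
    if b_ij <= math.pi / 2.0:
        return a * (0.5 - common)
    return -a * (0.5 + common)
```

For example, N_A = 250 reference pulses out of N_{2π} = 1000 per mirror turn gives B_ij = 90°.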

The detected obstacle, and more generally the robot's surroundings, is represented as a mesh of points (Cartesian coordinates) on the surface (Fig. 3).

Fig. 3.
figure 3

Examples of scanned obstacles.

The TVS is able to give highly detailed information about a detected obstacle. In recognition or classification tasks this is a benefit. For motion planning and collision avoidance, however, the information about an obstacle need only contain general data (width, depth, height and the large convexities of its surface that can obstruct movement). In other words, the obstacle should be represented to the robot as a simple low-resolution 3D figure. This is done by calculating the equivalent opening angles needed at different striking distances in order to maintain the same resolution of the received TVS image.

Reference [18] recommends three angles: 14.5636°, 5.5455° and 1.9091°. They represent three resolution levels expressed as linguistic variables: "Low", "Medium" and "High". The field of view of the TVS can be represented as an arc (Fig. 4). The difference between horizontal steps will be called the "opening angle", and the radius of the arc is the "striking distance" (the distance to the detected object).

Fig. 4.
figure 4

Field of view scanning arc.

Under these assumptions, the opening angles needed to reach a specific resolution (number of points on the arc) are calculated for different striking distances (Table 1).

Table 1. Comparison of scanning angles

The range of striking distances is separated into three parts: "Effective", where the robot has no need to drop its speed to avoid a collision; "Optimal", where the reaction depends on the maneuver to be performed; and "Critical", where the robot needs to slow down or even stop.

As can be seen, the average angle for the "Critical" range gives, at the far end of that range, a resolution too low for critical obstacle detection. In this case the resolution must be increased, so the limit value of the angle for the "Critical" range is taken instead. The set of angles then becomes: 5.209°, 2.474°, 1.247°.
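The relation behind Table 1 can be illustrated with a small calculation. Treating the field of view as an arc, the spacing between neighboring scan points at a given striking distance is the arc length r·θ, so the opening angle needed for a given spacing, and hence the number of points across the arc, follows directly. This is our reading of the text, sketched under that assumption:

```python
import math

def opening_angle_deg(point_spacing, striking_distance):
    """Opening angle (degrees) whose arc step r * theta at the
    given striking distance equals the desired point spacing."""
    return math.degrees(point_spacing / striking_distance)

def points_on_arc(fov_deg, opening_deg):
    """Resolution: number of scan points across the field of view."""
    return int(fov_deg / opening_deg) + 1
```

With the 160° field of view, the "Critical" limit angle 5.209° yields 31 points on the arc, while the "High" angle 1.247° yields 129.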

Using the obtained data, a set of rules for robot speed control and resolution stabilization during movement can be formed (Table 2).

Table 2. Rules for resolution and speed control

Implementing a variable frequency of data storing (changing the opening angles) maintains a conventionally equal level of obstacle detalization in the memory of a single robot across different striking distances. Such discretization of the surroundings produces reduced datasets: path planning becomes faster (robots have to analyze less data), and data exchange within the group, aimed at updating the common knowledge base, can be performed more often than with raw TVS data. At this stage, therefore, the need arises to form a data exchange network for the group.

3 DYNAMIC NETWORKS

This article considers two models of data transfer: information exchange with centralized management (Fig. 5a) and centralized hierarchical control (Fig. 5b). The figure presents the solution for the general case and for the reviewed model with a group of three robots.

Fig. 5.
figure 5

Models of data transferring.

3.1 Network Forming for Robotic Swarm

For the general case of a swarm, a method of network forming based on creating a spanning tree can be proposed. The algorithm consists of seven steps and uses classical approaches. The steps of dynamic network forming for a robotic swarm are:

1. Build a fully-connected network graph (Fig. 6a);

Fig. 6.
figure 6

Hierarchical network forming.

2. Use Kruskal's algorithm to build the minimum spanning tree (Fig. 6b);

3. In the obtained tree, use the Floyd-Warshall algorithm to obtain the list of all possible routes in the network;

4. Calculate the average route length for each node;

5. Select the node with the lowest average length and configure it as the high-level node;

6. Configure nodes with a "one side" (single) connection as low-level nodes;

7. Configure the other nodes as mid-level nodes (Fig. 6c).
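The seven steps above can be sketched compactly. This is a simplified, self-contained illustration with names of our choosing; ties and disconnected graphs are not handled:

```python
def kruskal_mst(n, edges):
    """Steps 1-2: minimum spanning tree of a weighted graph.
    edges: list of (weight, u, v) over nodes 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

def assign_roles(n, edges):
    """Steps 3-7: route lengths in the tree, then role assignment."""
    mst = kruskal_mst(n, edges)
    INF = float('inf')
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    deg = [0] * n
    for w, u, v in mst:
        d[u][v] = d[v][u] = w
        deg[u] += 1
        deg[v] += 1
    for k in range(n):                      # step 3: Floyd-Warshall
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    avg = [sum(d[i]) / (n - 1) for i in range(n)]   # step 4
    head = min(range(n), key=lambda i: avg[i])      # step 5
    roles = {}
    for i in range(n):
        if i == head:
            roles[i] = 'high'
        elif deg[i] == 1:                   # step 6: leaf node
            roles[i] = 'low'
        else:                               # step 7
            roles[i] = 'mid'
    return roles
```

On a three-node chain, the middle node has the lowest average route length and becomes the high-level node, while both leaves become low-level nodes.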

3.2 Leader based Method

Given the complexity of the Kruskal and Floyd-Warshall algorithms, in a group with a small number of robots it is more rational to use a role distribution methodology.

One of the models that describes the organization of locally interacting robots is the static swarm [5]. It is characterized by the absence of a given control center and is a kind of fixed network – a set of agents. The basic properties of a static swarm are activity, local interactions and functional heterogeneity. We therefore review a method of role distribution based on the task of selecting a leader.

Under the term "leader" we understand the central node of data exchange (a robot that, for a short period of time, becomes the server that stores and merges data). To choose a leader, the robots use a voting process.

Each robot can be described as a set of parameters

$$R = (I,L,E,N).$$
((6))

where I is the identifier of the robot, L is the identifier of the voted leader, E is the evaluation of leader L (the vote values given for the leader), and N is the list of connections available to the robot (its neighbors).

The voting process at the initial step goes as follows: each robot evaluates its neighbors for the role of leader according to a set of previously defined characteristics; each characteristic has its own weight; using the membership functions, the robot selects the neighbor with the highest value.

For the vote value, a linguistic variable e = "evaluation of robot" is introduced. Its value is based on the scale M = {"very low", "low", "medium", "high", "very high"}, or the digital equivalent M = {1, 2, 3, 4, 5}. After the voting process, a set of alternatives for E is generated, so it takes the form:

$$E = \{ {{e}_{1}},{{e}_{2}},...,{{e}_{n}}\} ,$$
((7))

where e_i is the alternative ("candidate") at i = 1..n and n is the number of visible neighbors.

For robot evaluation, the following characteristics are offered:

1) surroundings: c_1 = "the number of neighbors available to the candidate";

2) territorial: c_2 = "the distances to the neighbors or the levels of their signals";

3) status: c_3 = "the physical state of the robot".

Fig. 7.
figure 7

Calculated networks.

Each of these characteristics is estimated by the voting robot for each of its neighbors:

$${{C}_{i}} = \{ {{c}_{{i1}}},{{c}_{{i2}}},...,{{c}_{{ik}}}\} ,$$
((8))

where c_ij is the value of the j-th characteristic of the i-th "candidate", j = 1..k.

Each of the characteristics has its weight:

$$W = \{ {{w}_{1}},{{w}_{2}},...,{{w}_{k}}\} ,$$
((9))

where w_j is the weight of the j-th characteristic.

The evaluation of the i-th neighbor uses the following formula:

$${{e}_{i}} = \sum\limits_{j = 1}^k {{{w}_{j}}{{c}_{{ij}}}} .$$
((10))
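Equations (8)–(10) amount to a weighted score per neighbor followed by selecting the best-scoring candidate; a minimal sketch (the names are ours):

```python
def evaluate_candidate(weights, characteristics):
    """Eq. (10): weighted sum of characteristic scores for one neighbor."""
    return sum(w * c for w, c in zip(weights, characteristics))

def vote(weights, candidates):
    """Pick the neighbor with the highest evaluation.
    candidates: {robot_id: [c_1, c_2, c_3]} scored on the 1..5 scale."""
    scores = {rid: evaluate_candidate(weights, cs)
              for rid, cs in candidates.items()}
    leader = max(scores, key=scores.get)
    return leader, scores[leader]
```

For example, with weights (0.5, 0.3, 0.2), a candidate scored (5, 3, 4) evaluates to 4.2 and beats one scored (2, 5, 5), which evaluates to 3.5.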

To determine the value of the linguistic variable, we use three types of membership functions (11)–(13); their general form is shown in Fig. 8.

$${{f}_{{vl}}}({{e}_{i}}) = \left\{ \begin{gathered} 1,{{e}_{i}} < vle \hfill \\ \frac{1}{2} + \frac{1}{2}\cos \left( {\frac{{{{e}_{i}} - vle}}{{vl - vle}}\pi } \right),vle \leqslant {{e}_{i}} \leqslant vl \hfill \\ 0,{{e}_{i}} > vl \hfill \\ \end{gathered} \right\},$$
((11))

where vle is the threshold below which the membership function equals "1" and vl is the threshold above which it equals "0";

$${{f}_{{vh}}}({{e}_{i}}) = \left\{ \begin{gathered} 1,{{e}_{i}} > vhs \hfill \\ \frac{1}{2} + \frac{1}{2}\cos \left( {\frac{{{{e}_{i}} - vhs}}{{vhs - vh}}\pi } \right),vh \leqslant {{e}_{i}} \leqslant vhs \hfill \\ 0,{{e}_{i}} < vh \hfill \\ \end{gathered} \right\},$$
((12))

where vhs is the threshold above which the membership function equals "1" and vh is the threshold below which it equals "0";

$${{f}_{{gb}}}({{e}_{i}}) = \frac{1}{{1 + {{{\left| {\frac{{{{e}_{i}} - c}}{a}} \right|}}^{{2b}}}}},$$
((13))

where c is the center of the membership function; a is the value at which \({{f}_{{gb}}}(c + a) = 1/2\) and \({{f}_{{gb}}}(c - a) = 1/2\); b regulates the smoothness (slope) of the function.
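The membership functions (11) and (13) can be sketched as follows. Note one assumption: by analogy with Eq. (12), our f_vl includes the factor π in the cosine argument so that the function falls smoothly from 1 to 0 over the transition interval, and the generalized bell satisfies f_gb(c ± a) = 1/2:

```python
import math

def f_vl(e, vle, vl):
    """'Very low' membership, Eq. (11): 1 below vle,
    cosine falloff on [vle, vl], 0 above vl."""
    if e < vle:
        return 1.0
    if e > vl:
        return 0.0
    return 0.5 + 0.5 * math.cos((e - vle) / (vl - vle) * math.pi)

def f_gb(e, a, b, c):
    """Generalized bell membership, Eq. (13)."""
    return 1.0 / (1.0 + abs((e - c) / a) ** (2 * b))
```

At the midpoint of the transition interval, f_vl equals 1/2, and the bell function reaches 1/2 exactly at e = c ± a.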

Fig. 8.
figure 8

Membership functions.

To obtain the results, we review the implementation of the leader-changing method on a centralized hierarchical control network with five nodes (robots). Initially the robotic group works under the centralized control model, with a central node for data merging. Fig. 9 shows the total number of requests sent for processing and the number of requests actually processed (summed over all available robots). The difference between the series is explained by data loss and duplicated requests.

Fig. 9.
figure 9

Total amount of sent and processed requests.

Data loss in such tasks places an additional load on the path planning modules (unknown parts of the terrain cause additional route recalculations), which also increases the time needed to solve the main tasks (terrain mapping, object search, etc.). Accordingly, structural changes should be introduced into the data transfer network.

Implementation of the leader-changing system gives robots that have lost connection to the central node the possibility of sending their data through neighbors, thereby compensating for the losses (Fig. 10).

Fig. 10.
figure 10

Total amount of sent, received and processed requests with signal loss using centralized hierarchical control.

Figure 11 compares the processed requests before and after using the leader changing system.

Fig. 11.
figure 11

Comparison of the number of processed requests before and after leader changing system implementation.

The presented results show that with our approach the common data (knowledge) becomes available to each individual robot in the group more frequently. To see the positive influence of such data exchange and of forming a common knowledge base, a robotic group must be tested in a cluttered environment.

4 MODELING SYSTEM

4.1 Software Structure

To prove the theoretical basis of the presented questions, a framework for simulation and robotic group collaboration was used. The framework has been developed in Unity 5 [19], a multiplatform engine provided with various features and tools.

The framework is written in C# in the MonoDevelop integrated development environment (IDE) for Windows 10.

The developed system (Fig. 12) has three operating modes: "Without common knowledge", "Pre-known territory" and "With common knowledge". The first two use only part of the decision-making system for path planning and obstacle avoidance. The third implements the full decision-making stack. At the end, the system returns data about the environment and the state of each robot at each moment of time.

Fig. 12.
figure 12

System structure.

Fig. 13.
figure 13

Robot entity structure.

4.2 Motion Planning

Most of the research reviews motion planning within the frame of path building, such as [5], where the authors presented an approach using motion primitive libraries. Other examples are [6], representing an attempt to transfer animal motion to robot behavior, and [20], which suggested an algorithm for collision-free robot trajectories. These articles cover certain subtasks and widely describe special cases of behavior in a group of robots. However, none of them considers the task from the point of view of an interconnected global approach, which must include the correlation of the robots' technical vision systems with the communication and navigation rules in a defined group of n agents.

In mathematics, there are well-developed optimal and heuristic algorithms (Fig. 14) for finding a way in an unknown or partially known environment. For this purpose, discrete mathematics (graph theory) and linear programming are usually used. Shortest-path search tasks in graphs are well known and studied (for example, Dijkstra's, Floyd-Warshall's, Prim's and Kruskal's algorithms, etc. [21], [22]).

Fig. 14.
figure 14

Dynamic Path Planning Algorithm.

Given such a matrix-based structure, the A* algorithm [23] with two-step post-processing is used for dead reckoning.

Based on this logic, the agent moves in the direction of the target as long as no obstacle is detected. Once the TVS detects obstacles in the real scene, the robot initializes plain A* path planning. The points that force a change in the movement vector are separated from the full path, forming the graph of movement; such points will be called break points. To improve the smoothness of the trajectory, approximation is applied as the second step of the post-processing, using the Bezier method with four-point polygons ([24], [25], [26]). Thereafter, the agent starts to move to the break point with the minimal heuristic evaluation. During its movement the robot calculates new break points after each new TVS warning (obstacle detection) and moves to them until the heuristic evaluation of the distance to the goal stops decreasing (reaches a local minimum).
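The two post-processing steps, extracting break points from the A* cell path and smoothing with four-point Bezier polygons, can be sketched as follows (a simplified illustration on grid coordinates; names are ours):

```python
def break_points(path):
    """Step 1: keep only the cells where the movement vector
    changes direction, plus the endpoints."""
    if len(path) < 3:
        return list(path)
    pts = [path[0]]
    for prev, cur, nxt in zip(path, path[1:], path[2:]):
        d1 = (cur[0] - prev[0], cur[1] - prev[1])
        d2 = (nxt[0] - cur[0], nxt[1] - cur[1])
        if d1 != d2:                  # direction changed at cur
            pts.append(cur)
    pts.append(path[-1])
    return pts

def bezier4(p0, p1, p2, p3, t):
    """Step 2: point at parameter t on a cubic Bezier curve
    defined by a four-point control polygon."""
    u = 1.0 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))
```

For the cell path (0,0) → (1,0) → (2,0) → (2,1) → (2,2), only the corner (2,0) and the endpoints survive as break points.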

The above-mentioned TVS represents the surroundings as a cloud of points. Therefore, for path planning the terrain can be separated into small areas (cells), decreasing the number of nodes and speeding up dead reckoning (Fig. 15).
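The cell decomposition can be illustrated by quantizing the TVS point coordinates into grid indices. The cell size is a free parameter here, and the 2D projection is our simplification:

```python
def occupancy_cells(points, cell_size):
    """Map TVS points (x, y, z) to the set of occupied 2D grid cells
    by integer division of the horizontal coordinates."""
    return {(int(x // cell_size), int(y // cell_size))
            for x, y, *_ in points}
```

Two nearby points fall into the same cell, so the planner sees one node instead of many raw points.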

Fig. 15.
figure 15

Environment simplification.

5 EXPERIMENTAL RESULTS

The experimental results include the modeling of three scenes (Fig. 16). Each of them considers a group of three robots (n = 3) able to communicate with each other (share obtained knowledge) and using the laser-based technical vision system for environment interaction.

Fig. 16.
figure 16

Scenes examples.

The experiments follow this scenario: the group of three robots is placed in unknown terrain; each robot has the same common target to reach and one individual secondary point. The robots first move to their individual targets and, after reaching them, continue to the common target; the scenario stops when all robots reach the common target.

The obtained results are shown in Figs. 17–25. The series in the figures are labeled like W1GR1 or W1SR1, where W{scene number} is the scene number, G stands for group movement (with common knowledge), S for single movement (no common knowledge), and R{robot identifier} is the robot number. A summary of the modeling is presented in Table 3.

Fig. 17.
figure 17

Trajectories length for scene# 1, robot #1.

Fig. 18.
figure 18

Trajectories length for scene# 1, robot #2.

Fig. 19.
figure 19

Trajectories length for scene# 1, robot #3.

Fig. 20.
figure 20

Trajectories length for scene# 2, robot #1.

Fig. 21.
figure 21

Trajectories length for scene# 2, robot #2.

Fig. 22.
figure 22

Trajectories length for scene# 2, robot# 3.

Fig. 23.
figure 23

Trajectories length for scene# 3, robot #1.

Fig. 24.
figure 24

Trajectories length for scene# 3, robot #2.

Fig. 25.
figure 25

Trajectories length for scene# 3, robot #3.

Table 3. Motion planning comparing results

The table shows the averaged distances for each individual robot with common knowledge compared to single-robot movement. The results show that path planning with a common knowledge base can shorten the robots' routes by up to 21.3%.

6 CONCLUSIONS

This article offers an original solution that improves group routing in unknown terrain and the collaboration between robots in a group. Dynamic data-transfer network forming models are proposed for the robotic swarm (a spanning-tree-based model) and for the group (a leader-changing system). Communication within the group using the voting process (leader changing) shows improvements in the data exchange process.

According to the modeling results, implementation of the leader-changing system based on the voting process and the common knowledge base improves path planning performance. The implemented approach creates continuous trajectories during movement. The implemented method of resolution stabilization gives an additional option of adjusting detalization to the specifics of the environment or the current task. The use of fuzzy logic in various subtasks makes the behavior of the robots more flexible and allows stable functioning of the group at lower energy costs. According to the obtained results, the individual trajectory lengths improve by up to 21.3%.