1 Introduction

In the classical model of distributed computing by mobile robots, each robot is modeled as a point in the plane [12]. The robots are autonomous (no external control), anonymous (no unique identifiers), indistinguishable (no external identifiers), and disoriented (no agreement on local coordinate systems and units of distance measures). They execute the same algorithm. Each robot proceeds in Look-Compute-Move (LCM) cycles: When a robot becomes active in a cycle, it first gets a snapshot of its surroundings (Look), then computes a destination based on the snapshot (Compute), and finally moves to the destination (Move). Moreover, the robots are oblivious, i.e., in each LCM cycle, each robot has no memory of its past LCM cycles [12]. Furthermore, the robots are silent because they do not communicate directly, and only vision and mobility enable them to coordinate their actions.

While silence has advantages, in many situations, e.g., hostile environments, direct communication is desirable. One model that incorporates direct communication is the robots with lights model [7, 12, 14], where each robot has an externally visible light that can assume colors from a constant-sized set; robots explicitly communicate with each other using these colors. The colors are persistent, i.e., a color is not erased at the end of a cycle. Except for their lights, the robots are oblivious, as in the classical model.

In this paper, we study the following fundamental Complete Visibility problem in the robots with lights model: Given any initial configuration of N autonomous mobile robots located at distinct points on a plane, the robots must reach a configuration in which each robot occupies a distinct position from which it can see all other robots. Initially, some robots may be obstructed from the view of others (i.e., obstructed visibility), and the total number of robots, N, is not known to the robots. Solving this problem is important because it makes it possible to solve many other robot problems under obstructed visibility, including gathering, shape formation, and leader election. Di Luna et al. [11] gave the first algorithm for robots with lights to solve this problem; it arranges the robots on the corners of a convex polygon, which naturally guarantees complete visibility. Our algorithm also forms a convex hull. Di Luna et al. [11] proved the correctness of their algorithm but gave no runtime bound (except a proof of termination in finite time). Subsequent papers [10, 15] gave solutions minimizing the number of colors.

The runtime of Complete Visibility has been studied only relatively recently [16, 18]. Vaidyanathan et al. [18] presented an \(\mathcal{O}(\log N)\)-time algorithm using \(\mathcal{O}(1)\) colors in the fully synchronous setting. Sharma et al. [16] presented an asymptotically optimal \(\mathcal{O}(1)\)-time algorithm using \(\mathcal{O}(1)\) colors in the semi-synchronous setting.

The intriguing open question was whether an \(\mathcal{O}(1)\)-time, \(\mathcal{O}(1)\)-color algorithm can be designed for this problem in the asynchronous setting. Algorithms in the asynchronous setting are interesting and at the same time difficult to design and analyze, since this setting is the weakest among the robot synchronization models, namely fully synchronous, semi-synchronous, and asynchronous. There is a simple \(\mathcal{O}(N)\)-time algorithm that simulates the \(\mathcal{O}(1)\)-time, \(\mathcal{O}(1)\)-color semi-synchronous Complete Visibility algorithm of Sharma et al. [16] in the asynchronous setting using the simulation technique of Das et al. [7]. Recently, Sharma et al. [17] presented an \(\mathcal{O}(\log N)\)-time algorithm for this problem using \(\mathcal{O}(1)\) colors in the asynchronous setting. However, there is still a gap of an \(\mathcal{O}(\log N)\) factor on time compared to the trivial \(\varOmega (1)\) lower bound. This work closes this \(\mathcal{O}(\log N)\) gap.

Contributions. We consider the robot model of Di Luna et al. [11], namely, robots are oblivious except for a persistent light that can assume a constant number of colors. Visibility can be obstructed by other robots in the line of sight, N is not known, and the robots may be disoriented. Moreover, we assume that the robot setting is asynchronous: there is no notion of common time, and robots perform their LCM cycles at arbitrary times. The moves of the robots are rigid: a robot in motion cannot be stopped (by an adversary) before it reaches its destination point. As in [11], we assume that two robots cannot head to the same destination point and that their paths cannot cross; either event would constitute a collision. In this paper, we prove the following theorem (a comparison appears in Table 1), which, to our knowledge, gives the first asymptotically time-optimal Complete Visibility algorithm for robots with lights in the asynchronous setting.

Theorem 1

For any initial configuration of \(N\ge 1\) robots with lights in distinct positions on a plane, there is an algorithm that solves Complete Visibility in \(\mathcal{O}(1)\) time with \(\mathcal{O}(1)\) colors and without collisions in the asynchronous setting.

Our algorithm is deterministic and has seven stages, Stages 0–6, that execute one after another. Stage 0 breaks up any initial collinear arrangement of robots and places all robots within or on a convex polygon P (the convex hull of the points) in \(\mathcal{O}(1)\) time. After that, Stages 1–5 place all robots on the corners or sides of a convex polygon \(P'\). Finally, Stage 6 moves each robot on a side of \(P'\) to a corner of a new convex polygon \(P''\). Keys to Stages 1–5 are corner moving, internal moving, and beacon-directed curve positioning procedures that let all interior robots of P move to the sides of \(P'\), with each stage executing in \(\mathcal{O}(1)\) time. Key to Stage 6 is a corner insertion procedure that moves the side robots of \(P'\) to corners of \(P''\) in \(\mathcal{O}(1)\) time while retaining convexity.

Table 1. Comparison of results for Complete Visibility on the lights model

These seven stages are similar in structure to the \(\mathcal{O}(\log N)\)-time asynchronous algorithm of Sharma et al. [17]. Stages 0, 1, 4, and 6 of Sharma et al. [17] run in \(\mathcal{O}(1)\) time, while Stages 2, 3, and 5 run in \(\mathcal{O}(\log N)\) time. Therefore, the \(\mathcal{O}(\log N)\) time of the algorithm of Sharma et al. [17] is due to the runtime of Stages 2, 3, and 5. In this paper, we develop two fundamental building blocks to run Stages 2, 3, 5, and 6 in \(\mathcal{O}(1)\) time, giving an overall \(\mathcal{O}(1)\)-time algorithm and closing the \(\mathcal{O}(\log N)\) gap left in Sharma et al. [17]. The building blocks that we develop here are non-trivial. We list below the specific techniques we develop and how they help to remove the \(\mathcal{O}(\log N)\) factor.

  • We develop a framework called Beacon-Directed Curve Positioning (Sect. 3) in which a set R of robots moves onto a (k-point) curve under the following conditions. On the curve, 2k robots (called beacons) are properly placed. Paths from robots in R to the curve must avoid collisions and cannot intersect the curve at more than one point (Definition 2). Whenever no robot is in transit to the curve, each robot of R still at its original position must see the 2k original beacons as well as all robots that have already moved to the curve and become new beacons. We show that this framework runs in \(\mathcal{O}(\log k)\) time using 3 colors, irrespective of the size of R.

  • We develop techniques that fulfill the conditions of the Beacon-Directed Curve Positioning framework so that it can be applied to run Stages 2, 3, 5, and 6 in \(\mathcal{O}(1)\) time. Fulfilling the conditions of the framework turns out to be particularly challenging for Stages 2, 3, and 5 (which we discuss in Sect. 4).

Previous Work. The problem of uniformly spreading robots on a line [3] considers obstructed visibility, but for classical robots [12]. Obstructed visibility is also captured in the so-called fat robots model, where robots are non-transparent unit discs [1, 6]; however, no runtime analysis is provided. Similarly, much work on the classical robot model [2, 19] for Gathering has no runtime analysis, except in a few cases [5, 8]. Furthermore, Izumi et al. [13] considered the robot scattering problem (the opposite of Gathering) in the semi-synchronous setting and provided a solution with an expected runtime of \(\mathcal{O}(\min \{N,D^2 + \log N\})\), where D is the diameter of the initial configuration. The computability and power of robots with lights compared to the classical oblivious robots (with no lights) are studied in [7, 9].

Roadmap. In Sect. 2 we describe the robot model and some preliminaries. In Sect. 3 we present the beacon-directed framework for positioning a set of robots on a curve. We then devote Sect. 4 to the proof of Theorem 1, which uses the framework of Sect. 3. Many proofs and details are omitted due to space constraints.

2 Model and Preliminaries

We consider a distributed setting of N robots \(\mathcal{Q}=\{r_0,r_1,\cdots ,r_{N-1}\}\). Each robot is a (dimensionless) point that can move in an infinite 2-dimensional real space \(\mathbb {R}^2\). We use a point to refer to a robot as well as its position. A robot \(r_i\) can see, and be visible to, another robot \(r_j\) if and only if (iff) there is no third robot \(r_k\) in the line segment joining \(r_i,r_j\). Each robot has a light that can assume a color at a time from a constant number of different colors.

Look-Compute-Move. At any time a robot \(r_i\in \mathcal{Q}\) can be active (participating in an LCM cycle) or inactive. When a robot \(r_i\) becomes active, it performs the “Look-Compute-Move” cycle as follows. (i) Look: For each robot \(r_j\) that is visible to it, \(r_i\) can observe the position of \(r_j\) on the plane and the color of the light of \(r_j\). Robot \(r_i\) also knows its own color and position. Each robot observes positions in its own frame of reference, i.e., two different robots observing the position of the same point may produce different coordinates; however, a robot observes the positions of points accurately within its own reference frame. (ii) Compute: In any LCM cycle, \(r_i\) may perform an arbitrary computation using only the colors and positions observed during the “Look” portion of that cycle. This includes the determination of a (possibly) new position and color for \(r_i\) for the start of the next LCM cycle. Robot \(r_i\) maintains this new color from that LCM cycle to the next. (iii) Move: At the end of the LCM cycle, \(r_i\) changes its light to the new color and moves to its new position.
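To make the cycle structure concrete, here is a minimal sketch (our own, not part of the paper) of one LCM cycle in Python; the Snapshot and Robot types and the placeholder compute rule are illustrative assumptions, not the paper's definitions.

```python
# A minimal sketch of one Look-Compute-Move cycle for a robot with a light.
# The scheduler that activates robots is assumed to be external (the ASYNC adversary).
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Snapshot:
    positions: List[Point]   # positions of visible robots (in the robot's own frame)
    colors: List[str]        # their light colors
    my_position: Point
    my_color: str

@dataclass
class Robot:
    position: Point
    color: str               # persistent light color

    def look(self, visible: List["Robot"]) -> Snapshot:
        # Instantaneous snapshot of visible robots' positions and colors.
        return Snapshot([r.position for r in visible],
                        [r.color for r in visible],
                        self.position, self.color)

    def compute(self, snap: Snapshot) -> Tuple[Point, str]:
        # Algorithm-specific: returns a destination and a new light color,
        # using only the snapshot (the robot is otherwise oblivious).
        return snap.my_position, snap.my_color  # placeholder: stay put

    def move(self, destination: Point, new_color: str) -> None:
        # Rigid, monotonic move to the destination; the color persists afterwards.
        self.color = new_color
        self.position = destination
```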

In the fully synchronous setting (\(\mathcal{FSYNC}\)), every robot is active in every LCM cycle. In the semi-synchronous setting (\(\mathcal{SSYNC}\)), at least one robot is active, and over an infinite number of LCM cycles, every robot is active infinitely often. In the asynchronous setting (\(\mathcal{ASYNC}\)), there is no common notion of time and no assumption is made on the number and frequency of LCM cycles in which a robot can be active. The only guarantee is that every robot is active infinitely often. Complying with the \(\mathcal{ASYNC}\) setting, we assume that a robot performs its Look phase at an instant of time. We also assume that a robot moves at some (not necessarily constant) speed without stopping or changing direction until it reaches its destination (monotonic movements).

Runtime. For the \(\mathcal{FSYNC}\) setting, time is measured in rounds. Since a robot in the \(\mathcal{SSYNC}\) and \(\mathcal{ASYNC}\) settings could stay inactive for an indeterminate number of rounds, we bound a robot’s inactivity and introduce the idea of an epoch to measure runtime. An epoch is the smallest number of rounds within which each robot is guaranteed to be active at least once [4].

Configuration and Local Polygon. A configuration \({\mathbf {C}}_t=\{(r^t_0,col^t_0),\ldots ,(r^t_{N-1},col^t_{N-1})\}\) defines the positions of the robots in \(\mathcal{Q}\) and their colors for any time \(t\ge 0\). A configuration for a robot \(r_i\in \mathcal{Q}\), \({\mathbf {C}}_t(r_i)\), defines the positions of the robots in \(\mathcal{Q}\) that are visible to \(r_i\) (including \(r_i\)) and their colors, i.e., \({\mathbf {C}}_t(r_i)\subseteq {\mathbf {C}}_t\), at time t. The convex polygon \({\mathbf {P}}_t(r_i)\) encompassing all points of \({\mathbf {C}}_t(r_i)\) is local to \(r_i\) since \({\mathbf {P}}_t(r_i)\) depends only on the points that are visible to \(r_i\) at time t. We sometimes write \({\mathbf {C}},{\mathbf {P}}, {\mathbf {C}}(r_i),{\mathbf {P}}(r_i)\) to denote \({\mathbf {C}}_t,{\mathbf {P}}_t,{\mathbf {C}}_t(r_i),{\mathbf {P}}_t(r_i)\), respectively.
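As an illustration (our own sketch, not part of the paper), a robot can compute the corners of its local polygon \({\mathbf {P}}(r_i)\) from the visible positions in \({\mathbf {C}}(r_i)\) with a standard convex hull routine such as Andrew's monotone chain:

```python
# Convex hull of the visible points, returned as corners in counter-clockwise order.
from typing import List, Tuple

Point = Tuple[float, float]

def cross(o: Point, a: Point, b: Point) -> float:
    # z-component of (a - o) x (b - o); positive means a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points: List[Point]) -> List[Point]:
    """Corners of the convex hull in CCW order (collinear points dropped)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# -> [(0, 0), (2, 0), (2, 2), (0, 2)]
```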


Corner Triangle, Corner and Triangle Line Segments, and Corner Polygon. Let \(c_i\) be a corner of a convex polygon \({\mathbf {P}}\), and let \(c_{i-1}\) and \(c_{i+1}\) be the neighbors of \(c_i\) on the boundary of \({\mathbf {P}}\). (For any pair of points a, b, we denote the line segment connecting them by \(\overline{ab}\) and its length by \(\mathsf{length}(\overline{ab})\). Moreover, we denote the infinite line passing through a, b by \(\overleftrightarrow {ab}\).) Let \(x_i,y_i\) be the points on sides \(\overline{c_ic_{i-1}}\) and \(\overline{c_ic_{i+1}}\) at distance \(\mathsf{length}(\overline{c_ic_{i-1}})/8\) and \(\mathsf{length}(\overline{c_ic_{i+1}})/8\), respectively, from \(c_i\). We pick the factor 1/8 for convenience; in fact, any factor \(\le 1/2\) works for our algorithm. We say that triangle \(c_ix_iy_i\) is the corner triangle for \(c_i\), denoted \(T(c_i)\), and that line segment \(\overline{x_iy_i}\) is the triangle line segment for \(c_i\), denoted \(TL_i\). Let r be any robot inside \(T(c_i)\) and let \(L_i\) be the line segment parallel to \(\overline{x_iy_i}\) passing through r; let \(T'(r)\) be the part of \(T(c_i)\) cut off by \(L_i\) towards r, so that \(T'(r)\subset T(c_i)\). We say that \(L_i\) is the corner line segment for \(c_i\), denoted \(CL_i\), if there is no robot inside \(T'(r)\). Let \(L_{i-1},L_{i+1}\) be the lines perpendicular to \(\overline{c_ic_{i-1}}\) and \(\overline{c_ic_{i+1}}\), respectively, passing through their midpoints. The part of the interior of \({\mathbf {P}}\) cut off by \(L_{i-1},L_{i+1}\) towards \(c_i\) is the corner polygon of \(c_i\), denoted \(CP(c_i)\). The figure on the right shows \(T(c_3)\), \(TL_3\), \(CL_3\), \(CP(c_3)\), and a 5-corner convex polygon \({\mathbf {P}}\) with corners \(c_0\)–\(c_4\).
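As a small illustration (ours, with hypothetical helper names), the points \(x_i,y_i\) defining \(T(c_i)\) and \(TL_i\) can be computed from \(c_{i-1},c_i,c_{i+1}\) as follows:

```python
# Corner-triangle construction: x_i and y_i lie 1/8 of the way along the two
# sides incident to c_i; T(c_i) is the triangle c_i x_i y_i and TL_i is segment x_i y_i.
from typing import Tuple

Point = Tuple[float, float]

def point_along(a: Point, b: Point, frac: float) -> Point:
    """Point at fraction `frac` of the way from a to b."""
    return (a[0] + frac * (b[0] - a[0]), a[1] + frac * (b[1] - a[1]))

def corner_triangle(c_prev: Point, c_i: Point, c_next: Point,
                    frac: float = 1.0 / 8.0) -> Tuple[Point, Point, Point]:
    """Return the vertices (c_i, x_i, y_i) of T(c_i)."""
    x_i = point_along(c_i, c_prev, frac)   # on side c_i c_{i-1}
    y_i = point_along(c_i, c_next, frac)   # on side c_i c_{i+1}
    return c_i, x_i, y_i

# Example: a corner at the origin with neighbors along the axes.
print(corner_triangle((8.0, 0.0), (0.0, 0.0), (0.0, 8.0)))
# -> ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
```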


Eligible Area and Eligible Line. Let \(c_i\) be a corner of \({\mathbf {P}}\) and let a, b be the neighbors of \(c_i\) on the perimeter of \({\mathbf {P}}\). The eligible area for \(c_i\), denoted \(EA(c_i)\), is a polygonal subregion of \({\mathbf {P}}\) inside the corner triangle \(T(c_i)\). The eligible areas of any two corners of \({\mathbf {P}}\) are disjoint [16]. \(EA(c_i)\) is computed based on \({\mathbf {C}}(c_i)\) and the corresponding polygon \({\mathbf {P}}(c_i)\). The figure on the right depicts the eligible area of \(c_i\), with the shaded area being \(EA(c_i)\). To ensure that all robots in the interior of \({\mathbf {P}}\) see \(c_i\) when it moves into \(EA(c_i)\), the points of \(EA(c_i)\) that lie on the lines \(\overleftrightarrow {c_ix}\) connecting \(c_i\) with the robots \(x\in {\mathbf {C}}(c_i)\backslash \{a,b,c_i\}\) are excluded from \(EA(c_i)\).

Lemma 1

[16]. \(EA(c_i)\) for each corner \(c_i\) of \({\mathbf {P}}\) is bounded by a non-empty convex polygon. Moreover, when \(c_i\) moves to a point inside \(EA(c_i)\), \(c_i\) remains a corner of \({\mathbf {P}}\) and all internal and side robots of \({\mathbf {P}}\) are visible to \(c_i\) (and vice-versa).

It is easy to see that edges \(\overline{c_ia}\) and \(\overline{c_ib}\) are always on the perimeter of \(EA(c_i)\). Let \(x_i,y_i\) be two points on \(\overline{c_ia}\) and \(\overline{c_ib}\), respectively, that are also on the perimeter of \(EA(c_i)\). Points \(x_i,y_i\) can be any points on \(\overline{c_ia}\) between \(c_i\) and e, and on \(\overline{c_ib}\) between \(c_i\) and f, respectively, where e, f are the corners of \(EA(c_i)\) neighboring \(c_i\). We say that line segment \(\overline{x_iy_i}\) is the eligible line for \(c_i\) and denote it by \(EL_i\) (the figure above illustrates these ideas).

Lemma 2

The eligible line \(EL_i\) for each corner \(c_i\) of \({\mathbf {P}}\) contains no point outside of \(EA(c_i)\), except for the points intersecting lines from internal robots to \(c_i\).


Convex Polygons \({\mathbf {P}}\), \({\mathbf {P}}'\), \({\mathbf {P}}''\), and \({\mathbf {P}}'''\). \({\mathbf {P}}\) is the convex polygon of the points in \(\mathcal{Q}\). \({\mathbf {P}}'\) is the convex polygon connecting the corners of \({\mathbf {P}}\) after they have moved to their eligible areas, \(EA(*)\). \({\mathbf {P}}''\) is the convex polygon formed by the corners of \({\mathbf {P}}'\) together with the side robots of \({\mathbf {P}}\) on those sides with at least two robots. \({\mathbf {P}}'''\) is the convex polygon obtained after all side points of \({\mathbf {P}}''\) become corners by moving to the exterior of the side of \({\mathbf {P}}''\) to which they belong. Observe that \({\mathbf {P}}\) contains \({\mathbf {P}}'\) and \({\mathbf {P}}''\), but not \({\mathbf {P}}'''\) (see the figure on the right).

Fig. 1. An illustration of Beacon-Directed Curve Positioning. The initial (resp., final) positions of \(n=4\) robots are shown as red (resp., white) circles. Each of the \(k=3\) left and right beacons is shown in blue. (Color figure online)

3 Beacon-Directed Curve Positioning

We describe here a framework (Beacon-Directed Curve Positioning) and use it to move a set of robots with lights in the \(\mathcal{ASYNC}\) setting, each from an initial position to a final position (subject to certain conditions). This movement is assisted by other robots, called beacons, that are already at their destinations. The framework is subsequently used to derive an \(\mathcal{O}(1)\)-time Complete Visibility algorithm (Sect. 4).

We will use the term “curve” to mean the locus of a point in the real plane \(\mathbb {R}^2\). We will use a reference coordinate system for describing the ideas in this section; this is only to simplify the description and does not require robots to have a common coordinate system. In the framework we describe, the final positions of the robots are on a curve. In some applications, the curve is a straight line represented by \(y=mx+c\); given two points on the line, the constants m, c can be determined, and this determines the straight line. In other cases the curve is a circular arc with equation \((y-a)^2+(x-b)^2=r^2\); here a set of three points on the circle suffices to determine the circle. The following definition of a “k-point curve” generalizes this idea.
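For concreteness, here is a sketch (ours, with hypothetical function names) of how the constants of these two curves can be recovered from 2 and 3 observed points, respectively:

```python
# Determining a k-point curve from k observed points: a line (k = 2) from two
# points, and a circle (k = 3) from three points via its circumcenter.
from typing import Tuple

Point = Tuple[float, float]

def line_through(p: Point, q: Point) -> Tuple[float, float]:
    """Constants (m, c) of the line y = m*x + c through p and q (non-vertical)."""
    m = (q[1] - p[1]) / (q[0] - p[0])
    return m, p[1] - m * p[0]

def circle_through(p: Point, q: Point, r: Point) -> Tuple[Point, float]:
    """Center (as (x, y)) and radius of the circle through three non-collinear points."""
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    radius = ((ax - ux)**2 + (ay - uy)**2) ** 0.5
    return (ux, uy), radius

print(line_through((0.0, 1.0), (2.0, 5.0)))                  # -> (2.0, 1.0)
print(circle_through((1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)))   # -> ((0.0, 0.0), 1.0)
```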

Definition 1

Let \(\mathbb {A}\subset \mathbb {R}\) be a finite interval in the real line. Let \(f:\mathbb {A}\longrightarrow \mathbb {R}\) be a (single-valued) function whose equation \(y=f(x)\) defines a curve on the plane. Call function f a k-point curve iff a set of k points \(\{(x_i,f(x_i)):0\le i<k\}\) suffices to determine the constants in the equation \(y=f(x)\).

A path \(p_i\) of robot \(r_i\) is a finite curve with one end at its initial point \((x_i,y_i)\); the path is the locus of \(r_i\) as it moves from its initial point to its next position in an algorithm; recall that a robot's movement along a path is monotonic. Typically, these paths are straight line segments; however, the ideas of this section do not require them to be. Consider a set of n robots with lights, \(\mathcal{{R}}=\{r_i:0\le i<n\}\), with robot \(r_i\) positioned at a distinct initial point \((x_i,y_i)\). Initially the robots of \(\mathcal{{R}}\) will be called “waiting” robots (they are waiting to move to their next positions). Let \(f:\mathbb {A}\longrightarrow \mathbb {R}\) be a k-point curve and let the path \(p_i\) of robot \(r_i\in \mathcal{{R}}\) intersect f at a distinct final point \((x'_i,y'_i)=(x'_i,f(x'_i))\). The objective of “curve positioning” is for each robot of \(\mathcal{{R}}\) to position itself at its final point. Other robots, called beacons, assist the robots of \(\mathcal{{R}}\) in getting to their final positions. Robots \(b_{\ell ,i}\), for \(0\le i<k\), whose x-coordinates are smaller than those of the final positions of the waiting robots are called left beacons. Similarly, right beacons \(b_{r,i}\), for \(0\le i<k\), are at points with x-coordinates greater than those of the final positions of the waiting robots. Figure 1 illustrates these ideas.

Definition 2

Let \(f:\mathbb {A}\longrightarrow \mathbb {R}\) be a k-point curve and let \(\mathcal{{R}}=\{r_i:0\le i<n\}\) be a set of robots with paths \(p_i\) from initial position \((x_i,y_i)\) to final position \((x'_i,f(x'_i))\). Let \(\mathcal{{B}}_\ell =\{b_{\ell ,i}:0\le i<k\}\) and \(\mathcal{{B}}_r=\{b_{r,i}:0\le i<k\}\) be the sets of left and right beacons placed on f to the left and right of the robot set \(\mathcal{{R}}\). Then the triplet \(\langle f,\mathcal{{R}},\mathcal{{B}}_\ell \cup \mathcal{{B}}_r\rangle \) is admissible iff the following conditions hold. (a) For distinct i, j, paths \(p_i\) and \(p_j\) do not intersect. (b) For distinct i, j, any line through the initial position of \(r_i\) intersects \(p_j\) at at most one point. (c) For any i, a line through the initial position of \(r_i\) intersects curve f (within its domain \(\mathbb {A}\)) at exactly one point. (d) All 2k beacons in \(\mathcal{{B}}_\ell \cup \mathcal{{B}}_r\) are visible to each robot \(r_i\in \mathcal{{R}}\) at its initial position.

Definition 3

The Beacon-Directed Curve Positioning Problem is defined as follows: Let \(f:\mathbb {A}\longrightarrow \mathbb {R}\) be a k-point curve, let \(\mathcal{{R}}=\{r_i:0\le i<n\}\) and let \(\mathcal{{B}}\) be a set of k left and k right beacons on f. Let \(\langle f,\mathcal{{R}},\mathcal{{B}}\rangle \) be admissible. Let the initial color of each robot \(r_i\in \mathcal{{R}}\) be wait. Let the beacons in \(\mathcal{{B}}\) be colored beacon. The objective is to move each robot \(r_i\in \mathcal{{R}}\) to its final position on f and then change its color to beacon.

In the remainder of this section, we will implicitly assume in all lemmas that admissibility is satisfied. We also recall that all robot movements are monotonic. The following simple algorithm with three condition-action pairs solves the Beacon-Directed Curve Positioning Problem.

  • Condition 1: Robot r is colored wait and it can see at least k robots with color beacon.

  • Action 1: Robot r determines the equation for the k-point curve, f, and moves monotonically on its path p to position itself on curve f. It changes its color to not-waiting.

  • Condition 2: Robot r is colored not-waiting.

  • Action 2: Robot r changes its color to beacon.

  • Condition 3: Robot r is colored beacon and it cannot see any other robot colored wait.

  • Action 3: Terminate.
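The following sketch (ours, not the paper's pseudocode) captures the three condition-action pairs above as one Compute step; the snapshot format (a list of (position, color) pairs), the robot attributes, and the helpers fit_curve and move_along_path are assumptions.

```python
# One Compute step of a robot executing Beacon-Directed Curve Positioning.
# `fit_curve` determines f from k points on it (e.g., the functions from the
# earlier sketch); `move_along_path` performs the rigid move to the intersection
# of the robot's own path with f.

def bdcp_step(robot, snapshot, k, fit_curve, move_along_path):
    beacons = [p for (p, color) in snapshot if color == "beacon"]
    waiting = [p for (p, color) in snapshot if color == "wait"]

    if robot.color == "wait" and len(beacons) >= k:
        # Condition/Action 1: enough beacons visible; determine the k-point
        # curve f and move along the robot's own path onto f.
        f = fit_curve(beacons[:k])
        move_along_path(robot, f)
        robot.color = "not-waiting"
    elif robot.color == "not-waiting":
        # Condition/Action 2: the robot is on f; it becomes a beacon.
        robot.color = "beacon"
    elif robot.color == "beacon" and not waiting:
        # Condition/Action 3: no waiting robot is visible; terminate locally.
        robot.done = True
```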

We now show that the algorithm terminates in \(\mathcal{O}(\log k)\) epochs for any execution. Any robot that is in motion along its path (between its initial and final positions) will be called a transient robot; clearly, a transient robot is a waiting robot at the start of its cycle and is on its way to becoming a beacon at the end of its next cycle. If an active waiting robot cannot see some beacon, then there must be a transient robot that blocks its view.

Lemma 3

Let \(b,b'\) be left and right beacons and let \(\mathcal{{S}}=\{r_i:0\le i<m\}\) be a set of m waiting robots. Let u be a monotonically moving transient robot that blocks the view of b from every waiting robot in \(\mathcal{{S}}\). Then, at least m transient robots are needed to block \(b'\) from the view of all waiting robots of \(\mathcal{{S}}\).

At the end of the first epoch, let there be m waiting robots; these have not been able to see at least k beacons due to blocking by transient robots. We now derive the smallest number of transient robots (as a function of m) that must have moved during this epoch. Let the set of m remaining waiting robots be \(\mathcal{{S}}=\{r_h:0\le h<m\}\). Each of these robots must have been unable to see at least one left beacon. Arbitrarily associate each waiting robot \(r_h\) with any one left beacon \(b_{\ell ,i}\) that \(r_h\) has not been able to see. Let \(\mathcal{{S}}_i=\{r_h:r_h~\text {has been associated with }b_{\ell ,i}\}\). Thus, set \(\mathcal{{S}}\) has been partitioned into disjoint (and possibly empty) sets \(\mathcal{{S}}_i\). Partition each non-empty \(\mathcal{{S}}_i\) into disjoint sets \(\mathcal{{S}}_{i,j}\) such that \(\bigcup _j\mathcal{{S}}_{i,j}=\mathcal{{S}}_i\) and, for every \(r_h\in \mathcal{{S}}_{i,j}\), beacon \(b_{\ell ,i}\) is blocked from \(r_h\) by the same transient robot \(u_{i,j}\). Because a waiting robot is unable to move, each element of \(\mathcal{{S}}\) (and hence each element of \(\mathcal{{S}}_{i,j}\)) must also be blocked from at least one of the right beacons (\(b_{r,g}\), for \(0\le g<k\)). By Lemma 3, we have the following lemma.

Lemma 4

At least z transient robots are needed to block the same right beacon from any z elements of \(\mathcal{{S}}_{i,j}\).

We can also prove the following two results.

Lemma 5

If the first epoch of the algorithm starts with n waiting robots then the second epoch starts with at most \(n\left( 1-\frac{1}{k^2}\right) \) waiting robots and at least \(\frac{n}{k^2}+2k\) beacons.

Lemma 6

If an epoch \(e\ge 1\) of the algorithm starts with \(m\ge 2k\) beacons, then epoch \(e+1\) starts with at least \(\min \{n+2k,\frac{3m}{2}\}\) beacons.

Observe that \((\frac{3}{2})^{\mathcal{O}(\log k)}>k^2\). Therefore from Lemmas 5 and 6, in \(\mathcal{O}(\log k)\) epochs, all initial waiting robots have been converted to beacons. This gives the following main result of this section. We use this result in Sect. 4 with \(k=2\) and \(k=3\). For these cases, the number of epochs needed is at most 5 and 7, respectively.

Theorem 2

The Beacon-Directed Curve Positioning Problem using a k-point function runs on the lights model in \(\mathcal{O}(\log k)\) epochs, using 3 colors in the \(\mathcal{ASYNC}\) setting.
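The constants quoted above for \(k=2\) and \(k=3\) can be recovered, under our own reading of Lemmas 5 and 6 (one epoch to convert at least an \(n/k^2\) fraction of the waiting robots into beacons, then growth of the converted fraction by a factor of \(\frac{3}{2}\) per epoch), with the back-of-the-envelope estimate \(1+\lceil \log_{3/2}k^2\rceil\); this estimate is ours, not a bound stated as such in the paper.

```python
import math

# Rough epoch estimate: one epoch for the n/k^2 fraction (Lemma 5), then
# growth by a factor of 3/2 per epoch (Lemma 6) until all n robots are covered.
def epoch_estimate(k: int) -> int:
    return 1 + math.ceil(math.log(k * k, 1.5))

print(epoch_estimate(2))  # 5, matching the bound quoted for k = 2
print(epoch_estimate(3))  # 7, matching the bound quoted for k = 3
```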

Fig. 2. The seven stages of the algorithm: Part (0) shows the collinear initial configuration \({\mathbf {C}}_0\), and Part (i), \(1\le i\le 7\), shows the configuration of robots in \(\mathcal{Q}\) at the end of the \((i-1)^\mathrm{th}\) stage or, equivalently, at the beginning of the \(i^\mathrm{th}\) stage.

4 \(\mathcal{O}(1)\)-Time \(\mathcal{ASYNC}\) COMPLETE VISIBILITY Algorithm

We now outline our algorithm. The algorithm consists of 7 stages, Stages 0–6. In each stage, the robots make progress toward a configuration in which all robots are corners of a convex hull (Fig. 2).

  • Stage 0: is for a collinear initial configuration \({\mathbf {C}}_0\) (Fig. 2.0). The endpoint robots move a small distance perpendicular to the line, which ensures that in the resulting configuration not all robots are collinear.

  • Stages 1–5: work toward moving all interior robots of \({\mathbf {P}}\) to the sides of \({\mathbf {P}}''\) (Fig. 2.1–2.6) as follows.

    • Stage 1: starts as soon as the robots in \({\mathbf {C}}_0\) reach a non-collinear configuration (Fig. 2.1). Stage 1 moves the corner robots of \({\mathbf {P}}\) (Fig. 2.1) to make them corners of \({\mathbf {P}}'\) (Fig. 2.2).

    • Stage 2: first computes the eligible lines for the corners of \({\mathbf {P}}'\) and then moves (at least) 4 interior robots of \({\mathbf {P}}'\) (all these robots have color start) to those eligible lines. Figure 2.3 illustrates this stage.

    • Stage 3: moves all the remaining interior robots of \({\mathbf {P}}'\) to the eligible lines of the corners of \({\mathbf {P}}'\). Figure 2.4 shows how the robots in the interior of \({\mathbf {P}}'\) in Fig. 2.3 move to \(EL_3\).

    • Stage 4: moves the robots on the eligible lines to the sides of \({\mathbf {P}}'\). Figure 2.5 shows how the robots on the eligible lines in Fig. 2.4 become side robots of \({\mathbf {P}}'\).

    • Stage 5: moves the side robots of \({\mathbf {P}}\) and \({\mathbf {P}}'\) to the sides of \({\mathbf {P}}''\). Figure 2.6 shows how the side robots of \({\mathbf {P}}\) and \({\mathbf {P}}'\) in Fig. 2.5 become side robots of \({\mathbf {P}}''\).

  • Stage 6: relocates the side robots of \({\mathbf {P}}''\) (Fig. 2.6) to the corners of \({\mathbf {P}}'''\). Figure 2.7 shows the resulting hull.

In the initial configuration \({\mathbf {C}}_0\), all robots in \(\mathcal{Q}\) are colored start. Each robot \(r_i\) works autonomously, having only the information in \({\mathbf {C}}(r_i)\). If \({\mathbf {P}}(r_i)\) is a line segment and \(N>3\), Stage 0 transforms the collinear \({\mathbf {C}}_0\) into a non-collinear configuration. Stages 1–6 then proceed autonomously until all robots are colored corner (which signifies that all N robots in \(\mathcal{Q}\) are on the corners of a hull \({\mathbf {P}}\)), at which point the algorithm terminates. The execution of the stages is synchronized through the colors the robots in \(\mathcal{Q}\) assume during the execution, and the robots execute the stages sequentially one after another. Due to space limitations, we do not describe in detail how this synchronization is achieved and which colors the robots have at the beginning of each stage. The algorithm uses 47 colors and runs for \(\mathcal{O}(1)\) epochs in total.

Sharma et al. [17] showed that Stages 0, 1, 4, and 6 can run in \(\mathcal{O}(1)\) time, whereas Stages 2, 3, and 5 run in \(\mathcal{O}(\log N)\) time. Our goal is to satisfy the conditions of the Beacon-Directed Curve Positioning framework (Sect. 3) so as to run Stages 2, 3, 5, and 6 in \(\mathcal{O}(1)\) time, which yields the overall \(\mathcal{O}(1)\) runtime of the algorithm. The framework requires that each robot moving to a k-point curve see the 2k beacons initially placed on the curve and, whenever no robot is in transit to the curve, also all the robots that have already moved to the curve (in addition to the initial 2k beacons). This turned out to be particularly challenging among the conditions listed in Definition 2.

We address this challenge by exploiting the eligible areas \(EA(*)\) of the corners of \({\mathbf {P}}\). Notice that all points inside \(EA(c_i)\), for each corner \(c_i\), are visible to all robots in the interior of \({\mathbf {P}}\) (while they are not moving). Therefore, we first develop a technique by which the interior robots of \({\mathbf {P}}\) compute an eligible line \(EL_i\) for each corner \(c_i\) of \({\mathbf {P}}\). We then develop a technique to place (at least) 4 interior robots on an eligible line \(EL_i\) (note that \(EL_i\) is inside \(EA(c_i)\)), 2 as left beacons and 2 as right beacons (Definition 3). After that, we develop a technique to maintain the property that the interior robots always see \(c_i\) (irrespective of the robots on \(EL_i\)) and, when there is no transient robot, also see all the robots on \(EL_i\). These ideas also turn out to satisfy the remaining three conditions (Definition 2) of the Beacon-Directed Curve Positioning framework. Putting these ideas together achieves \(\mathcal{O}(1)\) runtime for Stages 2 and 3. We then extend these techniques in the same spirit to run Stages 5 and 6 in \(\mathcal{O}(1)\) time. We provide details of Stages 0–6 separately below (the details of Stages 0, 1, and 4 are included for completeness) and outline the major properties they satisfy.

Stage 0 – Transforming a Collinear Initial Configuration \({\mathbf {C}}_0\) to a Non-collinear \({\mathbf {C}}_0\). Stage 0 is similar to Phase 0 of [17]: (at least) one endpoint robot of the line segment hull \({\mathbf {P}}\) formed by \({\mathbf {C}}_0\) moves perpendicular to the segment, converting it into a polygonal hull. We have the following lemma for Stage 0.

Lemma 7

[17]. When Stage 0 finishes, for \(N\ge 3\), there exists a hull \({\mathbf {P}}\) such that all the robots in \(\mathcal{Q}\) are in the corners and sides of that hull with color \(\in \{\mathtt{start}, \mathtt{start\_moving}, \mathtt{ready}\}\). Stage 0 runs for (at most) 3 epochs avoiding collisions and Stage 1 starts only after Stage 0 finishes.

Stage 1 – Moving the Corners of \({\mathbf {P}}\) to Their Eligible Areas. The goal of Stage 1 is to move all corners \(\mathcal{Q}_c\) of \({\mathbf {P}}\) into their eligible areas by a technique similar to [17]. The corners in \(\mathcal{Q}_c\) first move and are colored corner1, and are then colored corner2 or corner. The intermediate color corner1 is used because we want Stage 2 to start only after all robots in \(\mathcal{Q}_c\) are colored corner2. Side robots of \({\mathbf {P}}\) are colored special if they are neighbors of a corner of \({\mathbf {P}}\), and side1 otherwise. We have the following lemma for Stage 1.

Lemma 8

[17]. During Stage 1, the corners of \({\mathbf {P}}'\) are colored corner2 and the sides of \({\mathbf {P}}'\) are colored side1 or special. The interior robots of \({\mathbf {P}}\) remain as the interior robots of \({\mathbf {P}}'\) with color start. Moreover, Stage 1 runs for \(\mathcal{O}(1)\) epochs avoiding collisions and Stage 2 starts only after Stage 1 finishes.

Stage 2 – Positioning 4 Interior Robots of \({\mathbf {P}}'\) on the Eligible Lines of the Corners of \({\mathbf {P}}'\). We execute Stage 2 in two sub-stages. In Stage 2.1, we compute eligible lines for the corners of \({\mathbf {P}}'\). In Stage 2.2, we place (at least) 4 interior robots on those lines, satisfying the conditions of the Beacon-Directed Curve Positioning framework of Sect. 3 so that Stage 3 can run in \(\mathcal{O}(1)\) epochs. The techniques described here are what make it possible to run the Beacon-Directed Curve Positioning framework in Stage 3.

Stage 2.1 – Computing Eligible Lines for the Corners of \({\mathbf {P}}'\). Let \(c_i\) be a corner of \({\mathbf {P}}'\) colored corner2. If there are robots inside the corner triangle \(T(c_i)\), pick the corner line segment \(CL_i\); otherwise pick the triangle line segment \(TL_i\). Denote the chosen line by \(L_i\). We first place 4 interior robots of \({\mathbf {P}}'\) on \(L_i\) (Fig. 3.a) and color them transit. This later helps to compute the eligible line \(EL_i\) for \(c_i\).

Stage 2.1.1 – Moving 4 Interior Robots of \({\mathbf {P}}'\) to \(L_i\): This can be done by moving the robots closer to \(L_i\), sequentially one after another, onto \(L_i\) and coloring them transit (the details are in the Appendix). Corner \(c_i\) then changes its color from corner2 to corner21 once there are exactly 4 robots on \(L_i\) (Fig. 3.a). The robots inside \(CP(c_i)\) then color themselves internal, since they see \(c_i\) (Lemma 1). Robot \(c_i\) then changes its color to corner22. It is possible that some corners of \({\mathbf {P}}'\) have fewer than 4 robots (or even no robot) on \(L_i\) even after all robots in \(\mathcal{Q}_i\) are colored internal; those corners change their color directly from corner2 to corner5.

Fig. 3. An illustration of how the corner and interior robots of \({\mathbf {P}}'\) move in Stage 2.1.2.

Stage 2.1.2 – Computing Eligible Lines for the Corners of \({\mathbf {P}}'\): We describe how \(EL_i\) is computed for \(c_i\). Let \(t_1,t_2,t_3,t_4\) be the 4 robots on \(L_i\) of corner \(c_i\) (Stage 2.1.1), with \(t_2\) and \(t_3\) between \(t_1\) and \(t_4\), and \(t_2,t_3\) closer to \(t_1,t_4\), respectively (Fig. 3.a). Robots \(t_1\) and \(t_4\) move to the lines \(\overline{c_it_2}\) and \(\overline{c_it_3}\), respectively, assuming color transit_moving; they perform this move only when they have color transit and \(c_i\) has color corner22. The position \(t_1\) (resp. \(t_4\)) moves to is the point on \(\overline{c_it_2}\) (resp. \(\overline{c_it_3}\)) at distance \(\mathsf{length}(\overline{c_it_2})/8\) (resp. \(\mathsf{length}(\overline{c_it_3})/8\)) from \(t_2\) (resp. \(t_3\)). They then change their color to transit1 (Fig. 3.b). After \(c_i\) sees both \(t_1,t_4\) with color transit1, it computes \(EA(c_i)\) and a point \(x_i\) on \(\overline{c_ic_{i-1}}\) (or \(y_i\) on \(\overline{c_ic_{i+1}}\)) such that the line, say \(L_i'\), parallel to \(\overline{t_1t_4}\) and passing through \(x_i\) (or \(y_i\)) crosses \(EA(c_i)\). By construction, \(\overline{t_1t_4}\) is parallel to \(\overline{t_2t_3}\), and also parallel to \(\overline{c_{i-1}c_{i+1}}\). Suppose \(x_i\) on \(\overline{c_ic_{i-1}}\) is the point for which \(L_i'\) crosses \(EA(c_i)\). Observe that \(L_i'\) is in fact the eligible line \(EL_i\). Corner \(c_i\) then moves to \(x_i\) (the procedure when \(c_i\) moves to point \(y_i\) is analogous), assuming color corner22_moving (Fig. 3.c), and changes its color to corner23. Let \(op_i\) be the position of \(c_i\) before it moves to \(x_i\).

We now describe a technique to put all of \(t_1,t_2,t_3,t_4\) on \(L_i'\) (which is \(EL_i\)) so that the interior robots of \({\mathbf {P}}'\) can recognize it as \(EL_i\). Suppose \(t_1\) is closer than \(t_4\) to the new position \(x_i\) of \(c_i\) (the case where \(t_4\) is closer to \(c_i\) than \(t_1\) is analogous). Robot \(t_1\) moves to the intersection point of \(L_i'\) and \(\overline{t_1t_2}\), assuming color transit1_moving (Fig. 3.d), and then changes its color to transit2 the next time it becomes active. After \(c_i\) sees \(t_1\) colored transit2, it moves back to its previous position \(op_i\) (where it was colored corner22), assuming color corner23_moving (Fig. 3.e). Although \(c_i\) has no memory of \(op_i\), it can compute \(op_i\) since \(op_i\) is the intersection point of lines \(\overleftrightarrow{t_1t_2}\) and \(\overleftrightarrow{t_4t_3}\). Robot \(c_i\) then assumes color corner24. After this, \(t_4\) (with color transit1) moves to the intersection point of \(L_i'\) and \(\overline{t_4t_3}\), assuming color transit1_moving (Fig. 3.f); it then assumes color transit2.
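For concreteness, here is a sketch (ours) of the line-intersection computation that lets the oblivious \(c_i\) recover \(op_i\) from the positions of \(t_1,t_2,t_3,t_4\):

```python
# op_i is the intersection of the line through t1, t2 and the line through t4, t3,
# all of which c_i can observe in its snapshot.
from typing import Tuple

Point = Tuple[float, float]

def line_intersection(p1: Point, p2: Point, q1: Point, q2: Point) -> Point:
    """Intersection of line(p1, p2) and line(q1, q2); assumes they are not parallel."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = q1; x4, y4 = q2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / denom,
            (a * (y3 - y4) - (y1 - y2) * b) / denom)

# Example: the lines y = x and y = -x + 2 meet at (1, 1).
print(line_intersection((0.0, 0.0), (2.0, 2.0), (0.0, 2.0), (2.0, 0.0)))
```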

Let \(op_1,op_4\) be the current positions of \(t_1,t_4\), respectively. Robots \(t_1\) and \(t_4\) (after being colored transit2) move left or right along \(L_i'\) to make room for robots \(t_2\) and \(t_3\) to move to \(L_i'\), without blocking any internal colored robot from seeing \(c_i\) or the robots \(t_1, t_2,t_3,t_4\) on \(L_i'\). Robot \(t_1\) (and similarly \(t_4\)) moves as follows. Let \(\overleftrightarrow {c_it_1}\) be the line through \(c_i\) and \(t_1\). Let \(L'\) be a line connecting \(c_i\) with an internal colored robot r to the left or right of \(\overleftrightarrow {c_it_1}\) such that the cone area QC(r) formed by \(L'\) and \(\overleftrightarrow {c_it_1}\) contains no other internal colored robot. Let w be the intersection point of \(L_i'\) and \(L'\). Robot \(t_1\) moves to the midpoint m of the line segment connecting it with w (note that all three points \(w, t_1\), and m are on \(L_i'\)), assuming color transit2_moving (Fig. 3.g); it then changes its color to eligible the next time it becomes active. After \(t_2\) and \(t_3\) see both \(t_1\) and \(t_4\) with color eligible, \(t_2\) moves to point \(op_1\) (the position of \(t_1\) on \(L_i'\) before it moved to point m) and \(t_3\) moves to \(op_4\) (the position of \(t_4\) on \(L_i'\) before it moved) (Fig. 3.h). Robots \(t_2,t_3\) then assume color eligible. After \(c_i\) sees all of \(t_1,t_2,t_3,t_4\) on \(L_i'\) with color eligible, it assumes color corner3.

Lemma 9

During Stage 2.1, 4 interior robots of \({\mathbf {P}}'\) inside the corner polygon \(CP(c_i)\) are correctly placed on the eligible line \(EL_i\) of \(c_i\) and colored eligible and the corners of \({\mathbf {P}}'\) are colored \(\in \{\)corner3, corner5, corner\(\}\). Stage 2.1 runs for \(\mathcal{O}(1)\) epochs avoiding collisions and Stage 2.2 starts only after Stage 2.1 finishes.

Lemma 10

Let \(r_i\) be a robot with color internal in the interior of \({\mathbf {P}}'\). When Stage 2.1 finishes, \(r_i\) sees \(c_i\) and all 4 eligible colored robots in the eligible line \(EL_i\).

Fig. 4. An illustration of how the interior robots of \({\mathbf {P}}'\) move to the eligible lines during Stage 2.2.

Stage 2.2 – Positioning (at least) 4 Interior Robots on the Eligible Lines of the Corners of \({\mathbf {P}}'\). After the eligible line \(EL_i\) of a corner \(c_i\) of \({\mathbf {P}}'\) has been computed in Stage 2.1, the goal of this stage is to decide whether the 4 robots on \(EL_i\) with color eligible can serve as left and right beacons for the framework of Sect. 3, which repositions the remaining interior robots of \({\mathbf {P}}'\) (with color internal) onto the eligible lines in \(\mathcal{O}(1)\) epochs. If those 4 robots are positioned such that all interior robots of \({\mathbf {P}}'\) inside the corner polygon \(CP(c_i)\) lie within the cone area \(QC(c_i)\) formed by the lines \(\overleftrightarrow {c_it_2},\overleftrightarrow {c_it_3}\), then these robots serve as left and right beacons and this stage finishes with \(c_i\) changing its color to corner4. Otherwise, (at most) 4 robots inside \(CP(c_i)\) are moved to \(EL_i\) in this stage so that 2 of them serve as left beacons and 2 as right beacons for the Beacon-Directed Curve Positioning framework of Sect. 3. Figure 4 illustrates these ideas.
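A sketch (ours, not the paper's exact test) of the cone-membership check used above, i.e., whether a robot at p lies in the cone with apex \(c_i\) spanned by the rays through \(t_2\) and \(t_3\) (assuming a cone angle below 180 degrees):

```python
# Cone-membership test via cross products.
from typing import Tuple

Point = Tuple[float, float]

def cross(o: Point, a: Point, b: Point) -> float:
    """z-component of (a - o) x (b - o); its sign gives the side of line o-a that b is on."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_cone(apex: Point, t2: Point, t3: Point, p: Point) -> bool:
    """True iff p lies (non-strictly) inside the cone at `apex` spanned by rays to t2 and t3."""
    s2 = cross(apex, t2, p)    # side of ray apex->t2
    s3 = cross(apex, t3, p)    # side of ray apex->t3
    ref = cross(apex, t2, t3)  # orientation of the cone itself
    if ref >= 0:               # t3 lies counter-clockwise of t2 around the apex
        return s2 >= 0 and s3 <= 0
    return s2 <= 0 and s3 >= 0

# Example: the cone at the origin between the rays to (1, 0) and (0, 1) contains (1, 1).
print(in_cone((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)))   # True
print(in_cone((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (-1.0, 1.0)))  # False
```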

Lemma 11

During Stage 2.2, (at least) four internal robots of \({\mathbf {P}}'\) are positioned on the eligible lines and colored eligible. Stage 2.2 runs for \(\mathcal{O}(1)\) epochs avoiding collisions and Stage 3 starts only after Stage 2.2 finishes.

Stage 3 – Positioning the Remaining Internal Robots of \({\mathbf {P}}'\) on the Eligible Lines. At the beginning of Stage 3, all corners of \({\mathbf {P}}'\) have color \(\in \{\)corner4, corner5, corner\(\}\), with at least one corner colored corner4 (otherwise there is no interior robot with color internal in \({\mathbf {P}}'\)). All interior robots of \({\mathbf {P}}'\) that are on the eligible lines are colored eligible, and the rest are colored internal. Let \(c_i\) be a corner of \({\mathbf {P}}'\) colored corner4 and let r be a robot with color internal inside the corner polygon \(CP(c_i)\). Note that r is closer to \(c_i\) than to the other corners of \({\mathbf {P}}'\) and always sees \(c_i\) (Lemma 10). Robot r moves as follows.

  • Condition 3.1: Robot r is colored internal and it can see at least 2 eligible colored robots towards \(c_i\).

  • Action 3.1: Let \(L_r\) be the line formed by those eligible robots. Robot r assumes color internal_moving and moves to the intersection point w of lines \(L_r\) and \(\overline{c_ir}\).

  • Condition 3.2: Robot r is colored internal_moving.

  • Action 3.2: Robot r assumes color eligible.

As soon as \(c_i\) sees no robot with color internal or internal_moving (i.e., all robots in the interior of \({\mathbf {P}}'\) have been placed on the eligible lines), it assumes color corner5. We can prove the following two lemmas.
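The following sketch (ours) restates the Stage 3 rules above; the helpers visible_towards and visible, the robot attributes, and line_intersection (from the earlier sketch) are assumptions for illustration only.

```python
# One Compute step for an internal robot r in CP(c_i), and the corresponding
# rule for the corner c_i.

def stage3_internal_step(r, c_i, visible_towards, line_intersection):
    eligibles = [b for b in visible_towards(r, c_i) if b.color == "eligible"]
    if r.color == "internal" and len(eligibles) >= 2:
        # Action 3.1: any two eligible robots define the line L_r (they lie on EL_i);
        # move to the intersection of L_r with the segment from c_i through r.
        e1, e2 = eligibles[0], eligibles[1]
        r.color = "internal_moving"
        r.position = line_intersection(e1.position, e2.position,
                                       c_i.position, r.position)
    elif r.color == "internal_moving":
        # Action 3.2: the robot is now on the eligible line; it becomes eligible.
        r.color = "eligible"

def stage3_corner_step(c_i, visible):
    """Corner c_i advances once no internal robots remain in its view."""
    if c_i.color == "corner4" and not any(
            x.color in ("internal", "internal_moving") for x in visible(c_i)):
        c_i.color = "corner5"
```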

Lemma 12

During Stage 3, the eligible colored robots positioned on \(EL_i\) of a corner \(c_i\) of \({\mathbf {P}}'\) are seen by all the internal colored robots inside \(CP(c_i)\) (and vice-versa), provided there is no transient robot moving towards \(EL_i\).

Lemma 13

During Stage 3, all the robots in the interior of \({\mathbf {P}}'\) (with color internal) are correctly positioned on the eligible lines of the corners of \({\mathbf {P}}'\) and colored eligible. Moreover, the corners of \({\mathbf {P}}'\) are colored corner5. Furthermore, Stage 3 runs for \(\mathcal{O}(1)\) epochs avoiding collisions and Stage 4 starts only after Stage 3 finishes.

Stage 4 – Positioning the Robots on the Eligible Lines to the Sides of \({\mathbf {P}}'\). Stage 4 is similar to [17]. Let r be a robot on the eligible line \(EL_i\), and let \(x_i\) and \(y_i\) be the points at which \(EL_i\) intersects \(\overline{c_ic_{i-1}}\) and \(\overline{c_ic_{i+1}}\), respectively. The goal is to move r onto either side \(\overline{c_ic_{i-1}}\) or \(\overline{c_ic_{i+1}}\) of \({\mathbf {P}}'\), between points \(c_i\) and \(x_i\) (i.e., on \(\overline{c_ix_i}\)) or between \(c_i\) and \(y_i\) (i.e., on \(\overline{c_iy_i}\)). We have the following lemma for Stage 4.

Lemma 14

[17]. During Stage 4, all robots in the eligible lines of the corners of \({\mathbf {P}}'\) (with color eligible) are correctly positioned on the sides of \({\mathbf {P}}'\) and colored side2. Moreover, the corners of \({\mathbf {P}}'\) are colored corner. Furthermore, Stage 4 runs for \(\mathcal{O}(1)\) epochs avoiding collisions and Stage 5 starts only after Stage 4 finishes.

Fig. 5. An illustration of Stage 5.

Stage 5 – Making Side Robots of \({\mathbf {P}}'\) the Sides of \({\mathbf {P}}''\). Let S be a side of \({\mathbf {P}}\) with endpoints \(c_i,c_{i+1}\). There is a corresponding side \(S'\) of \({\mathbf {P}}'\) with endpoints \(c_i,c_{i+1}\), and all the side robots of S lie on a line in the corridor of \(S'\) with color side1 or special. (The corridor of \(S'\) is the infinite subregion in its exterior bounded by \(S'\) and the perpendicular lines through the endpoints \(c_i,c_{i+1}\) of \(S'\).) Robots that became side robots in Stage 4 are on \(S'\) with color side2. Stage 5 is for the side robots on \(S'\). Consider the case of at least 2 side robots on S and at least one side robot on \(S'\); Fig. 5 illustrates the configuration. Stage 5 is executed in two sub-stages. In Stage 5.1, the robots on \(S_l',S_r'\) move to \(S_l,S_r\), respectively. In Stage 5.2, the robots on \(S_m'\) move to \(S_m\). Stages 5.1 and 5.2 are synchronized by changing the colors of \(s_1,s_w\) from special to temp_corner. In both sub-stages, the idea is to satisfy the conditions of the framework of Sect. 3 so that it runs in \(\mathcal{O}(1)\) epochs.

Lemma 15

During Stage 5, all the robots on \(S'\) move to S and are colored side. Moreover, Stage 5 runs for \(\mathcal{O}(1)\) epochs avoiding collisions, and Stage 6 starts only after Stage 5 finishes.

Stage 6 – Making Side Robots of \({\mathbf {P}}''\) the Corners of \({\mathbf {P}}'''\). After Stage 5, all robots in \(\mathcal{Q}\) are on the sides and corners of \({\mathbf {P}}''\), colored side and corner, respectively. Stage 6 moves all side robots of \({\mathbf {P}}''\) to corners of \({\mathbf {P}}'''\) using the framework of Sect. 3 (see Figs. 2.6 and 2.7). The algorithm works independently on each side \(S=(c_i,s_1,s_2,\ldots ,s_m,c_{i+1})\) of \({\mathbf {P}}''\), placing all side robots of S on an arc of a circle (i.e., a 3-point curve) in the corridor of S that passes through the endpoints \(c_i,c_{i+1}\) of S; this circle is called a safe circle. This ensures that no three side points of S are collinear. The algorithm further guarantees that \({\mathbf {P}}'''\) is convex, thus ensuring complete visibility.
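As an illustration (ours, not the paper's exact construction), destinations on such a safe-circle arc can be generated from the side endpoints, a bulge height h small enough to stay inside the corridor, and the number m of side robots; the orientation of the normal toward the corridor is an assumption here.

```python
# m distinct, non-collinear destination points on the circular arc through the
# side endpoints c1, c2 that rises h above the chord (into the corridor of S).
import math
from typing import List, Tuple

Point = Tuple[float, float]

def safe_arc_points(c1: Point, c2: Point, h: float, m: int) -> List[Point]:
    chord = math.dist(c1, c2)
    radius = h / 2.0 + chord * chord / (8.0 * h)              # circle with sagitta h
    tx, ty = (c2[0] - c1[0]) / chord, (c2[1] - c1[1]) / chord  # chord direction
    nx, ny = -ty, tx                                           # unit normal (bulge side assumed)
    mid = ((c1[0] + c2[0]) / 2.0, (c1[1] + c2[1]) / 2.0)
    points = []
    for j in range(1, m + 1):
        d = (j / (m + 1.0) - 0.5) * chord                      # signed offset from the midpoint
        lift = math.sqrt(radius * radius - d * d) - (radius - h)
        points.append((mid[0] + d * tx + lift * nx, mid[1] + d * ty + lift * ny))
    return points

# Example: three robots on the side from (0, 0) to (4, 0) move onto an arc of height 0.5.
print(safe_arc_points((0.0, 0.0), (4.0, 0.0), 0.5, 3))
```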

Lemma 16

During Stage 6, all the side robots of \({\mathbf {P}}''\) become corners of \({\mathbf {P}}'''\) and are colored corner. Moreover, Stage 6 runs for \(\mathcal{O}(1)\) epochs avoiding collisions, and then the algorithm terminates.

Proof of Theorem 1. Theorem 1 follows by combining Lemmas 7–9, 11, and 13–16.   \(\square \)