Constructing 3D Underwater Sensor Networks without Sensing Holes Utilizing Heterogeneous Underwater Robots

Abstract: This article addresses the autonomous construction of underwater sensor networks using multiple surface ships. To build underwater sensor networks in a 3D workspace with many obstacles, this article considers surface ships that drop underwater robots into the underwater workspace. We assume that the robots are heterogeneous: each robot can have a distinct sensing range and move with a distinct speed. The proposed strategy moves a single robot at a time to spread out the underwater network until the 3D cluttered workspace is fully covered by the robots' sensors, so that no sensing hole remains. As far as we know, this article is novel in enabling multiple heterogeneous robots to build underwater sensor networks in a 3D cluttered environment while satisfying the following conditions: (1) all sensing holes are removed; (2) network connectivity is maintained; (3) all underwater robots are localized. In addition, we address how to handle the case where a robot breaks down, and we discuss how to estimate the number of robots required when an obstacle inside the workspace is not known a priori. Utilizing MATLAB simulations, we demonstrate the effectiveness of the proposed network construction methods.

This article solves the coverage problem using multiple mobile sensors in a cluttered underwater environment. Since we consider 3D underwater environments with many obstacles, a mobile sensor needs to avoid obstacles while it moves. Underwater mobile sensors are deployed in the workspace to perform collaborative monitoring and data collection tasks. For complete monitoring of the cluttered workspace, it is required to build sensor networks such that no sensing hole exists in the workspace.
In underwater environments, optical, electromagnetic, or acoustic sensors have been used for wireless communication [17,18]. However, building sensor networks in underwater environments is difficult, since the communication range is rather short [19][20][21].
As a method to build underwater sensor networks autonomously, this article considers surface ships dropping underwater robots into the underwater 3D workspace. Here, each robot can move and has sensing/communication abilities. Since the Global Positioning System (GPS) cannot be used to localize an underwater robot, this article introduces network building approaches that generate underwater sensor networks while localizing every robot in the networks.
As a ship arrives at the sea surface of the designated workspace, it begins dropping underwater robots while not moving at the site. Once multiple robots are dropped, they autonomously move into the unknown 3D cluttered workspace until no sensing hole exists in the workspace. As a robot moves in a cluttered workspace, it needs to avoid obstacles while maintaining the network connectivity with other robots.
Many papers have handled how to make multiple robots perform area coverage in 2D environments [4,5,8,22,23]. To build a sensor network in an unknown 2D workspace, the reference [2] made a single robot explore the workspace while deploying static sensor nodes. One restriction of this approach is that the robot must be large enough to carry all sensor nodes. Also, the robot must have enough power to explore the 2D workspace completely, since a sensor node has no mobility. The references [6,7] used Voronoi tessellations to make all robots spread out in order to increase network coverage based on distributed information from their neighbors. The authors of [8] considered deploying a swarm of robots into an unknown 2D environment to remove all sensing holes in the environment, and handled the case where each robot has bearing-only sensors measuring the bearing to its neighbors in its local coordinate frame. Cluttered environments were not considered in the papers addressed in this paragraph.
Our paper is distinct from other papers addressed in the previous paragraph, since our paper considers the coverage problem in 3D workspace with many obstacles. Since a robot moves in a cluttered workspace, it needs to avoid obstacles while building the sensor network.
As far as we know, only a few papers handled coverage path planning for 3D space. In [24], different deployment strategies for 2D and 3D communication architectures for UnderWater Acoustic Sensor Networks (UW-ASNs) were proposed. For deploying underwater sensors, the reference [24] introduced a sensor which is anchored to the sea bottom and is equipped with a floating buoy that can be inflated by a pump. The buoy pulls the sensor towards the ocean surface. The depth of the sensor can then be regulated by adjusting the length of the wire that connects the sensor to the anchor. However, this deployment approach cannot be used for covering a deep sea. Also, collection of these sensors is not trivial, since a sensor is anchored to the sea bottom. Moreover, the deployment strategy in [24] does not assure that all coverage holes are removed after the deployment.
In this paper, we consider a robot which can freely move in 3D underwater environments. Under the proposed deployment strategy, all robots are deployed so that no sensing holes exist in the workspace. Suppose that a coverage mission of the underwater network is done. In this case, we need to gather the robots, so that they can be deployed for coverage of another space. By controlling the buoyancy of each robot, we can easily make each robot move to the sea surface. Then, a ship can collect the robots easily.
The references [25,26] handled 3D coverage path planning (i.e., path planning process for enabling full coverage of a given 3D space by one or multiple robots). The goal of such planning is to provide nearly full coverage, minimizing the occurrence of visiting an identical area multiple times. However, the references [25,26] did not address how to deploy autonomous sensors, so that the 3D workspace is continuously monitored by the deployed sensors. In our paper, we develop coverage control so that once the robots are located at their designated target positions, then the 3D workspace is continuously monitored by the deployed robots.
In practice, a robot may not be identical to another robot. For instance, some robots may not move as fast as others, since their hardware systems can be partially damaged during operation. Hence, we assume that every robot moves with a distinct speed. Moreover, a robot may have a sensing range distinct from that of another robot. Hence, this article assumes that every robot is heterogeneous, such that each robot can have a distinct sensing range while moving with a distinct speed. This article thus handles a scenario where heterogeneous robots are deployed in unknown underwater environments.
Since obstacles can block the communication link between robots, each robot is controlled while maintaining the network connectivity in obstacle-rich environments. The proposed coverage methods work by moving a single robot at a time to spread out the 3D sensor network, until the 3D workspace is fully covered without sensing holes.
The surface ships dropping robots can use GPS to localize themselves. However, GPS cannot be used for localization of a robot. This article assumes that a robot can calculate the relative location of its neighboring robot using local sonar sensors. In underwater environments, sonar sensors are preferred, since sonar signal can travel a longer distance, compared to electromagnetic wave [18]. The proposed coverage methods result in underwater sensor networks without sensing holes, while localizing all robots.
As far as we know, this article is novel in enabling multiple heterogeneous robots to build underwater sensor networks in a 3D cluttered environment, while satisfying the following conditions: (1) all sensing holes are removed; (2) network connectivity is maintained; (3) all underwater robots are localized.
This paper further addresses how to handle the case where a robot is broken. Also, we address how to estimate the number of robots required, considering the case where an obstacle inside the 3D workspace is unknown a priori. Utilizing MATLAB simulations, we demonstrate the effectiveness of the proposed coverage methods.
The article is organized as follows: Section 2 introduces the preliminary information of this article. Section 3 introduces the 3D coverage methods utilizing multiple heterogeneous robots. Section 4 shows the MATLAB simulations of the proposed 3D coverage methods. Section 5 presents a discussion of the paper, and Section 6 concludes the article.

Problem Statement
This article solves the space coverage problem using multiple robots in underwater environments. For complete monitoring of the given 3D workspace, it is required to build sensor networks such that no sensing hole exists in the given workspace.
Initially, all robots are carried by surface ships. Here, each robot has both sensing and communication abilities. As a ship arrives at the sea surface of the workspace, it begins dropping robots while not moving at the site.
The dropped robots move autonomously until the 3D workspace is fully covered by sensors of the robots. The obstacles in the workspace are unknown to each robot, and every robot moves based on local interaction with its neighboring robot. Our coverage methods result in underwater sensor networks without sensing holes, while localizing all robots. Section 3 discusses the 3D coverage methods using multiple robots.

Assumptions and Notations
Using the notations in graph theory [27], a weighted directed graph G is a pair G = (V(G), E(G)), in which V(G) is the node set and E(G) is the directed edge set. Every directed edge e ∈ E(G) has a weight w(e), where w : E(G) → Z+. A directed path is a sequence of directed edges.
Suppose that we have S surface ships in total. Let p s denote the s-th surface ship where s ∈ {1, 2, . . . , S}. Each surface ship p s carries N s robots in total.
As a ship arrives at the sea surface of the workspace, it begins dropping robots while not moving at the site. Let n s i denote the i-th robot dropped by p s . Since p s carries N s robots, we have i ∈ {1, 2, . . . , N s }. The ship p s drops robots in the following order: n s 1 → n s 2 → · · · → n s N s . Suppose that each robot has both Ultra-Short BaseLine acoustic positioning (USBL) and depth sensors. USBL sensors can be used to derive both the range and the azimuth of a signal source. USBL sensors can also be used for mutual communication, i.e., a transmitter can send data to a receiver. Each USBL sensor can work as a transmitter as well as a receiver. For instance, one can access the information on commercial USBL sensors at the following website: https://www.ixblue.com/products/usbl-positionning-system (accessed on 10 May 2021).
Let r d i represent the detection (sensing) range of n s i . Also, let r c i represent the communication range of n s i . Let the detect-communicate range r dc i be defined as r dc i = min(r d i , r c i ). Let the Sphere dc of a robot n s i define a sphere with radius r dc i whose center is located at n s i . Let the Boundary dc of a robot indicate the boundary of the Sphere dc of the robot.
Let n s i denote the 3D coordinates of n s i , where i ∈ {1, 2, . . . , N s }. If ‖n s i − n s j ‖ < r dc i , then n s i can sense n s j . Also, n s i can send a communication signal to n s j . On the Boundary dc of a robot n s i , Q points are evenly generated. These points represent the points where the sensing ray of n s i can reach. As an example, we discuss generating Q = 18 * 18 points on the Boundary dc of a robot. (This generation method is utilized in Section 4.) Let us rotate the vector [r dc i , 0, 0] T by φ ∈ {π/9, 2π/9, 3π/9, . . . , 2π} about the z-axis. The matrix representing this rotation is

R(φ) = [cos φ, −sin φ, 0; sin φ, cos φ, 0; 0, 0, 1].

Thereafter, we rotate R(φ)[r dc i , 0, 0] T by θ ∈ {π/9, 2π/9, 3π/9, . . . , 2π} about the y-axis. The matrix representing this rotation is

R(θ) = [cos θ, 0, sin θ; 0, 1, 0; −sin θ, 0, cos θ].

The resulting vector after the two rotations is

fp(φ, θ) = R(θ)R(φ)[r dc i , 0, 0] T .

The vector fp(φ, θ) represents the relative position of a point on the Boundary dc of n s i with respect to n s i . As we add n s i to fp(φ, θ), we obtain the 3D coordinates of a point on the Boundary dc of n s i . Note that φ and θ are each chosen from 18 values. Therefore, Q = 18 * 18 points are derived on the Boundary dc of n s i . Let L(n, m) denote a straight line segment connecting two robots n and m. We say that L(n, m) is safe once it does not intersect an obstacle. This implies that a robot avoids colliding with an obstacle as it travels along L(n, m).
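As an illustration, the two-rotation construction above can be sketched in Python. This is a hypothetical helper (not the authors' MATLAB code) that returns the Q = n * n relative positions fp(φ, θ) on the Boundary dc:

```python
import math

def boundary_points(r_dc, n=18):
    """Generate n*n points on the Boundary_dc sphere of radius r_dc by
    rotating [r_dc, 0, 0]^T by phi about the z-axis, then by theta about
    the y-axis, for phi, theta in {2*pi/n, 4*pi/n, ..., 2*pi}."""
    angles = [2.0 * math.pi * k / n for k in range(1, n + 1)]
    pts = []
    for phi in angles:
        # R_z(phi) [r_dc, 0, 0]^T
        x1, y1, z1 = r_dc * math.cos(phi), r_dc * math.sin(phi), 0.0
        for theta in angles:
            # fp(phi, theta) = R_y(theta) R_z(phi) [r_dc, 0, 0]^T
            x2 = math.cos(theta) * x1 + math.sin(theta) * z1
            z2 = -math.sin(theta) * x1 + math.cos(theta) * z1
            pts.append((x2, y1, z2))
    return pts
```

Since both rotations preserve length, every generated point lies exactly at distance r dc from the robot, as required for points on the Boundary dc.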
A freePoint of n s i denotes a point among Q points of n s i , such that a straight line segment connecting n s i and the point does not intersect obstacles. In practice, freePoints can be detected by active sonar sensors. A robot has active sonars, which can detect an obstacle close to the robot [28].
The authors of [28] considered the case where the active sonars are positioned so that the viewing angle of the sonars is 200 degrees. In our paper, we need to install active sonars surrounding the robot so that the sonars can cover the space around it. If a sonar ray does not meet an obstacle, the ray is associated with a freePoint of the robot.
A frontierPoint f (n s i ) of a robot n s i denotes a freePoint of n s i , which is outside the Sphere dc of any other robot. As Q → ∞, one obtains dense frontierPoints on a Boundary dc . Let frontierSurface denote the set of frontierPoints as Q → ∞.
A frontierSurface is on the border between a space covered by all Sphere dc s and a space covered by no Sphere dc . If every robot has no frontierSurface, then all robots' Sphere dc s cover the open space completely.
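The frontierPoint test above can be sketched as a filter over a robot's freePoints. The following is a minimal illustration in which `free_points` and `other_robots` (a list of hypothetical (center, r dc) pairs) are assumed inputs, with obstacle checks already done:

```python
import math

def frontier_points(free_points, other_robots):
    """Keep the freePoints of a robot that lie outside the Sphere_dc of
    every other robot; by definition these are its frontierPoints."""
    def outside(p, center, r_dc):
        # strictly farther than r_dc from the other robot's center
        return math.dist(p, center) > r_dc
    return [p for p in free_points
            if all(outside(p, c, r) for c, r in other_robots)]
```

A point covered by any other robot's Sphere dc is discarded, so the survivors are exactly the points on the border between covered and uncovered space.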
Recall that n s i denotes the i-th robot dropped by the ship p s (s ∈ {1, 2, . . . , S}). Each robot n s i stores and expands the traversable graph, say I s = (V(I s ), E(I s )). Each vertex in V(I s ) indicates a deployed robot. A directed edge, say {n s i , n s j } ∈ E(I s ), is established once the following two conditions are met: (1) ‖n s i − n s j ‖ ≤ r dc i , and (2) L(n s i , n s j ) is safe. Using the above definition, {n s i , n s j } ∈ E(I s ) implies that n s i can detect n s j . Also, n s i can send a communication signal to n s j . ‖n s i − n s j ‖ is accessible using USBL sensors. Suppose that a new robot, say n s i , is deployed in I s , so that this new robot and a robot, say n s' j , in I s' (s' ≠ s) satisfy the above two conditions for establishing a directed edge.
In this case, a new directed edge is established from n s i to n s' j . Then, I s and I s' merge to generate the bigger graph I s ∪ I s' . In this way, I s can expand to contain a robot n s' j which is dropped by a ship p s' where s' ≠ s.
In the case where I s is not connected to I s' (s' ≠ s), I s and I s' expand independently of each other. Once I s is connected to I s' , they merge to generate the bigger graph I s ∪ I s' .
By the definition of E(I s ), n s i avoids collision while traversing an edge in E(I s ). This implies that I s is a safe topological map for n s i . The weight of every directed edge, say e ∈ E(I s ), is w(e). ‖n s i − n s j ‖ is accessible using USBL sensors and is set as w({n s i , n s j }). The weight of every edge in I s is used to let n s i find a path in I s .
We say that two robots n s i and n s j are neighbors in the case where ‖n s i − n s j ‖ < r dc i + r dc j . Let N (n s i ) denote a neighbor of n s i . We assume that a ship dropping robots can localize itself using GPS. In summary, every robot satisfies the following assumptions: (A1) Using the local sonar sensor of a robot n, n is able to calculate the relative location of its neighbor robot N (n). (A2) A robot n is able to access r dc , the detect-communicate range, of its neighbor robot N (n). (A3) A robot n is able to calculate the relative location of a frontierPoint in f (n).

Each Robot Uses USBL and Depth Sensors to Enable Assumptions (A1) and (A2)
Suppose that a robot n transmits a signal to another robot, say m ∈ N (n). The bearing of the signal is measured by the USBL of m, which contains an array of transducers. The transceiver head normally contains three or more transducers separated by a baseline of 10 cm or less. A method called "phase-differencing" within this transducer array is used to calculate the signal direction. The equation for the bearing measurement is

β = atan2(nm[2], nm[1]) + N b . (5)

Here, nm = n − m, and nm[j] indicates the j-th element in nm. In (5), N b indicates the bearing measurement noise, and atan2(y, x) defines the angle of the complex number x + iy.
Suppose that a robot n transmits a signal to another robot, say m ∈ N (n). Then, m replies with its own acoustic pulse. This return pulse is detected by n. The time from the transmission of the initial acoustic pulse until the reply is detected is measured by the USBL system of n and is converted into a range. This is feasible, since n has the reference system to measure the signal speed in underwater environments. This range estimation is then shared by both n and m.
The equation for the range measurement between n and m is

r = ‖n − m‖ + N r . (6)

Here, N r indicates the range measurement noise. Note that m has depth sensors to measure its depth. The depth of m is measured using

d m = m[3] + N d . (7)

Here, N d indicates the depth measurement noise, and m[3] indicates the third element in m. Similarly, n measures its own depth d n using its depth sensor. We define

h = sqrt(r^2 − (d n − d m )^2). (8)

Then, the relative position of n with respect to m is

n − m = [h cos β, h sin β, d n − d m ] T . (9)

USBL sensors can be used to transmit information from a transmitter to a receiver. Hence, the relative position information in (9) can be shared by both m and n. Moreover, the r dc information of m and n can be shared by the two robots. Thus, Assumptions (A1) and (A2) hold.
We next calculate the number of communication pings required, so that two neighboring robots m and n can share the relative position information. The bearing measurement in (5) requires a single communication ping from n to m. The range measurement in (6) further requires a return pulse from m to n. Then, the calculated range measurement is sent from n to m. Also, the bearing measurement in (5) is sent from m to n. In total, four communication pings are required, so that two neighboring robots m and n can share the relative position information.
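The recovery of the relative position from the range, the bearing, and the two depth readings can be sketched in Python. This is an illustrative helper under our own naming, with measurement noise omitted:

```python
import math

def relative_position(rng, bearing, depth_n, depth_m):
    """Reconstruct the relative position of n with respect to m from the
    USBL range and bearing measurements and the two depth readings
    (noise-free measurements assumed for this sketch)."""
    dz = depth_n - depth_m                       # vertical offset from depth sensors
    h = math.sqrt(max(rng * rng - dz * dz, 0.0)) # horizontal distance
    return (h * math.cos(bearing), h * math.sin(bearing), dz)
```

For example, for n at (3, 4, 2) and m at the origin, feeding the exact range, bearing, and depths recovers (3, 4, 2).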

Each Robot Uses Active Sonars to Enable Assumption (A3)
Recall that a frontierPoint f (n) of a robot n denotes a freePoint of n which is outside the Sphere dc of any other robot. A robot n can detect an obstacle close to it using its local active sonar sensors [28]. Thus, a robot can calculate the relative locations of its freePoints.
Moreover, n can derive the relative location of its neighbor robot using Assumption (A1). Note that n has N (n) neighbors. Hence, 4 * N (n) communication pings are required, so that n and every robot in N (n) can share the relative position information.
Furthermore, using Assumption (A2), n can access the detect-communicate range of its neighbor in N (n). N (n) communication pings are required for this access. In summary, 5 * N (n) communication pings are required, so that n can access both the relative position and the detect-communicate range of its every neighbor in N (n).
Thus, n can find a freePoint, which is outside the Sphere dc of any other robot. In other words, n is able to calculate the relative location of a frontierPoint in f (n). Hence, Assumption (A3) is derived. Note that 5 * N (n) communication pings are required to calculate the relative location of a frontierPoint in f (n).
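The ping counts above can be made explicit with a small helper (a hypothetical function, shown only to make the arithmetic visible):

```python
def pings_for_frontier_point(num_neighbors):
    """Communication pings for robot n to compute a frontierPoint:
    4 pings per neighbor to share relative position information,
    plus 1 ping per neighbor to access its detect-communicate range,
    i.e., 5 * N(n) pings in total."""
    return 4 * num_neighbors + 1 * num_neighbors
```

A robot with no neighbors (such as the first robot dropped) therefore needs no pings at all.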

Coverage Methods Utilizing Multiple Heterogeneous Robots
This section handles the following problem: Consider surface ships dropping multiple heterogeneous robots. Build 3D sensor networks without sensing holes, while localizing all robots. While each robot moves, obstacle avoidance and network connectivity are preserved.
To solve the above problem, this article proposes the coverage methods in Algorithm 1. Initially, all robots are dropped by a surface ship on the sea surface of the 3D workspace. Then, every robot is controlled one at a time, such that as more robots are located at their designated target positions, the unsensed open space gradually shrinks. The proposed coverage algorithm works by moving a single robot at a time to spread out the sensor network, starting from an initial site and spreading out until the workspace is covered completely.
To build a connected network, a robot n s i moves into an unsensed open space within r dc i distance units. As n s i reaches its designated target position f n s i (see Algorithm 1), it activates its local sonar sensors with range r dc i to sense its surroundings.

Algorithm 1
Coverage methods using a ship p s .
1: Initially, all robots are stored in p s on the sea surface of the 3D workspace;
2: One robot, say n s 1 , is dropped by the ship for the first time;
3: Localize n s 1 using the 3D coordinates of p s ;
4: The robot n s 1 activates its local sonar sensors with range r dc 1 ;
5: FrontierPoints of n s 1 are generated;
6: i = 2;
7: repeat
8: One robot, say n s i , is dropped by the ship;
9: By applying the directed breadth-first search on I s , n s i finds the nearest robot, say m, with a frontierPoint;
10: One frontierPoint on m is chosen as the target position f n s i ;
11: n s i travels along a directed path in I s until arriving at f n s i ;
12: The robot n s i activates its local sonar sensors with range r dc i , and frontierPoints of n s i are generated;
13: i = i + 1;
14: until i == N s or no frontierPoint remains in I s ;

In Algorithm 1, every robot is controlled one at a time. Initially, n s 1 activates its active sonar sensors to sense an obstacle close to the robot. Thus, n s 1 can calculate the relative locations of its freePoints. Thereafter, frontierPoints of n s 1 are generated. Since n s 1 is the robot which is deployed first, it has no neighbors when it generates frontierPoints. Therefore, no communication pings are required to calculate the relative location of a frontierPoint in f (n s 1 ). Whenever a new robot, say n, is dropped by a ship, n moves until arriving at the nearest frontierPoint. The robot n then activates its local sonar sensors to sense its surroundings, and frontierPoints of n are generated accordingly. Note that 5 * N (n) communication pings are required to calculate the relative location of a frontierPoint in f (n). This repeats until every robot is positioned at its designated target position.
Recall that robots are dropped in the following order: n s 1 → n s 2 → · · · → n s N s . Consider the case where n s i−1 has just activated its local sonar sensors to sense its surroundings. The robot n s i finds the nearest robot, say m, with a frontierPoint. The directed breadth-first search can be used for finding m. The time complexity of the directed breadth-first search is O(|E(I s )|), since every edge is explored in the worst case.
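The directed breadth-first search step can be sketched as follows. Here `adj` is a hypothetical adjacency map for I s , and `has_frontier` is assumed to report whether a robot still has a frontierPoint:

```python
from collections import deque

def nearest_robot_with_frontier(adj, start, has_frontier):
    """Directed breadth-first search over the traversable graph I^s:
    return the directed path from `start` to the nearest robot (in hop
    count) that still has a frontierPoint, or None if no such robot
    exists. Runs in O(|E|) in the worst case."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if has_frontier(u):
            # reconstruct the directed path back to start
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None
```

The returned path is exactly the sequence of robots {m 1 → m 2 → · · · } that the newly dropped robot traverses.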
One frontierPoint on m is then chosen as f n s i . Based on the directed breadth-first search, n s i travels along a directed path, say P, in I s to arrive at m. By the definition of I s , P is a safe path for n s i . Furthermore, the length of every straight line segment of P is shorter than r dc i . As n s i arrives at a robot in the path P, n s i can move towards the next robot in P under Assumption (A1). GPS is not required for this local maneuver. In order to traverse the path P, n s i utilizes local interaction with the robots in P. As n s i arrives at a robot, say n s l , in P, n s l lets n s i access the next robot in P. This information sharing is possible under Assumption (A1).
As n s i arrives at the last robot in the path P, n s i can move towards f n s i under Assumption (A3). After n s i arrives at f n s i , n s i activates its local sonar sensors to sense its surroundings. Once n s i activates its local sonar sensors, several frontierPoints inside the Sphere dc of n s i are deleted. This deletion is possible based on local interaction of n s i . We explain the deletion process in detail. Suppose that a point in f (n s j ) is inside the Sphere dc of n s i . Since the relative distance between n s j and a point in f (n s j ) is r dc j , n s j is a neighbor of n s i . Under Assumption (A1), n s i calculates the relative location of n s j . The relative vector from n s i to a point in f (n s j ) is then derived by adding the following two vectors, which are available based on Assumptions (A1) and (A3): (1) the vector from n s i to n s j , and (2) the vector from n s j to the point in f (n s j ).
As n s i activates its local sonar sensors, several frontierPoints inside the Sphere dc of n s i are deleted. Recall that 5 * N (n s i ) communication pings are required to calculate the relative location of a frontierPoint in f (n s i ). Then, n s i broadcasts the deletion of frontierPoints to every other robot which can communicate with n s i using multi-hop communication. Here, multi-hop communication is feasible using USBL of each robot.
This broadcast of n s i requires |V(I s )| communication pings, since there are |V(I s )| nodes in I s . Using Algorithm 1, |V(I s )| increases from 1 to ∑ S s=1 N s . Therefore, the total number of broadcast pings is 1 + 2 + · · · + ∑ S s=1 N s . Also, n s i transmits a signal to the ship via multi-hop communication using USBL, so that the ship can deploy the next robot n s i+1 . n s i+1 finds the nearest frontierPoint. Then, n s i+1 travels along I s until arriving at the frontierPoint. This repeats until i == N s in Algorithm 1.
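The broadcast-ping total above can be made explicit with a small helper (hypothetical, for illustration only):

```python
def total_broadcast_pings(n_total):
    """As |V(I^s)| grows from 1 to the total robot count, each deletion
    broadcast costs |V(I^s)| pings, so the total over all deployments is
    1 + 2 + ... + n_total = n_total * (n_total + 1) / 2."""
    return sum(range(1, n_total + 1))
```

The closed form shows that the broadcast cost grows quadratically in the total number of deployed robots.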

Analysis
Algorithm 1 ends when i == N s or no frontierPoint remains in I s . If i == N s , then there is no remaining robot in the ship p s . Note that all sensing holes disappear only when ∑ S s=1 N s is sufficiently large. Based on simulations, Section 4.1 presents how to estimate the number of robots required for covering a 3D workspace, considering the case where an obstacle inside the workspace is not known a priori.
In Algorithm 1, a robot n s i travels along a directed path in I s until arriving at f n s i . The next theorem proves that n s i can find at least one directed path from n s 1 to f n s i . Recall that n s 1 is the first robot dropped by the ship p s .

Theorem 2.
In Algorithm 1, n s i can find at least one directed path in I s from n s 1 to f n s i . While n s i travels along the path, it avoids collision and stays connected to n s 1 , and all robots along the path are localized.

Proof. n s i travels along the directed path in I s until arriving at f n s i . Let {m 1 → m 2 → · · · → m end } define the order of robots along the path until arriving at f n s i . Here, m 1 is n s 1 , since n s 1 is the first robot dropped by the ship p s . After n s i arrives at m j (j ≤ end − 1), n s i moves toward m j+1 . In addition, after n s i arrives at m end , n s i moves toward f n s i . While n s i travels along an edge from m k to m k+1 , n s i avoids collision, since I s is safe for a robot. In addition, n s i is connected to m k by the definition of I s . Furthermore, m k is connected to m 1 = n s 1 , since the directed path is found using the directed breadth-first search. Hence, n s i is connected to n s 1 during the maneuver.

Recall that every frontierPoint is a freePoint. Thus, while n s i travels along an edge from m end to f n s i , n s i avoids collision. Furthermore, n s i is connected to m end . Utilizing a similar argument as in the previous paragraph, n s i is connected to n s 1 during the maneuver.

We next discuss how to localize all robots {m 1 → m 2 → · · · → m end } using Assumption (A1). m 1 is n s 1 , since n s 1 is the first robot dropped by the ship p s . Thus, we localize m 1 using n s 1 . Also, the 3D coordinates of m j+1 are the sum of the following two coordinates: (1) the 3D coordinates of m j , and (2) the relative position of m j+1 with respect to m j .
Here, the relative position of m j+1 with respect to m j is available using Assumption (A1). By induction, we localize m j for all j ≤ end.
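The chained localization argument can be sketched as follows; `relative_steps` is a hypothetical list of relative positions of m j+1 with respect to m j , obtained under Assumption (A1):

```python
def localize_along_path(anchor, relative_steps):
    """Localize robots m_1 -> m_2 -> ... by chaining relative positions:
    the coordinates of m_{j+1} are the coordinates of m_j plus the
    relative position of m_{j+1} with respect to m_j. `anchor` is the
    known position of m_1 (i.e., the first dropped robot n^s_1)."""
    positions = [anchor]
    for dx, dy, dz in relative_steps:
        x, y, z = positions[-1]
        positions.append((x + dx, y + dy, z + dz))
    return positions
```

Each robot along the path is thus localized from the single GPS-anchored starting position, with no GPS needed underwater.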
We next address how to localize f n s i . The 3D coordinates of f n s i are obtained by adding the relative position of f n s i with respect to m end to the 3D coordinates of m end . Here, the relative position of f n s i with respect to m end is available using Assumption (A3). This theorem is proved.
Theorem 3.
After Algorithm 1 is done, I s contains all robots dropped by the ship p s , and all robots in I s are localized.

Proof. Utilizing Theorem 2, n s i is connected to n s 1 while n s i travels along the directed path to f n s i . As n s i arrives at f n s i , n s i is connected to n s 1 . Note that n s 1 does not move at all under Algorithm 1.
By induction, all robots n s 2 , n s 3 , . . . , n s i−1 are connected to n s 1 . As i varies from 1 to N s under Algorithm 1, all robots dropped by p s are located at their target positions, and I s contains all these robots.
We next prove that all robots in I s are localized. To the contrary, suppose that a robot, say n s c , is not localized. Since I s contains all robots dropped by p s , there exists a directed path from n s 1 to n s c by Theorem 1. Using similar arguments as in the last two paragraphs of the proof of Theorem 2, all robots along the path are localized. Thus, n s c is localized, which is a contradiction. Hence, all robots in I s are localized.
Theorem 4 proves that if one cannot find a frontierSurface, then the obstacle-free space is covered by all Sphere dc s completely, such that no coverage hole exists.

Theorem 4.
If we cannot find a frontierSurface, then all Sphere dc s cover the obstacle-free space completely, such that no coverage hole exists.
Proof. By the transposition rule, we prove the following statement: if an obstacle-free space that is outside every Sphere dc exists, then we can find a frontierSurface.
Suppose that an obstacle-free space that is outside every Sphere dc exists. Let O indicate this uncovered obstacle-free space. Then, at least one robot, say n s i , has a frontierSurface intersecting O. Using Theorem 3, I s contains n s i under Algorithm 1. Hence, we can find this frontierSurface using I s .
Theorem 4 proved that once the 3D network is completely generated using Algorithm 1, then no sensing hole remains.
In Algorithm 1, n s i moves along the directed path in I s until meeting f n s i . Let {m 1 → m 2 → · · · → m end } represent the robots on the path. Here, m 1 is n s 1 . After n s i arrives at m j (j ≤ end − 1), n s i heads towards m j+1 . In addition, after n s i arrives at m end , n s i heads towards f n s i .
A robot n s i needs to follow the path {m 1 → m 2 → · · · → m end }. In practice, n s i can be an Autonomous Underwater Vehicle (AUV) with acceleration constraints. An AUV n s i can follow the path using path following controls in [29][30][31][32]. Note that designing path following controls for a robot is not within the scope of this paper.

Handling Broken Robot or Network Delay
After the network is completely built, a robot may break down. In this case, sensing holes may appear due to the broken robot, and a frontierPoint of a nearby robot, say a n , can be detected using its local sensors. Suppose that a new robot, say n N , is deployed from a ship to replace the broken robot. Based on Assumption (A3), each frontierPoint of a n can be located using the local sensors of a n . One frontierPoint of a n is then chosen as the target position, say f n N . Based on I s , n N travels along a directed path to reach the target position. In this way, n N replaces the broken robot.
Algorithm 1 does not require synchronization among the robots. Suppose that a robot, say n s j , has reached its target position. It then transmits a signal to the ship via multi-hop communication using USBL, so that the ship can deploy the next robot, say n s i . It is inevitable that network delay occurs in multi-hop communications.
This article assumes that a robot can communicate with other robots via multi-hop communication using USBL. However, using multi-hop communication, time delay may occur during data transfer.
Algorithm 1 is robust to network delay due to data transfer. Suppose that it takes δ seconds until n s i finds the directed path to the nearest frontierPoint based on I s . The robot n s i begins moving only after it finds a directed path to that frontierPoint. In addition, no robot moves while n s i moves. Hence, Algorithm 1 works regardless of how long the delay δ is.
Note that while a robot n s i travels along the path to the nearest frontierPoint, all other robots stand still. Hence, the speed of a moving robot does not affect Algorithm 1. A robot that moves slower than the others may simply take longer to reach its associated frontierPoint; since only a single robot moves at a time, a robot's speed does not disturb the process of Algorithm 1.
It can be argued that the proposed coverage methods run slowly, since a single robot moves at a time. However, the coverage process can be sped up by increasing the speed of each robot, which is possible because the speed of a moving robot does not affect Algorithm 1. Moreover, the network building process can be further accelerated by letting multiple surface ships drop multiple robots simultaneously.

MATLAB Simulation Results
We verify the performance of Algorithm 1 with MATLAB simulations. The sampling interval is dt = 1 s. We use Q = 36 * 36, and consider a given workspace with a known size [0, 200] * [0, 200] * [0, 200] in meters, i.e., the workspace has [0, 200] as its x-coordinate range, y-coordinate range, and z-coordinate range. Note that obstacles are unknown a priori.
In Algorithm 1, a robot n s i moves along a directed path to reach its target position. Designing path following controls is not within the scope of this paper. In simulations, the dynamics of n s i are given by

n s i (k + 1) = n s i (k) + u s i (k) * dt, (10)

where dt presents the sampling interval in discrete-time systems, and u s i (k) presents the velocity of n s i at sample-step k. The dynamic model in (10) is commonly used in multi-robot systems [10,33,34]. Let U s i = max k ||u s i (k)|| denote the maximum speed of n s i . Considering heterogeneous robots, U s i ≠ U s j is possible. The controllers for n s i are designed in discrete-time systems. Let W indicate the next waypoint that n s i will encounter as it travels along the directed path to its target position f n s i ; with a slight abuse of notation, W also indicates the 3D coordinates of this waypoint.
Let g = W − n s i (k). The robot n s i is controlled to move towards W. If ||g|| > U s i * dt, then the velocity in (10) is set as

u s i (k) = U s i * g / ||g||.

If ||g|| ≤ U s i * dt and W ≠ f n s i , then n s i heads towards the next waypoint after W. If W = f n s i and ||g|| ≤ U s i * dt, then the velocity in (10) is set as

u s i (k) = g / dt.

This leads to n s i (k + 1) = W = f n s i , i.e., n s i reaches its target position at the next sample-step.

We consider two ships in total, i.e., S = 2. Using the simulation in the workspace without obstacles (Section 4.1), we can estimate the number of required robots as 32. Hence, we conjecture that 5, 10, or 20 robots are not sufficient to cover the workspace. We hence use 50 robots in the Simulation section.
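The discrete-time waypoint controller described above can be sketched as follows. This is a minimal Python sketch (the paper's simulations use MATLAB); the function name and the list-of-waypoints representation are assumptions for illustration.

```python
import numpy as np

def waypoint_step(p, waypoints, U, dt):
    """One discrete-time update p(k+1) = p(k) + u(k)*dt, where u(k) is
    chosen as in the text: saturated at max speed U while far from the
    current waypoint W, and u = (W - p)/dt when the final waypoint
    (the target position) is within reach.  Returns the new position
    and the remaining waypoints."""
    while waypoints:
        g = waypoints[0] - p
        dist = np.linalg.norm(g)
        if dist > U * dt:
            # far from W: move at maximum speed straight towards it
            return p + U * dt * g / dist, waypoints
        if len(waypoints) > 1:
            # intermediate waypoint reached: retarget the next one
            waypoints = waypoints[1:]
        else:
            # W is the target position f: land exactly on it
            return waypoints[0], []
    return p, []
```

Because the final update uses u = g / dt, the robot lands exactly on its target position in one sample-step once the target is within U * dt.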
We consider heterogeneous robots as follows. Each robot may move with a distinct speed, while carrying distinct USBL sensors. In the case where i mod 3 is zero, r dc i is 100 m and the maximum speed is U s i = 5 m/s. In the case where i mod 3 is one, r dc i is 50 m and the maximum speed is U s i = 3 m/s. In the case where i mod 3 is two, r dc i is 75 m and the maximum speed is U s i = 2 m/s. In practice, there exist sonar sensing noise and external disturbances, so a robot may not be located at its target position accurately. As we localize n s i using the 3D coordinates of f n s i (Algorithm 1), we added Gaussian noise with mean 0 and standard deviation 1 m to each element of the 3D coordinates of f n s i . In this way, n s i is not located accurately at f n s i .

Figure 2 shows the final sensor configuration after the robots are deployed to cover the 3D workspace with a known size 200 * 200 * 200 in meters. In total, 966 s of simulated time are spent to cover the workspace completely, while the MATLAB simulation itself takes 7 s to build the complete network without sensing holes. Among 50 robots, 18 robots move to cover the workspace. Yellow diamonds indicate robot positions dropped from p 1 , and blue diamonds indicate robot positions dropped from p 2 . The position of a ship is marked with a circle, and spheres indicate obstacles in the environment. Considering the scenario in Figure 2, Figure 3 shows the sensor network that is built utilizing our 3D coverage methods. In Figure 3, the path of a robot that is dropped from the ship at (30, 30, 0) is marked with yellow asterisks, and the path of a robot that is dropped from the ship at (150, 150, 0) is marked with blue asterisks.
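The heterogeneous parameter assignment by i mod 3 and the Gaussian localization noise described above can be sketched as follows. This is a minimal Python sketch (the paper's simulations use MATLAB); the function names are assumptions for illustration.

```python
import random

def robot_params(i):
    """Per-robot USBL communication range r_dc (in meters) and maximum
    speed U (in m/s), assigned by i mod 3 as in the simulation setup."""
    return {0: (100.0, 5.0), 1: (50.0, 3.0), 2: (75.0, 2.0)}[i % 3]

def noisy_position(target, sigma=1.0):
    """Localized position of a robot: the 3D coordinates of its
    frontierPoint target, perturbed element-wise by zero-mean Gaussian
    noise with standard deviation sigma meters."""
    return [c + random.gauss(0.0, sigma) for c in target]
```

The mod-3 assignment interleaves the three robot types in the deployment order, so each type is dropped roughly equally often.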

Estimating the Number of Robots Required
Next, we address how to estimate the number of robots required for covering a 3D workspace when obstacles inside the workspace are not known a priori. To estimate this number, we simulate Algorithm 1 with no obstacles in the environment, since the unknown obstacles cannot be accounted for in advance. Figure 4 shows the sensor configuration as Algorithm 1 is used to simulate the robot maneuvers for covering the 3D workspace with a known size 200 * 200 * 200 in meters without obstacles. In Figure 4, 32 robots are dropped in total, and the position of each robot is depicted with a green circle. The path of a robot that is dropped from the ship at (30, 30, 0) is marked with yellow asterisks, and the path of a robot that is dropped from the ship at (150, 150, 0) is marked with blue asterisks. Using MATLAB simulations, 14 s are spent to build the complete network without sensing holes. From this simulation in the workspace without obstacles, we estimate the number of required robots as 32.

Discussions
In Algorithm 1, a robot moves along a directed path to reach its target position. A robot, which can be an AUV with acceleration constraints, can follow the path using path following controls in [29][30][31][32]. As a robot moves along a path, it must avoid collision with moving obstacles, such as other vehicles or animals. Various reactive collision avoidance methods [35][36][37] can be integrated with the proposed path planner, so that a robot can avoid collision with abrupt obstacles.
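One simple reactive scheme that could play this role is a repulsive-velocity correction. The sketch below is a generic illustration under our own assumptions, not the specific methods of [35-37]: the nominal path-following velocity is augmented by a repulsion term for every detected obstacle closer than a safety radius.

```python
import numpy as np

def avoid_velocity(u, p, obstacles, r_safe, gain=1.0):
    """Reactive avoidance sketch: add to the nominal velocity u a
    repulsive term pointing away from each obstacle within r_safe of
    the robot position p.  The repulsion grows as the robot nears an
    obstacle and vanishes at the safety radius."""
    u = np.asarray(u, dtype=float)
    p = np.asarray(p, dtype=float)
    for ob in obstacles:
        d = p - np.asarray(ob, dtype=float)
        dist = np.linalg.norm(d)
        if 0.0 < dist < r_safe:
            u = u + gain * (1.0 / dist - 1.0 / r_safe) * d / dist
    return u
```

A real integration would also re-plan the directed path once the abrupt obstacle has been passed, so that the robot still reaches its frontierPoint.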

Conclusions
As a method to build underwater sensor networks in a cluttered underwater environment, this article considers surface ships dropping robots into the sea. Multiple robots move autonomously to build 3D sensor networks, while avoiding collision with unknown obstacles. The 3D coverage algorithm works by deploying a single robot at a time to spread out networks, until the open space is fully covered. Moreover, this article discusses how to handle broken robots, as well as how to estimate the number of robots required, considering the case where an obstacle inside the workspace is not known a priori.
Utilizing MATLAB simulations, we demonstrate the effectiveness of the proposed network construction methods. As future work, we will perform experiments with real underwater robots to verify the proposed methods.