Lidar-Based Navigation of Subterranean Environments Using Bio-Inspired Wide-Field Integration of Nearness

Navigating unknown environments is an ongoing challenge in robotics. Processing large amounts of sensor data to maintain localization, maps of the environment, and sensible paths can result in high compute loads and lower maximum vehicle speeds. This paper presents a bio-inspired algorithm for efficiently processing depth measurements to achieve fast navigation of unknown subterranean environments. Animals developed efficient sensorimotor convergence approaches, allowing for rapid processing of large numbers of spatially distributed measurements into signals relevant for different behavioral responses necessary to their survival. Using a spatial inner-product to model this sensorimotor convergence principle, environmentally relative states critical to navigation are extracted from spatially distributed depth measurements using derived weighting functions. These states are then applied as feedback to control a simulated quadrotor platform, enabling autonomous navigation in subterranean environments. The resulting outer-loop velocity controller is demonstrated in both a generalized subterranean environment, represented by an infinite cylinder, and nongeneralized environments like tunnels and caves.


Introduction
There is significant interest in using small Unmanned Aerial Systems (sUAS) for autonomous tasks, ranging from rapid situational awareness of hazardous environments to search and rescue missions. In each application, the system's ability to sense the environment around itself is critical to autonomous navigation, and generating control commands in a fast and efficient manner is still an ongoing challenge. This challenge is even more difficult in subterranean environments, where degraded sensing, diverse unstructured terrain, and a lack of Global Positioning System (GPS) signals can cause problems for traditional navigation methods. More detail on the challenges and proposed solutions to autonomous subterranean navigation is presented in [1][2][3][4][5][6][7][8][9][10][11][12]. Generally, proposed methods for subterranean navigation include some mix of the following subsystems: sensing and perception, state-estimation, map generation, planning, and guidance. These subsystems and processes can require multiple sensing modalities and significant computing resources, which ultimately leads to undesired size, weight, and power (SWaP) burdens on the vehicle. Additionally, all of these processes are directly affected by state-estimation accuracy; if the state estimates are noisy or experience significant drift over time, the quality of the map suffers, which in turn degrades the system's ability to navigate. To overcome these challenges and limitations, novel sensors and sensor processing methods need to be developed to continue to push forward robot autonomy.
For inspiration, researchers can look to countless examples in nature of elegant solutions to complex perception and navigation problems. Over hundreds of millions of years of evolution, animals developed efficient sensorimotor systems for processing spatially distributed sensor measurements. As engineers, we can learn from these remarkable biological systems and use the underlying principles to develop unique solutions to the different challenges of robot autonomy.
One example is the mechanosensory lateral line system, which allows fish to navigate their surroundings by sensing water flow. The lateral line system is composed of two different types of spatially distributed sensors: superficial neuromasts (SNs) and canal neuromasts (CNs). The SNs are located on the body of the fish and sense local flow velocity, while the CNs are located in subepidermal canals and sense flow acceleration [13,14]. Specialized neural pathways enable the fish to achieve different behaviors like rheotaxis [15,16], schooling [17], obstacle avoidance [13], and prey detection [18,19]. By mimicking the mechanosensory lateral line system, reflexive obstacle avoidance can be achieved through the spatial decomposition of electric field measurements [20,21]. Another example is the spider mechanosensory system, composed of hair and strain sensors that cover the abdomen and legs [22]. By processing many different spatially distributed measurements from the touch-sensitive hairs, spiders are better equipped to detect and localize prey [23,24]. These hairs also sense air flow, allowing spiders to localize airborne prey and estimate ego-motion [25][26][27]. A third example of particular interest is the insect visuomotor system, which enables fast flight and obstacle avoidance. The insect compound eye is composed of thousands of ommatidia, or light receptors. Insects rely on optic flow, the characteristic patterns of visual motion that are excited as they move [28][29][30]. By processing the changing patterns of light sensed by the ommatidia, insects are able to generate reactive motor commands, allowing them to make rapid adjustments to their flight and avoid obstacles [31]. Additional analysis of the insect visuomotor system shows how specialized tangential cells process distributed optic flow measurements, inspiring the wide-field integration framework presented in [32][33][34].
Through observation of insects and other flying animals in different environments, optic flow-based navigation methods were developed [35][36][37][38][39][40][41][42]. Small-obstacle avoidance algorithms based on optic flow measurements are demonstrated onboard a multirotor platform in [43,44].
In each of these examples, hundreds to thousands of spatially distributed sensor measurements are processed simultaneously to produce different behavioral responses. Past studies focused on extracting ego-motion estimates and/or relative proximity to obstacles from optic flow or electric field measurements, whereas the main sensing mode for this study is relative depth observed by a spherical LiDAR sensor model. The proposed controller is reactive, meaning that it generates control commands directly from the raw sensor measurements. Other reactive control approaches can be divided into the following subgroups: artificial potential fields, admissible-gap navigation, and laser-based reactive control approaches. In artificial potential field approaches, the known, or sensed, environment is converted to a field of forces [45][46][47][48]. The admissible-gap approach uses depth scans to generate a nearness diagram and then identify gaps to traverse [49]. The approach is strictly planar; however, improvements to the initial algorithm were made, including smooth nearness-diagram navigation [50], closest-gap navigation [51], and tangential-gap navigation [52]. This approach was recently implemented onboard a small multirotor platform and tested in both indoor and outdoor environments [53]. Other noteworthy LiDAR-based approaches are presented in [54], where depth scans are used to plan a path through the known environment using an improved rapidly-exploring random tree algorithm (RRT*), and [8], where depth scans are used to generate a centering response for graph navigation.
In this work, a bioinspired wide-field integration (WFI) framework for subterranean navigation is presented where the primary sensing modality is measured depth. Spatially distributed depth measurements are a rich source of information relevant to navigation. Using the spatial inner-product, a WFI method inspired by the Lobula Plate Tangential Cells (LPTCs) of the insect visuomotor system, environment relative states can be extracted from depth scans and applied as feedback to achieve a 3D centering response in subterranean environments. With careful selection of 3D weighting shape functions, or sensitivity shapes, control commands can be computed directly from depth scans in a single operation. The control algorithm is demonstrated onboard a simulated quadrotor platform in both a generalized cylinder environment and other subterranean environments with more diverse topology. The main advantage to this approach is its simplicity: it does not require state-estimates, local or global maps, or planned paths, and control commands are generated directly from each successive depth scan. By integrating across thousands of depth measurements, the algorithm is robust to sensor noise. The algorithm is also robust to changes in the environment. In the development of the algorithm the local environment is approximated as an infinite cylinder, however, navigation in more diverse subterranean environments is demonstrated.
This paper is outlined as follows. In Section 2, the development of the bio-inspired nearness controller is presented, as well as a description of the platform and simulated testing environment. Following the presentation of the algorithm, results from autonomous flight tests are presented in Section 3. The advantages and limitations of the controller are discussed in Section 4.

System and Sensing Models
For the purpose of testing the algorithm, the DARPA SubT Simulator is used (Available online: https://www.subtchallenge.com (accessed on 17 January 2022)). The DARPA SubT Simulator provides access to a number of different simulated subterranean environments that fall into three main subdomains: tunnel, urban, and cave. Each subdomain provides unique navigation challenges to autonomous platforms, such as the tight and narrow corridors of the tunnel worlds or the nonuniform caverns present in the cave worlds. Examples of these environments are shown in Figure 1. Initial formulation of the controller assumes an infinite cylinder as the generalized environment, which can also be loaded into the DARPA SubT Simulator as a custom world. The SubT virtual code repository and resources on how to use it are hosted on GitHub (Available online: https://github.com/osrf/subt/wiki (accessed on 17 January 2022)). In addition to simulation worlds, the DARPA SubT Simulator provides access to robotic platforms developed by DARPA SubT Challenge Virtual competition teams, complete with full sensor suites. Users can customize different platforms, like multirotor aerial systems or quadrupedal ground vehicles, with sensors that suit the needs of their algorithms. For the purpose of this study, the standard quadrotor "X3" model is used with outer-loop velocity control. Navigation of the different environments is achieved by setting a fixed forward speed and generating lateral and vertical velocity commands as well as heading rate commands. The inner-loop attitude control is handled by the DARPA SubT Simulator quadrotor controller, which converts velocity commands to individual motor speeds of the simulated vehicle for attitude control and stabilization. A depth sensing model based on existing LiDAR sensors is used to measure depth to discrete points in the environment.
Current LiDAR-based sensors have limited fields of view; however, for the purposes of this study, a LiDAR sensor with a full spherical field of view is used. Depth measurements d(θ_v, φ_v, x) are discretized according to the body-referred viewing angles θ_v and φ_v, where θ_v ∈ [0, 2π] is the inclination angle and φ_v ∈ [−π, π] is the azimuthal angle, and are a function of the environment relative system states x, which are described in greater detail in Section 2.2.1. In this case, 64 equally spaced inclination angles and 360 equally spaced azimuthal angles are used, resulting in just over 23,000 depth measurements per scan. In the Robot Operating System (ROS), the incoming depth scans are represented as collections of (x, y, z) points, and so each depth measurement is made up of 3 floating point numbers. Depth scans are reported at a rate of 20 Hz, and the bandwidth of the sensor data is estimated to be around 5.27 MB/s. The LiDAR model has a maximum sensing distance d_max = 100 m and a minimum sensing distance d_min = 0.05 m. Additive Gaussian white noise is added to each depth measurement, with a nominal standard deviation of 0.03 m. Measured depth is hypothetically unbounded, and so depth is converted to nearness, as shown in Equation (1):

μ(θ_v, φ_v, x) = 1/d(θ_v, φ_v, x) (1)
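As a sketch of this conversion (the function and variable names below are illustrative, not from the paper), clipping depth to the sensor limits before inverting keeps nearness bounded:

```python
import numpy as np

def depth_to_nearness(depth, d_min=0.05, d_max=100.0):
    """Convert depth measurements to nearness (inverse depth).

    Clipping to the sensor limits [d_min, d_max] bounds nearness to
    [1/d_max, 1/d_min], avoiding the unbounded values raw depth allows.
    """
    d = np.clip(np.asarray(depth, dtype=float), d_min, d_max)
    return 1.0 / d

# one simulated scan: 64 inclination x 360 azimuth samples, all 10 m away
scan = np.full((64, 360), 10.0)
mu = depth_to_nearness(scan)
```

Out-of-range returns (e.g., no return at all, often reported as 0 or infinity) are mapped to the nearest sensing limit rather than propagating unbounded values into the integration.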

Bio-Inspired Nearness Control
The goal is to develop an outer-loop velocity controller that produces a 3D centering response in subterranean environments using spatially distributed nearness measurements, or nearness scans. This is achieved by extracting environment relative information from the nearness scan using the spatial inner-product. First, the local environment is approximated as an infinite cylinder, which appropriately represents a subterranean environment where large obstacles can exist above, below, and to the sides of the vehicle. When navigating unknown environments, it is often desirable to fly as far away from large obstacles, such as walls, floors, and ceilings, as possible. In subterranean environments, this can be achieved by driving the system to the middle of the local environment. By selecting appropriate weighting shapes for the spatial inner-product, these environment relative states can be extracted from the nearness scans of the onboard sensor. Linear feedback control is used to drive the system away from obstacles towards the center of the cylinder and point down the centerline. The general process is shown in Figure 2. Section 2.2.1 presents the analytic function for nearness in the approximated local environment. In Section 2.2.2, the process of extracting the environment relative states through WFI is presented in detail. Design of the linear feedback controller is presented in Section 2.2.3.

Parameterization of the Environment
It is useful to approximate the local environment around the vehicle as some generalized shape with a known analytic representation. This enables the determination of the environment relative states to use for feedback and also allows an initial set of feedback gains to be derived. As stated above, a cylinder is selected as the generalized local environment, as it simulates the presence of obstacles above, below, and to the sides of the vehicle. Figure 3 shows different views of a quadrotor model flying through a simple cylinder environment. The controller goal is to drive the system towards the centerline of the cylinder, and the environment relative states that can be used for feedback to achieve this are the lateral, vertical, and heading displacements from the cylinder centerline, defined as y, z, and ψ. As the system moves through the environment, the expected shape of the measured nearness can be determined by intersecting a line in the body-referred spherical coordinate system with the surface of a cylinder (see Appendix A for a more detailed derivation). One simplifying assumption is made: the roll and pitch angles of the vehicle are assumed to be negligibly small. The analytic representation of nearness as a function of the cylinder radius r, the body-referred viewing angles (θ_v, φ_v), and the environment relative states x = [y, z, ψ]^T is shown in the following equation:

μ(θ_v, φ_v, x) = (s²(φ_v + ψ) s²(θ_v) + c²(θ_v)) / (−(y s(φ_v + ψ) s(θ_v) + z c(θ_v)) + sqrt(b s²(φ_v + ψ) s²(θ_v) + a c²(θ_v) + 2 y z s(φ_v + ψ) s(θ_v) c(θ_v))) (2)

where sin(·) and cos(·) are abbreviated as s(·) and c(·), a = (r − y)(r + y), and b = (r − z)(r + z). While this is a complicated nonlinear function, estimates of the environment relative states can be extracted using WFI.
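Under the stated assumptions, the line-cylinder intersection of Appendix A gives a direct way to generate synthetic nearness scans for the generalized environment. The sketch below (function and variable names are ours) evaluates the positive root of the resulting quadratic:

```python
import numpy as np

def cylinder_nearness(theta_v, phi_v, r, y=0.0, z=0.0, psi=0.0):
    """Nearness to an infinite cylinder of radius r for a ray with
    inclination theta_v and azimuth phi_v, seen from a vehicle displaced
    laterally by y, vertically by z, and yawed by psi."""
    # quadratic coefficients of the line-cylinder intersection (Appendix A)
    a = np.sin(phi_v + psi) ** 2 * np.sin(theta_v) ** 2 + np.cos(theta_v) ** 2
    b = y * np.sin(phi_v + psi) * np.sin(theta_v) + z * np.cos(theta_v)
    c = y ** 2 + z ** 2 - r ** 2
    d = (-b + np.sqrt(b ** 2 - a * c)) / a  # positive root = range along the ray
    return 1.0 / d

# centered in a 2 m radius cylinder, looking sideways: range is exactly r
mu_side = cylinder_nearness(np.pi / 2, np.pi / 2, r=2.0)
```

Rays parallel to the cylinder axis (a = 0) never strike the wall, so depth is unbounded and nearness tends to zero; a practical implementation would guard that division.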

Wide-Field Integration
In the visuomotor system of the fly, tangential cells are large motion-sensitive neurons that respond to different flow patterns. Different tangential cells are sensitive to different stimulus patterns, and the integrated output is effectively a comparison between the cell's sensitivity pattern and the measured stimulus. The outputs of these cells are pooled to produce different motor responses. This process can be represented mathematically by the spatial inner-product, shown in Equation (3), which compares two spherical nearness patterns:

⟨μ, F_i⟩ = ∫_Ω μ(θ_v, φ_v, x) F_i(θ_v, φ_v) dΩ (3)
Here, dΩ = sin(θ_v) dθ_v dφ_v is the solid angle element of the sphere. By projecting nearness onto different weighting shape functions F_i, the dependence on the viewing angles θ_v and φ_v is removed. Real spherical harmonics, shown in Figure 4, are used as the basis set for the sensitivity shapes and are parameterized by the viewing angles θ_v and φ_v according to:

Y_l^m(θ_v, φ_v) = N_l^m P_l^m(cos θ_v) cos(m φ_v) for m ≥ 0, and Y_l^m(θ_v, φ_v) = N_l^|m| P_l^|m|(cos θ_v) sin(|m| φ_v) for m < 0 (4)

where P_l^m(cos θ_v) is the associated Legendre function, N_l^m is a normalization constant, and l ∈ Z, m ∈ Z, l ≥ 0, |m| ≤ l. With spherical harmonics as the weighting shapes F_i, Equation (3) can be rewritten as:

p_i(x) = ⟨μ(θ_v, φ_v, x), Y_i(θ_v, φ_v)⟩ (5)

The resulting outputs p_i(x) from Equation (5) are nonlinear functions of the state and can be linearized for small motions around x_0 to produce a set of linear equations of the form p = Cx. This observation model can be inverted to produce a linear relationship between the projections and the environment relative states, x̂ = C† p. The C† matrix relating the projections to the states for the first nine spherical harmonics is given in Equation (6). The spatial inner-product is a linear operator, and so the C† matrix, which relates projections to states, can be moved inside the inner-product, resulting in the following:

x̂_i = ⟨μ, Σ_{j=1}^{M} C†_{ij} F_j⟩ (7)

where M is the number of weighting shapes. The second term in the inner-product can be interpreted as the weighting shape for extracting the i'th environment relative state:

F_x̂_i = Σ_{j=1}^{M} C†_{ij} F_j (8)

For the first nine spherical harmonics, a single projection correlates with each of the three environment relative states. The closed forms of the state sensitivity shapes are shown in Equations (9)-(11), and a visual representation is shown in Figure 5.
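On a discrete scan, the spatial inner-product becomes a weighted sum over the viewing-angle grid. The sketch below (grid construction and names are ours, assuming a midpoint grid over the sphere at the sensor model's resolution) shows one projection:

```python
import numpy as np

# midpoint grid of viewing angles: 64 inclinations x 360 azimuths
n_th, n_ph = 64, 360
dth, dph = np.pi / n_th, 2.0 * np.pi / n_ph
theta = (np.arange(n_th) + 0.5) * dth            # inclination in (0, pi)
phi = -np.pi + (np.arange(n_ph) + 0.5) * dph     # azimuth in (-pi, pi)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dOmega = np.sin(TH) * dth * dph                  # solid-angle element per sample

def project(mu, F):
    """Spatial inner product <mu, F> approximated as a discrete sum."""
    return np.sum(mu * F * dOmega)

# an l=1, m=1 style harmonic, sensitive to fore/aft asymmetry in the scan
F_lat = np.sin(TH) * np.cos(PH)
p_const = project(np.ones((n_th, n_ph)), F_lat)  # symmetric scan projects to ~0
```

A spatially uniform nearness pattern projects to (approximately) zero on any odd harmonic, which is exactly why these projections isolate displacement-induced asymmetries.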

Feedback Control Design
Quadrotor platforms are able to move in any direction, subject to dynamic constraints. At low speeds, the linear dynamics models for the forward, lateral, vertical, and heading states can be decoupled and represented by the following dynamics models:

u̇ = X_u u + X_δFwd δ_Fwd (12)

[v̇; ṗ] = [Y_v, Y_p; L_v, L_p] [v; p] + [Y_δLat; L_δLat] δ_Lat (13)

ẇ = Z_w w + Z_δVert δ_Vert (14)

[ψ̇; ṙ] = [0, 1; 0, N_r] [ψ; r] + [0; N_δHead] δ_Head (15)

Equation (12) shows the forward velocity dynamics of the vehicle, where X_u is the forward, or longitudinal, stability derivative and X_δFwd is the forward control derivative. A fixed, or regulated, forward speed is desired, and no forward position states are required. Equation (13) shows the lateral dynamics model of the system, where Y_v, Y_p, L_v, and L_p are the lateral stability derivatives and Y_δLat and L_δLat are the lateral control derivatives. Similarly, Equation (14) shows the vertical dynamics model, where Z_w is the vertical stability derivative and Z_δVert is the vertical control derivative. Last is Equation (15), which shows the heading dynamics of the system, where N_r is the heading stability derivative and N_δHead is the heading control derivative. The inputs to the system are the forward, lateral, vertical, and heading control efforts δ_Fwd, δ_Lat, δ_Vert, and δ_Head. The table in Appendix B lists the values used for all the stability derivative constant terms, which were estimated for a similar quadrotor platform in [55].
The control efforts are determined using a simple feedback control scheme where the desired states are compared to the estimated states:

δ_Lat = K_v (y_des − ŷ) (16)

δ_Vert = K_w (z_des − ẑ) (17)

δ_Head = K_r (ψ_des − ψ̂) (18)

where K_v, K_w, and K_r are the lateral, vertical, and heading controller gains. A fixed forward speed is used directly as an input to the controller. The closed-loop dynamics equations are shown in Equations (19)-(21). Stability of the controlled system can be determined using simple linear stability analysis. In each case, with a selection of K_i > 0, the decoupled linearized systems are stable for small deviations from the equilibrium.
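The stability claim can be checked numerically for the decoupled heading loop. The derivative values below are placeholders for illustration, not the values from [55]:

```python
import numpy as np

# placeholder stability/control derivatives (the paper takes its values from [55])
N_r, N_dHead = -0.5, 2.0
K_r = 1.0  # heading gain, K_r > 0

# heading states [psi, r]; feedback delta_Head = -K_r * psi gives the
# closed-loop state matrix A = [[0, 1], [-N_dHead*K_r, N_r]]
A_cl = np.array([[0.0, 1.0],
                 [-N_dHead * K_r, N_r]])
eigs = np.linalg.eigvals(A_cl)
```

The characteristic polynomial is λ² − N_r λ + N_dHead K_r, so with N_r < 0 and N_dHead K_r > 0 both roots lie in the open left half-plane, consistent with the simple K_i > 0 stability condition.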
Similar to before, the controller gains can also be moved inside the spatial inner-product:

u_i = ⟨μ(θ_v, φ_v, x), K_i F_x̂_i(θ_v, φ_v)⟩ (22)

Now, the control commands u_i can be directly computed from the measured nearness scan using the spatial inner-product. In this case, the second term in the inner-product is a spatially distributed weighting shape that is not only sensitive to the different environment relative states but is also scaled to produce the desired control commands:

F_u_i = K_i F_x̂_i (23)

For each of the three environment relative states, a different weighting function F_u_i is used to generate control commands from the measured nearness scans. In this case, the estimated states are linear functions of a single projection, as seen in Equations (9)-(11), and so the control command weighting shapes are just scaled versions of the shapes seen in Figure 5.
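Folding a gain into a sensitivity shape amounts to a pointwise scaling before the inner product. A minimal sketch (array names and the toy two-point "scan" are ours; the sign convention assumes a zero desired state, so the command is the negated, gain-scaled projection):

```python
import numpy as np

def control_command(mu, F_state, gain, dOmega):
    """Compute one velocity command directly from the nearness scan by
    scaling the state sensitivity shape with its feedback gain."""
    F_u = -gain * F_state          # control weighting shape, u = <mu, -K * F_state>
    return np.sum(mu * F_u * dOmega)

# toy two-point "scan" to show the mechanics
mu_toy = np.array([1.0, 2.0])
F_toy = np.array([0.5, -0.5])
u_lat = control_command(mu_toy, F_toy, gain=1.0, dOmega=np.array([1.0, 1.0]))
```

Because the operation is linear, doubling the gain doubles the command; tuning never requires re-deriving the shapes, only rescaling them.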
While a fixed forward speed can work in most environments, performance can be improved by regulating the forward speed. In this case, the speed response from [33] is used as inspiration for the control law, in which the forward speed control gains K_u_v, K_u_w, K_u_ψ, and K_u_fwd are used to tune the forward speed regulation response to errors in the lateral, vertical, and heading states as well as to approaching obstacles through the p(3) projection. By regulating the forward speed based on errors in the environment relative states, the system is better equipped to navigate changes in the environment.
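One plausible form of such a law is sketched below; the exact functional form taken from [33] is not reproduced in this text, so treat the structure (nominal speed minus gain-weighted activity terms) and all names as assumptions:

```python
def forward_speed_command(u_nominal, u_v, u_w, u_psi, p3,
                          K_uv=1.0, K_uw=1.0, K_upsi=1.0, K_ufwd=1.0):
    """Hypothetical forward-speed regulation: slow down when lateral,
    vertical, or heading activity is high, or when the forward-looking
    nearness projection p3 indicates an approaching obstacle."""
    u = (u_nominal
         - K_uv * abs(u_v)
         - K_uw * abs(u_w)
         - K_upsi * abs(u_psi)
         - K_ufwd * abs(p3))
    return max(u, 0.0)  # never command negative forward speed

u_clear = forward_speed_command(2.0, 0.0, 0.0, 0.0, 0.0)   # straight corridor
u_busy = forward_speed_command(2.0, 0.5, 0.2, 0.3, 0.4)    # maneuvering near a bend
```

The qualitative behavior matches the results reported later: the vehicle holds its nominal speed in clear sections and slows near bends and vertical shafts.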

Results
In the following subsections, performance of the algorithm in different simulated environments is presented. In Section 3.1, the performance of the system in a perfect cylinder environment is demonstrated. Zero-mean Gaussian white noise is added to the depth measurements in Section 3.2, demonstrating the algorithm's robustness to sensor noise. To demonstrate the robustness of the algorithm in nongeneralized environments, results from tests conducted in tunnel and cave environments are presented in Section 3.3.

Performance in Generalized Cylinder Environment
In this section, the algorithm is tested in a simulated perfect cylinder environment to demonstrate the effectiveness of the method. For the first set of tests, the quadrotor system was manually perturbed off the centerline of the cylinder, and state estimates generated from the state sensitivity shapes F_x̂_i were recorded. No sensor noise was added in any of the tests presented in this subsection. Figures 6-8 compare the environment relative state estimates, shown as a red dashed line, to the ground truth, shown in blue. In each of these cases, the estimation error is negligible for small perturbations; however, large perturbations eventually begin to skew the estimated states. In Figures 9-11, the stability of the controller is demonstrated for different perturbations away from the centerline of the cylinder. Isolated tests were conducted for each state where the system is initialized with some nonzero initial condition. Control commands were generated using the control sensitivity shapes F_u_i, and control gains K_v = K_w = K_r = 1 are used simply to prove the stability claims. In each case, the nearness controller drives the system back to the centerline of the cylinder, confirming the stability of the controller. With equal gains, the heading control is seen to have the slowest response, which is to be expected based on the quadrotor model stability derivatives.

Robustness to Noise
To evaluate the algorithm's robustness to additive Gaussian white noise on the depth measurements, the quadrotor platform was piloted through the cylinder environment for 60 s. During the flight, the system was perturbed away from the cylinder centerline along each environment relative state, and the sensor data were recorded. Zero-mean Gaussian noise with varying standard deviations was added to the depth measurements offline, and the standard deviation of the error in the state estimates was computed. The results are shown in Figure 12. The standard deviations of the state estimates remain extremely small relative to the standard deviation of the added sensor noise. Even with large noise distributions, the algorithm is still able to maintain state estimates that are usable for feedback control.
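The averaging effect behind this robustness can be seen in a toy experiment: the standard error of an integrated estimate shrinks roughly with the square root of the number of measurements. The numbers below are illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 23_040        # points per scan, as in the sensor model
sigma = 0.3       # additive noise std dev (m), deliberately large
trials = 200

# project noisy constant-depth scans onto a normalized constant shape,
# i.e., take the mean: a stand-in for one WFI projection
estimates = np.array([np.mean(10.0 + rng.normal(0.0, sigma, N))
                      for _ in range(trials)])
est_std = estimates.std()   # roughly sigma / sqrt(N), ~150x smaller than sigma
```

Even with 0.3 m of per-point noise, the integrated quantity fluctuates by only a few millimeters, which is consistent with the small state-estimate errors reported in Figure 12.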

Performance in Nongeneralized Subterranean Environments
Using different SubT simulation environments, the performance of the algorithm in nongeneralized environments is demonstrated. The control gains listed in the table in Appendix C were used in both the tunnel and cave tests presented. Figure 13 shows different views of the "Simple Tunnel 03" world, which is composed of sloping ramps, curved bends, vertical shafts, and dead-ends. The speed of the system and the control commands are shown in Figure 14. The system is able to quickly navigate through all sections of the course in 4 min, covering 506 m of tunnel in the process. The average speed of the system was 2.01 m/s. The first section of the environment is the downward sloping ramp shown in Figure 13b. The vertical centering response is demonstrated in the u_w plot in Figure 14 between the 20 and 35 s marks, where the system is commanded to descend. Perturbations in the u_v commands that correlate with the u_r commands are indicative of the system navigating a bend in the tunnel. At the 80 s mark, the system approaches the vertical shaft, as seen by the decrease in the forward speed command. Even though the control algorithm is not explicitly designed to navigate vertical shafts, the system is still able to get through the section and continue exploring. The SubT virtual cave worlds provide even more diverse terrain for systems to navigate, consisting of large caverns connected by tight passageways. Figure 15a shows a top-down view of the DARPA SubT sim world "Simple Cave 02", and a sample of the inside of the environment is shown in Figure 15b. The cave environments deviate massively from the cylinder model used to approximate the local environment; however, the algorithm is still able to produce a centering response that keeps the vehicle exploring. The red line in Figure 15 shows the approximate route taken by the system through the cave environment, starting at the green dot near the bottom and proceeding counter-clockwise.
During this test, the system covered 1.2 km in just under 15 min, with an average speed of 1.35 m/s. The speed of the system and the control commands as it navigates through the environment shown in Figure 15 are shown in Figure 16.

Discussion
In general, the WFI method works well on spatially distributed depth scans, and sensible state sensitivity shapes are presented in Figure 5. The magnitude of the lateral state weighting shape F_ŷ is greatest at the sides of the vehicle, which makes sense because these are the measurements that should carry the most weight when estimating ŷ. Similarly, the magnitude of the vertical state weighting shape F_ẑ is largest at the top and bottom poles and decreases toward the x-y plane. The heading state sensitivity shape F_ψ̂ is slightly more interesting: with opposing positive and negative components, this shape is sensitive to changes in the heading of the system.
The state estimates from the perfect cylinder environment demonstrated in Section 3.1 were computed using the full state sensitivity shapes. However, different responses can be achieved if subsets of the full shapes are used. For example, if only the front half of the heading state sensitivity shape F_ψ̂ is used in the spatial inner-product (i.e., measurements from behind the vehicle are not used), a more reactive steering response is produced. By only processing points that are in front of the vehicle, the system is more responsive to changes in the environment ahead and can steer accordingly. The results presented in Section 3.3 were achieved using the front half of F_ψ̂. The same heading state estimates can be achieved by recomputing the projection coefficient in the C† matrix in Equation (6). Other responses, like ground and wall following, can be achieved through half shapes as well. With small changes to the feedback controller, a ground-following response can be produced by using only the bottom half of the vertical state sensitivity shape F_ẑ. Similarly, using only the left or right hemisphere of the lateral state sensitivity shape F_ŷ can produce a wall-following response.
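On a discrete scan, restricting a shape to a hemisphere is just a mask over the azimuthal grid. A sketch (the grid, the particular harmonic used for F_psi, and all names are ours):

```python
import numpy as np

# 64 x 360 grid of viewing angles; azimuth phi in (-pi, pi), 0 = straight ahead
theta = (np.arange(64) + 0.5) * np.pi / 64
phi = -np.pi + (np.arange(360) + 0.5) * 2.0 * np.pi / 360
TH, PH = np.meshgrid(theta, phi, indexing="ij")

F_psi = np.sin(TH) * np.sin(PH)          # a heading-style sensitivity shape
front_half = np.abs(PH) < np.pi / 2      # keep only forward-facing points
F_psi_front = np.where(front_half, F_psi, 0.0)
```

The same masking pattern (bottom half of F_ẑ, left or right hemisphere of F_ŷ) yields the ground-following and wall-following variants described above.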
The algorithm's robustness to additive Gaussian sensor noise is demonstrated in Section 3.2. No appreciable change in the standard deviation of the state estimate errors is excited until relatively large deviations in the noise are added to the measurements. Modern LiDAR depth sensors have noise distributions with standard deviations to the order of millimeters to centimeters, which is on the lower end of the data presented in Figure 12. The main takeaway here is that this algorithm enables the use of lower fidelity (and often cheaper) sensors whose noise levels are unacceptable for other applications.
The algorithm is also robust to occlusions, or discontinuities in the measurements, as long as the occlusions are symmetric. Occlusions in point clouds could result from using multiple, nonoverlapping depth sensors. In this study, occlusions in the depth scan were present due to the quadrotor body itself: the quadrotor model body is detected by the onboard LiDAR sensor in the SubT sim, as it is impossible to place a spherical depth sensing model on a quadrotor without experiencing some occlusions. These points are obviously not measurements of the environment, and so they should not be processed by the spatial inner-product. In this case, these measurements can simply be thrown away, as long as the corresponding mirrored points are removed from the opposing quadrants of the scan. This ensures that the resulting state estimates are not skewed in any one direction.
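One plausible bookkeeping scheme for this (the paper's exact implementation may differ) is to mask each occluded point together with its antipode, so that odd-symmetric weighting shapes stay balanced:

```python
import numpy as np

def mask_symmetric(occluded):
    """Given a boolean (inclination x azimuth) occlusion mask, also mask
    each point's antipode (pi - theta, phi + pi). One plausible scheme for
    keeping odd-symmetric weighting shapes balanced; names are ours."""
    n_ph = occluded.shape[1]
    # antipode on a midpoint grid: flip inclination rows, shift azimuth by pi
    anti = np.flip(np.roll(occluded, n_ph // 2, axis=1), axis=0)
    return ~(occluded | anti)   # True where measurements may be used

occ = np.zeros((64, 360), dtype=bool)
occ[10, 20] = True              # e.g., a rotor blocking one ray
valid = mask_symmetric(occ)     # that ray and its antipode are both dropped
```

Dropping points in mirrored pairs keeps the projections onto odd harmonics unbiased, since each removed pair would have contributed equal and opposite weight.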
One of the biggest advantages of this solution is its computational efficiency and simplicity. The entire control algorithm can be condensed into a single equation, Equation (22), which directly produces control commands from sensor measurements. Producing a single control command requires 2N floating point operations, where N is the number of measurements, and so only 6N floating point operations are required to generate a full set of control commands. Actual implementation of the algorithm requires some additional data conditioning and processing; however, the entire operation takes less than 0.01 s of processing time. This means that scans from equivalent LiDAR sensor models could be processed at rates up to 100 Hz. Faster processing of sensor measurements enables faster reactions to changes in the environment, in which case the limiting factor becomes the vehicle dynamics and control response.
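The cost claim can be made concrete: with the three control weighting shapes stacked as rows of a matrix, a full command set is a single matrix-vector product. Sizes follow the sensor model; the random data and names are illustrative:

```python
import numpy as np

N = 23_040                          # nearness measurements per scan
rng = np.random.default_rng(1)

mu = rng.random(N)                  # one flattened nearness scan
F_u = rng.standard_normal((3, N))   # lateral, vertical, heading weighting shapes
                                    # (gains and solid-angle terms pre-folded in)

u = F_u @ mu                        # all three commands: ~6N flops, one matvec
```

Since the shapes are fixed at design time, the per-scan work is exactly this product; on any modern embedded CPU a 3 x 23,040 matvec is microseconds of work, consistent with the sub-0.01 s processing time reported above.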
While this algorithm is computationally efficient and enables fast flight, it can be susceptible to local minima in the environment such as corners, split paths, or unique environments that produce trajectory loops. In the case where the vehicle is approaching a symmetric split path or corner, the system will not deviate from its current trajectory because the local environment may already be balanced. In this case, the controller may not generate steering commands to avoid a collision. This situation can be thought of as an unstable equilibrium state. In real environments, these scenarios are rare because the relative environment is rarely perfectly symmetric, and even sensor noise can be enough to perturb the system off the unstable equilibrium state and back onto a centering trajectory. As mentioned above, trajectory loops can be excited in certain unique environments. For example, if the system were to fly into a large cavern from a small hallway, it may never exit the cavern and get stuck in a continuous loop flying around the cavern.

Conclusions
Using bio-inspired WFI methods, high-bandwidth sensor data can be efficiently processed to produce a 3D centering response in subterranean environments. The derivation of the sensitivity functions presented in Section 2.2.2 shows how different weighting functions can be used to extract environment relative information. With proper selection of the feedback controller gains, the algorithm is demonstrated to be stable for a range of perturbations from the equilibrium state in the cylinder environment. By integrating across thousands of sensor measurements, the algorithm is robust to additive white noise and occlusions in the depth scans. The 3D centering response is robust to large deviations away from the generalized local environment presented in Section 2.2.1, as demonstrated in the flight tests presented in Section 3.3.
Work on this framework can continue in several different directions. In Section 2.2.1, the local environment was approximated as an infinite cylinder with some known radius r. In this framework, computing controller gains requires some initial value for the expected radius of the environment. If the actual radius of the local environment is significantly larger than the expected radius, the system performance will be sluggish, as the control commands are effectively scaled down. Conversely, if the actual radius is much smaller than the expected radius, the control commands could grow too large and make the system unstable. Using principles of robust control theory, a dynamic controller can be designed that provides performance and stability guarantees for a range of perturbations to the expected radius.
As noted in Section 3.3, the system is able to navigate a vertical shaft using this 3D centering approach; however, it required the vehicle to slow to a near stop. This subterranean exploration framework could be expanded by determining new sensitivity functions for vertically oriented infinite cylinders. By feeding back on a different set of pooled projections, forward- and lateral-centering could be achieved while the system ascends or descends the vertical shaft. This requires additional logic for determining when the system should switch between the different centering modes. Deeper analysis of different projection values may provide insight into signals that could be used to facilitate the mode switching.
This work focuses on a spherical LiDAR sensor model that produces the same distribution of points in each scan. In other words, the viewing angles of each scan point are constant and known. Novel depth sensing modalities, such as millimeter wave radar, also produce depth scans; however, the incoming, unfiltered points are often spurious and have no known spatial distribution. Future work can be done to apply this algorithm to sensors with unknown spatial distributions of their depth scan points.

Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations
The following abbreviations are used in this manuscript:

sUAS  Small Unmanned Aerial System
GPS   Global Positioning System
SWaP  Size, Weight, and Power
SN    Superficial Neuromast
CN    Canal Neuromast
WFI   Wide-Field Integration
LPTC  Lobula Plate Tangential Cell
ROS   Robot Operating System
RRT*  Improved Rapidly-exploring Random Tree

Appendix A. Intersection of a Line and a Cylinder Surface
Surface of a cylinder as a function of of the viewing angles θ v and φ v : Components of a line segment that intersect with the cylinder as a function of the viewing angles θ v and φ v : By plugging in the y c and z c components of Equation (A2) into Equation (A1): d 2 sin 2 (φ v ) sin 2 (θ v ) + cos 2 (θ v ) = r 2 (A3) Solving Equation (A3) for d: This is the unperturbed case, so now we want to add in the lateral (y), vertical (z), and heading (ψ) states. The new equations for the cylinder and line that encode these values is: Now, if we combine Equations (A5) and (A6) and solve for d we get: a = sin 2 (φ v + ψ) sin 2 (θ v ) + cos 2 (θ v ) b = y sin(φ v + ψ) sin(θ v ) + z cos(θ v ) c = y 2 + z 2 − r 2 Equation (A7) is simply inverted to get the final closed form solution for nearness presented in Equation (1).