This section presents the results of both simulation-based and real-world environment experiments performed using the proposed methodology.
4.1. Simulation-Based Experiments
To assess the performance of the proposed inspection path planner, several experiments have been conducted using the Gazebo-based simulation environment RotorS [38]. The platform model used is the AscTec Firefly multi-rotor equipped with a visual-inertial sensor [39], which provides the 3D pointcloud of the environment. The tunnel environment simulated for these experiments corresponds to a section of a real, single-track railway tunnel located in Rome, Italy. The simulated tunnel is long and has a cross-section of . Figure 6 depicts an inside and outside view of this simulated environment.
Under these simulated tunnel conditions, the performance of the proposed inspection method is evaluated based on the number of surface voxels observed by the inspection sensors from a predefined sensing distance, . Moreover, we report the mean trajectory length and the distribution of observed voxels over different distance intervals. Each simulation experiment was run for exactly .
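As an illustration of how these metrics could be computed, the following sketch bins surface voxels by their closest observation distance. It assumes a hypothetical mapping from voxel indices to minimum observation distances produced by the mapping pipeline, and the bin edges shown are placeholders rather than the intervals used in the experiments.

```python
from collections import Counter

def distance_distribution(voxel_min_dist, d_insp, bin_edges):
    """Count sensed voxels and bin all observed voxels by observation distance.

    voxel_min_dist: dict {voxel index: minimum observation distance [m]}
                    (hypothetical output of the mapping pipeline)
    d_insp:         required sensing distance [m] (placeholder value)
    bin_edges:      ascending interval edges [m], e.g. [0.0, 1.5, 3.0, 6.0]
    """
    per_interval = Counter()
    sensed = 0
    for dist in voxel_min_dist.values():
        if dist <= d_insp:
            sensed += 1  # voxel observed from within the required sensing distance
        for lower, upper in zip(bin_edges[:-1], bin_edges[1:]):
            if lower <= dist < upper:
                per_interval[(lower, upper)] += 1
                break
    return sensed, per_interval
```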
For these experiments, the four visual cameras described in Section 2.3 are considered. All rays cast from the inspection sensors that intersect a map voxel located at a distance lower than or equal to are labeled as sensed voxels. Because the inspection results are strongly affected by this parameter, we assess the performance of the generated paths under three different scenarios. Going from more restrictive to less restrictive, the parameter takes values of , and . Each of these scenarios represents a different inspection quality requirement, as the closer the inspection sensors can observe the tunnel walls, the smaller the defects that can be identified in the captured data.
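A minimal sketch of this labeling criterion is given below; the voxel map interface (a raycast call returning a voxel key and hit point) is hypothetical and stands in for whatever occupancy-map implementation is used.

```python
import numpy as np

def label_sensed_voxels(sensor_origin, ray_directions, voxel_map, d_max):
    """Mark map voxels hit by inspection-sensor rays within d_max as sensed.

    voxel_map is assumed to expose raycast(origin, direction, max_range),
    returning (voxel_key, hit_point) for the first intersected surface voxel,
    or None if nothing is hit within max_range.
    """
    sensed = set()
    origin = np.asarray(sensor_origin, dtype=float)
    for direction in ray_directions:
        direction = np.asarray(direction, dtype=float)
        direction = direction / np.linalg.norm(direction)
        hit = voxel_map.raycast(origin, direction, max_range=d_max)
        if hit is None:
            continue  # no surface voxel within the required distance along this ray
        voxel_key, hit_point = hit
        if np.linalg.norm(np.asarray(hit_point) - origin) <= d_max:
            sensed.add(voxel_key)
    return sensed
```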
The inspection trajectory generated by our proposed method is compared to a constant-height wall-following trajectory and to the trajectory generated by the TSDF-based 3D-reconstruction method proposed in [30]. The wall-following path is meant to mimic a potential manual flight or another inspection method in which a rail or ground vehicle equipped with several inspection sensors moves along the tunnel wall. The TSDF-based 3D-reconstruction method is considered in this comparison because it indirectly steers the robot closer to the inspected infrastructure in order to minimize the TSDF-based map uncertainty.
In this case, a complete inspection of the tunnel means inspecting the area corresponding to half of the tunnel, i.e., the surface on one side of its longitudinal axis. This assumption is made to enable a fair comparison among methods, since during the entire simulation the wall-following trajectory traverses the tunnel from one end to the other, inspecting only one of its walls.
Throughout these experiments, the system constraints and other mapping parameters have been applied equally to all methods. The same applies to the RRT* parameters, which have been selected in accordance with the simulated tunnel dimensions and kept constant for both the TSDF method and our proposed method. These and other method-specific parameters are shown in Table 3. The parameters of the TSDF method are kept as given in [30], with the exception of the weight assigned to new, unknown voxels, which has been increased to a value of . This tuning is required to increase the exploration component of the algorithm and achieve a comparable covered tunnel length for all methods. Additional parameters corresponding to our proposed method are also listed in Table 3. Although the proposed path-planning algorithm performs well over a range of parameter values, the ones presented here were selected as a result of previous simulation experiments performed on this particular tunnel environment.
For the wall-following trajectory, the specific height and distance to the tunnel wall employed in each scenario are shown in Table 4. These parameters have been computed taking into account the characteristics of the on-board inspection sensors and the simulated tunnel geometry, ensuring that all the voxels captured within the sensors' field of view are inspected from a close distance.
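As a simplified illustration of this geometric reasoning, and assuming a single pinhole camera facing a locally flat wall (the actual values in Table 4 also account for the full multi-camera payload and the curved tunnel cross-section), the stand-off distance could be derived as follows.

```python
import math

def wall_following_standoff(required_swath_m, fov_deg, d_insp_m):
    """Distance to a flat wall so the camera footprint spans required_swath_m.

    A pinhole camera with field of view fov_deg covers a swath of
    2 * d * tan(fov / 2) at distance d, so the stand-off needed to span the
    required swath is required_swath_m / (2 * tan(fov / 2)), capped at the
    required sensing distance d_insp_m.
    """
    standoff = required_swath_m / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
    return min(standoff, d_insp_m)
```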
The main results of the simulation experiments are shown in Figure 7, while Figure 8 depicts an example of the inspection trajectory performed by each of the methods considered in this analysis. Observing the evolution of the number of voxels sensed along the entire inspection procedure, we first notice that the TSDF-based Informed-RRT* method [30] does not manage to steer the robot as close to the tunnel walls as the rest of the methods. This is especially noticeable in the scenario in which the robot had to inspect the tunnel wall from a distance lower than 1.5 m, where almost none of the voxels observed by the inspection payload sensors were closely inspected. Taking a closer look at the performance of this method for the first scenario, in Figure 7c it can be seen that the majority of voxels were observed from a distance of up to . This is explained by the fact that in [30], the optimization problem that maximizes the TSDF-based map reconstruction error draws the robot closer to the tunnel surfaces, but this distance is not directly controllable. Moreover, increasing the weight of new voxels to promote the exploration of the tunnel affects the accuracy with which the TSDF map is reconstructed. All these points explain the lower performance of this method when compared to the remaining ones. Further exhaustive parameter tuning might improve its performance, although the distance to the infrastructure would still not be directly controllable.
Analyzing the performance of our proposed method, we see that it is comparable to, and even outperforms, the designed wall-following trajectory. For the more restrictive scenarios ( and ), the wall-following trajectory surpasses the proposed algorithm for short mission times, while our approach prevails for sufficiently long missions, managing to accurately sense more voxels. Moreover, the enhanced flexibility of our method allows it to better adapt to the tunnel curvature while also moving up and down along the tunnel wall to increase the number of voxels sensed (see Figure 8c). These factors help achieve a greater number of sensed voxels when compared to the wall-following trajectory.
Focusing on the percentage of voxels observed by the on-board inspection sensors, as expected, the wall-following trajectory concentrates most of the observed voxels within the required sensing distance for each scenario. In comparison, the inspection trajectory generated by our method maximizes the number of observed voxels within the required , while at the same time observing and exploring a larger section of the tunnel wall (Figure 7c,f,i). This is consistent with the results listed in Table 5, which reports the percentage of voxels explored by each method in each scenario. By maintaining a constant and close distance to the wall, the area observed by the wall-following trajectory is smaller, resulting in a lower exploration of the entire tunnel structure. The method presented in [30] achieves a greater exploration of the tunnel, as its trajectory maintains a larger distance from the tunnel wall, increasing the number of voxels captured within the sensor's field of view; however, this results in a less accurate inspection of the tunnel surface. Finally, the method we propose manages to maximize the number of accurately inspected voxels while also maintaining a degree of exploration. As expected, this level of exploration is lower for the more restrictive scenarios, in which the robot is required to maintain a smaller distance from the tunnel walls. The direct effect of achieving a close inspection trajectory while also exploring the tunnel geometry is a longer trajectory: for all three scenarios, our proposed method generates a longer inspection path for the same amount of inspection time. Given the limited flight autonomy of aerial vehicles, this aspect is important.
4.2. Railway Tunnel Experiments
The proposed end-to-end robotic solution for the inspection of tunnel-like infrastructure has been tested in a single-track railway tunnel in Italy. Despite having a single track, this tunnel is high and wide, with a highly uneven floor. The rail tracks are located on one side of the tunnel, approximately higher than the rest of the floor (see Figure 9), making this tunnel configuration a challenging one for any range sensor pointing in the z-direction. Thus, several AprilTags were placed along the longitudinal tunnel axis on the lower-level floor with the objective of correcting any z-estimation error. Despite the lighting system existing inside the tunnel, some areas remained poorly illuminated, highlighting the need for on-board lighting sources.
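The text does not detail how the tag detections are fused with the on-board state estimate; purely as an assumed example, a complementary-filter style correction of the height estimate could look like the sketch below, where alpha is a hypothetical blending weight.

```python
def correct_height_estimate(z_odom, z_from_tag, alpha=0.9):
    """Blend the odometry height with a height derived from a floor tag.

    z_odom:     height estimated by the on-board odometry [m]
    z_from_tag: robot height above the tag plane, computed from the detected
                tag pose (tags lie on the lower-level floor), or None if no
                tag is currently visible
    alpha:      assumed blending weight favouring the smoother odometry value
    """
    if z_from_tag is None:
        return z_odom  # no tag in view; keep the (possibly drifting) odometry estimate
    return alpha * z_odom + (1.0 - alpha) * z_from_tag
```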
In this experiment, a safety distance higher than with respect to all nearby obstacles has been set, while striving to inspect the tunnel surface from a close distance ( ). Due to the presence of the catenary and the difficulty of reliably detecting it as an obstacle with the depth sensor, the maximum height of the robot has been limited to , close to below the cable. Likewise, the maximum velocity allowed for this operation has been set to .
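A small helper such as the following, with hypothetical parameter names and the numeric limits left to be supplied as described above, illustrates how these operational constraints can be checked for a candidate flight state.

```python
def within_operational_limits(obstacle_clearance_m, height_m, speed_mps,
                              safety_distance_m, max_height_m, max_velocity_mps):
    """Check a candidate flight state against the configured operational limits.

    The limit values (minimum obstacle clearance, maximum height below the
    catenary, maximum velocity) are supplied by the operator; none are
    hard-coded here.
    """
    return (obstacle_clearance_m >= safety_distance_m
            and height_m <= max_height_m
            and speed_mps <= max_velocity_mps)
```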
To start the autonomous inspection mission, the robot is given a goal point located from the take-off location. Once the goal is reached, the drone is commanded to return to the initial take-off point. Along the entire forward and backward trajectory, the objective is to maximize the tunnel surface inspected by the on-board sensors. Figure 10 shows several instances of this autonomous inspection procedure.
Figure 10a depicts the moment at which the drone takes off and the tunnel map representation is created in real time by the on-board computer. Next, the nearby environment is sampled and viewpoint locations are added to the RRT* tree, expanding it within the mapped environment. The next-best-view position is identified based on the value formulation presented in Section 3, and the robot is commanded to move to the next best location (a simplified sketch of this selection step is given below). The trajectory followed by the robot is marked in red (see Figure 10b). The higher information gain of the samples located closer to the tunnel walls steers the platform's trajectory in that direction, as seen in Figure 10c. At this point, the tunnel wall direction has not yet been detected. As can be noticed, while flying towards the commanded goal closer to the right-hand side wall, the robot was unaware of the location of the opposite wall, since the field of view of the equipped depth sensor was not sufficient to capture both walls of this wide tunnel environment. Once the platform follows a trajectory closer to the center of the tunnel, the presence of the opposite wall is detected. When the goal is reached, the platform returns to the take-off location while inspecting the other side of the tunnel (see Figure 10d). As the information gain obtained by inspecting the already-covered wall decreases, a higher number of samples are taken closer to the recently discovered surface.
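The sketch below outlines the kind of next-best-view selection referred to above; the visibility oracle and the gain definition (a plain count of not-yet-sensed surface voxels observable from within the required distance) are simplifications of the value formulation of Section 3, which also weighs exploration and path cost.

```python
import numpy as np

def select_next_best_view(tree_nodes, surface_voxels, sensed_voxels,
                          visible_from, d_insp):
    """Return the RRT* node with the highest (simplified) information value.

    tree_nodes:     iterable of candidate viewpoint positions (3-vectors)
    surface_voxels: array of surface voxel centres, shape (N, 3)
    sensed_voxels:  set of voxel indices already inspected from within d_insp
    visible_from:   hypothetical oracle, visible_from(viewpoint) -> voxel indices
    d_insp:         required sensing distance [m]
    """
    best_node, best_gain = None, -1
    for node in tree_nodes:
        node = np.asarray(node, dtype=float)
        gain = 0
        for idx in visible_from(node):
            if idx in sensed_voxels:
                continue  # already inspected closely enough
            if np.linalg.norm(surface_voxels[idx] - node) <= d_insp:
                gain += 1  # new surface voxel observable from close range
        if gain > best_gain:
            best_node, best_gain = node, gain
    return best_node, best_gain
```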
Figure 11 shows the number of observed voxels and their corresponding observation distance. As can be noticed, most of the tunnel wall surface ( ) has been inspected from the desired inspection distance, and of the surface was sensed from a distance closer than . Due to the height limitation, the ceiling surface could not be closely mapped. Nonetheless, the commanded path included several up-and-down movements along the tunnel wall, increasing the level of detail captured by the upper camera when the robot was flying higher.
An example of the images taken with all the inspection sensors on board the robot is shown in Figure 12. As can be noticed, the captured visual and thermal data provide clear and accurate information on the tunnel surface, showing no blurriness caused by the robot motion. At the instant depicted in Figure 12, the voxels sensed from a close distance correspond to the ones captured by the sensors on the right side. The main defects encountered during the inspection mission were thin cracks and water inlet areas. Figure 13 shows visual and thermal images capturing these defects on the tunnel walls and ceiling. The level of detail captured in the visual images enables the identification of thin concrete cracks (see Figure 13a). To check how the measurements performed on these images compare with the real ones, two elements (1 and 2, depicted in Figure 13) have been measured in the real tunnel and compared with their sizes measured in the captured image. To measure these elements on the image, the distance from which the wall was observed ( ) was obtained from the estimated robot position and used to compute the ground sample distance (GSD). Table 6 shows these computations, as well as the real measurements and the error obtained. The measurement error of these elements is around 2%, indicating that features and defects of interest can be measured on these images with high accuracy. However, it must be noted that these results depend on the robot position estimation, and thus the measurement error could increase during the flight if the position estimate begins to drift. Further post-processing steps involving image matching and alignment could improve the robot position estimation and remove the drift effects.
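For reference, the GSD-based measurement described above can be reproduced with the simple relations sketched below; the camera parameters (sensor width, focal length, image width) are hypothetical inputs, and the observation distance comes from the estimated robot position.

```python
def ground_sample_distance(distance_m, sensor_width_mm, focal_length_mm, image_width_px):
    """GSD [m/px]: size on the inspected surface covered by a single pixel."""
    return (distance_m * sensor_width_mm) / (focal_length_mm * image_width_px)

def element_size_from_image(length_px, distance_m, sensor_width_mm,
                            focal_length_mm, image_width_px):
    """Estimate the real size [m] of an element from its length in pixels."""
    gsd = ground_sample_distance(distance_m, sensor_width_mm,
                                 focal_length_mm, image_width_px)
    return length_px * gsd

def relative_error_percent(measured_m, reference_m):
    """Relative measurement error, as reported in Table 6 (around 2% here)."""
    return abs(measured_m - reference_m) / reference_m * 100.0
```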
Regarding the effect of the on-board illumination system, it can be noticed that the tunnel wall images captured from a smaller distance present uniform illumination without being overexposed. Because of the height limitation imposed during the autonomous flight, the images corresponding to the tunnel ceiling are slightly underexposed owing to the larger capturing distance. Nonetheless, defects such as long cracks can still be identified, as seen in Figure 13b. Figure 13c,d highlights several water inlet areas visible in both the visual and thermal channels. Under these test conditions, no other temperature difference was noticeable in the captured thermal data.
The sample of images presented here showcases how the inspection path planner can guide the aerial platform to perform a close inspection of the tunnel surface. The raw data gathered through the inspection mission forms the basis of further post-processing steps, which can be applied to provide meaningful information to an infrastructure maintainer.