Technical Note

Comparison of Three Location Estimation Methods of an Autonomous Driving Robot for Underground Mines

Department of Energy Resources Engineering, Pukyong National University, Busan 48513, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(14), 4831; https://doi.org/10.3390/app10144831
Submission received: 17 June 2020 / Revised: 8 July 2020 / Accepted: 13 July 2020 / Published: 14 July 2020
(This article belongs to the Special Issue Recent Advances in Smart Mining Technology)

Abstract
In this study, we compared the accuracy of three location estimation methods for an autonomous driving robot in underground mines: an inertial measurement unit with encoder (IMU + encoder) sensors, Light Detection and Ranging with encoder (LiDAR + encoder) sensors, and IMU with LiDAR and encoder (IMU + LiDAR + encoder) sensors. An accuracy comparison experiment was conducted in an indoor laboratory composed of four sections (X-change, X-Y change, X-Z change, and Y-change sections) that simulated an underground mine. The robot's location was estimated using each of the three methods as the robot drove, and the accuracy of the results was analyzed by comparing the estimated location with the robot's actual location. In the indoor experiments, the average estimation error of the IMU + LiDAR + encoder sensors was approximately 0.09 m, that of the IMU + encoder sensors was 0.19 m, and that of the LiDAR + encoder sensors was 0.81 m. In the field experiment, the average error of the IMU + LiDAR + encoder sensors was approximately 0.11 m, that of the IMU + encoder sensors was 0.17 m, and that of the LiDAR + encoder sensors was 0.70 m. In conclusion, the IMU + LiDAR + encoder method, which uses three types of sensors, showed the highest accuracy in estimating the location of an autonomous robot in an underground mine.

1. Introduction

Autonomous driving is a technology that identifies the surrounding environment and drives to a destination without human intervention. The concept was first presented at the General Motors (GM) Motor Show in 1956 [1], and autonomous cars capable of driving on highways were developed through the PROMETHEUS project [2], which ran from 1984 to 1994. Research on autonomous driving technology then began in earnest in 2004 with the DARPA Grand Challenge [3], in which autonomous cars traversed the Mojave Desert in the U.S. Recently, global companies have also been conducting research on autonomous driving technology. Representatively, GM [4] unveiled a multi-sensor-based autonomous vehicle, "BOSS", in 2008, and plans to commercialize Super Cruise, which is capable of autonomous driving on highways, by 2022. Tesla [5] developed a vision camera-based autonomous driving system, "Autopilot", and launched it as a commercial product.
In the mining industry, various studies have applied autonomous driving technology, many of them to map underground mines [6,7,8]. Baker et al. [9] developed an autonomous driving robot called "Groundhog", equipped with a Light Detection and Ranging (LiDAR) sensor, a camera sensor, and an environmental measurement sensor, and conducted driving and mapping experiments in abandoned mines. Bakambu and Polotski [10] developed an autonomous driving robot capable of route planning and obstacle detection in the underground mine environment, and used it to map underground mine tunnels. Neumann et al. [11] developed an autonomous driving robot, "Barney", equipped with a rotating LiDAR sensor, and performed 2D and 3D mapping of underground mines.
Studies have also been conducted on transporting ore with autonomous vehicles in underground mines [12,13]. Larsson et al. [14] developed an autonomous loader for underground mines using radio frequency identification (RFID) technology and a fuzzy logic algorithm. Marshall et al. [15] developed Load–Haul–Dump equipment capable of autonomous driving in underground mines and tested its feasibility on site. Mobile Tronics [16] developed an autonomous train called "VirtuRail" that transports ore from underground mines without rail tracks. VirtuRail uses sensors such as LiDAR, RFID, and radio detection and ranging (radar) to measure the distance to the tunnel wall, and drives autonomously by maintaining a constant distance from the wall.
Additionally, studies have used autonomous driving robots for exploration and environmental surveys of underground mines [17,18]. Zhao et al. [19] developed the autonomous driving robot "MSRBOTS" for investigating mine safety accidents. The MSRBOTS is equipped with infrared sensors, environmental measurement sensors, and camera sensors, which make it possible to safely explore areas that are dangerous for humans to access. Günther et al. [20] developed a system that measures environmental factors such as temperature, humidity, and gas concentration in underground mines and transmits the results remotely using an autonomous driving robot. Kim and Choi [21,22] developed a LiDAR sensor-based autonomous driving robot and conducted driving performance experiments in an indoor laboratory and field tests in underground mines.
In summary, autonomous driving technology is used in the mining industry in various ways, including tunnel mapping, ore transportation, and environmental exploration. To apply it efficiently, it is necessary not only to detect the surrounding environment with sensors, but also to estimate the robot's location accurately: applications such as tunnel mapping, ore transport, and environmental exploration can only be effective when the robot's location is accurately determined. Moreover, GPS cannot be used to determine location in underground mine environments. Therefore, it is necessary to develop location recognition technology for these applications.
Studies in other industries have estimated the location of autonomous vehicles and mobile robots [23,24,25]. Jo et al. [26] estimated the location of autonomous vehicles using GPS, digital maps, and camera sensors. Shen et al. [27] estimated location by fusing an inertial measurement unit (IMU) sensor, an encoder sensor, and computer vision-based distance measurement. Li et al. [28] developed an indoor location estimation system using camera sensors and image processing. Moreno et al. [29] developed a localization algorithm for autonomous driving robots using ultrasonic sensors in an indoor semi-structured environment and evaluated its accuracy.
Additionally, to reduce the uncertainty that arises when estimating a robot's location, probabilistic localization algorithms such as the Kalman filter and the particle filter are widely used [30,31]. Wischnewski et al. [32] developed a localization system that combines GPS and LiDAR sensors with a Kalman filter, applying it to three types of process models to compare speed and residuals in the longitudinal and lateral directions. Stahl et al. [33] developed a localization system for a high-speed race car based on Monte Carlo localization, using the Robot Operating System (ROS), an open-source operating system for mobile robots, together with a LiDAR sensor and a Kalman filter. Adams et al. [34] localized robots in a semi-structured outdoor environment using a particle filter-based localization algorithm, which made it possible to perform location estimation quickly and easily.
In the mining industry, studies have also been conducted to estimate the location of autonomous driving robots. Ghosh et al. [35] predicted a robot's attitude using an IMU sensor and fused it with an encoder sensor to estimate the robot's location. Chi et al. [36] developed an autonomous driving algorithm based on the LiDAR sensor and used it to estimate the robot's location. Because GPS is unavailable in underground mine environments, and camera sensors are also limited, sensors such as IMU and LiDAR are often used to estimate the location of autonomous driving robots. However, because previous studies focused on developing or applying individual location estimation methods, they did not compare the accuracy of those methods in the underground mine environment.
In this study, we compared the accuracy of three location estimation methods for an autonomous driving robot in an underground mine environment: inertial measurement unit with encoder (IMU + encoder) sensors, Light Detection and Ranging with encoder (LiDAR + encoder) sensors, and IMU with LiDAR and encoder (IMU + LiDAR + encoder) sensors. The study presents the autonomous driving robot system, sensors, and location estimation methods used in the experiments. The robot's location was estimated using each of the three methods as it drove through an indoor laboratory simulating an underground mine, and through an actual underground mine. We analyzed the accuracy of the results by comparing the estimated robot location with its actual location.

2. Materials and Methods

2.1. System Configuration for Autonomous Driving Robot

Figure 1 shows the configuration and communication environment of the autonomous driving robot used in this study. The robot consists of high/low/remote controllers and sensors; the controllers have the following functions:
  • Low-Level Controller: Motor control and data acquisition from the encoder sensor
  • High-Level Controller: Communication with the low-level and remote controllers, robot state monitoring, and data acquisition from the sensors
  • Remote Controller: Remote control of the robot through a connection to the high-level controller
The high-level controller communicates with the low-level controller via RS-232C; the remote controller is connected wirelessly through Wi-Fi. A Bluetooth beacon, a Bluetooth-based wireless communication device, was used to enable the robot to stop automatically at the destination. The Bluetooth beacon used in this study was the RECO beacon (Perples, Seoul, Korea), and an HM-10 module was used as the Bluetooth signal receiver. LiDAR, IMU, and encoder sensors were used to estimate the location of the robot. Table 1 shows the models and specifications of the sensors, controllers, and driving platform used in this study.
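The beacon-triggered stop can be illustrated with a short sketch. The paper states only that the RECO beacon and HM-10 receiver let the robot stop automatically at the destination; the RSSI-threshold logic and the threshold value below are assumptions for illustration, not the authors' implementation.

```python
def should_stop_at_beacon(rssi_dbm: float, threshold_dbm: float = -60.0) -> bool:
    """Return True when the destination beacon is close enough to stop.

    Assumption: proximity is inferred from the received signal strength
    (RSSI) reported by the HM-10 receiver; the -60 dBm threshold is a
    placeholder, not a value given in the paper.
    """
    return rssi_dbm >= threshold_dbm

# Example: a reading of -55 dBm (stronger than -60 dBm) triggers the stop.
if should_stop_at_beacon(-55.0):
    print("Destination beacon detected: stopping the robot.")
```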

2.2. Location Estimation Methods of Autonomous Driving Robot

2.2.1. Dynamic Model of the Autonomous Driving Robot

Figure 2 shows the robot's location in a 2D coordinate system and its dynamic model. The mobile robot used in this study is a four-wheel-drive, four-wheel-steering vehicle, driven on principles similar to those of a real car. It has two DC motors for driving and two DC servo motors for steering.
The position of the robot in the two-dimensional coordinate system can be expressed using three variables [x(t_k), y(t_k), α(t_k)], where x(t_k) and y(t_k) are the x and y coordinates at time t_k, and α(t_k) is the robot's heading angle. Defining the straight-line distance travelled from [x(t_k), y(t_k)] to [x(t_{k+1}), y(t_{k+1})] as d(t_k), the robot's location at time t_{k+1} is given by Equations (1) and (2).
$$x(t_{k+1}) = x(t_k) + d(t_k)\cos\big(\alpha(t_k)\big) \tag{1}$$
$$y(t_{k+1}) = y(t_k) + d(t_k)\sin\big(\alpha(t_k)\big) \tag{2}$$
Because the roads in underground mine environments are often sloped, the height z above the reference surface must be included to estimate the robot's location accurately. Denoting by β (pitch) the angle by which the z-y coordinate axes are rotated about the x-axis, the robot's three-dimensional coordinates are given by Equations (3)-(5).
$$x(t_{k+1}) = x(t_k) + d(t_k)\cos\big(\beta(t_k)\big)\cos\big(\alpha(t_k)\big) \tag{3}$$
$$y(t_{k+1}) = y(t_k) + d(t_k)\cos\big(\beta(t_k)\big)\sin\big(\alpha(t_k)\big) \tag{4}$$
$$z(t_{k+1}) = z(t_k) + d(t_k)\sin\big(\beta(t_k)\big) \tag{5}$$
Unlike in Equations (1) and (2), the travel distance in Equations (3) and (4) is multiplied by cos(β(t_k)) to convert the travel distance in three-dimensional space into the travel distance projected onto the x-y plane. In this way, the robot's location in three dimensions can be defined by the variables [x(t_k), y(t_k), z(t_k), α(t_k), β(t_k)].
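As a concrete illustration, the following minimal Python sketch applies Equations (3)-(5) to advance the robot's pose by one time step, given the encoder-measured travel distance and the heading and pitch angles; the function and variable names are ours, not from the paper.

```python
import math

def update_pose(x, y, z, d, alpha, beta):
    """One dead-reckoning step following Equations (3)-(5).

    x, y, z : position at time t_k (m)
    d       : straight-line travel distance during the step (m), from the encoder
    alpha   : heading (yaw) angle alpha(t_k) in radians
    beta    : pitch angle beta(t_k) in radians
    """
    horizontal = d * math.cos(beta)      # travel distance projected onto the x-y plane
    x += horizontal * math.cos(alpha)    # Equation (3)
    y += horizontal * math.sin(alpha)    # Equation (4)
    z += d * math.sin(beta)              # Equation (5)
    return x, y, z

# Example: 0.5 m of travel at a 30 degree heading on a 5 degree slope.
print(update_pose(0.0, 0.0, 0.0, 0.5, math.radians(30), math.radians(5)))
```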

2.2.2. Location Estimation Methods

Figure 3 shows the overall system architecture of the sensors and data processing equipment. Three types of orientation sensors (accelerometer, gyroscope, and magnetometer) measure the robot's attitude and heading; their measurements are fused by a Kalman filter and converted into Euler angles. The encoder sensor measures the travel distance by counting the rotations of the robot's wheels. The LiDAR sensor measures the difference in distance between the left and right walls and calculates the robot's heading through the autonomous driving algorithm.
This study compared three location estimation methods for autonomous driving robots. In all methods, the encoder sensor measured the robot's travel distance. The IMU and LiDAR sensors measured the robot's heading angle in the IMU + encoder and LiDAR + encoder methods, respectively. The IMU + LiDAR + encoder method measures the heading by switching between the IMU and LiDAR sensors according to a threshold angle, as sketched below.
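A minimal sketch of this switching rule follows; the paper does not report the numerical threshold angle, so the 15 degree value and the exact selection logic are assumptions for illustration.

```python
def select_heading(imu_heading_deg, lidar_heading_deg, steering_deg,
                   threshold_deg=15.0):
    """Choose the heading source for the IMU + LiDAR + encoder method.

    Assumption: while the steering change stays below the threshold
    (near-straight driving), the LiDAR-derived heading is used, since it
    was most accurate in straight sections; otherwise the IMU heading is
    used, since LiDAR headings degrade under large rotations.
    """
    if abs(steering_deg) < threshold_deg:
        return lidar_heading_deg
    return imu_heading_deg
```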
GPS cannot be used in underground mine environments, and location estimation with a camera sensor is also impractical because of the lack of lighting. Therefore, in this study, the localization methods were constructed by combining sensors that can measure the robot's travel distance and direction on their own.

2.2.3. Distance Measurement Using an Encoder Sensor

The encoder sensor calculates the travel distance from the number of motor rotations. In wheeled mobile robots, it is also possible to calculate the robot's direction angle from the difference between the left and right wheel encoders. However, wheeled mobile robots slip depending on the condition of the road surface, and slip-induced errors are more likely when turning left or right than when driving straight [36]. In an underground mine environment, this error is more pronounced because of the varying shape of the road and the roughness of the floor. Consequently, this study used the encoder sensor to measure only the linear travel distance. The wheel diameter of the mobile robot used in this study is 15.8 cm, the motor's gear ratio is 61, and the encoder's gear ratio is 13; the travel distance can therefore be calculated from the encoder count using Equation (6).
$$\mathrm{Distance\ (m)} = 2 \times \pi \times 0.158 \times \frac{1}{13} \times \frac{1}{61} \times \mathrm{Encoder\ Count} \tag{6}$$
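A direct transcription of Equation (6) into code, using the robot specifications given above (the constant names are ours):

```python
import math

WHEEL_DIAMETER_M = 0.158      # wheel diameter: 15.8 cm
ENCODER_GEAR_RATIO = 13       # encoder gear ratio
MOTOR_GEAR_RATIO = 61         # motor gear ratio

def distance_from_counts(encoder_count):
    """Travel distance in metres, per Equation (6) as printed."""
    return (2 * math.pi * WHEEL_DIAMETER_M
            * (1 / ENCODER_GEAR_RATIO) * (1 / MOTOR_GEAR_RATIO)
            * encoder_count)
```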

2.2.4. Heading Measurement Using IMU Sensor

IMU sensors estimate the attitude of aircraft, ships, mobile robots, and other vehicles by fusing gyroscopes, accelerometers, and geomagnetic sensors. Although IMU sensors perform well in estimating the attitude or angle of a mobile vehicle, they accumulate large errors when estimating travel distance by integrating acceleration. The IMU sensor used in this study is a small attitude and heading reference system device with a three-axis gyroscope, accelerometer, and geomagnetic sensor (E2BOX, Hanam, Korea). The acceleration, angular velocity, and magnetic data measured by the IMU sensor are output as Euler angles by applying a Kalman filter, and correction algorithms such as the Robust Attitude and Robust Heading algorithms are applied to minimize the error.
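The Kalman-based fusion runs inside the commercial IMU and its internals are not published; as a rough illustration of the idea, the sketch below uses a complementary filter, a simpler stand-in: the gyroscope rate is integrated for short-term accuracy while the magnetometer heading corrects long-term drift. The blending gain is an assumed value, not one from the device.

```python
def fuse_yaw(prev_yaw_deg, gyro_rate_dps, mag_yaw_deg, dt_s, gain=0.98):
    """One complementary-filter update for the yaw (heading) angle.

    prev_yaw_deg : fused yaw from the previous step (degrees)
    gyro_rate_dps: gyroscope z-axis angular rate (degrees/s)
    mag_yaw_deg  : absolute heading from the magnetometer (degrees)
    dt_s         : time step (s)
    gain         : assumed blending gain; closer to 1 trusts the gyro more
    """
    gyro_yaw = prev_yaw_deg + gyro_rate_dps * dt_s       # short term: integrate rate
    return gain * gyro_yaw + (1.0 - gain) * mag_yaw_deg  # long term: magnetometer anchor
```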

2.2.5. Autonomous Driving Algorithm and Heading Measurement Using a LiDAR Sensor

The LiDAR sensor uses laser light to measure the distance and direction to an object. Because of its accuracy and broad applicability, the LiDAR sensor is a core technology in many studies of autonomous driving. In this study, the distances to the left and right walls were measured with the LiDAR sensor, and the robot was designed to drive along the central line of the road based on the difference between the two distances. For example, if the robot is driving close to the left wall, the distance to the right wall is measured as relatively large; the robot's steering then changes towards the right, returning it to the central line of the road. Figure 4 shows the steering angle output as a function of the left-right distance difference measured by the LiDAR sensor. In this study, the heading of the robot was estimated from the steering angle calculated according to the graph in Figure 4.
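This centre-keeping rule can be sketched as a saturated proportional controller; the gain and steering limit below stand in for the curve of Figure 4, whose numerical values are not given in the paper.

```python
def steering_from_walls(left_m, right_m, gain_deg_per_m=20.0, max_steer_deg=40.0):
    """Steering command (degrees, positive = right) from LiDAR wall distances.

    If the robot drifts toward the left wall, right_m grows larger than
    left_m and the command steers right, returning the robot to the
    central line. The gain and saturation limit are assumed illustrative
    values, not the paper's calibration.
    """
    error_m = right_m - left_m           # > 0 when the robot is left of the central line
    steer = gain_deg_per_m * error_m     # proportional response (cf. Figure 4)
    return max(-max_steer_deg, min(max_steer_deg, steer))

# Example: 0.4 m closer to the left wall -> steer 8 degrees to the right.
print(steering_from_walls(left_m=1.0, right_m=1.4))
```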

3. Indoor Experiment

In this study, an accuracy comparison experiment on the three location estimation methods was conducted in an indoor laboratory simulating an underground mine environment; the experiment was repeated five times. The raw data measured by the IMU, LiDAR, and encoder sensors, together with all x, y, z coordinates and heading angles calculated in real time while the robot drove through the indoor laboratory, were recorded. The location estimation accuracy was analyzed by comparing the recorded actual location with the estimated location.

3.1. Indoor Laboratory Simulation

Figure 5 shows the overall layout and sectional views of the indoor laboratory used in this study. The indoor laboratory was 2.5 m wide, 2.6 m high, and 30 m long; at the starting point, the longitudinal direction of the robot's front was set along the x-axis and the transverse direction along the y-axis. The indoor laboratory was composed of X-change, X-Y change, X-Z change, and Y-change sections; in each section, the accuracy of the corresponding coordinate changes among X, Y, and Z was calculated as the robot drove through it. The temporary walls were higher than the detection height (60 cm) of the LiDAR sensor in all sections, and the central point of the road was marked on the floor. To measure the accuracy of the Z value, an inclined terrain with a height of approximately 12.5 cm was installed. A Bluetooth beacon was installed at the robot's destination so that the autonomous driving robot stopped automatically.

3.2. Indoor Experiment Method

In this study, the location of the robot was estimated using the IMU + encoder, LiDAR + encoder, and IMU + LiDAR + encoder sensors, respectively, and the robot's actual location was determined by filming the robot's driving process. To measure the accuracy of the Z value, the Z coordinate was analyzed by comparing the shape of the actual slope structure with the robot's X-Z coordinates. The mean absolute error (MAE) was used to quantify the error of the location estimation methods. During the experiment, the data acquired from the sensors were stored every 0.1 s and compared with the actual robot location every 1 s.
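For reference, the MAE used here is the standard definition: over the n compared samples, the average absolute difference between each estimated coordinate and the corresponding actual coordinate, computed separately for each axis:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{c}_i - c_i\right|$$

where $\hat{c}_i$ is the estimated and $c_i$ the actual coordinate (X, Y, or Z) at the i-th comparison time.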

3.3. Indoor Experimental Results

Figure 6 shows the autonomous driving robot conducting the location estimation experiment in the indoor laboratory. The robot drove stably along the central line of the road throughout the experimental course, and its climb over the temporary ramp was used to measure the accuracy of the z-axis estimate. The robot followed approximately the same path in all five repeated experiments, taking an average of 73.3 s to drive through the entire indoor laboratory.
Figure 7 shows the robot's actual driving path and the paths measured by the three location estimation methods. Overall, the IMU + LiDAR + encoder and IMU + encoder methods produced similar driving paths, with the IMU + LiDAR + encoder sensors showing higher accuracy than the IMU + encoder and LiDAR + encoder sensors. The mean absolute error of the IMU + LiDAR + encoder sensors was 0.09 and 0.08 m in the X and Y directions, respectively; that of the IMU + encoder sensors was 0.20 and 0.18 m; and that of the LiDAR + encoder sensors was 0.90 and 0.72 m. The LiDAR + encoder sensors produced larger errors in the sections where the robot's heading angle changed rapidly, and their location estimation accuracy decreased significantly in the section where the robot turned through a right angle. In contrast, the IMU + LiDAR + encoder and IMU + encoder methods followed the actual driving path closely.
The encoder sensors were used to measure the distances in all methods, and the IMU and LiDAR sensors were used to measure the robot’s heading angle. Therefore, the difference in the driving path was caused by the accuracy difference between the two heading measurement sensors.
Comparing the two heading measurement sensors, the cumulative error was expected to be relatively large for the LiDAR sensor, as no filter corrected its raw values. A difference was also expected between the angle at which the robot was commanded to drive along the central point of the road and the angle at which it actually drove. In addition, when the robot rotated through a large angle, the LiDAR sensor did not recognize the angle accurately.
Based on the results, the IMU + LiDAR + encoder sensors showed overall higher accuracy than the IMU + encoder and LiDAR + encoder sensors. The IMU fuses and calibrates three sensor types (acceleration, angular velocity, and magnetic) in real time to estimate the robot's angle, and can recognize rotations of up to 180°. In contrast, the LiDAR sensor measures the distances to the left and right walls and estimates the heading from their difference; when the distances to the walls change suddenly, the calculated heading also varies significantly, and the heading estimation ability deteriorates rapidly at a 90° intersection. Comparing the two heading sensors alone, the IMU is therefore more accurate than the LiDAR sensor; however, the combined IMU + LiDAR + encoder method, which exploits the advantages of each sensor according to the heading angle, showed the highest accuracy overall.
Table 2 shows each location estimation method's mean absolute error in each section, averaged over the five experiments. The X-change section is a straight stretch with a constant road width. In this straight section, the distances from the LiDAR sensor to the left and right walls are similar, so location estimation using the high-performance LiDAR sensor was excellent. By contrast, the IMU sensor showed relatively large errors, inferred to be caused by the robot's body vibration during driving and by the degraded performance of the magnetic sensor near metallic materials. The IMU + LiDAR + encoder and LiDAR + encoder sensors showed the same results in the X-change section because both measured the heading with the LiDAR sensor there.
In the X-Y change section, the IMU + LiDAR + encoder and IMU + encoder sensors accumulated errors at a rate similar to the previous section, whereas the LiDAR + encoder errors increased sharply as the road width changed and the steering shifted more. In the X-Z change section, the IMU + encoder sensors again showed a cumulative error similar to the previous sections, while the LiDAR + encoder sensors accumulated less error than in the previous sections: the height changed in the Z direction but the Y value did not, so there was no steering shift, and the error in this section was relatively small, as in the first section. Since the IMU + LiDAR + encoder method measures the heading with the LiDAR sensor in straight sections, its errors accumulated similarly to the LiDAR + encoder sensors here. Finally, in the Y-change section, the LiDAR + encoder sensors showed the largest error of all sections, whereas the IMU + encoder sensors' accumulated error remained constant, as in the previous sections. The LiDAR sensor could not detect the robot's driving direction whenever a large steering change was used to turn, so the X and Y values showed large errors.
Overall, all location estimation methods tended to accumulate errors over time. The IMU + encoder sensors showed similar errors in every section, whereas the LiDAR sensor showed very high accuracy in the straight sections but its location estimation accuracy dropped sharply in the sections with large steering shifts. In each section, the IMU + LiDAR + encoder method produced errors matching whichever of the other two methods was more accurate there, so its total accumulated error was the smallest.
Figure 8 shows the X-Z position of the autonomous driving robot measured on the inclined terrain. The length and height of the Z-axis terrain are approximately 1.7 and 0.125 m, respectively. The tilt angle of the terrain was measured from the pitch angle estimated by the IMU sensor, and the distance was measured by the encoder sensor. The mean absolute error obtained by comparing the 58 measured Z coordinates with the actual slope was 0.58 cm.
Figure 9 compares the lateral and longitudinal absolute errors and velocities of the three location estimation methods (IMU + encoder, LiDAR + encoder, and IMU + LiDAR + encoder sensors). When measuring the robot's heading angle, combining the two sensor types produced a smaller error than either of the two single-heading-sensor methods. These results show a significant difference in overall location estimation accuracy, as the road was straight throughout the X-change and X-Z change sections, as well as in parts of the X-Y change and Y-change sections.
In the longitudinal and lateral velocity graphs of Figure 9c,d, the IMU + LiDAR + encoder sensors followed the LiDAR + encoder sensors in the X-change section and the IMU + encoder sensors in the steering-change sections. The LiDAR + encoder sensors showed relatively little change in lateral and longitudinal velocity compared with the other two methods. In particular, in the Y-change section, the lateral velocity should exceed the longitudinal velocity because the robot drives along the Y-axis after rotating approximately 90°, but the LiDAR + encoder sensors indicated an almost constant velocity in all sections; it is assumed that the LiDAR sensor cannot recognize that the robot is driving on after turning through a large angle. The IMU sensors, on the other hand, recognized the robot's rotation, confirming that the longitudinal velocity decreased and the lateral velocity increased.

4. Field Experiment

4.1. Field Experiment Method

In this study, a field experiment on the three location estimation methods was conducted in an underground mine. While the autonomous driving robot drove through the underground mine tunnel, its location was estimated by each method, and its actual location was measured by filming the driving path. The mean absolute error (MAE) was used to compare the robot's actual location with its estimated location. While the robot was driving, the sensor data and the calculated location were stored every 0.5 s and compared with the actual robot location.

4.2. Experiment Area

The study area was an amethyst mine (35°32′43″ N, 129°5′37″ E) in Ulju-gun, Ulsan, Korea. The mine is 2.5 km long, has an average internal temperature of 12 to 16 °C and an area of 16,000 m2, and is currently closed. A section of the mine 30 m long and 3 m wide, shown in Figure 10, was set up as the experiment area. The experiment area contains four curved points with curvatures of −30°, 0°, −30°, and −40°, respectively, relative to the starting point. Since the wall surfaces of the experiment section were higher than the LiDAR's sensing height (60 cm), the distances to the left and right walls could be measured by the LiDAR sensor throughout the driving course. Before the experiment, a scale survey was performed to measure the width and length of the tunnel, and sticky notes were attached to the tunnel floor at regular intervals to observe the robot's location accurately. A Bluetooth beacon was installed at the destination so that the autonomous driving robot could stop automatically.

4.3. Field Experiment Results

Figure 11 shows the autonomous robot driving in the underground mine during the field experiment. The robot received signals from the remote controller at the starting point and drove stably through the four curved points. The entire drive through the experiment section followed the central line of the mine, without touching the sidewalk blocks installed at a width of about 1 m, and the total driving time was approximately 66 s.
Figure 12 shows the actual driving path of the autonomous robot in the underground mine and the driving paths measured by the three location estimation methods. Overall, the IMU + LiDAR + encoder sensors had the highest accuracy, followed by the IMU + encoder sensors and the LiDAR + encoder sensors, similar to the indoor experiment results.
The mean absolute error in the method using the IMU + LiDAR + encoder sensors was 0.11 m and 0.11 m in the X and Y directions, respectively (Table 3). The IMU + LiDAR + encoder sensors show similar paths to the LiDAR + encoder sensors on the straight road, and similar paths to the IMU + encoder sensors on the curved road.
The X direction mean absolute error of the IMU + encoder sensors was 0.12 m, and the Y direction mean absolute error was 0.23 m. The IMU + encoder sensors showed a slightly higher location estimation accuracy in the field experiments compared to the indoor experiments.
The LiDAR + encoder sensors showed relatively high accuracy on straight roads, as in the indoor experiment, while errors tended to accumulate when the robot's steering changed rapidly. Their mean absolute error was 0.52 m in the X direction and 0.91 m in the Y direction, a relatively high accuracy compared with the indoor experiments. In the indoor laboratory there were sections where the road width changed abruptly or bent at a right angle, so the LiDAR + encoder sensors failed to recognize the robot's large steering rotations and large errors occurred; in the underground mine, by contrast, the road width increased and decreased gradually, so the robot's steering did not change rapidly.
In the indoor laboratory, the boundary between the straight and curved sections was clearly defined and the road width and curvature were changed significantly in order to compare the location estimation accuracy quantitatively under steering changes. In the underground mine, however, the tunnel shape and road curvature changed gradually, so the location estimation accuracy generally increased. In particular, the LiDAR sensor's positioning performance was very low in the indoor laboratory because it did not recognize the robot's large-angle rotations, whereas in the underground mine the robot's heading changed frequently but at small angles, so its accuracy increased slightly. Unlike the indoor laboratory, an actual underground mine is large and most of its roads are almost straight; therefore, the importance of the LiDAR sensor, which showed high accuracy during straight driving, is expected to increase further.

5. Conclusions

In this study, an accuracy comparison experiment was conducted on three location estimation methods for an autonomous driving robot (IMU + encoder, LiDAR + encoder, and IMU + LiDAR + encoder sensors) in an indoor laboratory simulating an underground mine and in an actual underground mine. The robot's location was estimated by each of the three methods as it drove through the indoor laboratory, in five repeated experiments, and through the underground mine, and the accuracy of the results was analyzed by comparing the estimated location with the actual location. The IMU + LiDAR + encoder sensors generally showed the highest accuracy, followed by the IMU + encoder sensors and the LiDAR + encoder sensors. When the test course was divided into sections, the IMU + encoder sensors performed well on the curved roads, whereas the LiDAR + encoder sensors performed well on the straight roads. The IMU sensor showed a roughly constant error in all sections, owing partly to the robot's vibration, whereas the LiDAR sensor showed relatively high accuracy in the straight sections with a constant distance to the walls. Since the IMU + LiDAR + encoder method switched between the two heading measurement sensors, its error in each section matched that of the more accurate of the two single-sensor methods. As a result, the IMU + LiDAR + encoder method, which uses the two heading measurement sensors together according to the steering angle, showed the highest overall accuracy across all sections.
A correction filter that could improve the robot's localization accuracy was not applied in this study; if localization correction algorithms such as the Kalman or particle filter are applied in the future, the location estimation accuracy could be improved further. It should also be possible to improve localization accuracy by matching the point clouds measured by the LiDAR sensor against pre-built LiDAR maps. Additionally, if environmental sensors with a wide field of view, such as vision cameras, are used, it would be possible to assess the overall road condition and correct the robot's position from the structural shape of the tunnel. The autonomous robot used in this study drove at the same speed on all paths; in the future, additional experiments should compare the location estimation accuracy at different robot speeds.
If autonomous robots are used in underground mining environments, they can explore areas that are difficult for humans to access, and productivity can be improved by automating the equipment used in the underground mines through autonomous driving technology. Additionally, the location estimation technology can be combined with environmental sensors to automate the overall environmental monitoring of tunnels. Location estimation technology is the basis for the work of exploring underground mines, such as tunnel mapping, environmental mapping, and optimal route planning. It is expected that the results of this study will be useful reference materials for the use of autonomous driving robots in underground mine environments.

Author Contributions

Y.C. conceived and designed the experiments; H.K. performed the experiments; H.K. and Y.C. analyzed the data; Y.C. contributed reagents/materials/analysis tools; H.K. and Y.C. wrote the paper. Both authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2018R1D1A1A09083947).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. GENERAL MOTORS. Available online: https://www.gm.com/our-stories/self-driving-cars.html/ (accessed on 7 June 2020).
  2. DAIMLER’S PROMETHEUS Project. Available online: https://media.daimler.com/marsMediaSite/en/instance/ko/The-PROMETHEUS-project-launched-in-1986-Pioneering-autonomous-driving.xhtml?oid=13744534 (accessed on 7 June 2020).
  3. Lee, B.Y. Trends and prospects for the development of self-driving cars in domestic and foreign countries. Inf. Commun. Mag. 2016, 33, 10–16. [Google Scholar] [CrossRef]
  4. Kim, Y.; Kim, H. Development trend of self-driving car. J. Korean Inst. Commun. Inf. Sci. 2017, 34, 10–18. [Google Scholar]
  5. Tesla’s Autopilot. Available online: https://www.tesla.com/support/autopilot/ (accessed on 7 June 2020).
  6. Grehl, S.; Donner, M.; Ferber, M.; Dietze, A.; Mischo, H.; Jung, B. Mining-RoX—Mobile Robots in Underground Mining. In Proceedings of the Third International Future Mining Conference, Sydney, Australia, 4–6 November 2015; AuslMM: Carlton, Australia, 2015; pp. 57–64. [Google Scholar]
  7. Se, S.; Jasiobedzki, P. Stereo-vision based 3D modeling for unmanned ground vehicles. Int. J. Intell. Control Syst. 2008, 13, 46–57. [Google Scholar] [CrossRef] [Green Version]
  8. Miller, I.D.; Fernando, C.; Anthony, C.; Shivakumar, S.S.; Lee, E.S.; Jarin-Lipschitz, L.; Akhilesh, B.; Rodrigues, N.; Zhou, A.; Cohen, A.; et al. Mine Tunnel Exploration Using Multiple Quadrupedal Robots. IEEE Robot. Autom. Lett. 2020, 5, 2840–2847. [Google Scholar] [CrossRef] [Green Version]
  9. Baker, C.; Morris, A.; Ferguson, D.; Thayer, S.; Whittaker, C.; Omohundro, Z.; Reverte, C.; Whittaker, W.; Thrun, S. A Campaign in Autonomous Mine Mapping. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA’04), New Orleans, LA, USA, 26 April–1 May 2004; IEEE: New York, NY, USA, 2004. [Google Scholar]
  10. Bakambu, J.N.; Polotski, V. Autonomous system for navigation and surveying in underground mines. J. Field Robot. 2007, 24, 829–847. [Google Scholar] [CrossRef]
  11. Neumann, T.; Ferrein, A.; Kallweit, S.; Scholl, I. Towards a Mobile Mapping Robot for Underground Mines. In RobMech and AfLaT International Joint Symposium, Proceedings of the 2014 Pattern Recognition Association of South Africa (PRASA), Cape Town, South Africa, 27–28 November 2014; Puttkammer, M., Eiselen, R., Eds.; Pattern Recognition Association of South Africa (PRASA): Cape Town, South Africa, 2014. [Google Scholar]
  12. Berglund, T.; Brodnik, A.; Jonsson, H.; Staffanson, M.; Söderkvist, I. Planning Smooth and Obstacle-Avoiding B-Spline Paths for Autonomous Mining Vehicles. IEEE Trans. Autom. Sci. Eng. 2010, 7, 167–172. [Google Scholar] [CrossRef]
  13. Shaffer, G.K.; Stentz, A.; Whittaker, W.L.; Fitzpatrick, K.W. Position Estimator for Underground Mine Equipment. IEEE Trans. Ind. Appl. 1992, 28, 1131–1140. [Google Scholar] [CrossRef]
  14. Larsson, J.; Broxvall, M.; Saffiotti, A. A Navigation System for Automated Loaders in Underground Mines. In Field and Service Robotics (FSR-2005), Proceedings of the 5th International Conference, Port Douglas, Australia, 29–31 July 2005; Corke, P., Sukkariah, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; pp. 129–140. [Google Scholar]
  15. Marshall, J.; Barfoot, T.; Larsson, J. Autonomous Underground Tramming for Center—Articulated Vehicles. J. Field Robot. 2008, 25, 400–421. [Google Scholar] [CrossRef]
  16. MobileTronics’s VirtuRail. Available online: https://innovation.strabag.com/en/project/virturail-with-an-autonomous-train-into-the-heart-of-the-tunnel/ (accessed on 7 June 2020).
  17. Yinka-Banjo, C.; Bagula, A.; Osunmakinde, I. Autonomous Multi-robot Behaviours for Safety Inspection under the Constraints of Underground Mine Terrains. Ubiquitous Comput. Commun. J. 2012, 7, 1316–1328. [Google Scholar]
  18. Roberts, J.M.; Duff, E.S.; Corke, P.I. Reactive navigation and opportunistic localization for autonomous underground mining vehicles. Inf. Sci. 2002, 145, 127–146. [Google Scholar] [CrossRef]
  19. Zhao, J.; Gao, J.; Zhao, F.; Liu, Y. A Search and Rescue Robot System for Remotely Sensing the Underground Coal Mine Environment. Sensors 2017, 17, 2426. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Günther, F.; Mischo, H.; Lösch, R.; Grehl, S.; Güth, F. Increased safety in deep mining with IoT and autonomous robots. In Proceedings of the 39th International Symposium ‘Application of Computers and Operations Research in the Mineral Industry’ (APCOM 2019), Wroclaw, Poland, 4–6 June 2019; Mueller, C., Assibey-Bonsu, W., Baafi, E., Dauber, C., Doran, C., Jaszczuk, M.J., Nagovitsyn, O., Eds.; CRC Press: London, UK, 2019; pp. 101–105. [Google Scholar]
  21. Kim, H.; Choi, Y. Development of a LiDAR Sensor-based Small Autonomous Driving Robot for Underground Mines and Indoor Driving Experiments. J. Korean Soc. Miner. Energy Resour. Eng. 2019, 56, 407–415. [Google Scholar] [CrossRef]
  22. Kim, H.; Choi, Y. Field Experiment of a LiDAR Sensor-based Small Autonomous Driving Robot in an Underground Mine. Tunn. Undergr. Space 2020, 30, 76–86. [Google Scholar] [CrossRef]
  23. Royer, E.; Lhuillier, M.; Dhome, M.; Lavest, J.M. Monocular Vision for Mobile Robot Localization and Autonomous Navigation. Int. J. Comput. Vis. 2007, 74, 237–260. [Google Scholar] [CrossRef] [Green Version]
  24. Park, S.; Hashimoto, S. Autonomous Mobile Robot Navigation Using Passive RFID in Indoor Environment. IEEE Trans. Ind. Electron. 2009, 56, 2366–2373. [Google Scholar] [CrossRef]
  25. Wolcott, R.W.; Eustice, R.M. Robust LIDAR Localization using Multiresolution Gaussian Mixture Maps for Autonomous Driving. Int. J. Rob. Res. 2017, 36, 292–319. [Google Scholar] [CrossRef]
  26. Jo, K.; Jo, Y.; Suhr, J.K.; Jung, H.G.; Sunwoo, M. Precise Localization of an Autonomous Car Based on Probabilistic Noise Models of Road Surface Marker Features Using Multiple Cameras. IEEE Trans. Intell. Transp. Syst. 2015, 16, 3377–3392. [Google Scholar] [CrossRef]
  27. Shen, J.; Tick, D.; Gans, N. Localization Through Fusion of Discrete and Continuous Epipolar Geometry with Wheel and IMU Odometry. In Proceedings of the 2011 American Control Conference, San Francisco, CA, USA, 29 June–1 July 2011; IEEE: New York, NY, USA, 2011; pp. 1292–1298. [Google Scholar]
  28. Li, I.-H.; Chen, M.-C.; Wang, W.-Y.; Su, S.-F.; Lai, T.-W. Mobile Robot Self-Localization System Using Single Webcam Distance Measurement Technology in Indoor Environments. Sensors 2014, 14, 2089–2109. [Google Scholar] [CrossRef] [Green Version]
  29. Moreno, L.; Armingol, J.M.; Garrido, S.; De La Escalera, A.; Salichs, M.A. A genetic algorithm for mobile robot localization using ultrasonic sensors. J. Intell. Robot. Syst. Theory Appl. 2002, 34, 135–154. [Google Scholar] [CrossRef] [Green Version]
  30. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  31. Betz, J.; Heilmeier, A.; Wischnewski, A.; Stahl, T.; Lienkamp, M. Autonomous driving-A crash explained in detail. Appl. Sci. 2019, 9, 5126. [Google Scholar] [CrossRef] [Green Version]
  32. Wischnewski, A.; Stahl, T.; Betz, J.; Lohmann, B. Vehicle Dynamics State Estimation and Localization for High Performance Race Cars. IFAC-PapersOnLine 2019, 52, 154–161. [Google Scholar] [CrossRef]
  33. Stahl, T.; Wischnewski, A.; Betz, J.; Lienkamp, M. ROS-based localization of a race vehicle at high-speed using LIDAR. In Proceedings of the E3S Web of Conferences, Prague, Czech Republic, 16–19 February 2019; Volume 95, p. 04002. [Google Scholar] [CrossRef]
  34. Adams, M.; Zhang, S.; Xie, L. Particle filter based outdoor robot localization using natural features extracted from laser scanners. In Proceedings of the IEEE International Conference on Robotics and Automation, ICRA’04, New Orleans, LA, USA, 26 April–1 May 2004; Volume 2, pp. 1493–1498. [Google Scholar]
  35. Ghosh, D.; Samanta, B.; Chakravarty, D. Multi sensor data fusion for 6D pose estimation and 3D underground mine mapping using autonomous mobile robot. Int. J. Image Data Fusion 2017, 8, 173–187. [Google Scholar] [CrossRef]
  36. Chi, H.; Zhan, K.; Shi, B. Automatic guidance of underground mining vehicles using laser sensors. Tunn. Undergr. Space Technol. 2012, 27, 142–148. [Google Scholar] [CrossRef]
Figure 1. Overall structure of the autonomous driving robot used in this study.
Figure 2. Two-dimensional coordinate system and dynamic model of the autonomous driving robot used in this study.
Figure 3. System architecture of the data processing procedure for the autonomous driving robot used in this study.
Figure 4. Relationship between the right and left distance difference, and the steering value.
Figure 5. Conceptual diagram and views of the indoor laboratory designed in this study.
Figure 6. Photos showing indoor experimental scenes of the autonomous driving robot (a) at the starting point, (b) X change section, (c) X-Y change section, (d) X-Z change section, (e) Y change section, and (f) the end point.
Figure 7. Autonomous robot’s driving path estimated by the three location estimation methods (inertial measurement unit (IMU) + encoder sensors, Light Detection and Ranging (LiDAR) + encoder sensors, and IMU + LiDAR + encoder sensors) and the actual driving path in the indoor experiment.
Figure 8. Comparison of the pathways estimated by the IMU + encoder sensors and the actual driving path of the autonomous driving robot in the Z-axis terrain.
Figure 9. Comparison of the longitudinal and lateral absolute errors (a,b) and the longitudinal and lateral velocities (c,d) of the three location estimation methods (LiDAR + encoder, IMU + encoder, and IMU + LiDAR + encoder sensors).
Figure 10. Conceptual diagram of the field experiment in the underground mine environment.
Figure 11. Photos showing field experiment scenes of the autonomous driving robot at (a) the starting point, (b) the curved point, and (c) the end point.
Figure 12. Autonomous robot’s driving path estimated by the three location estimation methods (IMU + encoder, LiDAR + encoder, and IMU + LiDAR + encoder sensors), and the actual driving path in the field experiment.
Table 1. Specifications of the sensors, controllers, and driving platform used in this study.

| Item | Equipment | Model and Specification |
| --- | --- | --- |
| Sensor | LiDAR sensor | LMS-111 2D (SICK, Waldkirch, Germany) |
| Sensor | IMU sensor | EBIMU-9DOFV4 (E2BOX, Hanam, Korea) |
| Sensor | Encoder sensor | IG-32PGM 01TYPE (YOUNGJIN B&B, Seoul, Korea) |
| Controller | High-level controller | Intel Celeron(R) CPU 1007U 1.50 GHz, 4 GB RAM, Intel(R) HD Graphics (Intel, Santa Clara, CA, USA); Windows 10 (Microsoft, Redmond, WA, USA); NI LabVIEW software (National Instruments, Austin, TX, USA) |
| Controller | Low-level controller | AVR microcontroller ATmega128 Pro Kit (Microchip, Chandler, AZ, USA); C language (Bell Labs, Murray Hill, NJ, USA) |
| Controller | Remote controller | Intel Atom(TM) CPU N2600 1.60 GHz, 2 GB RAM, Intel(R) Graphics Media Accelerator 3600 Series, Windows 7; NI LabVIEW software |
| Driving platform | Driving robot | ERP-42 (Unmanned Solution, Seoul, Korea) |
Table 2. Experiment results of the autonomous robot’s indoor location estimation in each section.

| Location Estimation Method | MAE | X-Change Section | X-Y Change Section | X-Z Change Section | Y-Change Section |
| --- | --- | --- | --- | --- | --- |
| IMU + encoder sensors | X MAE (m) | 0.11 | 0.19 | 0.28 | 0.36 |
| | Y MAE (m) | 0.10 | 0.14 | 0.19 | 0.20 |
| LiDAR + encoder sensors | X MAE (m) | 0.03 | 0.43 | 0.46 | 2.66 |
| | Y MAE (m) | 0.03 | 0.31 | 0.35 | 2.12 |
| IMU + LiDAR + encoder sensors | X MAE (m) | 0.03 | 0.08 | 0.11 | 0.13 |
| | Y MAE (m) | 0.03 | 0.05 | 0.06 | 0.17 |
Table 3. Field experiment results of the autonomous robot’s location estimation.

| | IMU + Encoder | LiDAR + Encoder | IMU + LiDAR + Encoder |
| --- | --- | --- | --- |
| X MAE (m) | 0.12 | 0.52 | 0.11 |
| Y MAE (m) | 0.23 | 0.91 | 0.11 |
