Article

Self-Localization of Anonymous UGVs Using Deep Learning from Periodic Aerial Images for a GPS-Denied Environment

by Olivier Poulet 1,*, Frédéric Guinand 1,2 and François Guérin 3,*
1 LITIS, Le Havre Normandy University, 76600 Le Havre, France
2 Faculty of Mathematics and Natural Sciences, Cardinal Stefan Wyszynski University, 01-815 Warsaw, Poland
3 GREAH, Le Havre Normandy University, 76600 Le Havre, France
* Authors to whom correspondence should be addressed.
Robotics 2024, 13(10), 148; https://doi.org/10.3390/robotics13100148
Submission received: 27 August 2024 / Revised: 24 September 2024 / Accepted: 27 September 2024 / Published: 30 September 2024
(This article belongs to the Topic Advances in Mobile Robotics Navigation, 2nd Volume)

Abstract

This work concerns the autonomous navigation of non-holonomic ground mobile robots in a GPS-denied environment. The objective was to locate, in a global frame and without GPS, anonymous ground mobile robots from two consecutive aerial images captured by a single fixed webcam. The effectiveness of deep learning with a MultiLayer Perceptron for indexed localization was compared to the methods studied in previous works. The ability of a robot to determine the positions of other, non-indexed robots was also evaluated. The structure and parameters of the network, and the choice of the points taken into account during the learning phase to obtain a local optimum, are presented. The results, obtained from simulated and experimental data, are compared to those obtained with more classical methods for different sampling periods (times between images).

1. Introduction

The localization of mobile robots and autonomous vehicles is, in general, an essential step in their control, enabling them to move and perform tasks safely and efficiently [1]. When a robot is considered as an individual agent, a distinction is made between global and local localization [2]. Global localization enables the robot to locate itself without prior knowledge of its position, while local localization situates the robot in relation to its previous movements. To go beyond individual capabilities and accomplish more complex tasks [3], work is being carried out on multi-robot systems, which enable missions such as formation control [4] or exploration [5]. Relative localization, the most widely used indoor method, enables robots to situate themselves in relation to others; it mainly involves measuring the distance or orientation between individuals using sensors [3]. GNSS localization, and particularly GPS, is the main technique used for global localization [6]; it can be accurate to within a few centimeters when integrating RTK (Real Time Kinematic) positioning [7], but it is not applicable in constrained environments such as shaded urban areas with tall, dense buildings, or in indoor environments [8].
In addition to the constraint of having only periodic coordinates from aerial images, the localization of anonymous robots introduces further constraints: the robots have no identifiers and no visible, distinguishable features that can be used in computations, and all robots execute exactly the same algorithm [9,10]. This anonymity is problematic, as it does not allow a leader to be elected from among processes with identical local views and identical behavior [11].
The work presented here proposes a global localization method for anonymous mobile robots. These robots operate indoors and cannot communicate with each other; they are equipped with lidars to avoid collisions. Their coordinates come from a global reference: a webcam located above the operating area of the mobile robots. It takes images at regular intervals and provides the coordinates (in its image plane) to the ground mobile robots. These coordinates are not indexed, and the ground mobile robots must, from this minimal information, determine their own position in the periodic images among the received coordinates. The image processing system (acquisition, coordinate computation, etc.) is decentralized and will not be studied here. This paper proposes a deep learning-based localization method using a MultiLayer Perceptron (MLP). A comparison with the previous algorithm (LDRMA [12]) was carried out, and the algorithm was then experimentally implemented on four Turtlebot-type mobile robots with the addition of two stationary decoys. The achieved performance and the new capabilities of this method are presented.
In [13], the authors addressed the challenges involved in localizing and coordinating robots in GPS-denied areas. Among these are communication network issues: the management of highly dynamic and possibly dense robot networks, routing, conflict management, and quality of service. Heterogeneity also remains an issue for managing different localization methods and forms of mobility between robots. But heterogeneity can also lead to robots operating in a common space while remaining deaf and blind to the group, failing to meet the communication and/or coordination requirements of the rest of the fleet.
The MultiLayer Perceptron-based localization approach presented in this article attempts to provide some answers by considering robots that do not communicate with each other directly and do not depend on a common inter-robot communication network, but instead operate under strong constraints such as receiving only sampled and anonymous coordinates. This study could be extended to a heterogeneous set of robots and to a larger number of robots.
The main contribution of this paper is to enable the global localization of robots on two consecutive aerial images in a constrained environment: no GPS, no communication between robots, and no possibility of differentiating robots by index, shape, or specific movements.
This paper is divided into several parts: Section 2 summarizes the state-of-the-art works, Section 3 presents the deep learning algorithm, Section 4 describes the mobile robot platforms and control algorithms, and Section 5 discusses the obtained results. The conclusion presents the challenges and sets the direction for future work.

2. State-of-the-Art Works

The localization of anonymous ground robots is rarely addressed in the literature. In contrast, the localization of mobile robots in general is a widely discussed field, especially in constrained environments where GPS is no longer operational. Wireless networks using radio frequencies, including Wi-Fi, Ultra Wide Band (UWB), and Bluetooth Low Energy (BLE), are the most widely used. Other technologies include visual localization, infrared-based localization, and sound-based localization [13]. In order to exploit these technologies, a number of techniques have been implemented [14], including those based on distance measurement, such as CSI (Channel State Information) [15], TOA (Time of Arrival) [16], RTT (Round-Trip Time) [17], RSS (Received Signal Strength), and TDOA (Time Difference of Arrival) [18], as well as those based on direction measurement, such as AOA (Angle of Arrival) [19]. Each has its advantages and disadvantages [14], such as the need for synchronization, sensitivity to noise, complexity, or the need for additional hardware. Various algorithms have been used for localization, including trilateration [20] and fingerprinting algorithms, which compare the online intensity of an RSS signal with that of a previously recorded offline state [21]. The use of machine learning in these algorithms can attenuate the interfering signals encountered in indoor localization [14]; this includes Extreme Learning Machine (ELM) algorithms, which deliver high accuracy and reduce computational effort [22]. Artificial intelligence algorithms are also used in localization [23], including deep learning. Convolutional Neural Networks (CNNs) are among the most popular deep learning algorithms, enabling image recognition from cameras and localization based on training [24,25]. Recurrent Neural Networks (RNNs), which process sequential data, can be combined with CNNs to improve localization performance [26]. Deep Reinforcement Learning (DRL) models, which are unsupervised networks based on a reward system, are usually used for robot navigation; however, work based on RSS measurements has made it possible to use DRL for localization [27]. MultiLayer Perceptron (MLP) networks, whose aim is to approximate functions [23], can also be used to perform localization operations. This is the case in [28], where an MLP is used to fuse several sensors and enable the global localization of a robot on a map. Another visual technology, using reference markers positioned on the agents, enables multi-agent localization in a global frame of reference [29]. These markers enable robust localization with high precision [30]. Because the patterns are very different from one another, errors in distinguishing robots are avoided. Most of the square-shaped marker techniques in use are derived from ARToolkit [31], as in the work presented here. Pattern recognition by a camera provides precise information on the coordinates of the robots in an image.
The specific problem of localizing anonymous robots is mainly dealt with in the context of relative mutual localization using anonymous bearing measurements. In [32], a particle filter was used. The authors in [33] adapted the coupled probabilistic data association filter to estimate the relative localization. Recently, an algorithm based on semidefinite relaxation has been proposed [34].
In our previous work [12], three algorithms for the global localization of anonymous mobile robots were proposed. Two of them were based on the implementation of a deliberate motion (of a given mobile robot) but had not been tested, while the third was based on motion modeling. This last algorithm, called the “Localization During a Random Motion Algorithm” (LDRMA), was tested on a simulation platform with two mobile robots. In [35], a platform was proposed to experimentally validate these results for two mobile robots. Stationary decoys were then added to estimate their impact on the success rate. The sources of measurement imprecision were quantified, and the algorithm was validated in simulation for four mobile robots and two decoys.
The problem addressed in [12] is the same as in this work: how to achieve the global localization of anonymous robots without communication between them. Two consecutive pictures were taken by a webcam at regular time intervals. The coordinates, in the camera frame, of all the robots can be extracted from the pictures. The set of anonymous coordinates was sent to all the robots, and each robot has to determine its own position without communicating with the others. Robots are characterized by their linear and angular speeds (v, r). Denoting by m the index of the first image, by n the index of the second image, by δt the iteration period, by θ_m the initial angle of the robot, and by λ the focal length of the camera, it was demonstrated in [12] that the final coordinates depend on the initial coordinates as follows:
$$
\begin{pmatrix} x_n \\ y_n \end{pmatrix}
=
\begin{pmatrix} x_m \\ y_m \end{pmatrix}
+ \delta t \sum_{j=m+1}^{n}
\begin{pmatrix}
\lambda \cos\!\left(\theta_m + \delta t \sum_{k=m+1}^{j} r_k\right) \\
\lambda \sin\!\left(\theta_m + \delta t \sum_{k=m+1}^{j} r_k\right)
\end{pmatrix} v_j .
$$
When a robot has to localize itself and if λ is known, it compares pairs of positions from the two pictures (one point from the first picture and one point from the second picture) and attempts to compute a unique initial angle compatible with a motion from p_m = (x_m, y_m) to p_n = (x_n, y_n). If such an angle can be determined, then the coordinates correspond to the positions of the robot in both images. The system has two unknowns: λ and θ_m. There are n² possibilities for n robots, which entails an increase in computational complexity. For the experiments, λ was assumed to be known.
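To make this matching test concrete, the sketch below (an illustration under our own assumptions, not the code of [12]; the function name, argument layout, and tolerance are hypothetical) checks whether a candidate pair (p_m, p_n) admits a compatible initial angle. Since rotating all headings by θ_m simply rotates the predicted displacement computed with θ_m = 0, a compatible angle exists only if the norms of the measured and predicted displacements agree, and θ_m is then the angle between them.

```python
import numpy as np

def compatible_initial_angle(p_m, p_n, v, r, dt, lam, tol=0.05):
    """Check whether the candidate pair (p_m, p_n) is compatible with the
    recorded linear speeds v[j] and angular speeds r[j] between the two
    images, assuming the focal length lam is known.
    Returns the initial angle theta_m if one exists, otherwise None.
    Illustrative sketch only; names and tolerance are assumptions."""
    p_m, p_n = np.asarray(p_m, float), np.asarray(p_n, float)
    v, r = np.asarray(v, float), np.asarray(r, float)
    # Heading increments accumulated from the angular speeds (see the equation above).
    phi = dt * np.cumsum(r)
    # Predicted displacement for theta_m = 0.
    d0 = dt * lam * np.array([np.sum(v * np.cos(phi)),
                              np.sum(v * np.sin(phi))])
    delta = p_n - p_m
    # Rotating the headings by theta_m rotates d0 without changing its norm,
    # so a compatible theta_m exists only if the two norms agree.
    if abs(np.linalg.norm(delta) - np.linalg.norm(d0)) > tol:
        return None
    return np.arctan2(delta[1], delta[0]) - np.arctan2(d0[1], d0[0])
```

In the LDRMA, each robot would run such a test over the n² candidate pairs and keep those for which a compatible angle exists.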

3. Deep Learning Algorithm (DLA) for Indexed Localization

Indexed localization refers to the ability of a mobile robot to identify its own coordinates in both images among all of the available coordinates. The aim of this work was to find a learning algorithm that can, on the basis of past experience (i.e., knowledge of the coordinates and of the linear and angular motions of the mobile robots), estimate its own coordinates from all those possible between two successive images. Artificial neural networks, and particularly MLPs, provide a good opportunity to solve this problem, as it can be framed as a classification of data that are considered independent [36]. This is the approach chosen here, as its implementation is relatively straightforward and yields satisfactory results. The data known from the initial image are the initial coordinates (x_i^r, y_i^r) of the considered mobile robot (index r, with n the number of mobile robots). From the final image, the knowledge of each mobile robot includes not only the final coordinates (x_f^r, y_f^r), but also the linear (l) and angular (a) distances traveled between the two images. Thus, the information for each mobile robot consists of six variables. The initial and final coordinates were obtained from the webcam, and the linear and angular motions were derived from the odometry. It was therefore possible to create a neural network (Figure 1) with six inputs, hidden layers, and one output neuron providing a probability of success for each candidate. Choosing the highest probability among all candidates allowed the most probable initial and final coordinates to be estimated. The artificial neural network is supervised and therefore requires training. The training was conducted with a five-second inter-image period in order to assess the impact on the success rate of periods shorter or longer than the training period. Training a neural network is crucial; the network can only perform well if training is conducted properly. Validation was performed on data that were not used during training; it is common to keep 20% of the data collected during training for network validation in order to enable generalization. The number of data points must be sufficiently high and should include “true” data, i.e., data corresponding to the correct initial and final coordinates given the distance and angle traveled by an indexed robot, but it also requires a certain number of “false” data, corresponding to incorrect coordinates for the considered robot given the same distance and angle between images. There are many more “false” data points, labeled “0”, than “true” data points, labeled “1”: for n robots, there is 1 “true” possibility for n² − 1 “false” possibilities. The choice of these “false” possibilities, labeled “0”, and their number is crucial to obtain an optimal representation of the robot’s motions and, thus, of its learning. In both the simulated and experimental measurements, we considered the same number of true and false values, with the false values being randomly selected from the values provided by the mobile robots. For the simulated measurements, 4853 true values were taken into account. They originated from only one mobile robot, as the simulated robots were all perfectly identical; the results of a single mobile robot were therefore enough for correct learning.
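The labeled dataset described above could be assembled along the following lines. This is a minimal sketch under our own assumptions (the record layout, helper name, and the way a “false” pair is drawn are illustrative, not the authors’ code): each sample concatenates the six input variables, true pairs are labeled 1, and an equal number of false pairs is built by keeping the robot’s own motion but substituting the coordinates of another robot.

```python
import random

def build_samples(records):
    """records: list of dicts, one per robot and per image pair, with keys
    'xi', 'yi' (initial coords), 'xf', 'yf' (final coords),
    'l' (linear distance) and 'a' (angular distance) -- illustrative layout.
    Returns (inputs, labels) with as many false samples as true ones."""
    X, y = [], []
    for rec in records:
        # "True" sample: the robot's own coordinates and its own motion.
        X.append([rec['xi'], rec['yi'], rec['xf'], rec['yf'], rec['l'], rec['a']])
        y.append(1)
        # "False" sample: the same motion paired with another robot's coordinates.
        other = random.choice([r for r in records if r is not rec])
        X.append([other['xi'], other['yi'], other['xf'], other['yf'], rec['l'], rec['a']])
        y.append(0)
    return X, y
```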
For the experimental platform, the best results were obtained when the training values were derived from all four moving mobile robots. Even though the mobile robots were nominally identical, they still differed in certain respects: the positions of the localization targets, the mechanical structures and their wear, the mechanical and electronic components, and assembly variations. Training on the measurements of a single mobile robot did not yield the best results: considering the measurements of all the robots improved the overall results by more than 3%. The number of training measurements in this case averaged 4515 per mobile robot, for a total of 18,060 measurements. This difference compared to the number of measurements considered in simulation can be explained by the need to capture the differences between the robots. In both cases, the considered linear distances were bounded: it is theoretically impossible to determine, between two images, the position of a linearly stationary mobile robot among several stationary robots, as was the case here. Therefore, measurements were used for training only if the mobile robots covered a minimum linear distance.
In the experimental measurements, the minimum linear distance considered was very low in order to encompass all the possible measurements of all the mobile robots.
The structure chosen for deep learning, namely the number of hidden layers and the number of neurons per layer, was not simple to optimize. The number of neurons in the hidden layer(s) is typically determined empirically: a higher number of neurons improves system performance but increases complexity [37,38]. The number of inputs was defined by the number of considered data points, six in our case. The number of outputs was one, the probability of success. Experimentation helped choose the number of hidden layers and neurons: the choice ultimately settled on three hidden layers with thirty neurons per layer for both the real and simulated measurements (Figure 1). The following choices of activation and output functions are common for this type of neural network: Rectified Linear Unit (ReLU) for the activation functions [39] and sigmoid for the output function [40], which yields a probability. This is useful in our case to determine the most probable position among the n² possibilities. Finally, the optimization algorithm is Nadam, the Nesterov-accelerated Adaptive Moment Estimation algorithm [41]. The algorithm was implemented using Keras [42], a high-level library developed in Python.
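As an illustration, the network described above (six inputs, three hidden layers of thirty ReLU neurons, a sigmoid output, and the Nadam optimizer) could be defined in Keras roughly as follows. The loss function, validation split, batch size, and number of epochs are not specified in the text and are assumptions on our part.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Six inputs: initial (x, y), final (x, y), linear and angular motion.
model = keras.Sequential([
    layers.Input(shape=(6,)),
    layers.Dense(30, activation="relu"),
    layers.Dense(30, activation="relu"),
    layers.Dense(30, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability that the pair is the robot's own
])

# Binary cross-entropy is a natural choice for a 0/1 label, but it is our
# assumption; the paper only specifies the Nadam optimizer.
model.compile(optimizer="nadam", loss="binary_crossentropy", metrics=["accuracy"])

# Example training call (hyperparameters are illustrative):
# model.fit(X_train, y_train, validation_split=0.2, epochs=100, batch_size=32)
```

At inference time, the trained network is evaluated on every candidate pair of coordinates and, for indexed localization, the pair with the highest output probability is retained.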

4. Platform Presentation

Two platforms were designed and used: the first for the simulation and the second for the experimentation. A detailed description of the two platforms used in this study is provided in [35].

4.1. Simulation Platform

The player–stage simulation platform [43] consists of mobile robots (Figure 2) that are all perfectly identical in their shapes and motions. These mobile robots are not equipped with incremental encoders; their positions are computed through successive iterations of the linear and angular velocities multiplied by the considered sampling period (δt). Four moving and two stationary mobile robots were considered.
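In other words, the simulated robots estimate their displacement by pure integration of the commanded velocities. A minimal sketch of one such update step, under our own assumptions about variable names, could look like this:

```python
import math

def integrate_pose(x, y, theta, v, r, dt):
    """One dead-reckoning step for a non-holonomic robot: the commanded
    linear velocity v (m/s) and angular velocity r (rad/s) are integrated
    over the sampling period dt (s). No encoder feedback is used."""
    theta += r * dt
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    return x, y, theta
```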

4.2. Experimental Platform

The experimental platform (Figure 3) was composed of four mobile robots (TurtleBot3 Burger [44]), a camera, a computer for processing the images and communicating the information to the mobile robots, and a router for the wireless network. Coordinates were collected using reference markers attached to each robot and were obtained from a camera positioned at a height of three meters. The robots’ movement surface was about 3 m long and 2 m wide. Image capture periods from 2 to 8 s were tested.
The platform was rectangular and surrounded by barriers. Physical decoys were not placed on the platform, in order to avoid overloading target detection; instead, their coordinates were automatically added to each set of coordinates extracted from the images. The mobile robots were different from those used in [35]. The TurtleBot3 Burger robots (Figure 4) were more suitable for conducting deep learning tests: they had an autonomy of approximately two hours in our configuration, a lidar for 360° obstacle detection, and incremental encoders. Algorithms can be implemented in C++ or Python with the ROS middleware. Two different algorithms were designed: the first for motion and obstacle avoidance, and the second for handling image reception and recording the coordinates of the mobile robots (odometry).
The objective was to present the simulation and experimental results together. The number of mobile robots and decoys was the same. The spaces where the mobile robots operated differed in shape (square in simulation and rectangular in the experiment), but their dimensions were similar and adapted to the size of the mobile robots. However, the platforms differed in several ways. The real mobile robots (TurtleBot) used odometry, unlike the simulated mobile robots (player–stage), and they operated on a ground surface that was improved compared to [35] but still had slippage and imperfections. The distances to the aerial camera were different, as was the parallelism between the camera and the ground surface.

4.3. Algorithms and Programming

The algorithms controlling the motions of the simulated and real mobile robots were similar but differed slightly. Since the simulated mobile robots did not have odometry, the position estimation relied on the linear and angular velocities generated by the mobile robot’s controls; distance and orientation were obtained through successive iterations over the time interval δt. For the real mobile robots, the distance and orientation came from the odometry (incremental encoders) and were initialized at the beginning of the program, so it was easy to retrieve this information at each image acquisition. In both cases, simulated and experimental, the linear and angular velocities chosen at each new decision were random and bounded. When it detected an obstacle, the mobile robot stopped and rotated in the direction opposite to the object’s orientation. Finally, if a linear motion had been active for some time without an obstacle being detected, a counter was incremented and, beyond a threshold, a small variation in angular velocity was added; the goal was to make the mobile robots’ trajectories more complex. The mobile robot motions were implemented in C++ for the simulated mobile robots and in Python for the real mobile robots. Data processing for deep learning was performed offline using Python and Keras. The simulation and experimental values are summarized in Table 1 and Table 2, respectively.
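The decision loop described above can be summarized by the following sketch. The interface (obstacle_detected, obstacle_side, set_velocity), the counter threshold, and the exact velocity bounds are illustrative assumptions; only the overall behavior (random bounded velocities, stop-and-turn on obstacle detection, and a small angular perturbation after a long straight run) is taken from the text and from Table 2.

```python
import random

def motion_step(robot, straight_counter, straight_threshold=50):
    """One decision step of a hypothetical random-motion controller.
    `robot` is assumed to expose obstacle_detected(), obstacle_side()
    and set_velocity(v, r)."""
    if robot.obstacle_detected():
        # Stop, then rotate away from the detected object.
        robot.set_velocity(0.0, -0.2 if robot.obstacle_side() > 0 else 0.2)
        return 0
    v = random.uniform(0.025, 0.05)                        # bounded random linear velocity (m/s)
    r = random.choice([-1, 1]) * random.uniform(0.1, 0.2)  # bounded random angular velocity (rad/s)
    straight_counter += 1
    if straight_counter > straight_threshold:
        # Long run without obstacles: add a small angular perturbation
        # to make the trajectory more complex.
        r += random.uniform(-0.1, 0.1)
        straight_counter = 0
    robot.set_velocity(v, r)
    return straight_counter
```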

5. Results and Analysis

All the experimental and simulated results were obtained with four mobile robots and two decoys, and they allow a comparison of the LDRMA and the Deep Learning Algorithm (DLA).

5.1. Indexed Localization of Mobile Robots

The platforms (simulation and experimentation) were the same for both algorithms, so it was possible to quantitatively compare the results for indexed localization, i.e., the robot’s ability to determine its own coordinates from among several.
The Deep Learning Algorithm, applied to simulated or experimental measurements, outperformed the LDRMA in all cases where the inter-image period was greater than 2 s (Figure 5). For the specific inter-image period of 2 s, the success rates were equivalent. For the LDRMA, performance increased as the inter-image period decreased, with a maximum success rate of 64.6%; its success rate decreased linearly to 36.8% for an 8 s inter-image period. The longer the inter-image period, the greater the distance covered, leading to a larger gap between the modeled and the real trajectories. For the DLA, the results of the simulated and experimental measurements had a similar “bell-shaped” envelope. They logically reached their maximum success rates at the 5 s inter-image learning period. The success rates of the simulated (75.1%) and experimental (74.1%) measurements were equivalent.
For the DLA, as described in Section 3, the measurements taken into account only concerned mobile robots with a minimum linear motion between two images; the minimum linear motion thresholds were 1 cm for the simulation and 0.1 cm for the experimentation. When the inter-image period was greater than the learning period, the success rate decreased: the number of measurements outside the learning boundaries became higher. A representation of this is given in Figure 6. The algorithm can accurately determine the positions of the mobile robots only in relation to a known, similar experience; it cannot accurately generalize beyond that. The success rate was inversely proportional to the number of points outside the learning limits when the inter-image period was greater than 5 s (Figure 7).
Conversely, when the inter-image period was shorter than the learning period, the previous analysis no longer applied. In this case, all the points were included within the learning area. These points corresponded to shorter traveled distances, and it was more challenging for the DLA to distinguish the solutions in this scenario, resulting in decreased success rates: the DLA only used a portion of its learning, and it was limited by the distances and angles covered during a shorter time, so its experience was less significant. In both the simulated and experimental measurements, the decrease in success rates was equivalent. As the measurement period moved away from the learning period, the success rate decreased.
For the DLA, the quantitative comparison of the results, after simulation and experimentation, confirmed similar success rates (with an average difference of 2.1%). It is worth noting that the only difference between the two cases was in the learning phase: the algorithm trained on simulated measurements only considered a single mobile robot and fewer learning points, whereas the algorithm trained on experimental measurements considered all the mobile robots. As mentioned, experimentation introduces uncertainties and inaccuracies, such as manufacturing differences between mobile robots, wear, and variations in the odometry due to slippage related to the quality of the ground/wheel contact. The DLA properly managed these inaccuracies and took the differences between the mobile robots into account to make its predictions; even the smallest differences between the mobile robots did not disturb its success.
The results were independent of the configuration of the implemented setups: distances, parallelism, and camera positioning relative to the mobile robots had no impact on the results. They were also independent of the differences in algorithms and processing times.

5.2. Non-Indexed Localization of Mobile Robots

In the previous section, we studied the ability of a mobile robot to find its own initial and final coordinates in two successive images, which is referred to as indexed localization.
In this section, we examine the ability of a mobile robot to determine initial and final coordinates independently of the index (i.e., the robot number); this is referred to as non-indexed localization. It reveals a new capability of the DLA: the ability to determine the positions of the six mobile robots, whether moving or fixed, regardless of the considered inter-image period. Success was achieved when each of the six highest probabilities corresponded to the initial and final coordinates of one and the same mobile robot, meaning that the coordinates of all the mobile robots could be determined without knowing their indices.
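A minimal sketch of this selection step follows, under our own assumptions: the MLP outputs are arranged as an n × n matrix of probabilities over (initial, final) candidate pairs, and the “same robot” check below reflects our reading of the success criterion rather than the authors’ exact implementation.

```python
import numpy as np

def non_indexed_success(prob, k=6):
    """prob: (n, n) array where prob[i, j] is the MLP output for pairing the
    i-th initial coordinate with the j-th final coordinate (same ordering
    assumed in both images for evaluation purposes).
    Returns True if the k highest probabilities all pair an initial point
    with the final point of the same robot (illustrative criterion)."""
    n = prob.shape[0]
    top = np.argsort(prob, axis=None)[::-1][:k]   # indices of the k largest values
    pairs = [divmod(idx, n) for idx in top]        # back to (initial, final) indices
    return all(i == j for i, j in pairs)
```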
For the experimental measurements, the success rate was greater than 97.14%, reaching 100% for a 3 s inter-image period (Figure 8). For the simulated measurements, the success rate decreased linearly beyond a 5 s period, reaching a minimum of 66.6% for an 8 s inter-image period. This difference between the simulated and real measurements can be explained by the following:
  • The simulated learning phase considered only one mobile robot, while the success rates took into account all the mobile robots; even if they were perfectly identical, their trajectories could differ.
  • The differences between the experimental mobile robots were captured in the learning process, allowing a better understanding of the respective motions of each mobile robot and thus enhancing non-indexed localization.
Here is an example (Table 3) of the top-10 detection probabilities for the experimental mobile Robot 1 (inter-image period: 7 s). The first six possibilities, corresponding to the six positions of the mobile robots between the two successive images, stood out significantly.
These results were not easily predictable.
  • This could have been expected if the mobile robots were operating in a very large environment, where the risk of confusing robots and making errors in non-indexed localization would be low. Here, the surfaces were relatively small, there were frequent obstacle detections, and the coordinates could be close, which increased the difficulty of localization.
  • Each mobile robot, regardless of its own movements, succeeded in determining the positions of all the mobile robots, as if, ultimately, its own linear and angular motions were of minor importance in non-indexed detection. Rather, regardless of its own motions, each mobile robot was able to anticipate the positions of all the others because the learning had integrated the characteristics of each mobile robot. The linear and angular controls played a minor role in non-indexed localization: the network is capable of recovering the positions of all the mobile robots without even knowing their controls.

6. Conclusions and Future Works

This work focused on the localization of anonymous ground mobile robots in successive images. The coordinates between two successive images, together with the linear and angular motions, constitute the minimal information considered. The experiments involved four moving mobile robots and two fixed decoys. Several key conclusions can be drawn from this work.
  • The Deep Learning Algorithm (DLA) outperformed the algorithm based on a modeling of the mobile robots’ motions (LDRMA).
  • The Deep Learning Algorithm was equally efficient with simulation and experimental measurements.
  • The success rate envelope of the DLA formed a “bell curve”, with the maximum centered on the learning period. The results were best for measurements at the same inter-image period as the learning, and they decreased as the period deviated from it.
  • The Deep Learning Algorithm was able to know the positions of all mobile robots without indexing between two successive images with a very high success rate.
A future optimization of the DLA could be to study the effectiveness of learning over varied inter-image periods while also modifying the structure of the network.
The Deep Learning Algorithm outperformed the classical algorithm presented in previous work, and it enabled knowledge of the motions between two successive images. Thus, the fusion of both indexed and non-indexed localization information could provide the positions of all indexed mobile robots with even higher success rates. This also means that a mobile robot that knows its position in the two images knows its motion relative to the other mobile robots and can anticipate their motions to optimize group missions and avoid collisions during task execution.
The Deep Learning Algorithm has the advantage of not being concerned with the structure or operation of the mobile robots, but only with their motions. This eliminates the need to know the detailed design of the mobile robots. It also does not require knowledge of the differences between the mobile robots, as it incorporates them into its learning through experience. This is its strength, as the modeling is based only on experience. It requires initial work to acquire the number of points needed to train the neural network, and the training must be adapted to any change in the plant configuration.
Future work will focus on collision avoidance systems. The study in this article demonstrated the ability of an MLP to obtain the coordinates of each robot between two images. Reinforcement learning associated with the MLP could make it possible to anticipate collisions between mobile agents without the use of exteroceptive sensors. This could improve robot safety in a constrained dynamic environment using minimalist measurements.

Author Contributions

Formal analysis, methodology, software, writing—original draft, P.O.; conceptualization, methodology, writing—review and editing, supervision, F.V.G.G.; funding acquisition, resources, methodology, software, supervision, writing—original draft, F.M.G.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data will be available on request from the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Carvalho, J.L.; Farias, P.C.; Simas Filho, E.F. Global Localization of Unmanned Ground Vehicles Using Swarm Intelligence and Evolutionary Algorithms. J. Intell. Robot. Syst. 2023, 107, 45. [Google Scholar] [CrossRef]
  2. Se, S.; Lowe, D.; Little, J. Local and global localization for mobile robots using visual landmarks. In Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems, Expanding the Societal Role of Robotics in the the Next Millennium (Cat. No. 01CH37180), Maui, HI, USA, 29 October–3 November 2001; Volume 1, pp. 414–420. [Google Scholar]
  3. Chen, S.; Yin, D.; Niu, Y. A survey of robot swarms’ relative localization method. Sensors 2022, 22, 4424. [Google Scholar] [CrossRef]
  4. Quan, L.; Yin, L.; Xu, C.; Gao, F. Distributed swarm trajectory optimization for formation flight in dense environments. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 4979–4985. [Google Scholar]
  5. Gao, Y.; Wang, Y.; Zhong, X.; Yang, T.; Wang, M.; Xu, Z.; Wang, Y.; Lin, Y.; Xu, C.; Gao, F. Meeting-merging-mission: A multi-robot coordinate framework for large-scale communication-limited exploration. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 13700–13707. [Google Scholar]
  6. Joubert, N.; Reid, T.G.; Noble, F. Developments in modern GNSS and its impact on autonomous vehicle architectures. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 2029–2036. [Google Scholar]
  7. Krasuski, K.; Ciećko, A.; Bakuła, M.; Grunwald, G.; Wierzbicki, D. New methodology of designation the precise aircraft position based on the RTK GPS solution. Sensors 2021, 22, 21. [Google Scholar] [CrossRef] [PubMed]
  8. Sesyuk, A.; Ioannou, S.; Raspopoulos, M. A survey of 3D indoor localization systems and technologies. Sensors 2022, 22, 9380. [Google Scholar] [CrossRef] [PubMed]
  9. Flocchini, P.; Prencipe, G.; Santoro, N.; Widmayer, P. Arbitrary pattern formation by asynchronous, anonymous, oblivious robots. Theor. Comput. Sci. 2008, 407, 412–447. [Google Scholar] [CrossRef]
  10. Di Luna, G.A.; Uehara, R.; Viglietta, G.; Yamauchi, Y. Gathering on a circle with limited visibility by anonymous oblivious robots. arXiv 2020, arXiv:2005.07917. [Google Scholar]
  11. Yamauchi, Y. Symmetry of anonymous robots. In Distributed Computing by Mobile Entities: Current Research in Moving and Computing; Springer: Berlin/Heidelberg, Germany, 2019; pp. 109–133. [Google Scholar]
  12. Poulet, O.; Guérin, F.; Guinand, F. Self-localization of anonymous mobile robots from aerial images. In Proceedings of the 2018 European Control Conference (ECC), Limassol, Cyprus, 12–15 June 2018; pp. 1094–1099. [Google Scholar]
  13. Siva, J.; Poellabauer, C. Robot and drone localization in gps-denied areas. In Mission-Oriented Sensor Networks and Systems: Art and Science; Springer: Berlin/Heidelberg, Germany, 2019; Volume 2, pp. 597–631. [Google Scholar]
  14. Kim Geok, T.; Zar Aung, K.; Sandar Aung, M.; Thu Soe, M.; Abdaziz, A.; Pao Liew, C.; Hossain, F.; Tso, C.P.; Yong, W.H. Review of indoor positioning: Radio wave technology. Appl. Sci. 2020, 11, 279. [Google Scholar] [CrossRef]
  15. Gönültaş, E.; Lei, E.; Langerman, J.; Huang, H.; Studer, C. CSI-based multi-antenna and multi-point indoor positioning using probability fusion. IEEE Trans. Wirel. Commun. 2021, 21, 2162–2176. [Google Scholar] [CrossRef]
  16. Kaune, R. Accuracy studies for TDOA and TOA localization. In Proceedings of the 2012 15th International Conference on Information Fusion, Singapore, 9–12 July 2012; pp. 408–415. [Google Scholar]
  17. Liu, X.; Zhou, B.; Huang, P.; Xue, W.; Li, Q.; Zhu, J.; Qiu, L. Kalman filter-based data fusion of Wi-Fi RTT and PDR for indoor localization. IEEE Sens. J. 2021, 21, 8479–8490. [Google Scholar] [CrossRef]
  18. Šoštarić, D.; Mester, G. Drone localization using ultrasonic TDOA and RSS signal: Integration of the inverse method of a particle filter. Fme Trans. 2020, 48, 21–30. [Google Scholar] [CrossRef]
  19. Menta, E.Y.; Malm, N.; Jäntti, R.; Ruttik, K.; Costa, M.; Leppänen, K. On the performance of AoA–based localization in 5G ultra–dense networks. IEEE Access 2019, 7, 33870–33880. [Google Scholar] [CrossRef]
  20. Thomas, F.; Ros, L. Revisiting trilateration for robot localization. IEEE Trans. Robot. 2005, 21, 93–101. [Google Scholar] [CrossRef]
  21. Kokkinis, A.; Kanaris, L.; Liotta, A.; Stavrou, S. RSS indoor localization based on a single access point. Sensors 2019, 19, 3711. [Google Scholar] [CrossRef]
  22. Lian, L.; Xia, S.; Zhang, S.; Wu, Q.; Jing, C. Improved Indoor positioning algorithm using KPCA and ELM. In Proceedings of the 2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), Xi’an, China, 23–25 October 2019; pp. 1–5. [Google Scholar]
  23. Cebollada, S.; Payá, L.; Flores, M.; Peidró, A.; Reinoso, O. A state-of-the-art review on mobile robotics tasks using artificial intelligence and visual data. Expert Syst. Appl. 2021, 167, 114195. [Google Scholar] [CrossRef]
  24. Wozniak, P.; Afrisal, H.; Esparza, R.G.; Kwolek, B. Scene recognition for indoor localization of mobile robots using deep CNN. In Proceedings of the Computer Vision and Graphics: International Conference, ICCVG 2018, Warsaw, Poland, 17–19 September 2018; Proceedings. Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 137–147. [Google Scholar]
  25. Xu, S.; Chou, W.; Dong, H. A robust indoor localization system integrating visual localization aided by CNN-based image retrieval with Monte Carlo localization. Sensors 2019, 19, 249. [Google Scholar] [CrossRef] [PubMed]
  26. Walch, F.; Hazirbas, C.; Leal-Taixe, L.; Sattler, T.; Hilsenbeck, S.; Cremers, D. Image-based localization using lstms for structured feature correlation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 627–637. [Google Scholar]
  27. Li, Y.; Hu, X.; Zhuang, Y.; Gao, Z.; Zhang, P.; El-Sheimy, N. Deep reinforcement learning (DRL): Another perspective for unsupervised wireless localization. IEEE Internet Things J. 2019, 7, 6279–6287. [Google Scholar] [CrossRef]
  28. Magrin, C.E.; Todt, E. Multi-Sensor Fusion Method Based on Artificial Neural Network for Mobile Robot Self-Localization. In Proceedings of the 2019 Latin American Robotics Symposium (LARS), 2019 Brazilian Symposium on Robotics (SBR) and 2019 Workshop on Robotics in Education (WRE), Rio Grande, Brazil, 23–25 October 2019; pp. 138–143. [Google Scholar] [CrossRef]
  29. Tang, Q.N.; Truong, X.T.; Nguyen, D.Q. An indoor localization method for mobile robot using ceiling mounted AprilTags. J. Sci. Tech. 2022, 17. [Google Scholar]
  30. Kalaitzakis, M.; Cain, B.; Carroll, S.; Ambrosi, A.; Whitehead, C.; Vitzilaios, N. Fiducial markers for pose estimation: Overview, applications and experimental comparison of the artag, apriltag, aruco and stag markers. J. Intell. Robot. Syst. 2021, 101, 71. [Google Scholar] [CrossRef]
  31. Kato, H.; Billinghurst, M. Marker tracking and hmd calibration for a video-based augmented reality conferencing system. In Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR’99), San Francisco, CA, USA, 20–21 October 1999; pp. 85–94. [Google Scholar]
  32. Franchi, A.; Oriolo, G.; Stegagno, P. Mutual localization in multi-robot systems using anonymous relative measurements. Int. J. Robot. Res. 2013, 32, 1302–1322. [Google Scholar] [CrossRef]
  33. Nguyen, T.; Mohta, K.; Taylor, C.J.; Kumar, V. Vision-based multi-MAV localization with anonymous relative measurements using coupled probabilistic data association filter. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 3349–3355. [Google Scholar]
  34. Wang, Y.; Wen, X.; Yin, L.; Xu, C.; Cao, Y.; Gao, F. Certifiably optimal mutual localization with anonymous bearing measurements. IEEE Robot. Autom. Lett. 2022, 7, 9374–9381. [Google Scholar] [CrossRef]
  35. Poulet, O.; Guérin, F.; Guinand, F. Experimental and Simulation Platforms for Anonymous Robots Self-Localization. In Proceedings of the 2021 29th Mediterranean Conference on Control and Automation (MED), Puglia, Italy, 22–25 June 2021; pp. 949–954. [Google Scholar]
  36. Kruse, R.; Mostaghim, S.; Borgelt, C.; Braune, C.; Steinbrecher, M. Multi-layer perceptrons. In Computational Intelligence: A Methodological Introduction; Springer International Publishing: Cham, Switzerland, 2022; pp. 53–124. [Google Scholar]
  37. Stathakis, D. How many hidden layers and nodes? Int. J. Remote. Sens. 2009, 30, 2133–2147. [Google Scholar] [CrossRef]
  38. Rasheed, F.; Yau, K.L.A.; Noor, R.M.; Wu, C.; Low, Y.C. Deep reinforcement learning for traffic signal control: A review. IEEE Access 2020, 8, 208016–208044. [Google Scholar] [CrossRef]
  39. Rasamoelina, A.D.; Adjailia, F.; Sinčák, P. A review of activation function for artificial neural network. In Proceedings of the 2020 IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI), Herlany, Slovakia, 23–25 January 2020; pp. 281–286. [Google Scholar]
  40. Pratiwi, H.; Windarto, A.P.; Susliansyah, S.; Aria, R.R.; Susilowati, S.; Rahayu, L.K.; Fitriani, Y.; Merdekawati, A.; Rahadjeng, I.R. Sigmoid activation function in selecting the best model of artificial neural networks. J. Phys. Conf. Ser. 2020, 471, 012010. [Google Scholar] [CrossRef]
  41. Haji, S.H.; Abdulazeez, A.M. Comparison of optimization techniques based on gradient descent algorithm: A review. Palarch’s J. Archaeol. Egypt/Egyptol. 2021, 18, 2715–2743. [Google Scholar]
  42. Manaswi, N.K.; Manaswi, N.K. Understanding and working with Keras. In Deep Learning with Applications Using Python: Chatbots and Face, Object, and Speech Recognition with TensorFlow and Keras; Springer: Berlin/Heidelberg, Germany, 2018; pp. 31–43. [Google Scholar]
  43. Gerkey, B.; Vaughan, R.T.; Howard, A. The player/stage project: Tools for multi-robot and distributed sensor systems. In Proceedings of the 11th International Conference on Advanced Robotics, Coimbra, Portugal, 30 June–3 July 2003; Volume 1, pp. 317–323. [Google Scholar]
  44. Amsters, R.; Slaets, P. Turtlebot 3 as a robotics education platform. In Robotics in Education: Current Research and Innovations 10; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 170–181. [Google Scholar]
Figure 1. The considered neural network.
Figure 2. The four mobile robots and two decoys.
Figure 3. The four mobile robots with ARToolkit targets.
Figure 4. TurtleBot3.
Figure 5. Comparison of the success rates for the LDRMA (blue) and the DLA (red: simulated; yellow: experimental).
Figure 6. The red triangles delimit the area where the training coordinates were recorded for a period of 5 s between images. The blue dots represent measurements for a period of 8 s between images. (a): simulation; (b): experimental.
Figure 7. DLA (red: success rate (%); blue: percentage of points out of learning limits; (a): simulation; (b): experimental).
Figure 8. Success rates for the non-indexed localization (red: simulated; blue: experimental).
Table 1. Simulation values (v in m·s⁻¹, r in rad·s⁻¹, and δt in ms).

v_min   v_max   r_min   r_max   δr_min   δr_max   δt    K
0.1     0.2     0.2     0.4     0        0.05     100   30

Table 2. Experimental values (v in m·s⁻¹ and r in rad·s⁻¹).

v_min   v_max   r_min   r_max   δr_min   δr_max   K
0.025   0.05    0.1     0.2     −0.1     0.1      30

Table 3. Detection probabilities for experimental mobile Robot 1.

Probability                    0.970   0.951   0.951   0.939   0.938
Robot number (initial/final)   3/3     2/2     4/4     5/5     1/1

Probability                    0.934   0.001   0.001   0.000   0.000
Robot number (initial/final)   6/6     4/1     1/4     2/3     3/2