Article

Mobile Robot Localization Based on the PSO Algorithm with Local Minima Avoiding the Fitness Function

1 Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia
2 HSE Invest, d.o.o., Obrežna ulica 170, SI-2000 Maribor, Slovenia
3 Piktronik d.o.o., Cesta k Tamu 17, SI-2000 Maribor, Slovenia
* Author to whom correspondence should be addressed.
Sensors 2025, 25(20), 6283; https://doi.org/10.3390/s25206283
Submission received: 18 August 2025 / Revised: 18 September 2025 / Accepted: 3 October 2025 / Published: 10 October 2025
(This article belongs to the Special Issue Indoor Localization Technologies and Applications)

Abstract

Localization of the semi-humanoid mobile robot Pepper is proposed based on the particle swarm optimization (PSO) algorithm, made robust to perturbations of the LIDAR-measured distances from the mobile robot to the walls of its real laboratory workspace. The novel PSO with an algorithm for avoiding local minima (PSO-ALM) uses a novel fitness function that prevents the PSO search from becoming trapped in local minima and thus prevents the mobile robot from misidentifying its actual location. The fitness function penalizes nonsensical solutions by introducing continuous integrity checks of solutions between two consecutive locations. The proposed methodology enables accurate and real-time global localization of a mobile robot, given an underlying a priori map, with consistent and predictable time complexity. Numerical simulations and real-world laboratory experiments with different a priori map accuracies were conducted to verify the proper functioning of the method. The results were compared with several benchmarks: the plain vanilla PSO, the robot's built-in odometrical method, a genetic algorithm with elitism and an adaptive mutation rate (GA), the same GA with the ALM extension (GA-ALM), the state-of-the-art plain vanilla golden eagle optimization (GEO) algorithm, and the same GEO with the ALM extension (GEO-ALM). The proposed method performed similarly to the odometrical method immediately after recalibration and significantly better after some traveled distance. The GA and GEO algorithms, with or without the ALM extension, achieved similar localization accuracy. The optimization algorithms extended with the ALM were much less prone to becoming trapped in local minima, and the PSO-ALM algorithm gave the overall best results.

1. Introduction

Localization and navigation in a workspace are natural processes for people and other species. Robots, however, must use a set of sensors to perceive the workspace automatically, build an image of it, and place themselves within it. Simultaneous localization and mapping (SLAM) is the process of sensing either an unknown or a known workspace and placing the robot into it. Many sensors exist for conducting SLAM, mainly cameras (monocular, stereo, and RGB-D) and time-of-flight sensors (light detection and ranging (LIDAR), radar, ultrasound, laser, etc.). The localization methods of today's mobile robots can be divided into those that utilize internal or mounted sensors, such as LIDARs, inertial measurement units (IMUs), encoders, and cameras, and those that rely on external radio frequency sensors such as Wi-Fi, Bluetooth, Ultra-Wideband, etc. [1,2]. In many localization cases, traditional methods are limited by predetermined hardware, relying on on-board sensors that face difficulties in a real-time environment. Recent advances in hardware and computing methods have enabled many artificial intelligence methods that can be used for robot localization. A review paper by Wang and Ahmad [3] divided these into Deep Learning (GAN, CNN, and RNN), Machine Learning (Reinforcement Learning and SVM), and Evolutionary and Swarm Intelligence methods (such as PSO, GA, DE, etc.). Most papers that have presented AI-based localization solutions operated on visual and odometrical information for mapping and/or localization, whereas our work relies on LIDAR measurements and evolutionary/swarm algorithms to find the robot's position and orientation. In such a case, a map of the mobile robot environment (workspace) must be known or has to be obtained by other mapping methods.
Once the mapping is performed and a so-called a priori map is built, localization remains. Localization in a low-signal-to-noise-ratio workspace is a deceptive task, as the robot may fail to distinguish between several similar potential locations on the map and, instead, select one of them sporadically, not knowing that the location is incorrect. A simple remedy is to follow the path continuously and constantly check the long-term integrity of a solution so that, eventually, only a single solution survives.
The challenge we are addressing is reliable, real-time localization of a semi-humanoid robot equipped with three independent LIDAR sensors, given an underlying a priori map. We have observed that classical odometry accumulates a severe offset with respect to the distance traveled, which demands frequent recalibrations.
Moreno et al. [4] mentioned that, for evolutionary optimization algorithms, a fitness function based on the mean square error function does not perform well for mobile robot localization due to the following:
  • Sensor accuracy and the number of sensors limit the possibility of discriminating between several fitness function local minima and a global minimum.
  • The geometrical similarities of the robot workspace produce high numbers of possible mobile robot pose solutions.
We will show in this paper that one more reason exists why an optimization algorithm (OA) cannot discriminate between local minima and the global minimum: the mapping of the robot workspace not being measured accurately enough.
Many papers, despite their authors recognizing the problems mentioned by Moreno et al. [4], have not found solutions for discriminating between local minima and the global minimum [5,6,7], so their solutions were limited in practice. Zhang et al. [5] tried to address this problem by introducing a hybrid algorithm based on two evolutionary algorithms (the firefly algorithm and a genetic algorithm) but presented only simulation results. Zhang et al. [8] found a solution for the discrimination of local and global minima, but their fitness function was complicated, so the average time to find the global minimum was 2.41 s in the Matlab environment, while our algorithm executed a few times faster (0.5 s in the Python 2.7.18 environment) with almost the same position and orientation errors. When geometrical similarities of the robot workspace produced a higher number of possible mobile robot pose solutions, the algorithm of Zhang et al. [8] needed to search from several nearby positions to find the global minimum, while our algorithm needed only two positions, decreasing the overall elapsed time to a maximum of 1 s in the Python environment.
The following literature review covers research on mobile robot localization with a known a priori map. Havangi [6] implemented a practical case study of mobile robot localization, comparing the particle filter (PF) and extended Kalman filter (EKF) as benchmarks against PSO-driven robot localization. The PSO-based localization proved superior in terms of accuracy and consistency. Zhang et al. [8] presented an improved PF based on the PSO for mobile robot localization on the indoor Intel Research Lab public dataset and Fort AP Hill [9]. Chung et al. [10] implemented an improved Adaptive Monte Carlo Localization (AMCL) for tracking and localizing, in two dimensions, the position of a robot equipped with laser sensors and range finders. Ji et al. [11] employed pose tracking and localization based on extracting planes in the camera coordinates and comparing these to a real-world map. De Sá et al. [12] showed an example of range-based mobile robot localization within a wireless network of stationary anchors with known positions; using basic triangulation, the mobile robot position can be determined with the min–max method and the PSO. Worley [13] implemented autonomous localization for water pipe inspection mobile robots. These robots need to determine their absolute location based on several properties, such as pipe length, radius, interconnections, and others, and compare these to a known a priori map. A pose graph optimization method with fusion of acoustic information was used by the authors for efficient localization.
The work by Vahdat et al. [14] stressed the importance of global (absolute) robot localization. The authors used three algorithms: Monte Carlo localization (MCL), differential evolution (DE), and particle swarm optimization (PSO). In the given case study, the DE proved the most successful among the three; the authors reported that the DE converged faster than the PSO and scored a higher convergence rate. Begum et al. [15] introduced the fuzzy evolutionary SLAM algorithm. A genetic algorithm was utilized as an OA for searching for the optimal robot pose, while fuzzy logic was utilized to handle the uncertainties in the robot's pose. The robot was equipped with laser range finders, similar to a LIDAR, with up to 10 m of effective distance. The authors proposed that new position updates should be conducted for every 0.5–1 m of travel or every 20 to 40 degree change in rotation. Again, the authors emphasized the importance of solving the robot's pose problem as a global optimization problem. Bian et al. [16] showed an example of a multi-objective PSO for FastSLAM, called FastSLAM-MO-PSO, especially useful for high noise and large-scale maps. Carvalho et al. [17] introduced the perfect match global localization (PMGL) metric for evaluating the fitness of several hypothetical robot poses displaced over the map. The authors argued that the efficacy of the PMGL may vary across scenarios: for a sample of maps, the PSO performed similarly to the GA, but for one of the maps, the PSO performed significantly worse than the GA. The authors therefore highlighted the GA as the most intuitive solution for global localization. In the last five years, novel state-of-the-art swarm intelligence algorithms, such as the golden eagle optimizer (GEO) [18], have been proposed, with their performance validated in the robotics field [19].
The novelties of this paper are as follows:
  • We successfully upgraded the PSO algorithm with an innovative search strategy that can discriminate between local minima and the global minimum.
  • A localization algorithm based on the PSO with an algorithm for avoiding local minima was tested in a real laboratory workspace, and the results show that it outperformed the odometrical algorithm and the benchmark OAs.
  • The average overall elapsed time to calculate a new position was between t_e = 0.5 s and t_e = 1 s in the Python environment.
This paper is structured as follows: Section 2 presents the fundamentals of the Pepper robot, its LIDAR sensors and cameras, and the OAs, i.e., the PSO and two benchmark OAs. Section 3 presents the organization of the practical laboratory tests. Then, a rigorous discussion follows in Section 4. Finally, Section 5 concludes this paper.

2. Materials and Methods

The following section is divided into three subsections. Section 2.1 deals with Pepper's hardware v1.8 (manufactured by Aldebaran Robotics, Paris, France, formerly SoftBank Robotics, Tokyo, Japan, now part of United Robotics, Bremen, Germany), cameras (OV5640, OmniVision, CA, USA), and LIDAR sensors (Pepperl+Fuchs, Mannheim, Germany). Section 2.2 deals with the OAs employed, i.e., PSO, the genetic algorithm, and GEO [18]. Section 2.3 introduces the iterative search strategy called the avoidance of local minima (ALM).

2.1. Pepper Robot and Cameras

The Pepper robot is a semi-humanoid mobile robot that is equipped with numerous built-in functions and operates on its own operating system (OS). NAOqi-OS is based on a Linux distribution and is installed on Pepper's integrated computer (Intel ATOM, Santa Clara, CA, USA); however, users are restricted from installing third-party applications directly on the OS and must rely on Pepper's proprietary Software Development Kit (SDK). Therefore, users can employ different tools to program and control the robot, e.g., the Choregraphe GUI, the Python SDK, the Linux SDK with a ROS environment, etc. Pepper comes with many integrated sensors, such as LIDARs, IRs (Pepperl+Fuchs, Mannheim, Germany), sonars, RGB cameras, and a 3D camera (Asus, Taipei, Taiwan), which allow safe navigation and movement in a workspace. In this paper, the particular focus is on two different sensor families: the robot's RGB cameras that provide color 3D images with depth information (RGB-D) and three separate LIDARs (front, left, and right). The visual data from the 3D camera can be used for feature extraction and mapping of the workspace (for instance, for detecting walls and corners), while the LIDARs can be used for autonomous and absolute localization of the robot within the workspace. The following subsection describes the cameras.

2.1.1. Cameras

Pepper's cameras are preinstalled in the chin and in the forehead. They provide several adjustable resolutions for image processing tasks such as V-SLAM, although it turned out that resolutions set too high become a problem due to the stalling Wi-Fi connection. Experimentally, an image resolution of 320 × 240 pixels at 15–20 fps was established as the maximum attainable, although the mapping results at this setting were very poor due to the low signal-to-noise ratio. Such a low resolution was recognized as insufficient; hence, we tested different external cameras to evaluate the usability, performance, and speed of camera image processing in ROS. A classic USB mono camera (Logitech HD C270 webcam, Lausanne, Switzerland) and RGB-D cameras (Microsoft Kinect, Redmond, WA, USA; Intel RealSense D435, Santa Clara, CA, USA; and Luxonis DepthAI OAK-D Pro, Denver, CO, USA) were used in the visual SLAM tests while varying settings (resolution, fps, and Oriented FAST and Rotated BRIEF (ORB) features) and calibration parameters (camera distortion, low contrast, etc.). We were aiming for a high-accuracy point cloud for the later workspace map definition needed for localization based on known area dimensions (walls, corners, etc.). Due to the good SDK library support in ROS and Pepper, we decided on the RealSense D435 RGB-D camera, which allowed us to operate at an enhanced rate of 640 × 480 pixels at 20 fps. As it was an external camera, we mounted it on top of the robot's head for improved vision, as seen in Figure 1. The RGB-D video stream was streamed through a Raspberry Pi 3 node (Cambridge, UK).

2.1.2. LIDARs

Three different LIDAR sensors are installed within Pepper. Each LIDAR sensor comprises a transmitting laser actuator and a separate receiving sensor. The sensors are offset by 90°, each with a horizontal field of view (HFOV, hereinafter field of view (FOV)) of 60°. To sense the whole workspace in 360°, Pepper needs either to be rotated or to have its LIDARs combined with other sensors. The layout of the LIDARs and their visible range is shown in Figure 2. The frame rate of each laser was 6.25 Hz, i.e., a period of 0.16 s. Each LIDAR section (front, left, and right) returns 15 segments, each covering an FOV of 4°, with X and Y coordinates in the robot's coordinate system. The accuracy of the measurements depends on the FOV angle: segments toward the far left (segment 1) and far right (segment 15) are generally less accurate than the central segments; for example, the central segment 8 was the most accurate. The built-in function returns the X and Y values of detected obstacles for each segment, which are then processed to calculate the angle and distance to the obstacle. Special cases exist regarding the uncertainty of the measurement accuracy, as the robot may return "Uncertain_Obstacle" or "Annoying_Reflection". Analysis of these measurements showed no improvement in the measured distances but merely prolonged the data acquisition and processing times. A more detailed specification of the robot's LIDARs can be found in Pepper's documentation (http://doc.aldebaran.com/2-5/home_pepper.html, accessed on 12 June 2025).
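For illustration, the conversion of a segment's returned (X, Y) obstacle coordinates into an angle and a distance reduces to a polar transformation; a minimal Python sketch follows (the function name and the example values are ours, not part of Pepper's SDK):

```python
import math

def segment_to_polar(x, y):
    """Convert a LIDAR segment return (x, y), given in the robot's
    coordinate system (X forward, Y right-to-left), into the distance
    and angle to the detected obstacle."""
    distance = math.hypot(x, y)              # Euclidean distance to the obstacle
    angle = math.degrees(math.atan2(y, x))   # 0 deg straight ahead, positive to the left
    return distance, angle

# Example: a hypothetical return from the front LIDAR's central segment
d, a = segment_to_polar(1.20, 0.05)          # ~1.2 m, ~2.4 deg to the left
```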
The measurements were recorded with all three of the robot's LIDARs. The workspace was realistic but artificially constructed from wooden boards and the concrete walls of the laboratory. The workspace was measured precisely with a tape measure. The workspace had a starting point of the base coordinate system, from which the distances to all the corners of the workspace were derived. We placed the robot inside the defined workspace, performed real LIDAR measurements, and compared them to the ground-truth measurements.
Additionally, we tested the accuracy of all three LIDARs by placing the robot in front of a flat, wide wall so that the center beam of the LIDAR (sensor segment 8, out of the 15 segments in total) was perpendicular to the wall. We moved the robot away from the wall gradually and took measurements with the same LIDAR. The distance of the robot from the wall was additionally measured with a tape measure and a handheld laser meter. The measurements deviated from the expected values. Moreover, the measurements were distorted in a curved way, even when measuring against a flat wall, and the distortion grew as the distance increased. Measuring distances beyond 3.5 m was not practically useful.

2.1.3. Robot’s Leg

The robot coordinate system has the X axis positive toward the robot's front, the Y axis pointing from right to left, and the Z axis vertical. Pepper is a semi-humanoid robot that moves using three sets of omnidirectional wheels (omni-wheels), i.e., a front left (FL), front right (FR), and back (B) wheel. These allow free movement in both the X and Y directions as well as rotation around Z. The robot's center of rotation is displaced by 6.2 mm from the KneePitch position, as seen in the bottom view of the robot's base in Figure 3.

2.1.4. Test Workspace

Mobile robots move and navigate in a workspace according to a given map and a path plan. For evaluation of the proposed localization algorithm, a laboratory testing workspace with dimensions of 4 by 3.2 m was built from classic OSB wooden boards (the right plot of Figure 3). The surface of the workspace walls (boards) was selected to be non-monotonic, so that the RGB-D camera was able to obtain proper results for the visual SLAM mapping and the LIDAR measurements.

2.2. Algorithms and Methods Used

This section presents the PSO algorithm, the benchmarks, and the description of the implemented fitness function ff. Evolutionary and swarm intelligence OAs were selected instead of simple gradient-based minimum search algorithms due to future work considerations. Although a gradient search would be sufficient for the present tasks with lower dimensionalities (D < 4), we selected the more sophisticated OAs to ensure compatibility with anticipated future extensions, i.e., environments with larger dimensionalities (D ≥ 4), such as localization of airborne unmanned aerial vehicles.

2.2.1. Particle Swarm Optimization

PSO is a population-based OA that was designed in 1995 by Kennedy and Eberhart [20]. PSO rests on the intelligent movement of a swarm that searches for prey by calculating consecutive velocity and position updates. It is an iterative optimization method, and its pseudocode is given in Algorithm 1. The canonical PSO was taken as a baseline, to which we added the neighborhood best particle [21,22].
Algorithm 1 Particle swarm optimization (PSO).
 1: Initialize swarm with population size N_P
 2: Initialize positions x_i and velocities v_i
 3: Evaluate fitness function values ff_i = ff(x_i)
 4: Set personal best p_i = x_i
 5: Select global best g = argmin(ff_i)
 6: while termination condition not met do
 7:   for each particle i in swarm N_P do
 8:     for each element j in dimension D do
 9:       // Update velocity and position
10:       v_{i,t+1}(j) ← c_0·v_{i,t}(j) + c_1·r_1(j)·(p_{i,t}(j) − x_{i,t}(j)) + c_2·r_2(j)·(g_t(j) − x_{i,t}(j)) + c_3·r_3(j)·(l_t(j) − x_{i,t}(j))
11:       x_{i,t+1}(j) ← x_{i,t}(j) + v_{i,t+1}(j)
12:     end for
13:     Evaluate fitness ff_i = ff(x_i)
14:     if ff_i < ff(p_i) then
15:       p_i ← x_i {Update personal best}
16:     end if
17:     if ff_i < ff(g) then
18:       g ← x_i {Update global best}
19:     end if
20:     if ff_i < ff(min(l_i*)) then
21:       l_i ← x_i {Update neighborhood best}
22:     end if
23:   end for
24:   t ← t + 1
25: end while
26: return global best position g
The control parameters of PSO are the inertia weight c_0 (commonly denoted w), the cognitive constant c_1, the social constant c_2, and the neighborhood constant c_3. The parameters r_1, r_2, and r_3 are random numbers drawn from the uniform distribution on [0, 1]. The personal best p_i is updated when the particle itself obtains a better position. The global best g is updated when any particle in the swarm obtains a better position. The l_i* represents the neighborhood; the neighborhood best l_i is updated when a particle within its neighborhood obtains a better position. PSO was initialized with the parameter settings outlined in Table 1 for both the simulation and laboratory experiments.
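For illustration, a minimal, vectorized Python sketch of the velocity and position updates of Algorithm 1 is given below; this is a sketch under the parameter roles described above, not the exact implementation used in our experiments, and the default constants are assumed values:

```python
import numpy as np

def pso_step(x, v, p, g, l, c0=0.7, c1=1.5, c2=1.5, c3=1.0):
    """One PSO iteration over the whole swarm (lines 8-11 of Algorithm 1).
    x, v : (NP, D) particle positions and velocities
    p    : (NP, D) personal best positions
    g    : (D,)    global best position
    l    : (NP, D) neighborhood best positions
    """
    NP, D = x.shape
    r1, r2, r3 = (np.random.rand(NP, D) for _ in range(3))
    v = (c0 * v
         + c1 * r1 * (p - x)      # cognitive pull toward personal bests
         + c2 * r2 * (g - x)      # social pull toward the global best
         + c3 * r3 * (l - x))     # pull toward neighborhood bests
    return x + v, v
```

The personal, global, and neighborhood best updates (lines 14-22) are then applied outside this step after re-evaluating the fitness of the new positions.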

2.2.2. Adaptive Elitist-Enhanced Genetic Algorithm (GA)

The adaptive elitist-enhanced GA was used as a benchmark. The genetic operators included roulette wheel selection, single-point crossover, bit-flip mutation, and elitism. A binary representation was implemented. An adaptive mutation factor F_m was implemented to increase the mutation rate p_m up to an upper bound of p_m = 0.05 when no progress was indicated and to reduce it down to a lower bound of p_m = 0.001 when progress was indicated, as given in Equation (1). Here, ff^(avg) represents the average value of the fitness function evaluations, monitored separately over two different periods: first, over generations g−6 to g−4 (g represents the current generation), i.e., ff^(avg)_(g−6:g−4); second, over generations g−3 to g−1, i.e., ff^(avg)_(g−3:g−1). All the calculated fitness function evaluations were taken into account. The mutation rate p_m was increased when ff^(avg)_(g−3:g−1) did not exhibit any progress; when significant progress was indicated, the mutation rate was decreased. This mechanism further decreased the probability of trapping in local optima. Algorithm 2 shows the GA pseudocode, and Table 1 shows the GA setup.
$$p_m = \begin{cases} p_m \cdot F_m, & \text{if } ff_{(g-3:g-1)}^{(avg)} \geq ff_{(g-6:g-4)}^{(avg)} \\ p_m / F_m, & \text{otherwise.} \end{cases} \qquad (1)$$
Algorithm 2 Adaptive elitist-enhanced genetic algorithm.
 1: Initialize population N_P with random solutions
 2: Evaluate the fitness of each individual in N_P
 3: while stopping criterion not met do
 4:   Convert population N_P into binary form (genes)
 5:   Select parents for reproduction
 6:   Perform crossover and mutation to create offspring
 7:   Convert binary genes of population N_P back into decimal representation
 8:   Evaluate the fitness of the offspring
 9:   Apply elitism: select the best individuals for the next generation
10:   Replace the worst individuals in the population with offspring
11:   Apply probability of mutation p_m adaptation
12: end while
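For illustration, a minimal Python sketch of the adaptive mutation-rate update of Equation (1) follows; the history bookkeeping is our simplification, the bounds p_m ∈ [0.001, 0.05] follow the text above, and F_m = 2.0 is an assumed value:

```python
def adapt_mutation_rate(p_m, ff_history, F_m=2.0,
                        p_m_min=0.001, p_m_max=0.05):
    """Update p_m per Equation (1): compare the average fitness of the
    last three generations with that of the three generations before
    them. ff_history holds per-generation average fitness values."""
    if len(ff_history) < 6:
        return p_m                          # not enough history yet
    older = sum(ff_history[-6:-3]) / 3.0    # ff_(g-6:g-4)^(avg)
    recent = sum(ff_history[-3:]) / 3.0     # ff_(g-3:g-1)^(avg)
    if recent >= older:                     # no progress: explore more
        p_m *= F_m
    else:                                   # progress: exploit more
        p_m /= F_m
    return min(max(p_m, p_m_min), p_m_max)  # clamp to [0.001, 0.05]
```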

2.2.3. Golden Eagle Optimization (GEO)

GEO is among the latest nature-inspired OAs. GEO resembles the spiral motion of golden eagles cruising above prey and, once the prey is found, attacking it. Cruise and attack effectively mirror the trade-off between the exploration and exploitation phases of swarm intelligence algorithms. The control parameters include the initial and final propensities to cruise and to attack. The pseudocode is given in Algorithm 3, and the control parameter setup is shown in Table 1.
Algorithm 3 Pseudocode of golden eagle optimizer (GEO).
 1: Initialize the population of golden eagles
 2: Evaluate fitness function
 3: Initialize population memory
 4: Initialize p_a^(GEO) and p_c^(GEO)
 5: for each iteration t do
 6:   Update p_a^(GEO) and p_c^(GEO)
 7:   for each golden eagle i do
 8:     Randomly select a prey from the population's memory
 9:     Calculate attack vector A
10:     if length of attack vector A ≠ 0 then
11:       Calculate cruise vector C
12:       Calculate step vector Δx
13:       Update position: x_i = x_i + Δx
14:       Evaluate fitness function for the new position
15:       if fitness is better than the position in eagle i's memory then
16:         Replace the position in eagle i's memory with the new position
17:       end if
18:     end if
19:   end for
20: end for
21: Return the best solution
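For illustration, a simplified two-dimensional sketch of a single eagle move follows. In 2D, a cruise vector orthogonal to the attack vector can be written down directly, whereas the general D-dimensional construction in [18] solves a hyperplane equation; the names and the random scaling below are our simplifications:

```python
import numpy as np

def geo_step_2d(x_i, prey, p_a, p_c):
    """One golden eagle move in 2D (lines 9-13 of Algorithm 3).
    x_i  : current eagle position, shape (2,)
    prey : a position drawn from the population memory, shape (2,)
    p_a, p_c : current propensities to attack and to cruise
    """
    A = prey - x_i                      # attack vector points at the prey
    norm_A = np.linalg.norm(A)
    if norm_A == 0.0:                   # eagle already at the prey: no move
        return x_i
    C = np.array([-A[1], A[0]])         # a cruise vector orthogonal to A
    step = (np.random.rand() * p_a * A / norm_A
            + np.random.rand() * p_c * C / norm_A)   # |C| == |A| in 2D
    return x_i + step
```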

2.2.4. Description of the Fitness Function

The fitness function value used to evaluate the quality of each particle or individual during the execution of the OA is denoted as ff. Calculation of the fitness function consists of two steps. First, the fitness function ff_j for each measured position j is calculated as the distance between (x_mj, y_mj) and the closest equidistant position on the wall (x_wi, y_wi), as described in Equation (2). Next, the average of these fitness functions is calculated as the final fitness function ff, as described in Equation (3):

$$ff_j = \sqrt{(x_{w_i} - x_{m_j})^2 + (y_{w_i} - y_{m_j})^2}, \qquad (2)$$

$$ff = \frac{\sum_{j=1}^{m} ff_j}{m}, \qquad (3)$$

where i = 1, …, n and j = 1, …, m; n represents the number of equidistant positions on the walls between the corners of the mobile robot workspace in the base coordinate system, and m represents the number of positions measured by the LIDAR in the base coordinate system. x_mj and y_mj represent positions measured by the LIDAR in the base coordinate system, as denoted in Figure 4. x_wi and y_wi represent equidistant positions on the walls between the corners of the mobile robot workspace in the base coordinate system, as denoted in Figure 4.
Figure 4 shows the organization of the workspace. The coordinate axes ( x b , y b ) define the base coordinate system. The red diamonds denote the corners of the workspace. The blue diamonds denote equidistant positions on the wall x w i ,   y w i . The green diamonds denote measured distances between the mobile robot and the walls by the LIDAR x m j ,   y m j . A triangle and a forward path line define the location and orientation of the mobile robot. The black lines define the walls of the workspace.
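For illustration, a minimal Python sketch of Equations (2) and (3) follows. The wall-sampling step of 5 cm and all names are our assumptions, and the LIDAR points are assumed to be already transformed into the base coordinate system for a candidate pose:

```python
import numpy as np

def wall_points(corners, step=0.05):
    """Sample equidistant points (x_wi, y_wi) along the walls between
    consecutive workspace corners (closed polygon), every `step` meters."""
    pts = []
    for a, b in zip(corners, np.roll(corners, -1, axis=0)):
        n = max(int(np.linalg.norm(b - a) / step), 1)
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            pts.append(a + t * (b - a))
    return np.array(pts)

def fitness(lidar_pts, walls):
    """Equations (2)-(3): mean distance from each measured point
    (x_mj, y_mj) to its closest equidistant wall point (x_wi, y_wi)."""
    d = np.linalg.norm(lidar_pts[:, None, :] - walls[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Example: the 4 x 3.2 m laboratory workspace of Figure 3
corners = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.2], [0.0, 3.2]])
walls = wall_points(corners)
```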

2.3. Algorithm for Avoiding the Local Minima (ALM)

The scientific literature has often criticized PSO for not having any local optima avoidance mechanism. Especially when relying on the g_best action, which may cause premature convergence, and without proper momentum settings, PSO can easily become trapped in local minima [23]. This problem is usually solved by repeated dispersion of particles from randomly chosen initial positions, even several times if needed. Knowing the global minimum helps significantly in achieving it, although, in the majority of cases, the global optimum is not known in practice. In our case, the fitness function value of the global minimum was not known directly. Furthermore, due to the noise of the LIDAR used, a solution in a "spurious" local minimum may have the same or, in rare cases, even a better fitness function value than the "true" global minimum. This means that we cannot directly determine whether the OA is trapped. However, the integrity of the solution can be checked indirectly by moving the robot by a known distance and rotation (the absolute value of the difference between the reference position and rotation and the previously calculated actual position and rotation) and making another measurement, hence effectively comparing two consecutive measurements. If the integrity check returns nonsense with regard to the Markov chain, this is a reliable indicator that the OA is trapped in a local minimum.
To cope with this, the classical OAs were tailored to address the challenge of local optima effectively. The idea of avoiding the local minima in the PSO, GA, and GEO algorithms is not a general solution but rather an application-oriented one: it is valid only for mobile robot localization or very similar applications with low signal-to-noise ratios. The odometrical method accurately measures the distance traveled and the orientation change between two relatively close positions of the mobile robot. Over longer traveled distances, the odometrical method is not a reliable indicator due to omni-wheel slip and the constantly accumulating error (rotation about the Z axis in particular causes odometrical challenges). However, for shorter moves of up to two meters of linear travel and a rotation of less than 90°, the odometrical method works just fine, imposing only a few centimeters of error between two consecutive movements.
Practically, it was realized that a distance error margin of Δp < 15 cm and a rotation error margin of Δω < 15°, accounting for omni-wheel slip and accumulating error, represent sufficient error margins based on practical observations.
Meeting these two error margins simultaneously means passing the integrity check, meaning that the global minimum was found. When the OA does not find the global optimum within a given maximum number of iterations or generations, the algorithm is reset and run again. If both error margins are met, the integrity check is passed and the global best solution is returned; otherwise, the OA is repeated by dispersing the particles or individuals again in a while loop. The process is repeated until the tested solution meets the imposed error margins, which, unfortunately, imposes potential time delays.
The so-called avoiding local minima (ALM) search strategy commences by initializing the particles or individuals randomly. The variable repeated monitors the number of OA resets after not finding the global optimum; it is set to 0. The variables p_r1 and ω_r1 denote the unknown initial reference position and orientation, while p_r2 and ω_r2 denote the unknown ending reference position and orientation. The differences Δp_r and Δω_r measure the reference move.
The variables p_a1 and ω_a1 indicate an OA trial solution for the initial position and orientation, where a trial solution means each particle or individual in every epoch or generation. The variables p_a2 and ω_a2 indicate an OA trial solution for the ending position and orientation. The differences Δp_a and Δω_a indicate the move implied by the given trial solution (see Figure 5).
The procedure for the ALM search strategy is as follows: First, the robot is placed at an unknown position p_r1 within the predetermined search space, and the LIDAR measurements are performed. Next, the robot is moved to position p_r2, and the LIDAR measurements are performed and recorded again. The reference distance and rotation changes are calculated as Δp_r = p_r1 − p_r2 and Δω_r = ω_r1 − ω_r2.
Then, the OA is executed to calculate and evaluate the trial solutions. The best solutions p_a1 and ω_a1 are output for the initial position and p_a2 and ω_a2 for the ending position. These are then compared to calculate the distance and orientation errors Δp and Δω. As long as these two fall outside the allowed error margins, the OA is rerun with a new random dispersion, and the variable repeated is incremented with each rerun. Once the errors meet the error margins, the global best solutions (p_gbest1, ω_gbest1) and (p_gbest2, ω_gbest2) are output.
The ALM search strategy pseudocode is exhibited in Algorithm 4.
Algorithm 4 The ALM initial search strategy pseudocode.
 1: Move the robot to (p_r1, ω_r1)
 2: Measure the LIDAR data
 3: Move the robot to (p_r2, ω_r2)
 4: Measure the LIDAR data
 5: Input: reference distance and rotation (Δp_r, Δω_r)
 6: repeated ← 0
 7: Calculate: optimization algorithm result (p_a1, ω_a1)
 8: Calculate: optimization algorithm result (p_a2, ω_a2)
 9: Δp_a ← p_a1 − p_a2
10: Δω_a ← ω_a1 − ω_a2
11: Δp ← Δp_a − Δp_r
12: Δω ← Δω_a − Δω_r
13: while Δp > 15 cm or Δω > 15° do
14:   Calculate: optimization algorithm result (p_a1, ω_a1)
15:   Calculate: optimization algorithm result (p_a2, ω_a2)
16:   Δp_a ← p_a1 − p_a2
17:   Δω_a ← ω_a1 − ω_a2
18:   Δp ← Δp_a − Δp_r
19:   Δω ← Δω_a − Δω_r
20:   repeated ← repeated + 1
21: end while
22: (p_gbest1, ω_gbest1), (p_gbest2, ω_gbest2) ← global best solutions for both poses
23: return (p_gbest1, ω_gbest1), (p_gbest2, ω_gbest2)
If both error margins are met, the integrity check is passed, and the two poses (p_gbest1, ω_gbest1) and (p_gbest2, ω_gbest2) are output from the ALM as the global minimum solution. If the error margins are not met, the search is repeated by dispersing the particles or individuals again until the tested solution meets the imposed error margins. Note that the OA needs to be run twice for the two unknown initial robot positions, (p_a1, ω_a1) and (p_a2, ω_a2). However, for each subsequent position, e.g., a third position or (p_a,i+1, ω_a,i+1) in general, just a single OA run is necessary, as the i-th position is already known. Therefore, Algorithm 4 was modified slightly, as shown in Algorithm 5. Accordingly, the average overall elapsed time t̄_o was reduced by more than half.
Algorithm 5 The ALM search strategy pseudocode for each subsequent robot position.
 1: Move the robot to (p_r,i+1, ω_r,i+1)
 2: Measure the LIDAR data
 3: Input: reference distance and rotation (Δp_r(i+1), Δω_r(i+1))
 4: repeated ← 0
 5: Calculate: optimization algorithm result (p_a,i+1, ω_a,i+1)
 6: Δp_a ← p_a,i − p_a,i+1
 7: Δω_a ← ω_a,i − ω_a,i+1
 8: Δp ← Δp_a − Δp_r
 9: Δω ← Δω_a − Δω_r
10: while Δp > 15 cm or Δω > 15° do
11:   Calculate: optimization algorithm result (p_a,i+1, ω_a,i+1)
12:   Δp_a ← p_a,i − p_a,i+1
13:   Δω_a ← ω_a,i − ω_a,i+1
14:   Δp ← Δp_a − Δp_r
15:   Δω ← Δω_a − Δω_r
16:   repeated ← repeated + 1
17: end while
18: (p_gbest,i+1, ω_gbest,i+1) ← global best solution for the i+1 pose
19: return (p_gbest,i+1, ω_gbest,i+1)
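For illustration, a minimal Python sketch of the integrity-check loop of Algorithm 5 wrapped around an arbitrary OA follows; the run_oa solver, the pose representation, and the max_restarts safeguard are our placeholders, not part of the algorithm as stated:

```python
import math

def alm_localize(run_oa, prev_pose, d_p_ref, d_w_ref,
                 eps_p=0.15, eps_w=15.0, max_restarts=20):
    """Re-run the OA with freshly dispersed particles until the move
    implied by its solution agrees with the reference move within the
    error margins (single new pose, as in Algorithm 5).
    prev_pose        : (x, y, omega) of the already localized i-th pose
    d_p_ref, d_w_ref : reference distance [m] and rotation [deg] change
    """
    repeated = 0
    while True:
        pose = run_oa()                              # (x, y, omega) trial solution
        d_p = math.hypot(pose[0] - prev_pose[0], pose[1] - prev_pose[1])
        d_w = abs(pose[2] - prev_pose[2])
        # integrity check: both margins must be met simultaneously
        if abs(d_p - d_p_ref) <= eps_p and abs(d_w - d_w_ref) <= eps_w:
            return pose, repeated
        repeated += 1
        if repeated >= max_restarts:                 # our safeguard, not in Alg. 5
            return pose, repeated
```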

2.4. Mapping of the Workspace by the ORB-SLAM2 Method and RANSAC Algorithm

The Pepper robot has 2D and 3D cameras, so applying V-SLAM for initial mapping of the workspace was a natural choice. Although the robot's LIDARs are adequate for mapping surfaces near the ground, Pepper must also watch for higher obstacles while navigating to prevent falling. Pepper's native 3D camera assures the security perimeter, so the robot stops before hitting an obstacle, but for SLAM tasks its results can be poor. Therefore, a RealSense RGB-D camera mounted on Pepper's head was used with ORB-SLAM2 for enhanced point cloud retrieval and a more detailed workspace map.
ORB-SLAM2 [24] rests on detecting distinct features that are input into the mapping algorithm. It is designed for real-time applications in robotics and computer vision. Developed as an extension of ORB-SLAM, it offers improvements in robustness, efficiency, and adaptability across monocular, stereo, and RGB-D camera inputs. The algorithm utilizes ORB features for keypoint detection and descriptor matching; ORB features are computationally efficient and invariant to rotation and scale, making them well suited for real-time SLAM applications. The algorithm can be used with monocular cameras, which require a motion-based initialization, while stereo and RGB-D cameras can compute a correctly scaled map directly from depth information. The algorithm estimates the camera pose continuously by matching the extracted ORB features with keypoints in previous frames; it employs a motion model to predict the next pose and refines the estimates using bundle adjustment. It maintains a local map of the workspace by adding new keyframes and refining 3D poses using local bundle adjustment. This process ensures accurate tracking and prevents drift in the estimated trajectory. Finally, to prevent long-term drift, ORB-SLAM2 detects loop closures by matching features from the current frame with previously visited locations. Once a loop closure is detected, the system performs pose graph optimization to correct any accumulated errors.
The ORB-SLAM2 algorithm was not run on the robot itself, as such a high processing demand would overload its CPU. Rather, the algorithm was executed on an Intel i7 laptop with 16 GB of RAM, running Ubuntu 16.04 LTS with ROS Kinetic. ORB-SLAM2 used the RealSense D435 camera in RGB-D mode with a properly set up configuration to obtain optimal features for mapping. A RANSAC algorithm was used to recognize and model the workspace walls and corners from the previously gathered map (point cloud and features).
RANSAC stands for Random Sample Consensus. It is used for finding the best-fit function for an existing dataset containing outliers. Using RANSAC, datasets can be fitted to single-variable as well as multivariable functions. Compared to the least squares method (LSM), it is more robust to outliers. RANSAC iteratively removes outliers from the dataset before fitting the function to the data by the LSM or some other M-estimator algorithm. To remove outliers, RANSAC randomly samples a subset of points from the existing dataset and fits the function to this sampled subset. Next, the absolute error between the function and the remaining points is calculated, and the points with the greatest error are considered outliers and removed from the dataset. Random sampling, error calculation, and elimination of outliers are iterated until random sampling of points from the dataset no longer changes the parameters of the fitted function. Many improvements of the RANSAC algorithm have been proposed, among them MLESAC [25].
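For illustration, a minimal Python sketch of RANSAC fitting a single wall line to noisy 2D points follows; the iteration count and inlier tolerance are illustrative values, and the actual pipeline extracts several such walls and intersects them to obtain the corners:

```python
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=0.03):
    """Fit a 2D line to `points` (N, 2) robustly. Returns a point on
    the best line, its unit direction, and the inlier mask."""
    best_p0 = points[0]
    best_d = np.array([1.0, 0.0])
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = np.random.choice(len(points), 2, replace=False)
        p0, p1 = points[i], points[j]
        d = p1 - p0
        norm = np.linalg.norm(d)
        if norm == 0.0:
            continue                      # degenerate sample, try again
        d = d / norm
        r = points - p0
        # perpendicular distance of every point to the candidate line
        dist = np.abs(r[:, 0] * d[1] - r[:, 1] * d[0])
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_p0, best_d, best_inliers = p0, d, inliers
    return best_p0, best_d, best_inliers
```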

3. Results

The experiments were divided into two large segments: simulation experiments and real-world laboratory (lab) experiments. The simulation experiments were performed first and served as a proof of concept. Once the concept was proven, the real lab experiments were performed.

3.1. Conduct of Experiments and Experimental Setup

The conduct of the simulation experiments was as follows. First, the MATLAB/Simulink environment was used to design the exact workspace as in the real lab experiment. Then, the model of the LIDAR measurements was input. Different qualities of the simulated LIDAR measurements and workspace corners were imposed, ranging from the most accurate to the least accurate, by employing uniform random perturbations. Perturbation of the LIDAR measurements represents the errors imposed by the LIDAR. Perturbation of the workspace corners represents the errors imposed by the mapping algorithm (ORB-SLAM2 and RANSAC), since the latter also deviates from the ground truth. The OAs were then executed, and their performance was monitored closely. The OAs' control parameters were set as outlined in Table 1.
The conduct of the real lab experiments was as follows. A laboratory workspace was built, and Pepper was placed into it. The first set of LIDAR measurements was taken. Then, Pepper was moved by a certain distance and rotation. Next, a second set of LIDAR measurements was taken. Then, the OA was executed on the remote computer to calculate Pepper's position.

3.2. Simulation Experiments

The simulation experiments were run completely in the MATLAB/Simulink environment with the objectives of (1) modeling the LIDAR errors and the RANSAC workspace corner definition errors and (2) assessing the underlying OA performance. Such simulation experiments were needed to estimate the influence of the quality of both the real mobile robot LIDAR measurements and the workspace corner measurements, helping improve the quality of the OA performance in calculating position and orientation.
Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 represent the simulation experiments for different magnitudes of error perturbation. Representation of the robot's position and orientation is consistent throughout this paper. The red and green triangles present the two-dimensional footprint of Pepper. The straight black lines present the orientation of the mobile robot. The green triangles represent the simulated ground-truth position and orientation. The red triangles represent the simulated position and orientation computed by the OA. Seven different mobile robot positions and orientations, i.e., M-poses, M1 to M7, are drawn in each of Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10. These M-poses represent the simulated path that the Pepper model traversed. For each M-pose, the following can be evaluated: (1) the absolute differences between the actual simulated positions and orientations and the ground truth (|Δx|, |Δy|, |Δω|), (2) an indicator of the OA repetitions (random dispersions, i.e., an indicator of OA trapping), and (3) the value of the quality function of the solution Q_e, where lower values of Q_e represent statistically better results. The LIDAR errors were modeled by a random perturbation parameter, where different magnitudes of the perturbations were implemented. Two types of uniform random perturbations were modeled, i.e., different qualities of the LIDAR measurements, p_cL = {±1%, ±5%, ±10%}, as well as different qualities of the measurements of the test robot workspace corners, p_cc = {±1%, ±5%, ±10%}. Here, v_i^(c) = (v_i,x^(c), v_i,y^(c))^T represents the set of workspace corners in the x and y axes. Additionally, v_i^(L) = (v_i,x^(L), v_i,y^(L))^T represents the simulated LIDAR measurements. These undergo the random perturbations as denoted in Equation (4)
$$v_i^{(pc)} = v_i^{(c)} - \frac{v_i^{(c)} \cdot p_{c_c} \cdot \text{rand}(2,1)}{100}, \qquad (4)$$
when simulating the mapping (corner) errors, or alternatively in Equation (5) when simulating the LIDAR measurement errors
$$v_i^{(pL)} = v_i^{(L)} - \frac{v_i^{(L)} \cdot p_{c_L} \cdot \text{rand}(2,1)}{100}, \qquad (5)$$
where v_i^(pc) represents the perturbed corners and v_i^(c) stands for the unperturbed measurements (ground truth) of the robot's workspace corners, originally given in polar form (r, θ) and transformed into the Cartesian coordinate system (c.s.). Additionally, v_i^(pL) represents the perturbed LIDAR measurements. The values p_cL, p_cc = {±1%, ±5%, ±10%} represent the percentage perturbation of the LIDAR measurements or of the definition of the workspace corners, respectively. rand(2, 1) ∈ [−1, 1] represents a two-dimensional (x, y)^T random value drawn from the uniform distribution. The process of perturbing the LIDAR measurements and workspace corners is as follows.
First, the workspace corners are perturbed, given the ground truth v_i^(c): each (x, y)^T component of each workspace corner is randomly perturbed by an amount equal to p_cc · rand. This creates the perturbed workspace corners v_i^(pc). Next, based on the positions of the perturbed corners, line segments from one corner to another are drawn. Based on these, the apparent ground-truth LIDAR measurements v_i^(L) are derived mathematically. Finally, the apparent ground-truth measurements are perturbed by an amount equal to p_cL · rand, yielding v_i^(pL).
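For illustration, a short Python sketch of the perturbation model of Equations (4) and (5) follows; the same function applies to corners with p_cc and to LIDAR measurements with p_cL, and the names mirror the equations:

```python
import numpy as np

def perturb(v, pc):
    """Equations (4)-(5): perturb a 2D point v = (x, y)^T by up to
    +/- pc percent of its own magnitude, componentwise.
    rand(2,1) is drawn from the uniform distribution on [-1, 1]."""
    r = np.random.uniform(-1.0, 1.0, size=2)
    return v - v * pc * r / 100.0

# Example: perturb a workspace corner with p_cc = +/-5%
corner = np.array([4.0, 3.2])
corner_p = perturb(corner, pc=5.0)
```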
Furthermore, the quality of the OA solutions is calculated using the quality function Q_e, as denoted in Equation (6):
$$Q_e = \left( \frac{\sum_{i=1}^{7} |\Delta x_i|}{7} + \frac{\sum_{i=1}^{7} |\Delta y_i|}{7} + \frac{\sum_{i=1}^{7} |\Delta \omega_i|}{7} + \frac{\sum_{i=1}^{7} repeated_i}{7} \right) \Big/ 4, \qquad (6)$$
where Δx presents the absolute error between the actual and ground-truth position in the x axis of the mobile robot, Δy presents the absolute error between the actual and ground-truth position in the y direction, Δω presents the absolute error between the actual and ground-truth orientation, and repeated represents the number of OA dispersions when finding the global minimum. As a sanity check, two evaluations of the quality function value Q_e were performed, as follows:
  • LIDAR measurements perturbed by p_cL = ±5% and corners perturbed by p_cc = ±5%: Q_e = 2.25;
  • LIDAR measurements perturbed by p_cL = ±10% and corners perturbed by p_cc = ±10%: Q_e = 4.26.
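For completeness, Equation (6) reduces to a simple average over the seven M-poses; a minimal sketch:

```python
import numpy as np

def quality(dx, dy, dw, repeated):
    """Equation (6): average the per-pose absolute errors and the
    number of OA re-dispersions over the 7 M-poses, then average
    the four resulting terms."""
    terms = [np.mean(np.abs(dx)), np.mean(np.abs(dy)),
             np.mean(np.abs(dw)), np.mean(repeated)]
    return sum(terms) / 4.0
```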

3.3. Practical Laboratory Experiments

The real laboratory LIDAR measurements on the mobile robot Pepper exposed a significant drawback: the LIDAR measurements were of severely low quality. The low-quality LIDAR measurements for all M-poses can be observed in Figure 11. The purple stars mark the ideal distances from the robot to the edges of the workspace. The green stars mark the actual distances measured by the LIDAR. The deviation between the two represents the error. Larger errors were observed (1) for the outer edges of the LIDAR FOV and (2) with increasing distance to the wall. Hence, the laser beam scanning width (FOV) was limited to 44°, and the maximum measured distance was limited to below 3.5 m.

3.3.1. The Plain Vanilla PSO Benchmark and Representation of the Local Optima Trapping

Within the workspace, there are several different locations that yield similar solutions. The low accuracy of the LIDAR could favor several possible locations (local optima) rather than the ground-truth location (global optimum), thereby imposing multiple local optima. Such a location is pose P1, denoted in Figure 12. Pose P1 was used in a plain vanilla PSO experiment. The objective of this experiment was to examine whether the plain vanilla PSO could become trapped in the local optima. The experiment was carried out by executing PSO just once, without the local optima avoidance mechanism. The whole procedure was run multiple times to reveal potential local optima. Figure 12 represents the two possible results that PSO statistically favored. In the upper subplot, the mobile robot was trapped in a local optimum, causing a significant deviation of the measured location from the ground truth. The lower subplot uses the identical set of LIDAR measurements, with PSO run again from the beginning. This time, the global optimum was found, decreasing the deviation from the ground truth significantly. Further analysis showed that the local optimum scored a lower fitness function value ff than the global optimum, contributing to the complexity of the challenge and underlining the necessity of the local optima avoidance mechanism.

3.3.2. Quality Function Value Convergence for Pose P1 (Q_e^P1) with Respect to the Population Size N_P and Number of Iterations N_i

The number of iterations N_i and the population size N_P in the PSO-ALM algorithm had an important influence on the elapsed execution time of the PSO algorithm and on the quality of the calculated position and orientation of the mobile robot. The influence of N_i and N_P on the quality of the PSO was again measured at pose P1 (Figure 13). The quality function value Q_e was tailored to a single pose, as denoted in Equation (7):
$$Q_e^{P1} = \frac{|\Delta x| + |\Delta y| + |\Delta \omega| + \overline{repeated}}{4}. \qquad (7)$$
Table 2 shows the results of varying the number of iterations N_i and the number of particles N_P, while simultaneously monitoring the quality of the calculated positions and orientations at P1 and the value Q_e^P1.
Further discussion of the convergence results is given in Section 4.

3.3.3. PSO Localization with ±1% Accurate Measured Corners of the Workspace (Handheld Method)

The first lab experiment with the PSO with the local optima avoidance mechanism was intended to prove its functioning. The experiment consisted of measuring the corners of the workspace manually with a high degree of accuracy, giving the obtained map to the PSO, and calculating the absolute positions of Pepper within this workspace. The real ground-truth positions and orientations (green) and the actual positions and orientations (red) computed by the described PSO algorithm for all M-poses are presented in Figure 14. The real LIDAR measurements, as presented in Figure 11, were used for calculating the positions and orientations of the mobile robot with the PSO algorithm. The manually measured corners of the workspace were estimated to be within ±1% accuracy.
An experiment of scanning with different FOV angles, while simultaneously monitoring the quality function values and the elapsed time, was performed in order to determine the optimal LIDAR FOV scanning angle. Table 3 presents the results for the different FOV angles. The average elapsed time of a single measurement and a single PSO execution is denoted as t̄_e. The average overall time of a single measurement and multiple PSO executions (on average, repeated times) in the case of trapping is denoted as t̄_o. Equation (8) gives the formula for the average overall time:
$$\overline{t_o} = \overline{t_e} \cdot \overline{repeated} + \overline{t_e}. \qquad (8)$$
In general, the higher the FOV, the longer the average elapsed scanning time t̄_e, and the longer the average total overall time t̄_o. For the practical experiments, the FOV angle was limited to FOV = 44° as a compromise between the average quality function values Q_e and the time complexity t̄_o.

3.3.4. PSO Localization with ±15% Accurate Measured Corners of the Workspace (ORB-SLAM2 and RANSAC Algorithm Method)

The next lab experiment consisted of automated workspace corner measurements using the ORB-SLAM2 mapping method and a RANSAC algorithm for corner extraction using the camera. The ground truth obtained from the handheld method is not shown here. The accuracy of the RANSAC workspace corner definitions was estimated at ±15%. The diminished accuracy mirrored the increased random perturbation parameter p_cc used in the simulations, which, naturally, complicated the challenge of absolute localization within the workspace. Figure 15 presents a typical workspace corner identification and the related errors. The quality function value increased from Q_e = 2.88 for the handheld method to Q_e = 11.14 for the RANSAC method, thus weakening the localization capability significantly.

3.3.5. Influence of Obstacles in the Mobile Robot Workspace

Calculations of the positions and orientations of the mobile robot with obstacles inside the robot's workspace were performed by the PSO-ALM for three types of obstacles: minor obstacles (legs of chairs), medium obstacles (human feet), and major obstacles (an overturned chair) (see Figure 16).
The minor obstacles did not cause additional position and orientation errors for the PSO-ALM. This was because the number of active segments (FOV) could be modified; for example, taking the central LIDAR segment plus n = 1 segment to the left and n = 1 to the right defines three active LIDAR segments, i.e., a joint FOV = 12°. However, imposing medium and large obstacles on the workspace affected the LIDAR measurements heavily. Still, the size of the obstacles was not the only relevant factor. The distance from the LIDAR to the obstacle also played a crucial role: the further away the obstacle, the less the FOV was affected. Therefore, both the distance from the LIDAR to the obstacle and the obstacle's size were relevant factors in assessing the influence of the obstacles. Practical experiments showed that, when only one or two LIDAR FOV segments were affected, no additional error was caused by the obstacles. Therefore, the human feet and the overturned chair placed far away from the mobile robot, e.g., more than 3 m, did not impose any additional errors. On the other hand, the human feet and the chair placed 0.5 m from the LIDAR spoiled the measurement completely.

3.3.6. Comparison Between the Odometrical Calculations and PSO Calculation of the Mobile Robot Positions and Orientations

Finally, a comparison was carried out between the positions and orientations calculated by the robot's built-in odometrical algorithm, as a benchmark, and those calculated by the PSO-ALM. The experiment was performed as follows: The robot was reset at pose M1. Then, it was manually driven to each next M-pose consecutively and stopped there, until all M-poses had been visited. The OA was used to calculate the position and orientation sequentially at each pose when stopped. The error was calculated at each pose for both methods, and the results are collated in Table 4. Δx, Δy, and Δω and their averages are shown for all M-poses. Further discussion is provided in the Discussion Section.
Direct comparison between the odometrical algorithm and the PSO-ALM showed that the odometrical errors increased rapidly after the recalibration, i.e., the initialization at pose M1. In absolute terms, their average values were more than twice as high as those of the PSO-ALM. In general, the longer the path, the larger the odometrical error due to disturbance integration and more frequent wheel slippage. As the PSO-ALM localization is absolute, its error remained stable across the whole path. From the table, one can notice that, when moving the robot incrementally from one point to another, the maximum distance error never exceeds 15 cm, nor does the rotation error exceed 15°, thereby confirming the sufficient error margins.

3.3.7. Comparison of the PSO-ALM Against the Plain Vanilla PSO, GA, GEO, GA-ALM, and GEO-ALM

Similar nature-inspired OAs were used as benchmark algorithms for our PSO-ALM algorithm. The benchmark algorithms were as follows: the plain vanilla PSO, the plain vanilla GA with added elitism and adaptive mutation rate, the plain vanilla GEO, and their combinations with the ALM extension to prevent trapping in local minima: PSO-ALM, GA-ALM, and GEO-ALM. We also compared the built-in odometrical algorithm along with the previously mentioned six OAs to assess their suitability for the indoor localization application on the mobile robot Pepper. We checked the accuracy of the measured position M5 (see Figure 14), how many times, on average, the algorithms were trapped in local minima over 10 real lab tests per OA, performed as described in Section 3.3.3, and the minimum and maximum overall elapsed times for the code written in the Python language. We observed only the measured position M5 because, in our experience, this was the position, among all seven measured positions, where the OAs were most often trapped in local minima. The time for acquiring all the LIDAR measurements for FOV = 44° and for the Wi-Fi data transfer (IEEE 802.11 a/b/g/n; security: 64/128-bit WEP, WPA/WPA2) to the PC (Intel(R) Core(TM) i7-14700, 2.53 GHz, 31.7 GB RAM, Windows 11), where the OAs were executed, was approx. 0.18 s. This was the same for all the algorithms except the odometrical algorithm, which was executed on the built-in computer of the mobile robot Pepper and did not need the remote PC. Table 5 shows the comparison data.

4. Discussion

We have proposed a novel PSO-ALM to cope with the absolute localization of a mobile robot using low-quality LIDAR measurements. The experimental work was divided into simulations and real lab experiments. The simulations modeled random perturbations of both the LIDAR measurements and the camera-mapped workspace corner definitions. The former modeled the random disturbances caused by LIDAR inaccuracies; the latter was used for workspace mapping, since the localization itself is not capable of simultaneous mapping, so a separate mapping of the workspace (also called corner identification) was conducted beforehand. The real laboratory tests included manual identification of the workspace corners, as well as automated identification using 3D camera mapping.
The simulation tests showed that random perturbations of p_cL = ±1% in the simulated LIDAR measurements had only a small effect on the accuracy of the calculated OA positions and orientations. Increasing p_cL lowered the quality of the results slightly but did not compromise the robot's localization. Even at the highest perturbation, p_cL = ±10%, the deviations remained at or below 8 cm; only a single case had Δx, Δy = 8 cm and Δω = 5.2°. The definition of the workspace corners had the dominant effect on the localization accuracy.
Increasing the perturbation of the workspace corners in the simulations from p_cc = ±1% to p_cc = ±5% or p_cc = ±10% caused a significant deterioration in the accuracy of the PSO-ALM results. Simulated deviations of up to Δx = 23 cm and Δω = 5.5° were observed in the worst-case scenario. In the practical lab experiments, the accuracy of identifying the corner positions with ORB-SLAM2 was estimated at p_cc = ±15%, even larger than the simulated perturbations. The estimated errors reached Δx = 39 cm, Δy = 23 cm, and Δω = 8.7° in the worst case. The general shape of the workspace was still recognizable and the OA kept functioning, but the quality of the solutions was worse than expected. An adequate definition of the robot's workspace area must therefore take priority over the LIDAR measurements, and significant care must be taken in defining it properly.
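The perturbation model used in the simulations is easy to reproduce. The following sketch applies uniform multiplicative noise of ±p_cL to simulated LIDAR distances and ±p_cc to workspace corner coordinates; the specific distances and corner coordinates below are placeholders, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def perturb(values, pc):
    """Uniform multiplicative noise of +/- pc (pc = 0.01 means +/- 1%)."""
    values = np.asarray(values, dtype=float)
    return values * (1.0 + rng.uniform(-pc, pc, size=values.shape))

# Placeholder data: three simulated LIDAR distances [m] and four map corners [m].
lidar_m = np.array([1.85, 2.10, 2.40])
corners_m = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])

noisy_lidar = perturb(lidar_m, pc=0.01)      # p_cL = +/- 1%
noisy_corners = perturb(corners_m, pc=0.05)  # p_cc = +/- 5%
```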
The real lab tests were much affected by heavily deteriorated LIDAR measurements, especially for distances greater than 2 m and measurement angles greater than 44°. An experiment was performed to determine the FOV at which the disturbances were lowest, using the quality function value Q_e to quantify them. A separate experiment was performed to determine adequate PSO-ALM parameter settings. The influence of obstacles placed in front of the LIDAR was also studied carefully: the closer the obstacle, the greater the disturbance and the worse the localization accuracy. Despite the inaccurate LIDAR measurements, the PSO calculations of the mobile robot positions and orientations remained close to the ground-truth poses of the real robot (see Figure 14). The average absolute errors in the base coordinate system were Δx = Δy = 3.43 cm over all seven measurements, and the average absolute orientation error was Δω = 2.92°.
The elapsed time t_e decreased with lower FOV settings, as expected. The most adequate setting for the lab measurements appeared to be FOV = 44°, a compromise between time complexity and sensitivity to obstacles. The FOV could be decreased further to FOV = 20°, decreasing the measurement time t_e significantly and increasing the quality function value Q_e of the measurements due to less dispersed beams, but increasing the sensitivity to obstacles. With more accurate LIDAR sensors, the proposed algorithm would produce fewer local-minima solutions and converge toward the global minimum faster; the method should therefore generalize and work even better with more accurate LIDARs.
The experiments showed that the best quality function value for pose P1, Q_e^P1, was obtained with a population size of N_P = 50 or N_P = 70 and N_i = 100 iterations. Increasing the population size further did not improve the result and only worsened the time complexity. Although not detailed in this paper, the real lab experiments were also solved with the traditional genetic algorithm (GA) for a fair comparison. The GA performed similarly in terms of the quality function value Q_e, but with a significantly higher overall elapsed time t_o, approx. 3 to 4 times that of the PSO.
Extensive testing was performed for pose P1, which has multiple local optima within the workspace, and convergence plots of the fitness function value were generated for it. Figure 12 shows the two possible outcomes: a solution trapped in a local optimum and the global optimum. Executing the plain vanilla PSO twice and monitoring the performance after 160 iterations, the fitness function values lay between f_f = 0.132 and f_f = 0.136 in both experiments, yet one run was trapped in a local optimum while the other reached the global optimum. Wrong positions and orientations were calculated four times more often than the correct ones, because the local-optimum solution had a fitness function value lower than that of the global optimum. This was a significant drawback of the plain vanilla PSO algorithm. Hence, the integrity check and the PSO-ALM were introduced to verify the correctness of the calculated results, since the fitness function value alone could not reveal whether a solution was trapped. The PSO-ALM demonstrated superior performance compared to the odometric method and outperformed all the benchmarks: the GA, GEO, GA-ALM, and GEO-ALM.
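To make the integrity-check idea concrete, the sketch below shows one plausible realization, not the exact fitness function of the paper: a candidate pose is rejected and the swarm is re-run when the pose is inconsistent with the previously accepted one, using the 15 cm / 15° inter-pose margins identified in Section 3.3.6 as consistency bounds. The `run_pso` callable and the restart budget are assumptions:

```python
import math

# Assumed consistency bounds, taken from the inter-pose margins in Section 3.3.6.
MAX_STEP_DIST_CM = 15.0
MAX_STEP_ROT_DEG = 15.0

def is_consistent(prev_pose, cand_pose):
    """True when the candidate pose is reachable from the previous one."""
    dx = cand_pose[0] - prev_pose[0]
    dy = cand_pose[1] - prev_pose[1]
    dw = (cand_pose[2] - prev_pose[2] + 180.0) % 360.0 - 180.0
    return math.hypot(dx, dy) <= MAX_STEP_DIST_CM and abs(dw) <= MAX_STEP_ROT_DEG

def localize_with_alm(run_pso, prev_pose, max_restarts=5):
    """Re-run the (hypothetical) PSO until the solution passes the
    consistency check or the restart budget is exhausted."""
    cand = None
    for _ in range(max_restarts):
        cand = run_pso()                     # returns (x, y, omega)
        if prev_pose is None or is_consistent(prev_pose, cand):
            return cand
    return cand  # fall back to the last candidate
```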

5. Conclusions

The absolute localization of the semi-humanoid mobile robot Pepper in a reference Cartesian coordinate system was presented in this paper. Three sets of light detection and ranging (LIDAR) distance measurements were used: a single measurement ahead of the robot and two measurements to its sides. Simulations were performed, as well as real laboratory experiments. Based on the knowledge obtained from the simulations, a modified particle swarm optimization algorithm with a local optima avoidance mechanism (PSO-ALM) and an integrity check for verifying the correctness of the solution was proposed and tested in a real-world robot workspace. The proposed solution was compared to the plain vanilla PSO, which suffered from local optima trapping; experimentally, solutions trapped in local optima occurred more than four times as often as the global optimum. The built-in odometrical algorithm, in turn, depended strongly on the distance traveled, since it suffers from the classic disturbance integration that grows with distance. The PSO-ALM worked well when the workspace was known with high accuracy; when a low-accuracy workspace plan obtained from automated ORB simultaneous localization and mapping was supplied, its performance decreased heavily. This is one of the drawbacks of the PSO-ALM that should be addressed in the future, i.e., allowing for simultaneous localization and mapping rather than localization alone. Although some cues were given in this paper, a rigorous statistical analysis, including the time complexity of the proposed algorithm, still needs to be performed. Another possible improvement is the fusion of localization data from different sources for improved reliability, e.g., using an extended Kalman filter or similar. Fusing odometry with the LIDAR seems a natural choice for future work: the odometry is accurate enough over short distances but prone to drift, and such drift could be reset occasionally using the LIDAR localizations. If successful, the proposed method should also be validated in a three-dimensional workspace in the future, including the z axis, for the localization of airborne unmanned aerial vehicles.
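As a pointer for the proposed future work, a drift-reset fusion of odometry and LIDAR localization could be prototyped along the following lines. This is a deliberately simplified constant-gain corrector, not a full extended Kalman filter (which would also propagate covariances); the gain K and all numeric values are placeholders:

```python
import numpy as np

# Placeholder correction gain per state component (x, y, omega); a full EKF
# would compute this gain from propagated covariances instead.
K = np.diag([0.8, 0.8, 0.8])

def fuse(pose, odom_delta, lidar_fix=None):
    """Predict with odometry; correct toward the absolute LIDAR fix if given."""
    pose = np.asarray(pose, dtype=float) + np.asarray(odom_delta, dtype=float)
    if lidar_fix is not None:
        pose = pose + K @ (np.asarray(lidar_fix, dtype=float) - pose)
    return pose

p = np.array([0.0, 0.0, 0.0])               # (x [m], y [m], omega [deg])
p = fuse(p, odom_delta=[0.10, 0.02, 1.5])   # odometry-only step
p = fuse(p, odom_delta=[0.08, 0.00, 0.5],   # step with an absolute reset
         lidar_fix=[0.20, 0.00, 2.0])
```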

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/s25206283/s1, Video S1: Demo of the mobile robot Pepper going through the M-poses within the workspace when performing the localization process.

Author Contributions

Conceptualization, B.B., R.Š. and E.R.W.; methodology, R.Š.; software, K.K. and E.R.W.; validation, D.F., S.U. and I.M.; formal analysis, B.B.; investigation, R.Š.; resources, R.Š. and I.M.; data curation, K.K.; writing—original draft preparation, B.B., D.F., S.U. and R.Š.; writing—review and editing, B.B. and R.Š.; visualization, R.Š.; supervision, I.M.; project administration, I.M. and R.Š.; funding acquisition, S.U. and R.Š. All authors have read and agreed to the published version of the manuscript.

Funding

The study was carried out under the Horizon H2020-funded project HosmartAI (Grant no. 101016834) and the Slovenian Research Agency (ARIS) research and infrastructure programme (Grant nos. P2-0069 and P2-0028). The funding sources had no role in the design of the study, in the decision to carry it out, or in the decision to publish this research. The content of this paper does not reflect the official opinion of the European Union or any other institution. Responsibility for the information and views expressed therein lies entirely with the authors. The APC was funded by ARIS Grant no. P2-0028.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Erik Rot Weiss was employed by the company HSE Invest, and author Kristijan Korez was employed by the company Piktronik. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PSO: Particle swarm optimization
PSO-ALM: PSO with local minima avoidance mechanism
GA: Genetic algorithm
GA-ALM: GA with local minima avoidance mechanism
GEO: Golden eagle optimization
GEO-ALM: Golden eagle optimization with local minima avoidance mechanism
M-poses: Stationary poses within the workspace for mobile robot localization
LIDAR: Light detection and ranging
SLAM: Simultaneous localization and mapping
V-SLAM: Visual SLAM
RANSAC: Random sample consensus
MCL: Monte Carlo localization
FOV: Field of view
f_f: Fitness function
Q_e: Quality function value
Q_e^P1: Quality function value for pose P1

References

1. Huang, Y.H.; Lin, C.T. Indoor localization method for a mobile robot using LiDAR and a dual AprilTag. Electronics 2023, 12, 1023.
2. Sukop, M.; Grytsiv, M.; Jánoš, R.; Semjon, J. Simple ultrasonic-based localization system for mobile robots. Appl. Sci. 2024, 14, 3625.
3. Wang, S.; Ahmad, N.S. AI-based approaches for improving autonomous mobile robot localization in indoor environments: A comprehensive review. Eng. Sci. Technol. Int. J. 2025, 63, 101977.
4. Moreno, L.; Garrido, S.; Muñoz, M.L. Evolutionary filter for robust mobile robot global localization. Robot. Auton. Syst. 2006, 54, 590–600.
5. Zhang, T.W.; Xu, G.H.; Zhan, X.S.; Han, T. A new hybrid algorithm for path planning of mobile robot. J. Supercomput. 2022, 78, 4158–4181.
6. Havangi, R. Mobile robot localization based on PSO estimator. Asian J. Control 2019, 21, 2167–2178.
7. Zuo, T.; Min, H.; Tang, Q.; Tao, Q. A robot SLAM improved by quantum-behaved particles swarm optimization. Math. Probl. Eng. 2018, 2018, 1596080.
8. Zhang, Q.B.; Wang, P.; Chen, Z.H. An improved particle filter for mobile robot localization based on particle swarm optimization. Expert Syst. Appl. 2019, 135, 181–193.
9. Howard, A. The Robotics Data Set Repository (Radish). 2003. Available online: http://radish.sourceforge.net/ (accessed on 21 January 2025).
10. Chung, M.A.; Lin, C.W. An improved localization of mobile robotic system based on AMCL algorithm. IEEE Sens. J. 2021, 22, 900–908.
11. Ji, J.; Zhao, J.S.; Misyurin, S.Y.; Martins, D. Localization on a-priori information of plane extraction. PLoS ONE 2023, 18, e0285509.
12. de Sá, A.O.; Nedjah, N.; de Macedo Mourelle, L. Distributed efficient localization in swarm robotics using min–max and particle swarm optimization. Expert Syst. Appl. 2016, 50, 55–65.
13. Worley, R.; Ma, K.; Sailor, G.; Schirru, M.M.; Dwyer-Joyce, R.; Boxall, J.; Dodd, T.; Collins, R.; Anderson, S. Robot localization in water pipes using acoustic signals and pose graph optimization. Sensors 2020, 20, 5584.
14. Vahdat, A.R.; NourAshrafoddin, N.; Ghidary, S.S. Mobile robot global localization using differential evolution and particle swarm optimization. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 1527–1534.
15. Begum, M.; Mann, G.K.; Gosine, R. A fuzzy-evolutionary algorithm for simultaneous localization and mapping of mobile robots. In Proceedings of the 2006 IEEE International Conference on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 1975–1982.
16. Bian, X.; Zhao, W.; Tang, L.; Zhao, H.; Mei, X. FastSLAM-MO-PSO: A robust method for simultaneous localization and mapping in mobile robots navigating unknown environments. Appl. Sci. 2024, 14, 10268.
17. Carvalho, J.L.; Farias, P.C.; Simas Filho, E.F. Global localization of unmanned ground vehicles using swarm intelligence and evolutionary algorithms. J. Intell. Robot. Syst. 2023, 107, 45.
18. Mohammadi-Balani, A.; Nayeri, M.D.; Azar, A.; Taghizadeh-Yazdi, M. Golden eagle optimizer: A nature-inspired metaheuristic algorithm. Comput. Ind. Eng. 2021, 152, 107050.
19. Zhao, G.; Tao, B.; Jiang, D.; Yun, J.; Fan, H. Improved bald eagle search optimization algorithm for the inverse kinematics of robotic manipulators. Biomimetics 2024, 9, 627.
20. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95 International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948.
21. Tsujimoto, T.; Shindo, T.; Jin'no, K. The neighborhood of canonical deterministic PSO. In Proceedings of the 2011 IEEE Congress of Evolutionary Computation (CEC), New Orleans, LA, USA, 5–8 June 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1811–1817.
22. Suganthan, P.N. Particle swarm optimiser with neighbourhood operator. In Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), Washington, DC, USA, 6–9 July 1999; IEEE: Piscataway, NJ, USA, 1999; Volume 3, pp. 1958–1962.
23. Engelbrecht, A.P. Particle swarm optimization: Global best or local best? In Proceedings of the 2013 BRICS Congress on Computational Intelligence and 11th Brazilian Congress on Computational Intelligence, Ipojuca, Brazil, 8–11 September 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 124–135.
24. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
25. Torr, P.; Zisserman, A. MLESAC: A new robust estimator with application to estimating image geometry. Comput. Vis. Image Underst. 2000, 78, 138–156.
Figure 1. Pepper robot with a mounted RealSense D435 RGB-D camera on the top of the head. The video was streamed using Raspberry Pi 3.
Figure 2. Locations and FOVs of the LIDARs on Pepper.
Figure 3. (a) Bottom side of the robot's leg with three omnidirectional wheels. (b) Workspace built with OSB wooden boards.
Figure 4. Architecture of the robot's workspace. The black lines define the walls of the workspace, red diamonds represent the ground-truth corners, blue diamonds represent the equidistant positions on the walls, and green diamonds represent the measured distances from the robot to the wall.
Figure 5. Notation for the robot's poses. The blue symbols and lines represent the reference positions and orientations. The red symbols and lines represent the trial solution positions and orientations by the OA.
Figure 6. Simulated differences between the actual and ground-truth positions and orientations for LIDAR measurement accuracy p_cL = ±1% and workspace-corner accuracy p_cc = ±1%.
Figure 7. Simulated differences between the actual and ground-truth positions and orientations for LIDAR measurement accuracy p_cL = ±1% and workspace-corner accuracy p_cc = ±5%.
Figure 8. Simulated differences between the actual and ground-truth positions and orientations for LIDAR measurement accuracy p_cL = ±1% and workspace-corner accuracy p_cc = ±10%.
Figure 9. Simulated differences between the actual and ground-truth positions and orientations for LIDAR measurement accuracy p_cL = ±5% and workspace-corner accuracy p_cc = ±1%.
Figure 10. Simulated differences between the actual and ground-truth positions and orientations for LIDAR measurement accuracy p_cL = ±10% and workspace-corner accuracy p_cc = ±1%.
Figure 11. LIDAR measurements in the real lab experiment: green stars, real measurements; purple stars, prediction of exact measurements.
Figure 12. Calculating the position and orientation with the plain vanilla PSO. Subfigure (a) represents the convergence of the fitness function value for a misidentified robot location; (b) an example of a misidentified robot location, i.e., ground-truth position in green, calculated position in red; (c) the convergence of the fitness function value for a well-identified robot location; (d) an example of a well-identified robot location, i.e., ground-truth position in green, calculated position in red, the red thus covering the green triangle.
Figure 13. The P1 localization for which the influence of the number of iterations N_i and population size N_P on the enhanced quality function value was calculated.
Figure 14. Differences between the actual and ground-truth positions and orientations for the real LIDAR measurements and an estimated workspace-corner accuracy of ±1%.
Figure 15. Differences between the actual and ground-truth positions and orientations for the real LIDAR measurements and the workspace corners calculated by the ORB-SLAM2 and RANSAC algorithms.
Figure 16. (a) Minor (small) obstacles, (b) medium obstacles, and (c) major (large) obstacles.
Table 1. OAs' parameter settings. The parameters were determined based on extensive experience with the aforementioned algorithms, complemented by the experimental methods.
Parameter | Descriptor | Description | Value
Population size | N_P | Number of particles/individuals | 70
Max iterations/generations | N_i | Maximum number of iterations | 100
Dimension | D | Number of dimensions (x, y, ω) | 3
Inertia weight | c_0 (PSO) | Weight of inertia | 0.8
Cognitive const. | c_1 (PSO) | Weight for personal best | 2.1
Social const. | c_2 (PSO) | Weight for global best | 1.1
Neighborhood const. | c_3 (PSO) | Weight for neighborhood best | 0.8
Initial mutation rate | p_m (GA) | Initial mutation rate for GA | 0.001
Adaptive mutation factor | F_m (GA) | Adaptive mutation factor for GA | 2
Crossover rate | p_c (GA) | Probability of crossover | 1
Elitism | E (GA) | Number of elite individuals | 1
Propensity to cruise | p_c (GEO) | Initial and final propensities | 0.5, 0.5
Propensity to attack | p_a (GEO) | Initial and final propensities | 2, 2
Table 2. The influence of N_i and N_P on Q_e^P1. Remark: the column "repeated (avg)" is the average over N = 11 repeated runs, all of which always gave the same Δx, Δy, and Δω, demonstrating the high repeatability of the PSO-ALM.
N_P | N_i | |Δx| [cm] | |Δy| [cm] | |Δω| [°] | repeated (avg) | Q_e^P1
70 | 20 | 0 | 2 | 4.0 | 2.05 | 2.01
70 | 40 | 4 | 0 | 1.5 | 2.30 | 1.95
70 | 60 | 3 | 1 | 2.1 | 2.85 | 2.23
70 | 80 | 4 | 1 | 1.9 | 2.45 | 2.33
70 | 100 | 3 | 1 | 1.9 | 1.29 | 1.79
70 | 120 | 4 | 1 | 1.9 | 1.95 | 2.21
70 | 140 | 4 | 1 | 1.9 | 2.35 | 2.31
70 | 160 | 4 | 1 | 1.9 | 2.10 | 2.25
50 | 20 | 3 | 1 | 1.4 | 2.31 | 1.93
50 | 40 | 9 | 6 | 7.2 | 3.78 | 6.49
50 | 60 | 3 | 2 | 1.3 | 2.05 | 2.08
50 | 80 | 3 | 1 | 2.0 | 3.41 | 1.85
50 | 100 | 3 | 0 | 1.8 | 1.95 | 1.69
50 | 120 | 4 | 1 | 1.9 | 2.85 | 2.43
50 | 140 | 4 | 1 | 1.8 | 2.80 | 2.40
50 | 160 | 4 | 1 | 1.9 | 1.65 | 2.14
30 | 20 | 7 | 2 | 6.7 | 4.55 | 5.06
30 | 40 | 1 | 1 | 3.8 | 3.15 | 2.23
30 | 60 | 3 | 3 | 1.3 | 3.15 | 2.68
30 | 80 | 4 | 1 | 2.0 | 3.05 | 2.26
30 | 100 | 4 | 1 | 1.9 | 3.30 | 2.55
30 | 120 | 4 | 1 | 1.9 | 2.50 | 2.60
30 | 140 | 4 | 1 | 1.9 | 2.00 | 2.23
30 | 160 | 4 | 1 | 1.9 | 2.50 | 2.35
Table 3. Time complexities and quality function values Q_e with respect to the scanning angle FOV (sample size N = 11). The average elapsed time t_e and average overall time t_o were measured in the MATLAB 2023a environment. Note that a similar execution of the FOV = 44° scenario in Python took significantly less time, t_o = 0.46 s (see the Supplementary Material Video S1).
Scanning angle (FOV) [°] | t_e (avg) [s] | repeated (avg) | Q_e | t_o (avg) [s]
60 | 1.12 | 1.28 | 3.69 | 2.55
44 | 0.98 | 1.71 | 2.88 | 2.66
28 | 0.78 | 1.71 | 3.12 | 2.11
20 | 0.61 | 2.48 | 2.95 | 2.12
12 | 0.54 | 1.00 | 6.31 | 1.08
Table 4. Comparison between the odometrical algorithm and the PSO-ALM.
Position | Odom. |Δx| [cm] | Odom. |Δy| [cm] | Odom. |Δω| [°] | PSO-ALM |Δx| [cm] | PSO-ALM |Δy| [cm] | PSO-ALM |Δω| [°]
M1 | 0 | 0 | 0 | 3 | 2 | 2.8
M2 | 1 | 4 | 8.3 | 4 | 3 | 3.4
M3 | 8 | 4 | 10.0 | 9 | 1 | 2.2
M4 | 13 | 14 | 9.7 | 3 | 1 | 4.2
M5 | 9 | 13 | 9.2 | 1 | 1 | 0.4
M6 | 4 | 2 | 9.2 | 3 | 8 | 3.7
M7 | 17 | 10 | 18.4 | 1 | 8 | 3.7
Average | 7.42 | 7.83 | 6.7 | 3.43 | 3.43 | 2.92
Table 5. LM count = local minima count, i.e., the number of times the OA was caught in a local optimum; t_o = overall elapsed time in seconds.
OA | LM count [%] | Best Δx [cm] | Best Δy [cm] | Best Δω [°] | t_o [s]
PSO | 60.8 | 1 | 1 | 0.4 | 0.2–0.3
GA | 48.2 | 2 | 1 | 1.2 | 0.6–1.2
GEO | 80.0 | 4 | 4 | 1.7 | 0.4–0.6
PSO-ALM | 0 | 1 | 1 | 0.4 | 0.5–1
GA-ALM | 0 | 2 | 1 | 1.2 | 1.8–4.8
GEO-ALM | 0 | 4 | 4 | 1.7 | 1.3–2.6
