Article

Robust and Precise Navigation and Obstacle Avoidance for Unmanned Ground Vehicle

by Iván González-Hernández, Jonathan Flores, Sergio Salazar and Rogelio Lozano *
Program of Aerial and Submarine Autonomous Navigation Systems, Department of Research and Multidisciplinary Studies, Center for Research and Advanced Studies, Mexico City 07360, Mexico
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2025, 25(14), 4334; https://doi.org/10.3390/s25144334
Submission received: 22 May 2025 / Revised: 9 July 2025 / Accepted: 9 July 2025 / Published: 11 July 2025

Abstract

This paper presents a robust control strategy based on a simplified second-order sliding mode for autonomous navigation and obstacle avoidance of an unmanned ground vehicle. The proposed control is implemented in a mini ground vehicle equipped with redundant inertial sensors for orientation, a global positioning system, and LiDAR sensors. The control algorithm avoids the derivative of the sliding surface, which makes real-time implementation feasible. Stability of the closed-loop system is demonstrated using Lyapunov's second method. In addition, the robustness of the proposed algorithm is verified through numerical simulations, and outdoor experimental tests are performed to validate the performance of the proposed control.

1. Introduction

Autonomous rover navigation from one location to another without human assistance has been extensively studied due to its wide range of applications, including data collection in remote or inaccessible areas, inspection and monitoring of hard-to-reach environments, and tracked locomotion for all-terrain navigation. Autonomous rovers play a crucial role in trajectory tracking and path following, as they are capable of navigating harsh environments such as nuclear power plants and sites requiring environmental monitoring, among others. These rovers and robots are equipped with specialized sensors that allow them to explore areas of interest, gather environmental data, and transmit this information back to operators to support decision-making processes.
Unmanned ground vehicles (UGVs) are increasingly prominent in the field of robotics. These systems typically have fewer actuators than degrees of freedom. They often include high-torque motors, a drive mechanism (engine or motor), wheels or treads for ground mobility, a battery for power supply, and solar panels for recharging. One of the key characteristics of autonomous vehicles is their ability to operate independently. Their capability to perform tasks without human intervention makes them a highly relevant and compelling area of study within robotics.
Nowadays, technology is continuously advancing toward greater automation, with minimal human interaction and increased ease of operation. Autonomous rovers are ground-based exploration robots designed to operate without human presence, equipped with varying levels of autonomous capabilities.
Currently, there are ongoing developments in the field of autonomous vehicles. Some research has focused on the development of semi-autonomous robots capable of operating on uneven terrain [1,2]. Other studies have aimed to implement autonomous capabilities in rescue robots [3,4]. Several other studies have explored the use of artificial neural networks in the domain of autonomous driving. Some approaches use vision-based systems for intersection detection [5,6,7], while others rely on neural networks that focus on key features of specific road types [8,9] or use anticipated road markings for navigation [10]. Some of these autonomous robots utilize various sensors, such as ultrasonic sensors, although these tend to offer lower precision.
In [11], the authors propose a robust and accurate localization scheme for unmanned ground vehicles (UGVs) operating in GPS-denied or weak-signal environments. Their approach employs multisensor fusion using LiDAR and an IMU through a Gaussian projection map technique.
In [12], the authors present a navigation method based on a Kalman Filter (KF), which integrates continuous vision, IMU mechanization, and geomagnetic measurements to achieve robust navigation performance. Although the algorithm uses two filters to improve robustness, this study is not focused on autonomous navigation.
Ref. [13] presents a practical setup that integrates two publicly available ROS packages with a recently released one to create a fully functional multi-session navigation system for ground vehicles.
In [14], the authors develop an adaptive and robust algorithm for GNSS and inertial navigation systems, incorporating data predictors based on a neural network architecture.
In [15], the authors present two robust controllers for a unicycle-type vehicle. The linear component of the controller design is based on the Lyapunov barrier function and the attractive ellipsoid method, incorporating input saturation, state constraints, and certain parameter uncertainties. The nonlinear component employs a sliding-mode integral controller that also considers state constraints.
Moreover, the authors in [16] use vision-based technology in autonomous vehicles and propose approaches for road safety under model perturbations and object occlusion based on deep learning models.
Controlling underactuated mechanisms presents significant challenges because techniques developed for fully actuated systems cannot be directly applied. These systems are not feedback-linearizable and exhibit nonholonomic constraints, as well as nonminimum phase characteristics [17].
In summary, from the articles cited above, it can be observed that in [13] the authors do not propose any control law for the ground vehicle dynamics, while the control strategy based on a robust algorithm for GNSS and inertial navigation systems described in [14] has not been tested on autonomous navigation vehicles. The authors in [15] propose two robust controllers for a unicycle-type vehicle, but the experimental validation is conducted without incorporating obstacle avoidance. Meanwhile, the work presented in [11] does not offer a robust contribution to navigation under ambient or sensor noise disturbances. Moreover, in [12], although the authors present an algorithm that uses two filters to improve robustness, the study is not focused on autonomous navigation. In addition, [16] indicates that a LiDAR sensor proves more cost-effective and accurate for short, safe navigation times with moving and static obstacles. Finally, in [17], although the authors present a class of systems that can be stabilized using continuous controllers, their control remains complex. One of the most common approaches for controlling underactuated systems is sliding mode control (SMC) based on a Lyapunov design. Hence, the objective of our work is to address the disadvantages identified in the references cited above.
Therefore, the main contribution of this article is the implementation of a robust trajectory-tracking controller for a UGV system based on the first- and second-order sliding mode (1st-SM and 2nd-SM) control approaches to compensate for external disturbances that could affect the trajectory executed by the ground vehicle. The performance of the UGV system with the proposed controllers is evaluated through numerical simulations and real-time experiments. The contributions of this paper include a chattering-free, asymptotic second-order sliding mode controller for the UGV system that does not require the derivative of the switching function. Additionally, one of the key advantages of this type of controller is its ability to reject external disturbances affecting the ground vehicle during the execution of any user-defined desired trajectory.
This paper is structured as follows: Section 2 presents the mathematical model and Section 3 presents the control strategy design for the unmanned ground vehicle. The numerical simulation results are shown in Section 4. Section 5 is devoted to the experimental platform and the obstacle detection and avoidance algorithms. Finally, the discussion and conclusions are presented in Section 6.

2. Mathematical Model and Control

Unmanned ground vehicles (UGVs) are mechanically simpler than other autonomous vehicles, such as aerial vehicles or submarines. Their main advantage is that they do not rely on lift or buoyancy for movement. However, they face challenges such as wheel traction issues, irregular terrain, and obstacles on the path that can hinder the performance of their assigned tasks.
The performance of autonomous vehicles in executing tasks depends on the design of control algorithms that effectively respond to disturbances, ensuring accurate trajectory tracking in both numerical simulations and real-world environments.

Dynamical Model

Figure 1 shows the unmanned ground vehicle diagram; according to [18], the vehicle can be modeled in terms of the error with respect to the road as follows:
m(\ddot{x} - \dot{y}\,\dot{\psi}) = u\,\cos(\theta + \psi)   (1a)
m(\ddot{y} + \dot{x}\,\dot{\psi}) = u\,\sin(\theta + \psi)   (1b)
I_z\,\ddot{\psi} = \tau   (1c)
where the control inputs are defined as u = F_r + F_l and τ = L F_l − L F_r, and the right and left forces are described as follows:
F_r = C_r\,\omega_r   (2a)
F_l = C_l\,\omega_l   (2b)
C_r and C_l are constants that represent a set of wheel constants, ω_r and ω_l are the wheel angular velocities (see Figure 2), m is the total mass, v is the longitudinal velocity, ψ is the angular position, R is the radius of the wheels, and L is the distance from the center of mass to the wheels. F_r and F_l are the right and left forces, respectively. τ_r is the right torque, τ_l is the left torque, and J is the moment of inertia with respect to the center of mass.
Since the wheel is assumed to be a rigid element, it contacts the ground at a single point, which serves as the origin of the reference system shown in Figure 2 and is used to define the rotational speed of the wheels when the vehicle makes a turn on the road.
Based on the mathematical model, it is established that although the autonomous ground vehicle has four actuated wheels, it only has two PWM control signals. Therefore, the control signal u ( t ) consists of the left and right forces that control the vehicle’s movements in the x y plane.
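For readers who want to experiment with the model, the following is a minimal numerical sketch of Equations (1a)–(1c), assuming an explicit Euler integration step; the inertia value I_z, the constant θ, and the function and variable names are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def ugv_dynamics(state, u, tau, m=1.0, Iz=0.1, theta=0.0):
    """Right-hand side of Eqs. (1a)-(1c); state = [x, y, psi, xdot, ydot, psidot]."""
    x, y, psi, xd, yd, psid = state
    xdd = (u * np.cos(theta + psi)) / m + yd * psid   # from m(x'' - y' psi') = u cos(theta + psi)
    ydd = (u * np.sin(theta + psi)) / m - xd * psid   # from m(y'' + x' psi') = u sin(theta + psi)
    psidd = tau / Iz                                  # from Iz psi'' = tau
    return np.array([xd, yd, psid, xdd, ydd, psidd])

def euler_step(state, u, tau, dt=0.01):
    """Single explicit Euler integration step (illustrative only)."""
    return state + dt * ugv_dynamics(state, u, tau)

# Example: start from the initial condition used in Section 4 and coast with u = tau = 0.
state = np.array([1.0, 1.0, 0.9, 0.0, 0.0, 0.0])
for _ in range(100):
    state = euler_step(state, u=0.0, tau=0.0)
print(state)
```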

3. Control Algorithm Design

In this section, we present the robust control law for the unmanned ground vehicle (UGV) described by Equations (1a)–(1c). A second-order sliding mode control (SMC) algorithm is proposed due to its ability to reduce or eliminate the chattering effect inherent in first-order sliding mode controllers. Additionally, this robust control approach offers better performance in real-time applications by preventing wear on the actuators responsible for the movement of the UGV prototype.
Most second-order sliding mode (2nd-SM) control algorithms can achieve chattering-free control; however, they require the derivative of the sliding manifold σ(t) for implementation. For instance, the 2nd-SM control, also known as the super-twisting algorithm [19], is designed as follows:
\dot{u}(t) = -r_1\,\mathrm{sign}(\dot{\sigma}(t)) - r_2\,\mathrm{sign}(\sigma(t))   (3)
To obtain a chattering-free control input, this law is implemented in its integral form:
u(t) = -r_1 \int_0^t \mathrm{sign}(\dot{\sigma}(t))\,dt - r_2 \int_0^t \mathrm{sign}(\sigma(t))\,dt   (4)
where r_1 and r_2 are positive constants.
To avoid using the derivative of σ(t), it is possible to stabilize the UGV system given by the three dynamics of (1a)–(1c) by the 2nd-SM control with a 1st-SM control law [17,20,21,22]:
\dot{u}(t) = -k_2\,\mathrm{sign}(\sigma(t)), \quad k_2 > 0   (5)
To improve the convergence time, a feedback control term is added to the above 1st-SM control law as
\dot{u}(t) = -k_1\,\dot{\sigma}(t) - k_2\,\mathrm{sign}(\sigma(t))   (6)
Therefore, by integrating the previous control law, a chattering-free 2nd-SM control without using the derivative of the switching function σ(t) is proposed as
u(t) = -k_1\,\sigma(t) - k_2 \int_0^t \mathrm{sign}(\sigma(t))\,dt   (7)
Here, k_1 and k_2 are positive constants to be designed such that the UGV is stabilized asymptotically.
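As an illustration, a discrete-time sketch of the chattering-free 2nd-SM law (7) is shown below, here written in terms of a tracking error e and its rate, with the sliding surface built as in Section 4; the class name, sampling time, and example gains (taken from Table 1 for the x-dynamics) are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

class ChatteringFree2ndSM:
    """Discrete-time sketch of u(t) = -k1*sigma(t) - k2 * int_0^t sign(sigma) dt  (Eq. 7)."""

    def __init__(self, k1, k2, c, dt=0.01):
        self.k1, self.k2, self.c, self.dt = k1, k2, c, dt
        self.integral = 0.0  # running integral of sign(sigma)

    def update(self, e, e_dot):
        """e, e_dot: tracking error and its rate; returns the control input u."""
        sigma = self.c * e + e_dot                 # sliding surface, e.g. sigma_x = 5 x + xdot in Section 4
        self.integral += np.sign(sigma) * self.dt  # accumulate sign(sigma) instead of differentiating sigma
        return -self.k1 * sigma - self.k2 * self.integral

# Example with the x-dynamics gains of Table 1 (k1 = 5, k2 = 1) and surface slope c = 5.
ctrl_x = ChatteringFree2ndSM(k1=5.0, k2=1.0, c=5.0)
print(ctrl_x.update(e=1.0, e_dot=0.0))
```

Note that the only integration needed online is the running sum of sign(σ), which is why no derivative of the switching function is required.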

4. Numerical Simulations

In the simulation, the dynamics of the system described by Equations (1a)–(1c) are considered in order to observe the performance of the control strategies proposed in Section 3 for the UGV in the following two cases:
  • 1st-SM control law;
  • 2nd-SM control law (also called the super-twisting algorithm).
For both cases of numerical simulations, it is assumed that the initial conditions (x_0, y_0, ψ_0) are equal to (1 m, 1 m, 0.9 rad) and that the control input u(t) is bounded with
|u(t)| < 30
The switching functions of the robust control law are designed for each dynamic as follows:
\sigma_x(t) = 5\,x(t) + \dot{x}(t)
\sigma_y(t) = 4\,y(t) + \dot{y}(t)
\sigma_\psi(t) = 9\,\psi(t) + \dot{\psi}(t)
such that the UGV system on the sliding mode σ(t) = 0 is asymptotically stable.
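For context, this is the standard sliding-mode reasoning (not spelled out in the paper): once a trajectory is confined to a surface of the form σ = c q + q̇ = 0, the corresponding dynamic decays exponentially. For the x-dynamics, for example,
\sigma_x(t) = 0 \;\Rightarrow\; \dot{x}(t) = -5\,x(t) \;\Rightarrow\; x(t) = x(0)\,e^{-5t},
and y(t) and ψ(t) decay analogously with rates 4 and 9, which is why the closed loop is asymptotically stable once the sliding mode is reached.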
At first, a 1st-SM control law u(t) = -30 sign(σ(t)) is implemented in the simulation using a circular path as a reference. In addition, Gaussian noise is added to these simulations to verify the robustness of the navigation; the noise used to represent possible external disturbances has a mean of 0 and a variance of 1. The results given in Figure 3 show that the position states (x, y) converge in finite time with the 1st-SM control law, and the control input designed above is asymptotically stable. In this case, however, the chattering phenomenon can be observed, as the 1st-SM control input u(t) switches between ±30 at a high frequency. Figure 4 and Figure 5 show the evolution of the circular trajectory tracking for the UGV and the angle error ψ, which represents the vehicle's deviation from the path to follow. It can be seen that the vehicle reaches the desired reference, but an undesirable bias remains.
The simulation results with the proposed 2nd-SM control law (7) are shown in Figure 6, Figure 7 and Figure 8, with the control gains listed in Table 1. It is clear that the second-order SM control allows the UGV to reach the desired circular path asymptotically: the state variables x(t) and y(t) converge to the sine and cosine components of the desired circular path, and u(t) is smooth in comparison with the 1st-SM control input shown in Figure 3. Thus, the system described by (1a)–(1c) with the chattering-free 2nd-SM control law (7) is asymptotically stable without a chattering effect. The simulation also confirms that, whereas the system cannot be satisfactorily stabilized by the integrated 1st-SM control law, the proposed robust control manages to stabilize the UGV on the desired circular trajectory. The 2nd-SM control algorithm ensures the finite-time convergence of the states x(t), y(t), and ψ(t) to the given references, as shown in Figure 6 and Figure 8, respectively.
The effectiveness of the 2nd-SM control algorithm applied to the UGV model is demonstrated through a series of numerical simulations, which show a significant reduction in the chattering effect in the control input used to reject external disturbances affecting the vehicle. These results are further confirmed in the following section presenting the experimental results.

5. Experimental Results

The unmanned ground vehicle (UGV) experimental platform has four actuated wheels and no servomotors; the driving direction is controlled by the speed difference between its right and left wheels. A rocker-bogie suspension system is used to prevent overturning and to cushion mildly uneven terrain. Figure 9 shows the interaction diagram of the electronic components, where the sensors transmit information about the UGV and its surroundings and, after processing it, the autopilot commands the actuators to control navigation. Table 2 lists these components and their general functions.
A controller board compatible with open-source firmware was used because of its support for embedded and peripheral precision sensors. The control algorithms designed in Section 3 for the model of Section 2 were simulated in Matlab and Simulink and then implemented on the controller board. A Global Navigation Satellite System (GNSS) with differential devices was used to achieve centimeter-level accuracy for autonomous navigation of the unmanned ground vehicle. Piles of cut grass were used to induce perturbations on the UGV during circular trajectory tracking. Figure 10 shows the servoless four-wheeled ground vehicle used in this research, which relies on the differential-drive forces of its left and right wheels for its displacement.
Figure 11 illustrates the overall control architecture implemented for the unmanned ground vehicle (UGV). The system begins with a trajectory reference generator, which defines the desired path based on mission way-points. This trajectory can be dynamically modified by a LiDAR-based obstacle avoidance module, which detects obstacles in the environment and adjusts the path accordingly to ensure collision-free navigation.
The reference trajectory is then compared to the current pose of the UGV (position and orientation), and the sliding mode controller—either first- or second-order—calculates the appropriate control actions. These actions are converted into motor commands for the left and right wheels ( ω l , ω r ), which drive the physical UGV platform.
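The mapping from the controller outputs to the wheel commands is not detailed in the paper; a plausible allocation, derived only from the definitions u = F_r + F_l, τ = L(F_l − F_r), F_r = C_r ω_r, and F_l = C_l ω_l in Section 2, is sketched below. The function name and the assumption of unit wheel constants are illustrative.

```python
def allocate_wheel_speeds(u, tau, L=0.5, C_r=1.0, C_l=1.0):
    """Invert u = F_r + F_l and tau = L*(F_l - F_r) to obtain (omega_l, omega_r)."""
    F_r = (u - tau / L) / 2.0    # right-wheel force
    F_l = (u + tau / L) / 2.0    # left-wheel force
    return F_l / C_l, F_r / C_r  # angular velocities (omega_l, omega_r)

# Example: pure forward thrust (tau = 0) gives equal wheel speeds.
print(allocate_wheel_speeds(u=10.0, tau=0.0))
```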
To close the control loop, the pose estimator, which fuses data from the GNSS and the IMU, provides updated state information to the controller, enabling real-time correction and stability. This cascaded control scheme ensures robust trajectory tracking while maintaining obstacle avoidance and disturbance rejection capabilities.

5.1. Position Control

The experiments consist of following a circular path twice around the same point (4, 8.5) with a radius of 8.5 m. The UGV encounters two grass piles near the points (5, 12) and (15, −2), which induce perturbations in the path tracking. The second-order sliding mode control algorithm preserves the same start and end points of the path more accurately despite the perturbations induced by the grass piles. Figure 12 shows the start points, paths, and end points of the UGV following a circular path twice using the first- and second-order sliding mode control algorithms on the left and right side, respectively.
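A simple way to generate the waypoints for such a circular reference (center (4, 8.5) m, radius 8.5 m, two laps) is sketched below; the waypoint spacing and the function name are illustrative choices, not taken from the paper.

```python
import numpy as np

def circular_waypoints(cx=4.0, cy=8.5, radius=8.5, laps=2, n_per_lap=72):
    """Return an (N, 2) array of waypoints tracing `laps` turns of a circle."""
    angles = np.linspace(0.0, 2.0 * np.pi * laps, laps * n_per_lap, endpoint=False)
    return np.column_stack((cx + radius * np.cos(angles),
                            cy + radius * np.sin(angles)))

waypoints = circular_waypoints()
print(waypoints.shape, waypoints[0])  # (144, 2), first waypoint at (12.5, 8.5)
```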

5.2. Obstacle Avoidance

The unmanned ground vehicle is equipped with the RPLiDAR A2, a full-range omnidirectional laser scanner that senses distances up to 10 m around the vehicle in the horizontal plane. This laser scanner detects obstacles, such as clumps of grass, along the path during autonomous missions. The obstacle avoidance algorithm then selects an open path to continue the mission, allowing the vehicle to follow an alternate route to the final destination. Figure 13 shows the robust and precise navigation scheme for the UGV, which involves the following stages:
1.
Mission Planning: The trajectory is defined at the ground station and transmitted to the vehicle. This trajectory is based on the vehicle's absolute position and sequential waypoint tracking using a high-precision outdoor GNSS positioning system.
2.
GNSS-Based Navigation: The path-following algorithm is robust to terrain irregularities. The vehicle follows the navigation route using a magnetometer embedded in the autopilot, and the trajectory is regulated by varying the speed of the left and right wheels.
3.
Object detection: A LiDAR-based omnidirectional distance sensor emits light beams to generate a point cloud of obstacles along the trajectory. Obstacles are detected, and the distance and angle of the object relative to the UGV are determined (d_obj, θ_obj).
4.
Obstacle avoidance: When an obstacle is detected during UGV trajectory tracking, the evasion algorithm calculates an alternative course that allows the vehicle to evade the obstacle by turning right or left depending on the clearest path (toward where the LiDAR sensor does not detect any obstacles). When the vehicle no longer detects the obstacle, it resumes its trajectory toward the next waypoint.
Due to the point cloud dispersion (with a density of 4000 at 10 Hz), obstacle detection is effective when the obstacle has a radius greater than 4 cm; this detection improves as the object approaches. Before the autonomous mission, a safety radius of 1 m is established around the vehicle; a greater range would not allow the vehicle to pass between two nearby obstacles. Signal processing from the global and local positioning sensors, as well as the inertial sensors, enables robust and precise navigation.
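As a rough consistency check (assuming the quoted density means 4000 range samples per second at a 10 Hz scan rate, i.e., roughly 400 points per revolution), the angular resolution and the point spacing at range r are approximately
\Delta\theta \approx \frac{360^\circ}{400} = 0.9^\circ, \qquad \Delta s \approx r\,\Delta\theta_{\mathrm{rad}} \approx 0.016\,r,
so at a range of 2.5 m the returns are spaced about 4 cm apart, which is consistent with the stated 4 cm minimum detectable obstacle radius and with detection improving as the object approaches.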
A companion computer is not necessary for the navigation system’s obstacle detection and avoidance due to its low computational cost, as it does not require map reconstruction or memory to store obstacle data. The obstacle avoidance algorithm calculates possible curved secondary paths as alternate route options for the vehicle, which then orients itself back to the assigned path. The vehicle detects objects along its route, with an evasion radius assigned both to the vehicle and to the target obstacle.
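The following sketch illustrates the kind of low-cost, map-free decision rule described above: it scans the latest LiDAR returns, flags anything inside the 1 m safety radius in front of the vehicle, and turns toward the side with more clearance. It is an illustrative reconstruction under these assumptions (including the frontal field of view and function name), not the authors' exact algorithm.

```python
import numpy as np

def avoidance_turn(angles_deg, ranges_m, safety_radius=1.0, fov_deg=60.0):
    """Return 'left', 'right', or None given one LiDAR scan (angles in the vehicle frame, 0 = forward)."""
    angles = np.asarray(angles_deg, dtype=float)
    ranges = np.asarray(ranges_m, dtype=float)

    ahead = np.abs(angles) <= fov_deg / 2.0   # consider only the frontal sector
    blocked = ahead & (ranges < safety_radius)
    if not np.any(blocked):
        return None                            # path is clear, keep tracking the next waypoint

    left = angles > 0.0
    right = angles < 0.0
    # Turn toward the side with the larger minimum clearance in the frontal sector.
    left_clear = ranges[ahead & left].min() if np.any(ahead & left) else np.inf
    right_clear = ranges[ahead & right].min() if np.any(ahead & right) else np.inf
    return "left" if left_clear >= right_clear else "right"

# Example: an obstacle 0.6 m away, 10 degrees to the right -> turn left.
print(avoidance_turn([-10.0, 0.0, 20.0], [0.6, 3.0, 5.0]))
```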
Figure 14 illustrates the avoidance of an object (green point) along a straight path as follows: the vehicle begins avoidance (red point) 1.5 m before the obstacle and assigns a 1 m radius around it. Afterwards, the UGV continues toward the end point of the path (blue point). The dotted line represents the straight path between the start and end points.
Figure 15 shows the trajectory of the unmanned ground vehicle (UGV) from its starting point to its end point. During its path, the UGV encounters a moving obstacle traveling parallel to it at the same speed. When the obstacle stops, the UGV continues its mission by navigating around it, as if bypassing a wall. The vehicle maintains an approximate distance of 0.5 m from the obstacle until it safely passes.
Table 3 shows the performances of the first- and second-order sliding mode controllers in terms of the standard deviation of position and velocity in the autonomous mission.

6. Conclusions and Discussion

A robust control technique based on a chattering-free asymptotic second-order sliding mode (2nd-SM) controller has been presented for autonomous rover vehicle navigation. This approach does not require the derivative of the switching function. A comparison was made between the proposed controller and the classical first-order sliding mode controller.
The simulation results demonstrated the good performance of the proposed sliding mode control, allowing the ground vehicle to be stabilized on the desired circular trajectory within a 0.1 m accuracy. Additionally, the typical chattering effect was significantly reduced in the control inputs, facilitating easier real-world implementation. This improvement was due to the pre-feedback control term included in the control law, which ensured that the necessary condition for sliding mode control was locally reached, guaranteeing asymptotic stability.
Experimental results with the UGV were achieved in several scenarios, including position control and obstacle avoidance. In the position control experiments, the tracking error was around 0.08 m, while the error during obstacle avoidance was approximately 0.07 m.

Author Contributions

Conceptualization and project administration, S.S. and J.F.; methodology and data curation, I.G.-H.; software, visualization, validation, and formal analysis, R.L. All authors have read and agreed to the published version of the manuscript.

Funding

This project was funded by the Department of Research and Multidisciplinary Studies of the Center for Research and Advanced Studies (Cinvestav).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors are grateful to the Secretariat of Science, Humanities, Technology, and Innovation (CECIHTI) for its support.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
UAV   Unmanned Aerial Vehicle
UGV   Unmanned Ground Vehicle
IMU   Inertial Measurement Unit
KF   Kalman Filter
PWM   Pulse Width Modulation
I2C   Inter-Integrated Circuit
UART   Universal Asynchronous Receiver-Transmitter
CAN   Controller Area Network
SPI   Serial Peripheral Interface
ADC   Analog-to-Digital Converter
LiDAR   Light Detection and Ranging
ROS   Robot Operating System
GPS   Global Positioning System
GNSS   Global Navigation Satellite System
PD   Proportional Derivative

References

  1. Nagatani, K.; Yamasaki, A.; Yoshida, K.; Yoshida, T.; Koyanagi, E. Semi-autonomous traversal on uneven terrain for a tracked vehicle using autonomous control of active flippers. In Proceedings of the International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 2667–2672.
  2. Bashar, M.R.; Al Arabi, A.; Tipu, R.S.; Sifat, M.T.A.; Alam, M.Z.I.; Amin, M.A. 2D surface mapping for mine detection using wireless network. In Proceedings of the 2nd International Conference on Control and Robotics Engineering (ICCRE), Bangkok, Thailand, 1–3 April 2017; pp. 180–183.
  3. Edlinger, R.; Zauner, M.R.W. New approach of robot mobility systems for rescue scenarios. In Proceedings of the International Symposium on Safety, Security, and Rescue Robotics (SSRR), Linköping, Sweden, 21–26 October 2013; pp. 1–5.
  4. Al Arabi, A.; Sarkar, P.; Ahmed, F.; Rafie, W.R.; Hannan, M.; Amin, M.A. 2D mapping and vertex finding method for path planning in autonomous obstacle avoidance robotic system. In Proceedings of the 2nd International Conference on Control and Robotics Engineering (ICCRE), Bangkok, Thailand, 1–3 April 2017; pp. 39–42.
  5. Ivis, F. Calculating geographic distance: Concepts and methods. In Proceedings of the 19th Conference of Northeast SAS User Group, Toronto, ON, Canada, 17 September 2006; pp. 17–20.
  6. Pomerleau, D.A. (Ed.) Neural Network Perception for Mobile Robot Guidance; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012.
  7. Dickmanns, E.D.; Zapp, A. Autonomous high speed road vehicle guidance by computer vision. IFAC Proc. 1987, 20, 221–226.
  8. Jochem, T.; Pomerleau, D. Vision-based neural network road and intersection detection. In Proceedings of the Intelligent Unmanned Ground Vehicles: Autonomous Navigation Research at Carnegie Mellon, Carnegie Mellon University, Pittsburgh, PA, USA, 5–9 August 1995; pp. 73–86.
  9. Jochem, T.; Baluja, S. Massively Parallel, Adaptive, Color Image Processing for Autonomous Road Following; The Robotics Institute, Carnegie Mellon University: Pittsburgh, PA, USA, 1993; pp. 1–25.
  10. Kenue, S.K. Lanelok: Detection of lane boundaries and vehicle tracking using image-processing techniques, part I: Hough-transform, region-tracing and correlation algorithms. Mob. Robot. IV 1990, 1195, 221–233.
  11. Wu, Y.; Li, Y.; Li, W.; Li, H.; Lu, R. Robust LiDAR-based localization scheme for unmanned ground vehicle via multisensor fusion. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 5633–5643.
  12. Yang, C.; Fan, Z.; Zhu, Q.; Yang, P.; Mei, C.; Lu, B. Robust navigation system for UGV based on the IMU/vision/geomagnetic fusion. Int. J. Intell. Robot. Appl. 2023, 7, 321–334.
  13. Pearson, E.; Englot, B. A robust and rapidly deployable waypoint navigation architecture for long-duration operations in GPS-denied environments. In Proceedings of the 2023 20th International Conference on Ubiquitous Robots (UR), Honolulu, HI, USA, 25–28 June 2023; pp. 319–326.
  14. Shen, K.; Li, Y.; Liu, T.; Zuo, J.; Yang, Z. Adaptive-robust fusion strategy for autonomous navigation in GNSS-challenged environments. IEEE Internet Things J. 2023, 11, 6817–6832.
  15. Gutierréz, A.; Ríos, H.; Mera, M. A robust trajectory tracking controller for constrained and perturbed unicycle mobile robots. Asian J. Control 2025, 1–12.
  16. Zhang, J.; Lou, Y.; Wang, J.; Wu, K.; Lu, K.; Jia, X. Evaluating adversarial attacks on driving safety in vision-based autonomous vehicles. IEEE Internet Things J. 2021, 9, 3443–3456.
  17. Shtessel, Y.B.; Krupp, D.S.I. 2-sliding-mode control for nonlinear plants with parametric and dynamic uncertainties. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Denver, CO, USA, 14–17 August 2000; pp. 1–9.
  18. Rajamani, R. Vehicle Dynamics and Control; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011.
  19. Levant, A. Principles of 2-sliding mode design. Automatica 2007, 43, 576–586.
  20. Anosov, D. On stability of equilibrium points of relay systems. Autom. Remote Control 1959, 2, 135–149.
  21. Fridman, L.; Levant, A. Higher order sliding modes as a natural phenomenon in control theory. Lect. Notes Control Inf. Sci. 1996, 217, 107–133.
  22. Pan, Y.; Kumar, K.L.G. Reduced-order design of high-order sliding mode control system. Int. J. Robust Nonlinear Control 2011, 21, 2064–2078.
Figure 1. Unmanned ground vehicle top-view diagram.
Figure 2. A diagram of the forces acting on the wheel in contact with the ground.
Figure 3. Simulation result with 1st-SM control.
Figure 4. The simulation result of the desired trajectory using 1st-SM control.
Figure 5. The simulation result of the ψ-angle using 1st-SM control.
Figure 6. The simulation result with chattering-free 2nd-SM control.
Figure 7. The simulation result of the desired trajectory using 2nd-SM control.
Figure 8. Simulation result of the ψ-angle using 2nd-SM control.
Figure 9. Unmanned ground vehicle component diagram.
Figure 10. Unmanned ground vehicle platform.
Figure 11. Overall control architecture.
Figure 12. Circular path tracking using first-order (left) and second-order (right) sliding mode control techniques.
Figure 13. UGV navigation and object avoidance concept.
Figure 14. Avoidance of an object (green point) with a radius of 1.5 m during the UGV autonomous mission.
Figure 15. Moving obstacle (gray box) avoidance during the UGV autonomous mission.
Table 1. Unmanned ground vehicle simulation parameters.
Parameter | Value | Parameter | Value | Parameter | Value
m | 1 kg | Radius | 3 m | k_{2x} | 1
R | 0.15 m | x(0) | 1 m | k_{1y} | 5
L | 0.5 m | y(0) | 1 m | k_{2y} | 1
x_d | 3 m | ψ(0) | 0.9 rad | k_{1ψ} | 10
y_d | 3 m | k_{1x} | 5 | k_{2ψ} | 8
Table 2. Unmanned ground vehicle list of components.
Part | Name | Function
A | Autopilot | Sensor signal processing and actuator control.
B | Battery | Autopilot and actuator power supply.
C | Data Radio | Onboard telemetry transmitter and receiver.
D | RC Receiver | Radio control receiver.
E | GNSS Antenna | Global Navigation Satellite System antenna.
F | Rangefinder | LiDAR-based omnidirectional distance sensor.
G | Motor Driver | Motor speed controller.
H | Motor and Wheel | Actuator to control position and orientation.
I | RC Transmitter | Radio control transmitter.
J | Ground Control Station | Ground-based system used to monitor and control.
K | Data Radio | Ground control station telemetry transmitter and receiver.
Table 3. The standard error of the mean.
Parameter | Value | Units
1st-Order Sliding Mode position standard deviation | 0.13 | m
2nd-Order Sliding Mode position standard deviation | 0.08 | m
1st-Order Sliding Mode velocity standard deviation | 0.09 | m/s
2nd-Order Sliding Mode velocity standard deviation | 0.07 | m/s