Article

An Overview and Comparison of Traditional Motion Planning Based on Rapidly Exploring Random Trees

1 School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
2 State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(7), 2067; https://doi.org/10.3390/s25072067
Submission received: 14 January 2025 / Revised: 17 March 2025 / Accepted: 17 March 2025 / Published: 26 March 2025
(This article belongs to the Section Navigation and Positioning)

Abstract

Motion planning is a fundamental problem in robotics that involves determining feasible or optimal paths within finite time. While complete motion planning algorithms are guaranteed to converge to a path solution in finite time, they are provably computationally inefficient, making them unsuitable for most practical problems. Resolution-complete algorithms, on the other hand, ensure completeness only if the resolution parameter is sufficiently fine, yet they suffer severely from the curse of dimensionality. In contrast, sampling-based algorithms, such as Rapidly Exploring Random Trees (RRT) and their variants, have gained increasing attention from researchers due to their computational efficiency and effectiveness, particularly in high-dimensional problems. This review introduces RRT-based algorithms and provides an overview of their key methodological aspects.

1. Introduction

Motion planning is a well-known and fundamental problem in robotics that involves finding a feasible (or optimal) path for a robot from its initial position to a goal region while avoiding collisions with any obstacles in its environment. While not the only fundamental problem in robotics, motion planning has garnered significant attention from researchers due to its widespread applications across various fields, including robotics [1], computer animation [2], medical procedures [3], and manufacturing [4,5].
Many motion planning algorithms have been proposed, including complete, resolution complete, heuristic, potential field, and sampling-based algorithms. Each method offers distinct advantages and limitations in terms of space complexity, time complexity, and path optimization.
Complete algorithms primarily focus on geometric methods [6,7,8,9] that are guaranteed to find a path solution in finite time. However, both finding the shortest path and finding a feasible path under velocity constraints are proven to be NP-hard [10].
Resolution-complete algorithms include cell decomposition methods [11] and graph search methods [12,13,14]. They can guarantee completeness and optimality if fine resolution parameters are provided. However, their computation time and memory space grow exponentially with increasing dimensionality.
Heuristic algorithms consist of neural networks (NNs) and meta-heuristic optimization algorithms. On the one hand, NNs, with advantages such as nonlinear mapping capabilities and learning ability, are employed in various aspects of navigation, including processing sensory data [15], obstacle avoidance, and path planning [16,17,18,19]. Additionally, end-to-end learning approaches for driving have gained significant attention in recent years, such as DeepDriving [20] and ChauffeurNet [21]. However, their disadvantages include slow training, the need for large datasets, and a lack of traceability. On the other hand, meta-heuristic optimization algorithms offer high success rates and the ability to manage complex constraints. Notable examples include Brain Storm Optimization [22], Genetic Algorithms [23], Particle Swarm Optimization [24], and Ant Colony Optimization (ACO) [25]. However, they require considerable time to reach optimal solutions, especially for large-scale problems.
The Artificial Potential Field (APF) [26] is a well-known method usually used to handle unknown and dynamic environments [27]. However, APF suffers from the problem of local minima. To address this issue, APF is combined with various heuristics to achieve path optimality and safety, such as APF with random sampling [28], APF with ACO [29], and APF with GA [30].
Sampling-based algorithms have emerged as particularly prominent in recent years for two key reasons: (i) they avoid the explicit construction of the obstacle region in configuration space (which can be challenging in high-dimensional spaces), and (ii) they allow the incorporation of differential constraints to generate dynamically feasible paths. Among sampling-based methods, the Rapidly Exploring Random Tree (RRT) [31] and its asymptotically optimal variant (RRT* [32]) are widely recognized as foundational algorithms.
Despite the success of RRT*-based algorithms, there are still some major limitations [33]: (i) Their slow convergence rate to optimal solutions; (ii) Their significantly large memory requirements due to numerous iterations utilized to calculate the optimal path. Various improvements have been proposed to address these challenges. Some approaches focus on enhanced sampling strategies to replace pure uniform sampling, such as Informed-RRT* [34], RRT*-Smart [35], and CE-RRT* [36]. Other methods employ two directional trees with greedy connect heuristics to enhance the convergence rates of RRT and RRT*, including RRT-Connect [37], B-RRT* [38], and IB-RRT* [33].
Recent research has increasingly focused on extending sampling-based algorithms to handle dynamic systems. While RRT and RRT* excel at high-level path planning, they lack a low-level control synthesis. Several approaches attempt to bridge the gap by designing controllers to steer the system between two generated vertices, such as Kinodynamic-RRT* [39] and LQR-RRT* [40]. For environments with dynamical obstacles, CBF-RRT [41] incorporates control barrier functions (CBFs) into RRT to achieve dynamical obstacle avoidance. Similarly, CBF-RRT* [42] combines CBFs for safety with control Lyapunov functions (CLFs) to generate safe motion trajectories between vertices of RRT*.
This paper is structured as follows: Section 2 introduces fundamental components of RRT-based motion planning. Various RRT-based algorithms are introduced in Section 3. Section 4 introduces RRT-based algorithms for dynamic systems. Section 5 presents experimental results and analysis, and the conclusion is in Section 6.

2. Background

In this section, we introduce key local motion planning components commonly integrated into RRT-based algorithms. We focus on three fundamental approaches: linear quadratic regulators (LQRs), which optimize control inputs to minimize a cost function; control barrier functions (CBFs), which enforce safety constraints for obstacle avoidance; and control Lyapunov functions (CLFs), which guarantee stability and convergence.

2.1. System Dynamics

Consider robot dynamics modeled as affine control dynamics,
$\dot{x} = f_{cl}(x, u) := f(x) + g(x)u$,     (1)
with $f : \mathbb{R}^n \to \mathbb{R}^n$ and $g : \mathbb{R}^n \to \mathbb{R}^{n \times m}$ locally Lipschitz, the state $x \in X \subseteq \mathbb{R}^n$, and the control input $u \in U \subseteq \mathbb{R}^m$. The initial state is denoted as $x_{init}$ and the goal region is denoted as $X_{goal} \subseteq X$. $O_i \subseteq X$ represents the $i$-th obstacle region, where $i = 1, \ldots, n$. The obstacle-free set is defined as $X_{safe} := X \setminus \bigcup_{i=1}^{n} O_i$.
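As a concrete instance of the affine form (1), the following minimal Python sketch implements a unicycle model; the state layout, input names, and forward-Euler integration step are illustrative assumptions rather than part of the original paper.

```python
import numpy as np

def f(x):
    """Drift term f(x); the unicycle is drift-free."""
    return np.zeros(3)

def g(x):
    """Input matrix g(x) for state x = [px, py, theta] and input u = [v, omega]."""
    return np.array([[np.cos(x[2]), 0.0],
                     [np.sin(x[2]), 0.0],
                     [0.0,          1.0]])

def f_cl(x, u):
    """Affine control dynamics x_dot = f(x) + g(x) u."""
    return f(x) + g(x) @ u

# forward-Euler rollout, purely for illustration
x, u, dt = np.zeros(3), np.array([1.0, 0.2]), 0.01
for _ in range(100):
    x = x + dt * f_cl(x, u)
```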

2.2. Linear Quadratic Regulator

An LQR is a widely used optimal control strategy that minimizes a quadratic cost function. Consider a linear time-invariant system
$\dot{x} = Ax + Bu$,
which is a special case of the general affine system (1) with $f(x) = Ax$ and $g(x) \equiv B$. The LQR problem is to find control inputs $u(t)$ that minimize the following quadratic cost criterion:
$J_{LQR} := \int_{0}^{\infty} \big( x(t)^{\top} Q\, x(t) + u(t)^{\top} R\, u(t) \big)\, \mathrm{d}t$,
where Q and R are positive-definite weighting matrices.
According to Theorem 10.1 in [43], if there exists a symmetric solution P to the following algebraic Riccati equation:
$A^{\top} P + P A + Q - P B R^{-1} B^{\top} P = 0$,     (2)
for which $A - B R^{-1} B^{\top} P$ is a stability matrix, then the optimal feedback control law is given by
$u = -Kx$,   where   $K := R^{-1} B^{\top} P$,
with the optimal cost equal to $x(0)^{\top} P\, x(0)$. This feedback law also stabilizes the closed-loop system.
In addition, LQR can be extended to handle nonlinear system dynamics through local linearization. More specifically, a local linearization can be made around an equilibrium point ( x eq , u eq ) using first-order Taylor expansion [43]:
$\delta\dot{x} = \hat{A}\, \delta x + \hat{B}\, \delta u$,
defined by the following Jacobian matrices:
$\hat{A} := \dfrac{\partial f_{cl}}{\partial x}(x_{eq}, u_{eq})$,   $\hat{B} := \dfrac{\partial f_{cl}}{\partial u}(x_{eq}, u_{eq})$,
where $\delta x := x - x_{eq}$, $\delta u := u - u_{eq}$, and $\delta\dot{x} := \dot{x} - \dot{x}_{eq}$. This local linearization therefore allows the application of LQR techniques to nonlinear systems in the neighborhood of the equilibrium point.
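As a minimal sketch of the above, the LQR gain can be obtained from SciPy's continuous-time algebraic Riccati solver; the 1D double integrator and the weighting matrices below are placeholder assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr(A, B, Q, R):
    """Solve the continuous-time algebraic Riccati equation (2) and return K = R^{-1} B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    return K, P

# 1D double integrator: state [position, velocity], input = acceleration
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q, R = np.eye(2), np.eye(1)

K, P = lqr(A, B, Q, R)
x0 = np.array([1.0, 0.0])
u0 = -K @ x0          # optimal feedback u = -Kx
cost = x0 @ P @ x0    # optimal cost x(0)^T P x(0)
```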

2.3. Control Barrier Functions

CBFs are employed to generate control inputs that steer the system (1) towards an exploratory point while ensuring obstacle avoidance. Let the safe set  C be defined as
$C = \{ x \in D : h(x) \ge 0 \}$,
for a continuously differentiable function $h : D \subseteq \mathbb{R}^n \to \mathbb{R}$. The boundary and interior of $C$ are given by
$\partial C = \{ x \in D : h(x) = 0 \}$,   $\mathrm{Int}(C) = \{ x \in D : h(x) > 0 \}$.
The set $C$ is called forward invariant for the system (1) if $x(0) \in C$ implies $x(t) \in C$ for all $t > 0$. Intuitively, set invariance ensures that any trajectory starting within the invariant set will never leave it, thereby guaranteeing safety.
First, consider a model without control inputs, $\dot{x} = f_{cl}(x)$. According to Nagumo’s Theorem [44], the necessary and sufficient condition for set invariance is
$\dot{h}(x) \ge 0$   for all $x \in \partial C$.
Alternatively, another necessary and sufficient condition [45] is the existence of an extended class $\mathcal{K}$ function $\alpha$ such that
$\dot{h}(x) \ge -\alpha(h(x))$   for all $x \in D$.     (3)
The function h, which defines the forward invariant set, is called a zeroing barrier function.
Next, consider the system dynamics (1). The derivative of h becomes
$\dot{h}(x) = L_f h(x) + L_g h(x)\, u$.
Combining the above equation with Equation (3), for any point x D , the set of all controllers rendering the set C forward invariant is defined as
$K_{zcbf}(x) := \{\, u \in U : L_f h(x) + L_g h(x)\, u \ge -\alpha(h(x)) \,\}$.
The function $h$ is called a zeroing control barrier function (ZCBF) if $K_{zcbf}(x) \ne \emptyset$ for all $x \in D$.
Finally, given a ZCBF $h$, safety can be ensured by modifying an existing controller in a minimal way. More specifically, given a feedback controller $u = k(x)$ for the control system (1), it may happen that $k(x) \notin K_{zcbf}(x)$ for some $x \in D$. To guarantee safety, one can determine the minimum perturbation to $u$ by solving the following quadratic program (CBF-QP):
$u^{*}(x) = \arg\min_{u \in \mathbb{R}^m} \tfrac{1}{2}\, \| u - k(x) \|^2$   s.t.   $L_f h(x) + L_g h(x)\, u \ge -\alpha(h(x))$.     (4)
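A minimal sketch of the CBF-QP (4) using CVXPY, assuming a linear extended class-$\mathcal{K}$ function $\alpha(h) = \gamma h$ with $\gamma > 0$; the single-integrator example, the circular obstacle, and the nominal controller are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

def cbf_qp(u_ref, Lfh, Lgh, h, gamma=1.0):
    """Minimally perturb u_ref so that L_f h + L_g h u >= -gamma * h."""
    u = cp.Variable(u_ref.shape[0])
    objective = cp.Minimize(0.5 * cp.sum_squares(u - u_ref))
    constraints = [Lfh + Lgh @ u >= -gamma * h]
    problem = cp.Problem(objective, constraints)
    problem.solve()
    return u.value  # None if the QP is infeasible

# example: single integrator x_dot = u, circular obstacle at c with radius r,
# barrier h(x) = ||x - c||^2 - r^2, so Lfh = 0 and Lgh = 2 (x - c)^T
x, c, r = np.array([0.0, 0.0]), np.array([2.0, 0.0]), 1.0
h = float((x - c) @ (x - c) - r**2)
Lfh, Lgh = 0.0, 2.0 * (x - c)
u_ref = np.array([1.0, 0.0])         # nominal controller heading toward the obstacle
u_safe = cbf_qp(u_ref, Lfh, Lgh, h)  # constraint bends the input away from the obstacle
```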

2.4. Control Lyapunov Functions

CLFs can be employed to generate control inputs that drive a system to a desired equilibrium point $(\bar{x}, \bar{u})$, that is, a point with $f_{cl}(\bar{x}, \bar{u}) = 0$. Without loss of generality, the equilibrium point is typically assumed to be at the origin $(\bar{x} = 0, \bar{u} = 0)$. Additionally, it is typically assumed that the equilibrium point is stable and that there exists an asymptotically stable neighborhood around it. More specifically, the equilibrium point is stable [46] if for every $\epsilon > 0$ there exists $\delta > 0$ such that
$\| x(0) \| < \delta \;\Rightarrow\; \| x(t) \| < \epsilon$   for all $t \ge 0$.
Intuitively, stability implies that a trajectory starting in a $\delta$-neighborhood of the origin will never leave the $\epsilon$-neighborhood. Moreover, a neighborhood $S$ of the stable equilibrium point is called a region of asymptotic stability [46] if
$x(0) \in S \;\Rightarrow\; \lim_{t \to \infty} x(t) = 0$.
Intuitively, asymptotic stability implies that any trajectory starting within $S$ converges to the stable equilibrium point as $t$ tends to infinity.
First, consider a model without control inputs, $\dot{x} = f_{cl}(x)$. According to Theorem 2.26 in [44], a neighborhood $S$ of the origin is a region of asymptotic stability if there exists a locally Lipschitz positive-definite function $\Psi$ such that
$S \subseteq N[\Psi, v] := \{ x : \Psi(x) \le v \}$   for some $v > 0$,
and, for all $x \in N[\Psi, v]$,
$D^{+} \Psi(x) \le -\phi(\| x(t) \|)$   for some class $\mathcal{K}$ function $\phi$,     (5)
where $D^{+}$ denotes the Dini derivative. This function $\Psi$ is called a Lyapunov function inside $S$.
Next, consider the system dynamics (1). The Dini derivative of Ψ is given by
$D^{+} \Psi(x) = L_f \Psi(x) + L_g \Psi(x)\, u$.
Combining the above equation with Equation (5), the set of all stabilizing controllers for every point $x \in S$ is defined by
$K_{clf}(x) := \{\, u \in U : L_f \Psi(x) + L_g \Psi(x)\, u \le -\phi(\| x \|) \,\}$.
The function $\Psi$ is called a control Lyapunov function (CLF) if $K_{clf}(x) \ne \emptyset$ for all $x \in S$.
Finally, given a CLF $\Psi$ and a ZCBF $h$, one can steer the state $x$ to another state $x'$ while guaranteeing safety by solving a sequence of QPs. More specifically, suppose $(x', \bar{u})$ is the equilibrium point of the system. Define new variables $z(t) := x(t) - x'$ and $v(t) := u(t) - \bar{u}$ through a translation that yields a new system whose equilibrium point is the origin. The initial state is set to $z(0) = x - x'$. By restricting the control inputs to $v(t) \in K_{clf}$ for all $t > 0$, one can guarantee that $z(t)$ converges to the origin. This constraint can then be incorporated into the CBF-QP, resulting in a new problem termed the CLF-CBF-QP:
$v^{*}(z) = \arg\min_{v \in \mathbb{R}^m} \tfrac{1}{2}\, \| v - k(z) \|^2$   s.t.   $L_f h(z) + L_g h(z)\, v \ge -\alpha(h(z))$,   $L_f \Psi(z) + L_g \Psi(z)\, v \le -\phi(\| z \|)$.     (6)
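A minimal sketch of the CLF-CBF-QP (6), again assuming linear class-$\mathcal{K}$ functions; the hard CLF constraint mirrors Equation (6), although in practice it is often softened with a slack variable, which is omitted here.

```python
import cvxpy as cp

def clf_cbf_qp(v_ref, Lfh, Lgh, h, LfV, LgV, V, gamma=1.0, c=1.0):
    """Minimally perturb v_ref subject to the CBF (safety) and CLF (convergence) constraints."""
    v = cp.Variable(v_ref.shape[0])
    objective = cp.Minimize(0.5 * cp.sum_squares(v - v_ref))
    constraints = [
        Lfh + Lgh @ v >= -gamma * h,  # safety: keep the barrier h nonnegative
        LfV + LgV @ v <= -c * V,      # convergence: decrease the Lyapunov function
    ]
    problem = cp.Problem(objective, constraints)
    problem.solve()
    return v.value if problem.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE) else None
```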

3. The RRT-Based Motion Planning

In this section, several incremental RRT-based motion planning algorithms are introduced.

3.1. The RRT Algorithm and Its Variations

The Rapidly Exploring Random Tree (RRT) is an efficient data structure and sampling scheme for efficiently exploring nonconvex high-dimensional spaces by growing a search tree toward goal areas. The major components of RRT are as follows:
  • Sample: The procedure provides independent, identically distributed samples from $X_{free}$.
  • Steer: Given two states $x, x' \in X$, the Steer procedure returns a state $z \in X_{free}$ that lies between $x$ and $x'$ while maintaining $\| z - x \| \le \eta$, for a prespecified constant $\eta > 0$. It is typically defined by
    $\mathrm{Steer}(x, x') = \arg\min_{z \in X_{free},\, \| z - x \| \le \eta} \| z - x' \|$.
  • Nearest node: Given a tree $T = (V, E)$ with a set of nodes $V$ and a set of edges $E$, the $\mathrm{Nearest}(V, x)$ procedure returns a node in $V$ that is closest to $x$, i.e.,
    $\mathrm{Nearest}(V, x) = \arg\min_{x' \in V} \| x - x' \|$.
  • Collision test: Given two points $x, x' \in X_{free}$, the procedure $\mathrm{ObstacleFree}(x, x')$ returns True if and only if the line segment between $x$ and $x'$ lies entirely within $X_{free}$, i.e.,
    $\mathrm{ObstacleFree}(x, x') = \begin{cases} \text{True}, & \text{if } [x, x'] \subset X_{free}, \\ \text{False}, & \text{otherwise}. \end{cases}$
The body of the RRT algorithm is presented in Algorithm 1. The algorithm proceeds as follows. First, a state is sampled from the configuration space, denoted as $x_{samp}$. Next, the Extend procedure grows the tree $T$ towards $x_{samp}$, as outlined in Algorithm 2. More specifically, the Nearest procedure selects the closest neighbor node $x_{nearest} \in T$ to $x_{samp}$. The Steer procedure then steers the state from $x_{nearest}$ towards $x_{samp}$, resulting in a new state $x_{new}$. Finally, the state $x_{new}$ and the edge $(x_{nearest}, x_{new})$ are added to $T$ if the ObstacleFree procedure returns True.
Additionally, RRT-Connect, a variation of RRT, employs a two-RRT planner to achieve rapid convergence to a solution. More specifically, it constructs two trees, $T_1$ and $T_2$, rooted at $x_{init}$ and $x_{goal}$, respectively. In each iteration, both trees are extended towards a random state $x_{samp}$ simultaneously. The algorithm maintains both trees at all times until they connect and a solution is found.
Last but not least, these algorithms ensure probabilistic completeness, i.e., as the number of iterations approaches infinity, the probability of finding a solution (if one exists) approaches one (see Theorem 3 in [47]).
Algorithm 1: Body of RRT and RRT* algorithms
Algorithm 2: ExtendRRT
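Since Algorithms 1 and 2 appear only as images in the published version, the following Python sketch restates the RRT loop and Extend step described above for a 2D workspace; the circular-obstacle model, the discretized collision check, and all parameter values are assumptions made for illustration.

```python
import math
import random

class Node:
    def __init__(self, x, y, parent=None):
        self.x, self.y, self.parent = x, y, parent

def sample(xlim, ylim):
    return (random.uniform(*xlim), random.uniform(*ylim))

def nearest(nodes, q):
    return min(nodes, key=lambda n: math.hypot(n.x - q[0], n.y - q[1]))

def steer(n, q, eta):
    d = math.hypot(q[0] - n.x, q[1] - n.y)
    t = min(1.0, eta / d) if d > 0 else 0.0
    return (n.x + t * (q[0] - n.x), n.y + t * (q[1] - n.y))

def obstacle_free(a, b, obstacles, steps=20):
    """Discretized check of the segment a-b against circular obstacles (cx, cy, radius)."""
    for i in range(steps + 1):
        x = a[0] + (b[0] - a[0]) * i / steps
        y = a[1] + (b[1] - a[1]) * i / steps
        if any(math.hypot(x - cx, y - cy) <= r for cx, cy, r in obstacles):
            return False
    return True

def rrt(x_init, x_goal, obstacles, xlim, ylim, eta=1.0, goal_tol=1.0, iters=1000):
    nodes = [Node(*x_init)]
    for _ in range(iters):
        x_samp = sample(xlim, ylim)                       # Sample
        x_near = nearest(nodes, x_samp)                   # Nearest
        x_new = steer(x_near, x_samp, eta)                # Steer
        if obstacle_free((x_near.x, x_near.y), x_new, obstacles):
            node = Node(*x_new, parent=x_near)            # Extend: add vertex and edge
            nodes.append(node)
            if math.hypot(x_new[0] - x_goal[0], x_new[1] - x_goal[1]) <= goal_tol:
                return node, nodes                        # goal reached; backtrack via .parent
    return None, nodes
```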

3.2. The RRT* Algorithm and Its Variations

Although RRT ensures probabilistic completeness, it does not guarantee optimality, i.e., it converges to a suboptimal solution with probability one (see Theorem 10 in [32]). To address this limitation, RRT* is introduced as a variant that incrementally optimizes the existing tree, achieving asymptotic optimality, i.e., almost-sure convergence to an optimal solution. Several key components are added in RRT*:
  • Near nodes: Given a tree $T = (V, E)$ with a set of nodes $V$ and a set of edges $E$, the $\mathrm{Near}(V, x)$ procedure utilizes a predefined distance $\|\cdot\|$ to identify a set of near nodes $X_{near}$ in $T$ that are close to $x$. This is typically defined as the set of all nodes within a closed ball of radius $r_n$ centered at $x$, i.e.,
    $X_{near} := \{\, x' \in V : \| x' - x \| \le r_n \,\}$,
    where the radius is defined by
    $r_n := \gamma \left( \frac{\log n}{n} \right)^{1/d}$,     (7)
    $n$ is the number of vertices in the tree, $d$ is the dimension of the space, and $\gamma$ is a constant.
  • Choose Parent: The procedure tries to find collision-free paths between a selected node $x$ and all of its near nodes, and selects the near node that yields the lowest cost for $x$ as the parent of $x$.
  • Rewire: The procedure attempts to connect a selected node $x$ with each node in its neighborhood $X_{near}$. If the trajectory from $x$ to a near node $x_{near}$ results in a lower cost for $x_{near}$, then $x$ becomes the new parent of $x_{near}$.
It is worth noting that the Rewire procedure guarantees the asymptotic optimality properties of RRT*.
The body of the RRT* algorithm is outlined in Algorithm 1, while the Extend procedure of RRT* is detailed in Algorithm 3. The Extend procedure operates as follows. First, the Nearest procedure selects the closest neighbor node $x_{nearest} \in T$ to $x_{samp}$. Second, the Steer procedure steers the state from $x_{nearest}$ towards $x_{samp}$, resulting in a new state $x_{new}$. Third, the ChooseParent procedure identifies a potential parent node for $x_{new}$. If a suitable parent node $x_{parent}$ is found, then $x_{new}$ and the edge $(x_{parent}, x_{new})$ are added to the tree $T$. Fourth, the Rewire procedure attempts to connect $x_{new}$ with each node in its neighborhood $X_{near}$. If the trajectory from $x_{new}$ to a near node $x_{near}$ results in a lower cost for $x_{near}$, then $x_{new}$ becomes the new parent of $x_{near}$.
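A minimal sketch of the ChooseParent and Rewire steps described above, reusing the 2D helpers from the RRT sketch and assuming each node additionally stores its accumulated cost; the collision checker is passed in as a callable, and cost propagation to descendants after rewiring is omitted for brevity.

```python
import math

def dist(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def near(nodes, x_new, r_n):
    """Near: all nodes within a ball of radius r_n around x_new."""
    return [n for n in nodes if dist(n, x_new) <= r_n]

def choose_parent(x_new, X_near, obstacle_free):
    """ChooseParent: pick the near node giving x_new the lowest cost via a collision-free edge."""
    best_parent, best_cost = None, float("inf")
    for n in X_near:
        c = n.cost + dist(n, x_new)
        if c < best_cost and obstacle_free((n.x, n.y), (x_new.x, x_new.y)):
            best_parent, best_cost = n, c
    x_new.parent, x_new.cost = best_parent, best_cost
    return best_parent is not None

def rewire(x_new, X_near, obstacle_free):
    """Rewire: re-route near nodes through x_new whenever that lowers their cost."""
    for n in X_near:
        c = x_new.cost + dist(x_new, n)
        if c < n.cost and obstacle_free((x_new.x, x_new.y), (n.x, n.y)):
            n.parent, n.cost = x_new, c
```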
Although RRT* ensures asymptotic optimality, its convergence rate is relatively slow. More specifically, let $f(x)$ be the cost of an optimal path from $x_{init}$ to $x_{goal}$ constrained to pass through $x$. The set of states that could improve the current solution is defined by
$X_f := \{ x \in X : f(x) < c_{best} \}$,
where $c_{best}$ is the cost of the current solution. In RRT*, the probability of uniformly sampling a state from $X_f$ can become arbitrarily small when $X_f$ occupies only a small portion of the planning domain (see Remark 1 in [34]), leading to a slow convergence rate. Rapid convergence can be expected only if the probability of sampling states from $X_f$ increases.
Algorithm 3: ExtendRRT*
Several methods aim to enhance the probability of sampling states from $X_f$ through sample biasing. For instance, Informed-RRT* employs the heuristic $\hat{f}(x) := \| x_{init} - x \|_2 + \| x - x_{goal} \|_2$ to estimate the unknown $f(x)$. The set $X_f$ is then estimated by
$X_{\hat{f}} := \{ x \in X : \| x_{init} - x \|_2 + \| x - x_{goal} \|_2 \le c_{best} \}$.
The Sample procedure of Informed-RRT* is modified to generate uniformly distributed samples from the hyperellipsoid $X_{\hat{f}}$.
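A minimal sketch of direct sampling from the informed set in 2D: a uniform sample from the unit disk is mapped through the rotation and scaling that define the ellipse with foci $x_{init}$ and $x_{goal}$, transverse diameter $c_{best}$, and conjugate diameter $\sqrt{c_{best}^2 - c_{min}^2}$; the helper names are assumptions.

```python
import math
import numpy as np

def sample_unit_disk(rng):
    """Uniform sample from the unit disk."""
    r, theta = math.sqrt(rng.random()), 2.0 * math.pi * rng.random()
    return np.array([r * math.cos(theta), r * math.sin(theta)])

def informed_sample(x_init, x_goal, c_best, rng=None):
    """Uniform sample from the 2D ellipse of states with heuristic cost at most c_best."""
    rng = np.random.default_rng() if rng is None else rng
    x_init, x_goal = np.asarray(x_init, float), np.asarray(x_goal, float)
    c_min = float(np.linalg.norm(x_goal - x_init))          # theoretical minimum cost
    center = (x_init + x_goal) / 2.0
    ang = math.atan2(x_goal[1] - x_init[1], x_goal[0] - x_init[0])
    C = np.array([[math.cos(ang), -math.sin(ang)],          # rotation aligning the first
                  [math.sin(ang),  math.cos(ang)]])         # axis with the start-goal line
    L = np.diag([c_best / 2.0,
                 math.sqrt(max(c_best**2 - c_min**2, 0.0)) / 2.0])  # semi-axes
    return C @ L @ sample_unit_disk(rng) + center

x_rand = informed_sample((0.0, 0.0), (10.0, 0.0), c_best=12.0)
```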
RRT*-Smart also implements biased sampling to accelerate the convergence rate. It tends to sample new states as close as possible to obstacles, inspired by visibility graph techniques, which optimize the existing path at turns.
Another approach is adaptive sampling, utilized by CE-RRT*, which applies the cross-entropy method to estimate the probability P ( X f ) . This estimator is then used to sample trajectories, resulting in a faster convergence rate.
Similar to RRT-Connect, there is also a bidirectional version of RRT* known as B-RRT*. B-RRT* employs a slight variation of the greedy RRT-Connect heuristic for connecting two trees. However, using two directional trees with a greedy connection heuristic does not guarantee asymptotic optimality.

4. The RRT-Based Motion Planning with Dynamics

It is challenging to apply RRT* to problems with dynamics due to the need for two domain-specific extension heuristics [40]: a distance metric and a steering procedure.
On the one hand, while the Euclidean distance is widely used in high-level path planning, it may not accurately reflect the dynamics of particular domains, especially in cases involving complex or underactuated dynamics. This can lead to poor exploration and slow convergence to optimal solutions.
On the other hand, a prevalent steering procedure in high-level path planning involves generating straight paths between vertices. However, steering safe trajectories between vertices in RRT* becomes challenging when considering low-level control synthesis.
In recent years, significant efforts have been made to adapt RRT-based algorithms to dynamic domains. This section reviews methods that address these challenges by leveraging either LQRs or CBFs.

4.1. The LQR-RRT* Algorithm

LQR-RRT* [40] is an extension of RRT* that integrates LQR into the Steer, Near, and Nearest procedures for optimal control synthesis. More specifically, given an equilibrium point $x$, $\mathrm{cost}(v, x)$ is defined as the LQR cost of moving from any state $v$ to $x$ under the LQR policy. The cost function over the neighborhood of $x$ can be expressed as
$\mathrm{cost}_x(v) = (v - x)^{\top} P\, (v - x)$,
where $P$ is a solution to Equation (2). Then, the three components of RRT* are modified as follows (a sketch of these procedures is given after the list):
  • LQRNearest: The $\mathrm{LQRNearest}(V, x)$ procedure identifies the node in $V$ that is closest to $x$ according to the cost function, i.e.,
    $x_{nearest} = \arg\min_{v \in V} \mathrm{cost}_x(v)$.
  • Near nodes: The $\mathrm{LQRNear}(V, x)$ procedure employs the cost function to find a set of near nodes $X_{near}$ in $T$ that are close to $x$, i.e.,
    $X_{near} := \{\, v \in V : \mathrm{cost}_x(v) \le r_n \,\}$,
    where $r_n$ is consistent with Equation (7).
  • Steer: Given two states $x, x'$, the LQR controller generates a sequence of optimal controls $\{ u_i \}$ that steers a state trajectory from $x$ to $x'$.
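A minimal sketch of the LQR-based queries above, where P is the Riccati solution of Equation (2) at the linearization around $x$ (for instance from the lqr helper sketched in Section 2.2), node states are NumPy arrays, and r_n plays the role of the radius in Equation (7).

```python
import numpy as np

def lqr_cost(P, v, x):
    """LQR cost-to-go from state v to x: (v - x)^T P (v - x)."""
    d = np.asarray(v, float) - np.asarray(x, float)
    return float(d @ P @ d)

def lqr_nearest(V, x, P):
    """LQRNearest: the node of V with the smallest LQR cost-to-go to x."""
    return min(V, key=lambda v: lqr_cost(P, v, x))

def lqr_near(V, x, P, r_n):
    """LQRNear: all nodes of V whose LQR cost-to-go to x is within r_n."""
    return [v for v in V if lqr_cost(P, v, x) <= r_n]
```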

4.2. The CBF-RRT Algorithm and Its Variations

CBF-RRT is an extension of RRT that incorporates CBFs into the steering function to ensure safety, eliminating the need for the Nearest or ObstacleFree procedures of RRT. The Steer procedure is modified as follows:
  • Safe steer: Given a state $x_0$, a time horizon $t_h$, and a control reference $k(x)$, the $\mathrm{SafeSteer}(x_0, t_h, k)$ procedure steers the state $x_0$ to an exploratory new state $x_{new}$. Concretely, it generates a sequence of control inputs $u(t)$ by solving a sequence of CBF-QPs (Equation (4)) that steer a collision-free trajectory to $x_{new}$ at time $t_0 + t_h$.
The overall algorithm of CBF-RRT is outlined in Algorithm 4.
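A minimal sketch of the SafeSteer procedure, assuming forward-Euler integration of the affine dynamics (1) and reusing the cbf_qp helper sketched in Section 2.3; the time step and the interface of the callables are assumptions.

```python
import numpy as np

def safe_steer(x0, t_h, k, dynamics, cbf_terms, dt=0.05, gamma=1.0):
    """Steer from x0 over a horizon t_h, filtering the reference controller k through the CBF-QP.

    dynamics(x, u) returns x_dot = f(x) + g(x) u; cbf_terms(x) returns (Lfh, Lgh, h).
    """
    x = np.asarray(x0, float)
    trajectory = [x.copy()]
    for _ in range(int(t_h / dt)):
        Lfh, Lgh, h = cbf_terms(x)
        u = cbf_qp(k(x), Lfh, Lgh, h, gamma=gamma)  # minimally perturbed safe input
        if u is None:                               # QP infeasible: abandon this extension
            return None
        x = x + dt * dynamics(x, u)
        trajectory.append(x.copy())
    return trajectory                               # the last state plays the role of x_new
```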
Similar to CBF-RRT, CBF-RRT* incorporates CBFs into the RRT* algorithm to ensure safety. In addition to safety, RRT* requires exact local motion planning, because it needs to steer $x_{near}$ to $x_{new}$ in the ChooseParent procedure and $x_{new}$ to $x_{near}$ in the Rewire procedure. Consequently, CBF-RRT* implements exact motion planning by solving a sequence of CLF-CBF-QPs (Equation (6)). The state trajectory from $x_{new}$ to $x_{near}$ can also be utilized to evaluate the cost between the two states, offering greater precision than the LQR cost-to-go function used in LQR-RRT*.
Algorithm 4: CBF-RRT
LQR-CBF-RRT* [48], an incremental version of CBF-RRT*, not only incorporates CBFs into the steering function for safety but also employs LQR for optimal control synthesis. Additionally, it proposes truncating the state trajectory that violates CBF constraints to reduce the burden of solving the QPs, leading to a new Steer procedure:
  • LQR-CBF-Steer: Given two states $x, x'$, the LQR controller generates a sequence of optimal controls $\{ u_i \}$ that steers a state trajectory from $x$ to $x'$. The CBF constraints are checked to ensure that the generated trajectory is collision-free. If none of the CBF constraints are violated, then $x'$ is added to the tree. Otherwise, the trajectory is truncated so that its end state $x_{new}$ is safe, and $x_{new}$ is then added to the tree.
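A minimal sketch of the truncation idea behind LQR-CBF-Steer: roll out the LQR tracking policy toward x′ and cut the trajectory at the first state where the (linear class-$\mathcal{K}$) CBF condition would be violated; helper names and the integration scheme follow the earlier sketches and are assumptions.

```python
import numpy as np

def lqr_cbf_steer(x_from, x_to, K, dynamics, cbf_terms, dt=0.05, steps=200, gamma=1.0):
    """Simulate u = -K (x - x_to) and truncate at the first CBF-constraint violation."""
    x, x_to = np.asarray(x_from, float), np.asarray(x_to, float)
    trajectory = [x.copy()]
    for _ in range(steps):
        u = -K @ (x - x_to)                    # LQR tracking of x_to
        Lfh, Lgh, h = cbf_terms(x)
        if Lfh + Lgh @ u < -gamma * h:         # CBF condition violated: truncate here
            break
        x = x + dt * dynamics(x, u)
        trajectory.append(x.copy())
    return trajectory                          # trajectory[-1] is the safe end state x_new
```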

5. Experimental Results

In this section, we conduct an experimental study to compare the performance of several baseline methods.

5.1. Path Planning

We evaluated RRT, RRT*, RRT-Connect, Informed-RRT*, and RRT*-Smart on two examples of planning the motions of rigid objects in 2D. The implementations of RRT, RRT*, RRT-Connect, RRT*-Smart, and Informed-RRT* were based on the reference code from the package https://github.com/zhm-real/PathPlanning (accessed on 10 December 2024). The hyperparameters for these methods were set as follows: step_len = 1, goal_sample_rate = 0.1, and search_radius = 12 (applied to the RRT*-based methods only).
Figure 1 illustrates the evolution of baseline methods across both examples. The comparison between RRT and RRT* demonstrates that RRT* can asymptotically optimize the existing path but its convergence rate is notably slow. The comparison between RRT and RRT-Connect demonstrates that the greedy behavior of RRT-Connect improves both the convergence rate and final solution quality of RRT. With the direct sampling of an ellipsoidal subset, Informed-RRT* improves the convergence rate and final solution quality compared to RRT*. Additionally, with intelligent sampling and path optimization, RRT*-Smart produces significantly shorter paths than RRT*.
Figure 2 presents the convergence pattern of the costs associated with baseline methods on these two examples. RRT*-Smart demonstrates the fastest convergence rate and the highest final solution quality among all baseline methods. This highlights the effectiveness of the intelligent sampling and path optimization techniques introduced by RRT*-Smart. Notably, RRT-Connect also performs exceptionally well, as its greedy connect heuristic significantly outperforms both RRT* and Informed-RRT*. Additionally, Informed-RRT* shows improvements in both convergence rate and final solution quality compared to RRT*.

5.2. Double Integrator Model

Following [48], we considered a double integrator model with linear dynamics,
$\ddot{x}_1 = u_1$,   $\ddot{x}_3 = u_2$,
with the state $x = [x_1, x_2, x_3, x_4]^{\top}$, where $[x_1, x_3]$ denotes the position and $[x_2, x_4]$ denotes the velocity. The control input $u = [u_1, u_2]^{\top}$ consists of the two accelerations.
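For reference, a sketch of this double integrator in state-space form, together with the LQR gain used by the LQR-based planners; the weighting matrices are placeholder assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# state x = [x1, x2, x3, x4] with position [x1, x3] and velocity [x2, x4]
A = np.array([[0, 1, 0, 0],   # x1_dot = x2
              [0, 0, 0, 0],   # x2_dot = u1
              [0, 0, 0, 1],   # x3_dot = x4
              [0, 0, 0, 0]],  # x4_dot = u2
             dtype=float)
B = np.array([[0, 0],
              [1, 0],
              [0, 0],
              [0, 1]], dtype=float)

Q, R = np.eye(4), np.eye(2)
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # steering gain and cost matrix for the LQR-based planners
```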
We present an experimental performance comparison between LQR-RRT* and LQR-CBF-RRT*. The LQR-RRT* and LQR-CBF-RRT* implementations used the reference code from the package https://github.com/mingyucai/LQR_CBF_rrtStar (accessed on 10 December 2024). The hyperparameters for these methods were set as follows: step_len = 10, goal_sample_rate = 0.1, and search_radius = 20.
Figure 3 shows the evolution of these two methods on the double integrator model. It can be observed that, given the same number of iterations, LQR-CBF-RRT* achieves a higher-quality final solution compared to LQR-RRT*. However, as is shown in Figure 4, LQR-RRT* achieves a faster convergence rate with respect to running time than LQR-CBF-RRT*. This difference may be attributed to the increased time complexity introduced by using CBFs in LQR-CBF-RRT*.

6. Conclusions

In this paper, we have summarized the current state of RRT-based algorithms and conducted an experimental study to compare the performance of several baseline methods. Our results confirm the good performance of the greedy connection heuristic introduced by RRT-Connect and of the intelligent sampling and path optimization introduced by RRT*-Smart, as well as the effectiveness of LQR-RRT* and LQR-CBF-RRT* for systems with dynamics. However, the field still has much to explore, and many new challenges, discoveries, and insights await further investigation.

Author Contributions

Conceptualization, Y.C. and X.Y.; investigation, Q.C.; writing—original draft preparation, Q.C.; writing—review and editing, Y.C. and X.Y.; visualization, Q.C.; supervision, X.Y.; project administration, Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. González, D.; Pérez, J.; Milanés, V.; Nashashibi, F. A Review of Motion Planning Techniques for Automated Vehicles. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1135–1145. [Google Scholar] [CrossRef]
  2. Liu, Y.; Badler, N.I. Real-Time Reach Planning for Animated Characters Using Hardware Acceleration. In Proceedings of the 16th International Conference on Computer Animation and Social Agents, CASA 2003, New Brunswick, NJ, USA, 7–9 May 2003; IEEE Computer Society: Washington, DC, USA, 2003; pp. 86–93. [Google Scholar] [CrossRef]
  3. Taylor, R.H.; Menciassi, A.; Fichtinger, G.; Fiorini, P.; Dario, P. Medical Robotics and Computer-Integrated Surgery. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer Handbooks; Springer: Berlin/Heidelberg, Germany, 2016; pp. 1657–1684. [Google Scholar] [CrossRef]
  4. Latombe, J. Motion Planning: A Journey of Robots, Molecules, Digital Actors, and Other Artifacts. Int. J. Robot. Res. 1999, 18, 1119–1128. [Google Scholar] [CrossRef]
  5. Beyer, T.; Jazdi, N.; Göhner, P.; Yousefifar, R. Knowledge-based planning and adaptation of industrial automation systems. In Proceedings of the 20th IEEE Conference on Emerging Technologies & Factory Automation, ETFA 2015, Luxembourg, 8–11 September 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–4. [Google Scholar] [CrossRef]
  6. Lozano-Pérez, T.; Wesley, M.A. An Algorithm for Planning Collision-Free Paths Among Polyhedral Obstacles. Commun. ACM 1979, 22, 560–570. [Google Scholar] [CrossRef]
  7. Schwartz, J.T.; Sharir, M. On the “piano movers” problem. II. General techniques for computing topological properties of real algebraic manifolds. Adv. Appl. Math. 1983, 4, 298–351. [Google Scholar] [CrossRef]
  8. Asano, T.; Asano, T.; Guibas, L.J.; Hershberger, J.; Imai, H. Visibility-Polygon Search and Euclidean Shortest Paths. In Proceedings of the 26th Annual Symposium on Foundations of Computer Science, Portland, OR, USA, 21–23 October 1985; IEEE Computer Society: Washington, DC, USA, 1985; pp. 155–164. [Google Scholar] [CrossRef]
  9. Alexopoulos, C.; Griffin, P.M. Path planning for a mobile robot. IEEE Trans. Syst. Man Cybern. 1992, 22, 318–322. [Google Scholar] [CrossRef]
  10. Canny, J.F. The Complexity of Robot Motion Planning; MIT Press: Cambridge, MA, USA, 1988. [Google Scholar]
  11. Brooks, R.A.; Lozano-Pérez, T. A subdivision algorithm in configuration space for findpath with rotation. IEEE Trans. Syst. Man Cybern. 1985, 15, 224–233. [Google Scholar] [CrossRef]
  12. Hart, P.E.; Nilsson, N.J.; Raphael, B. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107. [Google Scholar] [CrossRef]
  13. Koenig, S.; Likhachev, M.; Furcy, D. Lifelong Planning A*. Artif. Intell. 2004, 155, 93–146. [Google Scholar] [CrossRef]
  14. Koenig, S.; Likhachev, M. Fast replanning for navigation in unknown terrain. IEEE Trans. Robot. 2005, 21, 354–363. [Google Scholar] [CrossRef]
  15. Gavrilut, I.; Tepelea, L.; Gacsádi, A. CNN processing techniques for image-based path planning of a mobile robot. In Proceedings of the WSEAS 15th International Conference on Systems, Corfu Island, Greece, 14–16 July 2011; pp. 259–263. [Google Scholar]
  16. Al-Sagban, M.; Dhaouadi, R. Neural-based navigation of a differential-drive mobile robot. In Proceedings of the 12th International Conference on Control Automation Robotics & Vision, ICARCV 2012, Guangzhou, China, 5–7 December 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 353–358. [Google Scholar] [CrossRef]
  17. Engedy, I.; Horvath, G. Artificial neural network based mobile robot navigation. In Proceedings of the 2009 IEEE International Symposium on Intelligent Signal Processing, Budapest, Hungary, 26–28 August 2009; pp. 241–246. [Google Scholar] [CrossRef]
  18. Zhong, Y.; Shirinzadeh, B.; Yuan, X. Optimal Robot Path Planning with Cellular Neural Network. In Advanced Engineering and Computational Methodologies for Intelligent Mechatronics and Robotics; IGI Global: Hershey, PA, USA, 2013; pp. 19–38. [Google Scholar] [CrossRef]
  19. Hills, J.; Zhong, Y. Cellular neural network-based thermal modelling for real-time robotic path planning. Int. J. Agil. Syst. Manag. 2014, 7, 261–281. [Google Scholar] [CrossRef]
  20. Chen, C.; Seff, A.; Kornhauser, A.L.; Xiao, J. DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving. In Proceedings of the 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, 7–13 December 2015; IEEE Computer Society: Washington, DC, USA, 2015; pp. 2722–2730. [Google Scholar] [CrossRef]
  21. Bansal, M.; Krizhevsky, A.; Ogale, A.S. ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst. In Proceedings of the Robotics: Science and Systems XV, Freiburg im Breisgau, Germany, 22–26 June 2019. [Google Scholar] [CrossRef]
  22. Zhou, Q.; Gao, S.; Qu, B.; Gao, X.; Zhong, Y. Crossover Recombination-Based Global-Best Brain Storm Optimization Algorithm for Uav Path Planning. Proc. Rom. Acad. Ser. A Math. Phys. Tech. Sci. Inf. Sci. 2022, 23, 209–218. [Google Scholar]
  23. Roberge, V.; Tarbouchi, M.; Labonté, G. Comparison of Parallel Genetic Algorithm and Particle Swarm Optimization for Real-Time UAV Path Planning. IEEE Trans. Ind. Inform. 2013, 9, 132–141. [Google Scholar] [CrossRef]
  24. Kim, J.; Lee, J. Trajectory Optimization with Particle Swarm Optimization for Manipulator Motion Planning. IEEE Trans. Ind. Inform. 2015, 11, 620–631. [Google Scholar] [CrossRef]
  25. Morin, M.; Abi-Zeid, I.; Quimper, C. Ant colony optimization for path planning in search and rescue operations. Eur. J. Oper. Res. 2023, 305, 53–63. [Google Scholar] [CrossRef]
  26. Khatib, O. Real-time obstacle avoidance for manipulators and mobile robots. In Proceedings of the 1985 IEEE International Conference on Robotics and Automation, St. Louis, MO, USA, 25–28 March 1985; IEEE: Piscataway, NJ, USA, 1985; pp. 500–505. [Google Scholar] [CrossRef]
  27. Bahwini, T.; Zhong, Y.; Gu, C. Path planning in the presence of soft tissue deformation. Int. J. Interact. Des. Manuf. (IJIDeM) 2019, 13, 1603–1616. [Google Scholar] [CrossRef]
  28. Chiang, H.; Malone, N.; Lesser, K.; Oishi, M.; Tapia, L. Path-guided artificial potential fields with stochastic reachable sets for motion planning in highly dynamic environments. In Proceedings of the IEEE International Conference on Robotics and Automation, ICRA 2015, Seattle, WA, USA, 26–30 May 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 2347–2354. [Google Scholar] [CrossRef]
  29. Zhao, D.; Yi, J. Robot Planning with Artificial Potential Field Guided Ant Colony Optimization Algorithm. In Proceedings of the Advances in Natural Computation, Second International Conference, ICNC 2006, Xi’an, China, 24–28 September 2006; Proceedings, Part II; Lecture Notes in Computer Science. Jiao, L., Wang, L., Gao, X., Liu, J., Wu, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4222, pp. 222–231. [Google Scholar] [CrossRef]
  30. Hu, Y.; Yang, S.X. A Knowledge based Genetic Algorithm for Path Planning of a Mobile Robot. In Proceedings of the 2004 IEEE International Conference on Robotics and Automation, ICRA 2004, New Orleans, LA, USA, 26 April–1 May 2004; IEEE: Piscataway, NJ, USA, 2004; pp. 4350–4355. [Google Scholar] [CrossRef]
  31. LaValle, S.M. Rapidly-Exploring Random Trees: A New Tool for Path Planning; TR 98-11; Computer Science Department, Iowa State University: Ames, IA, USA, 1998; Available online: https://msl.cs.illinois.edu/~lavalle/papers/Lav98c.pdf (accessed on 10 December 2024).
  32. Karaman, S.; Frazzoli, E. Sampling-based algorithms for optimal motion planning. Int. J. Robot. Res. 2011, 30, 846–894. [Google Scholar] [CrossRef]
  33. Qureshi, A.H.; Ayaz, Y. Intelligent bidirectional rapidly-exploring random trees for optimal motion planning in complex cluttered environments. Robot. Auton. Syst. 2015, 68, 1–11. [Google Scholar] [CrossRef]
  34. Gammell, J.D.; Srinivasa, S.S.; Barfoot, T.D. Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2014, Chicago, IL, USA, 14–18 September 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 2997–3004. [Google Scholar] [CrossRef]
  35. Islam, F.; Nasir, J.; Malik, U.; Ayaz, Y.; Hasan, O. RRT*-Smart: Rapid convergence implementation of RRT* towards optimal solution. In Proceedings of the 2012 IEEE International Conference on Mechatronics and Automation, Chengdu, China, 5–8 August 2012; pp. 1651–1656. [Google Scholar] [CrossRef]
  36. Kobilarov, M. Cross-Entropy Randomized Motion Planning. In Proceedings of the Robotics: Science and Systems VII, University of Southern California, Los Angeles, CA, USA, 27–30 June 2011; Durrant-Whyte, H.F., Roy, N., Abbeel, P., Eds.; MIT Press: Cambridge, MA, USA, 2012. [Google Scholar] [CrossRef]
  37. Kuffner, J.J., Jr.; LaValle, S.M. RRT-Connect: An Efficient Approach to Single-Query Path Planning. In Proceedings of the 2000 IEEE International Conference on Robotics and Automation, ICRA 2000, San Francisco, CA, USA, 24–28 April 2000; IEEE: Piscataway, NJ, USA, 2000; pp. 995–1001. [Google Scholar] [CrossRef]
  38. Jordan, M.; Perez, A. Optimal Bidirectional Rapidly-Exploring Random Trees; Technical Report MIT-CSAIL-TR-2013-021; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology: Cambridge, MA, USA, 2013. [Google Scholar]
  39. Webb, D.J.; van den Berg, J. Kinodynamic RRT*: Asymptotically optimal motion planning for robots with linear dynamics. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 5054–5061. [Google Scholar] [CrossRef]
  40. Perez, A.; Platt, R., Jr.; Konidaris, G.D.; Kaelbling, L.P.; Lozano-Pérez, T. LQR-RRT*: Optimal sampling-based motion planning with automatically derived extension heuristics. In Proceedings of the IEEE International Conference on Robotics and Automation, ICRA 2012, St. Paul, MN, USA, 14–18 May 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 2537–2542. [Google Scholar] [CrossRef]
  41. Yang, G.; Vang, B.; Serlin, Z.; Belta, C.; Tron, R. Sampling-based Motion Planning via Control Barrier Functions. In Proceedings of the 2019 3rd International Conference on Automation, Control and Robots, ICACR 2019, Prague, Czech Republic, 11–13 October 2019; pp. 22–29. [CrossRef]
  42. Ahmad, A.; Belta, C.; Tron, R. Adaptive Sampling-based Motion Planning with Control Barrier Functions. In Proceedings of the 61st IEEE Conference on Decision and Control, CDC 2022, Cancun, Mexico, 6–9 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 4513–4518. [Google Scholar] [CrossRef]
  43. Hespanha, J.P. Linear Systems Theory, 2nd ed.; Princeton University Press: Princeton, NJ, USA, 2018. [Google Scholar]
  44. Blanchini, F.; Miani, S. Set-Theoretic Methods in Control, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  45. Xu, X.; Tabuada, P.; Grizzle, J.W.; Ames, A.D. Robustness of Control Barrier Functions for Safety Critical Control. In Proceedings of the 5th IFAC Conference on Analysis and Design of Hybrid Systems, ADHS 2015, Atlanta, GA, USA, 14–16 October 2015; IFAC-PapersOnLine. Egerstedt, M., Wardi, Y., Eds.; Elsevier: Amsterdam, The Netherlands, 2015; Volume 48, pp. 54–61. [Google Scholar] [CrossRef]
  46. Khalil, H. Nonlinear Systems: Global Edition; Pearson: London, UK, 2015. [Google Scholar]
  47. LaValle, S.M.; Kuffner, J.J., Jr. Randomized Kinodynamic Planning. Int. J. Robot. Res. 2001, 20, 378–400. [Google Scholar] [CrossRef]
  48. Yang, G.; Cai, M.; Ahmad, A.; Belta, C.; Tron, R. Efficient LQR-CBF-RRT*: Safe and Optimal Motion Planning. arXiv 2023, arXiv:2304.00790. [Google Scholar] [CrossRef]
Figure 1. The evolution of baseline methods on two examples. In the visual representation, green edges represent sampled trajectories, the red trajectory highlights the final solution achieved after 1000 iterations, and gray areas represent obstacles.
Figure 2. (Top row) Costs are plotted against the running time showing the convergence rate of baseline methods, averaged over 50 runs. (Bottom row) The distribution over the final costs over 50 runs.
Figure 3. The evolution of LQR-RRT* and LQR-CBF-RRT* on the double integrator model. In the visual representation, green edges represent sampled trajectories, the red trajectory highlights the final solution achieved after 1000 iterations, and gray areas represent obstacles.
Figure 4. (Left) Costs are plotted against the running time showing the convergence rate of LQR-RRT* and LQR-CBF-RRT*, averaged over 50 runs. (Right) The distribution over the final costs over 50 runs.

