Article

Risk-Aware Reinforcement Learning with Dynamic Safety Filter for Collision Risk Mitigation in Mobile Robot Navigation

1
School of Mechanical and Electrical Engineering, Soochow University, Suzhou 215137, China
2
Jiangsu Eazytec Co., Ltd., Wuxi 214205, China
*
Authors to whom correspondence should be addressed.
Sensors 2025, 25(17), 5488; https://doi.org/10.3390/s25175488
Submission received: 30 July 2025 / Revised: 25 August 2025 / Accepted: 1 September 2025 / Published: 3 September 2025
(This article belongs to the Special Issue Indoor Localization Technologies and Applications)

Abstract

Mobile robots face collision-risk mitigation challenges in dynamic environments, which requires addressing the safety and adaptability shortcomings of traditional navigation methods. Traditional methods rely on predefined rules, making it difficult to achieve flexible, safe, and real-time obstacle avoidance in complex, dynamic environments. To address this issue, a risk-aware, dynamic, adaptive regulation barrier policy optimization (RADAR-BPO) method is proposed, combining proximal policy optimization (PPO) with the control barrier function (CBF). RADAR-BPO generates exploratory actions using PPO and constructs a real-time safety filter using the CBF. The filter uses quadratic programming to minimally modify risky actions, thereby ensuring safe obstacle avoidance while maintaining navigation efficiency. Testing across three phased learning environments in the ROS Gazebo simulation platform demonstrated that the proposed method achieves an obstacle avoidance success rate of nearly 90% in a complex, dynamic, multi-obstacle environment and improves the overall mission success rate, validating its robustness and effectiveness in complex dynamic scenarios.

1. Introduction

Currently, the convergence of artificial intelligence (AI), the Internet of Things (IoT), and automation technologies is facilitating intelligent transformation across various industries [1,2,3]. Robotic technology represents a key driver of this evolution [4]. Advancements in intelligent algorithms and hardware enable robots to undertake increasingly complex tasks. As a critical subset of robots, mobile robots are being deployed extensively in industry [5,6], logistics [7], and agriculture [8].
Despite their diverse designs and functionalities, mobile robots commonly face challenges in safe obstacle avoidance and efficient navigation during operation [9]. In dynamic and complex environments, they must accurately identify obstacles, plan safe trajectories, and make real-time adjustments [10]. Ensuring task completion necessitates advanced sensor systems for real-time environmental perception [11]. Moreover, using intelligent algorithms to process information, execute dynamic path planning, and adapt to uncertainties is critical, and enhancing these capabilities is key to ensuring efficient and safe operation in complex environments [12,13,14].
Beyond traditional obstacle avoidance, navigating shared environments requires socially aware navigation capabilities [15]. Robots must not only avoid collisions but also adhere to social norms, exhibit predictable behavior, and respect personal space to achieve smooth and comfortable human–robot coexistence [16,17]. This involves understanding and predicting the motivations and intentions of human behavior and has become a research focus in the field of human–robot interaction (HRI) [18,19]. However, although the ultimate goal of social navigation is to achieve natural and comfortable human–machine coexistence, collision risk mitigation remains the most basic and critical safety prerequisite. Any social navigation strategy must be based on absolute collision avoidance in order to further optimize comfort and efficiency [20,21].
While techniques have previously been developed for obstacle avoidance and navigation by mobile robots, significant limitations persist [22,23]. Traditional control methods such as the artificial potential field (APF) method [24] and dynamic window approach (DWA) [25], often reliant on predefined rules and fixed path planning, perform adequately in simple tasks but lack adaptability and flexibility in complex, dynamic environments [26,27,28]. Their limited capacity for autonomous adjustment frequently results in decision errors or inefficiency when handling intricate scenarios [29]. Furthermore, the safety assurance mechanisms of conventional obstacle avoidance strategies remain inadequate, leading to elevated collision risks during operation. These constraints hinder their widespread deployment in complex and variable settings [30,31].
Advances in AI have facilitated the application of reinforcement learning (RL) in mobile robotics [32]. As a machine learning approach, RL demonstrates significant potential for use in autonomous robot decision-making [33]. It addresses key challenges including task execution policy optimization, environmental adaptability enhancement, and autonomous decision-making for complex tasks [34,35]. By learning optimal policies through environmental interaction, RL enables effective decision-making in dynamic and uncertain environments [36].
To enhance the time efficiency for mobile robot navigation in crowded environments, Zhou et al. [37] proposed a social graph-based double dueling deep Q-Network (DQN). This approach employs a social attention mechanism to extract effective graph representations, optimizes the state–action value approximator, and leverages simulated experiences generated by a learned environmental model for further refinement, demonstrating significantly improved success rates in crowded navigation tasks. Li et al. [38] introduced a fused deep deterministic policy gradient (DDPG) method, integrating a multi-branch deep learning network with a time-critical reward function, which effectively enhanced the convergence velocity and navigation performance in complex environments.
On the other hand, the control barrier function (CBF) has garnered attention in regard to obstacle avoidance control for mobile robots [39,40,41]. Originating from control theory, the CBF provides robust control guarantees for safety-critical systems operating under constraints [42]. For mobile robot obstacle avoidance, the CBF enforces strict safety conditions during motion by constructing safety constraints [43,44].
In order to further enhance the obstacle avoidance performance of the CBF, researchers have proposed various improvements and hybrid approaches. Singletary et al. [45] conducted a comparative analysis of the performance of the CBF and artificial potential fields (APFs) in robotic obstacle avoidance, demonstrating that the CBF generates smoother trajectories, effectively mitigates oscillations, and offers enhanced safety guarantees. Jian et al. [46] introduced a dynamic control barrier function, integrating it with model predictive control (MPC) to ensure collision-free trajectories for robots operating in uncertain and dynamic environments.
This paper proposes a novel safe reinforcement learning framework integrating the control barrier function (CBF) and proximal policy optimization (PPO) to reduce collisions in navigation for a mecanum wheel robot. Implemented within the ROS Gazebo simulation environment, the framework leverages CBF-based safety constraints formulated as a quadratic programming problem. This dynamically adjusts potentially unsafe actions generated by the PPO policy, guaranteeing collision mitigation while preserving motion efficiency. Comparative experiments against baseline algorithms (PPO, DQN, and DDPG) are conducted to rigorously evaluate the method’s effectiveness across key safety and navigation performance metrics.
The remainder of this paper is organized as follows: Section 2 introduces the robot model and basic methods; Section 3 presents the main research content and methods; Section 4 describes the simulation results and provides a discussion; and the final section, Section 5, concludes this paper.

2. Preliminary Knowledge

This section establishes the theoretical groundwork for the mobile robot platform, detailing its kinematic model and core implementation approaches. To support the development of subsequent control strategies and navigation algorithms, the unique omnidirectional motion capabilities inherent to the mecanum wheel robot are mathematically characterized. This section then presents fundamental obstacle avoidance techniques utilizing relevant control methodologies.

2.1. Kinematic Model of Robot

The mecanum wheel robot can achieve omnidirectional movement due to its unique wheel hub structure. The passive rollers installed at a 45° angle around each wheel allow the robot to translate or rotate in any direction without changing the orientation of the robot. Its kinematic model defines the relationship between the wheel velocities and the robot’s holistic motion. This representation is illustrated in Figure 1.

2.1.1. Forward and Inverse Kinematics Models

The parameters related to the robot chassis are defined in Figure 1. In the model, $W_a$ represents half of the body length (measured between the front and rear wheel centers), and $W_b$ represents half of the body width (measured between the left and right wheel centers). The robot's linear velocity is $v_{linear}$, and $v_x$ and $v_y$ are the velocities in the x and y directions, respectively.
In addition, $v_n$ ($n = 1, 2, 3, 4$) is the linear velocity of each wheel, $v_n^{\omega}$ is the angular velocity of the chassis corresponding to each wheel, and $V_n^{R}$ represents the forward rolling velocity of each wheel. In summary, the forward kinematics model of the robot can be described as
$$\begin{bmatrix} v_x \\ v_y \\ \omega \end{bmatrix} = J \begin{bmatrix} V_1^R & V_2^R & V_3^R & V_4^R \end{bmatrix}^T,$$
$$J = \frac{1}{4}\begin{bmatrix} 1 & 1 & 1 & 1 \\ -1 & 1 & 1 & -1 \\ -\frac{1}{d} & \frac{1}{d} & -\frac{1}{d} & \frac{1}{d} \end{bmatrix},$$
where $d = W_a + W_b$ represents the characteristic length of the robot chassis. The matrix $J$ establishes the mapping relationship between the wheel velocities and the chassis motion in the body-fixed frame. The corresponding inverse kinematics of the mecanum wheel robot chassis are shown as
$$\begin{cases} V_1^R = v_x - v_y - (W_a + W_b)\,\omega \\ V_2^R = v_x + v_y + (W_a + W_b)\,\omega \\ V_3^R = v_x + v_y - (W_a + W_b)\,\omega \\ V_4^R = v_x - v_y + (W_a + W_b)\,\omega \end{cases}.$$
Therefore, the velocity of the robot chassis and the velocity of the four wheels can be converted into each other using the forward and inverse kinematics, which facilitates control of the robot’s trajectory. Based on this, a schematic diagram of the robot’s wheel odometry model is shown in Figure 2.
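For illustration, the following minimal Python sketch implements the forward and inverse kinematic mappings above. The chassis dimensions are the values later listed in Table 2; this is an illustrative sketch, not the authors' implementation.

import numpy as np

# Chassis dimensions from Table 2 (half body length W_a, half body width W_b).
W_a, W_b = 0.195, 0.172
d = W_a + W_b                    # characteristic length

# Forward kinematics matrix J: wheel velocities [V1..V4] -> chassis twist (vx, vy, omega).
J = 0.25 * np.array([
    [ 1.0,    1.0,    1.0,    1.0  ],
    [-1.0,    1.0,    1.0,   -1.0  ],
    [-1.0/d,  1.0/d, -1.0/d,  1.0/d],
])

def forward_kinematics(wheel_v):
    """Map the four wheel velocities to the body-frame twist (vx, vy, omega)."""
    return J @ np.asarray(wheel_v)

def inverse_kinematics(vx, vy, omega):
    """Map a desired chassis twist back to the four wheel velocities."""
    return np.array([
        vx - vy - d * omega,
        vx + vy + d * omega,
        vx + vy - d * omega,
        vx - vy + d * omega,
    ])

# Round trip: the forward map recovers the commanded twist.
print(forward_kinematics(inverse_kinematics(0.2, 0.1, 0.3)))  # ~[0.2, 0.1, 0.3]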

2.1.2. Ideal Model of Robot’s Odometry

Under ideal conditions, the relationship between the robot's current position, $P_k$, and the previous position, $P_{k-1}$, is expressed by the offset $\Delta_k$. The next posture, $P_{k+1}$, is the current posture, $P_k$, plus the offset $\Delta_{k+1}$, and the corresponding expression is
$$P_{k+1} = P_k + \Delta_{k+1}\,\Delta t_{k+1},$$
where $P_k = \begin{bmatrix} x_k & y_k & \theta_k \end{bmatrix}^T$ and $\Delta_{k+1} = \begin{bmatrix} v_{x,k+1} & v_{y,k+1} & \omega_{k+1} \end{bmatrix}^T$. Assuming that the time interval $\Delta t_{k+1}$ between two points is very small, the next position can be expressed in terms of the linear and angular velocity offsets applied to the current position, and the expanded component form is as follows:
$$\begin{cases} x_{k+1} = x_k + v_{x,k+1} \\ y_{k+1} = y_k + v_{y,k+1} \\ \theta_{k+1} = \theta_k + \omega_{k+1} \end{cases}.$$
Therefore, the derived forward and inverse kinematic models, combined with the discrete-time odometry update equations, provide a complete mathematical framework to predict the robot’s chassis motion based on the wheel velocities, which forms the basis for controlling the robot’s trajectory.
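A minimal sketch of this discrete-time odometry update is given below. The explicit time step and the body-to-world rotation are assumptions added here to make the example runnable; the ideal model above folds them into the offset term.

import numpy as np

def odometry_step(pose, twist, dt=0.01):
    """pose = (x, y, theta) in the world frame, twist = (vx, vy, omega) in the body frame."""
    x, y, theta = pose
    vx, vy, omega = twist
    # Rotate the body-frame velocity into the world frame before integrating.
    x += (vx * np.cos(theta) - vy * np.sin(theta)) * dt
    y += (vx * np.sin(theta) + vy * np.cos(theta)) * dt
    theta += omega * dt
    return (x, y, theta)

pose = (0.0, 0.0, 0.0)
for _ in range(100):                       # integrate 1 s of a constant twist
    pose = odometry_step(pose, (0.2, 0.0, 0.1))
print(pose)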

2.2. Control Barrier Function

In the process of robot movement and navigation, obstacle avoidance and velocity limiting are indispensable to prevent uncontrollable behavior. This section mainly introduces the control barrier function (CBF), which is often used in the control field as a controller and can also be used to constrain a robot’s actions.

2.2.1. Definition of Safety Set

For a dynamic system, the safety set D is defined as the set of safe states in the system. A diagram of the safety set is shown in Figure 3, and D is defined as
$$D = \{\, x \in \mathbb{R}^n \mid h(x) \geq 0 \,\}, \quad \partial D = \{\, x \in \mathbb{R}^n \mid h(x) = 0 \,\}, \quad \mathrm{Int}(D) = \{\, x \in \mathbb{R}^n \mid h(x) > 0 \,\},$$
where the safe set $D$ contains the states $x \in \mathbb{R}^n$. When the system state $x$ is safe, it lies inside the safe set $D$ or exactly on its boundary, which corresponds to $h(x) \geq 0$. Conversely, $h(x) < 0$ means that the state $x$ is outside the safe set.

2.2.2. Definition of Control Barrier Function

Consider a control affine system and the corresponding dynamic system in the nonlinear case, which can be expressed as
$$\dot{x} = F(x, u),$$
$$\dot{x} = f(x) + g(x)u,$$
where $u \in \mathbb{R}^n$ and $F$ is Lipschitz continuous. The term $f$ describes the behavior of the system when there is no control input, and $g$ represents how the control input $u$ acts on the system and affects its evolution.
For $h(x)$ with $x \in D$ and $h(x) \geq 0$, there exists $\alpha$ such that any state $x$ in the set $D$ satisfies the condition
$$\dot{h}(x, u) = L_f h(x) + L_g h(x)\,u \geq -\alpha\,h(x),$$
where $h(x)$ is considered to be a CBF, and $\alpha$ is an extended class-$\mathcal{K}$ function.
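As a simple illustration of this condition, the following sketch evaluates the CBF inequality for a single-integrator system with the distance-based barrier used later in Section 3.2; the obstacle position, safety radius, and $\alpha$ below are illustrative assumptions, not values from the paper.

import numpy as np

x_obs = np.array([1.0, 0.0])       # assumed obstacle position
r_safe, alpha = 0.5, 1.0           # assumed safety radius and CBF parameter

def h(x):
    # Barrier value: positive inside the safe set, zero on its boundary.
    return float(np.dot(x - x_obs, x - x_obs)) - r_safe**2

def cbf_condition_holds(x, u):
    # For x_dot = u we have L_f h = 0 and L_g h = 2 (x - x_obs)^T, so the
    # condition L_f h + L_g h u >= -alpha * h(x) reduces to the check below.
    h_dot = 2.0 * float(np.dot(x - x_obs, u))
    return h_dot >= -alpha * h(x)

x = np.array([0.0, 0.0])
print(cbf_condition_holds(x, np.array([1.0, 0.0])))   # fast approach toward the obstacle -> False
print(cbf_condition_holds(x, np.array([-0.3, 0.0])))  # retreating from the obstacle -> True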

2.3. Basic Theory of Reinforcement Learning

Reinforcement learning (RL) is a machine learning approach characterized by an agent learning to optimize its actions through trial-and-error interactions with a dynamic environment, aimed at discovering strategies that maximize the long-term returns. Its core feature is learning through a trial-and-error mechanism, relying on reward signals to guide behavior optimization.
RL usually models a problem as a Markov decision process (MDP). An MDP can be represented by a five-tuple $\langle S, A, P, R, \gamma \rangle$.
$S$: The state space represents the set of all possible states of the agent in the environment. A state $s \in S$ describes a specific situation in the environment at a certain moment and contains all the information needed by the agent.
$A$: The action space contains the actions the agent can take when interacting with the environment and is usually written as $A(s)$, where $a \in A(s)$ is a possible action in state $s$.
$P$: The state transition function represents the probability of transitioning to state $s_{t+1}$ when action $a$ is executed in state $s_t$: that is, $P(s_{t+1} \mid s_t, a)$.
$R$: The reward function, written as $R(s, a)$, is the immediate reward feedback obtained by the agent when it performs action $a$ in state $s$.
$\gamma$: The discount factor, $\gamma \in [0, 1]$, is used to weight future cumulative rewards. A larger discount factor suits tasks that focus on long-term goals, while a smaller discount factor suits tasks that emphasize immediate feedback.
A robot can be regarded as an intelligent agent in reinforcement learning. The process of training it to interact with the environment involves RL. The corresponding environmental interaction is shown in Figure 4.
Figure 4 depicts the iterative learning process through which an agent learns to interact with its environment within an Actor–Critic framework. The Actor generates an action and communicates it to the robot agent. The robot explores the environment and obtains the corresponding rewards and states, which are then passed to the Critic network for value calculation. The Critic uses the computed values to update the Actor, which then outputs an improved action strategy.
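This interaction loop can be sketched as follows, with a standard Gymnasium task standing in for the Gazebo robot environment and placeholder actor/critic functions instead of trained networks; it is an assumed minimal setup, not the training code used in this work.

import gymnasium as gym

env = gym.make("CartPole-v1")        # stand-in environment for the robot simulator

def actor(state):                    # placeholder policy: random action
    return env.action_space.sample()

def critic(state):                   # placeholder value estimate
    return 0.0

state, _ = env.reset(seed=0)
for t in range(200):
    action = actor(state)                                            # Actor proposes an action
    next_state, reward, terminated, truncated, _ = env.step(action)  # environment returns reward and new state
    # TD signal the Critic would learn from (zero-valued placeholder critic here).
    td_error = reward + 0.99 * critic(next_state) - critic(state)
    state = next_state
    if terminated or truncated:
        state, _ = env.reset()
env.close()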

3. Main Research Content and Methods

This section mainly describes the motion control method based on proximal policy optimization (PPO) and the design of the real-time constraint mechanism using the CBF. On this basis, a novel fusion framework is constructed to implement a strategy for safe reinforcement learning, so as to ensure that the robot maintains safety performance while exerting its exploration advantages and further improve the stability of reinforcement learning.

3.1. Proximal Policy Optimization

The PPO objective function and Actor–Critic architecture are defined as key mechanisms for generating efficient exploratory navigation strategies for omnidirectional mobile robotic platforms. Their core function is to learn adaptive obstacle avoidance strategies for use in complex or dynamic scenarios and output robot velocity commands that balance navigation task completion and safety.

3.1.1. PPO Principle Description

As an Actor–Critic derivative, PPO enforces policy update constraints via a trust-region-inspired mechanism to ensure monotonic improvement. By introducing an objective function clipping mechanism, it achieves efficient learning while ensuring training stability. Its core components and formulas are as follows:
$$\text{Actor:} \quad \pi_\theta(a_t \mid s_t),$$
$$\text{Critic:} \quad V_\phi(s_t),$$
where $\theta$ and $\phi$ are the parameters of the policy network and the value network, respectively, the current state is $s_t$, and $a_t$ is the generated action.
First, we need to calculate the generalized advantage estimation (GAE): compute the TD error $\delta_t$ based on $V_\phi(s_t)$ and then the advantage $\hat{A}_t$ based on $\delta_t$. The specific equations are
$$\delta_t = r_t + \gamma V_\phi(s_{t+1}) - V_\phi(s_t),$$
$$\hat{A}_t = \delta_t + \gamma\lambda\,\delta_{t+1} + \cdots + (\gamma\lambda)^{T-t+1}\delta_{T-1},$$
with $\gamma \in [0, 1]$ being the discount factor and $\lambda \in [0, 1]$ the GAE parameter.
The importance sampling ratio in the policy network update, $r_t(\theta)$, is
$$r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}.$$
Next, we need to calculate the clipping objective function, which is also one of the core components of PPO. It can be expressed as
$$L_{clip}^{PPO}(\theta) = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\hat{A}_t,\ \mathrm{clip}\!\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right],$$
where $\epsilon$ is the clipping threshold, and it limits the importance sampling ratio $r_t(\theta)$ to within the interval $[1-\epsilon,\, 1+\epsilon]$.
For an action with a positive advantage ($\hat{A}_t > 0$), the objective function is clipped to prevent the policy from increasing the probability of the action too aggressively ($r_t(\theta) > 1+\epsilon$). Conversely, for an action with a negative advantage ($\hat{A}_t < 0$), the objective function is clipped to prevent the probability from decreasing too much ($r_t(\theta) < 1-\epsilon$).
In the Critic network update, the value loss function that needs to be calculated is as follows:
$$L_{vf}^{PPO}(\phi) = \mathbb{E}_t\!\left[\left(V_\phi(s_t) - R_t\right)^2\right],$$
where $R_t = \sum_{k=0}^{T} \gamma^k r_{t+k}$ is the cumulative return. At the same time, PPO introduces an entropy regularization term to enhance the exploration of strategies:
$$L_{ent}^{PPO}(\theta) = \mathbb{E}_t\!\left[H\!\left(\pi_\theta(\cdot \mid s_t)\right)\right],$$
where $H(\pi_\theta(\cdot \mid s_t))$ is the entropy of the policy.
Then the total loss function can be calculated as
$$L_{total}^{PPO}(\theta, \phi) = \mathbb{E}_t\!\left[L_{clip}^{PPO}(\theta) - c_1 L_{vf}^{PPO}(\phi) + c_2 L_{ent}^{PPO}(\theta)\right],$$
where $c_1$ and $c_2$ are weight parameters.
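The quantities above can be assembled on a toy batch as in the following sketch, where random tensors stand in for the Actor/Critic outputs and the collected trajectory; it illustrates the GAE computation and the clipped total loss under those assumptions, not the actual training pipeline.

import torch

torch.manual_seed(0)
T, eps, gamma, lam, c1, c2 = 8, 0.15, 0.99, 0.95, 0.5, 0.01

rewards  = torch.rand(T)                              # placeholder rewards r_t
values   = torch.rand(T + 1, requires_grad=True)      # stand-in for V_phi(s_0..s_T)
logp_new = torch.randn(T, requires_grad=True)         # log pi_theta(a_t|s_t)
logp_old = logp_new.detach() + 0.1 * torch.randn(T)   # log pi_theta_old(a_t|s_t)

# TD errors and generalized advantage estimation.
deltas = rewards + gamma * values[1:].detach() - values[:-1].detach()
adv, gae = torch.zeros(T), 0.0
for t in reversed(range(T)):
    gae = deltas[t] + gamma * lam * gae
    adv[t] = gae

# Cumulative return R_t = sum_k gamma^k r_{t+k}, used by the value loss.
returns, running = torch.zeros(T), 0.0
for t in reversed(range(T)):
    running = rewards[t] + gamma * running
    returns[t] = running

ratio   = torch.exp(logp_new - logp_old)                                 # r_t(theta)
l_clip  = torch.min(ratio * adv,
                    torch.clamp(ratio, 1 - eps, 1 + eps) * adv).mean()   # clipped surrogate
l_vf    = ((values[:-1] - returns) ** 2).mean()                          # value loss
entropy = -(logp_new.exp() * logp_new).mean()                            # crude stand-in for H(pi_theta)
l_total = -(l_clip - c1 * l_vf + c2 * entropy)   # total objective, negated for gradient descent
l_total.backward()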

3.1.2. PPO Algorithm Flow

The PPO algorithm is an efficient policy gradient method. By introducing an objective function clipping mechanism, it can achieve efficient learning while ensuring training stability. The core idea of PPO is to limit the step size of policy updates to avoid performance crashes during training. The algorithm flow is shown in Algorithm 1.
Algorithm 1 Proximal policy optimization strategy.
Input: Initial policy $\theta_0$, value $\phi_0$, $\epsilon$, $\gamma$, $\lambda$, $c_1$, $c_2$, $K$, $M$
Output: Optimized $\theta^*$, $\phi^*$
1: Initialize $\theta \leftarrow \theta_0$, $\phi \leftarrow \phi_0$
2: for $k = 0$ to $K$ do
3:   Collect $D_k$ via $\pi_\theta$
4:   for each $t$ in $D_k$ do
5:     $R_t \leftarrow \sum_{l=t}^{T} \gamma^{l-t} r_l$
6:     $\hat{A}_t \leftarrow \sum_{l=0}^{T-t} (\gamma\lambda)^l \delta_{t+l}$
7:   end for
8:   $\theta_{old} \leftarrow \theta$
9:   for episode $= 1$ to $M$ do
10:    Sample minibatch $B$ from $D_k$
11:    $L_{total}^{PPO}(\theta, \phi) \leftarrow 0$
12:    for each $(s_t, a_t, R_t, \hat{A}_t)$ in $B$ do
13:      $r_t(\theta) \leftarrow \pi_\theta(a_t \mid s_t) / \pi_{\theta_{old}}(a_t \mid s_t)$
14:      $L_{clip}^{PPO}(\theta) \leftarrow \min\left(r_t(\theta)\hat{A}_t,\ \mathrm{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon)\hat{A}_t\right)$
15:      $L_{vf}^{PPO}(\phi) \leftarrow (V_\phi(s_t) - R_t)^2$
16:      $L_{ent}^{PPO}(\theta) \leftarrow \mathbb{E}_t\left[H(\pi_\theta(\cdot \mid s_t))\right]$
17:      $L_{total}^{PPO}(\theta, \phi) \leftarrow L_{total}^{PPO}(\theta, \phi) - L_{clip}^{PPO}(\theta) + c_1 L_{vf}^{PPO}(\phi) - c_2 L_{ent}^{PPO}(\theta)$
18:    end for
19:    $\theta^* \leftarrow \theta - \eta_\theta \nabla_\theta L_{total}^{PPO}$
20:    $\phi^* \leftarrow \phi - \eta_\phi \nabla_\phi L_{total}^{PPO}$
21:  end for
22: end for
23: return $\theta^* \leftarrow \theta$, $\phi^* \leftarrow \phi$

3.2. Application and Implementation of CBF

The CBF is specifically used for its safety conditions, which are based on the kinematic model of the omnidirectional mobile robot used in this study. The focus is on demonstrating how this function acts as a real-time safety filter, applying online corrections to the original motion output by the PPO algorithm to achieve collision avoidance during navigation. Its quadratic programming formulation explicitly serves a single goal: to minimize the safety corrections to the PPO-determined motion while ensuring that the robot maintains its specific safety distance and stays within its physical motion limits.

3.2.1. Safe Sets and Safe Functions

The set of states, D , in which the robot can safely operate and the barrier function h ( p ) are defined as
$$D = \{\, p \in \mathbb{R}^n : h(p) \geq 0 \,\},$$
$$h(p) = \|p - p_{obs}\|^2 - r_{safe}^2,$$
where $p = [x, y]$ represents the coordinates of the robot in the global coordinate system, $p_{obs}$ represents the coordinates of the nearest obstacle, and $r_{safe}$ is the safety distance threshold.

3.2.2. Implementation of CBF Safety Constraints

The time derivative of the CBF can be calculated as
$$\frac{dh}{dt} = \nabla h \cdot \frac{dp}{dt} = \left[\frac{\partial h}{\partial x},\ \frac{\partial h}{\partial y}\right] \cdot [\dot{x},\ \dot{y}]^T,$$
where $\frac{\partial h}{\partial x} = 2(x - x_{obs})$ and $\frac{\partial h}{\partial y} = 2(y - y_{obs})$. Then substitute the robot motion model into the equation:
$$\frac{dh}{dt} = 2v\left[(x - x_{obs})\cos\theta + (y - y_{obs})\sin\theta\right].$$
Since the CBF needs to satisfy this constraint, substitute Equation (22) into Equation (9):
$$2v\left[(x - x_{obs})\cos\theta + (y - y_{obs})\sin\theta\right] \geq -\alpha\left[(x - x_{obs})^2 + (y - y_{obs})^2 - r_{safe}^2\right].$$
Set a relative position vector, $\Delta p$, and the unit vector of the robot's forward direction, $i$, as
$$\Delta p = \left[x - x_{obs},\ y - y_{obs}\right]^T,$$
$$i = \left[\cos\theta,\ \sin\theta\right]^T.$$
The corresponding constraint can be transformed into
$$2v\,(\Delta p \cdot i) \geq -\alpha\left(\|\Delta p\|^2 - r_{safe}^2\right).$$
In summary, the overall constraint-solving framework of the CBF can be written as
$$(v, \omega)^* = \arg\min_{v,\,\omega}\ \left(v - a_t(v)\right)^2 + \left(\omega - a_t(\omega)\right)^2 \quad \text{s.t.} \quad 2v\,(\Delta p \cdot i) \geq -\alpha\left(\|\Delta p\|^2 - r_{safe}^2\right),\ \ 0 \leq v \leq v_{\max},\ \ |\omega| \leq \omega_{\max},$$
where $v_{\max}$ is the maximum linear velocity, $\omega_{\max}$ is the maximum angular velocity, and $a_t(v)$ and $a_t(\omega)$ denote the nominal linear and angular velocity commands to be filtered.
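Because the constraint above is linear in $v$ and does not involve $\omega$, the quadratic program can be solved in closed form by projecting each command onto its feasible interval. The following sketch illustrates this; the robot pose, obstacle position, and nominal action are illustrative assumptions, not the authors' implementation.

import numpy as np

def cbf_filter(p, theta, p_obs, v_ppo, w_ppo,
               r_safe=0.5, alpha=1.0, v_max=0.5, w_max=1.0):
    dp = np.asarray(p, dtype=float) - np.asarray(p_obs, dtype=float)  # Delta p
    i = np.array([np.cos(theta), np.sin(theta)])                      # heading unit vector
    h = float(np.dot(dp, dp)) - r_safe**2                             # barrier value
    c = 2.0 * float(np.dot(dp, i))          # CBF constraint reads c * v >= -alpha * h
    b = alpha * h

    lo, hi = 0.0, v_max                     # actuator limits on v
    if c > 1e-9:
        lo = max(lo, -b / c)                # constraint becomes a lower bound on v
    elif c < -1e-9:
        hi = min(hi, -b / c)                # constraint becomes an upper bound on v
    # If |c| ~ 0, v does not appear in the constraint, which holds whenever h >= 0.

    if lo <= hi:
        v_safe = float(np.clip(v_ppo, lo, hi))   # projection onto an interval = clipping
    else:
        v_safe = 0.0                             # infeasible corner case: stop as a conservative fallback
    w_safe = float(np.clip(w_ppo, -w_max, w_max))  # omega only has to respect its box limit
    return v_safe, w_safe

# Heading straight at an obstacle 0.8 m ahead: the filter reduces the forward command
# from 0.5 to roughly 0.24 while leaving the angular command untouched.
print(cbf_filter(p=[0.0, 0.0], theta=0.0, p_obs=[0.8, 0.0], v_ppo=0.5, w_ppo=0.2))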
Figure 5 shows a simple diagram of the CBF obstacle avoidance process, as well as an example of the impact of different α values on the trajectory. At each step, the CBF combines the current velocity and other state variables to determine the next output that meets the constraint requirements and combines this with the result from the obstacle function calculation to continuously move until it reaches the target position.
A CBF framework for safe robot motion is proposed. First, a safe set is defined through a barrier function, which characterizes the collision-free states. Then, the time derivative of this barrier function is derived from the robot's kinematic model and combined with the class-$\mathcal{K}$ function to obtain a linear inequality in the control variables. Finally, a quadratic program is formulated to minimize the modification of the nominal control command while satisfying the CBF safety constraint and the actuator limits on the linear and angular velocities. The corresponding procedure is shown in Algorithm 2.
Algorithm 2 CBF safety filter.
Input: Robot kinematic model, safety distance threshold $r_{safe}$, maximum velocity $v_{\max}$, maximum angular velocity $\omega_{\max}$, CBF parameter $\alpha$
Output: Optimized control inputs $v^*$, $\omega^*$
1: Initialize robot state $p = [x, y]$, robot orientation $\theta$, velocity $v$, and angular velocity $\omega$
2: Calculate the relative position vector $\Delta p = [x - x_{obs},\ y - y_{obs}]^T$ to the nearest obstacle
3: Calculate the unit vector of the robot's forward direction $i = [\cos\theta,\ \sin\theta]^T$
4: Calculate the barrier function $h(p) = \|\Delta p\|^2 - r_{safe}^2$
5: Calculate the time derivative of the barrier function using (22)
6: Set the initial control inputs $v_0$, $\omega_0$
7: for $k = 0$ to $K$ do
8:   Evaluate the constraint $2v\,(\Delta p \cdot i) \geq -\alpha\left(\|\Delta p\|^2 - r_{safe}^2\right)$, with $i$ and $\Delta p$ as defined in steps 2 and 3
9:   Solve the optimization problem
       $(v, \omega)^* = \arg\min_{v,\,\omega} (v - a_t(v))^2 + (\omega - a_t(\omega))^2$
       s.t. $2v\,(\Delta p \cdot i) \geq -\alpha\left(\|\Delta p\|^2 - r_{safe}^2\right)$, $0 \leq v \leq v_{\max}$, $|\omega| \leq \omega_{\max}$
10:  Update control inputs $v^* \leftarrow v$, $\omega^* \leftarrow \omega$
11:  Update robot state using the kinematic model
12: end for
13: return $v^* \leftarrow v$, $\omega^* \leftarrow \omega$

3.3. Risk-Aware, Dynamic, Adaptive Regulation Barrier Policy Optimization

This section designs a risk-aware, dynamic, adaptive regulation barrier policy optimization (RADAR-BPO) method, which combines the stable exploration efficiency of PPO with the safety guarantees of the CBF. The Actor outputs probabilistic actions, and the Critic evaluates their value and optimizes the Actor based on the feedback obtained from the agent's motion in the environment. In addition, an extra layer of safety is provided by the CBF, which further adjusts the actions output by the PPO Actor, filters out risky actions, and outputs safe actions. This method deeply integrates the exploration capabilities of PPO with the safety of the CBF, so that the robot agent can maximize the value of exploration in different environments. A block diagram of the complete system is shown in Figure 6, and the procedure is shown in Algorithm 3.
Figure 6. Complete RADAR-BPO framework for training to interact with environment.
Algorithm 3 RADAR-BPO.
Input: Initial policy $\theta_0$, value $\phi_0$, safety parameters
Output: Optimized $\theta^*$, $\phi^*$
1: Initialize $\theta \leftarrow \theta_0$, $\phi \leftarrow \phi_0$
2: for iteration $k = 0$ to $K$ do
3:   Data Collection:
4:   Collect trajectory $D_k$ via policy $\pi_\theta$
5:   Policy Optimization:
6:   Compute advantages $\hat{A}_t$ and returns $R_t$ for $D_k$
7:   $\theta_{old} \leftarrow \theta$
8:   Update $\theta$, $\phi$ using the PPO loss with $D_k$
9:   for each state $s_t$ in $D_k$ do
10:    Get PPO action $a_t^{PPO} = (v^{PPO}, \omega^{PPO})$
11:    Compute safety constraint based on robot state
12:    Safety Filtering:
13:    Solve the CBF-QP:
         $(v^*, \omega^*) = \arg\min_{v,\,\omega} \|a - a_t^{PPO}\|^2$  s.t.  CBF constraints
14:    Execute safe action $a_t^{safe} = (v^*, \omega^*)$
15:  end for
16: end for
17: return $\theta^* \leftarrow \theta$, $\phi^* \leftarrow \phi$
In addition, this method fully accounts for the robot's spatial information during obstacle avoidance, including its position, orientation, and distance from obstacles, transforms this information into safety constraints via the CBF, and embeds these constraints in the reinforcement learning decision-making process, thereby achieving safe and efficient navigation in complex dynamic environments.
To achieve an optimal balance between navigation efficiency and safety guarantees, a multi-objective reward function with adaptive components is designed. The mathematical formulation is defined as follows:
$$r_t = r_h + r_d + r_{ob} + r_v,$$
where $r_h$, $r_d$, $r_{ob}$, and $r_v$ are defined below.
The heading reward $r_h$ encourages alignment with the target direction:
$$r_h = 1 + \cos(\theta_{diff}),$$
where $\theta_{diff}$ denotes the angular deviation between the robot's current orientation and the target direction. The distance reward $r_d$ motivates progression toward the target position:
$$r_d = 5\exp\!\left(-\frac{2 d_t}{d_0}\right),$$
where $d_t$ is the current Euclidean distance to the target and $d_0$ is the initial distance.
The obstacle penalty $r_{ob}$ ensures collision risk awareness:
$$r_{ob} = \begin{cases} r_{risk} & \text{if } d_{\min} < r_{safe} \\ r_{normal} & \text{otherwise}, \end{cases}$$
where $d_{\min}$ is the distance to the nearest obstacle.
The velocity reward $r_v$ optimizes the motion efficiency:
$$r_v = \exp\!\left(-\frac{(v_t - v_{des})^2}{2\sigma^2}\right),$$
with an adaptive optimal velocity of $v_{des} = 0.25\min(1.0,\ d_t)$ and $\sigma = 0.1$. This equation encourages a higher velocity when the robot is distant from the target while promoting precision during the final approach.
The reward function integrates the four components in an additive manner. Although this is an equally weighted sum, each term is designed to have a comparable magnitude to prevent any one objective from overly dominating the learning process. For example, $r_h \in [0, 2]$, and $r_d$ is scaled by a fixed factor and decays exponentially with distance. $r_{ob}$ provides a significant but bounded penalty. This design prioritizes simplicity, interpretability, and minimal parameter tuning in the early stages of the algorithm's implementation, resulting in a reliable benchmark. The $r_v$ component is not a constant reward but a term used to fine-tune the efficiency of the movement.
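A compact sketch of this reward is given below, under the assumption that $r_{risk}$ is a negative constant and $r_{normal}$ is zero; the text does not give their numerical values, so the figures used here are placeholders.

import numpy as np

def reward(theta_diff, d_t, d_0, d_min, v_t,
           r_safe=0.5, r_risk=-10.0, r_normal=0.0, sigma=0.1):
    r_h  = 1.0 + np.cos(theta_diff)                   # heading alignment term
    r_d  = 5.0 * np.exp(-2.0 * d_t / d_0)             # progress toward the goal
    r_ob = r_risk if d_min < r_safe else r_normal     # collision-risk penalty
    v_des = 0.25 * min(1.0, d_t)                      # adaptive desired velocity
    r_v  = np.exp(-((v_t - v_des) ** 2) / (2.0 * sigma ** 2))
    return r_h + r_d + r_ob + r_v

# Far from the goal and clear of obstacles vs. the same pose near an obstacle.
print(reward(theta_diff=0.1, d_t=3.0, d_0=4.0, d_min=1.2, v_t=0.25))
print(reward(theta_diff=0.1, d_t=3.0, d_0=4.0, d_min=0.3, v_t=0.25))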
The RADAR-BPO framework establishes a safety-critical reinforcement learning paradigm through integrated policy optimization and real-time safety assurance. During each training iteration, the agent executes interactions with the environment using its current policy network π θ , gathering trajectory data that includes environmental states, exploratory actions, and reward signals. This collected experience forms the foundation for subsequent policy refinement.
Following data collection, the algorithm performs proximal policy optimization using the GAE. This involves calculating the temporal difference errors to estimate the action advantages, then updating both the policy and value networks using a specialized objective function. The optimization balances exploration incentives with value approximation while maintaining training stability through gradient clipping.
During action execution, each policy-generated velocity command undergoes safety verification using a CBF filter. This module solves a quadratic optimization problem that minimizes deviations from the original actions while enforcing collision avoidance constraints. The safety verification incorporates the robot’s positional relationship with obstacles and its current heading orientation to dynamically adjust the motion commands.
The resulting safe actions preserve the learning direction of the policy network while ensuring formal safety guarantees through real-time constraint enforcement. This synergistic integration creates an evolution mechanism where policy improvement and safety assurance mutually reinforce each other throughout the learning process. The system maintains continuous compliance with safety boundaries while progressively refining its navigation strategy through environmental interactions.
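Putting the pieces together, one execution step of the framework can be sketched as follows, reusing the hypothetical cbf_filter() from the sketch in Section 3.2.2; the policy, state layout, and obstacle position are placeholders rather than the authors' interfaces.

import numpy as np

def policy(state):                      # stand-in for the trained PPO Actor
    return 0.5, 0.2                     # (v_PPO, omega_PPO)

state = {"position": np.array([0.0, 0.0]), "heading": 0.0}   # assumed state layout
p_obs = np.array([0.8, 0.0])            # nearest obstacle, e.g. from the lidar scan

v_ppo, w_ppo = policy(state)            # exploratory action proposed by the Actor
v_safe, w_safe = cbf_filter(state["position"], state["heading"], p_obs, v_ppo, w_ppo)
# Only the filtered command is sent to the robot; the stored transition uses this
# executed (safe) action so the policy update stays consistent with what happened.
print((v_ppo, w_ppo), "->", (v_safe, w_safe))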

4. Case Study

To rigorously evaluate the performance and validate the efficacy of the proposed risk-aware, dynamic, adaptive regulation barrier policy optimization (RADAR-BPO) framework for collision risk mitigation in mobile robot navigation, this section presents comprehensive simulation experiments conducted within the ROS Gazebo environment.

4.1. Test Environment Setup

The simulation environments visualized in Figure 7 were constructed within the ROS Gazebo platform to test a robot’s obstacle avoidance navigation. Specifically, Figure 7a presents a dynamic pedestrian environment, Figure 7b demonstrates a multi-obstacle environment, and Figure 7c showcases a complex obstacle environment incorporating both static and dynamic elements.
In Figure 7a, there are two pedestrians walking back and forth at a certain speed. The robot needs to avoid them and reach the target point while the pedestrians are moving. In Figure 7b, there are multiple static cylindrical or cubic obstacles. The robot needs to avoid the static obstacles and successfully navigate to the target point. Figure 7c contains multiple pedestrians and multiple different static obstacles. The robot also needs to complete a navigation task in this complex and changing environment.

4.2. Design of Experimental Test

The motion trajectories of the pedestrians in Figure 7a,c are shown in Table 1, and there are two stationary pedestrians in Figure 7c.
In the Gazebo coordinate system (the positive direction on the y-axis is left and the negative direction is right), pedestrian 1 moves 1 m to the right from their initial position (0.5, 2.5) to (0.5, 1.5), turns on the spot, moves 1 m to the left, returns to their starting point (0.5, 2.5), and stays there. At the same time, pedestrian 2 moves 2 m to the left from their initial position (3.0, 0.0) to (3.0, 2.0), turns on the spot, moves 2 m to the right, returns to their starting point (3.0, 0.0), and stays there. Both pedestrians complete a round trip in the y-axis direction. There are two other stationary pedestrians: one is fixed at (1.0, 0.0), and the other is fixed at (2.0, 3.0). The robot starts from the starting point (0.0, 0.0) and needs to traverse the dynamic environment to reach the target point (3.0, 3.0). Its path will be blocked by moving pedestrians and it will need to avoid stationary pedestrian obstacles and other obstacles.
Table 2 lists the key parameter settings used to train the RADAR-BPO navigation algorithm. These parameters cover aspects such as policy optimization, the safety constraints, the robot’s kinematic model, and the training environment configuration. Some parameter values (such as discount factors and GAE parameters) were within typical ranges or needed to be adjusted according to the specific environment.
The algorithm was trained on each environment for a fixed number of episodes: 150 for Env 1 (dynamic pedestrians), 200 for Env 2 (multiple static obstacles), and 300 for Env 3 (multiple complex obstacles). This design with an increasing environment complexity and training time was intended to mimic curriculum learning, allowing the agent to consolidate foundational skills before tackling more difficult tasks. Each training episode terminated when the agent successfully reached the goal, collided with an obstacle, or reached the maximum step limit.

4.3. Test Results and Analysis

To comprehensively evaluate the performance of the proposed RADAR-BPO framework and quantitatively assess its effectiveness in mitigating collision risks while maintaining navigation efficiency, extensive simulations were conducted across the three distinct environments introduced in Section 4.1 (Figure 7). The evaluation employed a three-stage progressive training paradigm: Stage 1 (Figure 7a) utilized the dynamic pedestrian environment (Env 1), Stage 2 (Figure 7b) progressed to the multi-static obstacle environment (Env 2), and Stage 3 (Figure 7c) took place in the complex hybrid environment containing both static obstacles and dynamic pedestrians (Env 3). This staged approach rigorously tested the algorithm to determine its fundamental obstacle avoidance ability.
The core performance metrics included the learning stability, safety performance, and navigation efficiency. Crucially, the cumulative reward curves from throughout the training process were analyzed and compared against those of the baseline PPO algorithm. This direct comparison highlighted the impact of integrating the real-time CBF safety filter within the RADAR-BPO framework. The corresponding segmented training reward curve is shown in Figure 8.
As shown in Figure 8, during the progressive training process implemented across three stages (Env 1, Env 2, and Env 3), the RADAR-BPO algorithm significantly outperformed all the baseline algorithms (PPO, DQN, and DDPG) in terms of the average reward in all the environments. This overall performance advantage was demonstrated by the following: in the relatively simple Env 1 stage, while all the algorithms initially experienced low rewards, RADAR-BPO quickly escaped this low-reward zone and stabilized at a higher level. Entering the more complex Env 2 stage, RADAR-BPO exhibited a burst of performance growth, with its reward values rapidly exceeding 1000 and stabilizing at approximately 1100. In contrast, PPO slowly climbed to around 400, while the DQN and DDPG lagged significantly behind. In the most complex Env 3 stage, the challenging environment caused performance degradation for all the algorithms. However, RADAR-BPO maintained an absolute advantage, stabilizing at around 760 with a smooth, volatility-resistant curve. PPO fluctuated violently below 400 and showed weak growth. The DQN and DDPG’s performance remained sluggish, failing to improve significantly, and the gap between their performance and RADAR-BPO’s was the largest.
Comparing the two curves at the beginning of the Env 3 phase reveals that PPO’s reward values exhibited a larger upward gradient and oscillation amplitude, while the rise in RADAR-BPO’s values was more stable. This phenomenon reveals the different learning modes of the two algorithms: The PPO strategy almost failed after an environment switch, requiring a painstaking learning process to recover from an extremely high collision rate, resulting in an unstable learning process. In contrast, RADAR-BPO benefited from the real-time safety guarantees provided by the CBF. Its strategy retained its core obstacle avoidance capabilities after an environment switch, and its learning process involved stable fine-tuning based on a higher performance baseline to adapt to new dynamic obstacles. Therefore, RADAR-BPO sacrificed a seemingly larger learning amplitude in exchange for a higher, more stable, and safer final performance.
This result clearly shows that the RADAR-BPO framework integrated with real-time CBF security filtering can not only achieve higher task returns in complex dynamic environments (based on the higher success rate and average reward in Table 3) but also significantly improve the convergence speed of the learning process, the final performance ceiling, and the training stability, effectively solving the core problems of low exploration efficiency, large policy fluctuations, and limited performance of traditional PPO in safety-critical scenarios.
According to the results listed in Table 3, RADAR-BPO outperformed the PPO algorithm in most metrics. In Env 1, RADAR-BPO’s collision rate was 68.67%, lower than PPO’s 82.00%; in Env 3, RADAR-BPO’s collision rate further decreased to 10.67%, compared to PPO’s 30.67%. Furthermore, RADAR-BPO achieved 47, 164, and 268 successes in the three different environments, respectively, all exceeding PPO’s 27, 134, and 208. In terms of the average reward, RADAR-BPO also generally outperformed PPO, with these algorithms achieving 1070.09 and 377.01, respectively, in Env 2. These results demonstrate that RADAR-BPO maintains high task completion efficiency and a high reward yield while reducing the collision rate.
In contrast, the DDPG and DQN algorithms generally underperformed in comparison to RADAR-BPO and PPO. The DDPG’s collision rate was above 75% across all the environments, and its average reward was significantly lower than that of the other algorithms. The DQN had zero successes and a 100% collision rate in Env 1. In Env 2 and Env 3, its collision rates were 55.50% and 68.00%, respectively. While its average reward was higher than that of the DDPG, it was still lower than that of RADAR-BPO and PPO.
To provide an intuitive visualization of the algorithm’s real-time decision-making and safety assurance capabilities in the most challenging scenario, the trajectory evolution of the robot navigating across Env 3 over a critical 9 s interval (from t = 1 s to t = 9 s) is presented and discussed. This sequence illustrates how RADAR-BPO dynamically adjusted the robot’s path to safely avoid both static obstacles and moving pedestrians while progressing towards the target.
In Figure 9, the robot starts from the starting position (0.0, 0.0), initially accelerates to bypass the stationary pedestrian in front, and turns left toward the target direction. When it encounters Pedestrian 1 (moving from y = 2.5 to y = 1.5 at x = 0.5), it makes a sharp turn for emergency avoidance while smoothly keeping clear of the walls and nearby obstacles. Under the combined threat of Pedestrian 2 (returning from y = 2.0 to y = 0.0 at x = 3.0) and dense obstacles, the robot flexibly adjusts its path to squeeze through and finally arrives at the end point (3.0, 3.0) accurately. The entire path maintains the straight-line efficiency of PPO in open areas and forms a smooth, conservative contour under the CBF constraints when approaching obstacles. The collision-free nature of the entire run verifies RADAR-BPO's seamless integration of exploration and safety in dynamic and dense scenes.

5. Conclusions and Future Work

This paper proposes RADAR-BPO (risk-aware, dynamic, adaptive regulation barrier policy optimization), a novel safe reinforcement learning framework integrating PPO with CBF-based safety filters to mitigate collision risks in mobile robot navigation. The framework leverages PPO for exploratory policy generation while employing the CBF as a real-time safety filter, formulated as a quadratic programming problem, to minimally modify risky actions and ensure collision avoidance. Implemented on a mecanum wheel robot within the ROS Gazebo simulation environment, the method demonstrated significant improvements in safety performance across diverse dynamic and complex scenarios. Comparative experiments against baseline PPO, DQN, and DDPG algorithms confirmed that RADAR-BPO achieves higher success rates, lower collision rates, and superior average rewards while maintaining navigation efficiency, highlighting its effectiveness in balancing exploration with formal safety guarantees.
In future work, the framework could be extended to address more complex real-world challenges. Potential directions include validating the approach on physical robotic platforms to assess its real-time performance and robustness under sensor noise and hardware limitations. Additionally, exploring the use of adaptive or learned CBF parameters to handle heterogeneous obstacle shapes and uncertain dynamics, integrating multi-robot collision avoidance scenarios, and extending the method to unstructured outdoor environments would further enhance this approach’s applicability.

Author Contributions

Conceptualization: B.G., G.W., Y.C. and Y.G.; methodology: B.G., G.W., Y.C. and Y.G.; software: B.G., Y.G. and Q.X.; validation: B.G. and Y.C.; formal analysis: G.W.; investigation: B.G. and Y.C.; resources: B.G., G.W., Y.C. and Y.G.; data curation: B.G.; writing—original draft preparation: B.G.; writing—review and editing: Y.C. and Y.G.; visualization: B.G.; supervision: B.G. and G.W.; project administration: Y.C.; funding acquisition: Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grants No. 62103293 and No. 72201186 and the Natural Science Foundation of Jiangsu Province under Grants No. BK20210709 and No. BK20220481.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to express our heartfelt thanks to Jiangsu Eazytec Co., Ltd., for the technical support provided for this paper and to Zhi Xie, Bin Gong, and Xiaodi Yu for their valuable contributions.

Conflicts of Interest

The author Qian Xie was employed by the company Jiangsu Eazytec Co., Ltd. The author Yiyang Chen has received research grants from the company Jiangsu Eazytec Co., Ltd. The funder had the following involvement with the study: providing technical support for test environment implementation and software validation.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial intelligence
IoT: Internet of Things
RL: Reinforcement learning
DQN: Deep Q-Network
DDPG: Deep deterministic policy gradient
CBF: Control barrier function
APF: Artificial potential field
MPC: Model predictive control
PPO: Proximal policy optimization
MDP: Markov decision process
GAE: Generalized advantage estimation
RADAR-BPO: Risk-aware, dynamic, adaptive regulation barrier policy optimization

References

  1. Ahmed, I.; Jeon, G.; Piccialli, F. From artificial intelligence to explainable artificial intelligence in industry 4.0: A survey on what, how, and where. IEEE Trans. Ind. Inform. 2022, 18, 5031–5042. [Google Scholar] [CrossRef]
  2. Ge, S.; Xie, Y.; Liu, K.; Ding, Z.; Hu, E.; Chen, L.; Wang, F.Y. The use of intelligent vehicles and artificial intelligence in mining operations: Ethics, responsibility, and sustainability. IEEE Trans. Intell. Veh. 2023, 8, 1021–1024. [Google Scholar] [CrossRef]
  3. Aoki, S.; Yonezawa, T.; Kawaguchi, N. RobotNEST: Toward a viable testbed for IoT-enabled environments and connected and autonomous robots. IEEE Sens. Lett. 2022, 6, 6000304. [Google Scholar] [CrossRef]
  4. Sharma, N.; Pandey, J.K.; Mondal, S. A review of mobile robots: Applications and future prospect. Int. J. Precis. Eng. Manuf. 2023, 24, 1695–1706. [Google Scholar] [CrossRef]
  5. Srisuchinnawong, A.; Phongaksorn, K.; Ausrivong, W.; Manoonpong, P. Adaptive bipedal robot walking on industrial pipes under neural multimodal locomotion control: Toward robotic out-pipe inspection. IEEE/ASME Trans. Mechatron. 2023, 29, 1205–1216. [Google Scholar] [CrossRef]
  6. Chen, Y.; Zhang, F.; Wang, G.; Weng, G.; Fontanelli, D. An Active Contour Model Based on Fuzzy Superpixel Centers and Nonlinear Diffusion Filter for Instance Segmentation. IEEE Trans. Instrum. Meas. 2025, 74, 5035013. [Google Scholar] [CrossRef]
  7. Li, Y.; Huang, H. Efficient task planning for heterogeneous AGVs in warehouses. IEEE Trans. Intell. Transp. Syst. 2024, 25, 10005–10019. [Google Scholar] [CrossRef]
  8. Ju, C.; Son, H.I. Modeling and control of heterogeneous agricultural field robots based on Ramadge–Wonham theory. IEEE Robot. Autom. Lett. 2019, 5, 48–55. [Google Scholar] [CrossRef]
  9. Guo, B.; Guo, N.; Cen, Z. Obstacle avoidance with dynamic avoidance risk region for mobile robots in dynamic environments. IEEE Robot. Autom. Lett. 2022, 7, 5850–5857. [Google Scholar] [CrossRef]
  10. Li, X.; Xu, Z.; Su, Z.; Wang, H.; Li, S. Distance-and velocity-based simultaneous obstacle avoidance and target tracking for multiple wheeled mobile robots. IEEE Trans. Intell. Transp. Syst. 2023, 25, 1736–1748. [Google Scholar] [CrossRef]
  11. Guan, R.; Hu, G. Formation tracking of mobile robots under obstacles using only an active RGB-D camera. IEEE Trans. Ind. Electron. 2023, 71, 4049–4058. [Google Scholar] [CrossRef]
  12. Liu, W.; Hu, J.; Zhang, H.; Wang, M.Y.; Xiong, Z. A novel graph-based motion planner of multi-mobile robot systems with formation and obstacle constraints. IEEE Trans. Robot. 2023, 40, 714–728. [Google Scholar] [CrossRef]
  13. Loizou, S.G.; Rimon, E.D. Mobile robot navigation functions tuned by sensor readings in partially known environments. IEEE Robot. Autom. Lett. 2022, 7, 3803–3810. [Google Scholar] [CrossRef]
  14. Cui, Y.; Zhang, Y.; Zhang, C.H.; Yang, S.X. Task cognition and planning for service robots. Intell. Robot. 2025, 5, 119–142. [Google Scholar]
  15. Hoang, V.B.; Nguyen, V.H.; Ngo, T.D.; Truong, X.-T. Socially aware robot navigation framework: Where and how to approach people in dynamic social environments. IEEE Trans. Autom. Sci. Eng. 2022, 20, 1322–1336. [Google Scholar] [CrossRef]
  16. Gao, Y.; Huang, C.M. Evaluation of socially-aware robot navigation. Front. Robot. AI 2022, 8, 721317. [Google Scholar] [CrossRef]
  17. Wang, W.; Mao, L.; Wang, R.; Min, B.C. Multi-robot cooperative socially-aware navigation using multi-agent reinforcement learning. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13–17 May 2024; pp. 12353–12360. [Google Scholar]
  18. Mavrogiannis, C.; Baldini, F.; Wang, A.; Zhao, D.; Trautman, P.; Steinfeld, A.; Oh, J. Core challenges of social robot navigation: A survey. ACM Trans. Hum.-Robot. Interact. 2023, 12, 36. [Google Scholar] [CrossRef]
  19. Mello, R.C.; Scheidegger, W.M.; Múnera, M.C.; Cifuentes, C.A.; Ribeiro, M.R.; Frizera-Neto, A. The PoundCloud framework for ROS-based cloud robotics: Case studies on autonomous navigation and human–robot interaction. Robot. Auton. Syst. 2022, 150, 103981. [Google Scholar] [CrossRef]
  20. López, B.; Muñoz, J.; Quevedo, F.; Monje, C.A.; Garrido, S.; Moreno, L.E. Path planning and collision risk management strategy for multi-UAV systems in 3D environments. Sensors 2021, 21, 4414. [Google Scholar] [CrossRef]
  21. Arrouch, I.; Ahmad, N.S.; Goh, P.; Mohamad-Saleh, J. Close proximity time-to-collision prediction for autonomous robot navigation: An exponential GPR approach. Alex. Eng. J. 2022, 61, 11171–11183. [Google Scholar] [CrossRef]
  22. Cui, S.; Chen, Y.; Li, X. A robust and efficient UAV path planning approach for tracking agile targets in complex environments. Machines 2022, 10, 931. [Google Scholar] [CrossRef]
  23. Sun, Y.; Zhu, H.; Liang, Z.; Liu, A.; Ni, H.; Wang, Y. A phase search-enhanced Bi-RRT path planning algorithm for mobile robots. Intell. Robot. 2025, 5, 404–418. [Google Scholar]
  24. Szczepanski, R. Safe Artificial Potential Field - Novel Local Path Planning Algorithm Maintaining Safe Distance From Obstacles. IEEE Robot. Autom. Lett. 2023, 8, 4823–4830. [Google Scholar] [CrossRef]
  25. Lee, D.H.; Lee, S.S.; Ahn, C.K.; Shi, P.; Lim, C.C. Finite Distribution Estimation-Based Dynamic Window Approach to Reliable Obstacle Avoidance of Mobile Robot. IEEE Trans. Ind. Electron. 2021, 68, 9998–10006. [Google Scholar] [CrossRef]
  26. Pan, H.; Luo, M.; Wang, J.; Huang, T.; Sun, W. A safe motion planning and reliable control framework for autonomous vehicles. IEEE Trans. Intell. Veh. 2024, 9, 4780–4793. [Google Scholar] [CrossRef]
  27. Lobos-Tsunekawa, K.; Leiva, F.; Ruiz-del Solar, J. Visual navigation for biped humanoid robots using deep reinforcement learning. IEEE Robot. Autom. Lett. 2018, 3, 3247–3254. [Google Scholar] [CrossRef]
  28. Chen, Y.; Wang, Y.; Freeman, C.T. Iterative Learning Control of Minimum Energy Path Following Tasks for Second-Order MIMO Systems: An Indirect Reference Update Framework. IEEE Trans. Cybern. 2025, 55, 3403–3416. [Google Scholar] [CrossRef]
  29. Cui, J.; Wu, L.; Huang, X.; Xu, D.; Liu, C.; Xiao, W. Multi-strategy adaptable ant colony optimization algorithm and its application in robot path planning. Knowl.-Based Syst. 2024, 288, 111459. [Google Scholar] [CrossRef]
  30. Xiao, X.; Liu, B.; Warnell, G.; Stone, P. Motion planning and control for mobile robot navigation using machine learning: A survey. Auton. Robot. 2022, 46, 569–597. [Google Scholar] [CrossRef]
  31. Mackay, A.K.; Riazuelo, L.; Montano, L. RL-DOVS: Reinforcement learning for autonomous robot navigation in dynamic environments. Sensors 2022, 22, 3847. [Google Scholar] [CrossRef]
  32. Cao, H.; Xiong, H.; Zeng, W.; Jiang, H.; Cai, Z.; Hu, L.; Zhang, L.; Lu, W. Safe reinforcement learning-based motion planning for functional mobile robots suffering uncontrollable mobile robots. IEEE Trans. Intell. Transp. Syst. 2023, 25, 4346–4363. [Google Scholar] [CrossRef]
  33. Wang, G.; Li, Z.; Weng, G.; Chen, Y. An overview of industrial image segmentation using deep learning models. Intell. Robot. 2025, 5, 143–180. [Google Scholar] [CrossRef]
  34. Yang, H.; Yao, C.; Liu, C.; Chen, Q. Rmrl: Robot navigation in crowd environments with risk map-based deep reinforcement learning. IEEE Robot. Autom. Lett. 2023, 8, 7930–7937. [Google Scholar] [CrossRef]
  35. Cheng, C.; Duan, S.; He, H.; Li, X.; Chen, Y. A generalized robot navigation analysis platform (RoNAP) with visual results using multiple navigation algorithms. Sensors 2022, 22, 9036. [Google Scholar] [CrossRef]
  36. Cheng, C.; Zhang, H.; Sun, Y.; Tao, H.; Chen, Y. A cross-platform deep reinforcement learning model for autonomous navigation without global information in different scenes. Control Eng. Pract. 2024, 150, 105991. [Google Scholar] [CrossRef]
  37. Zhou, Z.; Zhu, P.; Zeng, Z.; Xiao, J.; Lu, H.; Zhou, Z. Robot navigation in a crowd by integrating deep reinforcement learning and online planning. Appl. Intell. 2022, 52, 15600–15616. [Google Scholar] [CrossRef]
  38. Li, B.; Huang, Z.; Chen, T.W.; Dai, T.; Zang, Y.; Xie, W.; Tian, B.; Cai, K. MSN: Mapless short-range navigation based on time critical deep reinforcement learning. IEEE Trans. Intell. Transp. Syst. 2022, 24, 8628–8637. [Google Scholar] [CrossRef]
  39. Ames, A.D.; Xu, X.; Grizzle, J.W.; Tabuada, P. Control barrier function based quadratic programs for safety critical systems. IEEE Trans. Autom. Control 2016, 62, 3861–3876. [Google Scholar] [CrossRef]
  40. Ferraguti, F.; Landi, C.T.; Singletary, A.; Lin, H.C.; Ames, A.; Secchi, C.; Bonfe, M. Safety and efficiency in robotics: The control barrier functions approach. IEEE Robot. Autom. Mag. 2022, 29, 139–151. [Google Scholar] [CrossRef]
  41. Wang, X. Ensuring safety of learning-based motion planners using control barrier functions. IEEE Robot. Autom. Lett. 2022, 7, 4773–4780. [Google Scholar] [CrossRef]
  42. Guo, B.; Sun, Y.; Chen, Y. Safe path planning of mobile robot based on improved particle swarm optimization. Trans. Inst. Meas. Control 2025, 47, 1715–1724. [Google Scholar] [CrossRef]
  43. Ames, A.D.; Coogan, S.; Egerstedt, M.; Notomista, G.; Sreenath, K.; Tabuada, P. Control barrier functions: Theory and applications. In Proceedings of the 2019 18th European Control Conference (ECC), Naples, Italy, 25–28 June 2019; pp. 3420–3431. [Google Scholar]
  44. Huang, Z.; Lan, W.; Yu, X. A formal control framework of autonomous vehicle for signal temporal logic tasks and obstacle avoidance. IEEE Trans. Intell. Veh. 2023, 9, 1930–1940. [Google Scholar] [CrossRef]
  45. Singletary, A.; Klingebiel, K.; Bourne, J.; Browning, A.; Tokumaru, P.; Ames, A. Comparative analysis of control barrier functions and artificial potential fields for obstacle avoidance. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 8129–8136. [Google Scholar]
  46. Jian, Z.; Yan, Z.; Lei, X.; Lu, Z.; Lan, B.; Wang, X.; Liang, B. Dynamic control barrier function-based model predictive control to safety-critical obstacle-avoidance of mobile robot. arXiv 2022, arXiv:2209.08539. [Google Scholar]
Figure 1. Kinematic model of mecanum wheel robot.
Figure 2. Wheel odometry model of robot.
Figure 3. State space safety region visualization.
Figure 4. Training robot agent to interact with environment.
Figure 5. CBF example: (a) Sequential points showing the robot’s navigation path through an environment containing obstacles. (b) Comparative analysis of the trajectory variations under different α parameter values.
Figure 7. Simulation in three different environments.
Figure 8. Reward curve resulting from three-stage training process.
Figure 9. Trajectory of robot in complex hybrid environment.
Table 1. Trajectory coordinates of pedestrians in environment.
Time (s) | Pedestrian 1 x (m) | Pedestrian 1 y (m) | Pedestrian 2 x (m) | Pedestrian 2 y (m)
0.0 | 0.5 | 2.5 | 3.0 | 0.0
3.0 | 0.5 | 1.5 | 3.0 | 2.0
3.5 | 0.5 | 1.5 | 3.0 | 2.0
5.5 | 0.5 | 2.5 | - | -
6.0 | 0.5 | 2.5 | - | -
6.5 | - | - | 3.0 | 0.0
7.0 | - | - | 3.0 | 0.0
Table 2. Training parameter settings for RADAR-BPO.
Category | Parameter | Symbol | Value/Range
Policy Optimization | Discount factor | $\gamma$ | 0.99
Policy Optimization | GAE parameter | $\lambda$ | 0.95
Policy Optimization | Clipping threshold | $\epsilon$ | 0.15
Policy Optimization | Value coefficient | $c_1$ | 0.5
Policy Optimization | Entropy coefficient | $c_2$ | 0.01
Policy Optimization | Policy iterations | $K$ | [150, 200, 300]
Policy Optimization | Linear velocity | $v_{\max}$ | [0, 0.5]
Policy Optimization | Angular velocity | $\omega_{\max}$ | [−1.0, 1.0]
Safety Constraints | Safety distance | $r_{safe}$ | 0.5
Safety Constraints | Max linear velocity | $v_{\max}$ | 0.5
Safety Constraints | Max angular velocity | $\omega_{\max}$ | 1.0
Safety Constraints | Barrier parameter | $\alpha$ | 1.0
Robot Kinematics | Half of body length | $W_a$ | 0.195
Robot Kinematics | Half of body width | $W_b$ | 0.172
Robot Kinematics | Characteristic length | $d = W_a + W_b$ | 0.367
Training Setup | 2D lidar angle range | – | [0°, 360°]
Training Setup | Scanning range | – | [0.08, 10]
Table 3. Algorithm performance comparison.
Environment | Algorithm | Success Count | Collision Count | Average Reward | Collision Rate (%)
Env 1 | PPO | 27 | 123 | 357.23 | 82.00
Env 1 | RADAR-BPO | 47 | 103 | 421.58 | 68.67
Env 1 | DDPG | 17 | 133 | 13.67 | 88.67
Env 1 | DQN | 0 | 150 | 271.50 | 100.00
Env 2 | PPO | 134 | 66 | 377.01 | 33.00
Env 2 | RADAR-BPO | 164 | 36 | 1070.09 | 18.00
Env 2 | DDPG | 25 | 175 | 58.02 | 87.50
Env 2 | DQN | 89 | 111 | 273.47 | 55.50
Env 3 | PPO | 208 | 92 | 399.38 | 30.67
Env 3 | RADAR-BPO | 268 | 32 | 762.66 | 10.67
Env 3 | DDPG | 73 | 227 | 19.45 | 75.67
Env 3 | DQN | 96 | 204 | 269.22 | 68.00

