Article

Obstacle Feature Information-Based Motion Decision-Making Method for Obstacle-Crossing Motions in Lower Limb Exoskeleton Robots

1 School of Sino-German Robotics, Shenzhen Institute of Information Technology, Shenzhen 518172, China
2 Inovance Industrial Robot Reliability Technology Research Institute, Shenzhen Institute of Information Technology, Shenzhen 518172, China
3 Guangdong Key Laboratory of Electromagnetic Control and Intelligent Robots, Shenzhen University, Shenzhen 518060, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(5), 311; https://doi.org/10.3390/biomimetics10050311
Submission received: 15 April 2025 / Revised: 7 May 2025 / Accepted: 9 May 2025 / Published: 12 May 2025
(This article belongs to the Special Issue Advanced Service Robots: Exoskeleton Robots 2025)

Abstract

To overcome the problem of insufficient adaptability of lower limb exoskeleton robots to their motion environment, this paper introduces computer vision technology into the motion control of lower limb exoskeleton robots and studies an obstacle-crossing-motion method based on detecting obstacle feature information. Considering the feature information of different obstacles and the distance between obstacles and robots, a trajectory planning method based on direct collocation was used to generate offline adjusted gait trajectory libraries and obstacle-crossing gait trajectory libraries. An obstacle-crossing motion decision-making algorithm based on obstacle feature information is proposed by combining gait constraints and motion constraints, enabling the robot to select appropriate motion trajectories from the trajectory library. The proposed obstacle-crossing-motion method was validated at three distances between the obstacle and the robot and with the feature information of four obstacles. The experimental results show that the proposed method can select appropriate trajectories from the trajectory library based on the detected obstacle feature information and can safely complete obstacle-crossing motions.

1. Introduction

The brain of an able-bodied individual can perceive the movement environment and the body's state through the visual system and make corresponding decisions and motor behaviors based on environmental factors that affect movement [1,2]. However, people with lower limb disabilities cannot execute their brain's intentions smoothly and require external assistive devices to complete the desired actions [3,4]. Existing research suggests that lower limb exoskeleton robots could perform desired human movements through brain–computer interfaces [5,6,7,8]. However, EEG signals are extremely weak [9], and applying brain–computer interfaces to the motion control of lower limb exoskeleton robots remains challenging [10,11]. Computer vision technology has significant advantages in perceiving the motion environment of lower limb exoskeleton robots [12,13,14,15]. By using depth cameras or visual sensors [16,17,18,19,20,21] to accurately obtain obstacle feature information and distances, and converting them into executable instructions for the robot controller, the robot can make sound motion decisions based on the detected obstacle information, achieving safe obstacle avoidance and preventing collisions and falls [22].
At present, there are few reports on computer vision-based obstacle avoidance for lower limb exoskeleton robots. Laschowski et al. [14] designed an environment recognition system based on computer vision and deep learning. In this system, an optical camera worn on the chest records environmental data, which are integrated into the controller of the lower limb exoskeleton robot; the robot's assistive motions are achieved by combining these data with feature data such as IMU and goniometer readings. Liu et al. [17] developed a vision-assisted autonomous gait planning method to improve the adaptability of lower limb exoskeleton robots to their motion environment. They obtained environmental information through an RGB-D camera and extracted ground object information that affects gait. Based on the environmental information, the robot status, and safety constraints, they planned gait patterns that allow the lower limb exoskeleton robot to cross obstacles. Hua et al. [18] proposed a visual information-based obstacle-clearance strategy to enhance the exoskeleton system's ability to navigate obstacles. Their study designed a hybrid bounding box strategy that combines proximity regression on L-shaped sections with synchronous convergence in a convex hull search to accurately obtain the cross-sectional, attitude, and size characteristics of multiple obstacles in space. It also established a switching criterion based on the principle of instantaneous capture point transfer and inflection-point-guided interpolation to achieve humanoid behavior in the gait optimization process, realize nonlinear model predictive control, and avoid obstacles in the environment. Ramanathan et al. [23] studied vision-based environmental perception technology aimed at enabling lower limb exoskeleton robots to detect and cross obstacles. Their study proposed a similarity measurement method based on color, gradient direction, and two-dimensional surface normal vectors to distinguish noise caused by obstacle perception errors, and used obstacle depth image data combined with inertial measurement unit data to obtain obstacle distances and sizes. The results indicate that the proposed visual environment perception technology can enable lower limb exoskeleton robots to understand the environment and cross obstacles. Trombin et al. [24] proposed a new environment-adaptive gait planning method to solve the obstacle-avoidance problem of lower limb exoskeleton robots walking in complex environments. Their study used an RGB-D camera combined with a random sample consensus algorithm to identify the ground, obstacles, and their relative positions. The authors used forward kinematics to calculate the pose of the robot in the environment and then used a Gaussian iterative search algorithm to obtain gait parameters satisfying kinematic and safety constraints, thereby generating collision-free foot trajectories. However, this study only considered single-step planning, not multi-step or continuous walking.
Some researchers have integrated two or more sensors to improve the environmental perception ability of lower limb exoskeleton robots. Wang et al. [25] developed a multi-sensor unit that integrates three wide-angle cameras and an RGB-D sensor, providing precise environmental perception capabilities for exoskeletons. Their study models the environment with a language-extended indoor SLAM, generates high-precision topographic maps by fusing RGB-D data, and analyzes terrain accessibility. The experimental results show that the proposed method can effectively evaluate terrain accessibility, with an F-score of 0.93. Wu et al. [26] integrated HoloLens and RealSense devices to construct a perception and interaction platform based on mixed reality and stereo vision and proposed a visual gait planning system for lower limb exoskeleton robots on complex terrains. Their study used Bézier curves for path planning and heuristic search algorithms to find the optimal gait path within the restricted area. The results indicate that the proposed method can plan smooth and feasible gait paths on complex terrains. Tricomi et al. [27] studied hip exoskeleton-assisted movements using computer vision technology. An RGB camera captures the type of environment ahead (stairs or flat ground), an adaptive oscillator estimates the user's gait phase from IMU data, and a modulation strategy adjusts the assistance level of each leg to adapt to environmental changes. Although the control strategy proposed in that study performed well in experiments, its applicability needs further verification.
Although the above research achieved obstacle crossing for lower limb exoskeleton robots, it did not consider adjustments of the gait amplitude and decision-making for obstacle crossing for different obstacles and distances. This is important for improving the environmental adaptability of lower limb exoskeleton robots. In addition, although some studies have integrated multiple visual auxiliary sensors to aid in environmental perception, the operation of multiple sensors requires high computing power, which poses a challenge to the computing resources of lower limb exoskeleton robot systems.
The challenge of obstacle-crossing motions for lower limb exoskeleton robots involves making scientific decisions about movements based on obstacle feature information to ensure safe navigation. To tackle this issue, this paper integrates computer vision technology with the motion control of lower limb exoskeleton robots, establishing two offline gait trajectory libraries through trajectory planning methods. An obstacle-crossing-motion decision-making algorithm, which depends on obstacle feature information, combines motion and gait constraints within human–robot systems. The proposed algorithm selects appropriate robot motion trajectories from the trajectory library based on the detected obstacle features to promote safe obstacle-crossing motions.

2. Method of Obstacle-Crossing Motions

2.1. Acquisition of Obstacle Feature Information

To accurately obtain obstacle information in the motion environment of lower limb exoskeleton robots, the depth camera must be calibrated to obtain its internal parameter matrix [28]:

$$K = \begin{bmatrix} f/d_x & 0 & n_{xo} \\ 0 & f/d_y & n_{yo} \\ 0 & 0 & 1 \end{bmatrix}$$
where $f$ is the focal length of the depth camera, and $d_x$, $d_y$, $n_{xo}$, and $n_{yo}$ are the pixel dimensions of the image in the x and y directions and the coordinates of the image center in the x and y directions, respectively.
The two-dimensional environmental depth map obtained by the depth camera is converted into a three-dimensional point cloud to facilitate the recognition of environmental information. $n_x$ and $n_y$ are the pixel coordinates along the X-axis and Y-axis of the image coordinate system, and $(P_X, P_Y, P_Z)$ are the coordinates of the target point, where $P_Z$ is the depth value of the target point along the z-axis of the depth camera. The coordinates of the target point are therefore as follows [17]:

$$\begin{bmatrix} P_X \\ P_Y \\ P_Z \end{bmatrix} = P_Z \begin{bmatrix} f/d_x & 0 & n_{xo} \\ 0 & f/d_y & n_{yo} \\ 0 & 0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} n_x \\ n_y \\ 1 \end{bmatrix}$$
The coordinate values of object surface sampling points obtained through depth camera scanning in the world coordinate system are used to obtain the feature information of obstacles in the world coordinate system.
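As a minimal sketch of Equation (2), the snippet below back-projects one depth pixel into a 3D camera-frame point. The intrinsic values are placeholders; real values come from the calibration step above.

```python
import numpy as np

fx, fy = 615.0, 615.0          # f/d_x, f/d_y (placeholder intrinsics)
cx, cy = 320.0, 240.0          # image center n_xo, n_yo (placeholders)
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def pixel_to_point(nx, ny, Pz):
    """Back-project pixel (nx, ny) with depth Pz to (P_X, P_Y, P_Z), Eq. (2)."""
    return Pz * np.linalg.inv(K) @ np.array([nx, ny, 1.0])

print(pixel_to_point(400, 300, 1.2))   # e.g. a point 1.2 m from the camera
```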
Before conducting obstacle detection, the first step is to obtain ground information. The ground is defined as an ideal plane, which can be represented in a three-dimensional space as
$$Xx + Yy + Zz = P$$
where $(x, y, z)$ is a point on the spatially fitted plane; $(X, Y, Z)$ is the unit normal vector of the plane; and $P$ is the distance from the coordinate origin to the plane.
The key to extracting ground information from the three-dimensional point cloud data is to determine $(X, Y, Z)$ and $P$. If the point cloud dataset is defined as $D_O = \{(x_i, y_i, z_i) \mid i = 1, \ldots, N\}$, then the points on the plane need to satisfy

$$\begin{bmatrix} D_O & \mathbf{1} \end{bmatrix} \begin{bmatrix} X & Y & Z & -P \end{bmatrix}^{\mathsf{T}} = 0$$
Using the random sampling consensus algorithm to fit the plane, the ground model is determined as follows:
$$X_0 x + Y_0 y + Z_0 z = P_0$$
After obtaining the ground model, the ground information is removed from the point cloud data obtained by the depth camera, and the point cloud of obstacles is obtained. Then, the distance from each point to the plane is obtained by calculating Equation (6).
$$d_i = \frac{\left| X_0 x + Y_0 y + Z_0 z - P_0 \right|}{\sqrt{X_0^2 + Y_0^2 + Z_0^2}}$$
The distance set $D_S = \{d_i\}$ from all obstacle points to the plane is obtained, and the minimum bounding box that envelops the obstacle in the point cloud is then calculated from these distances. The length, width, and height of the obstacle are computed from the 8 vertices of this minimum box.
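As a concrete illustration of this pipeline, the sketch below fits the ground plane with RANSAC, removes it, and reads the obstacle dimensions from a minimum bounding box. It assumes the Open3D library and a hypothetical input file `scene.pcd`; the paper does not name its point-cloud software, so this is one possible realization rather than the authors' implementation.

```python
import numpy as np
import open3d as o3d

# Load a captured point cloud (hypothetical file name).
pcd = o3d.io.read_point_cloud("scene.pcd")

# RANSAC plane fit, Eq. (5): plane_model = [X0, Y0, Z0, -P0],
# with the normal (X0, Y0, Z0) returned as a unit vector.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                         ransac_n=3,
                                         num_iterations=1000)
X0, Y0, Z0, neg_P0 = plane_model

# Remove ground points; what remains is treated as the obstacle.
obstacle = pcd.select_by_index(inliers, invert=True)

# Point-to-plane distances, Eq. (6) (denominator is 1 for a unit normal).
pts = np.asarray(obstacle.points)
d = np.abs(pts @ np.array([X0, Y0, Z0]) + neg_P0)

# Minimum (oriented) bounding box: its 8 vertices give the
# obstacle's length, width, and height.
box = obstacle.get_oriented_bounding_box()
length, width, height = box.extent
print(length, width, height)
```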

2.2. Gait Parameters Based on Obstacle Feature Information

Based on the obstacle information detected by the depth camera, the length, width, and height of the obstacle are obtained, as well as the distance between the obstacle and the lower limb exoskeleton robot, as shown in Equation (7).
$$R = \varphi(I, x)$$
where $I$ is the depth image information obtained by the depth camera; $x$ contains status information of the lower limb exoskeleton robot; and $R$ includes the obstacle feature information $h_o$, $l_o$, and $w_o$ detected by the depth camera, and the distance $d_{or}$ between the obstacle and the lower limb exoskeleton robot.
Then, combined with the motion constraints of the lower limb exoskeleton robot, the required step length and step height for the lower limb exoskeleton robot to cross obstacles are obtained, as shown in Equation (8).
$$(h_a, d_a) = \xi\left(R(h_o, l_o, w_o, d_{or}),\, C\right)$$
where $h_a$ and $d_a$ are the step height and step length when crossing obstacles, respectively. $\xi(\cdot)$ represents obtaining the appropriate step height and step length to cross the obstacle based on the feature information $R$ of the obstacle and the motion constraint $C$ of the lower limb exoskeleton robot.

2.3. Gait Trajectory Planning Based on Obstacle Feature Information

  • Description of gait trajectory planning problem
Based on the periodicity of gait motions in lower limb exoskeleton robots, the dynamic system is constrained to a single support phase and simplified as a single-segment continuous trajectory optimization problem, with the system dynamics constrained as follows:
$$\dot{x} = f(x, \tau_a)$$
where $x$ is the state variable, and $\tau_a$ is the actual input torque.
The impact mapping of the end of the swinging leg colliding with the ground at the end of the gait is constrained as follows:
$$x(0) = \Delta\left(x(t_F)\right)$$
where $t_F$ is the end time of a single gait.
The movements of lower limb exoskeleton robots cannot violate the physiological constraints of human joints. The knee joint movement angle is constrained as

$$\theta_{t1} - \theta_{t2} \geq \theta_{K\max}$$
where $\theta_{K\max}$ is the protective margin for excessive extension of the knee joint.
While walking, the human body has a slight inclination angle between the upper body and torso to ensure balance. The constraint on the forward and backward inclination angle of the torso is defined as
$$\theta_{t3}(t) \leq \theta_{to\text{-}f}, \qquad \theta_{t4}(t) \leq \theta_{to\text{-}b}$$
where $\theta_{to\text{-}b}$ and $\theta_{to\text{-}f}$ are the backward and forward inclination constraints of the torso, respectively.
To ensure that the motion trajectory is tracked by the motor, the upper limit of the input torque during the optimization process is determined based on the peak torque parameters of the joint motor of the lower limb exoskeleton robot, as shown in Equation (13).
$$-\tau_{a\max} \leq \tau_a(t) \leq \tau_{a\max}$$
The constraint between the gait cycle time and step length can be expressed as
$$P_E(t_F) = \begin{bmatrix} d_a & 0 & 0 \end{bmatrix}^{\mathsf{T}}$$
where $P_E(t_F)$ is the position of the end of the swinging leg at the end of the gait cycle.
During gait movement, the end of the swinging leg should be higher than the ground. To avoid conflicts with the constraint that the end height of the swinging leg at the beginning and end of gait is zero, the ground clearance constraint is defined as
$$P_{E\text{-}v}(t) \geq H\left(P_{E\text{-}h}(t)\right)$$
where $H$ is the ground clearance constraint function, and $P_{E\text{-}v}(t)$ and $P_{E\text{-}h}(t)$ are the vertical and horizontal components of the swinging leg end position $P_E$, respectively.
The constraint between the height of the swinging leg end and the gait cycle can be expressed as
$$h(t) = h_a - \frac{h_a \left(t - \frac{t_{end}}{2}\right)^2}{\left(\frac{t_{end}}{2}\right)^2}$$

where $h(t)$ is the minimum height of the swinging leg end at each time instant; the height constraint of the swinging leg end is $h(t) - h_{op} < 0$, where $h_{op}$ is the swinging-leg-end height trajectory to be optimized; $h_a$ is the expected leg-lift height; and $t_{end}$ is the end time of a gait cycle.
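The height profile of Equation (16) is easy to check numerically: the parabola below is zero at the start and end of the gait cycle and peaks at $h_a$ at mid-cycle (the values of $h_a$ and $t_{end}$ are illustrative).

```python
def swing_height(t, h_a, t_end):
    """Minimum swing-foot height h(t) at time t in [0, t_end], Eq. (16)."""
    half = t_end / 2.0
    return h_a - h_a * (t - half) ** 2 / half ** 2

t_end, h_a = 1.6, 0.15
for t in (0.0, t_end / 2, t_end):
    print(round(swing_height(t, h_a, t_end), 3))   # 0.0, 0.15, 0.0
```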
The objective function is defined as the time integral of the quadratic function of the input torque:
$$J = \int_0^{T} \tau_a^{\mathsf{T}} Z \tau_a \, dt$$
where $Z$ is the weighting matrix of the quadratic form.
The purpose of the objective function is to prevent locally excessive torque. The quadratic form uses a positive definite matrix; if a semi-positive definite matrix were used, the optimized input torque would sit at its upper limit. A positive definite matrix smooths the input torque and state variable curves, reducing the error of approximating the state trajectory with smooth curves [29].
The above constraints and objective function are summarized as a trajectory optimization problem:
$$\begin{aligned} \min_{t_0,\, t_F,\, x(t),\, \tau_a(t)} \quad & J = \int_0^{T} \tau_a^{\mathsf{T}} Z \tau_a \, dt \\ \text{s.t.} \quad & \dot{x} = f(x, \tau_a) \\ & x(0) = \Delta\left(x(t_F)\right) \\ & \theta_{t1}(t) - \theta_{t2}(t) \geq \theta_{K\max} \\ & \theta_{t3}(t) \leq \theta_{to\text{-}f}, \quad \theta_{t4}(t) \leq \theta_{to\text{-}b} \\ & -\tau_{a\max} \leq \tau_a(t) \leq \tau_{a\max} \\ & P_{E,k}(t_F) = \begin{bmatrix} d_a & 0 & 0 \end{bmatrix}^{\mathsf{T}} \\ & h(t) = h_a - h_a \left(t - \tfrac{t_{end}}{2}\right)^2 \big/ \left(\tfrac{t_{end}}{2}\right)^2 \\ & h(t) - h_{op} < 0 \\ & P_{E\text{-}v,k}(t) \geq H\left(P_{E\text{-}h,k}(t)\right) \end{aligned}$$
  • Offline gait trajectory planning
Trajectory planning is an optimal control problem. The direct collocation method discretizes the continuous optimal control process, takes the resulting coefficients as new variables, and then uses modern optimization tools to solve the resulting nonlinear optimization problem [30,31].
The continuous state and control variables are approximated by interpolation over discrete points. The target optimization trajectory is divided into N interpolation segments (each with a starting point, midpoint, and endpoint); the endpoint of each segment coincides with the starting point of the next, so there are 2N + 1 sampling points, as shown in Equation (19).
$$t_0, \ldots, t_k, t_{k+\frac{1}{2}}, t_{k+1}, \ldots, t_N$$
The time for each interpolation segment is
$$\zeta_k = t_{k+1} - t_k$$
Similarly, the state variables and control variables are also divided into 2N + 1 sampling points:
$$x_0, \ldots, x_k, x_{k+\frac{1}{2}}, x_{k+1}, \ldots, x_N \qquad \tau_0, \ldots, \tau_k, \tau_{k+\frac{1}{2}}, \tau_{k+1}, \ldots, \tau_N$$
Sampling points are used to define interpolation functions that approximate the original function. The integral of a quadratic interpolation function can be represented by the function values at three points (starting point, midpoint, and endpoint) of the integration interval. Assume a quadratic function $\gamma(t)$:

$$\gamma(t) = a + bt + ct^2$$
where a, b, and c are constants.
The starting point, midpoint, and endpoint of the quadratic interpolation function in the interval 0 , ζ can be expressed as
$$\gamma(0) = a, \qquad \gamma\!\left(\tfrac{\zeta}{2}\right) = a + \tfrac{\zeta}{2} b + \tfrac{\zeta^2}{4} c, \qquad \gamma(\zeta) = a + \zeta b + \zeta^2 c$$
Therefore, from (23),
$$a = \gamma(0), \qquad b\zeta = -3\gamma(0) + 4\gamma\!\left(\tfrac{\zeta}{2}\right) - \gamma(\zeta), \qquad c\zeta^2 = 2\gamma(0) - 4\gamma\!\left(\tfrac{\zeta}{2}\right) + 2\gamma(\zeta)$$
The integral $U$ of the quadratic function $\gamma(t)$ on the interval $[0, \zeta]$ can be expressed as

$$U = \int_0^{\zeta} \left(a + bt + ct^2\right) dt = a\zeta + \frac{1}{2} b\zeta^2 + \frac{1}{3} c\zeta^3 = \frac{\zeta}{6}\left[\gamma(0) + 4\gamma\!\left(\tfrac{\zeta}{2}\right) + \gamma(\zeta)\right]$$
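This identity is exactly Simpson's rule, which integrates quadratics without error. The following quick check confirms Equation (25) for arbitrary coefficients:

```python
import numpy as np

# Arbitrary quadratic gamma(t) = a + b t + c t^2 on [0, zeta].
a, b, c, zeta = 1.0, -2.0, 3.0, 0.4
gamma = lambda t: a + b * t + c * t ** 2

exact = a * zeta + b * zeta ** 2 / 2 + c * zeta ** 3 / 3
simpson = zeta / 6 * (gamma(0) + 4 * gamma(zeta / 2) + gamma(zeta))
print(exact, simpson)                 # identical up to rounding
assert np.isclose(exact, simpson)
```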
According to (25), the integral of the derivative of the system state between two nodes in a single interpolation segment can be expressed as
$$x_{k+1} - x_k = \int_{t_k}^{t_{k+1}} \dot{x} \, dt = \int_{t_k}^{t_{k+1}} f(x, \tau_a) \, dt$$
where $f(\cdot)$ represents the system dynamic equation, and $x_k$ represents the system state at the $k$-th node.
Equation (26) can be written as
$$x_{k+1} - x_k = \frac{\zeta_k}{6}\left[f(x_k, \tau_{a,k}) + 4 f\!\left(x_{k+\frac{1}{2}}, \tau_{a,k+\frac{1}{2}}\right) + f(x_{k+1}, \tau_{a,k+1})\right]$$
The left side of Equation (27) represents the difference between the starting and ending states of a single segment, while the right side represents the estimated integral of the state derivative using its values at three points within the segment. In the nonlinear optimization, this is imposed as an equality constraint. The midpoint variable $x_{k+\frac{1}{2}}$ to be optimized in Equation (27) satisfies the following constraint with the starting and ending points of the segment:
$$x_{k+\frac{1}{2}} = \frac{1}{2}\left(x_k + x_{k+1}\right) + \frac{\zeta_k}{8}\left[f(x_k, \tau_{a,k}) - f(x_{k+1}, \tau_{a,k+1})\right]$$
Equations (27) and (28) are system dynamic constraints based on discrete variables.
A similar approach can be applied to the objective function, whose integrand can be expressed as

$$l(t) = \tau_a(t)^{\mathsf{T}} Z \tau_a(t)$$
The integral within the gait cycle can be expressed as
$$\int_{t_0}^{t_F} l(t) \, dt \approx \sum_{k=0}^{N-1} \frac{\zeta_k}{6}\left(l_k + 4 l_{k+\frac{1}{2}} + l_{k+1}\right)$$
Therefore, the objective function can be expressed as
$$J = \sum_{k=0}^{N-1} \frac{\zeta_k}{6}\left(l_k + 4 l_{k+\frac{1}{2}} + l_{k+1}\right)$$
Ultimately, the nonlinear optimization problem of the transformed trajectory planning can be summarized as
$$\begin{aligned} \min_{t_0,\, t_F,\, x(t),\, \tau_a(t)} \quad & J = \sum_{k=0}^{N-1} \frac{\zeta_k}{6}\left(l_k + 4 l_{k+\frac{1}{2}} + l_{k+1}\right) \\ \text{s.t.} \quad & x_{k+1} - x_k = \frac{\zeta_k}{6}\left[f(x_k, \tau_{a,k}) + 4 f\!\left(x_{k+\frac{1}{2}}, \tau_{a,k+\frac{1}{2}}\right) + f(x_{k+1}, \tau_{a,k+1})\right] \\ & \dot{x} = f(x, \tau_a) \\ & x(0) = \Delta\left(x(t_F)\right) \\ & \theta_{t1}(t) - \theta_{t2}(t) \geq \theta_{K\max} \\ & \theta_{t3}(t) \leq \theta_{to\text{-}f}, \quad \theta_{t4}(t) \leq \theta_{to\text{-}b} \\ & -\tau_{a\max} \leq \tau_{a1,a2,a3,a4,k}(t) \leq \tau_{a\max} \\ & P_{E,k}(t_F) = \begin{bmatrix} d_a & 0 & 0 \end{bmatrix}^{\mathsf{T}} \\ & h(t) = h_a - h_a \left(t - \tfrac{t_{end}}{2}\right)^2 \big/ \left(\tfrac{t_{end}}{2}\right)^2 \\ & h(t) - h_{op} < 0 \\ & P_{E\text{-}v,k}(t) \geq H\left(P_{E\text{-}h,k}(t)\right) \end{aligned}$$
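To make the structure of this NLP concrete, the sketch below solves a toy instance for a single torque-driven joint with double-integrator dynamics, using the Simpson defects of Equation (27), the midpoint condition of Equation (28), and the quadrature cost of Equation (31) as SLSQP constraints. The 1-DOF model, boundary conditions, and torque bounds are illustrative assumptions, not the paper's exoskeleton model.

```python
import numpy as np
from scipy.optimize import minimize

N = 10                        # interpolation segments -> 2N + 1 nodes
M = 2 * N + 1
T = 1.0                       # fixed cycle time for the toy problem
zeta = T / N                  # uniform segment duration

def f(x, tau):
    """Toy dynamics: double integrator (angle, angular velocity)."""
    return np.array([x[1], tau])

def unpack(z):
    x = z[:2 * M].reshape(M, 2)   # states at all nodes, midpoints included
    tau = z[2 * M:]               # torque at all nodes
    return x, tau

def defects(z):
    """Equality constraints: Eqs. (27), (28) plus boundary conditions."""
    x, tau = unpack(z)
    g = []
    for k in range(0, M - 2, 2):  # nodes k, k+1 (midpoint), k+2 per segment
        fk  = f(x[k],     tau[k])
        fm  = f(x[k + 1], tau[k + 1])
        fk1 = f(x[k + 2], tau[k + 2])
        g.append(x[k + 2] - x[k] - zeta / 6 * (fk + 4 * fm + fk1))        # Eq. (27)
        g.append(x[k + 1] - (x[k] + x[k + 2]) / 2 - zeta / 8 * (fk - fk1))  # Eq. (28)
    g.append(x[0] - np.array([0.0, 0.0]))      # start at rest at 0 rad
    g.append(x[-1] - np.array([0.5, 0.0]))     # end at rest at 0.5 rad
    return np.concatenate(g)

def cost(z):
    """Simpson quadrature of tau^2 over the cycle, Eq. (31)."""
    _, tau = unpack(z)
    l = tau ** 2
    return sum(zeta / 6 * (l[k] + 4 * l[k + 1] + l[k + 2])
               for k in range(0, M - 2, 2))

z0 = np.zeros(3 * M)
bounds = [(None, None)] * (2 * M) + [(-5.0, 5.0)] * M   # torque limits only
res = minimize(cost, z0, method='SLSQP', bounds=bounds,
               constraints={'type': 'eq', 'fun': defects})
print(res.success, res.fun)
```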
The optimized discrete state variables and control variables are interpolated to obtain the desired continuous trajectory. The local time variables within the interpolation segment are specified as
$$\kappa = t - t_k$$
Therefore, the interpolation reconstruction of the control trajectory is represented as
$$\tau_a(t) = \frac{2}{\zeta_k^2}\left[\tau_{a,k}\left(\kappa - \frac{\zeta_k}{2}\right)\left(\kappa - \zeta_k\right) - 2\,\tau_{a,k+\frac{1}{2}}\,\kappa\left(\kappa - \zeta_k\right) + \tau_{a,k+1}\,\kappa\left(\kappa - \frac{\zeta_k}{2}\right)\right]$$
The dynamic equation is reconstructed as
$$\dot{x}(t) = f\left(x(t), \tau_a(t)\right) = \frac{2}{\zeta_k^2}\left[f(x_k, \tau_{a,k})\left(\kappa - \frac{\zeta_k}{2}\right)\left(\kappa - \zeta_k\right) - 2 f\!\left(x_{k+\frac{1}{2}}, \tau_{a,k+\frac{1}{2}}\right)\kappa\left(\kappa - \zeta_k\right) + f(x_{k+1}, \tau_{a,k+1})\,\kappa\left(\kappa - \frac{\zeta_k}{2}\right)\right]$$
By integrating the dynamic equations, the state variables are reconstructed as
$$x(t) = x_k + \int_0^{\kappa} \dot{x} \, d\kappa = x_k + \zeta_k\left[f(x_k, \tau_{a,k})\frac{\kappa}{\zeta_k} + \frac{1}{2}\left(-3 f(x_k, \tau_{a,k}) + 4 f\!\left(x_{k+\frac{1}{2}}, \tau_{a,k+\frac{1}{2}}\right) - f(x_{k+1}, \tau_{a,k+1})\right)\left(\frac{\kappa}{\zeta_k}\right)^2 + \frac{1}{3}\left(2 f(x_k, \tau_{a,k}) - 4 f\!\left(x_{k+\frac{1}{2}}, \tau_{a,k+\frac{1}{2}}\right) + 2 f(x_{k+1}, \tau_{a,k+1})\right)\left(\frac{\kappa}{\zeta_k}\right)^3\right]$$
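A small sketch of this reconstruction follows, with a consistency check that evaluating the cubic of Equation (36) at $\kappa = \zeta_k$ reproduces the Simpson update of Equation (27). The node derivatives `f0`, `fm`, `f1` stand for $f(x_k, \tau_{a,k})$, $f(x_{k+\frac{1}{2}}, \tau_{a,k+\frac{1}{2}})$, and $f(x_{k+1}, \tau_{a,k+1})$ and are arbitrary here.

```python
import numpy as np

def reconstruct_state(x_k, f0, fm, f1, zeta_k, kappa):
    """Evaluate the cubic state interpolant of Eq. (36) at local time kappa."""
    s = kappa / zeta_k                       # normalized local time
    return (x_k + zeta_k * (f0 * s
            + 0.5 * (-3 * f0 + 4 * fm - f1) * s ** 2
            + (2 * f0 - 4 * fm + 2 * f1) * s ** 3 / 3))

# Consistency check against the Simpson update, Eq. (27):
x_k = np.array([0.0, 0.0])
f0, fm, f1 = np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.0])
zeta_k = 0.2
x_next = reconstruct_state(x_k, f0, fm, f1, zeta_k, zeta_k)
print(np.allclose(x_next, x_k + zeta_k / 6 * (f0 + 4 * fm + f1)))  # True
```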
According to the nonlinear optimization results, the optimized motion trajectory can be obtained by setting the expected step length, step height, and cycle time. The gait and motion constraint parameters defined for gait trajectory optimization are shown in Table 1.
During the assisted-movement process of lower limb exoskeleton robots, the feature information of obstacles (size information, such as the height $h$ and length $d_o$ of the obstacle) and the distance between the obstacle and the lower limb exoskeleton robot are detected through depth cameras. The lower limb exoskeleton robot control system then calculates the leg-lift height $h_a$ and step length $d_a$ required to cross the obstacle from the obstacle size information, as shown in Equation (37):

$$h_a = h + \Delta h, \qquad d_a = d_1 + d_o + d_2$$
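A minimal sketch of Equation (37); the margin values for $\Delta h$, $d_1$, and $d_2$ below are placeholders (in practice they are customized from the experimental constraints in Table 1).

```python
def crossing_gait_parameters(h_obstacle, d_obstacle,
                             delta_h=0.05, d1=0.10, d2=0.08):
    """Eq. (37): step height/length for a safe crossing (placeholder margins)."""
    h_a = h_obstacle + delta_h        # leg-lift height with safety margin
    d_a = d1 + d_obstacle + d2        # clearance before + over + after
    return h_a, d_a

# e.g. obstacle 2 in Table 2 (height 0.03 m, length 0.11 m)
print(crossing_gait_parameters(0.03, 0.11))
```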
In obstacle-avoidance movements based on obstacle detection information, normal gait movements can leave the robot too close to or too far from an obstacle. This paper uses optimization methods, over a set of distances between obstacles and lower limb exoskeleton robots and a set of obstacle feature information, to generate an offline adjusted gait trajectory library and an obstacle-crossing gait trajectory library containing multiple gait parameters.
(1)
Adjusted gait trajectory: when the distance between the lower limb exoskeleton robot and the obstacle cannot achieve safe crossing within one gait cycle, the step length needs to be adjusted appropriately when approaching the obstacle.
(2)
Obstacle-crossing gait trajectory: when the lower limb exoskeleton robot reaches a safe crossing position after adjusting its gait trajectory, it selects an appropriate obstacle-crossing gait trajectory from the trajectory library based on the detected obstacle information.
It should be mentioned that when generating the trajectory library offline, the motion trajectory of the lower limb exoskeleton robot crossing obstacles considers the dynamics of the human–robot system. Therefore, the gait trajectory selected in both trajectory libraries is the optimal trajectory that can safely cross obstacles.

2.4. Obstacle-Crossing Motion Decision-Making Algorithm Based on Obstacle Feature Information

The lower limb exoskeleton robot’s obstacle-crossing-motion control framework based on obstacle feature information is shown in Figure 1.
The lower limb exoskeleton robot mainly focuses on two tasks when crossing obstacles: (1) accurately detect obstacle feature information that affects assisted motions in the environment, and (2) select an appropriate motion trajectory based on the detected obstacle feature information to achieve safe and accurate obstacle-crossing motion. To achieve safe obstacle-crossing movements of lower limb exoskeleton robots, it is necessary to construct a decision-making algorithm for obstacle-crossing movements of lower limb exoskeleton robots based on obstacle feature information, as shown in Algorithm 1. The obstacle-crossing motion decision-making algorithm selects the appropriate gait trajectory from two trajectory libraries based on gait constraints, motion constraints, obstacle feature information, and the distance between the obstacle and the lower limb exoskeleton robot, allowing the lower limb exoskeleton robot to safely cross obstacles.
Algorithm 1. Decision-making algorithm for obstacle-crossing motions of lower limb exoskeleton robot
  • Input: Gait information: step length $d_a$, step height $h_a$, step time $T$;
  •    Safety constraints: protective margin for excessive extension of the knee joint $\theta_{K\max}$; constraints on the forward and backward inclination angles of the torso, $\theta_{to\text{-}f}$ and $\theta_{to\text{-}b}$;
  •    Other parameters: $d_s$, $d_1$, $d_2$, and $d_f$.
  • Output: The gait trajectory executed in each motion cycle.
  • Initialize: Enter the initial gait information, i.e., a normal gait trajectory with a step length of $d_a$, a step height of $h_a$, and a gait cycle of $T$.
  • Repeat
  •    If no obstacle is detected in the motion trajectory space, execute the normal motion trajectory based on the initial gait information;
  •    Else obtain the obstacle feature information and the distance between the obstacle and the lower limb exoskeleton robot, and execute the obstacle-crossing-motion decision-making:
  •     If the distance $d_c$ between the obstacle and the lower limb exoskeleton robot satisfies $d_a < d_c < 1.5 d_a$, the robot controller selects, based on the detected distance, a trajectory from the adjusted gait trajectory library with a step length less than $d_a$ but greater than $0.5 d_a$ and executes it for one motion cycle.
  •     If the height $d$ of the obstacle is less than or equal to the set threshold $d_s$ ($d \leq d_s$), the lower limb exoskeleton robot executes the normal motion trajectory;
  •     Else
  •      If $d > d_s$, the depth camera detects the spatial feature information of the obstacle: $h$, $d_o$, and $d_c$.
  •      If $d_c$ is greater than the set threshold $d_1$ and $d_c - d_a - d_f > 0$, execute the normal trajectory;
  •       Else calculate the step height $h_a$ and step length $d_a$ ($h_a = h + \Delta h$; $d_a = d_1 + d_o + d_2$) for safely crossing the obstacle based on the obstacle feature information. The robot controller selects the appropriate trajectory from the obstacle-crossing gait trajectory library based on the calculated $h_a$ and $d_a$ to complete the crossing motion, and then executes the normal motion trajectory.
  •       End if
  •      End if
  •     End if
  • Until gait movement ends
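The sketch below is a compact Python rendering of Algorithm 1's branching logic. The trajectory libraries are represented as plain dictionaries keyed by rounded gait parameters; this keying scheme and the helper names are illustrative assumptions, not the paper's implementation.

```python
def decide(obstacle, d_a, d_s, d1, d2, d_f, delta_h,
           adjusted_lib, crossing_lib):
    """Return (mode, trajectory) for the current gait cycle (Algorithm 1 sketch).

    obstacle is None or a dict with keys 'height', 'length', 'distance';
    adjusted_lib / crossing_lib are hypothetical lookup tables.
    """
    if obstacle is None:
        return "normal", "normal_trajectory"
    h, d_o, d_c = obstacle["height"], obstacle["length"], obstacle["distance"]
    # transition: shorten the step to reach a safe crossing position
    if d_a < d_c < 1.5 * d_a:
        step = max(0.5 * d_a, d_c - d_a)          # lies in (0.5*d_a, d_a)
        return "transition", adjusted_lib[round(step, 2)]
    if h <= d_s:                                  # low enough to step over normally
        return "normal", "normal_trajectory"
    if d_c > d1 and d_c - d_a - d_f > 0:          # obstacle still far away
        return "normal", "normal_trajectory"
    h_a = h + delta_h                             # Eq. (37)
    d_cross = d1 + d_o + d2
    return "crossing", crossing_lib[(round(h_a, 2), round(d_cross, 2))]
```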
The obstacle-crossing-motion decision-making algorithm includes three gait motion modes: normal motions, transition motions, and obstacle-crossing motions. These modes are used for switching the motion trajectory of the lower limb exoskeleton robot to ensure motion safety. The three gait motion modes are as follows:
(1)
Normal motions: the lower limb exoskeleton robot maintains a fixed step length and leg lift height in its motion trajectory.
(2)
Transition motions: the gait distance is adjusted based on the detected distance between the lower limb exoskeleton robot and the obstacle, so that the robot moves to an appropriate position, ensuring the safety of obstacle-crossing motions.
(3)
Obstacle-crossing motions: the lower limb exoskeleton robot detects obstacle feature information through the depth camera, adjusts its distance through transition motions, selects an appropriate crossing motion trajectory, and safely crosses obstacles.

3. Experimental Section

This experiment was conducted with the assistance of three volunteers, including one wearing a lower limb exoskeleton robot, one assisting beside the wearer to prevent falls, and one responsible for the start and stop switches of the lower limb exoskeleton robot. Volunteers wore lower limb exoskeleton robots integrated with depth cameras to conduct obstacle-crossing-motion experiments, as shown in Figure 2.
The depth camera was installed in the middle of an aluminum bracket through a threaded hole, and the aluminum bracket was fixed on both sides of the waist link of the lower limb exoskeleton robot. The vertical height of the depth camera to the ground was 0.96 m, and the horizontal distance from the lower limb exoskeleton robot’s waist link was 0.24 m. The depth camera communicated with a high-performance computer through a USB transmission cable and transmitted information to the controller of the lower limb exoskeleton robot.

3.1. Experimental Design for Different Obstacle Distances

To verify the proposed obstacle feature information-based obstacle-crossing-motion decision-making algorithm for the lower limb exoskeleton robot, experiments were conducted at three distances between the obstacle and the robot. The experimental design was as follows:
(1)
To ensure that the obstacle feature information detected by the depth camera was the same, the lower limb exoskeleton robot was set to cross the same obstacles.
(2)
The motion step length of the lower limb exoskeleton robot was 0.28 m, and the initial distance between the obstacle and the robot was set to be greater than two step lengths. The step time was set to 1.6 s.
(3)
Due to measurement errors between obstacles and robots, the measured distance was rounded to two decimal places. The distance between the obstacle and the robot was set to 1.26 m, 1.14 m, and 1.21 m in the three experiments.
(4)
To ensure that there was no sudden change in position during movements, the starting state of each gait trajectory was consistent with the ending state of the previous trajectory. This was taken into account when generating both types of offline trajectory libraries.

3.2. Experimental Design of Different Obstacle Feature Information

Four types of obstacles with different feature information were designed as objects for the lower limb exoskeleton robot to cross in order to verify the algorithm proposed in this paper. The actual and measured sizes of the obstacles are shown in Table 2. Table 2 shows that the measured sizes of the four obstacles were larger than the actual sizes. This is because, when depth cameras detect obstacle feature information, the computer vision algorithm uses 3D point cloud filtering and downsampling to generate and display bounding boxes that surround the contours of obstacles, resulting in measurement errors. It should be noted that this paper focuses on trajectory planning and obstacle-crossing motions based on obstacle feature information; the small measurement error caused by obstacle detection is therefore within an acceptable range. The 3D point cloud was projected onto the 2D image to mark bounding boxes on the original image, and the feature information of the detected obstacles, including height, width, and length, was calculated. At the same time, the depth sensor embedded in the depth camera detected the distance between the lower limb exoskeleton robot and the obstacle.

4. Results and Discussion

4.1. Experimental Results for Motion Decision-Making for Different Obstacle Distances

The actual distance between different obstacles and the lower limb exoskeleton robot was set as described above, and the distance information was detected by the depth camera and transmitted to the robot controller. Based on the decision-making algorithm, the controller selected an appropriate gait trajectory from the adjusted gait trajectory library according to the detected distance to complete the gait motion adjustment. The experimental results at the three obstacle distances are shown in Figures 3–5, which show the motion trajectory curves of the hip and knee joints at obstacle distances of 1.26 m, 1.14 m, and 1.21 m, respectively.
From the experimental results at different distances between the lower limb exoskeleton robot and obstacles, it can be seen that, with the proposed decision-making algorithm, the robot can select appropriate gait trajectories and adjust its movements based on distance information. Based on the obstacle distance detected by the depth camera, the lower limb exoskeleton robot executes a normal gait trajectory while the obstacle is still far away. When the robot approaches an obstacle and determines that continuing the normal trajectory would cause a collision, it switches to the adjusted trajectory. The controller selects an appropriate gait adjustment trajectory from the trajectory library; the hip and knee joint angles of this trajectory decrease, and the corresponding step length of the robot decreases accordingly. The hip and knee joint angles of the gait adjustment trajectories at different obstacle distances are shown in Table 3. After executing the gait adjustment trajectory, the robot approaches the obstacle and switches to the obstacle-crossing gait trajectory in the next gait. At this point, the controller selects an appropriate trajectory from the obstacle-crossing gait trajectory library based on the obstacle feature information to perform the obstacle-crossing motion.

4.2. Obstacle-Crossing Experiment with Different Obstacle Feature Information

Volunteers wore the lower limb exoskeleton robot for four obstacle-crossing experiments, and the joint motion angles are shown in Figures 6–9. From Figures 6–9, it can be seen that, for different obstacles, the lower limb exoskeleton robot selects appropriate gait trajectories from the obstacle-crossing gait trajectory library based on the obstacle feature information collected by the depth camera to complete safe obstacle-crossing movements. The hip and knee joint angles of the robot differ between the crossing motions for different obstacles, which is related to the gait trajectory corresponding to the characteristic parameters of each obstacle. When the height and length of the obstacle increase, the robot chooses an appropriate motion trajectory based on the motion and gait constraints to ensure safe crossing, and the corresponding hip and knee joint motion angles are greater than those of a normal gait trajectory. When the obstacle height is small but its length is large, the hip joint angle of the crossing gait trajectory increases substantially, while the knee joint angle increases only slightly.
Figure 10 shows the spatial trajectory curves of the step length and step height at the end of the swinging leg of the lower limb exoskeleton robot when crossing obstacles 1, 2, 3, and 4. From Figure 10, it can be seen that the spatial trajectory of the swinging leg end when crossing obstacles follows an approximately sinusoidal path, which is consistent with the characteristics of foot spatial trajectories during human gait. When crossing obstacles, the spatial trajectory of the swinging leg end changes accordingly, and the step length and step height are adjusted correspondingly. Table 4 lists the specific gait parameter values, such as step height and step length, when the swinging leg end crosses obstacles with different feature information.
By analyzing the joint motion angles and the corresponding changes in the spatial trajectory curves when crossing different obstacles (Figures 3–10), it can be observed that the proposed obstacle-crossing decision-making algorithm based on obstacle detection information is effective. Through this algorithm, an appropriate adjusted gait trajectory and crossing gait trajectory can be selected based on the distance between the obstacle and the robot and on the obstacle feature information, ensuring safe obstacle-crossing movements of the lower limb exoskeleton robot and improving its environmental adaptability.

4.3. Limitations and Future Work

This paper introduces computer vision technology into lower limb exoskeleton robots. Through gait trajectory planning methods, two gait trajectory libraries were generated offline based on different obstacle feature information and different distances between obstacles and lower limb exoskeleton robots. During motion, computer vision technology detects obstacle features and distances, and appropriate trajectories are selected from the trajectory library; this is therefore an offline motion planning method.
In future work, online gait trajectory planning based on environmental information will be considered. By using the feature information of obstacles and the distance information between obstacles and lower limb exoskeleton robots, scientific obstacle-crossing motion trajectories will be generated in real time to drive lower limb exoskeleton robots to achieve obstacle-crossing movements. This approach is more in line with daily motion environments and usage needs. At the same time, experiments will also be considered in darker environments and with different obstacles, people, walking speeds, and pathological gaits, and the adjustment ability of lower limb exoskeleton robots will be evaluated.
However, online gait trajectory planning for lower limb exoskeleton robots faces more complex environments and obstacles with complex appearance features, which places higher demands on obstacle detection algorithms and on the computer hardware used for model computation.

5. Conclusions

This paper combines computer vision technology with lower limb exoskeleton robots to study a method of obstacle crossing for lower limb exoskeleton robots. Depth cameras obtain the feature information of obstacles in the motion environment and the distance between the robot and obstacles. A gait planning method based on the direct collocation method generates an offline adjusted gait trajectory library and an obstacle-crossing gait trajectory library containing multiple sets of gait parameter information. A lower limb exoskeleton robot obstacle-crossing-motion decision-making algorithm based on obstacle feature information was proposed by combining gait and motion constraints. Appropriate motion trajectories are selected from the trajectory library based on the detected obstacle feature information to achieve safe obstacle-crossing motions for lower limb exoskeleton robots. Obstacle-crossing experiments were conducted at three distances between obstacles and the robot, as well as with the feature information of four obstacles. The experimental results show that the proposed method enables the lower limb exoskeleton robot to select appropriate trajectories from the trajectory library, adjust its gait movements, and safely cross obstacles. The step heights and step lengths of the four motion trajectories when crossing the four obstacles were 0.22 m and 0.34 m, 0.14 m and 0.31 m, 0.15 m and 0.40 m, and 0.19 m and 0.33 m, respectively.

Author Contributions

Conceptualization, Y.Z. and G.C.; methodology, Y.Z. and G.C.; validation, Y.Z.; formal analysis, Y.Z., J.W., B.G. and G.C.; investigation, Y.Z.; resources, Y.Z. and H.W.; data curation, Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, Y.Z.; visualization, Y.Z.; supervision, L.X., C.L. and G.C.; project administration, B.G., L.X., G.C. and H.W.; funding acquisition, Y.Z., J.W. and L.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Projects of Shenzhen Institute of Information Technology under grants SZIIT2025KJ021 and SZIIT2025KJ022; the Open Fund of the Guangdong Key Laboratory of Electromagnetic Control and Intelligent Robot, Shenzhen University, under grant ECIR2024001; and, in part, by the National Natural Science Foundation of China under grant 92467204.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gallivan, J.P.; Chapman, C.S.; Wolpert, D.M.; Flanagan, J.R. Decision-Making in Sensorimotor Control. Nat. Rev. Neurosci. 2018, 19, 519–534. [Google Scholar] [CrossRef] [PubMed]
  2. Tang, C.; Xu, Z.; Occhipinti, E.; Yi, W.; Xu, M.; Kumar, S.; Virk, G.S.; Gao, S.; Occhipinti, L.G. From Brain to Movement: Wearables-Based Motion Intention Prediction across the Human Nervous System. Nano Energy 2023, 115, 108712. [Google Scholar] [CrossRef]
  3. Plaza, A.; Hernandez, M.; Puyuelo, G.; Garces, E.; Garcia, E. Lower-Limb Medical and Rehabilitation Exoskeletons: A Review of the Current Designs. IEEE Rev. Biomed. Eng. 2023, 16, 278–291. [Google Scholar] [CrossRef]
  4. Zhang, Y.P.; Cao, G.Z.; Li, L.L.; Diao, D.F. Interactive Control of Lower Limb Exoskeleton Robots: A Review. IEEE Sens. J. 2024, 24, 5759–5784. [Google Scholar] [CrossRef]
  5. Asanza, V.; Peláez, E.; Loayza, F.; Lorente-Leyva, L.L.; Peluffo-Ordóñez, D.H. Identification of Lower-Limb Motor Tasks via Brain–Computer Interfaces: A Topical Overview. Sensors 2022, 22, 2028. [Google Scholar] [CrossRef]
  6. Ferrero, L.; Soriano-Segura, P.; Navarro, J.; Jones, O.; Ortiz, M.; Iáñez, E.; Azorín, J.M.; Contreras-Vidal, J.L. Brain–Machine Interface Based on Deep Learning to Control Asynchronously a Lower-Limb Robotic Exoskeleton: A Case-of-Study. J. Neuroeng. Rehabil. 2024, 21, 48. [Google Scholar] [CrossRef] [PubMed]
  7. Lin, C.J.; Sie, T.Y. Integration of Virtual Reality-Enhanced Motor Imagery and Brain-Computer Interface for a Lower-Limb Rehabilitation Exoskeleton Robot. Actuators 2024, 13, 244. [Google Scholar] [CrossRef]
  8. Ortiz, M.; Iáñez, E.; Contreras-Vidal, J.L.; Azorín, J.M. Analysis of the EEG Rhythms Based on the Empirical Mode Decomposition During Motor Imagery When Using a Lower-Limb Exoskeleton. A Case Study. Front. Neurorobot. 2020, 14, 48. [Google Scholar] [CrossRef]
  9. Huang, X.; Xu, Y.; Hua, J.; Yi, W.; Yin, H.; Hu, R.; Wang, S. A Review on Signal Processing Approaches to Reduce Calibration Time in EEG-Based Brain–Computer Interface. Front. Neurosci. 2021, 15, 733546. [Google Scholar] [CrossRef]
  10. Gu, X.; Cao, Z.; Jolfaei, A.; Xu, P.; Wu, D.; Jung, T.P.; Lin, C.T. EEG-Based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and Their Applications. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 18, 1645–1666. [Google Scholar] [CrossRef]
  11. Mridha, M.F.; Das, S.C.; Kabir, M.M.; Lima, A.A.; Islam, M.R.; Watanobe, Y. Brain-Computer Interface: Advancement and Challenges. Sensors 2021, 21, 5746. [Google Scholar] [CrossRef] [PubMed]
  12. Wang, C.; Pei, Z.; Fan, Y.; Qiu, S.; Tang, Z. Review of Vision-Based Environmental Perception for Lower-Limb Exoskeleton Robots. Biomimetics 2024, 9, 254. [Google Scholar] [CrossRef] [PubMed]
  13. Khalili, M.; Ozgoli, S. Environment Recognition for Controlling Lower-Limb Exoskeletons, by Computer Vision and Deep Learning Algorithm. In Proceedings of the 2022 8th International Conference on Control, Instrumentation and Automation (ICCIA 2022), Tehran, Iran, 2–3 March 2022; pp. 1–5. [Google Scholar]
  14. Laschowski, B.; McNally, W.; Wong, A.; McPhee, J. Computer Vision and Deep Learning for Environment-Adaptive Control of Robotic Lower-Limb Exoskeletons. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Guadalajara, Mexico, 1–5 November 2021; pp. 4631–4635. [Google Scholar]
  15. Ren, B.; Luo, X.; Wang, Y.; Chen, J. A Gait Trajectory Control Scheme through Successive Approximation Based on Radial Basis Function Neural Networks for the Lower Limb Exoskeleton Robot. J. Comput. Inf. Sci. Eng. 2020, 20, 31008. [Google Scholar] [CrossRef]
  16. Bao, W.; Villarreal, D.; Chiao, J.C. Vision-Based Autonomous Walking in a Lower-Limb Powered Exoskeleton. In Proceedings of the IEEE 20th International Conference on Bioinformatics and Bioengineering (BIBE 2020), Cincinnati, OH, USA, 26–28 October 2020; pp. 830–834. [Google Scholar]
  17. Liu, D.X.; Xu, J.; Chen, C.; Long, X.; Tao, D.; Wu, X. Vision-Assisted Autonomous Lower-Limb Exoskeleton Robot. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 3759–3770. [Google Scholar] [CrossRef]
  18. Hua, Y.; Zhang, H.; Li, Y.; Zhao, J.; Zhu, Y. Vision Assisted Control of Lower Extremity Exoskeleton for Obstacle Avoidance With Dynamic Constraint Based Piecewise Nonlinear MPC. IEEE Robot. Autom. Lett. 2022, 7, 12267–12274. [Google Scholar] [CrossRef]
  19. Sharifi, M.; Mehr, J.K.; Mushahwar, V.K.; Tavakoli, M. Autonomous Locomotion Trajectory Shaping and Nonlinear Control for Lower Limb Exoskeletons. IEEE/ASME Trans. Mechatron. 2022, 27, 645–655. [Google Scholar] [CrossRef]
  20. Karacan, K.; Meyer, J.T.; Bozma, H.I.; Gassert, R.; Samur, E. An Environment Recognition and Parameterization System for Shared-Control of a Powered Lower-Limb Exoskeleton. In Proceedings of the 2020 8th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), New York, NY, USA, 29 November–1 December 2020; pp. 623–628. [Google Scholar]
  21. Luo, J.; Zhou, X.; Zeng, C.; Jiang, Y.; Qi, W.; Xiang, K.; Pang, M.; Tang, B. Robotics Perception and Control: Key Technologies and Applications. Micromachines 2024, 15, 531. [Google Scholar] [CrossRef]
  22. Mohamad, H.; Ozgoli, S. Online Gait Generator for Lower Limb Exoskeleton Robots: Suitable for Level Ground, Slopes, Stairs, and Obstacle Avoidance. Robot. Auton. Syst. 2023, 160, 104319. [Google Scholar] [CrossRef]
  23. Ramanathan, M.; Luo, L.; Er, J.K.; Foo, M.J.; Chiam, C.H.; Li, L.; Yau, W.Y.; Ang, W.T. Visual Environment Perception for Obstacle Detection and Crossing of Lower-Limb Exoskeletons. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 12267–12274. [Google Scholar]
  24. Trombin, E.; Tortora, S.; Menegatti, E.; Tonin, L. Environment-Adaptive Gait Planning for Obstacle Avoidance in Lower-Limb Robotic Exoskeletons. In Proceedings of the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates, 14–18 October 2024; pp. 13640–13647. [Google Scholar]
  25. Wang, J.; Mattamala, M.; Kassab, C.; Zhang, L.; Fallon, M. Exosense: A Vision-Centric Scene Understanding System For Safe Exoskeleton Navigation. IEEE Robot. Autom. Lett. 2025, 10, 3510–3517. [Google Scholar] [CrossRef]
  26. Wu, X.; Li, J.; Liu, L.; Tao, D. The Visual Footsteps Planning System for Exoskeleton Robots Under Complex Terrain. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 5149–5160. [Google Scholar] [CrossRef]
  27. Tricomi, E.; Mossini, M.; Missiroli, F.; Lotti, N.; Zhang, X.; Xiloyannis, M.; Roveda, L.; Masia, L. Environment-Based Assistance Modulation for a Hip Exosuit via Computer Vision. IEEE Robot. Autom. Lett. 2023, 8, 2550–2557. [Google Scholar] [CrossRef]
  28. Kao, Y.H.; Chen, C.K.; Chen, C.C.; Lan, C.Y. Object Pose Estimation and Feature Extraction Based on PVNet. IEEE Access 2022, 10, 122387–122398. [Google Scholar] [CrossRef]
  29. Li, L.L.; Zhang, Y.P.; Cao, G.Z.; Li, W.Z. Human-in-the-Loop Trajectory Optimization Based on SEMG Biofeedback for Lower-Limb Exoskeleton. Sensors 2024, 24, 5684. [Google Scholar] [CrossRef] [PubMed]
  30. Chen, H.; Chen, X.; Dong, C.; Yu, Z.; Huang, Q. Online Running Pattern Generation for Humanoid Robot with Direct Collocation of Reference-Tracking Dynamics. IEEE/ASME Trans. Mechatron. 2024, 29, 2091–2102. [Google Scholar] [CrossRef]
  31. Bordalba, R.; Schoels, T.; Ros, L.; Porta, J.M.; Diehl, M. Direct Collocation Methods for Trajectory Optimization in Constrained Robotic Systems. IEEE Trans. Robot. 2023, 39, 183–202. [Google Scholar] [CrossRef]
Figure 1. Obstacle-crossing-motion control framework.
Figure 2. Lower limb exoskeleton robots integrated with depth cameras.
Figure 3. (a) The motion trajectory of hip joint at a distance of 1.26 m between obstacles and lower limb exoskeleton robots; (b) The motion trajectory of knee joint at a distance of 1.26 m between obstacles and lower limb exoskeleton robots.
Figure 4. (a) The motion trajectory of hip joint at a distance of 1.14 m between obstacles and lower limb exoskeleton robots; (b) The motion trajectory of knee joint at a distance of 1.14 m between obstacles and lower limb exoskeleton robots.
Figure 5. (a) The motion trajectory of hip joint at a distance of 1.21 m between obstacles and lower limb exoskeleton robots; (b) The motion trajectory of knee joint at a distance of 1.21 m between obstacles and lower limb exoskeleton robots.
Figure 6. (a) Joint angles of hip joint crossing the trajectory of obstacle 1; (b) Joint angles of knee joint crossing the trajectory of obstacle 1.
Figure 7. (a) Joint angles of hip joint crossing the trajectory of obstacle 2; (b) Joint angles of knee joint crossing the trajectory of obstacle 2.
Figure 8. (a) Joint angles of hip joint crossing the trajectory of obstacle 3; (b) Joint angles of knee joint crossing the trajectory of obstacle 3.
Figure 9. (a) Joint angles of hip joint crossing the trajectory of obstacle 4; (b) Joint angles of knee joint crossing the trajectory of obstacle 4.
Figure 10. Step length and step height of lower limb exoskeleton robot when obstacle-crossing. (a) The step length and step height of obstacle-crossing 1; (b) The step length and step height of obstacle-crossing 2; (c) The step length and step height of obstacle-crossing 3; (d) The step length and step height of obstacle-crossing 4.
Table 1. Parameters of gait constraints and motion constraints defined in gait trajectory planning.

Parameter Symbol | Parameter Meaning | Value
$\theta_{to\text{-}b}$ | Backward inclination angle of the torso | 0.03 rad
$\theta_{to\text{-}f}$ | Forward inclination angle of the torso | 0.03 rad
$\theta_{K\max}$ | Protective margin for excessive extension of the knee joint | 0.03 rad
$d_a$ | Step length when crossing obstacles | Calculated from the length of obstacles
$h_a$ | Step height when crossing obstacles | Calculated from the height of obstacles
$H(h_a)$ | Ground clearance constraint for the swinging leg | $H(h_a) = 0.024 (h_a)^2 + 0.024$
$\Delta h$ | Safety margin for crossing obstacles | Customized according to experimental constraints
$T^*$ | Step time | Set from the experiment
$d_a^*$ | One step length of the normal trajectory | From the normal-trajectory measurement of the experiment
$h_a^*$ | Step height of the normal trajectory | From the normal-trajectory measurement of the experiment
$d_s$ | Threshold for trajectory selection in the obstacle-crossing motion decision-making algorithm | Customized from experimental constraints
$d_c$ | Actual horizontal distance between the support foot and the obstacle | Obtained from depth camera detection
$d_1$ | Horizontal distance between the support foot and the obstacle | Customized from experimental constraints
$d_2$ | Horizontal distance between the swinging foot and the obstacle after landing | Customized from experimental constraints
$d_f$ | Horizontal distance extending from the toe to the end of the swinging leg | Customized from experimental constraints
Table 2. Actual and measured sizes of the four types of obstacles.

Obstacle | Size | Length (m) | Width (m) | Height (m)
Obstacle 1 | Actual size | 0.125 | 0.019 | 0.08
Obstacle 1 | Measured size | 0.1284 | 0.19702 | 0.0921
Obstacle 2 | Actual size | 0.11 | 0.20 | 0.03
Obstacle 2 | Measured size | 0.1307 | 0.2063 | 0.0386
Obstacle 3 | Actual size | 0.185 | 0.26 | 0.03
Obstacle 3 | Measured size | 0.1901 | 0.2704 | 0.4011
Obstacle 4 | Actual size | 0.09 | 0.165 | 0.053
Obstacle 4 | Measured size | 0.1012 | 0.1723 | 0.6029
Table 3. Joint angles of the adjustment trajectory at different distances between the obstacle and the robot.

Distances | Left Hip (°) | Left Knee (°) | Right Hip (°) | Right Knee (°)
Distance 1: 1.26 m | 21.19 | 42.28 | 21.18 | 43.03
 | 7.12 | | 7.11 |
Distance 2: 1.14 m | 16.37 | 31.74 | 16.37 | 30.70
 | 5.08 | | 5.09 |
Distance 3: 1.21 m | 18.54 | 39.61 | 17.21 | 39.62
 | 6.64 | | 6.41 |
Table 4. Specific values of gait parameters of the lower limb exoskeleton robot crossing obstacles with different feature information.

Objects | Step Height (m) | Step Length (m)
Obstacle 1 | 0.22 | 0.34
Obstacle 2 | 0.14 | 0.31
Obstacle 3 | 0.15 | 0.40
Obstacle 4 | 0.19 | 0.33
