Article

A Planning Framework Based on Semantic Segmentation and Flipper Motions for Articulated Tracked Robot in Obstacle-Crossing Terrain

School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(9), 627; https://doi.org/10.3390/biomimetics10090627
Submission received: 8 August 2025 / Revised: 11 September 2025 / Accepted: 15 September 2025 / Published: 17 September 2025
(This article belongs to the Special Issue Artificial Intelligence for Autonomous Robots: 3rd Edition)

Abstract

Articulated tracked robots (ATRs) equipped with dual active flippers are widely used because they can climb over complex obstacles much as legged animals do. This paper presents a novel planning framework that empowers ATRs to autonomously generate global paths integrating obstacle-crossing maneuvers in complex terrains, mitigating the excessive dependence on remote human control and thereby enhancing the operational efficiency and adaptability of ATRs in challenging environments. The framework consists of three core components. First, a lightweight DeepLab V3+ architecture augmented with an edge-aware module performs real-time semantic segmentation of elevation maps. Second, a simplified model of the robot-terrain contact is constructed to rapidly calculate the robot's pose at map sampling points through contact point traversal. Finally, based on rapidly-exploring random trees, the cost of flipper motion smoothness is incorporated into the search process, achieving collaborative planning of passable paths and flipper maneuvers in obstacle-crossing scenarios. The framework was tested on our Climber robot; it quickly and accurately identifies flat areas, obstacle-crossing areas, and impassable areas, avoiding redundant planning in non-obstacle areas. Compared to manual remote control, the planned paths demonstrated shorter travel time, better stability, and lower flipper energy expenditure. This framework offers substantial practical value for autonomous navigation in demanding environments.


1. Introduction

From emergency disaster relief to environmental exploration and infrastructure maintenance [1,2], mobile robots are transforming various industries, yet complex terrain still challenges their mobility. Through long-term natural evolution, creatures in nature have developed the ability to effortlessly climb over obstacles using their flexible limbs, and humans have sought inspiration from them, leading to snake-like robots, legged robots, and similar designs. Articulated tracked robots equipped with dual active flippers are extensively deployed for their superior obstacle negotiation performance: the wide track contact area provides excellent stability, and, compared to traditional crawler robots, they are as flexible as legged animals, adapting to different complex terrains by changing the angle of their flippers and thus easily crossing obstacles. Our Climber robot, shown in Figure 1, is designed on this principle.
Traditional systems rely on remote teleoperation, which requires simultaneous manual control of both locomotion and flipper angles and thus imposes a significant burden on the operator. With advances in simultaneous localization and mapping (SLAM) technology and increased computational power, 2D indoor ground mobile robot navigation has become relatively mature; by improving the robustness and efficiency of existing 2D autonomous navigation solutions [3,4,5], autonomous navigation has been widely adopted in various fields [6,7]. However, navigating articulated tracked robots in obstacle-crossing terrain remains challenging, as their trajectories extend beyond the planar workspace: obstacle-crossing maneuvers are highly variable, and, unlike the 3D motion planning of UAVs, the robot must remain in contact with the ground to move. Autonomously planning a stable, passable path that includes active front and rear flipper movements is therefore a significant challenge for robots navigating complex obstacle-crossing scenarios.
Efficient environment representation is a prerequisite for complex-scene navigation. Traditional methods rely on geometric representations such as 2.5D height maps [8,9], point clouds [10], or truncated signed distance fields (TSDFs), which encode topological and geometric features but lack the semantic information needed for advanced tasks [11]. Vision-based methods are widely adopted to embed semantic understanding [12]; for example, Tateno et al. [13] leveraged CNN-based depth estimation integrated with monocular SLAM to achieve semantic mapping. However, integrating these methods with LiDAR and IMU SLAM systems requires additional visual sensors. Wellhausen et al. [14] instead trained CNNs on elevation maps to infer foothold semantics for legged robots.
Several studies have explored articulated tracked robot navigation in complex terrains, emphasizing flipper angle adaptation. Discrete flipper states effectively address terrain-specific configurations: a D* Lite-based planner [15] selects the optimal flipper angle from four predefined poses based on terrain data, but it treats position and flipper planning as separate processes, leading to suboptimal integration. Such methods often overlook motion smoothness, causing inefficiency and high energy consumption from frequent state switching. A simplified skeleton model [16,17,18] enables continuous motion generation by searching a degenerate configuration space but prioritizes local transitions over global efficiency.
More recently, a planar RTI (robot-terrain interaction) model [19] has been proposed to describe robot-terrain interactions; it formulates the hybrid terrain optimization (HTO) problem as a multi-objective nonlinear programming (NLP) problem solved in real time to switch between travel and traversal modes, but its lack of global path planning hinders the coordination of path and motion planning. Neural networks [20] and reinforcement learning [21,22] have been applied to flipper planning but remain limited to specific terrains; the objective of this study is therefore to develop a universal planning methodology adaptable to diverse terrains. For ATRs, commonly deployed in search and rescue missions, efficient obstacle-crossing requires coordinated path and flipper motion planning; existing methods struggle to integrate both seamlessly, highlighting the need for improved autonomous navigation in complex environments.
Inspired by animals' agile movement in complex environments, we aim for robots to possess a biological-like semantic understanding of maps, enabling them to plan their own limb movements and feasible global paths. We therefore propose a planning framework for articulated tracked robots in obstacle-crossing terrains (Figure 2) with three core components: (1) a lightweight DeepLab V3+ network with an added edge-awareness module for map semantic segmentation; (2) contact-point-traversal-based computation of flipper motions; and (3) an RRT* algorithm built on this flipper motion computation that generates the global path, taking into account the obstacle-crossing difficulty cost and the flipper traveling stability cost. The framework realizes collaborative planning of paths and flipper actions in obstacle-crossing terrains, reduces the operator's manipulation burden, and produces smooth, efficient motions.
The structure of this paper is as follows: Section 1 describes the background of this study, related work, and the structure of the article. Section 2 discusses semantic segmentation of the environment perceived by the robot, with the main algorithmic framework based on an improved DeepLab V3+ network; Section 3 describes the method for calculating the robot's flipper angles; Section 4 introduces the global path optimization method based on RRT*; Section 5 describes the implementation of the navigation framework on Climber; and the conclusions are presented in Section 6.

2. Map Semantic Segmentation

We trained a DeepLab V3+ network that combines MobileNet V3 with a boundary-aware module (MB-DeepLab V3+) for map semantic segmentation. It divides the map into flat areas, obstacle-crossing areas, and impassable areas, and stores the map semantics in τ for use in subsequent calculations. The framework is shown in Figure 3. We converted the point cloud map into elevation raster data as input: a region of the elevation raster map centered on the robot was extracted, and its height values were mapped to a 224 × 224 grayscale image. The grayscale images were annotated for training, with red indicating the obstacle-crossing area and blue indicating the impassable area. Several improvements were then made to DeepLab V3+ [23], a widely used semantic segmentation framework: (1) the encoder adopts the lightweight MobileNet V3 [24] as the feature extraction network, modified to accept single-channel input; (2) since the output feature map of MobileNet V3 is 7 × 7, the dilation rates of the dilated convolutions in ASPP were changed to 1, 3, and 5, covering context at different scales while avoiding the loss of information that occurs when the receptive field exceeds the map boundary; (3) the edge-feature-enhancement part of the CBM module in the B2CNet network [25] was extracted and added to the decoder as a boundary-aware module (BAM), shown in Figure 4, which consists of the lightweight attention mechanism SimAM, an average pooling layer, a 1 × 1 convolution, and a Sigmoid activation function.
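To make the decoder modification concrete, below is a minimal PyTorch sketch of one plausible composition of the listed BAM components (SimAM, average pooling, 1 × 1 convolution, Sigmoid). The exact wiring, here a multiplicative gate on the decoder feature map, and the SimAM energy form are assumptions; the paper only names the components.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free attention: weights each activation by an energy term
    derived from its squared deviation from the spatial mean."""
    def __init__(self, eps: float = 1e-4):
        super().__init__()
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n = x.shape[2] * x.shape[3] - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        v = d.sum(dim=(2, 3), keepdim=True) / n
        return x * torch.sigmoid(d / (4 * (v + self.eps)) + 0.5)

class BAM(nn.Module):
    """Boundary-aware module sketch: SimAM -> average pooling -> 1x1 conv ->
    Sigmoid, applied as a gate that re-weights boundary-adjacent responses."""
    def __init__(self, channels: int):
        super().__init__()
        self.simam = SimAM()
        self.pool = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.conv(self.pool(self.simam(x))))
        return x * gate

# Example: gating a 48-channel decoder feature map from a 224 x 224 input.
feat = torch.randn(1, 48, 56, 56)
print(BAM(48)(feat).shape)  # torch.Size([1, 48, 56, 56])
```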

3. Calculation of Flipper Posture

3.1. Simplified Modeling of the Robot

We use the Climber robot (Figure 1), which is equipped with front and rear flippers on a standard tracked chassis; the two front flippers are driven by one motor to pitch up and down simultaneously, and the two rear flippers work on the same principle. The robot's structure is symmetric on both sides.
In the lateral projection, the small wheel of the flipper has a different diameter from the big wheel, which complicates the definition of the flipper angle; the small wheel is therefore conceptually expanded to match the diameter of the large wheel (Figure 5). The dotted line is the outline of the expanded flipper. When calculating the front and rear flipper angles, the angular offset θ₀ introduced by this virtual expansion is taken into account. The ground contour can then be extended outward along its normal direction by a distance equal to the radius of the track wheels. The robot is thus simplified into a three-link structure, yielding a zero-thickness robot model that is convenient for calculation (the green line segment in Figure 5). After expansion, the complex geometric calculations of [19] are avoided, improving the efficiency of subsequent computations.
The terrain beneath the robot is represented by a discrete set of points. The local terrain is sensed by the onboard LiDAR and, combined with the map-building algorithm [26], the map points beneath the side tracks and flippers are extracted to obtain the original point cloud; the point cloud is then inflated and resampled to obtain the inflated set of terrain points, denoted as the set T (the red dotted line in Figure 5), whose points are denoted $p \in T$.
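For illustration, the sketch below performs the inflation-and-resampling step on a 2D ground profile: each point is offset along the local normal by the track-wheel radius and the result is resampled by arc length. The radius value and resampling step are placeholders, not Climber's actual parameters.

```python
import numpy as np

def inflate_terrain(points: np.ndarray, radius: float, step: float = 0.01) -> np.ndarray:
    """Offset an ordered (N, 2) ground profile outward along its normals by
    the track-wheel radius, then resample at uniform arc-length spacing."""
    # Tangents by finite differences, then left-hand (upward) normals.
    tangents = np.gradient(points, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)
    inflated = points + radius * normals
    # Resample by arc length so the contact search sees uniform point density.
    seg = np.linalg.norm(np.diff(inflated, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s_new = np.arange(0.0, s[-1], step)
    x = np.interp(s_new, s, inflated[:, 0])
    y = np.interp(s_new, s, inflated[:, 1])
    return np.stack([x, y], axis=1)

# Example: a step-like profile inflated by an assumed 5 cm wheel radius.
profile = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.22], [1.0, 0.22]])
print(inflate_terrain(profile, radius=0.05).shape)
```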

3.2. Model Calculation

The calculation method draws on [27]: the posture of a robot on terrain is determined by its own configuration and its contact points with the terrain. Assuming that the ground reference frame is [G], the pose prediction problem can be described in the following form, with the definitions:
  • T: the set of points in the terrain.
  • $x_R$: the x-axis coordinate of the chassis center in [G].
  • $\theta_f$: the angle of the robot's front flipper, positive counterclockwise.
  • $\theta_r$: the angle of the robot's rear flipper, positive clockwise.
Solving for the robot in a static steady state yields:
  • $y_R$: the y-axis coordinate of the chassis center in [G].
  • $\phi$: the pitch angle of the robot in [G], positive counterclockwise.
Figure 5 illustrates the specific meaning of the above parameters. Solving the pose is to find the terrain points, that is, the contact points, from the set of points T that satisfy the constraints and then determine the pose parameters. Here, the pose of the robot is calculated in the ‘steady state’, based on which two constraints can be imposed:
  • Condition 1: At least one point of contact with the terrain must exist on each side, front and rear, of the robot's center of mass; this is an essential condition for stability;
  • Condition 2: The pose of the robot, determined by the contact points, must ensure that every point of the robot lies above all points of the terrain within its reachable range—a fundamental physical constraint that prevents collisions with the ground.
Establish a local coordinate system [L] whose y-axis is parallel to the direction of gravity and whose x-axis points along the robot's heading, with the position to be predicted $(x_R, y_R)$ in [G] as the origin. Placing the center of the robot chassis at the origin with the chassis parallel to the x-axis gives the model shown in Figure 6. The model can be described by line segments for each component:
$$
K = \begin{cases} l_{r_2 r_1}, & \text{front flipper}\ (r_2 \to r_1) \\ l_{r_3 r_2}, & \text{chassis}\ (r_3 \to r_2) \\ l_{r_4 r_3}, & \text{rear flipper}\ (r_4 \to r_3) \end{cases} \tag{1}
$$
The terrain point-robot contact can be represented as a point on the piecewise linear model. Specifically, the robot model K is first rotated about the origin by the pitch angle $\phi$ in the frame [L], then translated by h along the y-axis, making contact with the terrain points $p_{c1}(x_{c1}, y_{c1}) \in T_1$ and $p_{c2}(x_{c2}, y_{c2}) \in T_2$, as shown in Figure 6.
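For concreteness, here is a sketch of constructing the three-link model K from the flipper angles and applying the rotation-then-translation transform of Equation (5) below. The link lengths are illustrative placeholders, not Climber's actual dimensions, and the rear-flipper direction convention is an assumption consistent with the sign definitions above.

```python
import numpy as np

# Illustrative geometry (metres); real values come from the robot's CAD model.
L_CHASSIS = 0.60   # chassis length between the flipper axes (assumed)
L_FLIPPER = 0.25   # expanded flipper length (assumed)

def robot_polyline(theta_f: float, theta_r: float) -> np.ndarray:
    """Vertices r1..r4 of the three-link model in the local frame [L],
    chassis centred at the origin and parallel to the x-axis."""
    r2 = np.array([ L_CHASSIS / 2, 0.0])   # front flipper axis
    r3 = np.array([-L_CHASSIS / 2, 0.0])   # rear flipper axis
    r1 = r2 + L_FLIPPER * np.array([np.cos(theta_f), np.sin(theta_f)])
    r4 = r3 + L_FLIPPER * np.array([-np.cos(theta_r), np.sin(theta_r)])
    return np.stack([r1, r2, r3, r4])

def transform(points: np.ndarray, phi: float, h: float) -> np.ndarray:
    """Rotate by pitch phi about the origin, then lift by h (Equation (5))."""
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    return points @ R.T + np.array([0.0, h])
```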
To satisfy Constraint 1, the terrain points must be located on the front and rear sides of the robot's center of mass. Incidence between a point and a line can be expressed as the dot product of the homogeneous coordinate vector of the line with that of the point being zero.
Since $p_{c1}$ and $p_{c2}$ may be located either on a flipper or on the chassis, the geometric relationships are as follows:
$$
\begin{cases} l_{r_2 r_1}^{T} \cdot p_{c1} = 0, & p_{c1} \text{ on the front flipper},\ x_{c1} \in (x_{r_2}, x_{r_1}] \\ l_{r_3 r_2}^{T} \cdot p_{c1} = 0, & p_{c1} \text{ on the chassis},\ x_{c1} \in (x_{\text{center}}, x_{r_2}] \end{cases} \tag{2}
$$
$$
\begin{cases} l_{r_3 r_2}^{T} \cdot p_{c2} = 0, & p_{c2} \text{ on the chassis},\ x_{c2} \in (x_{r_3}, x_{\text{center}}] \\ l_{r_4 r_3}^{T} \cdot p_{c2} = 0, & p_{c2} \text{ on the rear flipper},\ x_{c2} \in (x_{r_4}, x_{r_3}] \end{cases} \tag{3}
$$
where $l_{r_{i+1} r_i}$ ($i = 1, 2, 3$) is the line passing through points $r_{i+1}$ and $r_i$ in the transformed robot model K, and $p_{cj}$ ($j = 1, 2$) is the contact point expressed as a homogeneous coordinate vector, as in Equation (4).
$$
p_{cj} = \begin{bmatrix} x_{cj} \\ y_{cj} \\ 1 \end{bmatrix}, \qquad l_{r_{i+1} r_i} = \begin{bmatrix} y_{r_{i+1}} - y_{r_i} \\ x_{r_i} - x_{r_{i+1}} \\ x_{r_{i+1}} y_{r_i} - x_{r_i} y_{r_{i+1}} \end{bmatrix} \tag{4}
$$
where $(x_{r_i}, y_{r_i})$ are the coordinates of $r_i$, obtained by rotating and translating the initial-state $r_i$ shown in Figure 7, as in Equation (5).
$$
\begin{bmatrix} x_{r_i}(h, \phi) \\ y_{r_i}(h, \phi) \end{bmatrix} = \begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix} \begin{bmatrix} x_{r_i} \\ y_{r_i} \end{bmatrix} + \begin{bmatrix} 0 \\ h \end{bmatrix} \tag{5}
$$
Constraint 1 further requires the contact points to be distributed in front of and behind the center of mass; that is, the x-coordinate ranges of the contact points in Equations (2) and (3) are bounded by the center of mass. The coordinates $(x_{\text{center}}, y_{\text{center}})$ of the robot model's center of mass in [L] can be expressed as Equation (6):
$$
x_{\text{center}} = \frac{m_F l_F (\cos\theta_f - \cos\theta_r) + m_B x_b}{m_B + 2 m_F}, \qquad y_{\text{center}} = \frac{m_F l_F (\sin\theta_f + \sin\theta_r) + m_B y_b}{m_B + 2 m_F} \tag{6}
$$
where $(x_b, y_b)$ are the coordinates of the chassis center of mass, $l_F$ is the distance from the flipper center of mass to the flipper rotational axis, and $m_B$ and $m_F$ are the masses of one side of the robot's chassis and of a single flipper, respectively. Assuming that the chassis center of mass coincides with the center of the chassis shape, i.e., $(x_b, y_b) = (0, 0)$, we have Equation (7):
$$
x_{\text{center}}^2 + y_{\text{center}}^2 = 2\bigl(1 - \cos(\theta_f + \theta_r)\bigr) \left( \frac{m_F l_F}{m_B + 2 m_F} \right)^{2} \le \left( \frac{2 m_F l_F}{m_B + 2 m_F} \right)^{2} \tag{7}
$$
In other words, the point $(x_{\text{center}}, y_{\text{center}})$ varies within a circle of radius $\frac{2 m_F l_F}{m_B + 2 m_F}$ centered at the chassis center of mass; in particular,
$$
x_{\text{center}} \in \left[ -\frac{2 m_F l_F}{m_B + 2 m_F},\ \frac{2 m_F l_F}{m_B + 2 m_F} \right]
$$
All terrain points in [L] can therefore be pre-partitioned into a front subset $T_1$ and a rear subset $T_2$, using the upper and lower bounds of $x_{\text{center}}$ as boundaries, with the candidate contact points located in them, i.e., $p_{c1}(x_{c1}, y_{c1}) \in T_1$ and $p_{c2}(x_{c2}, y_{c2}) \in T_2$, respectively (Figure 6). The position of a point relative to a line can be expressed by the sign of the dot product of $l$ and $p$. Therefore, Constraint 2 can be transformed into Equation (8):
$$
\begin{cases} l_{r_2 r_1}^{T} \cdot p \ge 0, & x_p \in (x_{r_2}, x_{r_1}] \\ l_{r_3 r_2}^{T} \cdot p \ge 0, & x_p \in [x_{r_3}, x_{r_2}] \\ l_{r_4 r_3}^{T} \cdot p \ge 0, & x_p \in [x_{r_4}, x_{r_3}) \end{cases} \qquad \forall\, p(x_p, y_p) \in T \tag{8}
$$
The (h, $\phi$) corresponding to candidate contact points $p_{c1}$ and $p_{c2}$ that satisfy all the conditions is the predicted pose of the robot at the current position; it is found by solving Equations (2)-(5) jointly subject to (8). A feasible solution may not exist for certain terrain configurations; such an infeasible combination of robot configuration and terrain location is a non-drivable point, which is avoided in path planning.
When the total number of terrain points is limited, the system of equations formed from (2) and (3) can be used to traverse all candidate contact point pairs ($p_{c1}$, $p_{c2}$) in the local terrain point set and quickly obtain the analytical robot poses corresponding to each pair; the poses satisfying all constraints are then obtained by filtering against the interval range condition and the inequality condition. Based on this idea, a tracked robot pose prediction method applicable to complex terrain is designed. First, a pair of candidate contact points ($p_{c1}$, $p_{c2}$) is selected from the terrain point sets $T_1$ and $T_2$; since each contact point may lie either on a flipper or on the chassis, a system of equations is formed by taking one equation from each of (2) and (3) according to the following cases:
  • p c 1 is on the front flipper, and p c 2 is on the rear flipper;
  • p c 1 is on the front flipper and p c 2 is on the chassis;
  • p c 1 is on the chassis, and  p c 2 is on the rear flipper;
  • p c 1 is on the chassis, and  p c 2 is on the chassis.
The basic form of these systems of equations is the same; the second case is used as an example below, and the other three cases merely substitute the corresponding parameters. Substituting Equations (4) and (5) into the system consisting of the first line of Equation (2) and the first line of Equation (3), and letting t = tan($\phi$/2), so that $\sin\phi = 2t/(1 + t^2)$ and $\cos\phi = (1 - t^2)/(1 + t^2)$, the system of equations in h and t is obtained as Equation (9):
$$
\begin{cases} (A_1 - X_1 h)\,t^2 + (B_1 - 2 Y_1 h)\,t + (C_1 + X_1 h) = 0 \\ (A_2 - X_2 h)\,t^2 + (B_2 - 2 Y_2 h)\,t + (C_2 + X_2 h) = 0 \end{cases} \tag{9}
$$
The coefficients $A_i$, $B_i$, $C_i$, $X_i$, $Y_i$ ($i = 1, 2$) are constants that depend only on the original coordinates of the robot model $(x_{r_i}, y_{r_i})$ and the coordinates of the candidate contact points $(x_{ci}, y_{ci})$. Assuming the leading coefficients $(A_i - X_i h)$ in Equation (9) are nonzero, eliminating t from the two equations yields a quartic equation in h, whose solutions are obtained with polynomial root-finding algorithms. The corresponding $\phi$ for each h then follows from the quadratic formula and the inverse tangent function. The other three cases are solved in the same way. Based on the range conditions on the contact points' x-coordinates and the equality constraints, a set of attitude parameters (h, $\phi$) satisfying the contact conditions can be found among these four cases.
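The elimination step can be checked numerically. The sketch below forms the resultant of the two quadratics of Equation (9) with respect to t, roots the resulting quartic in h, and recovers $\phi = 2\arctan t$; the coefficient values are illustrative stand-ins, not derived from a real contact pair.

```python
import sympy as sp

h, t = sp.symbols('h t')
# Illustrative coefficients; in practice A_i, B_i, C_i, X_i, Y_i come from the
# model vertices (x_ri, y_ri) and the candidate contact points (x_ci, y_ci).
A1, B1, C1, X1, Y1 = 1.2, -0.4, 0.3, 0.8, 0.5
A2, B2, C2, X2, Y2 = 0.9, 0.6, -0.2, 0.7, 0.4

eq1 = (A1 - X1*h)*t**2 + (B1 - 2*Y1*h)*t + (C1 + X1*h)
eq2 = (A2 - X2*h)*t**2 + (B2 - 2*Y2*h)*t + (C2 + X2*h)

# Eliminating t via the resultant yields the quartic polynomial in h.
quartic = sp.Poly(sp.resultant(eq1, eq2, t), h)
for h_val in quartic.nroots():
    if abs(sp.im(h_val)) > 1e-9:
        continue                      # keep real heights only
    for t_val in sp.Poly(eq1.subs(h, h_val), t).nroots():
        # keep real roots of eq1 that also satisfy eq2 (the shared root)
        if abs(sp.im(t_val)) < 1e-9 and abs(eq2.subs({h: h_val, t: t_val})) < 1e-6:
            phi = 2 * sp.atan(sp.re(t_val))
            print(f"h = {float(sp.re(h_val)):+.4f}, phi = {float(phi):+.4f} rad")
```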
Given a pair of candidate contact point coordinates, the analytical method above yields the attitude parameters satisfying Constraint 1. Equation (8) is then used to determine whether Constraint 2 is satisfied; if not, a new pair of candidate contact points is selected from $T_1$ and $T_2$. This ultimately yields the single-sided robot attitude for the specified front and rear flipper angles (configuration) on the given terrain.
When the total number of terrain points in $T_1$ and $T_2$ is 30–40, the average solution time for a single pose with specified $x_R$, $\theta_f$, and $\theta_r$ is about 0.7 ms; 1000–1500 poses can thus be predicted per second, which satisfies the demands of real-time planning.
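Putting the pieces together, the pose predictor reduces to a traversal over candidate contact pairs, sketched below. `solve_contact_pair` and `above_terrain` are hypothetical helpers standing in for the Equation (9) solver and the Constraint-2 check of Equation (8), respectively.

```python
import itertools
import numpy as np

def predict_pose(T1, T2, theta_f, theta_r):
    """Return a statically stable pose (h, phi) for the given flipper angles,
    or None if the position is non-drivable.
    T1, T2 : (N, 2) arrays of terrain points in front of / behind the
             centre-of-mass band (the Constraint-1 pre-partition).
    solve_contact_pair : hypothetical analytical solver for one contact pair
                         (the four-case quartic of Equation (9)).
    above_terrain      : hypothetical check of the Constraint-2 inequality."""
    terrain = np.vstack([T1, T2])
    for p1, p2 in itertools.product(T1, T2):
        for h, phi in solve_contact_pair(p1, p2, theta_f, theta_r):
            if above_terrain(h, phi, theta_f, theta_r, terrain):
                return h, phi          # first feasible pose is accepted
    return None                        # non-drivable point: pruned by planner
```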

4. Flipper Motion Planning RRT* Algorithm

Flipper Motion Planning RRT* (FMP-RRT*) is proposed to solve the problem of global motion path generation for articulated tracked robots in obstacle-crossing terrain. Its framework is based on Informed-RRT* [28], as shown in Algorithm 1. The following definitions are relevant:
  • $x \in \mathbb{R}^3$ denotes a spatial point.
  • $S \in \mathbb{R}^4$ denotes the state of the robot, comprising the height of the body's center of mass z, the body inclination angle $\phi$, the front flipper angle $\theta_f$, and the rear flipper angle $\theta_r$.
  • $\bar{x} \in \mathbb{R}^2$ denotes a planar point, the projection of a spatial point onto the plane.
  • $\hat{x} \in \mathbb{R}^3$ denotes the location of the robot center on the terrain.
  • H denotes a path consisting of a series of states.
Unlike traditional RRT, where each tree element represents only a position, our method defines each node as $N_i = (x_i, s_i, \tau_i)$, where $s_i$ denotes the robot state and $\tau_i$ represents the area to which the node belongs (flat, obstacle-crossing, or impassable), computed as described in Section 2. Given two nodes $N_1$ and $N_2$, let $l_{12}$ denote the Euclidean distance between them in 3D space; the cost function of the edge connecting $N_1$ and $N_2$ is defined as follows:
$$
f(N_1, N_2) = (1 + j\omega)\, l_{12}
$$
where
$$
\omega = \frac{F - F_{\min}}{F_{\max} - F_{\min}} = \frac{mg\sin\alpha + \mu mg\cos\alpha - \mu mg}{mg - \mu mg}
$$
$$
l_{12} = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}
$$
$$
\alpha = \arcsin\frac{z_2 - z_1}{l_{12}}
$$
Here, j is the crossing penalty scale factor; $\omega$ is the crossing difficulty cost, characterized by the normalized minimum power output required during the crossing process; and $\mu$ is the rolling friction coefficient between the track and the ground, so the algorithm minimizes both path length and traversal difficulty (a sketch of this edge cost is given after the subfunction lists below). The pseudocode is given in Algorithm 1; the common subfunctions of the Informed-RRT* algorithm can be found in [28] and are briefly described as follows:
  • SampleEllipsoid(): sample within the elliptical region between the start and goal points.
  • RandomSample(): global random sampling.
  • FindNearest(T, $\bar{x}_{rand}$): find the point in the tree T nearest to $\bar{x}_{rand}$.
  • Steer($N_{nearest}$, $\bar{x}_{rand}$): move a fixed distance from $N_{nearest}$ toward $\bar{x}_{rand}$ to obtain a new point.
  • FindNeighbors(V, $N_{new}$): find the set of neighboring nodes of $N_{new}$ in the node set V.
  • FindParent($\Omega_{near}$, $N_{new}$): find the parent node of $N_{new}$ in $\Omega_{near}$.
  • Rewire(T, $\Omega_{near}$, $N_{new}$): reconnect the tree T to optimize the path; specifically, for the neighborhood nodes near $N_{new}$, check whether using $N_{new}$ as the parent node reduces their path cost f($N_{new}$, $\Omega_{near}$); if so, reconnect their parent to $N_{new}$.
  • InGoalRegion($N_{new}$): determine whether $N_{new}$ is near the goal region.
Other subfunctions are described as follows.
  • Pos($\bar{x}$): given 2D raster point coordinates $\bar{x}$, return the corresponding node N.
  • PostureCalculation($\theta_f$, $\theta_r$, b, c): given the initial front and rear flipper angles $\theta_f$ and $\theta_r$ and the angle-prediction resolutions b and c, solve for $\phi_{new}$ and $y_{new}$ under the different flipper angle conditions to form the set of possible robot states $S_1, S_2, S_3, \ldots$
  • ConfigurationPlanning($N_1$, $N_2$): given two nodes, calculate the best state at node $N_2$, including the best front and rear flipper angles $\theta_{f\,best}$, $\theta_{r\,best}$, the best body height $y_{best}$, and the best body pitch angle $\phi_{best}$.
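As promised above, a minimal sketch of the edge cost f($N_1$, $N_2$) follows; note that mg cancels in $\omega$, so only the slope $\alpha$ and the rolling friction coefficient $\mu$ matter. The friction value is an assumed placeholder; j = 1 matches the value used in the experiments.

```python
import numpy as np

MU = 0.4   # rolling friction coefficient between track and ground (assumed)
J = 1.0    # crossing penalty scale factor, as used in the experiments

def edge_cost(x1: np.ndarray, x2: np.ndarray) -> float:
    """f(N1, N2) = (1 + J*omega) * l12 for two 3-D node positions."""
    l12 = float(np.linalg.norm(x2 - x1))
    alpha = np.arcsin((x2[2] - x1[2]) / l12)   # slope angle between the nodes
    # omega = (F - Fmin) / (Fmax - Fmin); the mg factor cancels out.
    omega = (np.sin(alpha) + MU * np.cos(alpha) - MU) / (1.0 - MU)
    return (1.0 + J * omega) * l12

# Flat edge vs. an edge climbing a 22 cm step over the same planar distance.
print(edge_cost(np.zeros(3), np.array([1.0, 0.0, 0.0])))
print(edge_cost(np.zeros(3), np.array([1.0, 0.0, 0.22])))
```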
The calculation principle of the configuration planning function is as follows:
Algorithm 1 FMP-RRT* (N_start, N_goal, k)
 1: V ← {N_start}, E ← ∅, Ω_goal ← ∅
 2: T ← (V, E)
 3: for i = 1 to k do
 4:     if H ≠ ∅ then
 5:         x̄_rand ← SampleEllipsoid()
 6:     else
 7:         x̄_rand ← RandomSample()
 8:     end if
 9:     x̄_nearest ← FindNearest(T, x̄_rand)
10:     x̄_new ← Steer(x̄_nearest, x̄_rand)
11:     N_new ← Pos(x̄_new)
12:     if τ_new = obstacle-crossing then
13:         {s1, s2, s3, …} ← PostureCalculation(θ_f, θ_r, b, c)
14:     end if
15:     if N_new ≠ ∅ and τ_new ≠ impassable and S_new ≠ ∅ then
16:         Ω_near ← FindNeighbors(V, N_new)
17:         if Ω_near ≠ ∅ then
18:             N_parent ← FindParent(Ω_near, N_new)
19:             V ← V ∪ {N_new}
20:             E ← E ∪ {(N_parent, N_new)}
21:             if τ_new = obstacle-crossing or τ_parent = obstacle-crossing then
22:                 S_new ← ConfigurationPlanning(N_parent, N_new)
23:             end if
24:             T ← (V, E)
25:             T ← Rewire(T, Ω_near, N_new)
26:         end if
27:     end if
28:     if InGoalRegion(N_new) then
29:         Ω_goal ← Ω_goal ∪ {N_new}
30:         H ← GeneratePath(Ω_goal)
31:     end if
32: end for
33: return H
Pose planning at the two nodes, based on the preset robot configurations ($\theta_f$, $\theta_r$), produces a discrete state space consisting of the path point position, the front and rear flipper angles, the robot pitch angle, and the height above the ground. The robot pose at a specified position $\bar{x}_i$ with flipper angles $\theta_f$, $\theta_r$ is defined as a discrete state $S_{\bar{x}_i, \theta_f, \theta_r} = \bigl( z(\bar{x}_i, \theta_f, \theta_r),\ \phi(\bar{x}_i, \theta_f, \theta_r) \bigr)$, calculated as described in Section 3; that is, the robot's center-of-mass height and body pitch angle are determined by the robot's position and its front and rear flipper angles. The angles by which the front and rear flippers change between adjacent path points are denoted $\Delta\theta_f$, $\Delta\theta_r$, taking values in $\{-cb, \ldots, -b, 0, b, \ldots, cb\}$, $b \in \mathbb{Z}$, where b is the discretized resolution of the flipper angle and c is the maximum number of unit resolutions that a flipper joint angle can change each time the robot advances one path point. Based on the experience of human operators, a cost function g is designed to evaluate the cost of taking an action ($\Delta\theta_f$, $\Delta\theta_r$) at the state $S_{\bar{x}_i, \theta_f, \theta_r}$:
$$
g\bigl(\bar{x}_i, \theta_f, \theta_r, \Delta\theta_f, \Delta\theta_r\bigr) = g_\phi + g_h + g_s
$$
where $g_\phi$ is the cost of the change in pitch angle between consecutive states, $g_h$ is the cost of the difference between the center-of-mass height and the average ground height within the chassis range in the current state, and $g_s$ is the cost of the pitch-direction stability in the current state. The three are defined as follows.
$$
g_\phi = \omega_\phi \bigl( \phi(\bar{x}_{i+1}, \theta_f + \Delta\theta_f, \theta_r + \Delta\theta_r) - \phi(\bar{x}_i, \theta_f, \theta_r) \bigr)^2
$$
$$
g_h = \omega_h \bigl( h_{\text{mean}} - z(\bar{x}_{i+1}, \theta_f + \Delta\theta_f, \theta_r + \Delta\theta_r) \bigr)^2
$$
$$
g_s = \omega_s \bigl( \phi(\bar{x}_{i+1}, \theta_f + \Delta\theta_f, \theta_r + \Delta\theta_r) \bigr)^2
$$
$\omega_\phi$, $\omega_h$, and $\omega_s$ are the cost weighting coefficients of the respective terms, which determine the proportion of each cost in the total cost. These coefficients are tuned on the basis of empirical data; in this study we aim for a balanced cost distribution among the three and thus set each coefficient to 1/3. $h_{\text{mean}}$ denotes the average height of the local terrain at the current position. After obtaining the robot states for the various configurations at each path point and the costs of taking each action, a dynamic programming algorithm finds the optimal state transition path connecting the start state and the goal state in the robot's state space, so that the cumulative cost value is minimized:
$$
g_{\text{all}}\bigl(S_{\bar{x}_i, \theta_f, \theta_r}\bigr) = \min_{\Delta\theta_f,\, \Delta\theta_r} \Bigl[\, g_{\text{all}}\bigl(S_{\bar{x}_{i-1},\, \theta_f - \Delta\theta_f,\, \theta_r - \Delta\theta_r}\bigr) + g\bigl(\bar{x}_{i-1},\, \theta_f - \Delta\theta_f,\, \theta_r - \Delta\theta_r,\, \Delta\theta_f,\, \Delta\theta_r\bigr) \Bigr]
$$
The specific steps of the dynamic programming algorithm are as follows:
  • Step 1: Specify the robot's starting node $N_{parent}$ and the desired target node $N_{new}$. At this point, the robot state set at the target node has already been computed by the PostureCalculation function (line 13 of Algorithm 1).
  • Step 2: Identify the state within the target node's state set that minimizes the cost function $g_{\text{all}}$. This state becomes the new node's $s_{new}$.
  • Step 3: Upon encountering the next desired target node, calculate the cumulative cost of all previous transition states, and update the optimal state for all nodes, along with the optimal front/rear flipper angles $\theta_{f\,best}$, $\theta_{r\,best}$, chassis height $z_{best}$, and chassis pitch angle $\phi_{best}$.
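A minimal sketch of this dynamic program over a discretized flipper-angle grid follows. `step_cost` stands in for the action cost g defined above (it would be backed by the pose predictor of Section 3 and return infinity for infeasible poses); the free start condition and array layout are illustrative assumptions.

```python
import numpy as np

def plan_flippers(n_points: int, angles: np.ndarray, c: int, step_cost):
    """Return the optimal (theta_f, theta_r) index sequence along a fixed path.
    n_points  : number of path points
    angles    : admissible flipper angles at resolution b (1-D array)
    c         : max unit-resolution steps per waypoint advance
    step_cost(i, pf, pr, df, dr): cost g of moving to point i from the
    previous angles (index pf, pr) with index deltas (df, dr)."""
    m = len(angles)
    g_all = np.full((n_points, m, m), np.inf)
    back = np.zeros((n_points, m, m, 2), dtype=int)
    g_all[0] = 0.0                              # any start configuration
    for i in range(1, n_points):
        for f in range(m):
            for r in range(m):
                # predecessors reachable within c resolution steps
                for pf in range(max(0, f - c), min(m, f + c + 1)):
                    for pr in range(max(0, r - c), min(m, r + c + 1)):
                        cost = g_all[i-1, pf, pr] + step_cost(i, pf, pr, f - pf, r - pr)
                        if cost < g_all[i, f, r]:
                            g_all[i, f, r] = cost
                            back[i, f, r] = (pf, pr)
    # Backtrack from the cheapest terminal state.
    f, r = np.unravel_index(np.argmin(g_all[-1]), (m, m))
    seq = [(int(f), int(r))]
    for i in range(n_points - 1, 0, -1):
        f, r = back[i, f, r]
        seq.append((int(f), int(r)))
    return seq[::-1]
```

The complexity is O(n m² (2c + 1)²), which stays small because the flipper grid is coarse and only obstacle-crossing segments are planned this way.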
The FMP-RRT* algorithm iteratively samples the configuration space and refines the path via rewiring to asymptotically converge to the optimal solution. Specifically, FMP-RRT* samples and expands in 2D to obtain a new planar point $\bar{x}$, which is mapped to a spatial point to generate a new node $N_{new}$. The region attribute of this node is then judged; if it belongs to the obstacle-crossing region, flipper angle prediction is carried out and all possible poses are computed given the initial flipper angles and the angular resolution. Next, the algorithm connects $N_{new}$ to the tree: if $N_{new}$ passes collision checking and a feasible flipper angle solution exists, the parent node of the new node is found, the optimal flipper angle change from parent to child is computed, and path rewiring is performed to minimize the path cost. The algorithm repeats these operations until the specified maximum number of iterations is reached and returns the optimal solution H*. Besides inheriting the fast search and convergence speed of Informed-RRT*, FMP-RRT* has the following advantages. First, it plans obstacle-crossing maneuvers and paths simultaneously. Second, it adds little computation while planning the flipper actions: flipper motions are predicted only in the obstacle-crossing region during the expansion of the random sampling tree, which reduces terrain-analysis computation, avoids useless analysis, and accelerates the algorithm's response.

5. Experiments and Results

As shown in Figure 8, the experiments use the articulated tracked robot Climber. The LiDAR sensor is a RoboSense Helios 32 and the IMU is a WIT 9073; together they serve as the mapping and localization sensors. The computing unit is an NVIDIA Orin, featuring an ARM Cortex-A78 CPU and 64 GB of memory, which runs all algorithms. The camera is an Intel RealSense D435i, used to capture video of the scene in front of the robot (for manual operation). The communication antenna handles data transmission between the robot and a remote host computer, and the battery supplies power to the robot. We use the DLIO algorithm [29] to fuse LiDAR scans and IMU data for mapping and localization, and a simple PID controller as the motion-following controller.
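As an illustration of that last component, here is a minimal sketch of a PID motion-following controller acting on the heading error toward the next waypoint. The gains and the heading-error formulation are assumptions, since the paper does not detail the controller; only the 0.2 m/s forward speed matches the limit used in the experiments.

```python
import numpy as np

class PIDFollower:
    """Sketch of a PID path follower: PID on heading error, constant speed."""
    def __init__(self, kp=1.5, ki=0.0, kd=0.3, v=0.2, dt=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd   # illustrative gains
        self.v, self.dt = v, dt                  # 0.2 m/s speed limit, 20 Hz
        self.integral, self.prev_err = 0.0, 0.0

    def command(self, pose_xy, yaw, waypoint_xy):
        """Return (linear velocity, angular velocity) toward the waypoint."""
        dx, dy = waypoint_xy - pose_xy
        err = np.arctan2(dy, dx) - yaw
        err = np.arctan2(np.sin(err), np.cos(err))   # wrap to [-pi, pi]
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        omega = self.kp * err + self.ki * self.integral + self.kd * deriv
        return self.v, omega
```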

5.1. Map Semantic Segmentation

We set up 15 environments containing obstacle-crossing areas on the Gazebo simulation platform; the obstacle-crossing areas mainly consist of different platforms and stairs (as shown in Figure 9). We controlled the robot to move in these environments and collected a total of 2500 real-time elevation grid maps, which were mapped to 224 × 224 images based on height values. The data were then split into a training dataset and a validation dataset at a ratio of 9:1, and the training images were annotated into safe passage areas, obstacle-crossing areas, and impassable areas. During training, data augmentation techniques such as random flipping, random cropping, and padding were applied. The model was trained with transfer learning: in the first phase, the feature extraction layers were frozen and only the newly added classifier head was updated; in the second phase, the feature extraction network was unfrozen and included in training. The initial learning rate was set to 0.005, the batch size to 16, and the number of epochs to 50 and 250 for the two phases, respectively. Focal loss is used as the loss function, and the Adam optimization algorithm is used to reduce training time and accelerate model convergence. The model's terrain recognition accuracy is evaluated using the mean intersection over union (MIoU) and mean pixel accuracy (MPA) as metrics, while the frame rate (FPS) is used to assess the model's inference speed.
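For reference, a condensed PyTorch sketch of the two-phase transfer-learning schedule with a multi-class focal loss is given below. `model`, `backbone`, and `loader_fn` are placeholders for the MB-DeepLab V3+ implementation and data pipeline, and the focal-loss form (γ = 2, no class weighting) is an assumption; the learning rate, batch size, and epoch counts follow the paper.

```python
import torch
import torch.nn.functional as F
from torch.optim import Adam

def focal_loss(logits, target, gamma=2.0):
    """Multi-class focal loss: down-weights well-classified pixels.
    logits: (N, C, H, W); target: (N, H, W) long tensor of class indices."""
    logp = F.log_softmax(logits, dim=1)
    logp_t = logp.gather(1, target.unsqueeze(1)).squeeze(1)  # log p of true class
    p_t = logp_t.exp()
    return (-(1 - p_t) ** gamma * logp_t).mean()

def train(model, backbone, loader_fn):
    """Phase 1: frozen backbone, 50 epochs. Phase 2: full fine-tune, 250."""
    for epochs, freeze in [(50, True), (250, False)]:
        for p in backbone.parameters():
            p.requires_grad = not freeze
        opt = Adam((p for p in model.parameters() if p.requires_grad), lr=0.005)
        for _ in range(epochs):
            for img, label in loader_fn(batch_size=16):
                opt.zero_grad()
                loss = focal_loss(model(img), label)
                loss.backward()
                opt.step()
```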
Three common semantic segmentation models were trained on the constructed dataset: the proposed model was compared with PSPNet, UNet, and DeepLab V3+ equipped with different backbone networks. Evaluation metrics are shown in Table 1, where the proposed model's MIoU and MPA reached 92.44% and 93.98%, respectively; with an inference speed of 56.45 frames per second, it demonstrated the highest recognition accuracy.
The recognition performance in real-world scenarios is shown in Figure 10c,g,l. Unscanned maps and obstacles to be avoided are marked in black, flat areas are marked in blue, and obstacle-crossing areas are marked in gray. The real-time recognition time is within 80–100 ms , which meets the planning requirements. The experimental results demonstrate that this method can accurately distinguish between flat areas, obstacle-crossing areas, and obstacle-avoidance areas, facilitating motion planning.

5.2. Validation of Planning Algorithms

We simulated the experiments in three scenarios, setting the sampling step size to 0.5 m and the obstacle-crossing penalty scale factor j to 1.
Scene 1 (Figure 10a): A 22 cm high platform and a 50 cm high black box are placed in a hallway. The space on both sides of the platform is too narrow for the robot to pass, and the target point is placed in front of it. The robot autonomously plans a motion path, successfully avoiding the black box and crossing the platform (initial solution time: 265 ms, asymptotic optimization time: 782 ms).
Scene 2 (Figure 10e): Similar to Scene 1, but the platform is placed against one wall, leaving sufficient passage on the left, while the black box is near the opposite wall. The robot initially plans a path avoiding both obstacles (initial solution time: 223 ms, asymptotic optimization time: 563 ms). When the obstacle-crossing penalty factor j is reduced to 0.5, the robot instead plans a path crossing the platform, demonstrating the influence of the penalty factor in the cost function.
Scene 3 (Figure 10g): The robot starts at the entrance of the staircase with the target point on the stairs. It successfully generates a 3D path to climb the stairs (initial solution time: 326 ms, asymptotic optimization time: 628 ms).
Experiments across these three scenarios demonstrate that the proposed planning framework can adapt to diverse obstacle-crossing scenarios, enabling obstacle-crossing planning for classic obstacles such as stairs and steps within an acceptable timeframe.

5.3. Movement Quality Analysis

We quantitatively compared autonomous planning with teleoperation by a professional operator across the three scenarios, using three key metrics:
  • Time: total movement time.
  • Max $\phi$: the maximum absolute pitch angle; the larger it is, the more likely the robot is to tip over.
  • Rot. Ang: the cumulative absolute rotation angle of the front and rear flippers.
The robot's maximum speed is limited to 0.2 m/s, with a maximum acceleration of 0.3 m/s² and a fixed flipper angular velocity of 25°/s. We performed five runs with the same initial and target positions for each scene and averaged the metrics. The results are shown in Table 2.
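For clarity, the sketch below shows how the three metrics would be computed from a logged trajectory; the array inputs are illustrative, and, as stated above, each reported value is the average over five runs per scene.

```python
import numpy as np

def movement_metrics(t, pitch, flipper_f, flipper_r):
    """Compute the evaluation metrics from one logged run.
    t           : timestamps (s)
    pitch       : body pitch angle per sample (deg)
    flipper_f/r : front/rear flipper angles per sample (deg)"""
    time_total = t[-1] - t[0]                        # Time (s)
    max_phi = np.max(np.abs(pitch))                  # Max phi (deg)
    rot_ang = (np.sum(np.abs(np.diff(flipper_f)))    # Rot. Ang (deg):
               + np.sum(np.abs(np.diff(flipper_r)))) # cumulative flipper motion
    return time_total, max_phi, rot_ang
```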
Analysis of the four motion sequences across the three scenarios reveals that the autonomous method consistently requires less time than manual operation. Climbing elevated platforms and stairs requires coordinated front and rear flipper motion, and improper control can cause the robot to sway or even tip over during ascent; the added control complexity often leads operators to reduce speed, increasing movement time. In contrast, autonomous movement achieves faster speeds when planning obstacle-crossing actions, showing that the proposed planning method is more efficient during task execution. During movement, autonomous and manual operation exhibit similar motions, resulting in comparable Max $\phi$ values. In Scene 2's obstacle-avoidance case, both values are zero because no obstacle-crossing action occurs, eliminating the need for flipper movement. In the other scenarios, both Max $\phi$ and the cumulative rotation angle for autonomous motion are consistently lower than those for manual operation: when approaching the boundary between obstacles and flat ground, where flipper angle control is required, autonomous motion avoids the redundant swinging present in manual operation, indicating superior stability, and it also eliminates the repeated flipper angle adjustments of manual operation, resulting in a smaller Max $\phi$. This indicates that autonomous motion offers superior smoothness and reduced energy consumption.

6. Conclusions

This paper introduces an autonomous planning framework tailored for articulated tracked robots navigating complex terrains with obstacle-crossing maneuvers. The framework achieves efficient autonomous navigation through semantic segmentation of the map and collaborative planning. Specifically, a method based on the MB-DeepLab V3+ network is introduced for semantic segmentation, accurately classifying terrain into flat, obstacle-crossing, and impassable areas. A simplified robot-terrain contact model enables rapid calculation of the robot's pose at map sampling points through contact point traversal, avoiding complex geometric calculations and enhancing planning efficiency while ensuring stability in complex terrain. Additionally, the FMP-RRT* global path planning algorithm is proposed, which incorporates the smoothness cost of flipper motions into the search process to achieve collaborative planning of obstacle-crossing paths and flipper motions, substantially improving the efficiency and stability of path planning. Experiments demonstrate that the MB-DeepLab V3+ semantic segmentation algorithm achieves efficient, high-precision segmentation, and that the entire framework outperforms manual control in travel time and stability while reducing flipper oscillation and improving movement smoothness. Future efforts will concentrate on optimizing the real-time performance and robustness of the algorithm to tackle increasingly complex and dynamic environments.

Author Contributions

Conceptualization, P.Z.; methodology, P.Z.; software, J.L.; validation, P.Z. and J.L.; formal analysis, P.Z.; investigation, P.Z.; resources, J.S.; data curation, P.Z.; writing—original draft preparation, P.Z.; writing—review and editing, J.L.; visualization, J.L.; supervision, J.S. and Y.F.; project administration, J.S.; funding acquisition, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset that supports the central findings of this study is directly available in this article. Additional data can be requested from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ATRs        Articulated Tracked Robots
RRT         Rapidly-exploring Random Tree
FMP-RRT*    Flipper Motion Planning Rapidly-exploring Random Tree
BAM         Boundary-Aware Module

References

  1. Li, F.; Hou, S.; Bu, C.; Qu, B. Rescue robots for the urban earthquake environment. Disaster Med. Public Health Prep. 2022, 17, e98. [Google Scholar] [CrossRef] [PubMed]
  2. Niroui, F.; Zhang, K.; Kashino, Z.; Nejat, G. Deep reinforcement learning robot for search and rescue applications: Exploration in unknown cluttered environments. IEEE Robot. Autom. Lett. 2019, 4, 610–617. [Google Scholar] [CrossRef]
  3. Putz, S.; Simon, J.S.; Hertzberg, J. Move base flex: A highly flexible navigation framework for mobile robots. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 3416–3421. [Google Scholar]
  4. Bansal, S.; Tolani, V.; Gupta, S.; Malik, J.; Tomlin, C. Combining optimal control and learning for visual navigation in novel environments. In Proceedings of the Conference on Robot Learning, London, UK, 18–20 November 2020; Volume 100, pp. 420–429. [Google Scholar]
  5. Kalogeiton, V.S.; Ioannidis, K.; Sirakoulis, G.C.; Kosmatopoulos, E.B. Real-time active SLAM and obstacle avoidance for an autonomous robot based on stereo vision. Cybern. Syst. 2019, 50, 239–260. [Google Scholar] [CrossRef]
  6. Yuan, W.; Li, Z.; Su, C. Multisensor-based navigation and control of a mobile service robot. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 2624–2634. [Google Scholar] [CrossRef]
  7. Xiao, A.; Luan, H.; Zhao, Z.; Hong, Y.; Zhao, J.; Chen, W.; Wang, J.; Meng, M.Q. Robotic autonomous trolley collection with progressive perception and nonlinear model predictive control. In Proceedings of the 2022 IEEE International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 4480–4486. [Google Scholar]
  8. Wermelinger, M.; Fankhauser, P.; Diethelm, R.; Krusi, P.; Siegwart, R.; Hutter, M. Navigation planning for legged robots in challenging terrain. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 1184–1189. [Google Scholar]
  9. Chavez-Garcia, R.O.; Guzzi, J.; Gambardella, L.M.; Giusti, A. Image Classification for Ground Traversability Estimation in Robotics; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2017; pp. 325–336. [Google Scholar]
  10. Krüsi, P.; Furgale, P.; Bosse, M.; Siegwart, R. Driving on point clouds: Motion planning, trajectory optimization, and terrain assessment in generic nonplanar environments. J. Field Robot. 2016, 34, 940–984. [Google Scholar] [CrossRef]
  11. Oleynikova, H.; Taylor, Z.; Fehr, M.; Siegwart, R.; Nieto, J. Voxblox: Incremental 3D Euclidean signed distance fields for on-board MAV planning. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 1366–1373. [Google Scholar]
  12. Mccormac, J.; Handa, A.; Davison, A.; Leutenegger, S. SemanticFusion: Dense 3D semantic mapping with convolutional neural networks. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 4628–4635. [Google Scholar]
  13. Tateno, K.; Tombari, F.; Laina, I.; Navab, N. CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6565–6574. [Google Scholar]
  14. Wellhausen, L.; Dosovitskiy, A.; Ranftl, R.; Walas, K.; Cadena, C.; Hutter, M. Where should I walk? Predicting terrain properties from images via self-supervised learning. IEEE Robot. Autom. Lett. 2019, 4, 1509–1516. [Google Scholar] [CrossRef]
  15. Colas, F.; Mahesh, S.; Pomerleau, F.; Liu, N.M.; Siegwart, R. 3D path planning and execution for search and rescue ground robots. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 722–727. [Google Scholar]
  16. Yuan, Y.; Wang, L.; Schwertfeger, S. Configuration-space flipper planning for rescue robots. In Proceedings of the 2019 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Chemnitz, Germany, 28–31 October 2019; pp. 37–42. [Google Scholar]
  17. Yuan, Y.; Xu, Q.; Schwertfeger, S. Configuration-space flipper planning on 3D terrain. In Proceedings of the 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Lausanne, Switzerland, 12–15 October 2020; pp. 318–325. [Google Scholar]
  18. Ohno, K.; Morimura, S.; Tadokoro, S.; Koyanagi, E.; Yoshida, T. Semi-autonomous control system of rescue crawler robot having flippers for getting over unknown-steps. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29–31 October 2007; pp. 3392–3397. [Google Scholar]
  19. Xu, Z.; Chen, Y.; Jian, Z.; Tan, J.; Wang, X.; Liang, B.L. Hybrid trajectory optimization for autonomous terrain traversal of articulated tracked robots. IEEE Robot. Autom. Lett. 2023, 9, 755–762. [Google Scholar] [CrossRef]
  20. Sokolov, M.; Afanasyev, I.; Klimchik, A.; Mavridis, N. HyperNEAT-based flipper control for a crawler robot motion in 3D simulation environment. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), Macau, China, 16–19 December 2017; pp. 2652–2656. [Google Scholar]
  21. Pecka, M.; Salansky, V.; Zimmermann, K.; Svoboda, T. Autonomous flipper control with safety constraints. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 2889–2894. [Google Scholar]
  22. Mitriakov, A.; Papadakis, P.; Kerdreux, J.; Garlatti, S. Reinforcement learning based, staircase negotiation learning: Simulation and transfer to reality for articulated tracked robots. IEEE Robot. Autom. Mag. 2021, 28, 10–20. [Google Scholar] [CrossRef]
  23. Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; pp. 762–778. [Google Scholar]
  24. Howard, A.; Sandler, M.; Chu, G.; Chen, L.-C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNet V3. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar]
  25. Zhang, Z.; Bao, L.; Xiang, S.; Xie, G.; Gao, R. B2CNet: A progressive change boundary-to-center refinement network for multitemporal remote sensing images change detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 11322–11338. [Google Scholar] [CrossRef]
  26. Deng, W.; Huang, K.; Chen, X.; Zhou, Z.; Shi, C.; Guo, R.; Zhang, H. Semantic RGB-D SLAM for rescue robot navigation. IEEE Access 2020, 8, 221320–221329. [Google Scholar] [CrossRef]
  27. Chen, B.; Huang, K.; Pan, H.; Ren, H.; Chen, X.; Xiao, J.; Wu, W.; Lu, H. Geometry-based flipper motion planning for articulated tracked robots traversing rough terrain in real-time. J. Field Robot. 2023, 40, 2010–2029. [Google Scholar] [CrossRef]
  28. Gammell, J.D.; Srinivasa, S.S.; Barfoot, T.D. Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 2997–3004. [Google Scholar]
  29. Chen, K.; Nemiroff, R.; Lopez, B.T. Direct LiDAR-inertial odometry: Lightweight LIO with continuous-time motion correction. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023. [Google Scholar]
Figure 1. Climber robot moving in obstacle-crossing scenarios.
Figure 2. The planning framework based on semantic segmentation and flipper motions for articulated tracked robots in obstacle-crossing terrain.
Figure 3. The MB-DeepLab V3+ architecture incorporates a MobileNet V3 encoder and BAM decoder modules within the DeepLab V3+ framework.
Figure 4. Boundary-aware module (BAM).
Figure 5. Simplified model of articulated tracked robot contact with terrain.
Figure 6. Segmented model of a horizontally placed robot in the local coordinate system.
Figure 7. Schematic diagram of rotating and then translating the robot model.
Figure 8. Articulated tracked robot: Climber.
Figure 9. Simulation environment for making datasets.
Figure 10. Global planning experiments for three scenarios.
Table 1. Comparison of different semantic segmentation algorithms.

Algorithm         Backbone              MIoU/%   MPA/%   FPS (frames/s)
PSPNet            ResNet50              82.37    92.35   16.67
PSPNet            MobileNet V2          83.51    91.88   65.89
UNet              ResNet50              79.22    91.43   10.38
UNet              VGG                   81.23    91.68    6.35
DeepLab V3+       Xception              78.98    91.26   10.10
DeepLab V3+       MobileNet V2          84.87    92.24   25.13
DeepLab V3+       MobileNet V3-Large    91.31    93.77   29.57
DeepLab V3+       MobileNet V3-Small    88.22    93.45   57.33
MB-DeepLab V3+    -                     92.44    93.98   56.45
Table 2. Performance comparison between manual operation and autonomous movement across multiple scenarios.

Scenario      Method                 Time (s)   Max ϕ (°)   Rot. Ang (°)
1             Manual operation       40.48      37.28       478.36
1             Autonomous movement    30.23      30.56       386.56
2-crossing    Manual operation       42.42      37.62       435.98
2-crossing    Autonomous movement    31.44      30.88       390.23
2-avoiding    Manual operation       28.32       0.00         0.00
2-avoiding    Autonomous movement    25.14       0.00         0.00
3             Manual operation       30.23      31.23       268.21
3             Autonomous movement    26.56      29.32       196.36

