Article

Central Dioptric Line Image-Based Visual Servoing for Nonholonomic Mobile Robot Corridor-Following and Doorway-Passing

by
Chen Zhong
1,2,
Qingjia Kong
2,3,
Ke Wang
2,3,*,
Zhe Zhang
3,
Long Cheng
3,
Sijia Liu
2,3 and
Lizhu Han
2
1
Shenyang Fire Science and Technology Research Institute of MEM, Shenyang 110034, China
2
State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
3
Zhengzhou Research Institute, Harbin Institute of Technology, Zhengzhou 450046, China
*
Author to whom correspondence should be addressed.
Actuators 2025, 14(4), 183; https://doi.org/10.3390/act14040183
Submission received: 3 March 2025 / Revised: 3 April 2025 / Accepted: 7 April 2025 / Published: 9 April 2025
(This article belongs to the Section Actuators for Robotics)

Abstract

Autonomous navigation in indoor environments demands reliable perception and control strategies for nonholonomic mobile robots operating under geometric constraints. While visual servoing offers a promising framework for such tasks, conventional approaches often rely on explicit 3D feature estimation or predefined reference trajectories, limiting their adaptability in dynamic scenarios. In this paper, we propose a novel nonholonomic mobile robot corridor-following and doorway-passing method based on image-based visual servoing (IBVS) by using a single dioptric camera. Based on the unifying central spherical projection model, we present the projection mechanism of 3D lines and the properties of line images for two 3D parallel lines under different robot poses. In the normalized image plane, we define a triangle enclosed by two polar lines related to the line image conic features, and adopt a polar representation for the visual features, which naturally becomes zero when the robot follows the corridor middle line. The IBVS control law for the corridor-following task does not need to pre-calculate expected visual features or estimate the 3D information of image features, and is extended to doorway-passing by simply introducing an upper door frame to modify the visual features for the control law. Simulations including straight corridor-following, anti-noise performance, convergence of the control law, doorway-passing, and closed-loop corridor-following are conducted. We develop a ROS-based IBVS system on our real robot platform; the experimental results validate that the proposed method is suitable for the autonomous indoor visual navigation control task for a nonholonomic mobile robot equipped with a single dioptric camera.

1. Introduction

Navigating autonomously using only vision in urban or indoor environments is a challenging task for intelligent robots [1,2]. In the field of urban navigation, a typical Automatic Drive System for robots controls their low-level motion adaptively based on available visual information such as curves [3] or lanes [4]. For an unmanned aerial vehicle (UAV), visual information can be applied to control automatic landing based on ground lines [5,6]. In some smart home applications, researchers have developed smart wheelchairs for helping the elderly or disabled individuals, and proposed corresponding adaptive vision-guided corridor-following and doorway-passing methods as fundamental capabilities of nonholonomic mobile robots [7].
Vision-based control is also called visual servoing, which was originally developed for automatic object manipulation by using a robotic arm with an eye-in-hand or eye-to-hand configuration [8]. Generally, two types of control methods, position-based visual servoing (PBVS) and image-based visual servoing (IBVS) [9], can be used to design a mobile robot controller according to the feedback signals of the closed-loop systems. PBVS methods generally require estimating the robot’s pose or position from a vision system, and since the error and input signals are spatial poses, the controller is relatively simple and singularities can be avoided [10]. Trajectory tracking represents a compelling research direction, wherein a general framework based on PBVS is commonly adopted to follow dynamically evolving paths. The research [11] presents an online robust adaptive dynamic programming algorithm (RADPA) to control the robot with an omnidirectional vision system for the worst-case disturbance, and a neural network is used to approximate a zero-sum game in the learning scheme, which must be trained online. Based on an omnidirectional onboard camera, the research [12] proposed an adaptive PBVS method in which multiple sensors are used to estimate the robot’s pose. Another study [13] proposed the extended POSIT method to estimate robot pose and camera parameters online; however, this method is limited to environments where a camera must be pre-fixed to the ceiling. The IBVS method expresses the control objective and control law directly in the parameter domain of image features, and is more robust than PBVS when dealing with uncertainties and disturbances introduced by the models of the robot and the camera [14]. In the recent application [15], researchers employed the principle of virtual work to transform the propagation of errors in the image plane into the transmission of virtual forces, and proposed a novel IBVS method designed for robotic manipulators. This approach enables a more intuitive formulation of physical constraints on the robot. It eliminates the need for Jacobian matrix inversion and effectively avoids singularities arising from matrix inversion. Another study [16] utilized both monocular and stereo cameras to propose a one-time alignment method that is independent of depth information for rapid fire source alignment in fire scenarios. The IBVS method plays a significant role in further enhancing the alignment accuracy of the stereo camera. Therefore, some studies focus on robust control concerning the calibration issues of onboard monocular cameras [17] or overhead fixed cameras [18,19]. A common characteristic of these methods is their reliance on point features to facilitate the development of robust IBVS controllers.
As a practical extension of path-following in navigation control, vision-based corridor-following and doorway-passing for nonholonomic mobile robots are receiving increasing attention [3]. The research [20] proposed a novel doorway-passing method that steers a robot through the middle of a gate by defining the control laws according to different coordinates of door planar geometry. A wheelchair robot [1] is developed, which requires the identification of the vanishing point feature from corridor images for the corridor-following task. This method relies on using the projective properties of a unidirectional camera during robot ego-motion, and applies polar parameters to the IBVS controller, which has to be manually assisted. Studies [21,22] proposed IBVS navigation methods for Pepper indoor navigation by using appearance features with only line segments or both points and lines. The advantage is that a 3D reconstruction process is not required; however, the robot has to keep key images in a predefined database to initialize the control system. Chen [23] proposed a novel collision-free spatial path planning method for IBVS, demonstrating the existence of a unique path node (PN) on the line connecting the target and the camera, where the orientation error is zero. Based on this finding, intermediate points along the image path are generated as sub-goals for visual servo control, enabling perfect obstacle avoidance. Dong [24] developed a new Kalman-based depth estimation model and defined the state equations according to the number of state variables, thereby achieving adaptive tuning of the servo gain in image-based visual servo control. To further enable wheelchairs to pass through doorways safely [7], recently, deep learning frameworks were introduced into autonomous corridor-following. The research [25] developed an end-to-end CNN to recognize the entire corridor image and generate the control output. The proposed approach is claimed to be more robust compared with traditional methods. Building on this foundation, GRIEF-DE [26] enhances the robustness of IBVS under dynamic environmental changes by optimizing feature descriptors through evolutionary algorithms, effectively mitigating control failures caused by feature point drift. Additionally, some researchers [27] have integrated IBVS with YOLOv5, employing a gimbal camera-based IBVS servo system to improve UAV target tracking and navigation capabilities in GPS-denied environments. Tsapin [28] incorporated Squeeze-and-Excitation (SE) blocks into MobileNetV2, ResNet50, and DenseNet121 architectures for robust image processing in challenging environments. Their method achieved faster response times while maintaining the same level of accuracy as the traditional HOG-BoVW-BPNN approach. A more recent study [29] proposed a visual servoing algorithm that integrates deep learning, using a CNN to directly predict the required velocity commands for the robot. This approach eliminates the reliance on 3D pose estimation of the object, which is typically required in conventional PBVS methods. As a result, the method enhances the robustness of visual servoing tasks, particularly in environments lacking prominent features. Nevertheless, existing CNN-based approaches remain highly dependent on extensive training data and involve considerable computational overhead, thereby limiting their practicality for rapid deployment in varying environments.
The corridor-following methods mentioned above are mostly based on a single-directional camera and inevitably suffer from a restricted field of view (FOV). Therefore, a path planning approach can be employed to ensure that target features remain within the FOV during robot motion [30]. For resource-constrained mobile robots like Pepper, scholars have developed a topological navigation system [31] based on image memory and the teach-and-repeat paradigm. This system integrates two IBVS controllers to enhance navigation efficiency and accuracy. However, an alternative choice is to directly increase the FOV of the onboard camera by using omnidirectional vision. Based on the ground line, which is a common feature in man-made environments, previous research [32] proposed a catadioptric vision-based IBVS method for a nonholonomic robot. This method achieves robot control through a single 3D line, although it necessitates prior specification of the desired image feature based on the three-dimensional geometry of the target line. In the framework [33], the interaction matrix was further modified according to the quadratic form of a conic curve image projected from a target 3D line based on the single-viewpoint model [34]. Using omnidirectional vision, the investigation [35] proposed a real-time generalized Voronoi diagram to extract a free-space skeletonization path for nonholonomic robot IBVS navigation. This approach detects two points on the image skeletonization with a fixed distance to approximate a conic curve; therefore, a special camera configuration with respect to the robot has to be considered to satisfy the convergency of the control law.
In this paper, inspired by wheelchair IBVS research [7], we propose a novel 3D line-based IBVS method for both corridor-following and doorway-passing of nonholonomic robots equipped with a catadioptric camera. To our knowledge, purely applying a single catadioptric vision system to IBVS indoor navigation control has not been reported for both corridor-following and doorway-passing in the literature before. In our approach, we adopt the unified spherical projection model, which projects a 3D line into a conic curve in the image domain. By analyzing the properties of the intersection point of polar lines estimated under different robot poses, we design an image triangle with at least two polar lines for our polar-based control law, which can integrate two navigation control tasks together. Compared with the latest analogous research [7], the catadioptric camera we use has a much larger FOV than a general directional camera, which will reduce the possibility of losing the visual feature. In addition, we define the visual feature based on polar coordinates rather than Cartesian coordinates, and the polar feature representation naturally constrains the parameter space and overcomes the numerical issue in the framework [7]. Finally, it is unnecessary for our approach to estimate the spatial trajectory when performing the doorway-passing task. Instead, we integrate this task into our IBVS framework by using door frame image features without any estimation of spatial trajectory. Other similar IBVS methods based on omnidirectional vision are described in the frameworks [33,34], which depend on a single horizontal 3D line. However, these methods need to specify the desired feature image and are only suitable for the corridor-following task. To sum up, the novel contributions of this paper can be detailed as follows:
  • A single dioptric vision-based indoor navigation control for nonholonomic robots is proposed, using line images of 3D environmental features for both corridor-following and doorway-passing. The approach is purely based on an IBVS scheme, which does not require the estimation of 3D geometric information in the control law.
  • For the IBVS corridor-following task, based on the image features of two horizontal ground lines under a unifying spherical projection, we define a polar-line-based triangle in which a median line connecting the vertex and the middle point of the triangle base is chosen as the visual feature for the IBVS controller. The polar parameters of the median line naturally go to zero in the normalized image space as the robot approaches the middle of the corridor, which removes the need to pre-calculate the expected image feature.
  • Compared with the work that establishes several coordinate frames according to the geometry around the door based on a directional camera [20], our approach simply extends our IBVS scheme for corridor-following to the doorway-passing task by using two vertical door frames and one upper door frame, and the polar line of the upper door frame line image is defined as the triangle base instead, which makes the robot approach the middle of the door more quickly than by using only the two vertical door frames.
The paper is organized as follows. In Section 2, we investigate the image line properties of 3D parallel lines using the unified central projection model of a catadioptric camera. In Section 3, we introduce our nonholonomic mobile robot model, the polar IBVS control law, and the convergence analysis based on the proposed line image features. In Section 4, some simulation results are presented to validate the proposed method for nonholonomic mobile robot corridor-following and doorway-passing tasks. Finally, we conduct two types of experiments to further validate the proposed IBVS method on our real mobile robot platform.

2. Central Catadioptric Line Images

Figure 1 demonstrates the catadioptric camera and its corresponding images captured in a corridor environment. The image lines corresponding to 3D straight lines on the floor are seriously distorted by the camera fisheye lens. With a unified projection model [36], it is possible to approximate its imaging process and utilize some specific properties of the image line projected from the 3D line, which are desirable for robot IBVS-based corridor-following. Based on the conic fitting algorithm [37] and a modified RHT method on a sphere [38], the projective curves as well as their polar lines can be extracted, as shown in the right picture of Figure 1.

2.1. Line Image of Single 3D Line

As shown in Figure 2, a unified central projection model of a catadioptric camera has two frames, O and  O c , where O defines the center of a virtual unit sphere, and  O c  represents a new projection center with coordinates  [ 0 , 0 , ξ ] T  in relation to O, where  ξ  represents the translation along the Z-axis between these two frames. According to the unified central projection model, a 3D straight line L will be sequentially projected into the unit sphere in relation to O, and then into a conic curve  Ω ¯  in relation to  O c .
Let $L_i$ be described by a pair of vectors defined as
\[ L_i = \left\{ (\mathbf{u}_i, \mathbf{n}_i) \;\middle|\; \|\mathbf{u}_i\| = 1,\ \|\mathbf{n}_i\| = 1,\ \mathbf{n}_i \cdot \mathbf{u}_i = 0 \right\}, \quad i = 1, \dots, m \]
where $\mathbf{u}_i = (u_x, u_y, u_z)^T$ denotes the normalized orientation of $L_i$, and $\mathbf{n}_i = (n_x, n_y, n_z)^T$ represents the unit normal vector of the plane $\Pi_i$, which is defined by $\mathbf{u}_i$ and the center $O$, with $\mathbf{n}_i \cdot \mathbf{u}_i = 0$. Therefore, a point $\mathbf{x} = (x, y, z)^T$ lying on $L_i$ is orthogonal to $\mathbf{n}_i$:
\[ n_x x + n_y y + n_z z = 0 \]
Any line $L_i$ on plane $\Pi_i$ projected toward the sphere center $O$ intersects the virtual unit sphere at an arc $S_i$. It is then projected, in relation to $O_c$, into a conic curve $\bar{\Omega}_i$ on the normalized plane $\Pi$, and any point $\bar{\mathbf{x}}$ of $\bar{\Omega}_i$ verifies the following:
\[ \bar{\mathbf{x}}^T \bar{\Omega}_i \bar{\mathbf{x}} = 0 \]
with
\[ \bar{\Omega}_i = \begin{bmatrix} \alpha n_x^2 - n_z^2 \xi^2 & \alpha n_x n_y & n_x n_z \\ \alpha n_x n_y & \alpha n_y^2 - n_z^2 \xi^2 & n_y n_z \\ n_x n_z & n_y n_z & n_z^2 \end{bmatrix} \]
which describes the relationship between $\bar{\Omega}_i$ and $\mathbf{n}_i$ of $L_i$, given $\alpha = 1 - \xi^2$.
Finally, $\bar{\Omega}_i$ is mapped into its image curve $\hat{\Omega}_i$ in the image plane by using the affine transformation $H_c$:
\[ \hat{\Omega}_i = H_c^{-T}\, \bar{\Omega}_i\, H_c^{-1} \]
where $H_c$ is defined as follows:
\[ H_c = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \]
with the parameters $f_x$, $f_y$, $c_x$, and $c_y$ of the camera, which can be estimated by using the camera calibration procedure [36].
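To make the projection chain above concrete, the following short Python sketch builds the conic of Equation (4) from a plane normal and numerically checks that a projected point of the line lies on it. It is a minimal sketch assuming the reconstructed forms of Equations (4) and (5); the function and variable names are illustrative only.

```python
import numpy as np

def line_conic(n, xi):
    """Conic of a 3D line's image in the normalized plane (Equation (4))."""
    nx, ny, nz = n / np.linalg.norm(n)
    a = 1.0 - xi ** 2                                   # alpha = 1 - xi^2
    return np.array([[a * nx * nx - nz * nz * xi * xi, a * nx * ny, nx * nz],
                     [a * nx * ny, a * ny * ny - nz * nz * xi * xi, ny * nz],
                     [nx * nz, ny * nz, nz * nz]])

def image_conic(omega_bar, fx, fy, cx, cy):
    """Map the normalized-plane conic into pixel coordinates (Equation (5))."""
    Hc_inv = np.linalg.inv(np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]]))
    return Hc_inv.T @ omega_bar @ Hc_inv

def project_point(X, xi):
    """Unified model: central projection onto the sphere, then from O_c = (0, 0, xi)."""
    Xs = X / np.linalg.norm(X)
    return np.array([Xs[0] / (Xs[2] + xi), Xs[1] / (Xs[2] + xi), 1.0])

if __name__ == "__main__":
    xi = 0.8
    n = np.array([0.3, -0.5, 0.8]); n /= np.linalg.norm(n)       # normal of the line's plane
    # any vector orthogonal to n lies in that plane, i.e. on the 3D line's interpretation plane
    X = np.cross(n, [0.0, 0.0, 1.0]) + 0.4 * np.cross(n, np.cross(n, [0.0, 0.0, 1.0]))
    x_bar = project_point(X, xi)
    print(x_bar @ line_conic(n, xi) @ x_bar)                     # ~0: the projected point lies on the conic
```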

2.2. Line Images of Two 3D Parallel Lines

Let $\hat{O}$ be the pole of $\hat{\Omega}_i$; then, its corresponding polar line $\hat{\beta}_i$ can be defined as follows:
\[ \hat{\beta}_i = \hat{\Omega}_i \cdot \hat{O} \]
Choose the projected center $\bar{O} = (0, 0, 1)^T$ in $\Pi$ as the pole of $\bar{\Omega}_i$; from Equation (4), the polar line $\bar{\beta}_i$ of $\bar{O}$ in relation to $\bar{\Omega}_i$ is
\[ \bar{\beta}_i = \bar{\Omega}_i \cdot \bar{O} \]
which gives $\bar{\beta}_i = (n_{x,i}, n_{y,i}, n_{z,i})^T$ after some formula simplification, so that a point $\bar{\mathbf{x}} = (\bar{x}, \bar{y}, 1)^T$ on $\bar{\beta}_i$ satisfies the following equation:
\[ n_{x,i}\,\bar{x} + n_{y,i}\,\bar{y} + n_{z,i} = 0 \]
The polar line $\bar{\beta}_i$ is actually the perspective projection of $L_i$ onto $\Pi$. In this way, the projection of a 3D line in a catadioptric camera system can be represented by a simple polar line. This projection property is used in our IBVS control law for robot corridor-following.
If two 3D parallel lines $L_1$ and $L_2$ are present in the camera FOV, as shown in Figure 2, the intersection points $p_1$ and $p_2$ of the two corresponding conic curves $\hat{\Omega}_1$ and $\hat{\Omega}_2$ and the image center $\hat{O}$ should lie on the same line $\hat{l}$. Since $H_c$ preserves incidence, the three lines $\hat{\beta}_1$, $\hat{\beta}_2$, and $\hat{l}$ will intersect at the same point $\hat{D} = H_c \bar{D}$, and accordingly, $\bar{\beta}_1$, $\bar{\beta}_2$, and $\bar{l}$ will also intersect at the same point $\bar{D}$ in $\Pi$.
$\bar{D}$ can be calculated by using Equation (9) for $\bar{\beta}_1$ and $\bar{\beta}_2$:
\[ \bar{D} = \begin{bmatrix} \bar{x}_0 \\ \bar{y}_0 \end{bmatrix} = \begin{bmatrix} \dfrac{n_{z,2} n_{y,1} - n_{z,1} n_{y,2}}{n_{x,1} n_{y,2} - n_{x,2} n_{y,1}} \\[3mm] \dfrac{n_{z,2} n_{x,1} - n_{z,1} n_{x,2}}{n_{x,2} n_{y,1} - n_{x,1} n_{y,2}} \end{bmatrix} \]
Since $L_1$ is parallel to $L_2$, it can be proved that $n_{x,1} = n_{x,2}$, and
\[ \bar{y}_0 = \frac{n_{z,2} - n_{z,1}}{n_{y,1} - n_{y,2}} = \tan\gamma \]
with the camera pitch angle $\gamma$ provided.
Observation reveals that $\bar{y}_0$ is only correlated with the camera pitch angle $\gamma$ and is fixed if $\gamma$ is given. Therefore, $\bar{D}$ ($\hat{D}$) moves along a horizontal line passing through $\bar{y}_0$ ($\hat{y}_0$). As shown in Figure 3, we obtain the following three properties:
  • $\bar{\beta}_1$ and $\bar{\beta}_2$ intersect at $\bar{D}$, if $0 < \gamma < \pi/2$.
  • The polar lines $\hat{\beta}_1$ and $\hat{\beta}_2$ are two vertical parallel lines, and $\bar{D}$ goes to infinity, if $\gamma = \pi/2$ or $\gamma = -\pi/2$.
  • $\bar{\Omega}_1$ and $\bar{\Omega}_2$ degenerate into two straight lines so that $\bar{D} = \bar{O}$, if $\gamma = 0$.
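These relations can be checked numerically. The sketch below builds the plane normals of two parallel floor lines for a camera pitched by $\gamma$, takes the polar lines $\bar{\beta}_i \propto (n_{x,i}, n_{y,i}, n_{z,i})^T$, and intersects them as in Equation (10). The frame layout (x lateral, y along the corridor, z up, camera pitched down by $\gamma$) is an assumption made for the check, not taken from the paper, so the sign of the resulting $\bar{D}_y$ may differ while its magnitude equals $\tan\gamma$.

```python
import numpy as np

def plane_normal(point_c, direction_c):
    """Unit normal of the plane spanned by a 3D line and the sphere center O."""
    n = np.cross(point_c, direction_c)
    return n / np.linalg.norm(n)

def floor_line_normals(gamma, h, offsets):
    """Normals n_i of two floor lines at the given lateral offsets, seen by a camera
    at height h pitched down by gamma (assumed axis convention, see lead-in)."""
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, -np.sin(gamma), -np.cos(gamma)],
                  [0.0,  np.cos(gamma), -np.sin(gamma)]])   # world -> camera rotation
    u_c = R @ np.array([0.0, 1.0, 0.0])                     # lines run along the corridor (world y)
    return [plane_normal(R @ np.array([d, 0.0, -h]), u_c) for d in offsets]

if __name__ == "__main__":
    gamma, h = np.deg2rad(45.0), 1.0
    n1, n2 = floor_line_normals(gamma, h, offsets=(-2.0, 2.0))
    print(np.isclose(n1[0], n2[0]))     # for this symmetric placement the unit normals share n_x
    D = np.cross(n1, n2)                # polar lines are ~ n_i, so D_bar = beta_1 x beta_2
    D = D[:2] / D[2]
    print(D, np.tan(gamma))             # |D_y| = tan(gamma); D_x depends on the lateral pose
```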
In Figure 4, we present the simulated line images and corresponding polar lines calculated from two horizontal 3D parallel lines $L_1$ and $L_2$ when the robot is placed in the corridor with different poses. The distance between $L_1$ and $L_2$ is $L$. The first three columns of Figure 4 show the two projected line images represented by (blue and red) conic curves, the (blue and red) polar lines in relation to the pole $\hat{O}$ of the two curves, and the intersection $\hat{D}$ of the two polar lines in the normalized image plane when the robot rotates in three directions $\theta_w \in \{-30°, 0°, 30°\}$ at different positions in the corridor. In the top row, the robot is situated in the middle of the corridor; as it rotates from $-30°$ to $30°$, $\hat{D}$ translates from left to right, with the $y$-coordinate of $\hat{D}$ remaining unchanged, and each image in the bottom row shows a similar behavior. This confirms the motion property of $\hat{D}$ derived above. Another point is that, when the robot is situated in the middle of $L_1$ and $L_2$ with $\theta_w = 0°$, the left and right conics and polar lines are symmetric with respect to the line $\hat{O}\hat{D}$, respectively, while this is not the case in the bottom row when $\theta_w = 0°$, since there is a slight offset of $0.2L$ from the middle line of the corridor.
In Figure 5, a simulated robot with different poses is placed in front of two vertical parallel lines $L_1$ and $L_2$, which often happens during robot indoor navigation when it is going through a doorway. In this case, $\bar{D} = (0, \tan\gamma, 1)^T$, which means that $\hat{D}$ will not change as the robot pose varies. However, the slope of each polar line and the angle between the two polar lines may change accordingly. A similar symmetry phenomenon can be observed when the robot's orientation is $\theta_w = 0°$.

3. IBVS for Corridor-Following

The IBVS control approach is applied to a nonholonomic mobile robot equipped with an omnidirectional camera, and the involved frames, relevant variables, and constants are illustrated in Figure 6. Our IBVS method is used to control the robot to move towards the middle of the corridor and hold an orientation parallel to the side wall by only using the image features from two corresponding horizontal 3D lines, and additionally guide the robot to pass through the doorway with a little visual feature modification within a unified control scheme.

3.1. Modeling of Mobile Robot

The camera pitch angle satisfies $0 < \gamma < \pi/2$. The camera center is at $(w, 0, l)^T$ with respect to the robot center. The width of the corridor is $L$, and the height of the camera is $h$. The transformation matrix $T$ between the camera and robot frames is calculated as
\[ T = \begin{bmatrix} 0 & 1 & 0 & w \\ \sin\gamma & 0 & \cos\gamma & 0 \\ \cos\gamma & 0 & \sin\gamma & l \end{bmatrix} \]
The robot pose is represented by $X_w = (x_w, y_w, \theta_w)^T$, where $(x_w, y_w)^T$ denotes the robot location, and $\theta_w \in (-\pi, \pi]$ represents the robot orientation.
Let the control variable be $\mathbf{u} = (v, \omega)^T$; then, the nonholonomic robot kinematics can be modeled as
\[ \dot{X}_w = \begin{bmatrix} \dot{x}_w \\ \dot{y}_w \\ \dot{\theta}_w \end{bmatrix} = \begin{bmatrix} \sin\theta_w \\ \cos\theta_w \\ 0 \end{bmatrix} v + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \omega \]
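For reference, a minimal Euler integration of this model is sketched below, assuming the reconstructed form of Equation (13), in which $\theta_w$ is measured from the corridor ($y$) direction; the function name and step size are illustrative.

```python
import numpy as np

def unicycle_step(pose, v, omega, dt):
    """One Euler step of the nonholonomic model in Equation (13); theta_w is measured
    from the corridor (y) axis, so x_dot = v*sin(theta_w) and y_dot = v*cos(theta_w)."""
    x, y, th = pose
    return np.array([x + v * np.sin(th) * dt,
                     y + v * np.cos(th) * dt,
                     th + omega * dt])

# e.g. pose = unicycle_step(np.array([1.0, 0.0, 0.0]), v=0.2, omega=0.1, dt=0.1)
```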

3.2. Visual Features for Corridor Horizon Lines

$\bar{D}$, the intersection of the two polar lines $\bar{\beta}_1$ and $\bar{\beta}_2$, is closely related to the robot pose. As shown in Figure 7, $\bar{\beta}_1$, $\bar{\beta}_2$, and a horizontal (red) line segment $\bar{l}$ passing through $\bar{O}$ are chosen as the three edges of a triangle in $\Pi$. Let $\bar{\beta}_0$ be the line segment $\bar{D}X_f$ passing through $\bar{D}$, and $X_f = (x_f, y_f)^T$ be the middle point of $\bar{l}$.
Rewrite Equation (9) for each $\bar{\beta}_i$ in a polar coordinate form:
\[ \cos\theta_i\, \bar{x} + \sin\theta_i\, \bar{y} + \rho_i = 0 \]
where $i \in \{1, 2\}$.
We then obtain the interior angles $\theta_l$ and $\theta_r$ of the triangle enclosed by the two polar lines $\bar{\beta}_1$ and $\bar{\beta}_2$. The polar angle $\theta_m$ can be calculated as follows:
\[ \theta_m = \arctan\!\left[ \frac{1}{2}\left( \cot\theta_l - \cot\theta_r \right) \right] \]
Define the visual feature for IBVS as $\mathbf{s} = (\theta_m, \rho_m)^T$. $x_f$ can be calculated from the intersection with $\bar{\beta}_0$, since $n_{x,1} = n_{x,2}$:
\[ x_f = -\frac{n_{z,1} + n_{z,2}}{2 n_{x,1}} \]
The polar radius $\rho_m$ for $\bar{D}X_f$ can be represented as
\[ \rho_m = x_f \cos\theta_m + y_f \sin\theta_m \]
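One possible realization of this feature computation is sketched below; the sign and parameterization conventions here are assumptions and may differ from the exact polar form used above, but both components vanish when the robot is centered and aligned, which is the property the control law relies on.

```python
import numpy as np

def corridor_features(n1, n2):
    """Median-line feature s = (theta_m, rho_m) from the two polar lines
    beta_i ~ (n_x,i, n_y,i, n_z,i)^T: a sketch of Equations (10), (16), and (17)."""
    D = np.cross(n1, n2)                          # apex D_bar of the triangle (homogeneous)
    D = D[:2] / D[2]
    x_f = -(n1[2] + n2[2]) / (2.0 * n1[0])        # midpoint of the base on the line y = 0
    X_f = np.array([x_f, 0.0])
    d = D - X_f                                   # direction of the median line D_bar - X_f
    theta_m = np.arctan2(d[0], d[1])              # angle of the median from the vertical
    rho_m = abs(x_f * D[1]) / np.linalg.norm(d)   # distance from O_bar to the median line
    return theta_m, rho_m

# when the robot is on the corridor center line and aligned, D_bar = (0, tan(gamma)),
# x_f = 0, and the feature (theta_m, rho_m) is exactly (0, 0), as used by the control law.
```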

3.3. Visual Features for Doorway Vertical Lines

In Figure 5, $\bar{D}$ is constant regardless of robot pose variation, and the chosen feature $\mathbf{s} = (\theta_m, \rho_m)^T$ degenerates into a single feature:
\[ x_f = \bar{D}_y \tan\theta_m = \tan\gamma\, \tan\theta_m \]
Therefore, as shown in Figure 8, besides the two polar lines $\bar{\beta}_1$ and $\bar{\beta}_2$ of the two vertical door frames, we introduce a third image conic $\bar{\Omega}_3$ projected from the upper door frame, and let the middle point $X_f$ lie on the polar line $\bar{\beta}_3$ of $\bar{\Omega}_3$; then, the polar radius $\rho_m$ for $\bar{\beta}_3$ can be calculated by
\[ \rho_m = x_f \sin\theta_m + y_f \cos\theta_m \]

3.4. IBVS Control Law

Given the median line $\bar{D}X_f$ and the polar line $\bar{\beta}_3$ in $\Pi$, the interaction matrix $L$ can be derived and simplified as
\[ L = \begin{bmatrix} A & B \end{bmatrix} \]
where
\[ A = \frac{1}{h} \begin{bmatrix} \lambda_\theta \cos\theta_m & \lambda_\theta \sin\theta_m & \lambda_\theta \rho_m \\ \lambda_\rho \cos\theta_m & \lambda_\rho \sin\theta_m & \lambda_\rho \rho_m \end{bmatrix} \]
\[ B = \begin{bmatrix} \rho_m \cos\theta_m & \rho_m \sin\theta_m & 1 \\ (\rho_m^2 + 1)\sin\theta_m & (\rho_m^2 + 1)\cos\theta_m & 0 \end{bmatrix} \]
with $\lambda_\theta = \dfrac{(\rho_m^2 + 1)\sin\theta_m \sin\gamma}{\rho_m}$ and $\lambda_\rho = (\rho_m^2 + 1)\cos\theta_m \sin\gamma$ [32].
In this way,  L  can be employed universally for horizontal or vertical lines.
The error function $\mathbf{e}$ based on the visual features can be defined as follows:
\[ \mathbf{e} = \mathbf{s} - \mathbf{s}^* \]
where $\mathbf{s}^*$ is the desired value of the visual feature $\mathbf{s}$.
Since the target 3D lines remain static as the camera moves, we may define the visual feature dynamics as
\[ \dot{\mathbf{s}} = L\, \mathbf{c} \]
where $L$ links the variation in the visual feature $\mathbf{s}$ and the camera velocity $\mathbf{c} = (\mathbf{v}_c, \boldsymbol{\omega}_c)^T$. The actual robot velocity $\mathbf{u} = (v, \omega)^T$, expressed in the camera frame, can be written as
\[ \mathbf{c} = T_{RC}\, \mathbf{u} \]
where $T_{RC}$ represents the transformation from the robot frame to the camera frame:
\[ T_{RC} = \begin{bmatrix} 0 & \sin\gamma & \cos\gamma & 0 & 0 & 0 \\ l & 0 & w & 0 & \cos\gamma & \sin\gamma \end{bmatrix}^T \]
By injecting Equations (23) and (24) into Equation (22), we obtain
\[ \dot{\mathbf{s}} = J_v v + J_\omega \omega \]
where
\[ J_v = \frac{1}{h} \begin{bmatrix} \lambda_\theta \sin\theta_m \sin\gamma + \lambda_\theta \rho_m \cos\gamma \\ \lambda_\rho \sin\theta_m \sin\gamma + \lambda_\rho \rho_m \cos\gamma \end{bmatrix} \]
and
\[ J_\omega = \begin{bmatrix} \lambda_\theta \cos\theta_m \frac{l}{h} + \lambda_\theta \rho_m \frac{w}{h} + \rho_m \sin\theta_m \cos\gamma + \sin\gamma \\ \lambda_\rho \cos\theta_m \frac{l}{h} + \lambda_\rho \rho_m \frac{w}{h} + (\rho_m^2 + 1)\cos\theta_m \cos\gamma \end{bmatrix} \]
If an exponential decay of $\mathbf{e}$ is required, i.e., $\dot{\mathbf{e}} = -\lambda \mathbf{e}$, Equation (25) can be rewritten as
\[ -\lambda \mathbf{e} = J_v v + J_\omega \omega \]
Combining Equations (21) and (27), and assuming the robot moves at a constant driving speed $v = v^*$, we then obtain the IBVS control law for both corridor-following and doorway approaching based on monocular omnidirectional vision:
\[ \omega = -J_\omega^{+} \left( \lambda \mathbf{e} + J_v v^* \right) \]
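A compact sketch of the resulting controller is given below. It uses the reconstructed $J_v$ and $J_\omega$ of Equations (26) and (27) and the limit value $\lambda_\theta \to \cos\gamma$ from Section 3.5 as a numerical guard near the goal; the individual signs follow the equations as printed above and are assumptions wherever the original typesetting was ambiguous.

```python
import numpy as np

def ibvs_omega(theta_m, rho_m, gamma, h, w, l, lam=1.0, v_star=0.2):
    """Angular velocity command of Equation (28): omega = -pinv(J_w)(lam * e + J_v * v*)."""
    if abs(rho_m) > 1e-9:
        lam_t = (rho_m**2 + 1.0) * np.sin(theta_m) * np.sin(gamma) / rho_m
    else:
        lam_t = np.cos(gamma)                      # limit value near the corridor center (Eq. (33))
    lam_r = (rho_m**2 + 1.0) * np.cos(theta_m) * np.sin(gamma)

    e = np.array([theta_m, rho_m])                 # s* = (0, 0) for corridor-following
    J_v = (1.0 / h) * np.array([
        lam_t * np.sin(theta_m) * np.sin(gamma) + lam_t * rho_m * np.cos(gamma),
        lam_r * np.sin(theta_m) * np.sin(gamma) + lam_r * rho_m * np.cos(gamma)])
    J_w = np.array([
        lam_t * np.cos(theta_m) * l / h + lam_t * rho_m * w / h
        + rho_m * np.sin(theta_m) * np.cos(gamma) + np.sin(gamma),
        lam_r * np.cos(theta_m) * l / h + lam_r * rho_m * w / h
        + (rho_m**2 + 1.0) * np.cos(theta_m) * np.cos(gamma)])

    # the pseudo-inverse of the 2x1 Jacobian J_w reduces to J_w^T / (J_w^T J_w)
    return -float(J_w @ (lam * e + J_v * v_star) / (J_w @ J_w))
```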

3.5. Convergence Analysis

In this section, we present the stability analysis of the closed-loop visual servo system. Considering the convergence of the control law for corridor-following, we define the Lyapunov function $\mathcal{L} = \frac{1}{2}\|\mathbf{e}(t)\|^2$, and its corresponding derivative is
\[ \dot{\mathcal{L}} = \mathbf{e}^T \dot{\mathbf{s}} \]
From Equations (25) and (28), Equation (29) is rewritten as
\[ \dot{\mathcal{L}} = \mathbf{e}^T \left( J_v v^* - J_\omega J_\omega^{+} \left( \lambda \mathbf{e} + J_v v^* \right) \right) \]
Simplifying Equation (30), we have
\[ \dot{\mathcal{L}} = -\lambda\, \mathbf{e}^T J_\omega J_\omega^{+} \mathbf{e} + \mathbf{e}^T \left( J_v - J_\omega J_\omega^{+} J_v \right) v^* \]
Therefore, the sufficient condition for the local asymptotic stability of such a nonlinear system is $\dot{\mathcal{L}} < 0$ when $\mathbf{s} \neq \mathbf{s}^*$:
\[ \mathbf{e}^T \left( J_v - J_\omega J_\omega^{+} J_v \right) v^* < \lambda\, \mathbf{e}^T J_\omega J_\omega^{+} \mathbf{e} \]

3.5.1. Corridor-Following Framework

For a corridor-following task, as the robot approaches the corridor center line, $\mathbf{s} \to \mathbf{s}^*$, then $x_f \to 0$, $\theta_m \to 0$, and $\rho_m \to 0$. According to the Law of Sines and Equation (11), $\lim_{\rho_m \to 0,\, \theta_m \to 0} \frac{\sin\theta_m}{\rho_m} = \cot\gamma$, and we then obtain
\[ \lim_{\mathbf{s} \to \mathbf{s}^*} \lambda_\theta = \cos\gamma \]
and
\[ \lim_{\mathbf{s} \to \mathbf{s}^*} \lambda_\rho = \sin\gamma \]
When $\mathbf{s} \to \mathbf{s}^*$, $\hat{J}_\omega$ can be estimated by
\[ \lim_{\mathbf{s} \to \mathbf{s}^*} \hat{J}_\omega = K_1 \cdot Y \]
where $K_1 = \begin{bmatrix} 1 & l/h \\ l/h & 1 \end{bmatrix}$ and $Y = (\sin\gamma, \cos\gamma)^T$.
Since $K_1 > 0$ and $\mathrm{Ker}(J_\omega^T) = \mathrm{Ker}(J_\omega^{+})$, supposing $\mathbf{e} = (\delta_v, \delta_\omega)^T \neq 0$, then
\[ \lim_{\mathbf{s} \to \mathbf{s}^*} J_\omega^T \mathbf{e} = \|\mathbf{e}\| \sin\!\left( \gamma - \arctan\frac{l}{h} + \arctan\frac{\delta_\omega}{\delta_v} \right) \]
Suppose that the robot motion curvature $c(t) = \frac{\omega}{v}$ does not change much as the robot approaches the corridor center line; this leads to $\lim_{\mathbf{s} \to \mathbf{s}^*} \dot{c}(t) = 0$, and subsequently $\frac{\delta_\omega}{\delta_v} = \frac{\omega}{v}$.
Therefore, $\mathbf{e} \neq 0$, and $\lim_{\mathbf{s} \to \mathbf{s}^*} J_\omega^T \mathbf{e} \neq 0$ is equivalent to a constraint on $c(t)$, which should satisfy the following inequality:
\[ c(t) = \frac{\omega}{v} \neq \frac{l - h\tan\gamma}{h + l\tan\gamma} \]
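As a rough numerical illustration (assuming the reconstructed form of the bound above and using the simulation constants of Section 4, $h = 1$ m, $l = 0.2$ m, $\gamma = 45°$), the excluded curvature evaluates to
\[ \frac{l - h\tan\gamma}{h + l\tan\gamma} = \frac{0.2 - 1}{1 + 0.2} \approx -0.67\ \mathrm{m}^{-1}, \]
which is far from the small curvatures commanded as the robot settles onto the corridor center line, so the condition is easily respected in practice.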

3.5.2. Doorway-Passing

For a doorway approaching task, as the robot approaches the doorway center, $\mathbf{s} \to \mathbf{s}^*$, then $x_f \to 0$, $\theta_m \to 0$, and $\rho_m \to y_f$, which makes
\[ \lim_{\mathbf{s} \to \mathbf{s}^*} \lambda_\theta = 0 \]
and
\[ \lim_{\mathbf{s} \to \mathbf{s}^*} \lambda_\rho = (\rho_m^2 + 1)\sin\gamma \]
When $\mathbf{s} \to \mathbf{s}^*$, $\hat{J}_\omega$ can be estimated by
\[ \lim_{\mathbf{s} \to \mathbf{s}^*} \hat{J}_\omega = K_2 \cdot Y \]
where $K_2 = \begin{bmatrix} 1 & 0 \\ (\rho_m^2 + 1)(l - \rho_m w)/h & 1 + \rho_m^2 \end{bmatrix}$ and $Y = (\sin\gamma, \cos\gamma)^T$.
Since $K_2 > 0$ and $\mathrm{Ker}(J_\omega^T) = \mathrm{Ker}(J_\omega^{+})$, supposing $\mathbf{e} = (\delta_v, \delta_\omega)^T \neq 0$, then
\[ \lim_{\mathbf{s} \to \mathbf{s}^*} J_\omega^T \mathbf{e} = (1 + \kappa)\sin\gamma\, \delta_v + \iota \cos\gamma\, \delta_\omega \]
where $\kappa = (1 + \rho_m^2)(l - \rho_m w)/h$ and $\iota = 1 + \rho_m^2$.
Therefore, $\mathbf{e} \neq 0$, and $\lim_{\mathbf{s} \to \mathbf{s}^*} J_\omega^T \mathbf{e} \neq 0$ is equivalent to a constraint on $c(t)$, which should satisfy the following inequality:
\[ c(t) = \frac{\omega}{v} \neq -\frac{(1 + \kappa)\tan\gamma}{\iota} \]
To sum up, for our robot system, if these preconditions hold during robot motion, it is reasonable to assume that $\mathbf{e} \notin \mathrm{Ker}(J_\omega^{+})$, which means $J_\omega^{+}\mathbf{e}$ is always non-null and $J_\omega^T \mathbf{e} \neq 0$.
Therefore, we have $\mathbf{e}^T J_\omega J_\omega^{+} \mathbf{e} = (J_\omega^T \mathbf{e})^2 / (J_\omega^T J_\omega) > 0$.
Since $v^* > 0$ and $J_\omega J_\omega^{+} \neq I$, Equation (31) can be rewritten as
\[ \frac{\mathbf{e}^T \left( J_v - J_\omega J_\omega^{+} J_v \right)}{\mathbf{e}^T J_\omega J_\omega^{+} \mathbf{e}} < \frac{\lambda}{v^*} \]
Equation (43) provides a sufficient condition for the convergence of the closed-loop system at  e = 0 . It can be seen that parameters may affect the convergence of the control law and this lower bound is necessary for the stability of this system. We will present some simulation results on  λ  under a corridor environment in the next section.

4. Simulation Results

In Figure 9, a block diagram of our IBVS system is presented. The system is implemented under the ROS framework with two functional nodes. The first is the Image Processing Node, which receives environmental images captured by a panoramic camera. It extracts image features through a sequence of operations, including edge detection, line fitting, and an improved stochastic Randomized Hough Transform (sRHT). More implementation details are provided in the experimental section. The second is the IBVS Control Node, which takes the extracted image features as input to a predefined IBVS control law. This law computes the robot’s control commands (e.g., motion velocity or steering angle), which are then transmitted to the underlying mobile robot for execution.
This method follows the classical two-stage visual servoing pipeline: feature detection followed by image-feature-error-based control. Compared with recent learning-based end-to-end CNN approaches, the proposed method has limited adaptability to nonholonomic constraints and dynamic environmental disturbances. However, under typical task accuracy requirements, it offers an intuitive and computationally efficient solution with controllable computational cost, making it well suited for rapid deployment.
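As an illustration of the second node, a minimal rospy skeleton is given below. The topic names, message types, and parameter names are illustrative assumptions, not the ones used in our implementation, and the steering computation is a placeholder to be replaced by the control law of Equation (28).

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32MultiArray
from geometry_msgs.msg import Twist

class IBVSControlNode(object):
    """Subscribes to the features published by the image processing node and
    publishes velocity commands to the mobile base (topic names are illustrative)."""

    def __init__(self):
        self.v_star = rospy.get_param('~v_star', 0.2)     # constant driving speed [m/s]
        self.lam = rospy.get_param('~lambda', 1.0)        # control gain of Equation (28)
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/ibvs/features', Float32MultiArray, self.feature_cb, queue_size=1)

    def feature_cb(self, msg):
        theta_m, rho_m = msg.data[0], msg.data[1]
        cmd = Twist()
        cmd.linear.x = self.v_star
        # placeholder proportional steering; replace with the control law of Equation (28)
        cmd.angular.z = -self.lam * (theta_m + rho_m)
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('ibvs_control_node')
    IBVSControlNode()
    rospy.spin()
```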

4.1. Simulation for Corridor-Following

In the first virtual scenario, the robot is placed inside a corridor (width $L = 4$ m). The floor lines are represented by two parallel 3D lines: $x = -2, y = t, z = 0$ and $x = 2, y = t, z = 0$. These lines are projected into the image plane through a simulated catadioptric camera. The image size is $1280 \times 1024$ pixels, with an image center at $c_x = 683$, $c_y = 511$ and focal lengths $f_x = f_y = 400$. Other involved constants are as follows: $h = 1$ m, $w = 0.1$ m, $l = 0.2$ m, and $\gamma = 45°$. The robot's linear velocity is set to $v = 0.2$ m/s. The robot's initial pose is $y_w = 1$ m, $\theta_w = 0°$. The constant $\lambda$ in Equation (27) is set to 1.
The navigation task is to guide the robot to move along the middle of the corridor and keep it heading along the corridor based on our control law. To validate the stability of the proposed control law, the intrinsic matrix is calibrated with errors ($\pm 10\%$ on $f_x$ and $f_y$, $\pm 5$ pixels on $c_x$ and $c_y$), and the visual features are computed through image measurements; we assume that the conic curves are detected and tracked using the proposed method, with added white Gaussian noise (standard deviation $\sigma = 2$).
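The way these perturbations can be injected is sketched below: the intrinsics are perturbed within the stated bounds, and a conic is re-fitted to noisy samples of a projected line image with a generic least-squares fit. The paper relies on the fitting algorithm of [37]; the simpler fit shown here, and the stand-in ellipse used as a line image, are only illustrative.

```python
import numpy as np

def perturb_intrinsics(fx, fy, cx, cy, rng):
    """Calibration errors used in the simulation: +/-10% on f_x, f_y and +/-5 px on c_x, c_y."""
    return (fx * (1.0 + rng.uniform(-0.1, 0.1)),
            fy * (1.0 + rng.uniform(-0.1, 0.1)),
            cx + rng.uniform(-5.0, 5.0),
            cy + rng.uniform(-5.0, 5.0))

def fit_conic(pts):
    """Generic least-squares conic fit to noisy image points (Nx2 array)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(A)
    a, b, c, d, e, f = vt[-1]                 # right singular vector of the smallest singular value
    return np.array([[a, b / 2, d / 2], [b / 2, c, e / 2], [d / 2, e / 2, f]])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fx, fy, cx, cy = perturb_intrinsics(400.0, 400.0, 683.0, 511.0, rng)
    t = np.linspace(0.0, 2.0 * np.pi, 200)
    pts = np.column_stack([683.0 + 300.0 * np.cos(t), 511.0 + 180.0 * np.sin(t)])  # stand-in line image
    pts += rng.normal(0.0, 2.0, size=pts.shape)          # white Gaussian pixel noise, sigma = 2
    omega_hat = fit_conic(pts)                            # conic used to extract the polar lines
```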
Figure 10a shows the robot trajectory based on our visual servoing method. It can be seen that the robot finally reaches the middle of the corridor, and its heading direction is parallel to the reference 3D lines. Figure 10b,c present the evolution of two visual features, $x_f$ and $\theta_m$, respectively. Figure 10d–f show the evolution of the robot trajectory parameters, namely the lateral distance $y_w$, heading angle $\theta_w$, and angular velocity $\omega$, respectively. From these figures, the angle and lateral deviations are well regulated to zero; although there is a kind of low-frequency oscillation caused by the parameter errors, it has little influence on the convergence result, which confirms the validity of the control law.
Figure 11 presents some visual feature trajectories simulated during the corridor-following of Figure 10, such as the conic features in the image (Figure 11a,b), the polar lines (Figure 11c,d), the median line (Figure 11e), and the trajectories of the polar features in polar coordinates (Figure 11f), with noise added for clarity. The red curves, points, and lines represent the initial features, the blue curves, points, and lines are the features when the robot arrives at the steady state, and the green ones are intermediate transitions at every control frame. It can be seen from these figures that the conic and polar curves tend to converge towards the final target curve. In Figure 11f, the feature point $(\rho_m, \theta_m)$ trajectories of $\bar{\beta}_1$, $\bar{\beta}_2$, and $\bar{\beta}_0$ in the parameter space are presented; the feature point trajectory of the median line $\bar{\beta}_0$ moves to the final steady state $(0, 0)$, while $\bar{\beta}_1$ and $\bar{\beta}_2$ move to the final steady states $(-0.81, -0.62)$ and $(0.81, 0.62)$, respectively. This reveals the effectiveness of our method, since the feature value of the steady state for $\bar{\beta}_1$ or $\bar{\beta}_2$ is not as easy to compute as for $\bar{\beta}_0$.

4.2. Anti-Noise Capability

In this simulation, different levels of Gaussian noise are added to the image to verify the validity of our control law. The initial robot poses are shown in Table 1.
Figure 12 shows trajectories that start from different initial poses with or without different levels of image white Gaussian noise. In this simulation,  σ = 5 , 10 , 15  is applied to image feature detection, respectively. The solid color lines refer to the cases without noise, while the corresponding dotted lines refer to cases with Gaussian noise.
It can be seen that the trajectories eventually converge in all situations. As the noise level increases, the trajectory deviates slightly from the corresponding noise-free one. Let $\varepsilon$ be the absolute error around the middle line; we then obtain the statistics of the relative position distance $y_0$ along the median line of the corridor, as shown in Table 2. The results reveal that the means and standard deviations of $\varepsilon$ generally increase as the image noise rises, except for the means at $\sigma = 10$ and $\sigma = 15$. However, the standard deviation of $\varepsilon$ at $\sigma = 15$ is larger than at $\sigma = 10$, so the mean value is much more uncertain when $\sigma = 15$.

4.3. Convergence of Control Law

In our IBVS method, the lower bound on $\lambda$ ensures the convergence of the control law. To test the influence of the parameter $\lambda$ on the control law in Equation (28), as shown in Figure 13, we present a simulation with different values $\lambda \in \{0.1, 0.2, 0.5, 1, 2, 5\}$. We keep the robot under the same external conditions, including the initial pose, robot velocity, camera parameters, and image noise, so that the convergence speed only depends on $\lambda$. The robot trajectories with different $\lambda$ reveal that increasing $\lambda$ makes the robot tend to the steady state more smoothly and quickly. It can be seen that $\lambda < 0.5$ may lead to overshoot of the system, while $\lambda = 5$ and $\lambda = 2$ nearly coincide, with a rise time a bit longer than that of $\lambda = 1$. Therefore, in our IBVS system, $\lambda = 1$ is good enough for our corridor-following task.

4.4. Simulation for Doorway Approaching

In this part, simulations for doorway approaching based on our IBVS method are presented. This situation often occurs when the robot plans to pass a door autonomously by using door frame features. Similar to the corridor-following task, a simulated robot is placed in front of a doorway with a random initial pose, and our IBVS method is used to guide the robot to approach the middle point of the doorway as closely as possible.
As shown in Figure 14a, the doorway in our simulation is represented by three door frames: two vertical 3D parallel lines (red points) and an upper horizontal door frame that connects these two points; the parameter equations of the vertical lines are $x = 0, y = -0.5, z = 1$ and $x = 0, y = 0.5, z = 1$, respectively. Other parameters for the IBVS algorithm are the same as for the previous corridor-following task, and some initial robot states are shown in Table 3. By using these three lines, it is possible for a robot with different given initial poses to approach the target door center. The final error with respect to the middle of the door for each trajectory is presented in the last row of Table 3, and the deviation may be closely related to the given initial $y_w$, since the errors from ① or ⑥ are comparatively larger than those from the other initial states (②, ③, ④, or ⑤).
To analyze whether or not a third door frame is necessary in addition to the two vertical frames, in Figure 14b, we present two IBVS-based robot motion trajectories by using $X_f$ from two polar lines (Figure 8) and from three polar lines (Figure 8), respectively. It can be seen that the latter trajectory converges much faster and stops much closer to the door center than the former, which reveals the benefit of employing a third horizontal door frame. In Figure 14c, we also present the change in the three polar lines gathered along the trajectory in Figure 14; the feature $X_f$ goes from the initial position to the final target, and the initial acute triangle formed by the three polar lines becomes an obtuse triangle and almost degenerates to a line segment eventually. Notice that, in front of the door in Figure 14b, the corresponding triangle obtained at position P on this trajectory is marked in red. We will refer to it later in the experimental part.

4.5. Simulation in Closed-Loop Corridor

To further validate the effectiveness of our method, we present a closed-loop square corridor environment, which involves several parts of the corridor, including the corners. As shown in Figure 15a, the robot starts from the initial pose and follows four different parts of the corridor, including the straight corridor and corner areas, to complete all the navigation tasks by using our IBVS method. The constants and parameters of the robot are the same as in the above simulations.
Figure 15b shows a magnified view of the region marked with a dashed box in Figure 15a. It demonstrates how the robot moves from the initial pose to the middle of the corridor and realizes a smooth transition between two adjacent pairs of corridor lines. As the robot approaches the corner from a certain distance, the two parallel lines on the floor provide less information for the IBVS of the robot; in this case, the vertical boundary lines in the corridor are selected as the new candidate target 3D lines for our IBVS when the mobile robot is approaching the corner, as shown in Figure 15b. The robot trajectories are labeled with different colors, and between the two green ones, we demonstrate a smooth transition trajectory generated by using our IBVS method with two vertical 3D lines. Once the robot leaves the corner and the two vertical boundary lines move out of the field of view, it selects two horizontal floor lines in the subsequent corridor again. This simulation result validates that the control law may also perform well in a closed-loop environment; even when different features are adopted, the path generated by our IBVS framework satisfies the corridor-following navigation task.

5. Experimental Results

In this section, we demonstrate two experimental results by using our IBVS algorithm on a real robot platform. We introduce the robot platform and the corresponding image processing procedure. Then, a corridor-following experiment and doorway approaching experiment are presented to show the effectiveness of our approach.

5.1. Mobile Robot Platform

We choose a low-cost mobile robot, TurtleBot2, as our experimental platform to verify the proposed IBVS approach in two indoor environments, a corridor and a doorway, respectively, as shown in Figure 6. The robot is equipped with a single Dalsa® Genie® CR-GM00-H1402 camera with a resolution of $1400 \times 1024$, combined with a Fujinon® FE185C057HA-1 lens with a focal length of 1.8 mm, which is a super wide-angle lens with a 185° FOV. The image center is $c_x = 699.8$, $c_y = 512.5$, with $f_x = 390$, $f_y = 389.7$. The onboard computer is a ThinkPad P50 (20ENCTO1WW, Lenovo, Made in China) with an Intel® Xeon® E3-1505M v5 2.80 GHz CPU and 16 GB of memory.

5.2. IBVS System

We developed our IBVS system under ROS (Robot Operating System) by using Visual Studio Code on Ubuntu 16.04. The system integrates two nodes, the image acquisition node and the IBVS node. As shown in Figure 16a, the former grabs the catadioptric image from the camera. The latter subsequently detects the edge pixels by using a Canny edge detector (Figure 16b), and then adopts a modified line image detection algorithm based on [5] to generate the control value for robot motion. In this approach, all the edge points are inversely projected onto a unit sphere, and the pixels of the two target projected line images lie along the corresponding two great circles on the sphere, as shown in Figure 16c; we then use a Randomized Hough Transform (RHT) to cluster the great circles and generate the candidate great circle (red one) for each target spatial line, as shown in Figure 16d. In Figure 16e, we present two detected line images overlapped by red curve segments for the two reference lines on the ground. As shown in Figure 16f, these two segments are fitted into two conics based on the research [37], and the corresponding polar lines are calculated for the point $X_f$, which is then used to control the robot motion; the whole control cycle of our IBVS system runs at about 10 Hz.
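The lifting of edge pixels onto the unit sphere can be sketched as follows. The Canny thresholds and the inverse-projection formula are those of the standard unified model and are assumptions rather than the exact values used in our node; the subsequent RHT clustering step is only indicated in the comment.

```python
import cv2
import numpy as np

def edge_points_on_sphere(gray, fx, fy, cx, cy, xi, t1=50, t2=150):
    """Detect Canny edges and lift each edge pixel onto the unit viewing sphere
    with the inverse of the unified projection model."""
    edges = cv2.Canny(gray, t1, t2)
    v, u = np.nonzero(edges)                    # row (v) and column (u) indices of edge pixels
    x = (u - cx) / fx                           # normalized image coordinates
    y = (v - cy) / fy
    r2 = x * x + y * y
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.column_stack([eta * x, eta * y, eta - xi])   # unit vectors on the sphere

# Each 3D line projects to a great circle on this sphere, so the RHT step described above
# clusters these unit vectors by candidate great-circle normals to recover the line images.
```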

5.3. Experimental Results for Corridor-Following

In this experiment, we present the corridor-following performance of our IBVS method by using two horizontal parallel lines. The width of the corridor is about 3.7 m, and a surveillance camera is placed on the roof of the corridor to record the robot trajectory, as shown in Figure 16. A Genie® camera is fixed on the aluminum profile, which is installed at the center of the robot top tray; it makes  γ = 45 , and the parameters related to the configuration of the camera are  h = 1.0  m,  w = 0.001  m, and  l = 0.32  m.
As shown in Figure 17, we choose two-sided black lines of the corridor as reference parallel lines to control robot motion. The robot is firstly placed at point (a) with initial pose  y w = 1  m,  θ w = 0 , and then it is gradually guided towards the middle line of the corridor by our IBVS algorithm. We demonstrate this trajectory marked by a red dotted line, and the translucent robots overlapping on this path from (b) to (e) are some recorded robot images for clarity. It can be seen that this trajectory is similar to the previous simulation result; the robot approximately reaches the middle line and the heading direction is nearly parallel to the reference line.
As shown in Figure 18, we present visual feature curves  X f  for the IBVS controller and the corresponding controller angular velocity output  ω . The two curves of the whole control procedure are accompanied by a small range of noise, and  ω  experiences a damped oscillation procedure; however, as a whole,  X f  and  ω  go towards 0 gradually.
In Figure 19, we illustrate the processed images with detected curve segments (red curves) at positions (a) to (f) in Figure 17. Observations of the detected curves reveal that the two curves change from an initial deflection to approximate symmetry with respect to $x = c_x$ in the image. We also demonstrate the changes in the left and right conics for the reference corridor lines (Figure 19g,h) and their corresponding polar lines (Figure 19i,j) in the image. The red curves in each figure are the initial image features at (a) in Figure 17, the blue curves represent the image features when the robot arrives at (f) in Figure 17, and the green ones correspond to the feature transitions from the initial state to the final state. It can be seen that, in the final state (f), the left and right image features tend to be symmetrical with respect to $x = c_x$ as the robot moves to the middle of the corridor.
Our IBVS system is based on the ROS framework, which is not a real-time system, and we do not optimize the image processing part of the IBVS node to deal with environmental disturbance, so that observation noise and a processing delay will inevitably be introduced. Therefore, in practice, it can be seen from Figure 18 that curves corresponding to  X f  and  ω  are accompanied by obvious disturbance, and the convergence speed is slow. However, our IBVS node still generates a damped oscillation output  ω  and manages to guide the robot approximately to the middle of the corridor.

5.4. Experimental Results for Doorway-Passing

In this part, we conduct a doorway approaching experiment, as shown in Figure 20. To further validate the simulation, we choose a door with the same height and width as in the previous simulation setup. Hence, the target 3D lines include two vertical door frames and an upper door frame. We also use a surveillance camera to record the robot motion from the top, as shown in Figure 20a; the robot motion trajectory is presented as in the previous corridor-following experiment, the initial robot state marked by (1) is close to ⑥ in Figure 14a, and the stop point (4) is close to point P in Figure 14a. In Figure 20b–e, we demonstrate the three door frames detected in the images grabbed at positions (1) to (4), respectively. The triangle formed by the three polar lines of the door frame conics is shown in Figure 20c. It can be seen that the shape change of the triangle is similar to that in Figure 14c, from an initial acute triangle to an obtuse one. In this experiment, the shape of the final triangle is also similar to the triangle shape at P in Figure 14b; in addition, there are some outlier points around the trajectory of the feature $X_f$ because of the instability of image feature extraction. However, this does not seem to affect the convergence, and it further verifies, to some extent, the extensibility of our IBVS method from the corridor-following task to the doorway approaching task.

5.5. Experimental Results for Contrast

To demonstrate the superior performance of our proposed IBVS control method, we conduct comparative experiments using a recently published study [39] as a reference. This reference introduces a state-of-the-art panoramic vision-based IBVS path-tracking approach for mobile robots, which plans a desired trajectory in the image plane and executes visual servo control accordingly. Both methods are designed for IBVS control using panoramic cameras and therefore share common characteristics.
To quantitatively compare the path-tracking performance of the two methods, we implement simulation experiments using a unicycle-type differential-drive model under nonholonomic constraints. The setup is defined as follows: (1) The model state is denoted as $(x, y, \theta)$, where $x, y$ represent the planar position and $\theta$ denotes the heading angle. (2) The robot is required to follow a straight-line path defined by $y = 0$. (3) The initial state includes both lateral and angular deviations: the robot starts 2 m off the desired path with an initial heading error of 30°. Both methods are assigned the same maximum forward velocity and initialized under identical conditions. We record the lateral deviation (path error) over time and the convergence time required for the model to achieve stable tracking, which serve as metrics to evaluate control accuracy and responsiveness.
As shown in Figure 21a, the error curves for both methods under the same path-tracking task reveal that our method reduces the error at a faster rate, reaching approximately one-quarter of the initial error within 5 s. Moreover, the overall error remains consistently lower than that of the reference method throughout the experiment. This indicates that our IBVS controller yields a smaller path deviation and higher tracking precision during convergence. The corresponding motion trajectories in the plane (top-down view) are illustrated in Figure 21b. Compared to the reference method, our approach executes a sharper turning maneuver, allowing the robot to merge back onto the target line within a shorter distance. Combining the error curves and trajectory plots, we conclude that our proposed method achieves smaller path deviations and faster convergence while ensuring stability, thus exhibiting superior control performance.

6. Conclusions

In this paper, we propose a novel IBVS-based indoor navigation control method for a nonholonomic mobile robot using a single omnidirectional camera (lens FUJINON FE185C057HA-1, camera Dalsa Genie HM1400). Based on the unified spherical projection properties, we define the line images of 3D parallel lines. The corresponding visual features in the normalized space naturally become zero when the robot is following the middle line of the corridor, which is desirable for our IBVS system since the target image features are provided implicitly. Therefore, the proposed method is applicable to the corridor-following task in an indoor environment. We present the control law, the stability analysis, and a further extension to the doorway approaching task by introducing an additional door frame feature. Several simulations on corridor-following, stabilization, and the convergence of the control law under different conditions are used to validate the proposed method. Our approach is also simulated for a doorway approaching task and a closed-loop corridor environment. In addition, we develop a ROS-based IBVS system on a real nonholonomic mobile robot platform, and conduct corridor-following and doorway approaching experiments. All the simulation and experimental results verify the effectiveness and practicability of our proposed IBVS method. The proposed approach is not only suitable for indoor environment navigation, but is also of great significance for the intelligent visual control of autonomous robot rescue in indoor fire scenarios.
It is worth noting that the present study focuses solely on goal-directed tasks and does not address the issue of obstacle avoidance. Moreover, the simulation environment we constructed is relatively idealized. In practical applications, the system is subject to significant environmental disturbances, particularly in terms of scene texture variability and lighting conditions. These factors can adversely affect the stability and accuracy of critical tasks such as image line extraction of 3D parallel lines and edge detection. In scenarios involving directional failure, the reliability of robot trajectory data and motion stability also remains to be further improved. In future work, we aim to enhance the generalizability, real-time performance, convergence speed, and overall stability of the proposed IBVS approach under various types of noise and even in the presence of obstacles. This will involve further efforts in several key areas, including robustness against environmental disturbances, advanced image processing and dynamic feature capture, motion control optimization, and disturbance rejection.

Author Contributions

C.Z.: writing—original draft, simulations, and Methodology. Q.K.: writing—image processing methodology. K.W.: conceptualization, methodology, and funding acquisition. Z.Z.: validation. L.C.: simulation validation. S.L.: writing—review. L.H.: experimental validation. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key R&D Program of China under Grant 2022YFC3090500 and the Key Science and Technology Program of China Emergency Management Department under Grant 2024EMST111104.

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors confirm that there are no financial or personal conflicts of interest that could have affected the findings reported in this paper.

References

  1. Narayanan, V.K.; Pasteau, F.; Marchal, M.; Krupa, A.; Babel, M. Vision-based adaptive assistance and haptic guidance for safe wheelchair corridor following. Comput. Vis. Image Underst. 2016, 149, 171–185. [Google Scholar] [CrossRef]
  2. Huang, Y.; Su, J. Visual servoing of nonholonomic mobile robots: A review and a novel perspective. IEEE Access 2019, 7, 134968–134977. [Google Scholar] [CrossRef]
  3. Cherubini, A.; Chaumette, F.; Oriolo, G. Visual servoing for path reaching with nonholonomic robots. Robotica 2011, 29, 1037–1048. [Google Scholar] [CrossRef]
  4. de Lima, D.A.; Victorino, A.C. A hybrid controller for vision-based navigation of autonomous vehicles in urban environments. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2310–2323. [Google Scholar] [CrossRef]
  5. Rafique, M.A.; Lynch, A.F. Output-feedback image-based visual servoing for multirotor unmanned aerial vehicle line following. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 3182–3196. [Google Scholar] [CrossRef]
  6. Alshahir, A.; Kaaniche, K.; Albekairi, M.; Alshahr, S.; Mekki, H.; Sahbani, A.; Alanazi, M.D. An Advanced IBVS-Flatness Approach for Real-Time Quadrotor Navigation: A Full Control Scheme in the Image Plane. Machines 2024, 12, 350. [Google Scholar] [CrossRef]
  7. Pasteau, F.; Narayanan, V.K.; Babel, M.; Chaumette, F. A visual servoing approach for autonomous corridor following and doorway passing in a wheelchair. Robot. Auton. Syst. 2016, 75, 28–40. [Google Scholar] [CrossRef]
  8. Corke, P. Robotics, Vision and Control: Fundamental Algorithms in MATLAB; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  9. Chaumette, F.; Hutchinson, S. Visual servo control. I. Basic approaches. IEEE Robot. Autom. Mag. 2006, 13, 82–90. [Google Scholar] [CrossRef]
  10. Liu, S.; Dong, J. Robust online model predictive control for image-based visual servoing in polar coordinates. Trans. Inst. Meas. Control 2020, 42, 890–903. [Google Scholar] [CrossRef]
  11. Luy, N.T. Robust adaptive dynamic programming based online tracking control algorithm for real wheeled mobile robot with omni-directional vision system. Trans. Inst. Meas. Control 2017, 39, 832–847. [Google Scholar] [CrossRef]
  12. Li, L.; Liu, Y.H.; Jiang, T.; Wang, K.; Fang, M. Adaptive trajectory tracking of nonholonomic mobile robots using vision-based position and velocity estimation. IEEE Trans. Cybern. 2017, 48, 571–582. [Google Scholar] [CrossRef] [PubMed]
  13. Kang, Z.; Zou, W.; Ma, H.; Zhu, Z. Adaptive trajectory tracking of wheeled mobile robots based on a fish-eye camera. Int. J. Control. Autom. Syst. 2019, 17, 2297–2309. [Google Scholar] [CrossRef]
  14. Li, C.L.; Cheng, M.Y.; Chang, W.C. Dynamic performance improvement of direct image-based visual servoing in contour following. Int. J. Adv. Robot. Syst. 2018, 15, 1729881417753859. [Google Scholar] [CrossRef]
  15. Yu, Q.; Wei, W.; Wang, D.; Li, Y.; Gao, Y. A Framework for IBVS Using Virtual Work. Actuators 2024, 13, 181. [Google Scholar] [CrossRef]
  16. Pan, L.; Li, W.; Zhu, J.; Zhao, J.; Liu, Z. Accurate and Fast Fire Alignment Method Based on a Mono-binocular Vision System. Fire Technol. 2024, 60, 401–429. [Google Scholar] [CrossRef]
  17. Zhang, X.; Fang, Y.; Li, B.; Wang, J. Visual servoing of nonholonomic mobile robots with uncalibrated camera-to-robot parameters. IEEE Trans. Ind. Electron. 2016, 64, 390–400. [Google Scholar] [CrossRef]
  18. Liang, X.; Wang, H.; Liu, Y.H.; Chen, W.; Jing, Z. Image-based position control of mobile robots with a completely unknown fixed camera. IEEE Trans. Autom. Control 2018, 63, 3016–3023. [Google Scholar] [CrossRef]
  19. Liang, X.; Wang, H.; Liu, Y.H.; Liu, Z.; You, B.; Jing, Z.; Chen, W. Purely image-based pose stabilization of nonholonomic mobile robots with a truly uncalibrated overhead camera. IEEE Trans. Robot. 2020, 36, 724–742. [Google Scholar] [CrossRef]
  20. Salaris, P.; Vassallo, C.; Soueres, P.; Laumond, J.P. Image-based control relying on conic curves foliation for passing through a gate. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 684–690. [Google Scholar]
  21. Bista, S.R.; Giordano, P.R.; Chaumette, F. Appearance-based indoor navigation by IBVS using line segments. IEEE Robot. Autom. Lett. 2016, 1, 423–430. [Google Scholar] [CrossRef]
  22. Bista, S.R.; Giordano, P.R.; Chaumette, F. Combining line segments and points for appearance-based indoor navigation by image based visual servoing. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 2960–2967. [Google Scholar]
  23. Chen, Z.; Li, T.; Jiang, Y. Image-based visual servoing with collision-free path planning for monocular vision-guided assembly. IEEE Trans. Instrum. Meas. 2024, 73, 7508717. [Google Scholar] [CrossRef]
  24. Dong, J.; Li, Y.; Wang, B. Image-based visual servoing with Kalman filter and swarm intelligence optimisation algorithm. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2024, 238, 820–834. [Google Scholar] [CrossRef]
  25. Dorbala, V.S.; Hafez, A.A.; Jawahar, C. A deep learning approach for robust corridor following. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 3712–3718. [Google Scholar]
  26. Fu, G.; Xiong, G.; Wang, Y.; Cheng, P.; Deng, W.; Tao, R. Research on Forest Fire Target Detection and Tracking Based on Drones. In Proceedings of the 3rd International Conference on Computer, Artificial Intelligence and Control Engineering, Xi’an, China, 26–28 January 2024; pp. 610–614. [Google Scholar]
  27. da Silva, A.F.; Araújo, A.F.; Durand-Petiteville, A.; Mendes, C.S. Feature Descriptor based on Differential Evolution for Visual Navigation in Dynamic Environments. IFAC-PapersOnLine 2023, 56, 5041–5046. [Google Scholar] [CrossRef]
  28. Tsapin, D.; Pitelinskiy, K.; Suvorov, S.; Osipov, A.; Pleshakova, E.; Gataullin, S. Machine learning methods for the industrial robotic systems security. J. Comput. Virol. Hacking Tech. 2024, 20, 397–414. [Google Scholar] [CrossRef]
  29. Jokić, A.; Jevtić, Đ.; Brenjo, K.; Petrović, M.; Miljković, Z. Deep Learning-Based Visual Servoing Algorithm for Wheeled Mobile Robot Control. In Proceedings of the 15th International Scientific Conference MMA2024 - Flexible Technologies, Novi Sad, Serbia, 24–26 September 2024; pp. 71–74. [Google Scholar]
  30. Bechlioulis, C.P.; Heshmati-Alamdari, S.; Karras, G.C.; Kyriakopoulos, K.J. Robust image-based visual servoing with prescribed performance under field of view constraints. IEEE Trans. Robot. 2019, 35, 1063–1070. [Google Scholar] [CrossRef]
  31. Bista, S.R.; Ward, B.; Corke, P. Image-based indoor topological navigation with collision avoidance for resource-constrained mobile robots. J. Intell. Robot. Syst. 2021, 102, 55. [Google Scholar] [CrossRef]
  32. Hadj-Abdelkader, H.; Mezouar, Y.; Andreff, N.; Martinet, P. Omnidirectional visual servoing from polar lines. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), Orlando, FL, USA, 15–19 May 2006; pp. 2385–2390. [Google Scholar]
  33. Hadj-Abdelkader, H.; Mezouar, Y.; Martinet, P.; Chaumette, F. Catadioptric visual servoing from 3-D straight lines. IEEE Trans. Robot. 2008, 24, 652–665. [Google Scholar] [CrossRef]
  34. Mariottini, G.L.; Prattichizzo, D. Image-based visual servoing with central catadioptric cameras. Int. J. Robot. Res. 2008, 27, 41–56. [Google Scholar] [CrossRef]
  35. Marie, R.; Said, H.B.; Stéphant, J.; Labbani-Igbida, O. Visual servoing on the generalized voronoi diagram using an omnidirectional camera. J. Intell. Robot. Syst. 2019, 94, 793–804. [Google Scholar] [CrossRef]
  36. Junior, J.M.; Tommaselli, A.; Moraes, M. Calibration of a catadioptric omnidirectional vision system with conic mirror. ISPRS J. Photogramm. Remote Sens. 2016, 113, 97–105. [Google Scholar] [CrossRef]
  37. Barreto, J.P.; Araujo, H. Fitting conics to paracatadioptric projections of lines. Comput. Vis. Image Underst. 2006, 101, 151–165. [Google Scholar] [CrossRef]
  38. Torii, A.; Imiya, A. The randomized-Hough-transform-based method for great-circle detection on sphere. Pattern Recognit. Lett. 2007, 28, 1186–1192. [Google Scholar] [CrossRef]
  39. Albekairi, M.; Mekki, H.; Kaaniche, K.; Yousef, A. An Innovative Collision-Free Image-Based Visual Servoing Method for Mobile Robot Navigation Based on the Path Planning in the Image Plan. Sensors 2023, 23, 9667. [Google Scholar] [CrossRef]
Figure 1. Omnidirectional camera and its corresponding line images.
Figure 2. The central catadioptric image of 3D line L_i.
Figure 3. Polar lines of line images projected from two 3D parallel lines under different camera pitch angles γ: (left) 0 < γ < π/2, (middle) γ = π/2, and (right) γ = 0.
Figure 4. Simulated line images and corresponding polar lines calculated from two horizontal 3D parallel lines when the robot is placed on the floor with different poses.
Figure 5. Simulated line images and corresponding polar lines calculated from two vertical 3D parallel lines when the robot is placed on the floor with different poses.
Figure 6. Configuration of our nonholonomic mobile robot.
Figure 7. Visual features D̄ and X_f for visual servoing.
Figure 8. Visual features ß̄_1, ß̄_2, and ß̄_3 for three door frames.
Figure 9. System scheme of our IBVS corridor-following.
Figure 10. Simulation results for our IBVS corridor-following. (a) Trajectory of the mobile robot. (b) Visual feature x_f. (c) Visual feature θ_m. (d) Relative position in the corridor. (e) Angular position in the corridor. (f) Angular velocity ω.
Figure 11. Trajectories of simulated visual features during robot corridor-following. (a) Conic curves Ω̂_1. (b) Conic curves Ω̂_2. (c) Polar lines π̂_1. (d) Polar lines π̂_2. (e) Polar lines π̂_0. (f) Polar lines ß̄_1, ß̄_2, and ß̄_0.
Figure 12. Corridor-following simulations from different initial poses with different image noise levels.
Figure 13. Simulation from the same initial pose with different values of the constant parameter λ.
Figure 14. Simulation for doorway approaching. (a) Doorway approaching using three door frames. (b) IBVS trajectories (three door frames vs. two door frames). (c) Change in three polar lines and X_f.
Figure 15. IBVS simulation in a closed-loop corridor. (a) The whole robot trajectory. (b) Magnified top view. (c) Three-dimensional viewpoint of the first corner.
Figure 16. Image processing procedure in the IBVS node. (a) Input image. (b) Edge detection using Canny. (c) Edge points on the sphere. (d) Line extraction using sRHT. (e) Line images extracted. (f) Conic curve fitting.
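For readers reproducing the pipeline of Figure 16, the snippet below is a minimal sketch of steps (b) and (c) only: Canny edge detection followed by lifting the edge pixels onto the unit sphere of the unified central projection model. The intrinsic matrix K, the mirror parameter ξ (xi), and the input file name are illustrative placeholders rather than the calibrated values used in our experiments; the sRHT great-circle detection and conic fitting of steps (d)–(f) are not shown.

import cv2
import numpy as np

# Placeholder intrinsics and unified-model mirror parameter (obtained by calibration in practice).
K = np.array([[320.0, 0.0, 320.0],
              [0.0, 320.0, 240.0],
              [0.0, 0.0, 1.0]])
xi = 0.9

def lift_to_sphere(pixels, K, xi):
    # Back-project pixel coordinates onto the unit sphere of the unified central
    # projection model: normalize with K, then solve for the scale eta that places
    # the point on the unit sphere centred at the single effective viewpoint.
    m = (np.linalg.inv(K) @ np.hstack([pixels, np.ones((len(pixels), 1))]).T).T
    r2 = m[:, 0] ** 2 + m[:, 1] ** 2
    eta = (xi + np.sqrt(1.0 + (1.0 - xi ** 2) * r2)) / (r2 + 1.0)
    return np.stack([eta * m[:, 0], eta * m[:, 1], eta - xi], axis=1)

img = cv2.imread("omni_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input frame
edges = cv2.Canny(img, 50, 150)                             # step (b): edge map
v, u = np.nonzero(edges)                                    # edge pixel rows and columns
pts_sphere = lift_to_sphere(np.stack([u, v], axis=1).astype(float), K, xi)  # step (c)
# Steps (d)-(f), sRHT great-circle detection and conic fitting, are omitted from this sketch.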
Figure 17. Corridor-following trajectory based on our IBVS algorithm.
Figure 18. Visual feature X_f and output ω from the IBVS controller. (a) Visual feature X_f. (b) Output angular velocity ω of the IBVS controller.
Figure 19. Processed features during robot motion from (a–f) in Figure 17. (a) Image at (a) in Figure 17. (b) Image at (b) in Figure 17. (c) Image at (c) in Figure 17. (d) Image at (d) in Figure 17. (e) Image at (e) in Figure 17. (f) Image at (f) in Figure 17. (g) Right conic. (h) Left conic. (i) Right polar line. (j) Left polar line.
Figure 20. Doorway approaching. (a) Robot trajectory towards the doorway. (b) Change in three polar lines and X_f. (c) Doorframe detected at (1). (d) Doorframe detected at (2). (e) Doorframe detected at (3). (f) Doorframe detected at (4).
Figure 21. Path-tracking performance evaluation.
Table 1. Initial robot poses in Figure 12.

State           1      2      3      4      5      6
θ_w (degree)    0      30     60     80     60     30
y_w (m)         −1.5   −1     −0.5   0      0.75   1.5
Table 2. Convergence error of relative position y_0 in the corridor under different levels of image noise.

Noise Level σ (pixel)    MSE ε of Steady State (cm)
5                        0.0308 ± 0.0003
10                       0.2741 ± 0.1068
15                       1.3441 ± 0.3639
20                       1.0792 ± 1.2669
Table 3. Initial robot state and final average error for each trajectory in Figure 14a.

State           1      2      3      4      5      6
θ_w (degree)    0      30     60     80     60     30
y_w (m)         −1.5   −1     −0.5   0      0.75   1.5
x_w (m)         1      0.5    −0.25  −0.5   −0.5   −1
Error (cm)      4.02   0.70   0.11   0.19   0.23   1.89