Article

Fixed-Time and Prescribed-Time Image-Based Visual Servoing with Asymmetric Time-Varying Output Constraint

School of Electrical Engineering, Southwest Jiaotong University, Chengdu 611756, China
* Author to whom correspondence should be addressed.
Robotics 2025, 14(12), 190; https://doi.org/10.3390/robotics14120190
Submission received: 5 November 2025 / Revised: 11 December 2025 / Accepted: 12 December 2025 / Published: 16 December 2025

Abstract

This paper addresses image-based visual servoing under the camera's field-of-view limitation. A novel control method with dual fixed-time and prescribed-time convergence guarantees is proposed. By introducing a prescribed-time performance function and an asymmetric barrier Lyapunov function, asymmetric time-varying output constraints are achieved, ensuring that the image features remain within a predefined range and thereby addressing the field-of-view constraint problem in visual servoing applications. The combination of the prescribed-time performance function and fixed-time stability theory ensures that the tracking error converges to a predetermined range within a prescribed time and, furthermore, converges to zero in fixed time, significantly improving the error convergence rate. The effectiveness and superiority of the method are demonstrated through physical experiments. Moreover, a case study of an overhead contact system component bolt alignment task, aiming to automatically align a sleeve with a bolt, is carried out to demonstrate the applicability of the proposed method in practice.

1. Introduction

In recent years, with the rapid development of vision technology, visual servoing (VS) has been widely applied in many robotics fields, such as unmanned aerial vehicles (UAVs) [1], soft robots [2], mobile robots [3], and industrial robots [4]. Depending on how the visual error is defined and regulated, visual servoing is typically divided into position-based visual servoing (PBVS) [5], which uses pose-level feedback; image-based visual servoing (IBVS) [6], which directly regulates image features; and hybrid approaches that blend the two [7]. Visual servoing is usually used in high-precision control scenarios with demanding real-time requirements. Compared with PBVS, IBVS uses image features directly as feedback, so its control accuracy is less sensitive to camera calibration errors. Therefore, IBVS is considered in this paper [8].
Due to the limitation of the visual sensor’s field of view (FOV), once the visual features leave the camera’s FOV, the VS system can easily fail. Therefore, the FOV constraint is a key point to consider in visual servoing control [9]. There have been a number of studies on FOV constraints in VS. In [10], a VS controller was designed using the Model Predictive Control (MPC) method, which takes into account the input and output constraints of the system and thus satisfies the camera’s FOV constraint requirements. A novel MPC system enhancing visual servo performance is proposed in [11], which maintains the original controller’s speed without attenuation. To ensure that the FOV constraints are satisfied in [12], hybrid Dynamic Movement Primitives (DMPs) are used to define the safe motion region and model the implicit constraints by learning the task from manual demonstration, leading to a closed-loop controller that is obtained by solving an optimization problem rather than using a direct analytical servo law. Additionally, there are implementations that address the camera’s FOV constraints by means of path planning [13,14]. While these approaches have demonstrated effectiveness in enforcing FOV-related constraints, deploying predictive optimization or learning-based schemes on real robots often requires additional computational resources and careful tuning (e.g., solver configuration, horizon selection, or training/validation procedures). Moreover, under fast visual servoing, practical systems may still suffer from feature degradation or intermittent feature loss due to perception limitations, which motivates constraint-handling designs that are solver-free, straightforward to implement, and compatible with real-time IBVS pipelines.
In [15,16], prescribed performance control (PPC) was proposed for visual servoing control, which can constrain the FOV in real time and has low computational requirements. In [17], a time-varying performance specification is enforced on the image error and integrated with an asymmetric barrier Lyapunov function (BLF) to achieve custom convergence accuracy, driving the image error to a predetermined range. The IBVS controller in [18] was designed based on a performance function with camera parameters, combined with a tan-type BLF; the convergence accuracy of the image can be customized, and the camera’s FOV constraints are satisfied. In the above studies on PPC, a time-varying performance function was used in which the tracking error converges to a custom range only as $t \to \infty$. This guarantees merely that the error eventually enters the specified range. Moreover, the controllers are designed to make the IBVS system asymptotically stable, so the exact convergence time cannot be determined.
Moreover, prescribed-time behavior can also be enforced via a prescribed-time performance function, as demonstrated in studies on the fixed-time control and prescribed-time consensus of multi-agent systems [19,20]. Compared to the time-varying performance function [15,16,17,18], the prescribed-time performance function [21] allows the tracking error to converge to a custom range at a prescribed time. A prescribed-time performance function and a log-type BLF are introduced in [22] to limit the tracking error, implementing a symmetric output constraint that allows the error to converge to a custom range in a predetermined time. In [23], a prescribed-time performance boundary function and a tan-type transformation function are introduced to limit the tracking error for each subsystem, making the error converge to a threshold value within a predefined time. In the above studies, the parameters of the prescribed-time performance function (e.g., the prescribed time $T$ and the initial value of the performance function) were set empirically. Moreover, the prescribed-time performance function has not been applied in visual servoing research, and the above studies use symmetric output constraints that do not satisfy the asymmetric constraint requirements of VS, i.e., the camera FOV constraints.
In numerous applications, finite-time stabilization is desirable to comply with stringent performance requirements [24]. A fractional-order sliding-mode VS scheme with online adaptation of the sliding-surface parameters is reported in [25] to achieve finite-time convergence. However, in conventional finite-time control, the convergence time bound typically depends on the initial conditions of the closed-loop system. In contrast, fixed-time control ensures fast convergence with a convergence time bound that is independent of the initial state. In UAV visual servoing [26], tracking differentiators and fixed-time visual servo controllers are designed to estimate the image feature derivatives in fixed time, and the tracking error also converges to a neighborhood of zero in fixed time. The high-performance controller, designed using the performance function and fixed-time stabilization theory, allows the system to stabilize in fixed time while satisfying the FOV constraints. In the visual formation control of mobile robots [27], a performance function was used to constrain the formation error to satisfy the camera’s FOV constraints imposed by the limited viewing distance. Combined with fixed-time stability theory, the formation tracking error was made to converge to a neighborhood of zero in fixed time.
Motivated by the above results, a novel control method combining the prescribed-time performance function and the fixed-time control theory is proposed for the IBVS system. Firstly, this paper introduces prescribed-time control into the VS control domain to ensure that the image error converges to a predefined range within a prescribed time. Secondly, the asymmetric time-varying output constraints are achieved by designing a specific prescribed-time performance function and combining it with the asymmetric BLF to design the controller, which satisfies the camera’s FOV constraints. Combining fixed-time and prescribed-time control significantly improves the speed of convergence of image errors. Finally, the advantages and effectiveness are verified through theoretical analyses and experiments. The main contributions of this paper are as follows:
  • This paper introduces prescribed-time control into visual servoing. Combined with an asymmetric BLF, it ensures the image error converges to a predetermined range within a prescribed time while satisfying the camera’s FOV constraint.
  • Compared with traditional prescribed-time performance functions, where parameters rely on empirical tuning [21,22,23], this paper integrates the prescribed-time performance function with fixed-time stability theory. This novel integration ensures that the prescribed-time parameter is systematically determined by the control system parameters, enabling adaptive adjustment without manual redesign when conditions change. Furthermore, the proposed control strategy incorporating fixed-time control and predetermined time control methods ensures that the image tracking error not only converges to a predefined range within a prescribed time but also further approaches zero in a fixed time, significantly accelerating convergence.
  • Distinguished from our prior work [28], this study achieves dual improvements in the control methodology: a universal time-varying barrier Lyapunov function is integrated with fixed-time stability theory to reconstruct the control framework. The proposed approach completely eliminates dependence on system initial conditions, uniformly accommodates both constrained and unconstrained systems, and effectively circumvents singularity risks.
  • Beyond laboratory algorithmic verification, practical validation was conducted in engineering scenarios—a case study involving the bolt alignment process of Overhead Contact System (OCS) components demonstrates the applicability of the proposed method in practice. This is a critical step for advancing visual servoing control through the transition from theoretical investigation to practical applications.
The remainder of this paper is structured as follows. Section 2 states the problem and outlines the visual servoing setup. Section 3 presents the controller design. Section 4 reports comparative studies and application-oriented experiments. Section 5 concludes this paper.

2. Preliminaries and Problem Statement

2.1. Definitions and Lemmas

Lemma 1
([29]). Consider the system characterized by the differential equation $\dot{x}(t) = f(t, x)$, $x(0) = x_0$, where $x \in \mathbb{R}^n$ and $f: \mathbb{R}_+ \times \mathbb{R}^n \to \mathbb{R}^n$. If there exists a continuous, radially unbounded, positive definite function $V(x)$ satisfying
$$\dot{V}(x) \le -a V^{\alpha}(x) - b V^{\beta}(x)$$
where $a, b > 0$, $\alpha > 1$, and $0 < \beta < 1$, then the origin is globally fixed-time stable, with a uniform settling-time bound
$$T \le T_{\max} = \frac{1}{a(\alpha - 1)} + \frac{1}{b(1 - \beta)}.$$
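Lemma 1 can be checked numerically on the scalar comparison system $\dot{V} = -aV^{\alpha} - bV^{\beta}$. The Python sketch below (function names are ours, chosen for illustration) integrates the system with forward Euler and confirms that the settling time stays below $T_{\max}$ for widely different initial values:

```python
def settling_bound(a, b, alpha, beta):
    """Settling-time bound of Lemma 1: T_max = 1/(a(alpha-1)) + 1/(b(1-beta))."""
    return 1.0 / (a * (alpha - 1.0)) + 1.0 / (b * (1.0 - beta))

def simulate(V0, a, b, alpha, beta, dt=1e-4, tol=1e-9):
    """Forward-Euler integration of V' = -a V^alpha - b V^beta; returns the
    first time at which V drops below tol (V is clamped at zero)."""
    V, t = V0, 0.0
    while V > tol:
        V = max(V - dt * (a * V ** alpha + b * V ** beta), 0.0)
        t += dt
    return t

a, b, alpha, beta = 1.0, 1.0, 2.0, 0.5
T_max = settling_bound(a, b, alpha, beta)   # = 1 + 2 = 3
# Fixed-time property: the bound holds regardless of the initial condition.
for V0 in (0.1, 10.0, 1e6):
    assert simulate(V0, a, b, alpha, beta) <= T_max
```

The key point, in contrast to finite-time bounds, is that $T_{\max}$ does not grow with $V(0)$.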
Lemma 2
([30]). Assume $x_i \ge 0$ for all $i = 1, \dots, n$. For any exponent $\epsilon > 0$, the sum of powered terms admits the following lower bounds for the cases $0 < \epsilon \le 1$ and $\epsilon > 1$, respectively:
$$\sum_{i=1}^{n} x_i^{\epsilon} \ge \Big( \sum_{i=1}^{n} x_i \Big)^{\epsilon}, \quad 0 < \epsilon \le 1; \qquad \sum_{i=1}^{n} x_i^{\epsilon} \ge n^{1-\epsilon} \Big( \sum_{i=1}^{n} x_i \Big)^{\epsilon}, \quad \epsilon > 1.$$
Notation 1.
For $\alpha \ge 0$, the signed-power operator is defined by $\lceil x \rfloor^{\alpha} := |x|^{\alpha} \operatorname{sgn}(x)$, $x \in \mathbb{R}$, where $\operatorname{sgn}(\cdot)$ denotes the signum function. In particular,
$$\operatorname{sgn}(x) = \begin{cases} 1, & x > 0, \\ 0, & x = 0, \\ -1, & x < 0. \end{cases}$$
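Notation 1 translates directly into code; `sig` below is a helper name we introduce for illustration, not from the paper:

```python
def sig(x, alpha):
    """Signed power of Notation 1: |x|^alpha * sgn(x), for alpha >= 0."""
    sgn = (x > 0) - (x < 0)          # signum as an integer
    return (abs(x) ** alpha) * sgn

assert sig(9.0, 0.5) == 3.0      # |9|^(1/2) * sgn(9)
assert sig(-9.0, 0.5) == -3.0    # the sign of x is preserved
assert sig(0.0, 2.0) == 0.0
```

A useful consequence, exploited later in the stability proof, is that $x \cdot \lceil x \rfloor^{\alpha} = |x|^{1+\alpha} \ge 0$.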

2.2. Image-Based Visual Servoing

An eye-in-hand IBVS configuration is adopted (Figure 1). For clarity, the notation used in this study is summarized in Table 1. Consider $n$ image feature points with $n \ge 3$. For each point $i$, the camera-frame coordinates are $P_i = [X_i, Y_i, z_i]^T$ with depth $z_i > 0$, and the corresponding pixel feature is $s_i = [u_i, v_i]^T \in \mathbb{R}^2$. After camera calibration, the detected image points are undistorted before being used for control; therefore, the pinhole projection relationship [8] is
$$s_i = \begin{bmatrix} u_i \\ v_i \end{bmatrix} = \begin{bmatrix} f_x & 0 \\ 0 & f_y \end{bmatrix} \begin{bmatrix} X_i / z_i \\ Y_i / z_i \end{bmatrix} + \begin{bmatrix} c_x \\ c_y \end{bmatrix},$$
where $f_x$ and $f_y$ are the focal lengths in pixels, and $(c_x, c_y)$ is the principal point. $z_i > 0$ is the real-time depth measured by the D435i camera and is used online in the computation of the interaction matrix. We further introduce the normalized image coordinates
$$x_i = \frac{u_i - c_x}{f_x}, \qquad y_i = \frac{v_i - c_y}{f_y},$$
which will be used to express the interaction matrix in a standard form.
Based on [8], the time derivative of each image feature point is
$$\dot{s}_i = \Gamma_i V_c$$
where $V_c = [v_x, v_y, v_z, w_x, w_y, w_z]^T$ is the camera velocity input to be designed. The image Jacobian (interaction) matrix $\Gamma_i$, which relates the normalized image coordinates to the camera velocity, is
$$\Gamma_i = \begin{bmatrix} -\dfrac{1}{z_i} & 0 & \dfrac{x_i}{z_i} & x_i y_i & -(1 + x_i^2) & y_i \\ 0 & -\dfrac{1}{z_i} & \dfrac{y_i}{z_i} & 1 + y_i^2 & -x_i y_i & -x_i \end{bmatrix}.$$
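For reference, the interaction matrix above and its stacked form can be sketched in NumPy as follows (helper names are ours; a minimal sketch, not the paper's implementation):

```python
import numpy as np

def interaction_matrix(x, y, z):
    """Interaction matrix of one normalized point feature (x, y) at depth z > 0,
    mapping the camera twist (vx, vy, vz, wx, wy, wz) to (x_dot, y_dot)."""
    return np.array([
        [-1.0 / z, 0.0,       x / z, x * y,        -(1.0 + x ** 2),  y],
        [0.0,      -1.0 / z,  y / z, 1.0 + y ** 2, -x * y,          -x],
    ])

def stacked_interaction(points):
    """Stack Gamma_i for all features into Gamma_h of shape (2n, 6)."""
    return np.vstack([interaction_matrix(x, y, z) for (x, y, z) in points])

# Four coplanar features at depth 1 m (a square, as with AprilTag corners)
# yield a full-column-rank Gamma_h here, as required by Assumption 2.
pts = [(0.1, 0.1, 1.0), (-0.1, 0.1, 1.0), (-0.1, -0.1, 1.0), (0.1, -0.1, 1.0)]
Gh = stacked_interaction(pts)
assert Gh.shape == (8, 6) and np.linalg.matrix_rank(Gh) == 6
```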
The objective of IBVS control schemes is to drive an error term $E_i$ to zero, where
$$E_i = s_i - s_i^* = \begin{bmatrix} u_i - u_{i,d} \\ v_i - v_{i,d} \end{bmatrix}$$
with $E_i = [E_i^u, E_i^v]^T$. $s_i^* = [u_{i,d}, v_{i,d}]^T$ denotes the pixel coordinates of the desired image feature points, which are constant. Considering (7), the time derivative of the overall image feature error $E_h$ can be expressed as
$$\dot{E}_h = \Gamma_h V_c$$
where $\Gamma_h = [\Gamma_1^T, \dots, \Gamma_n^T]^T \in \mathbb{R}^{2n \times 6}$ stacks the image Jacobian matrices of all feature points, and $E_h = [E_1^T, \dots, E_n^T]^T = [E_1^u, E_1^v, \dots, E_n^u, E_n^v]^T \in \mathbb{R}^{2n}$ stacks the image feature errors.
Since the position information is obtained through the camera, and the camera’s FOV is inherently limited, visual servoing control must also address the issue of camera FOV constraints. If the feature points exceed the camera’s FOV, it can result in the instability of the visual servoing system. Consequently, the image coordinates must satisfy the following FOV constraints:
$$u_{\min} < u_i < u_{\max}, \qquad v_{\min} < v_i < v_{\max}.$$
The constants $u_{\min}$, $u_{\max}$, $v_{\min}$, and $v_{\max}$ specify the allowable pixel range of the image coordinates and are set according to the camera resolution and the selected admissible region.
Assumption 1.
At $t = 0$, all selected image features are visible in the camera image, i.e., $u_{\min} < u_i(0) < u_{\max}$ and $v_{\min} < v_i(0) < v_{\max}$ for all $i$.
Assumption 2.
To exclude spurious equilibrium minima that may arise in pseudoinverse-based IBVS when $2n > 6$ under degenerate or poorly conditioned feature configurations (see, e.g., [8]), we assume that for all $t \ge 0$ along the closed-loop motion, (1) the stacked interaction matrix $\Gamma_h(t) \in \mathbb{R}^{2n \times 6}$ is full column rank and uniformly well-conditioned, i.e., $\sigma_{\min}(\Gamma_h(t)) \ge \underline{\sigma} > 0$; (2) all selected visual features remain detectable so that $\Gamma_h(t)$ is well defined; and (3) the feature depths are positive and bounded.

3. Main Results

In this section, a fixed-time and prescribed-time IBVS controller will be designed to achieve the following objectives: (1) avoid image feature points exceeding the camera’s FOV and satisfy the FOV constraints (11); (2) achieve prescribed transient and steady-state performance for all image feature errors; and (3) analyze the stability of the IBVS control system. The flow of the proposed control method is shown in Figure 2.

3.1. FOV Constraints

The IBVS FOV constraint problem (11) is transformed into an asymmetric time-varying output constraint problem in the IBVS system. To address this problem, a prescribed-time performance function $\xi(t)$ [22] is introduced:
$$\xi(t) = \begin{cases} (\xi_0 - \xi_{\infty})\, e^{-\gamma t}\, \dfrac{(T_f - t)^q}{T_f^q} + \xi_{\infty}, & 0 \le t < T_f, \\ \xi_{\infty}, & t \ge T_f. \end{cases}$$
The time derivative of (12) is
$$\dot{\xi}(t) = \begin{cases} -(\xi_0 - \xi_{\infty})\, \dfrac{(T_f - t)^{q-1}}{T_f^q}\, e^{-\gamma t} \big( \gamma (T_f - t) + q \big), & 0 \le t < T_f, \\ 0, & t \ge T_f, \end{cases}$$
where $\xi_{\infty} \le \xi(t) \le \xi_0$. When $t \ge T_f$, it follows that $\xi(T_f) = \xi_{\infty}$. $\gamma$ and $q$ are user-selected control parameters, $\xi_{\infty}$ is a user-specified bound on the final convergence error, and $T_f$ is the prescribed time. The parameters $\xi_0$ and $T_f$ are designed as explained in detail below, with $\xi_0 > \xi_{\infty} > 0$.
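The envelope (12) and its prescribed-time property are easy to verify numerically. A minimal sketch; the parameter values are loosely modeled on the later experiments ($\gamma = 0.2$, $q = 2$, $T_f = 10$) rather than taken verbatim:

```python
import math

def xi(t, xi0, xi_inf, gamma, q, Tf):
    """Prescribed-time performance function of Eq. (12)."""
    if t >= Tf:
        return xi_inf
    return (xi0 - xi_inf) * math.exp(-gamma * t) * ((Tf - t) / Tf) ** q + xi_inf

p = dict(xi0=245.0, xi_inf=3.0, gamma=0.2, q=2, Tf=10.0)
assert xi(0.0, **p) == p["xi0"]          # starts at the initial bound xi_0
assert xi(p["Tf"], **p) == p["xi_inf"]   # reaches xi_inf exactly at t = T_f
vals = [xi(k / 10, **p) for k in range(101)]
assert all(v1 >= v2 for v1, v2 in zip(vals, vals[1:]))  # nonincreasing envelope
```

Unlike the exponential envelopes in [15,16,17,18], $\xi(t)$ reaches $\xi_{\infty}$ exactly at $T_f$ rather than only asymptotically.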
Substituting (9) into (11) yields
$$u_{\min} - u_{i,d} < E_i^u < u_{\max} - u_{i,d}, \qquad v_{\min} - v_{i,d} < E_i^v < v_{\max} - v_{i,d}.$$
To achieve the FOV constraints, the prescribed-time performance function is used to constrain the image error as follows:
$$u_{\min} - u_{i,d} \le -H_i^u \xi_i^u(t) < E_i^u(t) < \xi_i^u(t) \le u_{\max} - u_{i,d},$$
$$v_{\min} - v_{i,d} \le -H_i^v \xi_i^v(t) < E_i^v(t) < \xi_i^v(t) \le v_{\max} - v_{i,d},$$
where $H_i^u = \dfrac{u_{\min} - u_{i,d}}{u_{i,d} - u_{\max}} > 0$ and $H_i^v = \dfrac{v_{\min} - v_{i,d}}{v_{i,d} - v_{\max}} > 0$. The initial bounds are designed as $\xi_{i,0}^u = u_{\max} - u_{i,d} > 0$ and $\xi_{i,0}^v = v_{\max} - v_{i,d} > 0$, and $\xi_{i,\infty}^u$ and $\xi_{i,\infty}^v$ are the user-specified error convergence accuracies. The prescribed-time performance functions can then be rewritten as follows:
$$\xi_i^u(t) = \begin{cases} \big( u_{\max} - u_{i,d} - \xi_{i,\infty}^u \big)\, e^{-\gamma t}\, \dfrac{(T_f - t)^2}{T_f^2} + \xi_{i,\infty}^u, & 0 \le t < T_f, \\ \xi_{i,\infty}^u, & t \ge T_f, \end{cases}$$
$$\xi_i^v(t) = \begin{cases} \big( v_{\max} - v_{i,d} - \xi_{i,\infty}^v \big)\, e^{-\gamma t}\, \dfrac{(T_f - t)^2}{T_f^2} + \xi_{i,\infty}^v, & 0 \le t < T_f, \\ \xi_{i,\infty}^v, & t \ge T_f. \end{cases}$$
At the initial moment, $\xi_i^u(0) = u_{\max} - u_{i,d}$, $-H_i^u \xi_i^u(0) = u_{\min} - u_{i,d}$, $\xi_i^v(0) = v_{\max} - v_{i,d}$, and $-H_i^v \xi_i^v(0) = v_{\min} - v_{i,d}$. It follows that $\xi_i^u(t) \le u_{\max} - u_{i,d}$ and $\xi_i^v(t) \le v_{\max} - v_{i,d}$; combined with (14) and Assumption 1, we obtain
$$-H_i^u \xi_i^u(0) < E_i^u(0) < \xi_i^u(0), \qquad -H_i^v \xi_i^v(0) < E_i^v(0) < \xi_i^v(0).$$
Remark 1.
The above results show that the designed prescribed-time performance function satisfies (17) at the initial moment ($t = 0$), which means that $u_{\min} - u_{i,d} < E_i^u(0) < u_{\max} - u_{i,d}$ and $v_{\min} - v_{i,d} < E_i^v(0) < v_{\max} - v_{i,d}$ also hold, i.e., $u_{\min} < u_i(0) < u_{\max}$ and $v_{\min} < v_i(0) < v_{\max}$. The FOV constraints are therefore satisfied at the initial moment.

3.2. Control Method Design

We define the following normalized image error variables:
$$z_i^u(t) = \frac{E_i^u(t)}{\xi_i^u(t)}, \qquad z_i^v(t) = \frac{E_i^v(t)}{\xi_i^v(t)}.$$
The time-varying constraints (15) are thereby converted into the time-invariant constraints
$$-\underline{z}_i^u < z_i^u(t) < \bar{z}_i^u, \qquad -\underline{z}_i^v < z_i^v(t) < \bar{z}_i^v,$$
where $\underline{z}_i^u = H_i^u$, $\underline{z}_i^v = H_i^v$, and $\bar{z}_i^u = \bar{z}_i^v = 1$.
A fractional barrier Lyapunov function (FBLF) is introduced to transform the image error as follows:
$$V_i^m = \frac{1}{2} (\chi_i^m)^2, \qquad \chi_i^m = \frac{\bar{z}_i^m \underline{z}_i^m z_i^m}{(\bar{z}_i^m - z_i^m)(\underline{z}_i^m + z_i^m)}, \quad m \in \{u, v\},$$
where $\underline{z}_i^m$ and $\bar{z}_i^m$ are positive constants denoting the lower and upper bounds of $z_i^m$, respectively. The time derivative of $\chi_i^m$ is given by
$$\dot{\chi}_i^m = \frac{\bar{z}_i^m \underline{z}_i^m \big( (z_i^m)^2 + \bar{z}_i^m \underline{z}_i^m \big)}{(\bar{z}_i^m - z_i^m)^2 (\underline{z}_i^m + z_i^m)^2}\, \dot{z}_i^m = \frac{\bar{z}_i^m \underline{z}_i^m \big( (z_i^m)^2 + \bar{z}_i^m \underline{z}_i^m \big)}{(\bar{z}_i^m - z_i^m)^2 (\underline{z}_i^m + z_i^m)^2\, \xi_i^m} \Big( \dot{E}_i^m - \frac{E_i^m \dot{\xi}_i^m}{\xi_i^m} \Big).$$
Remark 2.
The FBLF (20) shows that $\chi_i^m = 0$ holds only for $z_i^m = 0$, i.e., $E_i^m \equiv 0$. When $\bar{z}_i^m$ and $\underline{z}_i^m$ are designed as $\underline{z}_i^m = H_i^m$ and $\bar{z}_i^m = 1$, respectively, combining Remark 1 and Assumption 1 gives $-\underline{z}_i^m < z_i^m(0) < \bar{z}_i^m$. When $z_i^m \to \bar{z}_i^m$ or $z_i^m \to -\underline{z}_i^m$, we have $\chi_i^m \to +\infty$ or $\chi_i^m \to -\infty$, respectively, and therefore $V_i^m \to \infty$. This means that whenever $V_i^m$ is bounded, the error $z_i^m$ cannot reach $\bar{z}_i^m$ or $-\underline{z}_i^m$. Consequently, the inequalities $-H_i^m < z_i^m < 1$ and $-H_i^m \xi_i^m < E_i^m < \xi_i^m$ hold, and the FOV constraint is satisfied.
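The barrier behavior described in Remark 2 can be checked numerically: $\chi_i^m$ vanishes only at $z_i^m = 0$ and grows without bound as $z_i^m$ approaches either end of $(-\underline{z}_i^m, \bar{z}_i^m)$. A small sketch with hypothetical bounds $\underline{z} = 0.8$ and $\bar{z} = 1$ (values chosen for illustration):

```python
def chi(z, z_lo, z_hi):
    """Transformed error of the FBLF (20): (z_hi * z_lo * z) /
    ((z_hi - z) * (z_lo + z)), defined on the open interval (-z_lo, z_hi)."""
    return (z_hi * z_lo * z) / ((z_hi - z) * (z_lo + z))

z_lo, z_hi = 0.8, 1.0    # hypothetical bounds: H_i^m = 0.8, upper bound 1
assert chi(0.0, z_lo, z_hi) == 0.0
assert chi(0.999 * z_hi, z_lo, z_hi) > 100.0     # blows up near the upper bound
assert chi(-0.999 * z_lo, z_lo, z_hi) < -100.0   # and near the lower bound
```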
Remark 3.
Compared to existing BLF-based visual servoing methods with FOV constraints, the method proposed in this paper is more general. Symmetric and asymmetric output constraints were implemented with logarithmic BLFs in [15,17], respectively; there, the FOV constraints were validated experimentally but not proven theoretically. In contrast, the proposed method is theoretically proven to satisfy the camera’s FOV constraints by designing a performance function based on the camera’s parameters and integrating it with the FBLF. This enhances the method’s applicability in real scenarios and makes the performance function design more general; the effectiveness of the FOV constraint is also verified experimentally. Different from our previous method combining logarithmic Lyapunov functions with finite-time stability theory [28], this study achieves two methodological improvements: the fixed-time scheme provides a uniform upper bound on the convergence time independent of the initial conditions [30], and the universal (fractional/power-type) barrier function yields simpler closed-form constraint-handling terms than logarithmic formulations, which is implementation-friendly and mitigates the numerical sensitivity associated with derivative singularities near the constraint boundaries [31].
On the basis of (21), a fixed-time and prescribed-time controller with prescribed performance is designed as follows:
$$V_c = \Gamma^{\dagger} \big( \Lambda_h - k_1 \sigma_h - k_2 \eta_h \big)$$
where $\Gamma^{\dagger} \in \mathbb{R}^{6 \times 2n}$ denotes the pseudoinverse of the stacked interaction matrix; specifically, when $\Gamma_h$ has full column rank 6 ($2n \ge 6$, $n \ge 3$), it is computed as $\Gamma^{\dagger} = (\Gamma_h^T \Gamma_h)^{-1} \Gamma_h^T$ [8]. $\sigma_h = [\sigma_1^u, \sigma_1^v, \dots, \sigma_n^u, \sigma_n^v]^T \in \mathbb{R}^{2n}$, with
$$\sigma_i^m = \frac{\xi_i^m \big( \bar{z}_i^m \underline{z}_i^m \big)^2 (z_i^m)^3}{\big( \bar{z}_i^m \underline{z}_i^m + (z_i^m)^2 \big)(\bar{z}_i^m - z_i^m)(\underline{z}_i^m + z_i^m)}.$$
$\Lambda_h = [\Lambda_1^u, \Lambda_1^v, \dots, \Lambda_n^u, \Lambda_n^v]^T \in \mathbb{R}^{2n}$, with $\Lambda_i^m = \dfrac{E_i^m \dot{\xi}_i^m}{\xi_i^m}$. $\eta_h = [\eta_1^u, \eta_1^v, \dots, \eta_n^u, \eta_n^v]^T \in \mathbb{R}^{2n}$, with
$$\eta_i^m = \frac{\xi_i^m \lceil z_i^m \rfloor^{1/2} (\bar{z}_i^m - z_i^m)^{3/2} (\underline{z}_i^m + z_i^m)^{3/2}}{\big( \bar{z}_i^m \underline{z}_i^m \big)^{1/2} \big( \bar{z}_i^m \underline{z}_i^m + (z_i^m)^2 \big)}.$$
$k_1$ and $k_2$ are positive design parameters.
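For illustration, one evaluation of the control law (22) can be written compactly by working with the stacked $2n$-vectors. The NumPy sketch below is our simplified rendering (variable names are ours; it assumes all features are strictly inside their envelopes, per Remark 2, so no denominator vanishes):

```python
import numpy as np

def sig(x, a):
    """Elementwise signed power |x|^a * sgn(x)."""
    return np.sign(x) * np.abs(x) ** a

def control_law(E, xi, xi_dot, z_lo, z_hi, Gamma_h, k1=0.4, k2=0.5):
    """One evaluation of controller (22). E, xi, xi_dot, z_lo, z_hi are
    length-2n arrays (stacked u/v components); Gamma_h is (2n, 6)."""
    z = E / xi                                 # normalized errors, Eq. (18)
    p = z_hi * z_lo                            # product z_bar * z_underbar
    sigma = xi * p ** 2 * z ** 3 / ((p + z ** 2) * (z_hi - z) * (z_lo + z))
    Lam = E * xi_dot / xi
    eta = (xi * sig(z, 0.5) * ((z_hi - z) * (z_lo + z)) ** 1.5
           / (np.sqrt(p) * (p + z ** 2)))
    return np.linalg.pinv(Gamma_h) @ (Lam - k1 * sigma - k2 * eta)
```

Note that at the desired configuration ($E_h = 0$) all three stacked terms vanish, so the commanded camera velocity is zero, consistent with $E_i = 0$ being the closed-loop equilibrium.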

3.3. Stability Analysis

Based on Assumption 1, Assumption 2, and the proposed fixed-time and prescribed-time control method, the following conclusion can be derived.
Theorem 1.
Suppose that Assumptions 1–2 hold. Consider the IBVS dynamics (7) subject to the FOV bounds (11) and the fixed-time and prescribed-time control law (22). Then, the following statements are guaranteed:
  • The image feature error E i m reaches the origin within a fixed time (uniformly bounded with respect to the initial condition).
  • The transformed error satisfies the prescribed performance constraint (19), and hence (14)–(15) are enforced for all t 0 . Consequently, the FOV constraints (11) are satisfied throughout the closed-loop motion.
Proof. 
Consider the following Lyapunov function:
$$V = \frac{1}{2} \chi_h^T \chi_h = \frac{1}{2} \sum_{i=1}^{n} \big( (\chi_i^u)^2 + (\chi_i^v)^2 \big)$$
with $\chi_h = [\chi_1^u, \chi_1^v, \dots, \chi_n^u, \chi_n^v]^T \in \mathbb{R}^{2n}$. Differentiating (24) along (21), (18), and (10) gives
$$\dot{V} = \sum_{i=1}^{n} \sum_{m \in \{u, v\}} \chi_i^m A_i^m \Big( \dot{E}_i^m - \frac{E_i^m \dot{\xi}_i^m}{\xi_i^m} \Big) = \chi_h^T A_h \big( \dot{E}_h - \Lambda_h \big)$$
where
$$A_i^m = \frac{\bar{z}_i^m \underline{z}_i^m \big( (z_i^m)^2 + \bar{z}_i^m \underline{z}_i^m \big)}{(\bar{z}_i^m - z_i^m)^2 (\underline{z}_i^m + z_i^m)^2\, \xi_i^m}, \qquad A_h = \operatorname{diag}\big( A_1^u, A_1^v, \dots, A_n^u, A_n^v \big),$$
and $\dot{E}_h = [\dot{E}_1^u, \dot{E}_1^v, \dots, \dot{E}_n^u, \dot{E}_n^v]^T$.
From (24), using (10) and (22), it follows that
$$\dot{V} = \chi_h^T A_h \big( -k_1 \sigma_h + \Lambda_h - k_2 \eta_h - \Lambda_h \big) = -k_1 \chi_h^T A_h \sigma_h - k_2 \chi_h^T A_h \eta_h = -k_1 \chi_h^T \chi_h^3 - k_2 \chi_h^T \lceil \chi_h \rfloor^{1/2}$$
$$= -k_1 \sum_{i=1}^{n} \big( (\chi_i^u)^4 + (\chi_i^v)^4 \big) - k_2 \sum_{i=1}^{n} \big( |\chi_i^u|^{3/2} + |\chi_i^v|^{3/2} \big) = -k_1 \sum_{i=1}^{n} \Big( \big( (\chi_i^u)^2 \big)^2 + \big( (\chi_i^v)^2 \big)^2 \Big) - k_2 \sum_{i=1}^{n} \Big( \big( (\chi_i^u)^2 \big)^{3/4} + \big( (\chi_i^v)^2 \big)^{3/4} \Big)$$
where $\chi_h^3 = [(\chi_1^u)^3, (\chi_1^v)^3, \dots, (\chi_n^u)^3, (\chi_n^v)^3]^T$ and $\lceil \chi_h \rfloor^{1/2} = [\lceil \chi_1^u \rfloor^{1/2}, \lceil \chi_1^v \rfloor^{1/2}, \dots, \lceil \chi_n^u \rfloor^{1/2}, \lceil \chi_n^v \rfloor^{1/2}]^T$.
From (25) and using Lemma 2, it follows that
$$\sum_{i=1}^{n} \Big( \big( (\chi_i^u)^2 \big)^2 + \big( (\chi_i^v)^2 \big)^2 \Big) \ge (2n)^{-1} \Big( \sum_{i=1}^{n} (\chi_i^u)^2 + (\chi_i^v)^2 \Big)^2, \qquad \sum_{i=1}^{n} \Big( \big( (\chi_i^u)^2 \big)^{3/4} + \big( (\chi_i^v)^2 \big)^{3/4} \Big) \ge \Big( \sum_{i=1}^{n} (\chi_i^u)^2 + (\chi_i^v)^2 \Big)^{3/4}.$$
From (25) and (26), we have
$$\dot{V} \le -4 k_1 (2n)^{-1} \Big( \sum_{i=1}^{n} \tfrac{1}{2} (\chi_i^u)^2 + \tfrac{1}{2} (\chi_i^v)^2 \Big)^2 - 2^{3/4} k_2 \Big( \sum_{i=1}^{n} \tfrac{1}{2} (\chi_i^u)^2 + \tfrac{1}{2} (\chi_i^v)^2 \Big)^{3/4} = -\frac{2 k_1}{n} V^2 - 2^{3/4} k_2 V^{3/4} \triangleq -a_1 V^2 - b_1 V^{3/4}$$
where $a_1 = 2 k_1 / n$ and $b_1 = 2^{3/4} k_2$. By Lemma 1 (with $\alpha = 2$ and $\beta = 3/4$), the origin of the closed-loop IBVS system is globally fixed-time stable, and the settling time is upper-bounded by
$$T \le T_{\max} = \frac{1}{a_1 (\alpha - 1)} + \frac{1}{b_1 (1 - \beta)} = \frac{1}{a_1} + \frac{4}{b_1}.$$
According to Remark 2, when $z_i^m \to \bar{z}_i^m$ or $z_i^m \to -\underline{z}_i^m$, we have $\chi_i^m \to +\infty$ or $\chi_i^m \to -\infty$, and therefore $V_i^m \to \infty$. It follows from Assumption 1 and Remark 1 that the initial state satisfies (17). Since $V$ is differentiable and continuous with $\dot{V} \le 0$, the states satisfy (19) [31]. This completes the proof. □
Remark 4.
It can be seen from the above that all signals in the closed-loop system are uniformly ultimately bounded. According to Remark 2, since $V$ is bounded, the error satisfies the inequality $-H_i^m \xi_i^m < E_i^m < \xi_i^m$ [31]. This implies that inequality (11) holds. Consequently, image features are prevented from leaving the camera’s FOV.
Remark 5.
To the best of our knowledge, this work is the first to combine a prescribed-time performance function with fixed-time stability theory in the context of image-based visual servoing (IBVS) and to provide an explicit upper bound on the settling time that is expressed directly in terms of controller gains. Related IBVS works [15,16,17,18] and prescribed-time control research [21,22,23] address performance constraints or prescribed performance but either (i) guarantee only asymptotic convergence to a predefined range, (ii) use empirical tuning for the prescribed-time parameters, or (iii) consider symmetric performance bounds that do not satisfy asymmetric camera FOV requirements. In contrast, our contribution is three-fold: (i) we design an asymmetric prescribed-time performance function tailored to camera FOVs; (ii) we integrate it with fixed-time stability analysis to obtain a computable upper bound T f = T max determined by controller gains; and (iii) we validate the approach with both comparative experiments and a real-world bolt alignment task, and we provide parameter selection guidance and sensitivity analysis.

4. Experimental Results

In this section, we conduct three sets of experiments to verify the effectiveness of the proposed control method in FOV constraints, the superiority of control performance, and the applicability of the experiments for real-world applications. Finally, an HD video of the experimental demonstrations is available at https://youtu.be/RnZMddgVcoA (accessed on 11 December 2025).

4.1. Experimental Setup

The experimental platform (Figure 1) comprises a 6 DoF UR5 collaborative manipulator and an Intel RealSense D435i camera. A stationary AprilTag (family 36h11) is used as the visual target, providing four corner features ( n = 4 ) that form a square of side length 7 cm . These corners are selected as the image features, and their real-time extraction is implemented using ViSP, an open-source visual servoing library developed by the IRISA-Inria Rainbow team [32].
Case 1: FOV Constraint Experiment. This case evaluates the proposed controller against a conventional IBVS scheme. The image feature trajectories are reported to assess whether the features remain within the admissible region, thereby validating the effectiveness of the imposed FOV constraints.
Case 2: Comparison Experiment. These experiments are carried out between the proposed control method and other control methods to verify the advantages of the proposed control method.
Case 3: Deployment in the Real-World Environment. A bolt alignment task for OCS components was performed to verify the applicability of our method in practice.
Timing and latency: The UR control loop is event-triggered by the arrival of each processed vision update after image acquisition and feature extraction. Thus, the controller update frequency is synchronized with the vision pipeline and equals the achieved vision update rate. The visual processing latency is task-dependent (e.g., ViSP-based AprilTag corner extraction versus YOLOv7-based bolt corner detection). In our experiments, the achieved closed-loop rates are 30 fps/30 Hz for the AprilTag setup and approximately 10 fps/10 Hz for the bolt setup; these effective rates are reported after visual processing and therefore reflect the practical perception and I/O latency in the closed loop.

4.2. FOV Constraint Experiment

In Case 1, a comparative study is conducted to validate the proposed FOV-constrained controller against the conventional IBVS scheme [8]. An Intel RealSense D435i depth camera is used with an image resolution of $640 \times 480$ pixels. The initial camera-to-target distance is approximately $z(0) \approx 0.55$ m, and the desired distance at convergence is approximately $z_d \approx 0.27$ m. The depth $z_i$ used in the interaction matrix is measured online by the camera. The number of image feature points is four, i.e., $n = 4$. The FOV bounds in (11) are selected as $u_{\max} = 460$, $u_{\min} = 150$, $v_{\max} = 390$, and $v_{\min} = 80$. The gains in (22)–(23) are set to $k_1 = 0.4$ and $k_2 = 0.5$. The performance function parameters in (13) are chosen as $\xi_{i,\infty}^u = \xi_{i,\infty}^v = 3$, $\gamma = 0.2$, $q = 2$, and $T_f = 10$, where $T_f = T_{\max} = \frac{n}{2 k_1} + \frac{4}{2^{3/4} k_2} \approx 10$ and $n$ signifies the total number of image feature points ($n = 4$ in the experiments of this paper).
Parameter selection guideline: The performance function ξ ( t ) in (12) specifies a time-varying envelope satisfying ξ ξ ( t ) ξ 0 for t [ 0 , T f ) and ξ ( T f ) = ξ , which is used to constrain the evolution of image feature errors under the field-of-view (FOV) limits. The parameters in (12)–(13) are selected according to the following reproducible procedure.
(1) Initial bounds ξ i , 0 u and ξ i , 0 v (feasibility at t = 0 ). To ensure that the prescribed constraints are satisfied from the initial time, the initial bounds are computed from the admissible image region and the desired feature locations. Specifically, for the i-th feature point,
$$\xi_{i,0}^u = u_{\max} - u_{i,d} > 0, \qquad \xi_{i,0}^v = v_{\max} - v_{i,d} > 0,$$
where ( u i , d , v i , d ) denotes the desired image plane coordinates. Here, u min , u max and v min , v max denote the lower and upper bounds (in pixels) of the admissible image plane coordinates, which are determined by the camera resolution (and the predefined admissible region under the FOV constraint).
(2) Terminal bounds $\xi_{i,\infty}^u$ and $\xi_{i,\infty}^v$. These parameters specify the terminal accuracy bounds of the corresponding image errors. Smaller values impose tighter steady-state accuracy but may increase sensitivity to measurement noise and feature jitter; thus, $\xi_{i,\infty}^u$ and $\xi_{i,\infty}^v$ should be selected according to the desired final precision and sensing quality.
(3) Prescribed time T f . The prescribed time T f determines when the envelope reaches ξ . In this work, T f is chosen using the fixed-time upper bound derived in the stability analysis, i.e., T f = T max , which can be computed directly from the controller gains and the number of feature points n (with n = 4 in our experiments).
(4) Shape parameters γ and q. The parameters γ and q shape how fast ξ ( t ) shrinks over [ 0 , T f ) . From (13), increasing γ and/or q generally leads to a more aggressive contraction (larger | ξ ˙ ( t ) | ) over part of the interval, which can accelerate transients but may require larger control effort and amplify noise effects. Therefore, γ and q can be tuned to balance convergence speed and robustness while preserving the prescribed-time property.
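Following step (3), the prescribed time used in the experiments can be reproduced from the controller gains alone; a quick check with the Case 1 values:

```python
n, k1, k2 = 4, 0.4, 0.5        # Case 1: four features, gains from Section 4.2
a1 = 2.0 * k1 / n              # a_1 in the proof of Theorem 1
b1 = 2.0 ** 0.75 * k2          # b_1 = 2^(3/4) * k_2
T_max = 1.0 / a1 + 4.0 / b1    # Lemma 1 bound with alpha = 2, beta = 3/4
print(round(T_max, 2))         # prints 9.76, consistent with T_f = 10
```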
A 36h11 AprilTag with a side length of 7 cm is employed as the visual target, and the corresponding desired feature coordinates are given by
s* = [ 417  215  215  417
       157  157  359  359 ]
(top row: u coordinates; bottom row: v coordinates, in pixels)
The initial pixel coordinates of the detected features are
s(0) = [ 457  381  297  373
         306  222  297  383 ]
Figure 3 depicts the motion trajectories of the four image feature points within the camera’s FOV. These trajectories illustrate the movement of the image feature points from their initial positions (green crosses) to their target positions (red crosses). Figure 3a shows the trajectories produced by a traditional control method [8], whereas Figure 3b presents those obtained under the proposed control method’s constraints. To further assess FOV constraint satisfaction, Figure 4 shows the feature trajectories in pixel coordinates. The dashed black boundary indicates the imposed limits specified by u min , u max , v min , and v max . Additionally, the black solid lines indicate the camera’s maximum resolution (640 × 480 pixels), representing the physical limitations of its FOV. As shown in Figure 4a, the traditional method causes image feature points 1 and 2 to exceed the constraint boundaries, leading to the failure of the visual servoing task under constraint requirements. By contrast, Figure 4b demonstrates that enforcing the prescribed performance constraints keeps the image features strictly within the specified FOV. This strongly confirms the effectiveness of the proposed method.
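The FOV-satisfaction check used to interpret Figure 4 amounts to verifying that every sample of every feature trajectory lies inside the admissible pixel region. The helper below is an illustrative utility (not part of the paper's controller) under the 640 × 480 bounds used here.

```python
import numpy as np

# Illustrative check (assumed helper): does a pixel trajectory stay inside the
# admissible region [u_min, u_max] x [v_min, v_max]?
def inside_fov(traj_uv, u_min=0, u_max=640, v_min=0, v_max=480):
    traj = np.asarray(traj_uv, dtype=float)   # shape (N, 2): rows are (u, v)
    ok_u = (traj[:, 0] >= u_min) & (traj[:, 0] <= u_max)
    ok_v = (traj[:, 1] >= v_min) & (traj[:, 1] <= v_max)
    return bool(np.all(ok_u & ok_v))

print(inside_fov([(457, 306), (417, 157)]))   # True: both samples inside
print(inside_fov([(457, 306), (655, 157)]))   # False: u = 655 exceeds 640
```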

4.3. Comparison Experiment

To evaluate the proposed approach, the proposed fixed-time and prescribed-time controller (FTPT) (22) is compared with the fixed-time controller using a common performance function (FT) (32), the symmetric constant log-type controller (SCBLF) [15], the asymmetric log-type prescribed performance controller (PPC) [16], and the tan-type BLF controller (TBL) [18].
V_c = Γ ( k_1 σ_h Λ_h + k_2 η_h )
where ξ_i^m(t) = (ξ_{i,0}^m - ξ_{i,∞}^m) exp(-γ t) + ξ_{i,∞}^m, m ∈ {u, v}. The methodology and parameters remain the same as those of the proposed control method, except for this modified performance function.
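For contrast with the prescribed-time envelope, the FT baseline's common performance function (as reconstructed from the expression above) can be sketched as follows; note that it approaches its steady-state bound only asymptotically rather than at a prescribed time.

```python
import numpy as np

# Exponential performance function of the FT baseline (reconstructed form):
# xi_i^m(t) = (xi_{i,0}^m - xi_{i,inf}^m) exp(-gamma t) + xi_{i,inf}^m, m in {u, v}.
def xi_exp(t, xi0, xi_inf, gamma=0.2):
    return (xi0 - xi_inf) * np.exp(-gamma * t) + xi_inf

# At any finite t the bound stays strictly above xi_inf: no prescribed-time
# convergence, unlike the envelope used by the proposed FTPT method.
print(xi_exp(0.0, 223.0, 3.0))          # 223.0 at t = 0
print(xi_exp(10.0, 223.0, 3.0) > 3.0)   # True: still above the terminal bound
```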
To implement visual servoing, a 36h11-family AprilTag with a side length of 7 cm is selected. The desired feature point coordinates are defined as follows:
s* = [ 417  215  215  417
       157  157  359  359 ]
(top row: u coordinates; bottom row: v coordinates, in pixels)
The initial pixel coordinates of the detected features are as follows:
s(0) = [ 168   58   25  137
          52   22  130  160 ]
With the Intel RealSense D435i operating at 640 × 480 resolution, the pixel bounds are set to u_min = 0, u_max = 640, v_min = 0, and v_max = 480. In (13), the performance function parameters are chosen as ξ_{i,∞}^u = ξ_{i,∞}^v = 3, γ = 0.2, q = 2, and T_f = T_max = n/(2 k_1) + 4/(2^{3/4} k_2) ≈ 10, where n denotes the total number of image feature points; n = 4 is used in the experiments in this paper. This prescribes the steady-state error in advance to a maximum of three pixels. In (22), the controller gains are set to k_1 = 0.4 and k_2 = 0.5.
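Assuming the settling-time bound takes the reconstructed form T_max = n/(2 k_1) + 4/(2^{3/4} k_2), the prescribed time follows directly from the experimental gains:

```python
# Hypothetical reconstruction of the fixed-time settling bound used to pick T_f.
# With the experimental gains it evaluates to roughly 10 s.
n, k1, k2 = 4, 0.4, 0.5
T_max = n / (2 * k1) + 4 / (2 ** 0.75 * k2)
print(round(T_max, 2))   # 9.76, i.e., T_f = T_max ≈ 10 s
```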
The image plane trajectories of the four AprilTag features obtained with different control methods are presented in Figure 5. Green crosses indicate the initial feature positions, whereas red crosses denote the desired positions. As shown in Figure 6, the u and v errors of all four features enter the predefined admissible band within the prescribed time under the proposed method, thereby ensuring the desired transient behavior and FOV satisfaction. The proposed FTPT method also achieves a faster convergence rate than the other control methods. To quantify the overall tracking performance, the total image feature error, evaluated by the Euclidean norm E_t = √((E_1^u)² + (E_1^v)² + ⋯ + (E_4^u)² + (E_4^v)²), is plotted in Figure 7. As shown in Figure 7, the error curves in blue, red, cyan, green, and black reach a neighborhood of zero at 4.9 s, 6.3 s, 8.8 s, 11.1 s, and 11.9 s, respectively. For ease of comparison, the key experimental results are summarized in Table 2. The control methods (22) and (32) with fixed-time stability converge faster than the other methods [15,16,18] with asymptotic stabilization. Furthermore, the blue and red error curves show that the proposed FTPT method converges faster than the FT method: FTPT incorporates both fixed-time and prescribed-time theories, while FT uses only fixed-time theory. The experimental results verify that the proposed method has better transient performance.
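The total error metric of Figure 7 is a plain Euclidean norm of the stacked pixel errors; a minimal computation using the desired and initial coordinates given above:

```python
import numpy as np

# Total image feature error E_t: Euclidean norm of the stacked per-feature
# pixel errors (E_i^u, E_i^v), i = 1..4, as used for Figure 7.
def total_error(s, s_star):
    e = np.asarray(s, dtype=float) - np.asarray(s_star, dtype=float)
    return float(np.linalg.norm(e))

s_star = [(417, 157), (215, 157), (215, 359), (417, 359)]  # desired features
s0 = [(168, 52), (58, 22), (25, 130), (137, 160)]          # initial features
print(total_error(s_star, s_star))   # 0.0 at the desired configuration
print(total_error(s0, s_star) > 0)   # True at the initial configuration
```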

4.4. Deployment in the Real-World Environment

The proposed method was applied to the task of aligning bolts on overhead contact system (OCS) components. The bolt components and the alignment scenario are shown in Figure 8a,b, respectively. The relative transform between the camera frame O_c and the end-effector/tool frame O_e is obtained by standard hand-eye calibration. The M12 bolt head is hexagonal with an across-flats width of 18 mm, and the sleeve has a hexagonal socket with an across-flats width of 19 mm. The goal of the task is to align the sleeve at the end of the manipulator with the bolts of the OCS components, which places high demands on the relative positioning between the sleeve and the bolt. Visual servoing control was therefore used to accomplish the alignment task.
In the practical experiment, the YOLOv7-based corner detection algorithm [33] was applied to the OCS component bolts, replacing the earlier AprilTag recognition algorithm. The algorithm marks four corner points of the bolt. Although the bolt head is hexagonal, four consistently detectable and ordered corner points are used for robust real-time servoing; using all six corners is possible but less reliable under reflections and may introduce correspondence switching. Because the bolt corner detection algorithm requires a higher camera resolution, the Intel RealSense D435i is operated at 1920 × 1080, and the parameters of the prescribed-time performance function are accordingly specified as u_max = 1920, u_min = 0, v_max = 1080, and v_min = 0. The robot controller runs on a computer with an i5-13490F CPU, 32 GB of RAM, and an NVIDIA GeForce RTX 4070 GPU.
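Keeping the four corners in a fixed order is what prevents the correspondence switching mentioned above. One common ordering step, shown here as a hypothetical helper (the cited detector may order corners differently), is to sort the detected corners by angle around their centroid:

```python
import math

# Hypothetical ordering step (not from [33]): sort four detected bolt corners
# by angle around their centroid so the feature-to-desired correspondence
# stays fixed between frames.
def order_corners(corners):
    cx = sum(u for u, _ in corners) / len(corners)
    cy = sum(v for _, v in corners) / len(corners)
    return sorted(corners, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

pts = [(10, 0), (0, 10), (10, 10), (0, 0)]   # detector output in arbitrary order
print(order_corners(pts))                    # [(0, 0), (10, 0), (10, 10), (0, 10)]
```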
Figure 9 shows the experimental process of bolt visual servoing alignment using the proposed control method, where the bottom-left pictures in Figure 9a–c show the camera's field of view. Figure 9a–c illustrate the visual servoing control sequence during a robotic bolt alignment task using a sleeve. This validates that the bolt remains within the camera's FOV during the entire servoing process, preventing failures due to FOV violation. To demonstrate the visual servoing results more intuitively and to verify their validity, a sleeve-to-bolt alignment step was added after visual servoing ended. First, the 6D pose of the camera (O_c) was converted to the 6D pose of the manipulator end-effector (O_e) through the hand-eye calibration relationship. The robot was then controlled to bring the end-effector to this pose. Finally, the end-effector performed the linear alignment motion. Figure 9d shows the sleeve successfully aligned with the bolt, validating the applicability of visual servoing to the bolt alignment task. Snapshots of the image feature point trajectories from Figure 9a,c within the camera's field of view are enlarged in Figure 10a,b. The four corners of the bolt were selected as image feature points, represented by green crosses; the red crosses represent the desired image feature points, and the green curves represent the feature point trajectories. In Figure 10b, the red and green crosses nearly overlap, indicating that visual servoing was successfully completed. The feature point trajectories in the real experiments are not as smooth as in theory, owing to slight jitter in the bolt corner points detected in real time.
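The camera-to-end-effector pose conversion described above can be sketched with homogeneous transforms. The frame names and the identity hand-eye offset below are illustrative assumptions, not the calibrated values from the experiment.

```python
import numpy as np

# Sketch of the pose conversion step, assuming 4x4 homogeneous transforms:
# T_base_cam = T_base_ee @ T_ee_cam, where T_ee_cam is the fixed hand-eye
# calibration between end-effector frame O_e and camera frame O_c. Hence:
def ee_pose_from_camera_pose(T_base_cam, T_ee_cam):
    return T_base_cam @ np.linalg.inv(T_ee_cam)

# Example with an identity hand-eye offset: the two poses coincide.
T_base_cam = np.eye(4)
T_base_cam[:3, 3] = [0.3, 0.0, 0.5]     # camera 0.3 m forward, 0.5 m up (example)
T_ee_cam = np.eye(4)
T_ee = ee_pose_from_camera_pose(T_base_cam, T_ee_cam)
print(np.allclose(T_ee, T_base_cam))    # True when T_ee_cam = I
```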
It is worth noting that when visual servoing control is applied to actual robotic systems, it inevitably encounters external noise factors such as variations in lighting, camera calibration errors, and image sensor noise. In this application experiment, the switch from AprilTag markers to bolts as the detection target resulted in poorer image quality: the detection frame rate decreased from 30 frames per second to 11 frames per second, which affected the visual servoing control to a certain degree. These challenges remain key areas for our future research. Ultimately, the proposed control method enabled the robot to successfully align the sleeve with the bolt, validating the effectiveness of this approach.
Despite the encouraging experimental results, several limitations of the proposed approach should be acknowledged. First, the method assumes that all selected visual features are initially within the camera field of view; if features lie outside the field of view at initialization, additional re-detection or exploratory motion is required before the constrained IBVS law can be engaged. Second, the closed-loop performance depends on reliable perception and accurate depth and interaction matrix computation; illumination changes, motion blur, and intermittent detection may degrade feature localization accuracy and introduce measurement jitter that propagates to the commanded motion. Third, the use of the pseudoinverse of the stacked interaction matrix presumes a non-degenerate configuration with sufficient numerical conditioning; near-singular feature geometries or the temporary loss of some features can deteriorate conditioning and lead to degraded transients or slower convergence. Moreover, although the proposed design aims to keep valid features within a predefined admissible image region, partial target exit from the field of view may still occur in practice due to missed detections, tracking drift, or abrupt motions near image boundaries; in such cases, the constrained IBVS law should be re-engaged after feature recovery restores valid measurements within the admissible region. Finally, while the prescribed-time parameter can be systematically bounded via fixed-time analysis, practical gain tuning still involves a trade-off between convergence speed and sensitivity to measurement noise, and overly aggressive gains may amplify jitter in real-world sensing. 
To further improve robustness under realistic sensing constraints, an enhanced visual servoing framework is currently under development that integrates a dual-rate perception control scheme, prediction-assisted feature and state estimation, and robust feature management mechanisms; comprehensive treatment and experimental validation will be reported in future work.
Remark 6.
Failure cases and discussion. We observed representative failure/near-failure cases when the visual features approach the boundary of the camera field of view (FOV) or when the perception quality degrades. First, in the FOV-constrained scenario, the conventional IBVS baseline may drive some feature points outside the admissible region, causing feature loss and task failure (see Figure 4a). Second, in real-world deployment, the detected feature corners may exhibit jitter due to illumination changes, motion blur, and sensor noise. This effect becomes more evident when the detection frame rate decreases (e.g., from 30 fps to 11 fps in the bolt alignment experiment), which may induce oscillations in the image error and can potentially lead to transient FOV violations or degraded convergence. The proposed method alleviates these issues by enforcing asymmetric time-varying output constraints through the prescribed-time performance function and barrier Lyapunov design, which keeps features inside the predefined range throughout the servoing process. Nevertheless, when perception updates are excessively sparse or detection intermittently fails, closed-loop performance may deteriorate; this motivates future work on perception-robust feature tracking and outlier rejection.

5. Conclusions

In this study, a novel fixed-time and prescribed-time control method is proposed for IBVS systems with FOV constraints. The method integrates a prescribed-time performance function and an asymmetric BLF to achieve asymmetric time-varying output constraints. This ensures that the image features remain within the predefined range, thereby addressing the FOV constraint problem in visual servoing applications. The combination of the prescribed-time performance function and fixed-time stability theory guarantees that the tracking error converges to a predetermined range within the prescribed time. Additionally, it converges to zero in fixed time, significantly improving the convergence rate of tracking errors. Finally, comparative experiments demonstrate the effectiveness and superiority of the method. The bolt alignment practical application experiments verified the method’s applicability. In the future, the method can be further validated in more dynamic environments by integrating advanced perception technologies to achieve broader applications.

Author Contributions

Conceptualization, J.L., L.M., Y.C., Y.W. and D.W.; methodology, J.L., N.Q., Y.W. and D.W.; software, N.Q. and Y.C.; validation, J.L.; formal analysis, J.L.; writing—original draft, J.L.; writing—review and editing, J.L. and L.M.; visualization, J.L.; supervision, L.M.; project administration, L.M. and D.H.; funding acquisition, L.M. and D.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Sichuan Province Science and Technology Support Program under Grant 2020ZDZX0015.

Data Availability Statement

The data used to support the findings of this study are available from the first author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, Y.; Wu, Y.; Zhang, Z.; Miao, Z.; Zhong, H.; Zhang, H.; Wang, Y. Image-Based Visual Servoing of Unmanned Aerial Manipulators for Tracking and Grasping a Moving Target. IEEE Trans. Ind. Inform. 2023, 19, 8889–8899. [Google Scholar] [CrossRef]
  2. Xu, F.; Zhang, Y.; Sun, J.; Wang, H. Adaptive Visual Servoing Shape Control of a Soft Robot Manipulator Using Bézier Curve Features. IEEE/ASME Trans. Mechatron. 2023, 28, 945–955. [Google Scholar] [CrossRef]
  3. Miao, Z.; Zhong, H.; Lin, J.; Wang, Y.; Chen, Y.; Fierro, R. Vision-Based Formation Control of Mobile Robots with FOV Constraints and Unknown Feature Depth. IEEE Trans. Contr. Syst. Technol. 2021, 29, 2231–2238. [Google Scholar] [CrossRef]
  4. Liu, Z.; Wang, K.; Liu, D.; Wang, Q.; Tan, J. A Motion Planning Method for Visual Servoing Using Deep Reinforcement Learning in Autonomous Robotic Assembly. IEEE/ASME Trans. Mechatron. 2023, 28, 3513–3524. [Google Scholar] [CrossRef]
  5. Ribeiro, E.G.; Mendes, R.Q.; Terra, M.H.; Grassi, V. Second-Order Position-Based Visual Servoing of a Robot Manipulator. IEEE Robot. Autom. Lett. 2024, 9, 207–214. [Google Scholar] [CrossRef]
  6. Rotithor, G.; Salehi, I.; Tunstel, E.; Dani, A.P. Stitching Dynamic Movement Primitives and Image-Based Visual Servo Control. IEEE Trans. Syst. Man Cybern. 2023, 53, 2583–2593. [Google Scholar] [CrossRef]
  7. Deng, L.; Janabi-Sharifi, F.; Wilson, W.J. Hybrid motion control and planning strategies for visual servoing. IEEE Trans. Ind. Electron. 2005, 52, 1024–1040. [Google Scholar] [CrossRef]
  8. Chaumette, F.; Hutchinson, S. Visual servo control - Part I: Basic approaches. IEEE Robot. Automat. Mag. 2006, 13, 82–90. [Google Scholar] [CrossRef]
  9. Miao, Z.; Zhong, H.; Wang, Y.; Zhang, H.; Tan, H.; Fierro, R. Low-Complexity Leader-Following Formation Control of Mobile Robots Using Only FOV-Constrained Visual Feedback. IEEE Trans. Ind. Inform. 2022, 18, 4665–4673. [Google Scholar] [CrossRef]
  10. He, S.; Xu, Y.; Guan, Y.; Li, D.; Xi, Y. Synthetic Robust Model Predictive Control with Input Mapping for Constrained Visual Servoing. IEEE Trans. Ind. Electron. 2023, 70, 9270–9280. [Google Scholar] [CrossRef]
  11. Paolillo, A.; Forgione, M.; Piga, D.; Hoffman, E.M. Fast predictive visual servoing: A reference governor-based approach. Control Eng. Pract. 2023, 136, 105521. [Google Scholar] [CrossRef]
  12. Prakash, R.; Behera, L. Neural Optimal Control for Constrained Visual Servoing via Learning From Demonstration. IEEE Trans. Autom. Sci. Eng. 2024, 21, 2987–3000. [Google Scholar] [CrossRef]
  13. Wang, R.; Zhang, X.; Fang, Y.; Li, B. Virtual-Goal-Guided RRT for Visual Servoing of Mobile Robots with FOV Constraint. IEEE Trans. Syst. Man Cybern. 2022, 52, 2073–2083. [Google Scholar] [CrossRef]
  14. Keshmiri, M.; Xie, W.F. Image-Based Visual Servoing Using an Optimized Trajectory Planning Technique. IEEE/ASME Trans. Mechatron. 2017, 22, 359–370. [Google Scholar] [CrossRef]
  15. Jiang, J.; Wang, Y.; Jiang, Y.; Xie, H.; Tan, H.; Zhang, H. A Robust Visual Servoing Controller for Anthropomorphic Manipulators with Field-of-View Constraints and Swivel-Angle Motion: Overcoming System Uncertainty and Improving Control Performance. IEEE Robot. Autom. Mag. 2022, 29, 104–114. [Google Scholar] [CrossRef]
  16. Bechlioulis, C.P.; Heshmati-alamdari, S.; Karras, G.C.; Kyriakopoulos, K.J. Robust Image-Based Visual Servoing with Prescribed Performance Under Field of View Constraints. IEEE Trans. Robot. 2019, 35, 1063–1070. [Google Scholar] [CrossRef]
  17. Jiang, J.; Wang, Y.; Jiang, Y.; Feng, Y.; Zhong, H.; Yang, C. Robust Image-Based Adaptive Fuzzy Controller for Guarantee Field of View with Uncertain Dynamics. IEEE Trans. Fuzzy Syst. 2024, 32, 1564–1575. [Google Scholar] [CrossRef]
  18. Wang, D.; Lin, J.; Ma, L.; Huang, D.; Wu, Y. Image-Based Visual-Admittance Control with Prescribed Performance of Manipulators in Feature Space. IEEE Trans. Ind. Electron. 2025, 72, 5060–5070. [Google Scholar] [CrossRef]
  19. Ning, B.; Han, Q.L.; Zuo, Z.; Ding, L.; Lu, Q.; Ge, X. Fixed-Time and Prescribed-Time Consensus Control of Multiagent Systems and Its Applications: A Survey of Recent Trends and Methodologies. IEEE Trans. Ind. Inform. 2023, 19, 1121–1135. [Google Scholar] [CrossRef]
  20. Sun, Z.Y.; Li, J.J.; Wen, C.; Chen, C.C. Adaptive Event-Triggered Prescribed-Time Stabilization of Uncertain Nonlinear Systems with Asymmetric Time-Varying Output Constraint. IEEE Trans. Autom. Control 2024, 69, 5454–5461. [Google Scholar] [CrossRef]
  21. Liu, D.; Liu, Z.; Chen, C.L.P.; Zhang, Y. Prescribed-time containment control with prescribed performance for uncertain nonlinear multi-agent systems. J. Franklin Inst. 2021, 358, 1782–1811. [Google Scholar] [CrossRef]
  22. Wang, J.; Liu, J.; Li, Y.; Chen, C.L.P.; Liu, Z.; Li, F. Prescribed Time Fuzzy Adaptive Consensus Control for Multiagent Systems with Dead-Zone Input and Sensor Faults. IEEE Trans. Autom. Sci. Eng. 2024, 21, 4016–4027. [Google Scholar] [CrossRef]
  23. Chen, G.; Dong, J. Adaptive Prescribed Time Fuzzy Control of Interconnected Nonlinear Systems and Its Applications: A Compensation-Based Approach. IEEE Trans. Autom. Sci. Eng. 2025, 22, 6944–6953. [Google Scholar] [CrossRef]
  24. Meng, Q.; Ma, Q.; Shi, Y. Adaptive Fixed-Time Stabilization for a Class of Uncertain Nonlinear Systems. IEEE Trans. Autom. Control 2023, 68, 6929–6936. [Google Scholar] [CrossRef]
  25. Sharma, R.S.; Nair, R.R.; Agrawal, P.; Behera, L.; Subramanian, V.K. Robust Hybrid Visual Servoing Using Reinforcement Learning and Finite-Time Adaptive FOSMC. IEEE Syst. J. 2019, 13, 3467–3478. [Google Scholar] [CrossRef]
  26. Miranda-Poya, A.; Castañeda, H.; Wang, H. Fixed-Time Differentiator-Based Adaptive Nonsingular Fast Terminal Image-Based Visual Servoing for a Quadrotor UAV Subject to Turbulent Wind. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 2807–2818. [Google Scholar] [CrossRef]
  27. Dai, S.L.; Lu, K.; Jin, X. Fixed-Time Formation Control of Unicycle-Type Mobile Robots with Visibility and Performance Constraints. IEEE Trans. Ind. Electron. 2021, 68, 12615–12625. [Google Scholar] [CrossRef]
  28. Lin, J.; Ma, L.; Huang, D.; Wang, D.; Lu, W. Finite-time image-based visual servoing with field of view and performance constraints. Control Eng. Pract. 2025, 164, 106433. [Google Scholar] [CrossRef]
  29. Polyakov, A. Nonlinear Feedback Design for Fixed-Time Stabilization of Linear Control Systems. IEEE Trans. Autom. Control 2012, 57, 2106–2110. [Google Scholar] [CrossRef]
  30. Zuo, Z. Nonsingular fixed-time consensus tracking for second-order multi-agent networks. Automatica 2015, 54, 305–309. [Google Scholar] [CrossRef]
  31. Jin, X. Adaptive Fixed-Time Control for MIMO Nonlinear Systems with Asymmetric Output Constraints Using Universal Barrier Functions. IEEE Trans. Autom. Control 2019, 64, 3046–3053. [Google Scholar] [CrossRef]
  32. Marchand, E.; Spindler, F.; Chaumette, F. ViSP for visual servoing: A generic software platform with a wide class of robot control skills. IEEE Robot. Autom. Mag. 2005, 12, 40–52. [Google Scholar] [CrossRef]
  33. Chen, Y.; Wei, M.; Ma, J.; Qin, N.; Huang, D. Super-Resolution Integrated Semantic Segmentation Method for the Corner Position of Catenary Bolt. In Proceedings of the 2024 14th International Conference on Information Science and Technology (ICIST), Chengdu, China, 6–9 December 2024; pp. 90–99. [Google Scholar]
Figure 1. Coordinate frames used in IBVS experiments.
Figure 2. Visual servoing control scheme.
Figure 3. The trajectories of the image feature points in the camera’s FOV are illustrated through snapshots at three time instances: t = 0 s, t = 4 s, and t = 10 s. Green crosses: current feature points; red crosses: desired feature points. (a) The traditional IBVS P-control baseline [8] with proportional gain λ = 0.5 (ViSP setting [32]). (b) The proposed control method.
Figure 4. The pixel plane trajectories of the four image features. The black dashed boundary denotes the prescribed admissible region, and the black solid boundary indicates the camera FOV. (a) The traditional IBVS control method [8]. (b) The proposed control method.
Figure 5. The image plane trajectories of the feature points in a comparative experiment, illustrating their evolution within the camera FOV under different control methods. (a) The experiment’s initial position. (b) The proposed control method (FTPT). (c) The fixed-time control method (FT) (32). (d) The tan-type BLF control method (TBL) [18]. (e) The symmetric constant log-type control method (SCBLF) [15]. (f) The asymmetric log-type prescribed performance control method (PPC) [16].
Figure 6. (a) Error trajectories of the four image features in the u pixel direction. (b) Error trajectories of the four image features in the v pixel direction. The blue, red, cyan, green, and black curves correspond to the proposed fixed-time and prescribed-time controller (FTPT) (22), the fixed-time controller with a common performance function (FT) (32), the tan-type BLF controller (TBL) [18], the symmetric constant log-type controller (SCBLF) [15], and the asymmetric log-type prescribed performance controller (PPC) [16], respectively. The pink line represents the prescribed-time performance function (12) proposed in this paper.
Figure 7. The time evolution of the total image feature error E t . The blue, red, cyan, green, and black curves correspond to the proposed fixed-time and prescribed-time controller (FTPT) (22), the fixed-time controller with a common performance function (FT) (32), the tan-type BLF controller (TBL) [18], the symmetric constant log-type controller (SCBLF) [15], and the asymmetric log-type prescribed performance controller (PPC) [16], respectively.
Figure 8. (a) Bolt and sleeve components. (b) Bolt alignment scene.
Figure 9. Bolt alignment process. (a) Initial visual servoing position. (b) Position during visual servoing. (c) Visual servoing ending position. (d) Aligning sleeve to bolt.
Figure 10. Snapshots of trajectories of image feature points in camera’s FOV for practical experiment. (a) Initial visual servoing position. (b) Visual servoing ending position.
Table 1. Notation for image space variables, performance bounds, and controller parameters.
Symbol        Description
Image space coordinates:
u_min         Lower bound of u (pixels)
u_max         Upper bound of u (pixels)
v_min         Lower bound of v (pixels)
v_max         Upper bound of v (pixels)
u_{i,d}       Desired u coordinate of feature i
v_{i,d}       Desired v coordinate of feature i
Performance function parameters:
ξ_{i,0}^u     Initial performance bound for u
ξ_{i,0}^v     Initial performance bound for v
ξ_{i,∞}^u, ξ_{i,∞}^v   Steady-state bounds for u and v
γ             Convergence rate
T_f           Prescribed time
T_max         Maximum settling time
Controller-related quantities:
Γ_i           Image Jacobian of feature i
s_i           2-D image feature vector
k_1           Control gain
k_2           Fixed-time control gain
Table 2. Quantitative comparison of convergence time and steady-state error.
Controller    Convergence Time (s)    Steady-State Error (pixels)
FTPT          4.9                     0.31
FT            6.3                     0.34
TBL           8.8                     0.38
SCBLF         11.1                    0.47
PPC           11.9                    0.50

Share and Cite

MDPI and ACS Style

Lin, J.; Ma, L.; Huang, D.; Qin, N.; Chen, Y.; Wang, Y.; Wang, D. Fixed-Time and Prescribed-Time Image-Based Visual Servoing with Asymmetric Time-Varying Output Constraint. Robotics 2025, 14, 190. https://doi.org/10.3390/robotics14120190


