Article

Event-Triggered Data-Driven Robust Model Predictive Control for an Omni-Directional Mobile Manipulator

School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
*
Author to whom correspondence should be addressed.
Actuators 2026, 15(4), 185; https://doi.org/10.3390/act15040185
Submission received: 4 March 2026 / Revised: 25 March 2026 / Accepted: 26 March 2026 / Published: 27 March 2026
(This article belongs to the Section Control Systems)

Abstract

Omni-directional mobile manipulators (OMMs) are inherently nonlinear, strongly coupled, and multiple-input multiple-output systems, posing significant challenges in developing accurate mechanistic models due to their complexity. Koopman operator theory offers a data-driven modeling framework that leverages input–output data to characterize system dynamics, but modeling errors often remain. In this paper, an event-triggered data-driven linear model predictive control (MPC) framework is proposed for an OMM, without using any prior knowledge of the robot system. A finite-dimensional approximate linear Koopman model is established for an OMM using input–output data. Gaussian process regression (GPR) is employed to estimate the model’s errors, while an extended state observer (ESO) is designed to estimate external disturbances. Since the introduction of GPR increases the computational burden, an event-triggered (ET) mechanism is introduced to reduce unnecessary controller recomputations and the controller update frequency. Finally, comparative experiments are carried out to verify the effectiveness and performance superiority of the proposed control scheme.

1. Introduction

A mobile manipulator integrates a manipulator with a mobile platform, inheriting the flexibility of the operating space of the robotic manipulator and the vastness of the working space of the mobile platform [1,2]. Among the many types of mobile manipulators, omni-directional mobile manipulators (OMMs), which are holonomic, are less restricted in their movement than non-holonomic differential-drive mobile manipulators [3]. Thus, OMMs are used to perform demanding application tasks, such as healthcare assistive service [4] and explosive ordnance disposal [5]. However, the coupling interaction between the mobile platform and the robotic arm introduces great complexity into modeling and control. Therefore, achieving precise trajectory tracking control poses a significant challenge for OMMs [6].
Many studies have been carried out on the modeling and control of mobile manipulators. Viet et al. [7] presented a sliding mode tracking approach for a three-wheeled OMM system in the presence of disturbance and friction. For a redundantly actuated OMM affected by uncertainties and disturbances, Xu et al. [3] introduced a robust trajectory tracking strategy based on the combination of neural networks and sliding mode control. Sun et al. [8] introduced a gradient neural network for repetitive motion generation of an OMM. Sun et al. [9] presented a hybrid orthogonal repetitive motion and obstacle avoidance scheme to address the problem of the OMM not being able to accurately return to the starting position after completing the obstacle avoidance task. Khan et al. [10] proposed a recurrent neural network topology based on a metaheuristic optimization algorithm to achieve tracking control of nonholonomically constrained mobile manipulators. Nie et al. [11] proposed a control scheme based on hierarchical quadratic programming-prescribed performance control to achieve trajectory tracking and obstacle avoidance under multiple constraints for mobile manipulators. Zhang et al. [12] proposed a radial basis function neural network structure and designed a cooperative trajectory tracking control strategy to solve the problem of cooperative trajectory tracking under unknown dynamics and time-varying trajectories. To summarize, the above-mentioned control systems typically rely on the nominal nonlinear model of mobile manipulators; meanwhile, the modeling process is computationally intensive, and the resulting nonlinear model is often complex and inaccurate.
Data-driven methods, including machine learning and deep learning, can be utilized to construct models without specific mechanistic assumptions or the derivation of empirical formulae [13]. These models characterize system dynamics through black-box mappings from inputs to outputs. However, their mathematical forms are typically incompatible with conventional model-based control design techniques, as they are usually not represented by differential equations and may lack explicit invertibility. In recent years, deep learning and reinforcement learning-based hybrid approaches have also been increasingly explored for system modeling and control [14,15]. These methods usually provide strong representation capability and improved adaptability for robotic modeling and control tasks. Such approaches, however, often remain less interpretable and may require large amounts of training data and considerable computational effort, particularly in reinforcement learning-based settings. Meanwhile, the trial-and-error nature of reinforcement learning methods may increase the risk of structural damage to the robot during experiments. In contrast, Koopman operator theory offers a data-enabled approach which can yield an explicit control-oriented model and represent the dynamics of complex nonlinear systems in a linear form on an infinite-dimensional observable space [16]. This property facilitates the practical application of linear control design techniques, including the linear quadratic regulator (LQR) and model predictive control (MPC). Consequently, Koopman operator theory has been employed in a wide range of applications including autonomous vehicle dynamic model construction [17], power system dynamic state estimation [18] and robot-assisted physical therapy [19].
However, despite the effectiveness of the Koopman operator in characterizing nonlinear system behavior through linear evolution in the lifted observable space [20], the established model still suffers from modeling errors. Therefore, improving the accuracy of the learned linear model is an important issue for advancing engineering applications of the Koopman operator.
In the control field, MPC stands as a closed-loop optimal control strategy that offers a systematic framework for addressing input and state constraints. Specifically, MPC operates by defining a performance index to be optimized, enabling the computation of optimal control signals within a finite prediction horizon. Based on the characteristics of the adopted system model, MPC is usually classified into nonlinear and linear forms. In engineering applications, linear MPC is more frequently employed due to its favorable computational efficiency and relative ease of implementation. Compared with nonlinear MPC, it reduces computational complexity while preserving control performance, making it suitable for systems with constraints and real-time optimization requirements. Compared with traditional control methods, such as LQR, linear MPC is more effective in managing constraints. MPC has been widely applied in the control of mobile robots [21], automated vehicles [22], and robotic arms [23].
Since the effectiveness of MPC relies on accurate forward predictions, addressing unknown nonlinear system dynamics, including inherent modeling errors and external disturbances, remains a critical challenge. Gaussian process regression (GPR) is widely recognized as a powerful Bayesian non-parametric approach for nonlinear regression problems. Given an appropriate prior covariance function (i.e., kernel function) and under mild assumptions on the target function, GPR can consistently approximate arbitrary continuous functions [24]. Specifically, it can predict the target function in the form of a posterior probability distribution, which quantifies learning uncertainties [25]. Therefore, based on this principle, GPR can be effectively employed to estimate system modeling errors. It has demonstrated powerful capabilities in various applications such as environmental modeling [26], motion planning [27], and fault prediction [28].
In addition to the modeling error of the Koopman model, the control performance is further influenced by unknown external disturbances. The extended state observer (ESO) is a disturbance observer proposed by Han [29] in the active disturbance rejection control strategy. Specifically, an ESO is designed by extending the state to track the combined effects of model uncertainties and external perturbations [30]. Moreover, the ESO is characterized by its fast convergence, precise estimation and low dependency on an accurate mathematical model of the system. However, the increased observer order may introduce phase delays, high observer gains, and sensitivity to initial state variations [31]. To mitigate these issues, the reduced-order ESO, which features lower observer gains and reduced phase delays, can offer a viable replacement for conventional ESO designs. The ESO has been successfully applied in various practical applications including motion control [32], ratio control [33] and robot control [34].
However, the introduction of GPR significantly increases computational cost on the controller. Event-triggered control, in which a triggering condition is designed to determine when to update control inputs, is regarded as a highly effective approach to optimize computational efficiency in robotic systems [35,36]. Specifically, when the prescribed performance index exceeds a predefined threshold or when the event-triggering condition is satisfied, the event-triggered mechanism activates control actions. In recent decades, event-triggered control has received considerable attention due to its ability to reduce the frequency of online solving optimization control problems [37,38].
In this paper, an event-triggered data-driven linear MPC framework is proposed for an OMM, and experiments are carried out to verify the effectiveness of the proposed control scheme. Based on Koopman operator theory, a finite-dimensional approximate linear dynamic model is established for the OMM using input–output data. GPR is incorporated to estimate the modeling errors of the learned linear model, while an ESO is designed to estimate external disturbances in real time. In addition, an event-triggered mechanism is introduced so that the controller is activated only when the prescribed performance indexes exceed predefined thresholds. Comparative experiments are finally conducted to evaluate the tracking performance and robustness of the proposed control scheme.
It should be emphasized that the novelty of this work does not lie in proposing a new standalone theory for the Koopman operator, GPR, ESO, or event-triggered control. Instead, it lies in the development of a control-oriented integrated framework for OMM trajectory tracking, in which data-driven prediction, uncertainty compensation, and event-triggered updating are coordinated in a unified manner to simultaneously address modeling difficulty, disturbance rejection, and the online computational burden. The main contributions of this paper are summarized as follows:
(1)
An event-triggered Koopman data-driven MPC scheme is proposed for OMMs based on GPR and ESO, without using any prior knowledge of the robot dynamics.
(2)
The introduction of an event-triggered mechanism significantly reduces unnecessary online controller recomputations and lowers the controller update frequency, while the control performance is guaranteed.
(3)
Comparative experiments are conducted to validate the effectiveness and performance superiority of the proposed control scheme.
The rest of this paper is structured as follows. In Section 2, the basic knowledge of Koopman operator theory is presented and the Koopman model of the robot system is given. The event-triggered data-driven linear MPC framework is presented in Section 3. In Section 4, experimental comparisons are conducted. Section 5 concludes this paper.

2. Modeling

2.1. Basis of Koopman Operator Theory

Consider a discrete-time dynamical system without control inputs
$x(k+1) = f(x(k)),$
where $x(k) \in \mathcal{M}$ denotes the $n$-dimensional system state, $\mathcal{M} \subseteq \mathbb{R}^n$ represents the state space, $k$ is the discrete-time index, and $f$ describes the state evolution in the state space.
The Koopman operator $\mathcal{K}: \mathcal{F} \to \mathcal{F}$ is an infinite-dimensional linear operator that describes the evolution of observables $\Psi \in \mathcal{F}$ along the system trajectories, defined as follows:
$\mathcal{K} \Psi(x(k)) = \Psi(f(x(k))),$
where $\mathcal{F}$ denotes an invariant function space under the Koopman operator, usually referred to as the observable space, and $\Psi(x)$ represents user-defined lifting functions that are typically nonlinear. The Koopman operator is capable of characterizing the underlying dynamical system, provided that the observable space $\mathcal{F}$ contains sufficiently rich information about the system state $x(k)$.
Combining (1) and (2) yields
$\mathcal{K} \Psi(x(k)) = \Psi(x(k+1)).$
The Koopman operator describes a nonlinear system by lifting the state space into an infinite-dimensional observable space. It should be noted that system (1) is uncontrolled, whereas a number of studies have extended the Koopman operator framework to controlled systems [39,40]. Moreover, since the Koopman operator is infinite-dimensional, finite-dimensional approximations are essential for practical applications.
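To see concretely how lifting can linearize nonlinear dynamics, consider the following minimal Python sketch. The toy system and observables are a standard textbook-style illustration (not the OMM of this paper): for this particular map, the lifted state evolves exactly linearly.

```python
import numpy as np

# Toy nonlinear system (illustrative only): x1+ = a*x1, x2+ = b*x2 + (a^2 - b)*x1^2.
# With the lifted state z = (x1, x2, x1^2), the dynamics become exactly linear: z+ = K z.
a, b = 0.9, 0.5
K = np.array([
    [a,   0.0, 0.0],
    [0.0, b,   a * a - b],
    [0.0, 0.0, a * a],
])

def f(x):
    # True nonlinear state evolution
    return np.array([a * x[0], b * x[1] + (a * a - b) * x[0] ** 2])

def lift(x):
    # Observables: the state itself plus the nonlinear term x1^2
    return np.array([x[0], x[1], x[0] ** 2])

x = np.array([1.0, -0.3])
z = lift(x)
for _ in range(20):
    x = f(x)      # evolve the nonlinear system
    z = K @ z     # evolve linearly in the lifted space
assert np.allclose(z, lift(x))  # the lifted linear predictor is exact for this toy system
```

For general systems no finite set of observables is exact, which is why the finite-dimensional approximation of the next subsection is needed.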

2.2. Data-Driven Finite-Dimensional Approximation of the Koopman Operator

The Koopman operator (3) is infinite dimensional and is derived for uncontrolled dynamical systems. In [40], the Koopman operator framework is generalized to controlled dynamical systems, while extended dynamic mode decomposition (EDMD) is utilized to construct its finite-dimensional approximation, leading to a linear controlled representation of the original nonlinear system. In the following, we directly state the results from [40], and readers are referred to [40] for more details.
Consider the following nonlinear dynamic system model with control inputs:
$x(k+1) = f(x(k), u(k)),$
where $x(k) \in \mathbb{R}^n$ denotes the nonlinear system state, $u(k) \in \mathbb{R}^m$ denotes the control input, and $f$ is the nonlinear dynamical mapping.
As discussed previously, EDMD provides a data-enabled approach to obtaining a finite-dimensional approximation $\mathbf{K} \in \mathbb{R}^{(N+m) \times (N+m)}$ of the Koopman operator by solving the following minimization problem:
$\min_{\mathbf{K}} \sum_{k=1}^{P} \| \Psi(x(k+1), u(k+1)) - \mathbf{K} \Psi(x(k), u(k)) \|_2^2,$
where $P$ denotes the number of measured samples, and $\Psi(x(k), u(k)) = [\psi_1, \psi_2, \ldots, \psi_N]^T : \mathbb{R}^{n+m} \to \mathbb{R}^N$ denotes a set of basis functions, i.e., lifting functions, which are user defined and typically nonlinear.
To construct a linear predictor in the standard form of a controlled linear dynamical system, $\Psi(x(k), u(k)) = [\Psi_x^T(x(k)), \Psi_u^T(u(k))]^T$ can be selected. Because future control inputs do not need to be predicted, we can directly impose $\Psi_u(u(k)) = u(k)$. The first $N$ rows of matrix $\mathbf{K}$, denoted by $\bar{\mathbf{K}}$, can be decomposed as $\bar{\mathbf{K}} = [\mathbf{A}, \mathbf{B}]$, where $\mathbf{A} \in \mathbb{R}^{N \times N}$ and $\mathbf{B} \in \mathbb{R}^{N \times m}$ are linear time-invariant matrices. Accordingly, (5) can be rewritten as
$\min_{\mathbf{A}, \mathbf{B}} \sum_{k=1}^{P} \| \Psi_x(x(k+1)) - \mathbf{A} \Psi_x(x(k)) - \mathbf{B} u(k) \|_2^2.$
Define the lifted state $z(k) \in \mathbb{R}^N$ as
$z(k) = \Psi_x(x(k)) = [\psi_{x1}(x(k)), \psi_{x2}(x(k)), \ldots, \psi_{xN}(x(k))]^T.$
A high-dimensional linear Koopman model can then be formulated in the standard form of a linear dynamical system as
$z(k+1) = \mathbf{A} z(k) + \mathbf{B} u(k), \quad \hat{x}(k) = \mathbf{C} z(k),$
where $\hat{x}(k)$ denotes the estimate of the original nonlinear system state $x(k)$, and $\mathbf{C} \in \mathbb{R}^{n \times N}$ is a constant linear matrix. It is worth noting that, in practical applications, $\Psi_x(x(k))$ usually includes the system state itself; in this case, $\mathbf{C} = [\mathbf{I} \ \mathbf{0}]$ and $x(k) = \hat{x}(k)$.
The matrix C can be determined by solving the following minimization problem:
$\min_{\mathbf{C}} \sum_{k=1}^{P} \| x(k) - \mathbf{C} \Psi_x(x(k)) \|_2^2.$
According to (8), the lifted state z ( k ) evolves in the high-dimensional space under the action of matrices A and B , while C projects it back to the low-dimensional space to recover the predicted actual state x ( k ) .
To obtain the matrices A , B and C , the input–output data are collected from the nonlinear system (4) as follows:
$\mathbf{X} = [x(1), \ldots, x(P)], \quad \mathbf{Y} = [x(2), \ldots, x(P+1)], \quad \mathbf{U} = [u(1), \ldots, u(P)].$
The minimization problems in (6) and (9) are then converted into the following forms:
$\min_{\mathbf{A}, \mathbf{B}} \| \mathbf{Y}_{lift} - \mathbf{A} \mathbf{X}_{lift} - \mathbf{B} \mathbf{U} \|_F,$
$\min_{\mathbf{C}} \| \mathbf{X} - \mathbf{C} \mathbf{X}_{lift} \|_F,$
where $\mathbf{X}_{lift} = [\Psi_x(x(1)), \ldots, \Psi_x(x(P))]$, and $\mathbf{Y}_{lift} = [\Psi_x(x(2)), \ldots, \Psi_x(x(P+1))]$.
Using the least-squares method, the analytical solutions of (11) and (12) are obtained as follows:
$[\mathbf{A}, \mathbf{B}] = \mathbf{Y}_{lift} \begin{bmatrix} \mathbf{X}_{lift} \\ \mathbf{U} \end{bmatrix}^{\dagger},$
$\mathbf{C} = \mathbf{X} \mathbf{X}_{lift}^{\dagger},$
where † represents the Moore–Penrose pseudo-inverse of a matrix.
The linear model in (8) is developed purely from data and does not depend on the prior knowledge of the original dynamical system in (4). Moreover, it can be integrated with linear control methods, thereby avoiding the restrictions imposed by the nonlinearity of the original system model in (4).
Remark 1.
The selection of lifting function Ψ x ( x ( k ) ) significantly impacts the predictive capability of the learned linear model. A commonly used option is radial basis functions [40]. Recently, deep neural networks have been widely used to learn Koopman lifting functions from collected data, thereby avoiding manual trial-and-error design [41,42]. However, such learning-based approaches usually involve considerable training time, and their performance is strongly influenced by the quality and amount of the training data. Thus, considering these limitations and motivated by computational efficiency, this paper adopts a set of representative radial basis functions as the lifting functions of the learned linear model.
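The least-squares identification of (11)–(14) with radial basis function lifting can be sketched as follows. The toy controlled system, the number and placement of RBF centers, and the data sizes below are illustrative assumptions standing in for the OMM, not the model identified in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, u):
    # Toy nonlinear controlled system standing in for the OMM dynamics (illustrative only)
    return np.array([0.9 * x[0] + 0.1 * np.sin(x[1]), 0.8 * x[1] + 0.2 * u])

centers = rng.uniform(-1, 1, size=(8, 2))  # Gaussian RBF centers (assumed design choice)

def lift(x):
    # Lifting Psi_x: the state itself plus Gaussian RBFs, so C ~ [I 0] is recoverable
    rbf = np.exp(-np.sum((centers - x) ** 2, axis=1))
    return np.concatenate([x, rbf])

# Collect input-output data (Eq. (10))
P = 500
X = rng.uniform(-1, 1, size=(2, P))
U = rng.uniform(-1, 1, size=(1, P))
Y = np.column_stack([f(X[:, k], U[0, k]) for k in range(P)])

Xl = np.column_stack([lift(X[:, k]) for k in range(P)])
Yl = np.column_stack([lift(Y[:, k]) for k in range(P)])

# [A, B] = Y_lift [X_lift; U]^dagger and C = X X_lift^dagger (Eqs. (13)-(14))
AB = Yl @ np.linalg.pinv(np.vstack([Xl, U]))
A, B = AB[:, :Xl.shape[0]], AB[:, Xl.shape[0]:]
C = X @ np.linalg.pinv(Xl)

# One-step prediction with the learned linear model (Eq. (8))
x0, u0 = np.array([0.3, -0.2]), 0.5
x_pred = C @ (A @ lift(x0) + B @ np.array([u0]))
print(np.linalg.norm(x_pred - f(x0, u0)))  # small one-step prediction error
```

The one-step error is small because the lifting includes the state itself and the RBFs approximate the remaining nonlinearity over the sampled region.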

3. Controller Design

Considering the modeling error in the Koopman model (8) and the external disturbances, the system model can be rewritten as
$x(k+1) = \mathbf{C} \mathbf{A} z(k) + \mathbf{C} \mathbf{B} u(k) + \mathbf{H} w(k) + d(k),$
where $w(k) \in \mathbb{R}^v$ is the modeling error of the Koopman model, $\mathbf{H} \in \mathbb{R}^{n \times v}$ is the constant coefficient matrix, $v$ is the dimension of the modeling error, and $d(k) \in \mathbb{R}^n$ is the external disturbance of the system.
The controller design can be divided into two parts: linear control and uncertainty compensation. Therefore, the control input u ( k ) can be expressed as
u ( k ) = u m p c ( k ) + u d ( k ) ,
where u m p c ( k ) represents the MPC component, and u d ( k ) denotes the uncertainty compensation, which can be further decomposed into the modeling error compensation u d 1 ( k ) and the disturbance compensation u d 2 ( k ) .

3.1. MPC Design Based on Koopman Model

Define $N_c$ and $N_p$ as the control horizon and the predictive horizon, respectively, subject to the constraint $0 \le N_c \le N_p$. Define $x(k+i|k)$ as the predicted future state based on the state and control input at the current time instant $k$.
If the nonlinear term ( H w ( k ) + d ( k ) ) in system (15) can be well estimated, it can be canceled by the following feedback linearization term:
$u_d(k) = -(\mathbf{C} \mathbf{B})^{-1} (\mathbf{H} \hat{w}(k) + \hat{d}(k)),$
where w ^ ( k ) and d ^ ( k ) denote the estimation of w ( k ) and d ( k ) , which can be learned by GPR (see Section 3.2) and estimated by ESO (see Section 3.3), respectively. It then results in the following linear system:
$x(k+1) = \mathbf{C} \mathbf{A} z(k) + \mathbf{C} \mathbf{B} u_{mpc}(k).$
Thus, the prediction equation of this linear model can be expressed as
$\mathbf{X}(k) = \mathbf{F} z(k) + \mathbf{\Phi} \mathbf{U}(k),$
where
$\mathbf{X}(k) = \begin{bmatrix} x(k+1|k) \\ x(k+2|k) \\ \vdots \\ x(k+N_p|k) \end{bmatrix}, \quad \mathbf{U}(k) = \begin{bmatrix} u_{mpc}(k) \\ u_{mpc}(k+1) \\ \vdots \\ u_{mpc}(k+N_c-1) \end{bmatrix},$
$\mathbf{F} = \begin{bmatrix} \mathbf{C}\mathbf{A} \\ \mathbf{C}\mathbf{A}^2 \\ \vdots \\ \mathbf{C}\mathbf{A}^{N_p} \end{bmatrix}, \quad \mathbf{\Phi} = \begin{bmatrix} \mathbf{C}\mathbf{B} & \mathbf{O} & \cdots & \mathbf{O} \\ \mathbf{C}\mathbf{A}\mathbf{B} & \mathbf{C}\mathbf{B} & \cdots & \mathbf{O} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{C}\mathbf{A}^{N_p-1}\mathbf{B} & \mathbf{C}\mathbf{A}^{N_p-2}\mathbf{B} & \cdots & \mathbf{C}\mathbf{A}^{N_p-N_c}\mathbf{B} \end{bmatrix}.$
The desired state sequence defined over the predictive horizon is expressed as $\mathbf{X}_d(k) = [x_d^T(k+1), x_d^T(k+2), \ldots, x_d^T(k+N_p)]^T$.
The cost function is selected as
$J = (\mathbf{X} - \mathbf{X}_d)^T \mathbf{Q} (\mathbf{X} - \mathbf{X}_d) + \mathbf{U}(k)^T \mathbf{R} \mathbf{U}(k),$
where $\mathbf{Q}$ and $\mathbf{R}$ are positive definite weighting matrices.
By minimizing the cost function in (20), the optimal control sequence is obtained, and the corresponding optimization problem is formulated as follows:
$\begin{aligned} \min_{\mathbf{U}} \quad & \mathbf{U}(k)^T \mathbf{P} \mathbf{U}(k) + \mathbf{U}(k)^T \mathbf{E} \\ \mathrm{s.t.} \quad & z(k+1) = \mathbf{A} z(k) + \mathbf{B} u(k), \\ & u(k) = u_{mpc}(k) + u_d(k), \\ & u_d(k) = -(\mathbf{C} \mathbf{B})^{-1} (\mathbf{H} \hat{w}(k) + \hat{d}(k)), \\ & u(k) \le u_{max}(k), \end{aligned}$
where $\mathbf{P} = \mathbf{\Phi}^T \mathbf{Q} \mathbf{\Phi} + \mathbf{R}$, $\mathbf{E} = \mathbf{\Phi}^T \mathbf{Q} (\mathbf{F} z(k) - \mathbf{X}_d(k))$, and $u_{max}(k)$ denotes the upper bound of the control sequence.
To ensure the stability of the closed-loop system, the optimization problem (21) needs to satisfy the terminal constraint
$x(k+N_p|k) = x_d(k+N_p).$
Therefore, by solving the optimization problem in (21) subject to the terminal constraints in (22), the optimal control sequence over the control horizon N c can be obtained. The first element of this sequence is then applied as the linear control input at the current instant, denoted by u m p c ( k ) . This control law can be expressed as the following nonlinear mapping:
$u_{mpc}(k) = G(x(k|k)),$
where G represents the nonlinear mapping induced by the solution of the optimization problem.
Remark 2.
In this part, we develop a linear MPC framework based on the established linear model (8). Notably, the proposed Koopman-based modeling approach can be applied to a broader spectrum of model-based linear control strategies beyond MPC, including but not limited to optimal control and sliding mode control. This highlights the methodological flexibility of this data-driven modeling approach.
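The construction of the prediction matrices in (19) and the minimization of the cost (20) can be sketched as follows. The lifted model matrices, horizons, and weights below are placeholders (not the learned OMM model), and for brevity the input and terminal constraints of (21)–(22) are omitted, so only the unconstrained minimizer is shown.

```python
import numpy as np

def prediction_matrices(A, B, C, Np, Nc):
    # F stacks C A^i for i = 1..Np; Phi is the lower block-triangular matrix of Eq. (19)
    n, _ = C.shape
    m = B.shape[1]
    F = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(1, Np + 1)])
    Phi = np.zeros((n * Np, m * Nc))
    for i in range(Np):
        for j in range(min(i + 1, Nc)):
            Phi[i * n:(i + 1) * n, j * m:(j + 1) * m] = (
                C @ np.linalg.matrix_power(A, i - j) @ B)
    return F, Phi

# Small illustrative lifted model (placeholder values)
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.eye(2)
Np, Nc = 10, 5
Q = np.eye(2 * Np)
R = 0.01 * np.eye(Nc)

F, Phi = prediction_matrices(A, B, C, Np, Nc)
z = np.array([1.0, 0.0])
Xd = np.zeros(2 * Np)  # regulate to the origin
# Minimizing (X - Xd)^T Q (X - Xd) + U^T R U with X = F z + Phi U gives the closed form:
U = -np.linalg.solve(Phi.T @ Q @ Phi + R, Phi.T @ Q @ (F @ z - Xd))
u_mpc = U[:1]  # receding horizon: apply only the first control input
```

In the actual scheme the constrained problem (21) is solved by a quadratic-programming solver instead of this closed form.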

3.2. Modeling Error Compensation Based on GPR

It is known that compensating for modeling errors is crucial for enhancing control performance. GPR, a statistical method for learning system uncertainties, is employed in this part to estimate the modeling error w ( k ) in the data-driven model (15).
The collected state data $x_j$ and input data $u_j$ of the OMM, which are used as the inputs for GPR, are denoted by $\xi_j = [x_j^T, u_j^T]^T$, $j = 0, \ldots, M$. Here, $M$ denotes the total number of sampled data points. The corresponding output $y_w^j$ is defined as the difference between the measured output of the OMM and the output generated by the learned linear model. That is,
$y_w^j = \mathbf{H}^{\dagger} (x_{j+1} - \mathbf{C} \mathbf{A} \Psi_x(x_j) - \mathbf{C} \mathbf{B} u_j),$
where $\dagger$ denotes the Moore–Penrose pseudo-inverse. Therefore, the dataset used for GPR is given by $\mathcal{D} = \{(\xi_j, y_w^j)\}$, $j = 0, \ldots, M$.
It should be noted that each component of $y_w$ is learned independently; accordingly, $y_w = [y_w^1, y_w^2, \ldots, y_w^v]^T$. Each element of $y_w$ follows a Gaussian distribution with prior mean function $m_i(\xi)$ and covariance function $k_i(\xi, \xi')$, where $k_i(\xi, \xi')$ measures the similarity between two input data $\xi$ and $\xi'$, $i = 1, 2, \ldots, v$.
In general, the prior mean function $m_i(\xi)$ is taken as zero, and the squared exponential function is adopted as the covariance function
$k_i(\xi, \xi') = \sigma_{f,i}^2 \exp \left[ -\frac{1}{2} (\xi - \xi')^T \mathbf{L}_i^{-2} (\xi - \xi') \right],$
where σ f , i 2 denotes the prior variance, and L i is a positive diagonal length-scale matrix. The quantities σ f , i 2 and L i are referred to as hyper-parameters. In this study, the hyper-parameters are determined by maximizing the log-likelihood function of the training data, i.e.,
$\Theta_i^* = \arg \max_{\Theta_i} \log p(y_w^i \mid \xi, \Theta_i),$
where Θ i = { L i , σ f , i 2 } denotes the hyper-parameter set of the i-th output dimension.
For each test point $\xi_*$, the predictive distribution of the outputs in the $i$-th dimension is Gaussian with mean $\mu_i(\xi_*)$ and variance $\sigma_i^2(\xi_*)$, given by
$\mu_i(\xi_*) = \mathbf{k}_*^T (\mathbf{K} + \sigma_n^2 \mathbf{I})^{-1} y_w^i, \quad \sigma_i^2(\xi_*) = k_{**} - \mathbf{k}_*^T (\mathbf{K} + \sigma_n^2 \mathbf{I})^{-1} \mathbf{k}_*,$
where $k_{**} = k(\xi_*, \xi_*) \in \mathbb{R}$, $\mathbf{k}_* = [k(\xi_1, \xi_*), \ldots, k(\xi_M, \xi_*)]^T$ denotes the covariance vector between the test point and the collected data points, $\sigma_n^2$ denotes the measurement noise variance, and the covariance matrix $\mathbf{K} \in \mathbb{R}^{M \times M}$ has entries $[\mathbf{K}]_{jl} = k(\xi_j, \xi_l)$.
The mean vector $\mu(\xi_*) = [\mu_1(\xi_*), \ldots, \mu_v(\xi_*)]^T$ serves as the prediction of the modeling error $w(k)$ in model (15). Accordingly, the compensation term for the modeling error $w(k)$ can be written as
$u_{d1}(k) = -(\mathbf{C} \mathbf{B})^{-1} \mathbf{H} \mu(\xi_*).$
In the proposed framework, the GPR model is trained offline using the collected dataset and then employed online for modeling error prediction during controller updates.
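The posterior prediction of (25)–(27) can be sketched as follows. For clarity this minimal example assumes a scalar length scale, fixed hyper-parameters (no log-likelihood maximization), and a 1-D target function in place of the multi-dimensional modeling error.

```python
import numpy as np

def se_kernel(Xa, Xb, sigma_f=1.0, ell=0.5):
    # Squared-exponential covariance (Eq. (25)) with a scalar length scale (assumption)
    d2 = np.sum((Xa[:, None, :] - Xb[None, :, :]) ** 2, axis=-1)
    return sigma_f ** 2 * np.exp(-0.5 * d2 / ell ** 2)

def gpr_predict(Xtr, ytr, xq, sigma_n=0.1):
    # Posterior mean and variance (Eq. (27)) at a single test point xq
    K = se_kernel(Xtr, Xtr)
    ks = se_kernel(Xtr, xq[None, :]).ravel()
    kss = se_kernel(xq[None, :], xq[None, :])[0, 0]
    reg = K + sigma_n ** 2 * np.eye(len(Xtr))
    mean = ks @ np.linalg.solve(reg, ytr)
    var = kss - ks @ np.linalg.solve(reg, ks)
    return mean, var

# Illustrative 1-D error model: learn y = sin(x) from noisy samples
rng = np.random.default_rng(1)
Xtr = rng.uniform(-3, 3, size=(40, 1))
ytr = np.sin(Xtr).ravel() + 0.05 * rng.standard_normal(40)
mean, var = gpr_predict(Xtr, ytr, np.array([1.0]))
print(mean, var)  # mean close to sin(1); small positive variance
```

In the proposed framework one such regressor is trained per error dimension, and the stacked posterior means form $\mu(\xi_*)$ used in the compensation term.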

3.3. Disturbance Compensation Based on Reduced-Order ESO

ESO can be used to estimate external disturbance by treating system uncertainties as an extended state. However, since ESO increases the observer order, it may lead to issues such as phase delay, high observer gain requirements, and sensitivity to initial state variations. It is proven that the reduced-order ESO can achieve a better estimation performance compared with conventional ESO [43]. Therefore, in this part, a reduced-order linear ESO is introduced to estimate the remaining nonlinear term d ( k ) in the model (15).
Define the system states as $x_1(k) = x(k)$ and $x_2(k) = D(k) = d(k)/\Delta t$, wherein $x_2(k)$ is an extended state associated with the external disturbance $d(k)$, and $\Delta t$ is the sampling period.
Then the system model can be written in the following state-space form:
$\begin{aligned} x_1(k+1) &= \mathbf{C} \mathbf{A} x_1(k) + \mathbf{C} \mathbf{B} u(k) + \mathbf{H} w(k) + \Delta t \, x_2(k), \\ x_2(k+1) &= D(k+1). \end{aligned}$
The second-order ESO can be designed [44] as
$\begin{aligned} \tilde{x}_1(k) &= x_1(k) - \hat{x}_1(k), \\ \hat{x}_1(k+1) &= \mathbf{C} \mathbf{A} \hat{x}_1(k) + \mathbf{C} \mathbf{B} u(k) + \mathbf{H} w(k) + \Delta t (\hat{x}_2(k) + \beta_1 \tilde{x}_1(k)), \\ \hat{x}_2(k+1) &= \hat{x}_2(k) + \Delta t \, \beta_2 \tilde{x}_1(k), \end{aligned}$
where $\hat{x}_1$ and $\hat{x}_2$ are the estimates of $x_1$ and $x_2$, respectively, $\tilde{x}_1(k)$ denotes the estimation error, and $\beta_1$, $\beta_2$ are the adjustable gains of the ESO.
It is known that the estimation performance can be optimized by tuning the gain parameters $\beta_1$ and $\beta_2$. Specifically, by employing the bandwidth-based configuration method [45], the gain parameters are designed as $\beta_1 = \mathrm{diag}(2\omega_{o1}, 2\omega_{o2}, \ldots, 2\omega_{o10})$ and $\beta_2 = \mathrm{diag}(\omega_{o1}^2, \omega_{o2}^2, \ldots, \omega_{o10}^2)$, respectively, where $\omega_{oi}$ denotes the observer bandwidth corresponding to the $i$-th state channel.
Therefore, the remaining uncertainty term $d(k)$ in the system (15) is estimated by the ESO as $\hat{d}(k) = \Delta t \, \hat{x}_2(k)$. The corresponding compensation term can be given as
$u_{d2}(k) = -(\mathbf{C} \mathbf{B})^{-1} \Delta t \, \hat{x}_2(k).$
Thus, $u_d(k)$ in (16), which represents the uncertainty compensation control, can be obtained through feedback linearization, as follows:
$u_d(k) = u_{d1}(k) + u_{d2}(k) = -(\mathbf{C} \mathbf{B})^{-1} (\mathbf{H} \mu(\xi_*) + \Delta t \, \hat{x}_2(k)).$
In summary, the controller consisting of a linear control component u m p c ( k ) and an uncertainty component u d ( k ) has been designed, as formulated in Equation (16).
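The ESO recursion of (31) can be sketched on a scalar toy plant as follows. The plant coefficients, disturbance profile, sampling period, and observer bandwidth are illustrative assumptions, not values used in the experiments.

```python
import numpy as np

# Scalar toy plant x+ = a x + b u + d, with d an unknown slowly varying disturbance.
# The second-order ESO (scalar form of Eq. (31)) estimates x and the extended state
# x2 = d / dt, giving the disturbance estimate d_hat = dt * x2_hat.
a, b, dt = 0.95, 0.5, 0.01
wo = 50.0                           # observer bandwidth (assumed tuning)
beta1, beta2 = 2 * wo, wo ** 2      # bandwidth-based gain configuration

x, x1_hat, x2_hat = 0.0, 0.0, 0.0
for k in range(2000):
    t = k * dt
    u = 0.1
    d = 0.3 + 0.1 * np.sin(0.5 * t)            # unknown disturbance (for simulation)
    x_next = a * x + b * u + d                 # true plant step
    e = x - x1_hat                             # estimation error x1 - x1_hat
    x1_hat = a * x1_hat + b * u + dt * (x2_hat + beta1 * e)
    x2_hat = x2_hat + dt * beta2 * e
    x = x_next
d_hat = dt * x2_hat
print(d_hat, d)  # d_hat converges toward the true disturbance
```

With this bandwidth the estimation error settles within a few tens of steps, after which the residual error reflects only the slow drift of the disturbance.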

3.4. Controller Design Based on Event-Triggered Mechanism

The introduction of GPR has increased the computational burden of the MPC controller. In this part, an event-triggered mechanism is introduced to reduce unnecessary controller recomputations. The triggering instants $k_l$ and the corresponding triggering conditions are designed such that the controller updates the control signal only when a triggering condition is satisfied; otherwise, the controller remains dormant. The first event is assumed to occur at $k_1 = 0$, i.e., the first event is triggered at the initial time.
According to the event-triggered mechanism [46,47], the control signals are updated only when the triggering condition is satisfied, which is specified as
$\| x(k) - x(k_l) \| > \varepsilon_1,$
$\mathrm{or} \quad \| x(k) - \hat{x}(k|k_l) \| > \varepsilon_2,$
$\mathrm{or} \quad k > k_l + N_p,$
where $\| \cdot \|$ denotes the Euclidean norm, $\varepsilon_1$ and $\varepsilon_2$ are user-specified constants, $x(k)$ represents the current system state, $x(k_l)$ denotes the state at the occurrence of the $l$-th event, and $\hat{x}(k|k_l)$ denotes the MPC-predicted state at the current instant $k$, generated at the last triggering instant $k_l$ based on the system model in (18) and the prediction equation in (19).
Note that the thresholds ε 1 and ε 2 are tuning parameters for balancing the tracking performance and controller update frequency. They were selected empirically by gradually increasing the initially small values until the tracking performance started to degrade. The final values were chosen as a compromise between the control performance and reduced unnecessary online updates.
The first event-triggering condition requires the controller to recompute and update when the difference between the current actual state and the state at the last triggering instant exceeds the threshold $\varepsilon_1$ (i.e., Equation (33)); the second condition triggers controller updates if the difference between the current actual state and the MPC-predicted state surpasses the threshold $\varepsilon_2$ (i.e., Equation (34)); and the third condition ensures that the controller recomputes and updates when the optimal control input sequence generated by MPC is fully executed (i.e., Equation (35)). Moreover, note that the controller will update if any one of the conditions is satisfied. Specifically, at each updating instant $k_l$, $\| x(k) - x(k_l) \| = 0$ or $\| x(k) - \hat{x}(k|k_l) \| = 0$ is always satisfied.
Remark 3.
The third event-triggering condition (35) imposes an upper bound on the inter-event interval. Its purpose is to prevent excessive reliance on outdated predictions and to ensure that the previously computed optimal control sequence is not used beyond the prediction horizon. Therefore, the proposed strategy is an event-triggered mechanism with a maximum allowable update interval, rather than a standard periodic control scheme.
The control law based on the event-triggered mechanism is designed as follows.
(1)
When one of the event-triggering conditions (33), (34) or (35) is satisfied, let x ( k l ) = x ( k ) , and the control signal is recalculated. Subsequently, the optimization problem (21) is solved to obtain the optimal control sequence and the predicted state sequence, as follows:
$\mathbf{U}(k_l) = [u_{mpc}(k_l), \ldots, u_{mpc}(k_l+N_c-1)]^T, \quad \mathbf{X}(k_l) = [x(k_l+1|k_l), \ldots, x(k_l+N_p|k_l)]^T.$
The control signal u ( k ) is obtained as
u ( k ) = u m p c ( k l ) + u d ( k l ) ;
(2)
If the event-triggering conditions are not satisfied, the time instant is set as k = k l + n T , where n is a constant and T represents the sampling period. In this case, the MPC, GPR and ESO remain dormant, and the control signal u ( k ) is defined as follows:
u ( k ) = u m p c ( k ) + u d ( k ) .
Based on the aforementioned analysis, the control input is updated at each instant k l , ensuring that the controller remains invariant during the interval [ k l , k l + 1 ) . Thus, this approach effectively reduces the controller updating frequency. For theoretical analysis of the proposed event-triggered control framework, including the boundedness results of the GPR prediction error, ESO estimation error, and the overall closed-loop tracking error, please refer to Appendix A.
Remark 4.
It is observed that if the triggering condition is not satisfied, the controller maintains its previous state. This mechanism ensures that more of the optimal control inputs obtained at the last triggering instant are executed, rather than only the first element of the optimal control sequence. Consequently, this event-triggered approach significantly reduces the frequency of online optimization updates.
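The triggering logic of (33)–(35) reduces to a simple norm check at each sampling instant. A minimal Python sketch, with hypothetical thresholds in place of the experimentally tuned values:

```python
import numpy as np

def should_trigger(x, x_last, x_pred, k, k_last, Np, eps1=0.05, eps2=0.05):
    # Conditions (33)-(35): trigger when the state drifts from the last triggering
    # state, deviates from the MPC prediction, or the prediction horizon is spent.
    return (np.linalg.norm(x - x_last) > eps1
            or np.linalg.norm(x - x_pred) > eps2
            or k > k_last + Np)

# At a triggering instant both norms are zero, so no immediate re-trigger occurs
x = np.array([0.1, 0.2])
assert not should_trigger(x, x.copy(), x.copy(), k=3, k_last=3, Np=10)
# A large deviation from the predicted state forces a controller update
assert should_trigger(x, x.copy(), x + 0.2, k=4, k_last=3, Np=10)
# Exhausting the prediction horizon also forces an update
assert should_trigger(x, x.copy(), x.copy(), k=14, k_last=3, Np=10)
```

Between triggering instants the stored control sequence is executed open-loop, which is what saves the GPR and optimization computations.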

4. Experiment Analysis

4.1. Experimental Setup

Figure 1a depicts the OMM prototype. The mobile platform incorporates three omni-directional wheels arranged at 120° intervals, each powered by an independent DC motor. The manipulator is a four-link parallelogram arm providing two degrees of freedom (i.e., θ1 and θ2), actuated by two additional DC motors. All five identical DC motors (Maxon Motor AG, Sachseln, Switzerland) feature a 186:1 gear reduction ratio and operate at a nominal voltage of 24 V.
The proposed control strategy is implemented in MATLAB/Simulink (R2020b) on a desktop PC (Intel(R) Core(TM) i7-4770, 3.40 GHz). Control signals are transmitted wirelessly to the OMM via an ESP8266 Wi-Fi module (Espressif Systems Co., Ltd., Shanghai, China). An STM32F103VET6 microcontroller (MCU) (STMicroelectronics, Geneva, Switzerland) processes these signals and generates corresponding pulse width modulation (PWM) outputs for the five LMD18200 motor drivers (Texas Instruments, Dallas, TX, USA). The position and orientation of the OMM are tracked in real time using an OptiTrack motion capture system (NaturalPoint, Inc., Corvallis, OR, USA), and the measured state data are used as feedback signals for closed-loop controller implementation. Figure 1b shows the experimental setup.
In this paper, the state vector of the OMM is defined as x = [q^T, q̇^T]^T, where q = [x, y, φ, θ1, θ2]^T comprises the mobile platform's position (x, y), orientation φ, and the manipulator's joint angles θ1 and θ2 (shown in Figure 1a). The control signals are the voltages of the five motors, u = [u1, u2, u3, u4, u5]^T.

4.2. Data Collection for Koopman Model

Since the Koopman model is established entirely from data, the quality of data acquisition plays a crucial role in obtaining an accurate model. In general, the dataset should capture as much of the nonlinear dynamic behavior of the OMM as possible. In each experiment, 250 sets of control inputs (i.e., u = [u1, u2, u3, u4, u5]^T) are randomly generated. Specifically, the control input voltage signals (u1, u2, u3) applied to the mobile platform are generated within [−24 V, 24 V], whereas the voltage commands (u4, u5) for the manipulator are generated within [−5 V, 5 V]. In this way, sufficiently diverse input variations can be introduced within safe operating ranges, enabling the collected data to capture abundant input–output information for Koopman model construction. To prevent possible damage to the OMM prototype caused by abrupt random inputs, the voltage signals are smoothed through linear interpolation so that the transitions remain continuous, as illustrated in Figure 2.
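A rough sketch of this excitation procedure is given below; the knot spacing knot_step is an assumption, since the paper does not state how often new random levels are drawn:

```python
import numpy as np

def smoothed_random_voltages(n_samples, v_max, knot_step, rng):
    """Piecewise-linear random excitation: random levels in [-v_max, v_max]
    are drawn every knot_step samples and connected by linear interpolation,
    so the commanded voltage never jumps abruptly between samples."""
    knots = np.arange(0, n_samples + knot_step, knot_step)
    levels = rng.uniform(-v_max, v_max, size=len(knots))
    return np.interp(np.arange(n_samples), knots, levels)

rng = np.random.default_rng(0)
# u1..u3 (platform) in [-24 V, 24 V]; u4, u5 (manipulator) in [-5 V, 5 V].
u_platform = np.vstack([smoothed_random_voltages(250, 24.0, 10, rng) for _ in range(3)])
u_arm = np.vstack([smoothed_random_voltages(250, 5.0, 10, rng) for _ in range(2)])
```

With the knots 10 samples apart, the per-sample voltage change is bounded by the level difference divided by the knot spacing, keeping the transitions gentle.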
Then, 57 different random initial states (i.e., q_0) are given. For each initial state, the smoothed random voltage signal consisting of 250 samples (i.e., U in (10)) is applied to the OMM, and the OptiTrack motion capture system records the current states and the corresponding successor states (i.e., X and Y in (10)). In total, 11,400 data pairs are collected. The sampling time is set to 0.02 s.
Moreover, the dimension of the lifted state is set as N = 25. Thin-plate-spline radial basis functions are employed as the lifting functions, i.e., Ψx(x(k)) = [x(k), ψx(x(k))]^T, where ψx = ||x − xc||² log(||x − xc||) and xc is a constant center vector.
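Under these definitions, the lifting and the least-squares identification of the lifted linear model z(k+1) ≈ A z(k) + B u(k) can be sketched as follows; the centers and toy data are illustrative, and the regression follows standard EDMD-with-control rather than the authors' exact implementation:

```python
import numpy as np

def lift(X, centers):
    """Thin-plate-spline lifting: stack the state with
    psi(x) = ||x - xc||^2 * log(||x - xc||) for each center xc.
    Columns of X are state snapshots."""
    feats = [X]
    for c in centers:
        r = np.linalg.norm(X - c[:, None], axis=0)
        phi = np.zeros_like(r)
        m = r > 0
        phi[m] = r[m] ** 2 * np.log(r[m])   # psi is taken as 0 at r = 0
        feats.append(phi[None, :])
    return np.vstack(feats)

def edmd_fit(X, Y, U, centers):
    """Fit z+ = A z + B u in the lifted space by least squares,
    where X holds current states, Y successor states, U inputs."""
    Zx, Zy = lift(X, centers), lift(Y, centers)
    G = np.vstack([Zx, U])
    AB = Zy @ np.linalg.pinv(G)
    N = Zx.shape[0]
    return AB[:, :N], AB[:, N:]             # A (N x N), B (N x m)
```

For a truly linear toy system, the state rows of the lifted regression are fit exactly, so the one-step state prediction recovers the training data; the nonlinear feature rows simply give the model more expressive power on real data.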

4.3. Control Performance

In this section, the proposed event-triggered data-driven MPC scheme (K-GPETMPC) is compared with the Koopman model-based MPC (K-MPC) and data-driven MPC (K-GPMPC) schemes. To test robustness, a payload of 2.5 kg is attached to the end of the manipulator, as shown in Figure 1a. Note that the payload is not included in the dataset collected for the Koopman model; in other words, it acts as an unknown external disturbance for the learned linear model. The sampling time is set to 0.02 s.
Two experimental tests are conducted, wherein the mobile platform is respectively commanded to perform two typical trajectories (i.e., square trajectory and circle trajectory), while the manipulator performs sinusoidal motion.
In the first experimental test, the mobile platform is commanded to track a square reference trajectory with a side length of 1 m at a constant speed of 0.1 m/s. The joint angle θ1d is set to zero, and the joint angle θ2d performs a sinusoidal motion θ2d [rad] = 0.6 sin(πt/20), 0 s ≤ t < 40 s.
In order to quantitatively compare the control performance of three control algorithms, the integral absolute error (IAE) and the maximum absolute error (MAE) are selected as evaluation metrics, which are defined as follows:
(a)
Integral absolute error (IAE):
IAE_{x,y} [m] = T Σ_{k=1}^{N} (|e_x(k)| + |e_y(k)|),
IAE_{φ} [rad] = T Σ_{k=1}^{N} |e_φ(k)|,
IAE_{θ1,θ2} [rad] = T Σ_{k=1}^{N} (|e_{θ1}(k)| + |e_{θ2}(k)|).
(b)
Maximum absolute error (MAE):
MAE_{x,y} [m] = max( max_k |e_x(k)|, max_k |e_y(k)| ),
MAE_{φ} [rad] = max_k |e_φ(k)|,
MAE_{θ1,θ2} [rad] = max( max_k |e_{θ1}(k)|, max_k |e_{θ2}(k)| ).
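Both metrics translate directly into code; the sketch below uses rectangular quadrature (T times the sum of absolute errors) for the IAE, matching the discrete definitions above:

```python
import numpy as np

def iae(T, *errors):
    """Integral absolute error: T * sum over samples of |e(k)|,
    summed across all listed error channels."""
    return T * sum(np.sum(np.abs(e)) for e in errors)

def mae(*errors):
    """Maximum absolute error over all samples and listed channels."""
    return max(np.max(np.abs(e)) for e in errors)
```

For example, IAE_{x,y} = iae(T, e_x, e_y) and MAE_{x,y} = mae(e_x, e_y), with T = 0.02 s as in the experiments.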
The control parameters and the GPR hyper-parameters are set to the same values in the three control schemes, as listed in Table 1. Figure 3a,b show the square trajectory tracking results of the OMM in the xy-plane and the tracking performance in each Degree of Freedom (DOF), respectively. As shown in Figure 3c, the proposed control scheme achieves much better tracking performance than the other two control schemes. This is because, in the proposed control scheme, the modeling errors are learned by the GPR approach and the external disturbances are estimated by the ESO (shown in Figure 3e), and both are compensated in the controller in real time (shown in Figure 3d). As shown in Figure 3f, the tracking error of the proposed method closely approximates that of the K-GPMPC method. However, during the 40 s experimental testing period, the K-GPMPC controller performs 1982 online updates, while the K-GPETMPC controller requires only 1581 updates, resulting in a 20.3% reduction in online controller recomputations. In addition, Figure 4a,b show quantitative comparisons of the IAE and MAE of the three control schemes. It is obvious that the proposed control scheme achieves much better control performance.
In the second experimental test, the mobile platform is set to track a circular reference trajectory with a radius of 0.8 m. The desired joint angle θ1d is set to zero, while θ2d follows a sinusoidal trajectory θ2d [rad] = 0.6 sin(πt/10), 0 s ≤ t ≤ 60 s.
The control parameters are the same as in the square trajectory test (Table 1), except for Q_b and w_oi. In the circle trajectory test, Q_b is tuned as Q_b = diag(5880, 6380, 2800, 790, 750, 0, 0, 0, 0, 0), and w_oi is tuned as w_oi = 4 for i = 1, 2, 3, 4, 5 and w_oi = 6 for i = 6, 7, 8, 9, 10.
The tracking performances for the circle trajectory are shown in Figure 5a–f. As with the square trajectory, the proposed control scheme achieves much higher tracking accuracy than the other two schemes.
As shown in Figure 5f, the tracking error of the proposed method closely approximates that of the K-GPMPC method. However, during the 60 s experimental testing period, the K-GPMPC controller performs 2984 online updates, whereas the K-GPETMPC controller executes only 2644 updates, achieving an 11.4% reduction in online controller recomputations. Moreover, Figure 6a,b show quantitative comparisons of the IAE and MAE of the three control schemes, which also demonstrate that the proposed control scheme achieves better control performance in terms of accuracy and robustness.

5. Conclusions

In this paper, an event-triggered Koopman-based data-driven MPC method has been developed for the trajectory tracking control of an OMM. The experimental results show that the proposed method achieves better tracking accuracy and robustness than the K-MPC and K-GPMPC schemes, while reducing unnecessary online controller updates through the event-triggered mechanism. These results indicate that the proposed framework provides an effective way to combine data-driven linear prediction, uncertainty compensation, and event-triggered control for OMM systems. Future work will focus on improving online computational efficiency and extending the proposed method to more complex manipulation tasks and operating conditions.

Author Contributions

P.G.: Conceptualization, methodology, investigation, algorithm, software, and writing—original draft preparation. C.L.: Algorithm, software, validation, and data curation. B.W.: Algorithm, experimental validation, and writing—original draft preparation. C.R.: Conceptualization, methodology, resources, supervision, and project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Define x̃1(k) = x1(k) − x̂1(k), x̃2(k) = x2(k) − x̂2(k), and w̃(k) = w(k) − ŵ(k). The estimation error of the ESO is then expressed as
x̃1(k+1) = x1(k+1) − x̂1(k+1) = x̃1(k) + x̃2(k) − β1 (x1(k_l) − x̂1(k)) + C A (Ψx(x1) − Ψx(x̂1)) + w̃(k),
x̃2(k+1) = x2(k+1) − x̂2(k+1) = x̃2(k) − β2 x̃1(k) − β2 (x1(k_l) − x1(k)) + x2(k+1) − x2(k).
Let χ(k) = [x̃1(k), x̃2(k)]^T, e2(k) = x1(k_l) − x1(k), h(k) = [x2(k+1) − x2(k), w̃(k)]^T, and Ψx(χ) = [Ψx(x1) − Ψx(x̂1), Ψx(x2) − Ψx(x̂2)]^T. The ESO error dynamics can then be represented as
χ(k+1) = A1 χ(k) + B1 e2(k) + B2 h(k) + B3 Ψx(χ),
where A1 = [[(1 − β1)I, I], [−β2 I, I]], B1 = [−β1 I; −β2 I], B2 = [[O, I], [I, O]], B3 = [[C A, O], [O, O]], and O and I denote the zero matrix and the identity matrix, respectively.
Theorem A1.
For system (15), provided that the control signal is updated only when the event-triggering condition is satisfied, the error between the GPR-predicted mean and the true function remains bounded with probability at least 1 − δ, i.e.,
Pr( ||μ(ξ(k_l)) − f(ξ(k))|| ≤ β σ(k_l) + ||e_d|| ) ≥ 1 − δ,
where Pr denotes the probability, δ ∈ (0, 1), k_l = k + n, β = [β_1, …, β_n]^T with β_j = (2||f||_k² + 300 γ ln³((N + 1)/δ))^{1/2}, and γ is the maximum information gain under the kernel function k: γ = max_{ξ_1, …, ξ_M ∈ Ξ} (1/2) log|I + σ_n^{−2} K|.
Proof. 
The error between the GPR-predicted mean and the true function is bounded with probability at least 1 − δ, i.e., Pr( ||μ(x(k)) − f(x(k))|| ≤ β σ(k) ) ≥ 1 − δ. Let e_x = x(k_l) − x(k); then
Pr( ||μ(ξ(k_l)) − f(ξ(k_l))|| ≤ β σ(k_l) ) ≥ 1 − δ
⇒ Pr( ||μ(ξ(k_l)) − f(ξ(k)) + f(ξ(k)) − f(ξ(k_l))|| ≤ β σ(k_l) ) ≥ 1 − δ
⇒ Pr( ||μ(ξ(k_l)) − f(ξ(k))|| − ||e_d|| ≤ β σ(k_l) ) ≥ 1 − δ
⇒ Pr( ||μ(ξ(k_l)) − f(ξ(k))|| ≤ β σ(k_l) + ||e_d|| ) ≥ 1 − δ,
where
e_d = f(ξ(k)) − f(ξ(k_l))
= x(k) − C A z(k−1) − C B u(k−1) − x(k_l) + C A z(k_l−1) + C B u(k_l−1)
= x(k) − x(k_l) + C A (z(k_l−1) − z(k−1)) + C B (u(k_l−1) − u(k−1)).
The event-triggering condition ensures ||e_x|| < ε1, and the above equation then implies that e_d is bounded. Therefore, the error between the GPR-predicted mean and the true function remains bounded with probability at least 1 − δ when the event-triggering condition is satisfied. □
Theorem A2.
For the system (15) subjected to bounded external perturbations, assume that there exist matrices P > 0 and Q = P − 3 A1^T P A1 > 0. Then, the estimation error of the ESO under the event-triggered mechanism is guaranteed to be bounded.
Proof. 
The Lyapunov function is selected as v ( k ) = χ T ( k ) P χ ( k ) ; then,
Δv(k) ≤ −λ_min(Q) ||χ(k)||² + η + 3 λ_max(B3^T P B3) ||Ψx(χ)||²
≤ −λ_min(Q) ||χ(k)||² + η + 3 λ_max(B3^T P B3) g ||χ(k)||²
≤ −((λ_min(Q) − 3 λ_max(B3^T P B3) g)/λ_max(P)) χ^T(k) P χ(k) + η
= −((λ_min(Q) − 3 λ_max(B3^T P B3) g)/λ_max(P)) v(k) + η,
where η = 3 λ_max(B1^T P B1) ||e2||² + 3 λ_max(B2^T P B2) ||h||², and g is the Lipschitz constant of Ψx. Based on the results in [48], v(k) decays gradually under the event-triggering condition, and the ESO estimation error is guaranteed to remain bounded. □
Theorem A3.
For the system x(k+1) = C A z(k) + C B u(k) + b w(k) + d(k), assume that the constrained optimization problem is feasible and that K satisfies lim_{k→∞} (G(x(k|k_l)) − K x(k|k_l)) = 0; then, the closed-loop system is stable, i.e., the tracking error of the system is eventually bounded with probability at least 1 − δ.
Proof. 
Considering that the control signal is updated only when the event-triggering condition is satisfied, the closed-loop system can be obtained as follows:
x(k+1) = C A z(k) + C B (u_mpc(k) + u_d(k)) + b w(k) + d(k)
= C A z(k) + C B G(x(k|k_l)) + w̃(k) + d̃(k)
= C (A z(k) + B (G(x(k|k_l)) − K x(k|k_l)) + B K x(k|k_l)) + w̃(k) + d̃(k),
where w ˜ ( k ) and d ˜ ( k ) are the estimation errors of the GPR and ESO, respectively.
Let e = x(k) − x_d(k) and e1 = x(k|k_l) − x(k), where x(k|k_l) denotes the MPC-predicted state. The closed-loop error system can then be represented as
e(k+1) = C B K x(k) + C (A z(k) + B (G(x(k|k_l)) − K x(k|k_l))) + C B K e1(k) + w̃(k) + d̃(k) − x_d(k+1),
Through iterative calculations, we can demonstrate that
lim_{k→∞} e(k) = lim_{k→∞} [ (Ā^k + Σ_{j=0}^{k−1} A^{k−1−j} Ā) x(0)
+ Σ_{j=0}^{k−1} (A^{k−1−j} C B + C B + Ā^{k−1−j} C B) Δ(j|k_l)
+ Σ_{j=0}^{k−1} (A^{k−1−j} B K + Ā + Ā^{k−j}) e1(j)
+ Σ_{j=0}^{k−1} Ā^{k−1−j} (w̃(j) + d̃(j) − x_d(j))
+ (C A^k + Σ_{j=0}^{k−1} Ā^{k−1−j} A C) z(0) ],
where Δ(j|k_l) = G(x(j|k_l)) − K x(j|k_l) and Ā = C B K.
Combining this with the assumptions, we obtain
lim_{k→∞} e(k) = lim_{k→∞} [ (Ā^k + Σ_{j=0}^{k−1} A^{k−1−j} Ā) x(0)
+ Σ_{j=0}^{k−1} (A^{k−1−j} B K + Ā + Ā^{k−j}) e1(j)
+ Σ_{j=0}^{k−1} Ā^{k−1−j} (w̃(j) + d̃(j) − x_d(j))
+ (C A^k + Σ_{j=0}^{k−1} Ā^{k−1−j} A C) z(0) ].
From Theorems A1 and A2, it can be derived that the GPR prediction error is bounded with probability at least 1 δ , and the ESO estimation error is bounded with high probability. Then
Pr( lim_{k→∞} ||e(k)|| ≤ r + lim_{k→∞} Σ_{j=0}^{k−1} ||A^{k−1−j} B K + Ā + Ā^{k−j}|| ||e1(j)|| ) ≥ 1 − δ
⇒ Pr( lim_{k→∞} ||e(k)|| ≤ r + lim_{k→∞} Σ_{j=0}^{k−1} ||A^{k−1−j} B K + Ā + Ā^{k−j}|| ε2 ) ≥ 1 − δ
⇒ Pr( lim_{k→∞} ||e(k)|| ≤ r + r_e(ε2) ) ≥ 1 − δ,
where
r = sup || (Ā^k + Σ_{j=0}^{k−1} A^{k−1−j} Ā) x(0) + (C A^k + Σ_{j=0}^{k−1} Ā^{k−1−j} A C) z(0) + Σ_{j=0}^{k−1} Ā^{k−1−j} (w̃(j) + d̃(j) − x_d(j)) ||.
In summary, it can be demonstrated that the estimation errors of the GPR and ESO remain bounded and that the tracking error of the closed-loop system is bounded with probability at least 1 − δ. □

References

  1. Wang, F.; Olvera, J.R.G.; Cheng, G. Optimal order pick-and-place of objects in cluttered scene by a mobile manipulator. IEEE Robot. Autom. Lett. 2021, 6, 6402–6409. [Google Scholar] [CrossRef]
  2. Rigotti-Thompson, M.; Torres-Torriti, M.; Auat Cheein, F.A.; Troni, G. H∞-Based Terrain Disturbance Rejection for Hydraulically Actuated Mobile Manipulators With a Nonrigid Link. IEEE/ASME Trans. Mechatron. 2020, 25, 2523–2533. [Google Scholar] [CrossRef]
  3. Xu, D.; Zhao, D.; Yi, J.; Tan, X. Trajectory tracking control of omnidirectional wheeled mobile manipulators: Robust neural network-based sliding mode approach. IEEE Trans. Syst. Man Cybern. Syst. 2009, 39, 788–799. [Google Scholar]
  4. Gongor, F.; Tutsoy, O. On the remarkable advancement of assistive robotics in human-robot interaction-based healthcare applications: An exploratory overview of the literature. Int. J. Hum.-Comput. Interact. 2025, 41, 1502–1542. [Google Scholar] [CrossRef]
  5. Canaza Ccari, L.F.; Adrian Ali, R.; Valdeiglesias Flores, E.; Medina Chilo, N.O.; Sulla Espinoza, E.; Silva Vidal, Y.; Pari, L. JVC-02 Teleoperated Robot: Design, Implementation, and Validation for Assistance in Real Explosive Ordnance Disposal Missions. Actuators 2024, 13, 254. [Google Scholar] [CrossRef]
  6. Chen, N.; Song, F.; Li, G.; Sun, X.; Ai, C. An adaptive sliding mode backstepping control for the mobile manipulator with nonholonomic constraints. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 2885–2899. [Google Scholar] [CrossRef]
  7. Viet, T.D.; Doan, P.T.; Hung, N.; Kim, H.K.; Kim, S.B. Tracking control of a three-wheeled omnidirectional mobile manipulator system with disturbance and friction. J. Mech. Sci. Technol. 2012, 26, 2197–2211. [Google Scholar] [CrossRef]
  8. Sun, Z.; Tang, S.; Zhou, Y.; Yu, J.; Li, C. A GNN for repetitive motion generation of four-wheel omnidirectional mobile manipulator with nonconvex bound constraints. Inf. Sci. 2022, 607, 537–552. [Google Scholar] [CrossRef]
  9. Sun, Z.; Tang, S.; Fei, Y.; Xiao, X.; Hu, Y.; Yu, J. An Orthogonal Repetitive Motion and Obstacle Avoidance Scheme for Omnidirectional Mobile Robotic Arm. IEEE Trans. Ind. Electron. 2025, 72, 4978–4989. [Google Scholar] [CrossRef]
  10. Khan, A.H.; Li, S.; Chen, D.; Liao, L. Tracking control of redundant mobile manipulator: An RNN based metaheuristic approach. Neurocomputing 2020, 400, 272–284. [Google Scholar] [CrossRef]
  11. Nie, J.; Wang, Y.; Mo, Y.; Miao, Z.; Jiang, Y.; Zhong, H.; Lin, J. An HQP-Based Obstacle Avoidance Control Scheme for Redundant Mobile Manipulators Under Multiple Constraints. IEEE Trans. Ind. Electron. 2022, 70, 6004–6016. [Google Scholar] [CrossRef]
  12. Zhang, S.; Wu, Y.; He, X.; Wang, J. Neural Network-Based Cooperative Trajectory Tracking Control for a Mobile Dual Flexible Manipulator. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 6545–6556. [Google Scholar] [CrossRef]
  13. Bruder, D.; Fu, X.; Gillespie, R.B.; Remy, C.D.; Vasudevan, R. Data-Driven Control of Soft Robots Using Koopman Operator Theory. IEEE Trans. Robot. 2021, 37, 948–961. [Google Scholar] [CrossRef]
  14. Wang, H.; Shi, Z.; Zhu, C.; Qiao, Y.; Zhang, C.; Yang, F.; Ren, P.; Lu, L.; Xuan, D. Integrating learning-based manipulation and physics-based locomotion for whole-body badminton robot control. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA); IEEE: Piscataway, NJ, USA, 2025; pp. 15114–15120. [Google Scholar]
  15. Hazem, Z.B. A fuzzy-TD3 hybrid reinforcement learning framework for robust trajectory tracking of the Mitsubishi RV-2AJ robotic arm. Sci. Rep. 2026; online ahead of print. [CrossRef] [PubMed]
  16. Koopman, B.O. Hamiltonian systems and transformation in Hilbert space. Proc. Natl. Acad. Sci. USA 1931, 17, 315–318. [Google Scholar] [CrossRef]
  17. Zheng, H.; Li, Y.; Zheng, L.; Hashemi, E. Koopman-Based Hybrid Modeling and Zonotopic Tube Robust MPC for Motion Control of Automated Vehicles. IEEE Trans. Intell. Transp. Syst. 2024, 25, 13598–13612. [Google Scholar] [CrossRef]
  18. Zhao, T.; Yue, M.; Wang, J. Deep-Learning-Based Koopman Modeling for Online Control Synthesis of Nonlinear Power System Transient Dynamics. IEEE Trans. Ind. Inform. 2023, 19, 10444–10453. [Google Scholar] [CrossRef]
  19. Goyal, T.; Hussain, S.; Martinez-Marroquin, E.; Brown, N.A.; Jamwal, P.K. Learning Koopman embedding subspaces for system identification and optimal control of a wrist rehabilitation robot. IEEE Trans. Ind. Electron. 2022, 70, 7092–7101. [Google Scholar] [CrossRef]
  20. Xiao, Y.; Zhang, X.; Xu, X.; Liu, X.; Liu, J. Deep neural networks with Koopman operators for modeling and control of autonomous vehicles. IEEE Trans. Intell. Veh. 2022, 8, 135–146. [Google Scholar] [CrossRef]
  21. Lee, S.M. Global-Initialization-Based Model Predictive Control for Mobile Robots Navigating Nonconvex Obstacle Environments. Actuators 2025, 14, 454. [Google Scholar] [CrossRef]
  22. Yang, W.; Chen, Y.; Su, Y. A Double-Layer Model Predictive Control Approach for Collision-Free Lane Tracking of On-Road Autonomous Vehicles. Actuators 2023, 12, 169. [Google Scholar] [CrossRef]
  23. Carron, A.; Arcari, E.; Wermelinger, M.; Hewing, L.; Hutter, M.; Zeilinger, M.N. Data-driven model predictive control for trajectory tracking with a robotic arm. IEEE Robot. Autom. Lett. 2019, 4, 3758–3765. [Google Scholar] [CrossRef]
  24. Choi, T.; Schervish, M.J. On posterior consistency in nonparametric regression problems. J. Multivar. Anal. 2007, 98, 1969–1987. [Google Scholar] [CrossRef]
  25. Yuan, Z.; Zhu, M. Lightweight distributed Gaussian process regression for online machine learning. IEEE Trans. Autom. Control 2024, 69, 3928–3943. [Google Scholar] [CrossRef]
  26. Wang, J.; Filippi, P.; Haan, S.; Pozza, L.; Whelan, B.; Bishop, T.F. Gaussian process regression for three-dimensional soil mapping over multiple spatial supports. Geoderma 2024, 446, 116899. [Google Scholar] [CrossRef]
  27. Huang, Y.; Lin, X.; Hernandez-Rocha, M.; Narain, S.; Pochiraju, K.; Englot, B. Mission-Oriented Gaussian Process Motion Planning for UUVs Over Complex Seafloor Terrain and Current Flows. IEEE Rob. Autom. Lett. 2024, 9, 1780–1787. [Google Scholar] [CrossRef]
  28. Cai, L.; Yin, H.; Lin, J.; Zhao, D. An Update-Strategy-Based Gaussian Process Regression Method for Aeroengines Fault Prediction. IEEE Trans. Ind. Inform. 2024, 20, 1941–1951. [Google Scholar] [CrossRef]
  29. Han, J. From PID to active disturbance rejection control. IEEE Trans. Ind. Electron. 2009, 56, 900–906. [Google Scholar] [CrossRef]
  30. Cui, R.; Chen, L.; Yang, C.; Chen, M. Extended State Observer-Based Integral Sliding Mode Control for an Underwater Robot With Unknown Disturbances and Uncertain Nonlinearities. IEEE Trans. Ind. Electron. 2017, 64, 6785–6795. [Google Scholar] [CrossRef]
  31. Shao, X.; Wang, H. Back-stepping active disturbance rejection control design for integrated missile guidance and control system via reduced-order ESO. ISA Trans. 2015, 57, 10–22. [Google Scholar]
  32. Xu, Y.; Hao, X.; Zhu, D.; Wu, L.; Li, P. Model Predictive Control for Pneumatic Manipulator via Receding-Horizon-Based Extended State Observers. Actuators 2025, 14, 343. [Google Scholar] [CrossRef]
  33. Yang, J.; Wu, H.; Hu, L.; Li, S. Robust predictive speed regulation of converter-driven DC motors via a discrete-time reduced-order GPIO. IEEE Trans. Ind. Electron. 2018, 66, 7893–7903. [Google Scholar] [CrossRef]
  34. Shao, X.; Zhang, J.; Zhang, W. Distributed Cooperative Surrounding Control for Mobile Robots With Uncertainties and Aperiodic Sampling. IEEE Trans. Intell. Transp. Syst. 2022, 23, 18951–18961. [Google Scholar] [CrossRef]
  35. Ge, C.; Ma, L.; Xu, S. Distributed Fixed-Time Leader-Following Consensus for Multi-Agent Systems: An Event-Triggered Mechanism. Actuators 2024, 13, 40. [Google Scholar] [CrossRef]
  36. Qin, D.; Jin, Z.; Liu, A.; Zhang, W.A.; Yu, L. Asynchronous event-triggered distributed predictive control for multiagent systems with parameterized synchronization constraints. IEEE Trans. Autom. Control 2023, 69, 403–409. [Google Scholar] [CrossRef]
  37. Liu, C.; Li, H.; Shi, Y.; Xu, D. Codesign of event trigger and feedback policy in robust model predictive control. IEEE Trans. Autom. Control 2019, 65, 302–309. [Google Scholar] [CrossRef]
  38. Shi, T.; Shi, P.; Wu, Z.G. Dynamic event-triggered asynchronous MPC of Markovian jump systems with disturbances. IEEE Trans. Cybern. 2021, 52, 11639–11648. [Google Scholar] [CrossRef]
  39. Hu, L.; Ding, L.; Yang, H.; Liu, T.; Zhang, A.; Chen, S.; Gao, H.; Xu, P.; Deng, Z. LNO-Driven Deep RL-MPC: Hierarchical Adaptive Control Architecture for Dynamic Legged Locomotion. IEEE Trans. Ind. Inform. 2025, 21, 8574–8584. [Google Scholar] [CrossRef]
  40. Korda, M.; Mezić, I. Linear predictors for nonlinear dynamical systems: Koopman operator meets model predictive control. Automatica 2018, 93, 149–160. [Google Scholar] [CrossRef]
  41. Shi, H.; Meng, M.Q.H. Deep Koopman operator with control for nonlinear systems. IEEE Rob. Autom. Lett. 2022, 7, 7700–7707. [Google Scholar] [CrossRef]
  42. Chen, H.; Lv, C. Incorporating ESO into Deep Koopman Operator Modeling for Control of Autonomous Vehicles. IEEE Trans. Control Syst. Technol. 2024, 32, 1854–1864. [Google Scholar] [CrossRef]
  43. Qin, B.; Yan, H.; Zhang, H.; Wang, Y.; Yang, S.X. Enhanced reduced-order extended state observer for motion control of differential driven mobile robot. IEEE Trans. Cybern. 2021, 53, 1299–1310. [Google Scholar] [CrossRef]
  44. Huang, Y.; Wang, J.; Shi, D.; Shi, L. Performance Assessment of Discrete-Time Extended State Observers: Theoretical and Experimental Results. IEEE Trans. Circuits Syst. I Reg. Pap. 2018, 65, 2256–2268. [Google Scholar] [CrossRef]
  45. Khalil, H.K.; Praly, L. High-gain observers in nonlinear feedback control. Int. J. Robust Nonlin. Control 2014, 24, 993–1015. [Google Scholar] [CrossRef]
  46. Girard, A. Dynamic triggering mechanisms for event-triggered control. IEEE Trans. Autom. Control 2014, 60, 1992–1997. [Google Scholar] [CrossRef]
  47. Ding, L.; Han, Q.L.; Ge, X.; Zhang, X.M. An overview of recent advances in event-triggered consensus of multiagent systems. IEEE Trans. Cybern. 2017, 48, 1110–1123. [Google Scholar] [CrossRef] [PubMed]
  48. Lehmann, D.; Henriksson, E.; Johansson, K.H. Event-triggered model predictive control of discrete-time linear systems subject to disturbances. In Proceedings of the European Control Conference (ECC); IEEE: Piscataway, NJ, USA, 2013; pp. 1156–1161. [Google Scholar]
Figure 1. (a) OMM developed by our lab. (b) Experimental setup.
Figure 2. The smoothed randomly generated voltage signals.
Figure 3. (a) Square trajectory tracking results in the xy-plane. (b) Square trajectory tracking results in five DOF. (c) Tracking errors of square trajectory tracking in five DOF. (d) Control voltage inputs of five motors. (e) System uncertainty estimation. (f) The time interval between consecutive triggering instances in the event-triggered mechanism.
Figure 4. (a) IAE calculation results of the square trajectory tracking experiment. (b) MAE calculation results of the square trajectory tracking experiment.
Figure 5. (a) Circle trajectory tracking results in the xy-plane. (b) Circle trajectory tracking results in five DOF. (c) Tracking errors of circle trajectory for five DOF. (d) Control voltage inputs of five motors. (e) System uncertainty estimation. (f) The time interval between consecutive triggering instances in the event-triggered mechanism.
Figure 6. (a) IAE calculation results of the circle trajectory tracking experiment. (b) MAE calculation results of the circle trajectory tracking experiment.
Table 1. Parameters of square trajectory tracking experiment.
Parameter | Value
N_p | 5
N_c | 5
Q | diag(Q_a, …, Q_a, Q_b)
Q_a | diag(5, 5, 10, 5, 6, 0, 0, 0, 0, 0)
Q_b | diag(5980, 6380, 2800, 700, 750, 0, 0, 0, 0, 0)
R | diag(R_c, …, R_c)
R_c | diag(2.5 × 10^3, 2.5 × 10^3, 1 × 10^3, 9.5 × 10^2, 9 × 10^2)
H | [I_5, O_{5×5}]
L | diag(35, 25, 20, 20, 32)
σ_f | [1, 1.5, 1, 2, 1]^T
w_oi | 4, i = 1, 2, 3, 4, 5
w_oi | 5, i = 6, 7, 8, 9, 10
ε1 | 0.26
ε2 | 0.27