2. Summary of the Contributions
The first contribution by Tianqing et al. (Contribution 1) presents a novel control strategy for brushless DC (BLDC) motors that integrates proportional–integral (PI) control with an H∞ (infinity-norm) mixed-sensitivity robust control approach (PI–H∞MIX). The authors first derive the current-loop transfer functions of the BLDC motor and subsequently develop an uncertainty model of the control system that accounts for parameter variations over the entire operating range. Using H∞ mixed-sensitivity theory, appropriate weighting functions are selected, and an augmented generalized feedback system matrix is formulated to synthesize the desired controller. The effectiveness of the proposed control scheme is then validated through comparative simulations against a conventional PI controller.
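The core numerical check behind such a design can be illustrated with a minimal sketch: evaluating the mixed-sensitivity criterion ‖W₁S‖∞ < 1 on a frequency grid for a generic first-order current loop under PI control. All plant parameters, controller gains, and the weighting function below are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# Hedged sketch: grid-based check of ||W1*S||_inf < 1 for an illustrative
# first-order current loop G(s) = K/(tau*s + 1) under PI control
# C(s) = kp + ki/s. W1 is a low-pass weight penalizing low-frequency
# sensitivity. None of these numbers come from the paper.

def sensitivity_peak(K=1.0, tau=0.01, kp=2.0, ki=200.0,
                     w1=lambda s: 10.0 / (s + 100.0)):
    """Return max over the grid of |W1(jw) * S(jw)|, with S = 1/(1 + G*C)."""
    w = np.logspace(-1, 5, 2000)    # frequency grid, rad/s
    s = 1j * w
    G = K / (tau * s + 1.0)         # plant: current-loop transfer function
    C = kp + ki / s                 # PI controller
    S = 1.0 / (1.0 + G * C)         # sensitivity function
    return float(np.max(np.abs(w1(s) * S)))
```

In an actual H∞ synthesis the controller is computed from the augmented plant rather than fixed a priori; the sketch only shows how a candidate design is verified against the weighted-sensitivity bound.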
The second contribution by Czopek and Bernat (Contribution 2) examines two feedback control strategies for an axisymmetric magnetorheological elastomer (MRE) membrane incorporating a permanent magnet. The first approach is based on a linear state observer (LSO) that exploits information from both slow and fast system dynamics. The second employs an active disturbance rejection control (ADRC) scheme with a linear extended state observer (LESO), relying on a reduced-order model that accounts only for fast dynamics. Both control algorithms were experimentally evaluated across a range of parameter settings and demonstrated comparable control performance. Fast Fourier transform (FFT) analysis revealed that measurement noise plays a critical role in control effectiveness, and that performance indices are strongly influenced by the cut-off frequencies of both the observer and the controller. The results suggest that MREs equipped with the proposed control strategies could be utilized in the development of intelligent actuators, such as variable-capacity pumping systems.
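The LESO at the heart of such an ADRC scheme can be sketched in a few lines: for a generic second-order plant y″ = f(t) + b·u, the observer estimates the output, its derivative, and the lumped "total disturbance" f. The bandwidth parameterization of the gains and the test signal below are common illustrative choices, not the paper's model.

```python
import numpy as np

# Hedged sketch of a linear extended state observer (LESO) for a generic
# plant y'' = f(t) + b*u, where f lumps all unmodeled dynamics and
# disturbances. Gains use the common bandwidth parameterization
# (l1, l2, l3) = (3*wo, 3*wo**2, wo**3). Plant, wo, and f are assumptions.

def run_leso(wo=50.0, b=1.0, dt=1e-3, steps=4000):
    l1, l2, l3 = 3 * wo, 3 * wo**2, wo**3
    z = np.zeros(3)                  # estimates of [y, y', f]
    x = np.zeros(2)                  # true plant state [y, y']
    u = 0.0                          # control held at zero: estimation demo only
    for k in range(steps):
        t = k * dt
        f = np.sin(2.0 * t)          # unknown "total disturbance" to estimate
        x = x + dt * np.array([x[1], f + b * u])   # true plant (Euler step)
        e = x[0] - z[0]              # output estimation error
        z = z + dt * np.array([z[1] + l1 * e,
                               z[2] + b * u + l2 * e,
                               l3 * e])
    return z[2], f                   # final disturbance estimate vs. truth
```

In a full ADRC loop the estimate z[2] would be fed back as u = (u₀ − z[2])/b to cancel the disturbance; the sketch isolates the observer, whose accuracy is exactly what the noise and cut-off-frequency analysis in the paper interrogates.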
The third contribution by Lisowski (Contribution 3) presents the synthesis of a nonlinear multi-agent differential game model for safe ship control in maritime collision-avoidance scenarios. The game state equations are formulated using nonlinear dynamic models for the target ship and linear kinematic equations for the encountering vessels. The differential game model is developed in the LabVIEW 2022 environment and evaluated through simulation studies based on a navigational scenario in which the target ship safely passes three encountered ships under conditions of non-cooperative behavior, cooperative interaction, and baseline optimal (non-game) control. Simulation results are illustrated through ship trajectories and time histories of the corresponding game control variables. A distinctive feature of the LabVIEW-based implementation is its capability for real-time, interactive analysis, allowing users to observe the effects of parameter variations on the evolution of the differential game during simulation. The author notes that future enhancements of the model would focus on extending it to larger fleets and assessing the sensitivity of control performance to measurement uncertainties and variations in ship dynamic parameters.
The fourth contribution by Lisowski (Contribution 4) develops a modeling framework for the safe and optimal control of an autonomous surface vehicle operating in the presence of multiple encountering objects. A multi-agent game-theoretic algorithm is proposed to compute safe and optimal trajectories, along with a comparative trajectory optimization algorithm that neglects the maneuverability of other objects. Simulation studies are conducted to assess the optimality of the resulting trajectories under various admissible object strategies. In addition, a sensitivity analysis of the safe control performance (quantified in terms of collision risk) is performed with respect to navigation data inaccuracies and the number of possible object strategies.
The fifth contribution by Guo et al. (Contribution 5) investigates the global prescribed-time stabilization problem for a class of uncertain nonlinear systems expressed in power normal nonlinear system (PNNS) form. By introducing a novel state-scaling transformation (SST), the prescribed-time stabilization problem is converted into an asymptotic stabilization problem for the transformed system. Based on this transformation, a constructive quantized state-feedback control design is developed using the technique of adding a power integrator (API). A notable advantage of the proposed approach is that the settling time can be specified a priori and adjusted to meet practical requirements. The authors note, however, that the proposed controller relies on full state information and is therefore not applicable when only partial state measurements are available. Consequently, the development of control schemes for PNNSs using partial state feedback is identified as a direction for future research. In addition, extending the proposed methodology to multi-agent systems and cyber–physical systems is highlighted as an important and promising research avenue.
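The mechanism can be illustrated with a common form of state-scaling transformation (shown here only for intuition; the paper's exact construction may differ):

```latex
% Illustrative scaling with prescribed settling time T:
\mu(t) = \frac{T}{T - t}, \qquad t \in [0, T),
\qquad \eta(t) = \mu(t)^{m}\, x(t), \quad m > 0 .
```

Because μ(t) → ∞ as t → T⁻, driving the transformed state η to remain bounded (or to zero) asymptotically forces x(t) = μ(t)^{-m} η(t) → 0 within the prescribed time T. Asymptotic stabilization of the η-system thus delivers prescribed-time convergence of the original state.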
The sixth contribution by Reyhanoglu and Jafari (Contribution 6) addresses the robust trajectory tracking problem for a class of uncertain nonlinear dynamical systems subject to unknown disturbances. The approach enhances classical feedback linearization with adaptive mechanisms that compensate for model uncertainties and unknown disturbances without requiring persistent excitation or detailed disturbance models. Gradient-descent learning rules adapt the control parameters and disturbance estimates online by minimizing a cost function derived from the desired closed-loop error dynamics, yielding a closed-loop system capable of robust trajectory tracking while retaining simplicity and interpretability in both implementation and analysis. Stability of the closed-loop nonlinear system is established using pseudo-linear system theory. The learning capability of the control law enables effective mitigation of uncertainties and disturbances, and numerical simulations for a planar PPR robot demonstrate the performance and effectiveness of the proposed approach.
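The idea can be sketched on a scalar toy problem (not the authors' PPR robot or exact algorithm): feedback linearization of x′ = a·x + d + u with unknown gain a and unknown constant disturbance d, where the estimates are updated by gradient descent driven by the tracking error. All gains and signals below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: adaptive feedback linearization for the scalar plant
# x' = a*x + d + u, with unknown a and d. Estimates (a_hat, d_hat) are
# updated by gradient-descent (MIT-rule style) laws driven by the
# squared tracking error. Numbers are illustrative, not from the paper.

def simulate(a=2.0, d=0.5, k=5.0, gamma=10.0, dt=1e-3, steps=15000):
    x, a_hat, d_hat = 0.0, 0.0, 0.0
    for i in range(steps):
        t = i * dt
        r, r_dot = np.sin(t), np.cos(t)        # reference and its derivative
        e = x - r                              # tracking error
        # feedback-linearizing control using the current online estimates
        u = -a_hat * x - d_hat + r_dot - k * e
        x += dt * (a * x + d + u)              # true plant (Euler step)
        # gradient-descent updates of the parameter/disturbance estimates
        a_hat += dt * gamma * e * x
        d_hat += dt * gamma * e
    return e, a_hat, d_hat
```

With the estimates in place, the closed-loop error obeys ė = (a − â)x + (d − d̂) − k·e, so the tracking error decays as the gradient updates absorb the model mismatch, mirroring the compensation mechanism described above.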
The seventh contribution by Chai and Kayacan (Contribution 7) presents the implementation of an adaptive model predictive control (AMPC–) algorithm on a nonlinear simulation of fly-back booster stage separation and reentry. The performance of the proposed controller is compared with baseline AMPC and pole placement controllers under nominal conditions, as well as in the presence of aerodynamic and guidance law uncertainties. The controllers are designed using linear models for longitudinal and lateral dynamics, which are successively linearized based on the current flight conditions. In all scenarios, the AMPC– controller demonstrates superior performance compared to both baseline AMPC and pole placement methods. The algorithm effectively compensates for model mismatches caused by nonlinearities and other uncertainties, while preserving the nominal AMPC performance. Computational performance is also evaluated: the AMPC– algorithm could be updated at up to 1 kHz on a single core of a Raspberry Pi 3 Model B while utilizing an average CPU load of 39%, demonstrating its suitability for deployment on resource-constrained systems.
The eighth contribution by Andrievsky et al. (Contribution 8) addresses the robust control of phase shift during constant-speed rotation of unbalanced rotors in a two-rotor vibratory machine. The authors design a control scheme comprising two proportional–integral (PI) rotor speed controllers with cross-coupling, along with a relay-type signal controller with integral action. The control law is developed using the speed-gradient method. The paper presents results from extensive experimental investigations conducted on a mechatronic vibration testbed, as well as simulation studies based on identified parameters of the drive model. Performance is evaluated for various reference phase-shift signals, including constant, harmonic, and chaotic profiles. The results demonstrate the effectiveness of the proposed control strategy and provide insights into the system’s performance characteristics.
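For context, the speed-gradient method referenced above can be stated in its general form (notation follows the standard speed-gradient literature and is not necessarily the paper's): given a goal function Q(x) and dynamics ẋ = f(x, u, t), the control is moved against the gradient of the rate of change of Q,

```latex
\dot{Q}(x,u,t) = \frac{\partial Q}{\partial t}
               + \frac{\partial Q}{\partial x}\, f(x,u,t),
\qquad
\dot{u} = -\,\Gamma\, \nabla_{u}\, \dot{Q}(x,u,t),
\quad \Gamma = \Gamma^{\top} > 0 ,
```

with the finite form u = u₀ − Γ ∇ᵤQ̇ as a common variant; relay-type laws such as the one used here arise from replacing the gradient with its sign.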
The ninth contribution by Dey and Xu (Contribution 9) studies the distributed swarm control problem for large-scale multi-agent systems (LS-MASs). Unlike classical multi-agent systems, LS-MASs pose significant control design challenges due to the sheer number of agents involved, making the realization of complex missions such as collective swarming more difficult. To overcome these challenges, the authors develop a novel mixed game-theoretic framework integrated with a hierarchical learning strategy. Within this framework, the LS-MAS is modeled as a multi-group, large-scale leader–follower architecture. A cooperative game is formulated to design distributed swarm control laws for the group leaders, while a Stackelberg game is employed to effectively coordinate the interactions between leaders and their large-scale follower groups. Leveraging these leader–follower interactions, a mean-field game approach is used to seamlessly extend collective swarm behaviors from leaders to followers without incurring additional computational burden or communication overhead. Furthermore, a hierarchical learning algorithm is proposed to derive intelligent and optimal distributed swarm control policies for the multi-group leader–follower system. In particular, a multi-agent actor–critic algorithm is first developed to obtain distributed optimal control strategies for the leaders, followed by an actor–critic–mass method to compute decentralized swarm control laws for the large-scale followers. Numerical simulations and Lyapunov-based stability analysis demonstrate the closed-loop system’s performance.
The tenth contribution by Wang et al. (Contribution 10) investigates the fixed-time consensus (FXC) control problem for nonlinear multi-agent systems (MASs). For both leader–follower and leaderless scenarios, two adaptive discontinuous control protocols are developed to achieve the desired consensus objectives. Unlike conventional adaptive control schemes, which often substantially increase the system dimension, the proposed protocols require only two adaptive update laws, enhancing simplicity and practicality in engineering applications. In addition, aside from standard necessary assumptions, no extra conditions are imposed to guarantee fixed-time consensus. Simulation results further demonstrate the effectiveness of the proposed control protocols.
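A minimal single-integrator sketch conveys the standard mechanism behind fixed-time consensus (illustrative only; the paper's protocols are adaptive and address more general nonlinear dynamics): each agent applies two power-law terms on its neighborhood disagreement, one with exponent below one (finite-time contraction near consensus) and one above one (fast contraction far from consensus), which together bound the settling time independently of initial conditions. The graph, gains, and exponents below are assumptions.

```python
import numpy as np

def sig(z, p):
    """Signed power: sign(z) * |z|**p, applied elementwise."""
    return np.sign(z) * np.abs(z) ** p

def run_consensus(alpha=2.0, beta=2.0, p=0.5, q=1.5, dt=1e-3, steps=5000):
    # Illustrative ring-graph Laplacian for 4 single-integrator agents.
    L = np.array([[ 2, -1,  0, -1],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [-1,  0, -1,  2]], dtype=float)
    x = np.array([4.0, -3.0, 1.0, -2.0])   # arbitrary initial states
    for _ in range(steps):
        z = L @ x                          # neighborhood disagreement
        u = -alpha * sig(z, p) - beta * sig(z, q)
        x = x + dt * u                     # single-integrator dynamics
    return x
```

The adaptive protocols in the paper replace the fixed gains (alpha, beta) with just two online update laws, which is what keeps the augmented system dimension low compared with conventional adaptive consensus schemes.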
Finally, the eleventh contribution by Kosnowski et al. (Contribution 11) addresses the key tasks of a multi-layer control architecture for maritime autonomous surface ships (MASSs), including signal filtering and measurement estimation, the selection of appropriate algorithms and control strategies, and the optimization of propulsion allocation. The authors first introduce an evolutionary programming approach for computing optimal route paths. Subsequently, several control and decision-making algorithms are presented, including dynamic programming with neural state constraints, ant colony optimization, and neuro-fuzzy safe control methods. For optimal control, linear matrix inequality (LMI) techniques and predictive line-of-sight approaches are employed. The direct control layer is implemented using a multi-operation switching-based strategy. Computer simulations are used to evaluate and validate the performance of the proposed algorithms.