Search Results (6)

Search Parameters:
Keywords = virtual state-feedback reference tuning

18 pages, 3932 KB  
Article
Control of a Scenedesmus obliquus UTEX 393 Microalgae Culture Using Virtual Reference Feedback Tuning
by Álvaro Pulido-Aponte, Claudia L. Garzón-Castro and Santiago Díaz-Bernal
Appl. Sci. 2026, 16(1), 507; https://doi.org/10.3390/app16010507 - 4 Jan 2026
Viewed by 335
Abstract
Microalgae are photosynthetic microorganisms capable of fixing CO₂ to produce O₂ and a wide variety of metabolites of interest. Attempts have been made to describe their growth dynamics using mathematical models; however, these models fail to fully represent the dynamics of this bioprocess. Therefore, achieving maximum biomass production in the shortest possible time represents a control challenge due to the nonlinear and time-varying dynamics. Some classic control strategies implemented for this bioprocess are totally or partially dependent on a mathematical model, resulting in controllers with low performance, implementation complexity, and limited robustness. This is where the Virtual Reference Feedback Tuning (VRFT) approach becomes relevant, as it is a model-free control strategy. VRFT is based on the iterative generation of a virtual reference with the aim of minimizing steady-state error, without requiring an explicit model of the bioprocess. Its implementation involves the collection of experimental data in open loop, the minimization of a cost function in closed loop, and the linearization of the system around a stable equilibrium point. This work presents the design and implementation of a VRFT-based control strategy applied to the closed cultivation of the microalga Scenedesmus obliquus UTEX 393 in three flat photobioreactors at laboratory scale. The variables controlled using this strategy were temperature, photosynthetically active light intensity, and level. The experimental results showed that the pre-established references were met. A steady-state temperature of 25 ± 0.625 °C, a PAR (Photosynthetically Active Radiation) light intensity of 100 ± 5 µmol·m⁻²·s⁻¹, and level control that ensured a constant volume of the culture medium were achieved. This suggests that VRFT is a viable control alternative for this type of bioprocess under nominal conditions.
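As context for the VRFT workflow this abstract outlines (open-loop data collection, virtual reference generation, cost-function minimization), a minimal numerical sketch is given below. It is not the authors' photobioreactor implementation: the plant, the open-loop data, the first-order reference model and the PI controller structure are all assumptions made for illustration only.

import numpy as np

# Hypothetical open-loop experiment data (u, y); in the paper these would come from
# the photobioreactor, here they are synthetic.
rng = np.random.default_rng(0)
T = 500
u = rng.uniform(0.0, 1.0, T)                 # open-loop excitation
y = np.zeros(T)
for t in range(T - 1):                       # surrogate plant, used only to generate data
    y[t + 1] = 0.95 * y[t] + 0.05 * u[t]

# Desired closed-loop (reference) model: y(t+1) = a*y(t) + (1-a)*r(t)
a = 0.8

# Virtual reference: the r that would have produced the measured y through the reference model
r_virtual = (y[1:] - a * y[:-1]) / (1.0 - a)
e_virtual = r_virtual - y[:-1]               # virtual tracking error

# Fit a PI controller u = Kp*e + Ki*sum(e) by least squares against the recorded input
phi = np.column_stack([e_virtual, np.cumsum(e_virtual)])
theta, *_ = np.linalg.lstsq(phi, u[:-1], rcond=None)
Kp, Ki = theta
print(f"VRFT-tuned PI gains: Kp = {Kp:.3f}, Ki = {Ki:.3f}")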

26 pages, 4653 KB  
Article
Mathematical Modeling and Robust Control of a Restricted State Suspended Biped Robot Implementing Linear Actuators for Articulation Mobilization
by Karla Rincon-Martinez, Isaac Chairez and Wen-Yu Liu
Appl. Sci. 2022, 12(17), 8831; https://doi.org/10.3390/app12178831 - 2 Sep 2022
Cited by 2 | Viewed by 2116
Abstract
The aim of this study is to develop an adaptive automatic control method for solving the trajectory tracking problem for a biped robotic device (BRD), taking into account that each articulation is mobilized by a linear actuator. Each extremity of the BRD has three articulations with a linear actuator enforcing the controlled motion for each articulation. The control problem considers the task of tracking reference trajectories that define a regular gait cycle. The suggested adaptive control form has state-dependent gains that drive the tracking error into an invariant and attractive ellipsoid centered at the origin; meanwhile, the articulation restrictions are satisfied permanently. The stability analysis, based on a controlled Lyapunov function depending on the tracking error, leads to the explicit design of the state-dependent adaptive gains. Taking into account the forward complete setting of the proposed BRD, an output feedback formulation of the given adaptive controller is also developed using a finite-time, robustly convergent differentiator based on the super-twisting algorithm. A virtual dynamic representation of the BRD is used to test the proposed controller using a distributed implementation of the adaptive controller. Numerical simulations corroborate the convergence of the tracking error, while all the articulation restrictions are satisfied using the adaptive gains. To characterize the proposed controller, a sub-optimally tuned regular state feedback controller is used as a comparative approach for validating the suggested design. Among the compared controllers, the analysis of the convergence of the mean square tracking error motivates the application of the designed adaptive variant.
(This article belongs to the Section Robotics and Automation)
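The output-feedback variant of this controller relies on a finite-time differentiator based on the super-twisting algorithm to recover derivative information from measurements. The following Euler-discretized sketch illustrates that generic building block only; the gains, the test signal and the noise level are assumptions, not values from the paper.

import numpy as np

# Euler-discretized super-twisting (robust exact) differentiator on a noisy test signal.
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
rng = np.random.default_rng(1)
f = np.sin(2 * np.pi * t) + 0.01 * rng.standard_normal(t.size)   # noisy measurement

L = 60.0                                   # assumed bound on |d^2 f / dt^2|
lam0, lam1 = 1.5 * np.sqrt(L), 1.1 * L     # standard gain choice for these differentiators
z0, z1 = 0.0, 0.0                          # z0 tracks f, z1 tracks its derivative
fdot_est = np.zeros_like(t)
for k in range(t.size):
    e = z0 - f[k]
    z0 += dt * (z1 - lam0 * np.sqrt(abs(e)) * np.sign(e))
    z1 += dt * (-lam1 * np.sign(e))
    fdot_est[k] = z1                       # finite-time estimate of df/dt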

23 pages, 4778 KB  
Article
Trajectory Tracking within a Hierarchical Primitive-Based Learning Approach
by Mircea-Bogdan Radac
Entropy 2022, 24(7), 889; https://doi.org/10.3390/e24070889 - 28 Jun 2022
Cited by 10 | Viewed by 2440
Abstract
A hierarchical learning control framework (HLF) has been validated on two affordable control laboratories: an active temperature control system (ATCS) and an electrical rheostatic braking system (EBS). The proposed HLF is data-driven and model-free, while being applicable to general tracking control tasks, which are ubiquitous. At the lowermost level, L1, virtual state-feedback control is learned from input–output data, using a recently proposed virtual state-feedback reference tuning (VSFRT) principle. L1 ensures linear reference model tracking (or matching) and, thus, indirect closed-loop control system (CLCS) linearization. On top of L1, an experiment-driven model-free iterative learning control (EDMFILC) is then applied for learning reference input–controlled output pairs, coined as primitives. The primitives' signals at the L2 level encode the CLCS dynamics, which are not explicitly used in the learning phase. Data reusability is applied to derive monotonic and safely guaranteed learning convergence. The learned primitives from the L2 level are finally used at the uppermost L3 level, where a decomposition/recomposition operation enables prediction of the optimal reference input that ensures optimal tracking of a previously unseen trajectory, without relearning by repetitions as at level L2. Hence, the HLF enables control systems to generalize their tracking behavior to new scenarios by extrapolating their current knowledge base. The proposed HLF endows the CLCSs with learning, memorization and generalization features which are specific to intelligent organisms. This may be considered an advancement towards intelligent, generalizable and adaptive control systems.
(This article belongs to the Special Issue Information Theory in Control Systems)
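One plausible reading of the L3 decomposition/recomposition step described in the abstract above, under the assumption that it amounts to a linear projection onto the stored primitives (the paper's actual operator may differ), is sketched below with synthetic data.

import numpy as np

# Express a new desired output trajectory as a combination of stored primitive outputs,
# then recompose the reference input with the same coefficients. All data are synthetic.
rng = np.random.default_rng(2)
T, n_prim = 200, 8
Y_lib = rng.standard_normal((T, n_prim))     # primitive controlled-output trajectories (columns)
R_lib = rng.standard_normal((T, n_prim))     # reference inputs learned for them at level L2

y_new = Y_lib @ rng.standard_normal(n_prim)  # previously unseen desired output trajectory

alpha, *_ = np.linalg.lstsq(Y_lib, y_new, rcond=None)   # decomposition weights
r_pred = R_lib @ alpha                                   # recomposed (predicted) reference input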

25 pages, 2477 KB  
Article
Model Reference Tracking Control Solutions for a Visual Servo System Based on a Virtual State from Unknown Dynamics
by Timotei Lala, Darius-Pavel Chirla and Mircea-Bogdan Radac
Energies 2022, 15(1), 267; https://doi.org/10.3390/en15010267 - 31 Dec 2021
Cited by 14 | Viewed by 3060
Abstract
This paper focuses on validating a model-free Value Iteration Reinforcement Learning (MFVI-RL) control solution on a visual servo tracking system in a comprehensive manner, from theoretical convergence analysis to detailed hardware and software implementation. Learning is based on a virtual state representation reconstructed from input-output (I/O) system samples under nonlinear observability and unknown dynamics assumptions, while the goal is to ensure linear output reference model (ORM) tracking. Secondly, a competitive model-free Virtual State-Feedback Reference Tuning (VSFRT) controller is learned from the same I/O data using the same virtual state representation, demonstrating the framework's learning capability. A model-based two degrees-of-freedom (2DOF) output feedback controller serving as a comparison baseline is designed and tuned using an identified system model. With similar complexity and a linear controller structure, MFVI-RL is shown to be superior, confirming that the model-based design issues of a poorly identified system model and the resulting control performance degradation can be solved in a direct data-driven style. Apart from establishing a formal connection between output feedback control and state feedback control, and also between classical control and artificial intelligence methods, the results also point out several practical trade-offs, such as I/O data exploration quality and the leverage of control performance with data volume, control goal and controller complexity.
(This article belongs to the Special Issue Intelligent Control for Future Systems)
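Both controllers in this paper learn over a virtual state built from lagged input-output samples instead of the unknown plant state. A minimal sketch of such a representation follows; the lag orders and the logged data are placeholder assumptions, not the paper's choices.

import numpy as np

def virtual_state(u_log, y_log, k, n_y=3, n_u=3):
    """Virtual state at sample k: the last n_y outputs stacked with the last n_u inputs."""
    return np.concatenate([y_log[k - n_y + 1:k + 1], u_log[k - n_u + 1:k + 1]])

rng = np.random.default_rng(3)
u_log = rng.standard_normal(300)   # placeholder logged inputs
y_log = rng.standard_normal(300)   # placeholder logged outputs (e.g. image-feature coordinates)
states = np.array([virtual_state(u_log, y_log, k) for k in range(3, 299)])   # one row per sample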

26 pages, 5120 KB  
Article
Virtual State Feedback Reference Tuning and Value Iteration Reinforcement Learning for Unknown Observable Systems Control
by Mircea-Bogdan Radac and Anamaria-Ioana Borlea
Energies 2021, 14(4), 1006; https://doi.org/10.3390/en14041006 - 15 Feb 2021
Cited by 26 | Viewed by 3886
Abstract
In this paper, a novel Virtual State-Feedback Reference Tuning (VSFRT) and Approximate Iterative Value Iteration Reinforcement Learning (AI-VIRL) are applied for learning linear reference model output (LRMO) tracking control of observable systems with unknown dynamics. For the observable system, a new state representation in terms of input/output (IO) data is derived. Consequently, the Virtual Reference Feedback Tuning (VRFT)-based solution is redefined to accommodate virtual state feedback control, leading to an original stability-certified Virtual State-Feedback Reference Tuning (VSFRT) concept. Both VSFRT and AI-VIRL use neural network controllers. We find that AI-VIRL is significantly more computationally demanding and more sensitive to the exploration settings, while leading to inferior LRMO tracking performance when compared to VSFRT. Transfer learning of the VSFRT control as an initialization for AI-VIRL does not help either. State dimensionality reduction using machine learning techniques such as principal component analysis and autoencoders does not improve on the best learned tracking performance; however, it trades off the learning complexity. Surprisingly, unlike AI-VIRL, the VSFRT control is one-shot (non-iterative) and learns stabilizing controllers even under poor open-loop exploration, proving to be superior in learning LRMO tracking control. Validation on two nonlinear, coupled, multivariable complex systems serves as a comprehensive case study.
(This article belongs to the Special Issue Intelligent Control for Future Systems)
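To illustrate how the VRFT idea is extended to virtual state feedback in the abstract above, the sketch below inverts an assumed first-order reference model on logged outputs to obtain a virtual reference and then regresses the logged inputs on the (virtual state, virtual reference) pair. The paper trains stability-certified neural network controllers; this linear least-squares stand-in only shows the data flow, on synthetic data.

import numpy as np

rng = np.random.default_rng(4)
T, n_y, n_u = 400, 2, 2
u = rng.standard_normal(T)
y = np.zeros(T)
for t in range(T - 1):                      # surrogate plant, used only to generate data
    y[t + 1] = 0.9 * y[t] + 0.1 * u[t]

a = 0.7                                     # reference model y(t+1) = a*y(t) + (1-a)*r(t)
r_virtual = (y[1:] - a * y[:-1]) / (1.0 - a)

k0 = max(n_y, n_u)
S = np.array([np.r_[y[t - n_y:t], u[t - n_u:t]] for t in range(k0, T - 1)])   # virtual states
Phi = np.column_stack([S, r_virtual[k0:]])
theta, *_ = np.linalg.lstsq(Phi, u[k0:T - 1], rcond=None)   # controller gains: state feedback + reference feedforward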

24 pages, 2026 KB  
Article
Data-Driven Model-Free Tracking Reinforcement Learning Control with VRFT-based Adaptive Actor-Critic
by Mircea-Bogdan Radac and Radu-Emil Precup
Appl. Sci. 2019, 9(9), 1807; https://doi.org/10.3390/app9091807 - 30 Apr 2019
Cited by 46 | Viewed by 5381
Abstract
This paper proposes a neural network (NN)-based control scheme in an Adaptive Actor-Critic (AAC) learning framework designed for output reference model tracking, as a representative deep-learning application. The control learning scheme is model-free with respect to the process model. AAC designs usually require an initial controller to start the learning process; however, systematic guidelines for choosing the initial controller are not offered in the literature, especially in a model-free manner. Virtual Reference Feedback Tuning (VRFT) is proposed for obtaining an initially stabilizing NN nonlinear state-feedback controller, designed from input-state-output data collected from the process in an open-loop setting. The solution thus offers systematic guidelines for the initial controller design. The resulting suboptimal state-feedback controller is next improved under the AAC learning framework by online adaptation of a critic NN and a controller NN. The mixed VRFT-AAC approach is validated on a multi-input multi-output nonlinear constrained coupled vertical two-tank system. Discussions on the control system behavior are offered, together with comparisons with similar approaches.
(This article belongs to the Special Issue Advances in Deep Learning)
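The online critic/actor adaptation stage described above can be illustrated with a generic incremental actor-critic using linear function approximation. This is not the paper's VRFT-initialized NN scheme; the toy environment, gains and reward below are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(5)
n_s = 4
w = np.zeros(n_s)                  # critic weights: V(s) ~= w @ s
theta = np.zeros(n_s)              # actor weights: u(s) ~= theta @ s
alpha_c, alpha_a, gamma, sigma = 0.05, 0.01, 0.95, 0.1

def env_step(s, u):
    """Placeholder one-step dynamics with a quadratic regulation cost as negative reward."""
    s_next = 0.9 * s + 0.1 * u + 0.01 * rng.standard_normal(n_s)
    return s_next, -float(s @ s + 0.1 * u * u)

s = rng.standard_normal(n_s)
for _ in range(2000):
    noise = sigma * rng.standard_normal()
    u = float(theta @ s) + noise                       # exploratory action
    s_next, reward = env_step(s, u)
    td_error = reward + gamma * float(w @ s_next) - float(w @ s)
    w += alpha_c * td_error * s                        # critic TD(0) update
    theta += alpha_a * td_error * noise * s            # actor update along the explored direction
    s = s_next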
