Article

Agentic AI for Real-Time Adaptive PID Control of a Servo Motor

by Tariq Mohammad Arif 1,* and Md Adilur Rahim 2
1 Department of Mechanical Engineering, Weber State University, Ogden, UT 84408, USA
2 Department of Civil & Environmental Engineering, Louisiana State University, Baton Rouge, LA 70803, USA
* Author to whom correspondence should be addressed.
Actuators 2025, 14(9), 459; https://doi.org/10.3390/act14090459
Submission received: 4 August 2025 / Revised: 18 September 2025 / Accepted: 18 September 2025 / Published: 20 September 2025
(This article belongs to the Special Issue Advanced Technologies in Actuators for Control Systems)

Abstract

This study explores a novel approach to using large language models (LLMs) in the real-time Proportional–Integral–Derivative (PID) control of a physical system, the Quanser QUBE-Servo 2. We investigated whether LLMs, used with an Artificial Intelligence (AI) agent workflow platform, can participate in the live tuning of PID parameters through natural language instructions. Two AI agents were developed: a PID Control Agent that monitors system performance and decides whether tuning is necessary, and an Optimizer Agent that updates the PID gains using either a guided system prompt or a self-directed free approach within a safe parameter range. The LLM integration was implemented through Python programming and Flask-based communication between the AI agents and the hardware system. Experimental results show that LLM-based tuning approaches can effectively reduce standard error metrics, such as IAE, ISE, MAE, and RMSE. This study presents one of the first implementations of real-time PID tuning powered by LLMs, and it has the potential to become a novel alternative to classical control, as well as machine learning or reinforcement learning-based approaches. The results are promising for using agentic AI in heuristic-based tuning and the control of complex physical systems, marking a shift toward more human-centered, explainable, and adaptive control engineering.

1. Introduction

The integration of Artificial Intelligence (AI) into control systems has shown significant potential in recent years [1]. Many researchers and industry leaders are currently implementing AI into control systems to enhance performance, simplify control processes, and enable human–machine interactions (HMIs) [2,3,4]. This trend is driven by advancements in computer vision, deep learning, and reinforcement learning, as well as the rise of large language models (LLMs), and has opened up new opportunities for innovative applications across various engineering fields [5,6]. For example, modern robotic and control systems have evolved from completing set tasks in controlled environments to displaying intelligent behaviors by performing master–slave operations, learning from stored datasets, and adapting through real-time human–robot and human–machine interactions [7,8,9]. Other AI-based control systems incorporate multiphase reinforcement learning frameworks for PID tuning, demonstrating how deterministic policy gradients and neural networks can enable dynamic self-tuning [10,11,12]. Evolutionary, neural network, computer vision-based feedback, and heuristic approaches were also used to enhance tuning by incorporating Artificial Intelligence [13,14,15]. In a recent work, researchers utilized LLM agents/controllers to guide the adaptive compensator and dynamically adjust robot control policies as needed with minimal manual intervention [16,17].
Although numerous types of controllers exist, PID controllers have been among the most widely used in industrial automation for decades, as they are simple and easy to implement. Over the years, many PID tuning strategies have been developed, including the heuristic rules of Ziegler–Nichols [18], Cohen–Coon [19,20], and Chien–Hrones–Reswick [21,22], as well as relay auto-tuning [23,24,25] and Internal Model Control (IMC)-based designs [26]. These strategies remain valuable in modern practice, but they share common limitations. Once tuned, the controllers typically remain static and require manual adjustments to handle disturbances, model drift, or nonlinearities. In addition, they offer little interpretability for human operators beyond numerical gain values. While modern control methods such as LQR [27,28,29,30], Model Predictive Control (MPC) [31], and H∞ control [32,33,34] address many of the limitations of classical tuning approaches by offering systematic optimization and robustness guarantees, they remain mathematically intensive and often inaccessible to practitioners without specialized training. Moreover, these approaches still lack the ability to interact intuitively with human operators. In this context, recent advances in large language models (LLMs) and agentic AI integrations create new possibilities: controllers that not only tune parameters adaptively but also engage in natural human–machine dialogue, explaining decisions and responding to user input during operation. Beyond dialogue, such AI agents can act with more independence by deciding when to run or stop machines based on various streams of feedback. This moves the system beyond rigid, pre-programmed control logic to a more intelligent and context-aware decision-making tool.
Recent studies show that, when LLMs are embedded in feedback-driven control systems, they can provide explainability and adaptability beyond what classical optimization achieves. For example, systems such as InCoRo combine an LLM controller with scene understanding to adjust actions in dynamic environments [35]. Other frameworks, such as BTLA, utilize LLMs to interpret system states and support operators in real-time [36]. More recently, methods such as Think, Act, Learn, and interpretable behavior-tree control show that LLMs can reflect on execution feedback, change parameters, and offer semantic reasoning [37]. In these contexts, the LLM acts less as a “solution generator” and more as an adaptive collaborator that improves established strategies with real-time understanding and flexibility.
In this study, we used an LLM-based AI agent to control a servo motor via adaptive PID tuning. While most work in this field is carried out on simulated robotic environments or abstracted dynamics, in this research, we deployed an LLM-based PID tuning agent in a real-time hardware setup. This hands-on approach addresses practical challenges such as real-time constraints, sensor noise, actuator delays, and safety limits.
To fine-tune the PID parameters of the system, we embedded LLM agents within an automated workflow built in n8n [38]. This method demonstrates real-time decision-making and closed-loop adjustments by the LLM, connecting high-level AI reasoning to low-level motor control. To the best of our knowledge, this marks one of the first known instances of using LLMs to autonomously tune PID parameters for a physical system during real-time operation, rather than generating control policies offline or in simulations. This research also validates LLM-based control and demonstrates its practical viability, extending beyond offline control, theoretical models, and digital twins.

2. Materials and Methods

2.1. Hardware

To implement real-time Proportional–Integral–Derivative (PID) tuning using AI agents, a direct-drive DC motor with an integrated optical encoder is used. The hardware must provide accurate measurements and rapid actuation to maintain stable and responsive real-time control. For this purpose, we used the Quanser QUBE-Servo 2 system, which integrates a DC motor mechanism with position feedback through an encoder [39]. This device can be operated with either a load disk or a pendulum attachment. In this study, we utilized the load disk with the rotary servo platform. Figure 1 shows a schematic diagram of the QUBE-Servo 2 setup with the inertia disk. Here, the DC motor shaft is connected to a load hub and disk, which together define the rotational inertia of the system (Figure 1).

2.1.1. Electromechanical Parameters

In the Quanser QUBE-Servo 2 system, the electromechanical parameters are as follows: terminal resistance R_m = 8.4 Ω, torque constant k_t = 0.042 N·m/A, motor back electromotive force (emf) constant k_m = 0.04 V/(rad/s), rotor inertia J_m = 4.0 × 10⁻⁶ kg·m², rotor inductance L_m = 1.16 mH, load hub mass m_h = 0.0106 kg, load hub radius r_h = 0.0111 m, load hub inertia J_h = 0.6 × 10⁻⁶ kg·m², disk load mass m_d = 0.053 kg, and disk load radius r_d = 0.0248 m.
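For quick reference in scripts, these values can be collected in a plain Python dictionary. This is our own illustration, not part of the Quanser documentation; the equivalent-inertia calculation at the end (J = ½mr² for a solid disk about its axis) is likewise an added example.

```python
# QUBE-Servo 2 electromechanical parameters (values from Section 2.1.1).
QUBE_SERVO_2_PARAMS = {
    "R_m": 8.4,      # terminal resistance, ohm
    "k_t": 0.042,    # torque constant, N*m/A
    "k_m": 0.04,     # back-EMF constant, V/(rad/s)
    "J_m": 4.0e-6,   # rotor inertia, kg*m^2
    "L_m": 1.16e-3,  # rotor inductance, H
    "m_h": 0.0106,   # load hub mass, kg
    "r_h": 0.0111,   # load hub radius, m
    "J_h": 0.6e-6,   # load hub inertia, kg*m^2
    "m_d": 0.053,    # disk load mass, kg
    "r_d": 0.0248,   # disk load radius, m
}

# Illustrative: combine rotor, hub, and disk into one equivalent load inertia.
J_d = 0.5 * QUBE_SERVO_2_PARAMS["m_d"] * QUBE_SERVO_2_PARAMS["r_d"] ** 2
J_eq = QUBE_SERVO_2_PARAMS["J_m"] + QUBE_SERVO_2_PARAMS["J_h"] + J_d
```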

2.1.2. Servo Motor and Model

The Quanser QUBE-Servo 2 system uses a high-performance coreless DC motor and an optical incremental encoder to make the platform responsive for real-time controls. The motor has a smooth and rapid torque response, and the optical encoder can record up to 2048 quadrature states per revolution, providing very accurate velocity and position feedback during control operation.
The dynamic behavior of the QUBE-Servo 2 system can be described using a first-order linear differential equation derived through first-principles modeling. From that dynamic behavior, we can find a voltage-to-speed transfer function, Ω_m(s)/V_m(s). For position control, this voltage-to-speed transfer function can be augmented with an integrator to obtain a voltage-to-position transfer function, Θ_m(s)/V_m(s). However, in this study, we used a heuristic tuning approach, where an AI agent iteratively adjusts the PID gains (K_p, K_i, and K_d) based on observed performance metrics (rise time, overshoot, steady-state error, etc.). This approach does not require an explicit mathematical model and, therefore, the dynamic modeling of the servo is not presented here.

2.2. PID Control

To fine-tune the servo system’s performance, we utilized the classical PID controller, which regulates the motor voltage based on the tracking or position error. The system was evaluated using a square wave input with an amplitude of ±π/3 and a period of 12 s. The position error was determined by calculating the difference between the desired signal and the actual response. The controller then uses the PID gains to generate the input, u(t), to the system plant. In the time domain, the input is generated based on the error, as shown in Equation (1).
u(t) = K_P \, e(t) + K_I \int_0^t e(\tau) \, d\tau + K_D \, \frac{de(t)}{dt}
Here, e(t) is the error function, u(t) is the input to the system plant after the PID operation, and K_P, K_I, and K_D are the proportional, integral, and derivative gains, respectively. A simplified block diagram of this control is presented in Figure 2.
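As a minimal sketch, the discrete form of Equation (1) used in a fixed-rate loop might look like the following. The function name and state layout are illustrative, not the paper's actual code; the derivative here is unfiltered (the filtered version is described in Section 2.3).

```python
def pid_step(error, state, K_p, K_i, K_d, dt):
    """One discrete update of Equation (1).

    `state` carries the running error integral and the previous error
    between successive calls of the control loop.
    """
    state["integral"] += error * dt                    # integral term accumulation
    derivative = (error - state["prev_error"]) / dt    # raw (unfiltered) derivative
    state["prev_error"] = error
    return K_p * error + K_i * state["integral"] + K_d * derivative

# One step of a 500 Hz loop (dt = 1/500 s) with the paper's initial gains:
state = {"integral": 0.0, "prev_error": 0.0}
u = pid_step(0.5, state, K_p=0.1, K_i=0.0, K_d=0.0, dt=0.002)
```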

2.3. Derivative Filter and Frequency Control

In the real-time PID control of electromechanical systems, the derivative term is highly susceptible to noise due to the amplification of high-frequency components. To suppress this noise, we applied a discrete-time adaptive low-pass filter after the derivative calculation. This filter employs a first-order exponential smoothing method that balances noise suppression and responsiveness. The derivative filter first computes the raw derivative as the difference between the current and previous error values divided by the elapsed time, Δt (Equation (2)):
d_{\mathrm{raw}}[n] = \frac{de(t)}{dt} \approx \frac{e(t) - e(t - \Delta t)}{\Delta t}
Here, d_raw[n] is the current raw derivative. After this step, the current filtered derivative, d_filtered[n], is calculated using Equation (3).
d_{\mathrm{filtered}}[n] = \alpha \, d_{\mathrm{filtered}}[n-1] + (1 - \alpha) \, d_{\mathrm{raw}}[n]
Here, α is a smoothing factor determined dynamically at each time step using Equation (4).
\alpha = \frac{T_d}{T_d + \Delta t}
In Equation (4), T_d is the filter time constant (set to 0.005 s).
This adaptive filtering approach accounts for variations in the control loop’s timing and provides more stable derivative feedback in real-time environments.
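Equations (2)–(4) can be combined into a few lines of Python. This is a hedged sketch of the filtering step, not the authors' implementation; the function signature is our own.

```python
def filtered_derivative(e, e_prev, d_filt_prev, dt, T_d=0.005):
    """Adaptive low-pass filtered derivative, per Equations (2)-(4).

    `dt` is the measured elapsed time of the current loop iteration, so the
    smoothing factor alpha adapts to timing jitter at every step.
    """
    d_raw = (e - e_prev) / dt        # Equation (2): raw finite-difference derivative
    alpha = T_d / (T_d + dt)         # Equation (4): recomputed each time step
    return alpha * d_filt_prev + (1.0 - alpha) * d_raw  # Equation (3)
```

Note that a larger T_d relative to dt pushes alpha toward 1, which weights the previous filtered value more heavily and smooths the derivative more aggressively.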
A frequency control mechanism is also integrated into the algorithm to regulate how often data logging and plotting are performed. Since the PID control loop of QUBE-Servo 2 is set to operate at a high frequency (500 Hz), by default, it is capable of doing precise actuations and fast updates. However, data logging in dynamic plots and exporting performance metrics are set to 50 Hz to maintain smooth operations and plotting. We have used a counter to track the number of control loop iterations, and only every 10th iteration triggers a data-capturing operation. Based on our experiment, this rate mitigates heavy input–output operations of the high-frequency servo control system and supports efficient optimization by the agentic framework. The combination of adaptive derivative filtering and a frequency control mechanism can perform effective real-time PID fine-tuning. For a sensitive and fast electromechanical system like QUBE-Servo 2, these steps are crucial for reliable and stable operations.
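The counter-based decimation described above can be sketched as follows; the constants match the stated 500 Hz control rate and 50 Hz logging rate, while the variable names are illustrative.

```python
CONTROL_HZ = 500
LOG_HZ = 50
DECIMATION = CONTROL_HZ // LOG_HZ  # log on every 10th control iteration

logged = []
for iteration in range(1500):  # e.g., 3 s of control at 500 Hz
    # ... PID update and actuation run on every iteration ...
    if iteration % DECIMATION == 0:
        logged.append(iteration)  # data capture / plotting at 50 Hz only
```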

2.4. Calculation of Error Metrics

To evaluate the effectiveness of the real-time PID control algorithm with the QUBE-Servo 2 system, we used a set of error metrics based on the system’s response to square wave inputs. These metrics provide insight into the system’s behavior during both the transient phase and steady state. They are critical for real-time PID tuning through the AI agent and for automatically evaluating control performance. In our study, the program records the time, the reference position (desired), and the actual position at each control step. Once the system stabilizes after each positive step change, it calculates the other key performance metrics, as shown in Section 2.4.1, Section 2.4.2, Section 2.4.3, Section 2.4.4 and Section 2.4.5.

2.4.1. Overshoot and Steady-State Error

Peak overshoot is measured by calculating the amount by which the system output exceeds the desired setpoint during the transient phase. Using the maximum measured position, y_p(t), and the input, r(t), for a square wave, the percentage overshoot is determined by Equation (5).
\mathrm{Overshoot\,\%} = \frac{y_p(t) - r(t)}{r(t)} \times 100
The steady-state error (SSE) is measured by calculating the difference between the measured output, y(t), and the input signal after the system has settled (Equation (6)).
\mathrm{SSE} = r(t) - y(t)
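A possible implementation of Equations (5) and (6); the helper names are ours.

```python
def overshoot_percent(y_peak, r):
    """Equation (5): percentage overshoot relative to the step amplitude r."""
    return (y_peak - r) / r * 100.0

def steady_state_error(r, y_settled):
    """Equation (6): residual tracking error after the response settles."""
    return r - y_settled

# A response peaking at 1.5 rad for a 1.0 rad step overshoots by 50%.
```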

2.4.2. Rise Time and Settling Time

Rise time indicates how fast the system responds, and it is measured by calculating the time for the response to increase from 10% to 90% of the input signal (Equation (7)).
\mathrm{Rise\ Time} = t_{90\%} - t_{10\%}
The settling time was determined by calculating the duration after which the system response remained within 5% of the input signal for at least 0.5 s. In some cases, especially when the SSE was large, the system did not reach a settling point.
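Rise and settling time can be estimated from logged samples along these lines. This is a sketch under the stated 10–90% and ±5%/0.5 s definitions; the edge handling (e.g., responses that never cross a threshold) is simplified.

```python
def rise_time(t, y, target):
    """Equation (7): time from 10% to 90% of the step target (monotonic rise assumed)."""
    t10 = next(ti for ti, yi in zip(t, y) if yi >= 0.1 * target)
    t90 = next(ti for ti, yi in zip(t, y) if yi >= 0.9 * target)
    return t90 - t10

def settling_time(t, y, target, band=0.05, hold=0.5):
    """First time after which y stays within +/- band*target for `hold` seconds."""
    for i, ti in enumerate(t):
        if t[-1] < ti + hold:
            break  # not enough data left to confirm settling
        window = [abs(yj - target) <= band * abs(target)
                  for tj, yj in zip(t[i:], y[i:]) if tj <= ti + hold]
        if window and all(window):
            return ti
    return None  # never settled (e.g., large steady-state error)
```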

2.4.3. Integral Absolute Error (IAE)

The Integral Absolute Error (IAE) is determined by integrating the absolute value of the instantaneous error over time. It measures the total size of the error, regardless of its direction (Equation (8)).
\mathrm{IAE} = \int_0^T |e(t)| \, dt \approx \sum_{i=1}^{n} |e_i| \cdot \Delta t
Here, e(t) = r(t) − y(t) is the control error, r(t) is the reference position, and y(t) is the measured position. This metric is very sensitive to prolonged errors [40,41].

2.4.4. Integral Squared Error (ISE)

The Integral Squared Error (ISE) gives more weight to large errors by squaring the instantaneous error before integrating. This measure is more responsive to peaks in the response and punishes high overshoots or oscillations (Equation (9)).
\mathrm{ISE} = \int_0^T e(t)^2 \, dt \approx \sum_{i=1}^{n} e_i^2 \cdot \Delta t
ISE emphasizes larger errors by giving them greater weight, which makes it useful for PID tuning in situations where high precision is required. However, it does not consider the timing of errors, and because squaring de-emphasizes small deviations, it can under-penalize persistent low-amplitude oscillations [40,41].

2.4.5. Mean Absolute Error (MAE)

The Mean Absolute Error gives the average of all absolute errors during the evaluation period (Equation (10)).
\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} |e_i|
Here, n is the total number of error samples or data points. This metric directly measures the extent to which the system deviates from its intended path.

2.4.6. Root Mean Squared Error (RMSE)

The Root Mean Squared Error is a widely used metric that aggregates the square of the errors and takes the square root of their mean (Equation (11)).
\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} e_i^2}
Here, n is the total number of error samples or data points. Unlike MAE, RMSE penalizes larger errors more and is usually more sensitive to outliers. It provides a reliable measure of control precision when occasional deviations need to be recorded.

2.4.7. Integral Time-Weighted Absolute Error (ITAE)

The Integral Time-Weighted Absolute Error (ITAE) is a refined metric. It multiplies the absolute error by the time (t) before integration. This means that errors that persist longer in the control window are penalized more severely (Equation (12)).
\mathrm{ITAE} = \int_0^T t \cdot |e(t)| \, dt \approx \sum_{i=1}^{n} t_i \cdot |e_i| \cdot \Delta t
Here, n is the total number of error samples or data points. This metric promotes the quick removal of errors and a smooth settling process. It is commonly used in controller design for systems where late-stage errors cause more disruption than early errors [41].

2.4.8. Integral Time-Weighted Squared Error (ITSE)

The Integral Time-Weighted Squared Error (ITSE) applies a time-weighted penalty, but using squared error terms (Equation (13)).
\mathrm{ITSE} = \int_0^T t \cdot e(t)^2 \, dt \approx \sum_{i=1}^{n} t_i \cdot e_i^2 \cdot \Delta t
Here, n is the total number of error samples or data points. ITSE is more sensitive to long-lasting, high-magnitude errors [41].
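All six integral and average metrics (Equations (8)–(13)) can be computed from the same logged arrays; the helper below is an illustrative sketch assuming a fixed sampling interval Δt.

```python
import math

def error_metrics(t, e, dt):
    """Discrete approximations of Equations (8)-(13) from logged samples.

    `t` and `e` are equal-length lists of sample times and errors; `dt` is
    the (assumed constant) logging interval.
    """
    n = len(e)
    return {
        "IAE":  sum(abs(ei) for ei in e) * dt,                      # Eq. (8)
        "ISE":  sum(ei ** 2 for ei in e) * dt,                      # Eq. (9)
        "MAE":  sum(abs(ei) for ei in e) / n,                       # Eq. (10)
        "RMSE": math.sqrt(sum(ei ** 2 for ei in e) / n),            # Eq. (11)
        "ITAE": sum(ti * abs(ei) for ti, ei in zip(t, e)) * dt,     # Eq. (12)
        "ITSE": sum(ti * ei ** 2 for ti, ei in zip(t, e)) * dt,     # Eq. (13)
    }
```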

2.5. Agentic AI Framework

We have implemented a hierarchical, agentic AI framework for the PID tuning of the Quanser QUBE-Servo 2 platform. The servo system is controlled in real time using a Python program, and a Flask server (a Web Server Gateway Interface (WSGI) application) is used to interface with the AI agents. The Flask server handles incoming HTTP requests and executes corresponding Python functions to serve web pages or APIs for data communication [42,43]. The goal of the agent is to fine-tune the proportional (K_p), integral (K_i), and derivative (K_d) gains to minimize overshoot, rise time, and steady-state error (SSE). The program starts with arbitrary PID values (K_p = 0.1, K_i = 0.0, K_d = 0.0) that require further tuning. The control loop of the program runs at 500 Hz, and it can update the K_p, K_i, and K_d values while the system is running if directed by the AI agents through the Flask server.
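A minimal sketch of such a Flask interface is shown below. The route paths and variable names are our assumptions; the paper does not list the program's exact endpoints, and the real implementation would also need thread-safe access to the state shared with the 500 Hz control loop.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# State shared with the control loop (assumed names; a lock is omitted for brevity).
gains = {"Kp": 0.1, "Ki": 0.0, "Kd": 0.0}
metrics = {"sse": 0.0, "overshoot": 0.0, "rise_time": 0.0, "settling_time": 0.0}

@app.route("/metrics", methods=["GET"])
def get_metrics():
    # Read endpoint for performance metrics (backs an n8n GetMetrics-style node).
    return jsonify(metrics)

@app.route("/pid", methods=["GET"])
def get_pid():
    # Read endpoint for the current gains (backs an n8n GetPID-style node).
    return jsonify(gains)

@app.route("/pid", methods=["POST"])
def update_pid():
    # Write endpoint: apply new gains while the system is running.
    for key in ("Kp", "Ki", "Kd"):
        if key in request.json:
            gains[key] = float(request.json[key])
    return jsonify(gains)
```

With this layout, the agents tune the live controller purely through HTTP GET/POST calls, without touching the control loop's code.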
As a reference input, the program generates a square wave signal with an amplitude of ±π/3 radians and a period of 12 s. To follow the signal, the QUBE-Servo 2 system measures the motor position, applies exponential smoothing to reduce noise, and calculates the control voltage based on the PID formula and a filtered derivative term. The filtered derivative uses the smoothing factor α = T_d/(T_d + Δt), which helps to prevent the high-frequency amplification that often occurs in noisy settings. For safe operation, the voltage commands are clipped to the safe operating range of ±8 V before they are sent to the servo motor.
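The reference signal and voltage clipping described above could be implemented as follows; the phase convention of the square wave (positive in the first half period) is our assumption.

```python
import math

AMPLITUDE = math.pi / 3   # rad, square wave amplitude
PERIOD = 12.0             # s, square wave period
V_LIMIT = 8.0             # V, safe actuation range

def reference(t):
    """Square-wave position reference: +pi/3 for the first half period, -pi/3 after."""
    return AMPLITUDE if (t % PERIOD) < PERIOD / 2 else -AMPLITUDE

def clip_voltage(u):
    """Keep motor commands inside the safe +/-8 V range before actuation."""
    return max(-V_LIMIT, min(V_LIMIT, u))
```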

2.6. Real-Time PID Tuning

2.6.1. n8n Integration and PID Control Agent

To implement the AI-based control operation, we used n8n, an open-source, low-code AI workflow automation platform [38]. It allows for the flexible coordination of AI agents and RESTful API services. n8n can also integrate with custom APIs, connect with large language models (LLMs), and has built-in workflow logic that supports event-driven and conditional actions. This architecture eliminates the need for complex backend scripting and lets users interact with the AI agents in real time. In our study, n8n acts as the orchestration layer for two AI agents: (1) the PID Control Agent and (2) the Optimizer Agent. The “PID Control Agent” is activated through a chat interface and works as a supervisor for the “Optimizer Agent”. It can interact with human users, retrieve the system’s performance or error metrics through a “GetMetrics” node, and evaluate them according to requirements or set thresholds. This agent (the PID Control Agent) runs on OpenAI’s o3-mini model [44]. This small reasoning model was chosen for its low cost, high intelligence, and quick performance. Our aim was to conduct frequent and rapid experiments without incurring high inference costs during testing cycles. The outline of the PID Control Agent in n8n is shown in Figure 3.
Based on the prompt or chat input provided, when certain conditions are met, the PID Control Agent determines whether the system requires further tuning. If needed, this agent can trigger the “Optimizer Agent” through a sub-workflow.

2.6.2. Optimizer Agent

The Optimizer Agent is a callable agent that optimizes the PID parameters for multiple objectives, as defined by the parent agent’s chat input or its system prompt. This agent uses OpenAI’s GPT-4.1-mini model [45], which has good reasoning abilities and handles structured control objectives more effectively than lighter models. The Optimizer Agent begins by gathering real-time system performance metrics and the current PID settings using RESTful GET requests at the “GetMetrics” and “GetPID” nodes, respectively. Based on the system’s behavior, it uses the rationale given in the system prompt of the embedded OpenAI chat model and suggests new gain values. Then, it sends these updated parameters ( K p , K i , or K d ) to the main QUBE-Servo 2 control system through the “UpdatePID” node. Communication between these workflows relies on n8n’s structured data exchange and tool chaining system, which ensures that updates happen in sync without disrupting the real-time control loop (Figure 4).
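The Optimizer Agent's read–decide–write cycle can be sketched with injected transport functions standing in for the “GetMetrics”, “GetPID”, and “UpdatePID” nodes. The numeric adjustment rule below is a deliberate stand-in for the LLM's reasoning (in the paper, gpt-4.1-mini decides the adjustments), and the metric keys and limit values are assumptions drawn from Sections 2.6.3 and 3.1.

```python
def optimize_step(get_metrics, get_pid, update_pid, limits):
    """One optimizer cycle: read metrics and gains, propose new gains, push them.

    `get_metrics`, `get_pid`, and `update_pid` are callables wrapping the HTTP
    calls of the corresponding n8n nodes; `limits` holds the safe gain ranges.
    """
    m = get_metrics()                    # "GetMetrics" node
    g = dict(get_pid())                  # "GetPID" node
    if abs(m["sse"]) > 0.3:              # illustrative SSE trigger threshold
        g["Kp"] = min(g["Kp"] * 1.25, limits["Kp"][1])
    if m["overshoot"] > 10.0:            # damp large overshoot with more Kd
        g["Kd"] = min(g["Kd"] + 0.0125, limits["Kd"][1])
    update_pid(g)                        # "UpdatePID" node
    return g
```

Keeping the transport injectable makes the decision step testable offline, while the real workflow wires these callables to the Flask endpoints through n8n's tool chaining.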

2.6.3. System Prompts

In the large language model (LLM)-based AI framework, the system prompt is a critical element that drives reasoning during operation. The prompt is read during workflow execution and sent to the underlying LLM, such as OpenAI’s o3-mini or gpt-4.1-mini. Since our goal is to control PID parameters, the prompt includes instructions to analyze the current state of the system using real-time performance metrics, such as SSE, overshoot, rise time, and settling time, along with the current PID gain values. It also provides reasoning to make decisions based on evaluation results or set thresholds that guide the operation. For example, the PID control agent’s prompt, when triggered by chat, first checks the current states of the system through the “GetMetrics” node. If the SSE exceeds 0.3 radians and the overshoot is below −30%, it sends the task to the Optimizer Agent for further fine-tuning.
The complete system prompt used in the “PID Control Agent,” which utilizes OpenAI’s “o3-mini” in the background, is provided in Appendix A.1. This prompt serves as a compact and structured communication link between real-time QUBE-Servo 2 operation and the AI agent’s reasoning engine. This ensures that the model has enough context to understand control goals, determine if tuning is necessary, and receive new gain values. The prompt in the “Optimizer Agent” is designed to manage the live servo system using nodes such as “GetMetrics,” “GetPID,” “UpdatePID,” and a set function (ToMainWorkflow) to send information back to the “PID Control Agent.” The “GetMetrics” node collects error or performance metrics calculated by the Python program after each positive square wave. As shown in Figure 4, the “GetPID” node gathers current gain values from the running system. The “UpdatePID” node sends the updated K p , K i , or K d values to the QUBE-Servo 2 system.
Appendix A.2 presents a system prompt that provides the AI agent (Optimizer Agent) with instructions for fine-tuning PID parameters using live feedback. We also used a different Optimizer Agent that functions without specific system prompt guidance. This agent relies on its LLM knowledge engine to adjust the PID values within set safe limits. The prompt for this agent (without system instruction) is given in Appendix A.3.

2.7. Overall Workflow

The overall PID tuning workflow using the AI agents can be described by the flowchart shown in Figure 5. The main workflow agent (which uses the o3-mini model) calls an Optimizer Agent to perform the tuning operation.
Data transfer from the AI agent to the servo system and from the servo system to the AI agent occurs through the Flask server while the system is running. The final output message explaining PID operation can also be sent back to the parent workflow (PID Control Agent). Based on this information, the PID Control Agent’s chat model block can engage in natural conversation, explain updated values of K p , K i , and K d applied to the physical system, and perform additional tuning if requested by the user.

3. Results

3.1. Real-Time Autonomous Tuning

This section presents the results of real-time autonomous PID tuning conducted with AI agents. We implemented two different tuning strategies for optimization, both using the gpt-4.1-mini LLM model. The Optimizer Agents can collect performance/error metrics from the live Quanser QUBE-Servo 2 system to evaluate its performance, make informed decisions, and adjust control parameters as necessary. In the first strategy, the Optimizer Agent followed specific instructions included in its system prompt, which provided guidelines for adjusting parameters (see Appendix A.2). In the second strategy, the agent had more freedom (see Appendix A.3): it only received a safe operating range for the PID gains (K_p: 0.1 to 2.5, K_i: 0 to 1.0, and K_d: 0 to 0.25) and was responsible for fine-tuning based on live system feedback and its own reasoning. The outcomes varied across experimental runs. However, the AI agent always stayed within the safe limits and demonstrated intelligent decision-making throughout the tuning process.

3.1.1. LLM Model with Tuning Strategy

Figure 6 shows the live PID tuning process directed by the system prompt guidelines (Appendix A.2). As seen in Figure 6a,b, the LLM first gradually increases the K_p value. It then reduces K_p while increasing the K_i and K_d values over time.
Figure 6c,d shows that, around 104 s, the agent finds a good set of tuned parameters and stops making further adjustments. The final optimized values for this run, which correspond to Run 2 in Table 1, were K_p = 0.125, K_i = 0.10, and K_d = 0.025.
The final tuned PID values varied across different runs. Table 1 shows sample results for some of these runs. In addition to live tuning, the Optimizer Agents also generate outputs that include explanations or justifications for the tuning strategies used. For example, the AI agent returned the following output from Run 3:
“Initial PID values started with Kp=0.1, Ki=0, Kd=0, which resulted in a high steady-state error and significant negative overshoot. Incremental steps were taken: by increasing Kp, the SSE improved, though overshoot stayed high. Introducing Ki and Kd gradually while fine-tuning Kp helped move the overshoot closer to zero and reduced SSE to around 0.05, though initial changes were inconsistent. Further adjustments were made by increasing Kp to improve rise time (while monitoring overshoot) and then fine-tuning with Ki and Kd, which led to a balanced response. Final PID values achieved a steady-state error around 0.21 and overshoot slightly negative but near zero, with a low rise time of about 0.18 s. The settling time remained somewhat high, which could be due to conservative system response or minor noise effects. Final PID values: Kp = 0.2375, Ki = 0.125, Kd = 0.0375”

3.1.2. LLM Model Without Tuning Strategy

Figure 7 shows the live PID tuning process that was not directed by a system prompt. Figure 7a,b shows that the AI agent increased the PID parameters gradually at the beginning. After analyzing some of the performance metrics, it increased K_p more than the K_i and K_d values and, at around 42 s, found the optimized fine-tuned values.
We observed that, when the AI agent operated without a tuning strategy, it typically found optimal parameters relatively fast (e.g., 50 to 80 s). However, in some cases, it aborted the tuning process, indicating “system instability” or “still room for improvement”. For example, in one failed run (without a tuning prompt/strategy), the Optimizer Agent returned the output shown in Appendix B.1. Table 1 shows PID tuning results for three experimental runs with and without system prompt instructions.
Table 1. PID fine-tuning results with and without system prompt instructions.

        Fine Tuning—System Prompt      Fine Tuning—Without System Prompt
Run     Kp       Ki      Kd            Kp      Ki      Kd
Run 1   0.3375   0.175   0.0750        0.35    0.10    0.10
Run 2   0.1250   0.10    0.0250        0.80    0.125   0.05
Run 3   0.2375   0.125   0.0375        0.35    0.10    0.10
Note: Each row shows PID gains obtained from different experimental runs.

3.2. Overall Performance

Figure 8, Figure 9 and Figure 10 show the system’s performance during the tuning process in experimental Run 3 (Table 1). The system showed similar behavior in the other experimental runs where a final fine-tuned value was determined. In Figure 8a,b, it is clear that both tuning methods, whether guided by system prompts or not, were effective in reducing the Integral Absolute Error (IAE) and Integral Squared Error (ISE).
Figure 9a,b shows that both PID tuning strategies (with or without system prompts) successfully lowered the Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE).
Figure 10a,b shows how Integral Time-Weighted Absolute Error (ITAE) and Integral Time-Weighted Squared Error (ITSE) changed during the tuning process. These metrics focus more on errors that happen in later stages due to small oscillations.

3.3. Human Interaction with the System

This section demonstrates the human–machine interaction (HMI) aspect of the study and how the integration of agentic AI reshapes the way that users interact with a control system. For example, in situations where a human user watches how the system behaves and makes qualitative assessments, an LLM-based output is more intuitive than traditional tuning approaches. In our sample runs, when the LLM produced outputs such as, “Current tuning attempts lowered SSE but caused unacceptable overshoot. Gains kept increasing, yet system stability remains doubtful. Recommend aborting tuning to prevent instability.” or “It feels too jittery when approaching the setpoint,” it conveys more than just whether the controller succeeds or fails. Classical or modern controllers lack the ability to interpret such statements or determine subsequent actions. Additionally, without agentic AI integration, having only an LLM in the loop, a control engineer would need to observe system performance and interpret outputs, such as “recommend aborting tuning to prevent instability” or “jittery,” into a manual technical adjustment.
Figure 11 presents an example of HMI for a sample run without a tuning strategy. Initially, the agent determined the optimal values of K_p, K_i, and K_d as 0.22, 0.1, and 0.02, respectively. When the AI agent was asked why it lowered the K_p value during operation, it provided a detailed explanation (Figure 11).
Figure 12 and Figure 13 illustrate two additional interactions, where the questions posed were, “If the load torque increases by 10%, how should the controller change?” and “What if the sampling rate is doubled? Do you think it is a good idea for the current system?” These examples demonstrate that the AI agent interacts with a human user and can provide suggestions for improvement, such as, “Would you like to simulate this adjustment using our optimizer sub-workflow to see what updated PID parameters might look like for the increased load?”
The complete responses from the AI agent corresponding to Figure 11, Figure 12 and Figure 13 are provided in Appendix C.1.

3.4. Video Demonstration

This section presents two video demonstrations of experimental runs (Table 2). Video 1 shows real-time PID tuning of the servo system using the prompt from Appendix A.2 for the Optimizer Agent. In Video 2, the prompt from Appendix A.3 is used, allowing the model greater freedom to fine-tune parameters within a safe range. After the tuning process, the Optimizer Agent produced a text response and sent it to the main PID control workflow; this response is included at the end of each video.

4. Discussion

To assess the effectiveness of agentic AI-based PID tuning with LLMs, we conducted experiments on the Quanser QUBE-Servo 2 platform. We compared two modes of agentic operation: one in which the AI model (gpt-4.1-mini) received structured guidance in the system prompt and one without such instructions. Both agents iteratively adjusted the PID gains (Kp, Ki, and Kd), as outlined in the sample runs in Table 1. We analyzed the results of one run (Run 3) using various time-domain error metrics: IAE, ISE, MAE, RMSE, ITAE, and ITSE. Other experimental runs showed similar performance, except in a few cases (about 1 in 10 runs) where the AI operating without system prompt instructions failed to produce a final PID value. An example of such a failed output is provided in Appendix B.1.
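The time-domain error metrics used in this analysis can be computed directly from a sampled error trace. The following is a minimal Python sketch; the function and variable names are illustrative and not taken from the paper's implementation:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integral of y over x (kept explicit for portability
    across NumPy versions)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def error_metrics(t, e):
    """Standard time-domain error metrics from a sampled error trace e(t)."""
    t, e = np.asarray(t, float), np.asarray(e, float)
    abs_e, sq_e = np.abs(e), e ** 2
    return {
        "IAE":  _trapz(abs_e, t),            # integral of |e|
        "ISE":  _trapz(sq_e, t),             # integral of e^2
        "MAE":  float(abs_e.mean()),         # mean absolute error
        "RMSE": float(np.sqrt(sq_e.mean())), # root mean squared error
        "ITAE": _trapz(t * abs_e, t),        # time-weighted |e|
        "ITSE": _trapz(t * sq_e, t),         # time-weighted e^2
    }
```

Because ITAE and ITSE weight the error by elapsed time, late-stage ripples contribute far more to them than early transients, which is consistent with the behavior discussed below.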
Figure 8a,b shows the changes in IAE and ISE over time as tuning progresses. Both AI agents, with and without the system prompt’s tuning strategy, consistently produced lower error values after the initial transient phase. Although both setups displayed a similar peak error early in the process (around 20 s), they quickly reached a more stable, lower-error state.
Similarly, Figure 9a,b, which shows MAE and RMSE, outlines a rapid decline in error in both cases, after which both metrics stabilize at lower values. The RMSE metric, which heavily penalizes large deviations, confirms the stability of the tuned controller. The peaks seen in the run without prompt instructions indicate periods of less effective tuning and transient oscillations.
Figure 10a,b highlights the time-weighted error accumulation through ITAE and ITSE. These metrics emphasize the importance of minimizing long-term errors. Although both tuning methods start at similar levels, the controller tuned with system prompt instructions shows a flatter ITAE growth trend by the end of the experiment. Here, ITAE and ITSE both exhibited an increasing trend due to device sensitivity: in high-performance systems like the QUBE-Servo 2, small oscillations near the settling point can produce significant time-weighted accumulated errors later in operation. This problem can be mitigated by using a more guided control method, such as classical PD control or proportional-plus-rate feedback. It also indicates that even slight signal noise in the servo can cause the derivative term to behave erratically. By design, ITAE and ITSE penalize long-lasting, late-stage oscillations more than early or transient deviations. In our experiments, even when IAE, ISE, MAE, and RMSE decreased significantly (showing strong average control), the small ripples near the setpoint accumulated over time, causing ITAE and ITSE to remain comparatively high. This does not necessarily indicate poor overall control performance, but rather that the QUBE-Servo 2’s residual noise and oscillatory behavior dominate these time-weighted measures. Further refinement of the derivative filter time constant (Td) or more conservative derivative tuning could mitigate those tiny oscillations near the setpoint (i.e., improve ITAE and ITSE). However, there is a trade-off: too much filtering slows the response (unsuitable for live tuning) and reduces the derivative term’s stabilizing benefit.
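The filtering trade-off described above can be sketched as a discrete PID with a first-order low-pass filter on the derivative term. This is an illustrative implementation only, not the controller used in the experiments; the class name and parameters are assumptions:

```python
class FilteredPID:
    """Discrete PID with a first-order low-pass filter on the derivative
    term. A larger filter time constant tf suppresses the noise-driven
    ripples that inflate ITAE/ITSE, at the cost of a slower derivative
    response (the trade-off noted in the text)."""

    def __init__(self, kp, ki, kd, tf, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.alpha = dt / (tf + dt)   # filter coefficient in (0, 1]
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0
        self.d_filtered = 0.0

    def update(self, error):
        self.integral += error * self.dt
        d_raw = (error - self.prev_error) / self.dt
        # exponential smoothing of the raw (noisy) derivative
        self.d_filtered += self.alpha * (d_raw - self.d_filtered)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * self.d_filtered)
```

As tf grows, alpha shrinks and the derivative term reacts more slowly; tf near zero recovers the raw, noise-sensitive derivative.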
The differences in PID gains across runs illustrate the tuning behavior of the agents. The system prompt led the agent to choose a more conservative Kp (0.2375 compared to 0.35) and a much lower Kd (0.0375 versus 0.10). With structured instructions (e.g., Appendix A.2), the optimizer followed a stepwise procedure that produced more conservative and stable parameter updates. In contrast, without such guidance (Appendix A.3, “Step 1: Use your knowledge to fine-tune Kp, Ki, and Kd values”), the LLM relied on its own internal knowledge to infer tuning strategies within the safe operating range. While this unguided reasoning allowed faster, more aggressive adjustments, it also introduced greater variability in the final tuning outcomes.
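The safe operating range that bounds both agents' proposals (Kp: 0.1 to 2.5, Ki: 0 to 1.0, Kd: 0 to 0.25, per Appendix A.2 and A.3) can also be enforced on the host side before any gain reaches the motor. The following guard is a minimal sketch; the function name and return convention are hypothetical:

```python
# Safe ranges taken from Appendix A.2/A.3; the guard itself is an
# illustrative host-side check, not the paper's actual code.
SAFE_RANGES = {"Kp": (0.1, 2.5), "Ki": (0.0, 1.0), "Kd": (0.0, 0.25)}

def validate_gains(gains):
    """Return the proposed gains if every one lies in its safe range;
    otherwise return None to signal that the tuning step should abort."""
    for name, (low, high) in SAFE_RANGES.items():
        value = gains.get(name)
        if value is None or not (low <= value <= high):
            return None
    return gains
```

A host-side check of this kind complements the prompt-level instruction to abort when gains leave the safe range, so an out-of-range LLM proposal can never be applied to the hardware.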
Section 3.3 shows that the AI agent can parse informal user queries and infer their meaning in the context of control engineering. For example, in one test, the AI agent recommended “aborting tuning to prevent instability”. In another test, it responded with an explanation and a plan: “I noticed the system is oscillating at the setpoint, showing jittery behavior. This might happen because the derivative term is amplifying noise. I will reduce Kd slightly to smooth the response.” This ability to interpret descriptions and connect them to control concepts offers significant potential for applications in human–robot interaction (HRI) and broader human–machine interaction [46]. Beyond interpreting system performance, such approaches can also reduce the dependence on specialized technical or programming expertise. For example, Vemprala et al. showed that ChatGPT can interpret user commands and generate corresponding robot control code, while also enabling iterative refinement through dialogue [47].
Some recent studies offer critical perspectives on the use of LLMs, cautioning about the validity of chain-of-thought reasoning, the scalability of retrieval-augmented generation (RAG), and the reliability of LLM outputs in programming [48,49,50]. However, our framework is fundamentally different from RAG and open-ended LLM applications. Instead of retrieving unstructured documents from a knowledge base, our system continuously grounds the LLM in real-time control metrics obtained from the live servo motor. Using the n8n workflow, these control metrics (e.g., IAE, ITSE, overshoot, settling time) are directly linked to the physical system. This ensures that the LLM’s recommendations are always constrained by measurable outcomes, unlike RAG systems, which depend on retrieval quality and the semantic interpretation of external text. In this way, our method uses feedback-based grounding instead of retrieval-based grounding, making it more reliable for real-time PID tuning.
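This feedback-based grounding loop can be illustrated with a small Flask service of the kind the n8n "GetMetrics" and "GetPIDs" nodes could poll. The sketch below is a hedged illustration under stated assumptions; the routes, field names, and update mechanism are assumptions, not the paper's actual implementation:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Latest values from the running controller. In the real system, the
# control loop would update these continuously; field names are illustrative.
latest_metrics = {"IAE": 0.0, "ITSE": 0.0, "Overshoot": 0.0, "SettlingTime": 0.0}
current_gains = {"Kp": 0.2, "Ki": 0.1, "Kd": 0.02}

@app.route("/metrics")
def get_metrics():
    # Endpoint a "GetMetrics"-style workflow node would poll
    return jsonify(latest_metrics)

@app.route("/pid", methods=["GET", "POST"])
def pid():
    # GET: a "GetPIDs"-style node reads gains; POST: the optimizer writes them
    if request.method == "POST":
        current_gains.update(request.get_json())
    return jsonify(current_gains)
```

Because every agent query passes through endpoints backed by live measurements, the LLM's view of the system is always tied to the physical plant rather than to retrieved text.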
The use of AI agents also makes the control system more transparent and user-friendly. It can provide natural-language explanations for why certain gains were chosen or changed. Consider the following output: “Increased Kp from 1.5 to 1.8 to reduce steady-state error, as the system was settling below the target. Slightly decreased Kd to 0.2 to avoid amplifying noise, given the observed small oscillations.” This kind of rationale is highly valuable in practice; it is essentially the controller documenting its reasoning in a way that a human operator (or even a non-expert) can understand. Traditional tuning methods provide no such feedback.
In this study, the agentic AI framework serves as an advisor or expert, capable of explaining the tuning process. Users can question the controller’s decisions and receive meaningful responses, transforming the system from a black box into an interactive assistant for control.

5. Conclusions

This study introduces a novel method for real-time PID tuning that places large language models (LLMs) in a live control framework. We demonstrated that an LLM can directly interact with a physical servo motor, the Quanser QUBE-Servo 2, by guiding the tuning process. The AI agent had two modes: one driven by a structured system prompt and the other based on safe ranges of PID values without specific instructions. Both tuning strategies produced good results in lowering performance metrics such as IAE, ISE, MAE, and RMSE. However, they were less effective in reducing ITAE and ITSE, primarily due to high-frequency oscillations and noise in the control system, especially during long run times. Although reducing the tiny end-stage oscillations to improve ITAE and ITSE is possible through stronger filtering or conservative derivative tuning, this comes with the trade-off of a slower response, which limits suitability for live tuning.
The live video demonstrations of agentic PID control (Section 3.4) highlight the potential of LLM-driven agents to bridge high-level AI reasoning and low-level, real-time control tasks. While traditional AI-based control often relies on offline training, human-in-the-loop correction, or model-based approaches, this study is, to our knowledge, the first attempt to use LLMs for the direct, real-time control of a physical system, and it confirms the practical viability of agentic AI.
This work investigates the role of LLMs not just as tools for optimization but also as interactive agents in control engineering. They can adjust parameters in real time while allowing natural-language interaction with the operator. This ability tackles two persistent issues in both traditional and modern control methods: (i) the lack of clear communication and dialogue between the controller and the human, and (ii) the difficulty of adapting to disturbances or changing environments without specialized skills. Our study demonstrated that LLMs can serve as both a tuning engine and a human–machine interface, paving the way for a new approach to human-centered control.
Future research in this area should incorporate usability studies of human–machine interaction, different prompting strategies, LLM training, and integration methods to improve system usability, reliability, and performance. Overall, this work is an important step in integrating heuristic, language-based reasoning into real-time machine control systems.

Author Contributions

T.M.A. led the development of this project, designed and implemented the integration between the AI agents and the QUBE-Servo 2 system, conducted experiments, and wrote the manuscript. M.A.R. contributed to the design and development of the AI agent and PID control integration framework. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

This study does not include a dataset. However, information on experimental runs is available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to thank the Department of Mechanical Engineering at Weber State University for supplying the Quanser QUBE-Servo 2 used in the experiments.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LLM: Large Language Model
AI: Artificial Intelligence
PID: Proportional–Integral–Derivative
SSE: Steady-State Error
IAE: Integral Absolute Error
ISE: Integral Squared Error
MAE: Mean Absolute Error
RMSE: Root Mean Squared Error
ITAE: Integral Time-Weighted Absolute Error
ITSE: Integral Time-Weighted Squared Error
HMI: Human–Machine Interaction
HRI: Human–Robot Interaction

Appendix A

Appendix A.1

• PID Control Agent—System Prompt
#Role
You are a Control Systems Engineer AI Agent. Your task is to run a
subworkflow called “OptimizerAgent” when asked to tune Kp, Ki, or Kd
parameters or when asked to tune PID parameters.
#Context
First, get the current error/performance metrics from the “GetMetrics” node.
#Strategy
If you find that the SSE is more than 0.3 and Overshoot is less than
negative 30 (-30) or don’t see any other extraordinary errors, then
you execute the sub-workflow “OptimizerAgent”. After the tuning process is
complete, the “OptimizerAgent” will return a brief description of the
tuning procedure and the final Kp, Ki, and Kd values to you.
#Final Output
When you get data back from “OptimizerAgent” print that description and
final PID values in the chatbox.

Appendix A.2

• Optimizer Agent—System Prompt
# Run if you get triggered from “When Executed by Another Workflow”.
#Role
You are a Control Systems Engineer AI Agent, and your task is to perform
intelligent PID (Proportional, Integral, Derivative) tuning for a Quanser
Qube 2 servo motor in real-time.
#Behavior
You behave like a professional yet curious control systems engineer. You
explain your reasoning at various stages, write down your tuning logic, and
continually refine your approach based on feedback from the system. Your
tone is technical, concise, and focused on optimization.
#Model Thinking
Include your reasoning and show your thought process clearly. Include your
opinion in each step, explaining how good or bad this tuning step is towards
fine-tuning the PID parameters.
For example,
Reasoning:
- Current SSE is 0.3, which is too high.
- Increasing Kp will help reduce SSE.
- Ki is already at 0.1, keeping it constant.
- Kd can be slightly increased to reduce oscillation.
Final Answer:
{
  “Kp”: 0.25,
  “Ki”: 0.1,
  “Kd”: 0.05
}
Be logical (according to control system theory) when you write down the
reasoning.
#Context and Tuning Strategy
First, get the current PID value from the GetPIDs node. Check if “Kp” “Ki”
and “Kd” values are in the safe range or not. The safe ranges of these
parameters are given below:
Kp : 0.1 to 2.5
Ki : 0 to 1.0
Kd : 0 to 0.25
At any point in the tuning, if you get Kp, Ki, and Kd outside of the safe
range, abort the operation.
## You follow a progressive, data-driven strategy:
Step 1. Increase Kp Gain in steps of 0.025.
Step 2. After the increment, get error/performance metrics from the
GetMetrics node. You will get:
o Overshoot
o Steady-State Error (SSE)
o Rise Time
o Settling Time
o IAE, ISE, MAE, RMSE, ITAE, ITSE
And get the Kp, Ki, and Kd values from the GetPIDs node.
Step 3. After the first update, does SSE decrease, and does Overshoot
increase? If yes, keep increasing Kp until the Overshoot is more than 30%
(positive 30).
Step 4: When the Overshoot is more than 30% (positive 30), start introducing
the “Ki” and “Kd” values. At the same time, decrease the “Kp” value by
0.0125. You can start with small values (increase Ki by 0.05 or by 0.025,
increase Kd by 0.025 or 0.0125, and decrease Kp by 0.0125). Continuously
update Kp, Ki and Kd with small values. Every time, after updating Kp, Ki,
and Kd, get the error/metrics and check the SSE and Overshoot. If SSE > 0.05
and Overshoot >15, you continue to increase Ki and Kd, and decrease Kp. Once
you get Overshoot less than 15, stop updating Kp, but keep increasing Ki and
Kd until SSE is less than 0.05.
Step 5: Once you see that SSE is less than 0.05, check the rise time from
GetMetrics. If the rise time is greater than 2, increase the Kp by 0.05,
until you get that condition (i.e., rise time < 2). But if you get a rise
time less than 0.001, then ignore this value and keep checking the rise time
from GetMetrics.
Step 6: After you fix the rise time, you are very close to the final fine-
tuned value. You should check the Overshoot and SSE error again. If
overshoot is greater than 5%, increase Kd by 0.0125. Do this until you get
an Overshoot below 5%. And if SSE is greater than 0.03, increase Ki by 0.05.
Do this until you get an SSE less than 0.03.
Step 7: After this, wait about 10 s (for the system to settle) and
check the SSE again from the GetMetrics node. If you see that the SSE is
again above 0.03, increase Ki by 0.0125 until the SSE is less than 0.03.
Step 8: After this, wait about 10 s and check the Overshoot from the
GetMetrics node. If you see Overshoot increasing unexpectedly or more than
2%, decrease the value of Kp (by 0.1), until you get an overshoot less than
2.
Step 9: When the program stops or you get an error, end the tuning and send
the reasoning of your tuning and final Kp, Ki, and Kd values to the
“OutputMainworkflow” node.

Appendix A.3

• Optimizer Agent—Without System Prompt/Instructions
# Run if you get triggered from “When Executed by Another Workflow”.
#Role
You are a Control Systems Engineer AI Agent, and your task is to perform
intelligent PID (Proportional, Integral, Derivative) tuning for a Quanser
Qube 2 servo motor in real-time.
#Behavior
You behave like a professional yet curious control systems engineer. You
explain your reasoning at various stages, write down your tuning logic, and
continually refine your approach based on feedback from the system. Your
tone is technical, concise, and focused on optimization.
#Model Thinking
Include your reasoning and show your thought process clearly. Include your
opinion in each step, explaining how good or bad this tuning step is towards
fine-tuning the PID parameters.
#Context and Tuning Strategy
First, get the current PID value from the GetPIDs node. Check if “Kp” “Ki”
and “Kd” values are in the safe range or not. The safe ranges of these
parameters are given below:
Kp : 0.1 to 2.5
Ki : 0 to 1.0
Kd : 0 to 0.25
At any point in the tuning, if you get Kp, Ki, and Kd outside of the safe
range, abort the operation.
## You follow a progressive, data-driven strategy
Step 1: Use your knowledge to fine-tune Kp, Ki, and Kd values.
Step 2: Update parameters within the safe range.
Step 3. After updating, get error/performance metrics from the
GetMetrics node. And get the Kp, Ki, and Kd values from the GetPIDs node.
Then decide what you need to change to make a good response (i.e., minimum
overshoot and rise time, minimum SSE, etc.)
Step 4: Once you get a good response, wait 10 s and evaluate the
performance again. This is to make sure that the system is stable and
performance is still good. And then abort the tuning operation.
## Make sure to collect updated system performance from the “GetMetrics”
node before you send your result to the main workflow.
## If you don’t find good/optimum values and the system response is not
improving, then abort the tuning operation.

Appendix B

Appendix B.1

• Output—Without System Prompt/Instructions
Initial SSE was very high (0.57) and overshoot was negative (indicative of
undershoot).
Increasing Kp, Ki, and Kd helped reduce SSE to around 0.25 but caused
overshoot to increase drastically (30.95%).
Further increase in Kp, Ki, and Kd reduced SSE slightly to 0.16 and
overshoot to 24.22%, but still too high.
Rise time and settling time are inconsistent or zero, indicating the system
might not be stable or responding well in the simulated time frame.
Overshoot is far above the desired less than 2%, and SSE is still above
0.03.
Given the deterioration in overshoot and unsettled rise/settling time, the
tuning is poor.
Increasing the derivative gain slightly has not mitigated the oscillation.
Continuing to increase gains might destabilize the system further.
Final Thoughts:
Current tuning attempts led to reduced SSE but unacceptable overshoot.
Gains increased progressively but system stability is questionable.
Recommend aborting tuning to avoid unstable operation.
Best to restart with a different approach or manual tuning.
Aborting tuning operation due to poor system response and unstable tuning
attempts. The current PID tuning approach, while marginally reducing the
steady-state error, has resulted in excessive overshoot and questionable
system dynamics. It is recommended to revisit the tuning strategy with a
different approach if further adjustment is desired.”
Since the tuning operation was aborted due to instability issues, no final
PID values (Kp, Ki, Kd) have been set.

Appendix C

Appendix C.1

• Question: Why Did You Reduce Kp?
Reducing Kp was necessary because the initial configuration with a higher
Kp was causing significant overshoot (in our case, around -56.05%). While
a higher Kp can help the system respond more quickly, it also makes the
response more aggressive, which can lead to large overshoots and oscillatory
behavior. By lowering Kp, the system’s response becomes more gradual, thereby
reducing the peak overshoot. We then balanced the decreased proportional gain
with an appropriate amount of Ki (and a small Kd) to ensure that the
steady-state error remained low. This combination ultimately provided a more
stable and controlled response, achieving better overall performance.
• Question: If Load Torque Increases by 10%, How Should the Controller Change?
An increase in load torque effectively acts as an additional disturbance on
the system. This means the controller will need a little more “oomph” to
drive the plant back to its setpoint, especially to address any additional
steady-state error that might occur. Here are a few options to consider:
• Increase Kp slightly: A higher proportional gain can help the system
respond faster to deviations caused by the extra torque. However, be
cautious, as too high a gain might lead to increased overshoot or instability.
• Adjust Ki: Since load disturbances can cause a lingering offset
(steady-state error), you might also consider increasing the integral gain a
bit to counteract the extra error that accumulates over time. This helps
eliminate any residual steady-state error.
• Review Kd: Depending on how the load change affects the system dynamics
(e.g., increased oscillations), the derivative term might be adjusted to help
dampen any possible oscillatory behavior.
In practice, these adjustments are best verified through simulation or
experimentation. One might run a new tuning cycle (using tools like our
OptimizerAgent2) with the updated load condition to fine-tune the PID values.
The target would be to achieve a balance where the system responds swiftly
without overshoots while maintaining minimal steady-state error.
Would you like to simulate this adjustment using our optimizer sub-workflow
to see what updated PID parameters might look like for the increased load?
• Question: What if the Sampling Rate Is Doubled? Do You Think It Is a Good Idea for the Current System?
Doubling the sampling rate means the controller will update twice as often,
which can offer more precise control over the system dynamics. However,
whether it’s a good idea for the current system depends on a few factors:
Processing Capability: A higher sampling rate increases the computational
load. If the hardware can handle the faster rate without delays, that’s
beneficial. If not, you may introduce computation delays or overload the
system.
Noise Sensitivity: A faster sampling rate can also capture more
high-frequency
noise. Without proper filtering or noise mitigation, this may lead to control
signal fluctuations.
System Dynamics: For systems with fast dynamics, increasing the sampling rate
can better capture the behavior and help achieve a more accurate control
response. However, if the system dynamics are relatively slow, doubling
the rate might not yield significant performance improvements and
might even complicate the control strategy.
Tuning Adjustments: With a higher sampling rate, the discrete equivalent of
your controller will change. The gains (Kp, Ki, and Kd) might need
re-tuning to account for the new sampling period, as the discretized
control law will differ from the original one.
In summary, while doubling the sampling rate can improve
performance-especially in fast dynamic systems-it’s important
to ensure that your processing hardware can support it and
that you’re prepared to retune your controller to account for
the change. For the current system, if the benefits (improved response
and precision) outweigh the potential risks (increased noise
sensitivity and computational demands), then it might be a
good idea. Otherwise, a careful analysis or simulation of these
changes would be advisable before making the transition.

References

  1. Romero, M.L.; Suyama, R. Agentic AI for Intent-Based Industrial Automation. arXiv 2025, arXiv:2506.04980. [Google Scholar] [CrossRef]
  2. Cui, C.; Liu, J.; Hui, P.; Lin, P.; Zhang, C. GenControl: Generative AI-Driven Autonomous Design of Control Algorithms. arXiv 2025, arXiv:2506.12554. [Google Scholar] [CrossRef]
  3. Arif, T.M. Deep Learning on Embedded Systems: A Hands-On Approach Using Jetson Nano and Raspberry Pi, 1st ed.; John Wiley & Sons: Hoboken, NJ, USA, 2025. [Google Scholar]
  4. Mueller, B. AI in My PID Loops? It’s More Likely Than You’d Expect. Autom. World 2025. Available online: https://www.automationworld.com/control/article/55279765/control-system-integrators-association-csia-ai-in-industrial-process-control-enhancing-thepid-loop (accessed on 7 April 2025).
  5. Arif, T.M.; Rahim, M.A. Deep Learning for Engineers, 1st ed.; Routledge: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
  6. Arif, T.M. Introduction to Deep Learning for Engineers: Using Python and Google Cloud Platform; Synthesis Lectures on Mechanical Engineering; Springer: Cham, Switzerland, 2020; p. xv + 93. [Google Scholar] [CrossRef]
  7. Barlow, E.; Arif, T.M.; Bentley, P.; Hiatt, H.; Call, C. An Iterative Actuation Method for Master-Slave Telemanipulation by Humanoid Robot Arm. In Proceedings of the ASME International Mechanical Engineering Congress and Exposition, New Orleans, LA, USA, 29 October–2 November 2023; Volume 6: Dynamics, Vibration, and Control. [Google Scholar] [CrossRef]
  8. Intelligent Autonomous Systems Group, TU Darmstadt. Intelligent Autonomous Systems Group. Web Page, 2025. Available online: https://www.ias.informatik.tu-darmstadt.de/ (accessed on 30 August 2025).
  9. Brown, E.F.D.; Xie, S.Q. Effectiveness of Intelligent Control Strategies in Robot-Assisted Rehabilitation—A Systematic Review. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 1828–1840. [Google Scholar] [CrossRef]
  10. Ding, Y.; Ren, X.; Zhang, X.; Liu, X.; Wang, X. Multi-Phase Focused PID Adaptive Tuning with Reinforcement Learning. Electronics 2023, 12, 3925. [Google Scholar] [CrossRef]
  11. Lakhani, A.I.; Chowdhury, M.A.; Lu, Q. Stability-Preserving Automatic Tuning of PID Control with Reinforcement Learning. arXiv 2022, arXiv:2112.15187. [Google Scholar] [CrossRef]
  12. Gundogdu, T.; Komurgoz, G. Self-tuning PID control of a brushless DC motor by adaptive interaction. IEEJ Trans. Electr. Electron. Eng. 2014, 9, 384–390. [Google Scholar] [CrossRef]
  13. Salem, A.; Mustafa, M.; Ammar, M. Tuning PID Controllers Using Artificial Intelligence Techniques. In Proceedings of the International Conference on Electrical Engineering, Kuta, Bali, Indonesia, 24–25 November 2014; Volume 9, pp. 1–13. [Google Scholar]
  14. Oonpramuk, M.; Tunyasirut, S.; Puangdownreong, D. Artificial Intelligence-Based Optimal PID Controller Design for BLDC Motor with Phase Advance. Indones. J. Electr. Eng. Inform. (IJEEI) 2019, 7, 720–733. [Google Scholar]
  15. Arif, T.M.; McKay, S.; Conklin, B. A Novel Platform Orientation System for Proportional-Integral-Derivative-Controlled Ball-Catching Robot. ASME Lett. Dyn. Syst. Control 2022, 2, 040903. [Google Scholar] [CrossRef]
  16. Tohma, K.; İbrahim Okur, H.; Gürsoy-Demir, H.; Aydın, M.N.; Yeroğlu, C. SmartControl: Interactive PID controller design powered by LLM agents and control system expertise. SoftwareX 2025, 31, 102194. [Google Scholar] [CrossRef]
  17. Zahedifar, R.; Soleymani, M.; Taheri, A. LLM-Controller: Dynamic Robot Control Adaptation Using Large Language Models. Robot. Auton. Syst. 2025, 186, 104913. [Google Scholar] [CrossRef]
  18. Åström, K.; Hägglund, T. The future of PID control. Control Eng. Pract. 2001, 9, 1163–1175. [Google Scholar] [CrossRef]
  19. Isdaryani, F.; Feriyonika, F.; Ferdiansyah, R. Comparison of Ziegler-Nichols and Cohen Coon tuning method for magnetic levitation control system. J. Physics Conf. Ser. 2020, 1450, 012033. [Google Scholar] [CrossRef]
  20. Bennett, J.; Bhasin, A.; Grant, J.; Lim, W.C. 9.3: PID Tuning via Classical Methods. Engineering LibreTexts, Chemical Process Dynamics and Controls (Woolf), 2025. Available online: https://eng.libretexts.org/Bookshelves/Industrial_and_Systems_Engineering/Chemical_Process_Dynamics_and_Controls_(Woolf)/09%253A_Proportional-Integral-Derivative_(PID)_Control/9.03%253A_PID_Tuning_via_Classical_Methods (accessed on 30 August 2025).
  21. Fernández-Ramos, J.; Narvarte, L.; López-Soria, R.; Almeida, R.; Carrêlo, I. An assessment of the proportional-integral control tuning rules applied to Photovoltaic Irrigation Systems based on Standard Frequency Converters. Sol. Energy 2019, 191, 468–480. [Google Scholar] [CrossRef]
  22. Sen, R.; Pati, C.; Dutta, S.; Sen, R. Comparison Between Three Tuning Methods of PID Control for High Precision Positioning Stage. MAPAN J. Metrol. Soc. India 2015, 30, 65–70. [Google Scholar] [CrossRef]
  23. Muresan, C.I.; Birs, I.; Ionescu, C.; Dulf, E.H.; De Keyser, R. A Review of Recent Developments in Autotuning Methods for Fractional-Order Controllers. Fractal Fract. 2022, 6, 37. [Google Scholar] [CrossRef]
  24. Pedret, C.; Vilanova, R.; Moreno, R.; Serra, I. A refinement procedure for PID controller tuning. Comput. Chem. Eng. 2002, 26, 903–908. [Google Scholar] [CrossRef]
  25. Hornsey, S. A Review of Relay Auto-Tuning Methods for the Tuning of PID-Type Controllers. In Reinvention: An International Journal of Undergraduate Research; School of Science and Engineering, Teesside University: Middlesbrough, UK, 2012; Volume 5. [Google Scholar]
  26. Divakar, K.; Praveen Kumar, M.; Dhanamjayulu, C.; Gokulakrishnan, G. A Technical Review on IMC-PID Design for Integrating Process with Dead Time. IEEE Access 2024, 12, 124845–124870. [Google Scholar] [CrossRef]
  27. Feng, L.; Ni, Y.H. Accelerated optimization landscape of linear–quadratic regulator. Automatica 2025, 171, 111927. [Google Scholar] [CrossRef]
  28. Leitão, D.; Cunha, R.; Lemos, J.M. Adaptive Control of Quadrotors in Uncertain Environments. Eng 2024, 5, 544–561. [Google Scholar] [CrossRef]
  29. Murugan, N.; Mohamed Ismail, M.R. An Efficient Coordinated Observer LQR Control in a Platoon of Vehicles for Faster Settling Under Disturbances. World Electr. Veh. J. 2025, 16, 28. [Google Scholar] [CrossRef]
  30. Zheng, Z.A.; Ye, Z.; Zheng, X. Intelligent vehicle lateral control strategy research based on feedforward + predictive LQR algorithm with GA optimisation and PID compensation. Sci. Rep. 2024, 14, 22317. [Google Scholar] [CrossRef] [PubMed]
  31. Camacho, E.F.; Bordons, C. Model Predictive Control, 2nd ed.; Advanced Textbooks in Control and Signal Processing; Springer: London, UK, 2007. [Google Scholar] [CrossRef]
  32. Mackenroth, U. Robust Control Systems: Theory and Case Studies, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar] [CrossRef]
  33. Green, M.; Limebeer, D.J.N. Linear Robust Control, reprint ed.; Dover Books on Electrical Engineering; Dover Publications: New York, NY, USA, 2012. [Google Scholar]
  34. Hajar, J.; Ghane, R.; Hassibi, B. Beyond Quadratic Costs: A Bregman Divergence Approach to H∞ Control. arXiv 2025, arXiv:2505.00319. [Google Scholar] [CrossRef]
  35. Zhu, J.Y.; Gomez Cano, C.; Vazquez Bermudez, D.; Drozdzal, M. InCoRo: In-Context Learning for Robotics Control with Feedback Loops. arXiv 2024. [Google Scholar] [CrossRef]
  36. Fei, H.; Xue, T.; He, Y.; Lin, S.; Du, G.; Guo, Y.; Wang, Z. Large language model-driven natural language interaction control framework for single-operator bimanual teleoperation. Front. Robot. AI 2025, 12, 1621033. [Google Scholar] [CrossRef]
  37. Menon, A.R.; Sharma, R.K.; Singh, P.; Wang, C.; Ferreira, A.M.; Novak, M. Think, Act, Learn: A Framework for Autonomous Robotic Agents using Closed-Loop Large Language Models. arXiv 2025. [Google Scholar] [CrossRef]
  38. n8n GmbH. N8n—Workflow Automation Tool. 2024. Available online: https://n8n.io/ (accessed on 2 August 2025).
  39. Quanser Inc. QUBE-Servo2. Available online: https://www.quanser.com/products/qube-servo-2/ (accessed on 1 August 2025).
  40. Ogata, K. Modern Control Engineering, 5th ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2010. [Google Scholar]
  41. Dorf, R.C.; Bishop, R.H. Modern Control Systems, 13th ed.; Pearson: London, UK, 2016. [Google Scholar]
  42. Flask Web Framework. Version 2.0. 2021. Available online: https://flask.palletsprojects.com/ (accessed on 1 August 2025).
  43. Grinberg, M. Flask Web Development: Developing Web Applications with Python, 2nd ed.; O’Reilly Media: Sebastopol, CA, USA, 2018. [Google Scholar]
  44. OpenAI. O3-Mini Language Model. 2024. Available online: https://openai.com/index/openai-o3-mini/ (accessed on 2 August 2025).
  45. OpenAI. GPT-4.1-Mini Language Model. 2024. Available online: https://openai.com/index/gpt-4-1/ (accessed on 2 August 2025).
  46. Khan, M.T.; Waheed, A. Foundation Model Driven Robotics: A Comprehensive Review. arXiv 2025, arXiv:2507.10087. [Google Scholar] [CrossRef]
  47. Vemprala, S.H.; Bonatti, R.; Bucker, A.; Kapoor, A. ChatGPT for Robotics: Design Principles and Model Abilities. IEEE Access 2024, 12, 55682–55696. [Google Scholar] [CrossRef]
  48. Zhao, C.; Tan, Z.; Ma, P.; Li, D.; Jiang, B.; Wang, Y.; Yang, Y.; Liu, H. Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens. arXiv 2025, arXiv:2508.01191. [Google Scholar]
  49. Motamed, S.; Culp, L.; Swersky, K.; Jaini, P.; Geirhos, R. Do generative video models understand physical principles? arXiv 2025. [Google Scholar] [CrossRef]
  50. Rajput, V. Why Scaling RAGs for Production Is So Hard? AIGuys on Medium. 2024. Available online: https://medium.com/aiguys/why-scaling-rags-for-production-is-so-hard-a2f540785e97 (accessed on 17 September 2025).
Figure 1. Schematic diagram of QUBE-Servo 2 DC motor with inertia disk setup.
Figure 2. Schematic block diagram of PID control operation.
Figure 3. Outline of PID Control Agent in n8n workflow.
Figure 4. Outline of Optimizer Agent in n8n workflow.
Figure 5. Outline of agentic AI workflow.
Figure 6. Fine-tuning PID parameters using system prompt or instructions.
Figure 7. Fine-tuning PID parameters without using system prompt or instructions.
Figure 8. IAE and ISE in Run 3 with or without using system prompt instructions.
Figure 9. MAE and RMSE in Run 3 with or without using system prompt instructions.
Figure 10. ITAE and ITSE in Run 3 with or without using system prompt instructions.
Figure 11. AI agent offers a detailed explanation when asked about its decision to reduce the Kp value during operation.
Figure 12. AI agent responds to the question of how the controller should react if the load torque increases by 10%.
Figure 13. AI agent addresses whether doubling the sampling rate would be a good idea for the current system.
Table 2. Video demonstrations of real-time PID tuning experiments.
Video 1: https://drive.google.com/file/d/1OD_wtOu5WGKWhlSoacgME-8QdfR5AGtZ/view?usp=sharing (accessed on 15 September 2025)
Video 2: https://drive.google.com/file/d/1hIQhX_PGSJ4vXkFlxAPIH54zEdsQdcDN/view?usp=sharing (accessed on 15 September 2025)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
