Article
Peer-Review Record

Large Language Model-Based Tuning Assistant for Variable Speed PMSM Drive with Cascade Control Structure

Electronics 2025, 14(2), 232; https://doi.org/10.3390/electronics14020232
by Tomasz Tarczewski 1,2,*, Djordje Stojic 3 and Andrzej Dzielinski 1,2
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 23 November 2024 / Revised: 29 December 2024 / Accepted: 6 January 2025 / Published: 8 January 2025
(This article belongs to the Special Issue Control and Optimization of Power Converters and Drives)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The authors provide valuable contributions to the investigation of artificial intelligence-based methods for PMSM control, but for the manuscript to be accepted for publication, the following points must be addressed:

1. The authors mention that SBMA offers good performance, but details about the performance differences between the Tuning Assistant and the SBMA-based approaches are lacking.

2. The authors present a comparison of known LLMs, but it is limited to generalities. A detailed quantitative analysis for each model would be useful (e.g., error measures, response time, resources consumed).

3. The authors do not clearly present how the “expertise” of human users was determined and how this factor affects the performance of the proposed solution.

4. The comparative analysis is not uniform: extensive details are provided for SBMA, but only qualitative observations are given for the LLMs.

5. The data processing for the proposed model depends on manually prepared datasets, which limits scalability and generalization. How do you explain this?

6. Compared to other methods, the implementation costs for a real system are not analyzed.

7. The authors only marginally address the issues of noise and vibration generated by current ripple. These are critical for real applications and should have been included in the evaluation of the methods. A more detailed analysis of their impact on the applicability of the proposed solutions would be valuable.

8. A comparison between the proposed solution and other established solutions is made, but the approach is predominantly qualitative, with significant limitations in the quantitative evaluation.

9. The authors do not analyze the robustness of the solutions provided by the LLM under different conditions (e.g., variation of PMSM parameters or introduction of unexpected disturbances). The performance of the LLM is evaluated only in simulation, without validation on real systems or in complex scenarios.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

This manuscript investigates the application of LLMs as assistants for tuning PI parameters. Though interesting, the work reads more like a tutorial or application note than a research paper.

The study utilizes general-purpose LLMs, such as ChatGPT and Microsoft Copilot, to assist in tuning tasks. However, these models do not inherently possess advanced mathematical reasoning or problem-solving capabilities beyond summarizing existing knowledge. This limits their ability to innovate or provide specialized solutions in professional engineering contexts. The manuscript does not propose or develop new algorithms or methods to enhance the mathematical reasoning of LLMs, which diminishes its value as a research contribution.

The manuscript primarily demonstrates the application of existing LLMs for tuning, which is a well-documented process. It does not introduce new theories, insights, or techniques for tuning control systems. The results, while interesting, do not appear to advance the state of the art in the field of control systems or artificial intelligence.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

 

The topic of the paper is interesting, the research is up to date, and the novel contributions are clear.

The main contribution of the paper is the proposal of a novel Tuning Assistant based on a large language model. The Tuning Assistant is then examined to validate its usefulness.

The manuscript is well written. Some comments to improve its quality are given below; I have divided them into major and minor ones.

Major comments:

#1: As artificial intelligence does not always give 100% satisfactory results (in almost all domains), what are the limitations here? A short discussion of the limitations is expected.

Minor comments:

#1: All abbreviations should be explained when used for the first time (even if listed at the end of the paper), including in the abstract (e.g., AI).

#2: The paper ends with a section combining discussion and conclusions. I suggest dividing it into two separate sections.

#3: Table 1 is quite large with little white space between rows, which makes it hard to read. It would be better to make the rows taller or to add horizontal lines.

#4: When giving a reference to a web page (e.g., [39]), a short description and the last access date should be added.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The authors have responded to all previous requests. I consider that the paper meets the conditions for acceptance for publication.

Author Response

Dear Reviewer 1,

Thank you for considering our manuscript. We appreciate the time and effort that you put into evaluating our work and providing constructive feedback.

Yours sincerely,
Authors

Reviewer 2 Report

Comments and Suggestions for Authors

Firstly, I would like to apologize for my oversight in the first review. I did not fully recognize that the Ordemio-based TA is the authors’ custom-developed model, while ChatGPT and Microsoft Copilot were included for comparison. I appreciate the clarification provided by the authors in their response. However, even with this understanding, I still have concerns.

Did the authors perform the same degree of customization or fine-tuning of ChatGPT and Copilot for this specific task as for Ordemio? If not, the comparisons would be inherently limited.

Prompt design is a critical aspect of using LLMs. The authors have not provided any analysis or validation to confirm whether their current prompt design is optimal for these LLMs, which makes it hard to assess the robustness of the TA.

Still, in the reviewer's opinion, the proposed tool appears to automate a known process rather than propose a fundamentally new approach to tuning in control systems.

Comments on the Quality of English Language

The quality of the English language is satisfactory.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

I am satisfied with the responses and the changes in the manuscript. I now recommend acceptance of the paper.

When preparing the final version, please check all the references inside the document; for example, in line 128, after the change in the paper structure, the short description should also be updated.

Author Response

Dear Reviewer 3,

Thank you for considering our manuscript and for the positive assessment. We appreciate your time and effort in evaluating our work and providing constructive feedback.

Sincerely yours,
Authors

Comments 1: When preparing the final version, please check all the references inside the document; for example, in line 128, after the change in the paper structure, the short description should also be updated.
Response 1: Thank you for this remark. We have checked the references and introduced the necessary modifications.

Round 3

Reviewer 2 Report

Comments and Suggestions for Authors

The authors have addressed the questions very well, and the paper can be accepted in its current form.
