Review Reports
- Estefanía Avilés Mariño and
- Antonio Sarasa Cabezuelo
Reviewer 1: Valerie Storey
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
I greatly enjoyed reading your submitted manuscript, as the topic is timely and highly relevant. Your work addresses an important gap in understanding how AI tools can be integrated into engineering pedagogy. However, while your study shows promise and tackles an important subject, several methodological and presentation issues need to be addressed.
My primary concern is the lack of a control group, which fundamentally limits your ability to make causal claims about AI's impact. The observed improvements could equally be attributed to the PBL methodology itself, to increased instructor attention, or even to the structured SCRUM approach. Claims of "30-33% improvement" and "37% increase" lack proper statistical validation; at a minimum, you need to provide baseline measurements and post-intervention comparisons. Your reliance on self-reported measures raises validity concerns, and phrases such as "were supposed to be done at home" suggest limited oversight of key activities.
Figures 3 and 4 require greater detail, and Table 1 needs to be revisited. Your comparisons to European standards (e.g., "15% higher than international average") lack proper citations and methodological details. How were these benchmarks established? What were the comparison methodologies?
Your discussion overinterprets the results, given the methodological limitations. Claims about "transformative potential" are not supported by the evidence presented. The section needs a clearer distinction between correlation and causation.
Comments on the Quality of English Language
The manuscript requires further editing to eliminate grammatically awkward sentences.
Author Response
Our answer is in the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
This manuscript presents a pilot study on the integration of AI-enhanced problem-based learning (PBL) and experiential methodologies in a telecommunications engineering course. The topic is timely, and the authors’ effort to combine PBL, CLIL, and AI tools to improve soft skills, reasoning, and communication among engineering students is valuable. The relatively large student sample, the alignment with accreditation standards, and the connection to broader European education policies make this work a relevant contribution to the ongoing discussion on how AI can reshape engineering education. However, the manuscript has several weaknesses that need to be addressed before it can be considered for publication.
The most significant concern relates to the presentation of results. The paper reports numerical improvements such as a 30-33 percent increase in language proficiency and a 37 percent increase in reasoning, but it is not clear how these numbers were measured. There is insufficient detail about the instruments used, the nature of baseline scores, and whether any statistical testing was performed. As a result, the findings risk appearing descriptive rather than empirically validated. Clearer reporting of data collection methods, analysis procedures, and supporting visualizations or summary tables would improve the credibility of the results.
The methods section is also overly descriptive, with long passages detailing student roles and tool usage. While this level of detail supports replicability, it overwhelms the reader and detracts from the main pedagogical argument. Presenting such information in a table or appendix would keep the core methodology and rationale clear and accessible, while still preserving detail for those interested in reproducing the study.
The discussion section often reads more like a continuation of the literature review rather than an interpretation of the study’s data. Results are frequently restated alongside citations to theoretical frameworks, which creates redundancy and obscures the authors’ original contributions. A more critical analysis of why certain skills improved while others lagged would be valuable. For instance, while students showed notable gains in communication and reasoning, they made limited progress in solution formulation and technical prototyping. Exploring possible causes for this discrepancy would enrich the discussion and provide more practical insights for educators seeking to replicate or extend the model.
Finally, the manuscript places heavy emphasis on specific AI platforms such as Grammarly, Canva, and Lumen5. While this is useful for describing the pilot course, the novelty of using these commercial tools is limited. The authors should instead emphasize the generalizable insights of their study, such as the role of AI in accelerating feedback, supporting collaborative workflows, or enhancing technical communication. This would make the work more widely applicable beyond the particular set of tools chosen.
Some grammar issues found:
“Pilot students averaged 20 TIC hours and 50 AI interactions, with 50% reporting high AI impact (Likert ≥4).”
“This section is aimed at analysing visible outputs which reflect the benefits and consequences of students’ exposure to the pilot course’s methodology and the diverse ICTs and AIs in use”.
“The pilot course, integrating PBL, TBL, EMI, Scrum, CLIL, TIC, and AI, yielded ~30% improvements in expression, argumentation, vocabulary, soft skills, and critical thinking for B1.2–B2.2 English students- this was the overall level of 2nd year engineering students in the sample for the pilot course.”
“This Engineering pilot course revolved around four stages, each stage combined different steps which included lectures, group work and assessment.”
Author Response
Our answer is in the attachment.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
Comments to Authors:
This reviewer’s recommendation is “Reconsider after major revisions”; the following suggestions are offered for the authors’ reference.
(1) The structure of the article should be brought in line with the generally accepted MDPI structure. The structure of the References section is confusing; please revise it carefully. There are large blank spaces on pages five and nine of the manuscript; please reorganize the layout of the manuscript.
(2) The relationship between AI tool usage and measurable skill improvements (e.g., 50% reporting high utility) is not fully unpacked. A discussion of potential factors (e.g., user engagement, prior experience with AI) would add depth.
(3) The study's reliance on a single pilot course with a specific group of students limits its generalizability. Additionally, the experimental design lacks controls or comparative groups, making it difficult to isolate the effects of AI tools versus other pedagogical strategies.
(4) The conclusion effectively summarizes the study's contributions but could better connect the findings to broader implications for engineering education. Additionally, the proposed Proyecto de Innovación Educativa (PIE) is mentioned without clear details on how it will address the study's limitations.
Author Response
Our answer is in the attachment.
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
The authors’ response to the submitted report is thorough and enhances the submitted paper.
Reviewer 3 Report
Comments and Suggestions for Authors
The authors have replied to the comments from the previous round of review and have made detailed revisions to the article; this reviewer recommends acceptance.