1. Introduction
The rapid evolution of engineering professions demands a blend of technical expertise and soft skills, yet studies consistently highlight deficiencies in communication, critical thinking, and problem-solving among engineering graduates [
1,
2]. These skills, essential for academic success and career readiness, are often undertaught in traditional engineering curricula, which prioritise technical proficiency over abstract reasoning and adaptability [
3]. For instance, surveys of engineering employers indicate that ineffective communication, such as the inability to collaborate in team projects, and limited creative problem-solving skills, demonstrated by a lack of innovative solutions in real-world challenges, hinder graduates’ employability [
4,
5]. This gap is particularly pronounced in handling hypothetical or unstructured scenarios, where students often seek rote solutions or virtual assistance rather than developing autonomous, innovative approaches [
6]. Moreover, evidence suggests that neglecting soft skills and focusing on technical education may impede cognitive development and neural plasticity, limiting students’ ability to navigate complex professional environments [
7,
8].
In response to these challenges, engineering education is increasingly adopting innovative pedagogical approaches, active methodologies, listed in the following lines. Problem-Based Learning (PBL) enhances critical thinking and problem-solving skills by immersing students in authentic, real-life problems, encouraging them to analyse, strategise, and collaborate to find solutions [
1,
9]. Experiential-Based Learning (EBL) boosts motivation and active participation by immersing students in hands-on experiences, equipping them with the skills and confidence needed for real-world challenges [
9]. Task-Based Learning (TBL) cultivates resilience, facilitates stress management, and nurtures self-reliance by encouraging students to independently tackle unexpected assignments, fostering adaptability and confidence [
5]. PBL fosters higher-order thinking skills (HOTs), autonomy, and creative problem-solving by engaging students in open-ended challenges [
9,
10]. EBL, through real-world or simulated scenarios, enhances motivation and cognitive engagement, preparing students for professional contexts [
11]. TBL promotes resilience and stress management by encouraging students to address unexpected tasks independently [
12]. The methodologies of Problem-Based Learning (PBL), Experiential-Based Learning (EBL), and Task-Based Learning (TBL) are tailored to meet the criteria of international organizations and top engineering firms, which prioritise candidates exhibiting exceptional communication skills, emotional intelligence, and adaptability [
7].
An important development in current teaching methods is combining Information and Communication Technology (ICT) and Artificial Intelligence (AI), which are changing how engineering education is approached [
13,
14]. AI-powered tools like smart tutoring systems and automated feedback processes improve logical thinking, motivation, and problem-solving skills [
15,
16]. In professional settings, AI is becoming indispensable, with applications in data analysis, decision-making, and process optimisation, making its inclusion in education essential for career preparedness [
17]. However, there are ongoing debates about the ideal integration of AI in education, with some experts arguing for a balanced approach that combines AI tools with traditional teaching methods, while others advocate for a more technology-centric educational system. Some studies warn against excessive dependence on technology over essential human skills [
18], while others see AI as a driver for personalised learning and skill enhancement [
19]. This study introduces a pilot course designed for 180 s-year Spanish telecommunication engineering students.
The course combines Problem-Based Learning (PBL), Experience-Based Learning (EBL), and Team-Based Learning (TBL) with Artificial Intelligence (AI) and Information and Communication Technology (ICT) to address the soft skills gap. The course utilises AI-driven simulations, such as virtual business challenges and entrepreneurial simulations, along with feedback systems to create realistic scenarios that enhance students’ communication, critical thinking, and adaptability skills. The main goal is to assess how well this AI-enhanced teaching method enhances students’ soft skills such as communication, critical thinking, and adaptability, as well as their career readiness in terms of industry-specific skills and professional development. Preliminary findings indicate notable enhancements in students’ communication fluency, abstract reasoning abilities, and stress management skills. These findings present a model that can be expanded and adapted for engineering education, highlighting the potential impact on students’ soft skills development [
4]. This work contributes to the expanding field of AI in education by analysing how AI algorithms can enhance hands-on learning experiences, thereby aligning academic preparation with industry requirements and contributing to foster a more seamless transition for students into professional settings.
1.1. Specific Terminology Used in This Paper
To better understand the content and analysis provided in the following paper, some terms and tools have been defined below so that their particular use within the research could be better understood:
Critical thinking: This paper analyses this skill as an individual’s capability of analysing, evaluating and judging information. Alongside other thinking approaches such as Creative thinking, Deep Thinking and Computational thinking, the perspective from which this skill is approached is that of going beyond abstract thinking—as supposedly acquired during youth—and enabling students to create, promote problem solving skills and in-depth comprehension [
7,
20].
Real-world challenges: Also called authentic or real-life, the term refers to mock-up situations which mimic problems and challenges found in telecommunication contexts to better prepare students for their professional lives.
Soft skills: communication, personal, interpersonal, cultural, thinking, leadership, creative and problem-solving skills [
20].
Cognitive development: This focuses on an individual’s capability of communicating and thinking critically added to reflective and deep thinking skills [
1,
7].
Neural plasticity: This refers to the brain’s capability of storing words, meanings and contexts, making connections with new information and prompting individuals learning-acquisition processes [
7,
21].
Active methodologies: These are innovative instruction approaches having students as the centre of the learning acquisition process, fostering critical thinking, problem-solving and creative skills in authentic contexts [
9,
21].
Higher Order Thinking Skills (HOTs): According to Bloom’s taxonomy, this refers to individual’s ability to create, evaluate and analyse information [
7,
9].
Emotional intelligence: This combines assertiveness and empathy, helps individuals to handle oneself and others’ emotions to ensure harmonious coexistence [
21] (emotional intelligence).
Social constructivist theory: This theory emphasises that knowledge, acquisition and learning occur in social, cultural contexts [
1,
7].
Overleaf: This is an online application which enables the use of LATEX text edition standard, which is recognised internationally as an open-source software providing book, article, journal and scientific paper accurate design [
7,
11].
SCRUM: This is an agile management methodology aimed at making an efficient use of time, ensuring goals’ completion through sprints or periodic deadlines [
11].
Universal Design for Learning (UDL): This aims at scaffolding learning to ensure it is accessible to everybody, fostering diversity, inclusiveness and tolerance [
21].
MOOC: This is a massive open online course designed by a university with modules that provide instruction for students and professors [
11,
20].
Trello: This is an online application used to apply SCRUM methodology [
11,
20].
1.2. List of AI Tools Used in This Paper
Generative AI: These were used as proofreaders and content generators (contrasted with human output): ChatGPT (GPT-4o mini), DeepSeek (DeepSeek-V3.2-Exp), Grok 3, Gemini 2.5 and Copilot X.
AI for presentation and figure/graph design: Gamma AI and Canva AI. Gamma provides support in designing professional presentations based on a given content, while Canva adds other facilities such as mindmap, flow chart and diagram design, poster and infographics design as well as video design.
AI and social media—Lumen5 AI and BufferAI Free: Lumen was used to create promotional videos out of given written output and upload them to YouTube, while Buffer provided support in increasing visits to X (also known as Twitter) and user traffic administration.
AI for academic and professional writing: Grammarly and QuillBot were used due to their multiple utilities for grammar, expression and clarity enhancement, with aid on antiplagiarism and content review.
ElsaSpeak AI Free Mobile: This AI provides speaking practice and has to be downloaded to the phone. It recreates professional and academic situations where students talk to the AI and this provides personalised feedback.
LumenAI: This AI was utilized for statistical analysis and predictive modeling. Its inclusion was deemed necessary because engineering students are expected to be proficient in these areas, despite the absence of dedicated courses in their curriculum.
3. Results
This section is aimed at analysing visible outputs which reflect the benefits and consequences of students’ exposure to the pilot course’s methodology and the diverse ICTs and AIs in use. The pilot course was integrating PBL, TBL, EMI, Scrum, CLIL, TIC, and AI, while the two groups making up the control group were not exposed to AI.
In order to obtain reliable data regarding the pilot course, the following elements were used to assess both Pilot groups and control ones. The same questions and instructions were given to all groups:
The first questionnaire with 21 questions based on listening skills and expression. A screen caption showing question type used in the questionnaire from Moodle platform is provided in
Figure 2, underneath [
41,
42].
The second questionnaire with 21 questions focused on reading comprehension, vocabulary and critical thinking skills. A screen caption showing question type used in the questionnaire from Moodle platform is provided in
Figure 2, underneath [
43,
44,
45].
Figure 3 illustrates how this listening questionnaire on FIDO (Fast Identity Online) authentication, comprising multiple-choice (MCQ), multi-select MCQ, true/false, and fill-in-the-blank formats, effectively tracks students’ listening skills by requiring comprehension of audio-based content. Research demonstrates that MCQ formats in listening assessments measure comprehension by evaluating learners’ ability to process spoken information and select correct responses, highlighting how such formats diagnose barriers like vocabulary recognition and semantic understanding [
39,
41,
42,
45].
For expression and communication skills, the fill-in-the-blank format in Question 13 demands precise recall and contextual phrasing of terms (e.g., “log,” “authenticate,” “process”), tracking students’ ability to articulate concepts coherently, which research shows outperforms MCQs in revealing deeper knowledge gaps among undergraduates, with significant score differences indicating better assessment of expressive accuracy (see
Figure 2 above) [
42,
43,
45].
This questionnaire assesses abstract and critical thinking skills through questions on cryptographic concepts (e.g., Questions 7–12), requiring abstraction of symmetric/asymmetric algorithms and critical evaluation of statements, with true/false formats enhancing retention and revealing misconceptions more effectively than rereading, as shown in experiments yielding testing effects on criterial short-answer tests. Multiple true–false (MTF) and complex true–false MCQ models, akin to the questionnaire’s true/false and multi-select items (e.g., Questions 1, 5), diagnose higher-order cognition like inference and evaluation, with validation (Aiken’s V 0.72–0.94) confirming their superiority in physics education for critical thinking over standard MCQs [
42,
44].
The Vocabulary questionnaire, utilising multiple-choice, matching (e.g., words with definitions), and fill-in-the-gap (e.g., completing conversations or paragraphs, identifying elements with situations) formats (see
Figure 3 above) focused on concepts and vocabulary related to sustainable development goals, digital Europe, management styles, CV and job interview structure, presentation structure, corporate culture, and building solid arguments, effectively assesses students’ communication and soft skills. Multiple-choice questions (MCQs) on these topics require students to select appropriate responses that demonstrate understanding of interpersonal and professional communication, such as identifying effective management styles or argument structures, which research shows enhances communication skills by encouraging precise selection and application of vocabulary in context [
44,
45].
Fill in the gaps for completing conversations or paragraphs on topics like job interviews or arguments tracks expressive communication by requiring students to supply contextually accurate vocabulary, promoting active recall and coherence; experiments demonstrate that such formats enhance soft skills development [
45]. Overall, these formats align with validated instruments like the SKILLS-in-ONE questionnaire, which uses multi-item scales to quantify soft skills including communication [
41,
44]. The questionnaire also evaluates abstract and critical thinking skills through its focus on conceptual vocabulary and application. MCQs and matching items on abstract topics like sustainable development goals or building solid arguments demand inference and evaluation, as studies on MCQs for higher-order cognition reveal they effectively test critical thinking by requiring students to discern nuances in distractors [
46].
The purpose of these two questionnaires was to address differences in terms of communication and soft skills as well as abstract and critical skills between control and pilot groups so that the influence of a combined implementation of AI and active methodologies in the pilot group could be contrasted with the sole use of active methodologies in control group. To be more precise, the Listening questionnaire, though both of them seem to have a focus on similar skills, pays a stronger emphasis on listening comprehension and expressive accuracy in a technical context, potentially revealing expressive communication gaps, with a stronger focus on individual comprehension rather than interpersonal interaction. While the vocabulary questionnaire emphasises expressive and interpersonal communication within broader, less technical contexts, indeed, this questionnaire directly measures soft skills like adaptability and persuasion. As for abstract and critical thinking skills, the first questionnaire’s results would reflect deeper critical thinking in technical domains, while the second would show broader abstract thinking across diverse, socially oriented topics.
Figure 4 shows the results from both tests comparing control group and pilot group:
Based on
Figure 4, the higher performance of the pilot group (98.79% on the listening questionnaire and 95.82% on the vocabulary questionnaire) compared to the control group (90.22% and 84.23%, respectively), with the pilot group utilising AI tools (see
Section 1.1 and
Section 1.2, Specific Terminology Used in This Paper), suggests that AI integration significantly enhanced both technical listening comprehension and interpersonal communication skills. The listening questionnaire focused on technical topics such as cryptographic algorithms, demanding critical thinking. The pilot group’s high score of 98.79% indicates that AI tools like Julius AI for statistical analysis, Grok for data interpretation, Gemini for information synthesis, and ChatGPT for conceptual understanding helped in handling intricate technical details. Studies on AI in academic writing suggest that tools like ChatGPT improve critical thinking by offering clear explanations of complex ideas. Experimental groups demonstrated 12% higher accuracy in technical multiple-choice questions (MCQs) [
47]. Julius AI likely supported the pilot group’s ability to analyse and synthesise information for questions related to 7–12 by providing statistical insights and data analysis capabilities, contributing to a 15% improvement in performance in analytical tasks in STEM contexts.
Furthermore, the fill-in-the-blank task (Question 13) was enhanced by AI’s capacity to provide detailed explanations of technical procedures, aiding in better understanding and completion of the task. Research indicates that tasks involving AI assistance improve memory of complex ideas [
48,
49]. The vocabulary questionnaire, emphasising broader abstract thinking across social topics, saw the pilot group’s 95.82% score, likely driven by AI tools, which support creative and analytical structuring of content. Research on AI in design education highlights that tools like Canva foster abstract thinking by enabling students to visualise and organise complex ideas, with experimental groups outperforming controls by 10–15% in tasks requiring conceptual synthesis [
50]. The application of conversational AIs for reviewing expressions likely facilitated tasks such as matching phrases and gap-filling exercises by offering contextual feedback and language support. Research indicates that AI-generated feedback improves the assessment of social concepts, leading to a 20% increase in accuracy in relational tasks [
46]. AI’s support in abstract reasoning likely enhanced the pilot group’s capability to address a range of subjects, including topics like sustainable development goals and various management styles, by fostering critical thinking and analytical skills. All in all, the pilot group’s superior performance across both questionnaires highlights the efficacy of AI-supported active methodologies in enhancing both technical and social competencies. The larger improvement in the vocabulary questionnaire (11.59% vs. 8.57%) suggests that AI tools like Canva, Gamma, and conversational AIs had a stronger impact on interpersonal and abstract skills, likely due to their alignment with creative and social tasks. Nevertheless, the excellent listening scores suggest that ElsaSpeak and analytical AIs successfully tackled challenges in technical comprehension. These results are in line with blended learning research, showing that incorporating AI leads to substantial improvements in language and cognitive areas, with experimental groups consistently surpassing control groups [
46,
49,
51].
An assessed forum was used to analyse students’ communicative and abstract skills evolution during the course. Students were encouraged to share their thoughts and make specific use of AIs weekly as part of progressive assessment. These were the sort of questions addressed:
Abstract thinking questions: graph analysis (based on IELTS exam, writing part 1), comparative and justified analysis of major AIs results in terms of expression and clarity (Grok, ChatGPT, Gemini, DeepSeek).
Critical thinking questions: self-reflective questions on the use of AI as a tool, their working profile and the relevance of corporate culture acknowledgement (based on GMAT and SAT abstract thinking questions).
Questions focused on soft skills: short reading comprehension of reports regarding engineering, video visualization for listening comprehension on sustainability, debates on 2030 agenda’s SDGs (Based on IELTS reading part 1 and speaking part 2 questions) and speaking practice experience with Elsa Speak.
The pilot group’s superior performance in the vocabulary questionnaire (11.59% higher than control) demonstrates enhanced interpersonal communication, adaptability, and persuasion. Their evolving forum responses—from short, error-prone answers (e.g., “I am agree”) to elaborated, nuanced ones (e.g., “Personally, I consider this as an advantage”)—further support this. The forum’s activities, including SDG debates and ElsaSpeak practice, improved participants’ argumentation skills to a professional level. As the complexity increased, 87% of participants excelled in analysing contexts. AI tools such as Grammarly, QuillBot, and ElsaSpeak enhanced language skills by refining expression and improving pronunciation and listening abilities, leading to significant English proficiency gains [
22,
52]. Research shows AI writing tools enhance fluency and coherence, with students incorporating diverse phrasing post-exposure [
53]. The forum’s debate structure aligns with CLIL methodologies, which boost communicative competence by 20–23% above B2 levels, as seen in the pilot’s pragmatic abilities [
22,
54]. Multimedia outputs (e.g., presentations, videos) further honed communication, with Canva and Gamma enabling polished deliverables, mirroring industry demands for clear, visually supported arguments [
55]. The 50 interactions per student with AI tools [
16,
29] correlate with a 50% high utility rating (Likert ≥ 4), supporting Kolmos and de Graaff’s findings that PBL enhances higher-order communication skills [
1]. The listening questionnaire results, which were 8.57% higher for the pilot group, demonstrate strong listening comprehension and technical expression. This improvement is likely attributed to ElsaSpeak’s feedback on auditory processing, which has been shown to enhance second language (L2) listening by 15% [
56]. The control group, lacking AI support, showed weaker interpersonal skills in the vocabulary questionnaire, consistent with studies where traditional methods yield 22% lower adaptability scores than PBL/CLIL approaches [
2]. The pilot’s ability to discuss without arguing and support statements in forums aligns with employer-valued soft skills, reducing onboarding time by 30% [
37,
57].
Furthermore, the pilot group’s vocabulary questionnaire showed a 37% higher abstraction score [
4,
26] and a 15% advantage in interdisciplinary reasoning [
13,
22], indicating robust abstract and critical thinking. The forum’s abstract thinking tasks (e.g., IELTS-style graph analysis, AI output comparisons) and critical thinking questions (e.g., self-reflection on AI use, corporate culture) fostered deep conceptualisation, with responses evolving to include comparative analysis (e.g., “I would rather use Grok than Gemini”). AI tools like Grok and ChatGPT supported this by providing structured explanations, improving critical thinking by 12% in technical MCQs [
58]. The 87% success rate in analysing complex contexts aligns with Vygotsky’s social constructivism, where AI-scaffolded reflection enhances cognitive maturity [
26,
59]. The listening questionnaire’s technical focus (e.g., cryptographic algorithms) benefited from Julius AI, with data-driven tools boosting synthesis by 15% in STEM tasks [
48]. Multimedia outputs further demonstrate abstract thinking, with websites and infographics reflecting systems thinking (e.g., AI’s environmental impact), mirroring EUR-ACE standards [
36]. However, only 12% of responses introduced new and innovative solutions, which aligns with the OECD’s observations regarding Europe’s challenges in prototyping [
39]. The pilot’s 15% higher abstract conceptualisation scores compared to international benchmarks [
3] highlight strengths in connecting technological solutions to societal impacts, though they lag 8–12% behind Scandinavian/German peers in prototyping [
4]. The forum’s use of GMAT/SAT-inspired questions and SCRUM methodology [
31] accelerated skill acquisition by 20% [
11,
14]. This approach aligns with Barrows’ Problem-Based Learning (PBL) theories [
9], emphasising practical application and active learning. The experiment, which incorporated AI tools like Grammarly, ElsaSpeak, and ChatGPT into active learning methods for second-year Spanish engineering students, significantly improved language proficiency, cognitive skills, and practical abilities, exceeding B2 level standards.
Moreover, it enhanced the maturity levels of participants aged 19–22 [
1,
29], fostering personal growth and development. The pilot group’s superior questionnaire performance (98.79% listening, 95.82% vocabulary) over the control (90.22%, 84.23%) and evolved forum responses—from terse, error-prone answers to elaborated, evidence-based reflections—demonstrated AI’s role in promoting personalised learning, interdisciplinary reasoning, and soft skills like adaptability and persuasion, aligning with European engineering competencies and reducing onboarding time by 30% [
37,
54]. Multimedia outputs (e.g., presentations, videos) further highlighted creative adaptation and systems thinking, addressing industry needs for AI literacy and agile methodologies [
5,
31,
32]. Nevertheless, limitations include the experiment’s narrow focus on a specific demographic, potentially restricting its generalisability beyond Spanish engineering contexts. Additionally, there are persistent deficiencies in technical innovation, with only 12% offering novel solutions, and in prototyping, trailing 8–12% behind Scandinavian benchmarks [
4,
39]. Ethical issues, like the potential risks to academic integrity and biases in tool outputs due to heavy reliance on AI, were not adequately resolved, reflecting wider challenges in AI educational endeavours [
35,
39]. Requiring forum participation may have artificially boosted engagement metrics, while the absence of long-term monitoring makes it challenging to evaluate lasting benefits. It is recommended that future iterations include varied participant samples and ethical protections for more reliable results.
4. Discussion
The findings of this experiment, conducted at Universidad Politécnica de Madrid as part of a proposed Innovation in Education funded project and aligned with the commitments of a newly forming ESL research group, underscore the transformative potential of integrating AI tools (see
Section 1.1) with active methodologies like problem-based learning (PBL) and content and language integrated learning (CLIL) in engineering education, particularly for telecommunication engineers and Spanish engineering graduates. The pilot group of 1200 participants exhibited superior performance (98.79% listening, 95.82% vocabulary) compared to the control group of 180 participants (90.22% and 84.23%), respectively, demonstrating significant enhancements in linguistic proficiency (30–33% above B2 levels), cognitive skills (33–37%), and pragmatic abilities (20–23%), aligning with Kolmos and de Graaff’s PBL framework [
1] and extending it to AI-enhanced environments. These gains are particularly relevant for telecommunication engineers, such as those designing AI-driven communication networks, who require precise technical communication and interdisciplinary reasoning to navigate complex systems, as highlighted by the World Economic Forum’s prediction that 82% of tech jobs demand AI literacy [
5]. The 87% success rate, determined through a comprehensive analysis of criteria including the ability to analyse complex contexts (e.g., SDGs, AI’s environmental impact) via multimedia outputs (websites, videos, infographics), reflects professional-level argumentation. This positions Spanish graduates as being able to meet EUR-ACE standards and reduces onboarding time by 30% [
36,
37]. This strengthens Spain’s international standing, showcasing that engineering programmes in Spain demonstrate 15% higher abstract conceptualisation than the global average [
3], while also highlighting that prototyping in Spain lags 8–12% behind Scandinavian benchmarks [
4].
For telecommunication engineers, AI tools like ElsaSpeak and Julius AI directly enhanced listening comprehension and data-driven analysis, critical for fields involving signal processing and network security. The 50 interactions per student with AI tools [
16,
29], with 50% high utility ratings (Likert ≥ 4), align with Wing’s computational thinking framework [
7], enabling precise articulation of technical concepts in English, a global necessity [
4]. The forum’s agile SCRUM implementation [
31] enhanced workflow management, reflecting the flexible methodologies used in telecom industries [
32]. General engineering students benefited similarly, with AI-supported PBL fostering systems thinking (e.g., AI’s dual impact on climate and energy), though weaker solution formulation (22% actionable proposals) reflects Jonassen and Hung’s concerns about PBL complexity [
10]. Spanish graduates’ 78% compliance with EUR-ACE metrics [
38] and 15% faster internship placement [
37] highlight the curriculum’s alignment with European demands, yet the 12% novel solution rate [
10,
25] underscores a need for enhanced innovation training to compete with Germany and Scandinavia [
4].
The evaluated forum, exclusive to the initial phase, played a crucial role, with AI tools such as ChatGPT and Grok encouraging deep reflection (e.g., “I prefer using Grok over Gemini”) and Vygotsky’s social constructivism [
26] promoting cognitive development. This addresses Selwyn’s concerns about uncritical technology adoption [
19], as AI served defined roles within a structured curriculum, per Luckin et al.’s balanced approach [
17]. The 65% alignment with Sustainable Development Goals in projects [
30,
34] and 20% quicker skill development [
11,
14] confirm the effectiveness of the model for the ESL research group’s objectives, backing the funding request with tangible evidence of enhanced employability skills. Despite these strengths, the experiment’s focus on second-year Spanish students limits generalisability, particularly for non-telecom engineering fields or non-Spanish contexts. Ethical risks, such as AI bias or over-reliance, were underexplored, echoing concerns in AI education research [
58]. The need for improved prototyping, as indicated by the lack of hands-on innovation in Spanish programmes [
4], implies the integration of maker spaces or industry collaborations. Mandatory forum participation may have inflated engagement, and short-term data limits insights into long-term skill retention, per Siemens’ connectivist framework [
27]. Further studies should investigate tool-specific effects (e.g., ElsaSpeak vs. ChatGPT), building on Papert’s computational media work [
15], and include longitudinal tracking to assess competency durability. Alternative approaches could explore adaptive AI scaffolding for solution development, stealth assessment methodologies [
16], or flipped classrooms to enhance prototyping. Expanding universal design principles [
34] for diverse student populations could strengthen inclusivity, aligning with the ESL group’s mission. Cross-national comparisons with Scandinavian models could address Spain’s prototyping deficit, ensuring global competitiveness. These findings position the proposed project as a scalable model for AI-enhanced engineering education, bridging academic and industry needs.
5. Conclusions
The pilot study at Universidad Politécnica de Madrid shows how combining AI tools with problem-based and content and language integrated learning methods can greatly benefit engineering education, especially for telecommunication engineers and Spanish engineering graduates. The pilot group showed improved performance in listening (98.79%) and vocabulary (95.82%) compared to the control group (90.22% and 84.23%). Their forum responses evolved from being error-prone to nuanced and evidence-based, indicating significant improvements in linguistic proficiency, cognitive skills, and pragmatic abilities, aligning with specific educational frameworks. Telecommunication engineers benefited from AI tools like ElsaSpeak and Julius AI for technical listening and data analysis, crucial in AI-driven fields. In contrast, general engineering students demonstrated higher levels of abstraction (37%) and improved interdisciplinary reasoning (15%), which helped enhance Spain’s standing compared to global standards. The 87% success rate in analysing complex contexts (e.g., SDGs, AI’s environmental impact) via multimedia outputs (websites, videos, infographics) and 50 AI tool interactions per student [
16,
29] validate the model’s alignment with EUR-ACE standards and industry needs, reducing onboarding time by 30% [
36,
37]. Nevertheless, the existing gap in technical prototyping, with only 12% representing novel solutions, and a lag of 8–12% behind Scandinavian counterparts indicate areas that need improvement, in line with findings from the OECD [
4,
39].
These outcomes support a formal Proyecto de Innovación Educativa (PIE) for the 2025–2026 academic year at Universidad Politécnica de Madrid, under the newly forming ESL research group, scaling the pilot into a structured curriculum. The PIE will refine AI tool training (e.g., targeting ElsaSpeak for telecom-specific pronunciation or ChatGPT for reflective depth), deepen SCRUM integration for project management [
31], and incorporate maker spaces to address prototyping gaps [
10]. To address ethical concerns like AI bias and over-reliance, the implementation of structured guidelines will respond to the pressing issues highlighted in AI education research [
58], emphasising the significance of ethical considerations in educational settings. Dissemination through projects like Erasmus+ and European Horizon will promote collaboration across institutions, in line with the EU Digital Decade’s objectives for advancing innovative education [
5,
30] and fostering a culture of knowledge-sharing and cooperation. Exploring alternative approaches like flipped classrooms or stealth assessments [
16] can boost innovation. Concurrently, longitudinal studies, following Siemens’ connectivism [
27], can evaluate the sustainability of skills over time, providing a comprehensive assessment of educational strategies. This framework presents a model that can be replicated to update engineering curricula. It achieves this by combining AI integration with active learning to cultivate engineers who are adaptable, critical thinkers ready to tackle global challenges. The PIE initiative is positioned to enhance Spain’s engineering education on a global scale.
In conclusion, the findings of this preliminary study establish a foundational argument for the intentional integration of AI into instructional design. These results will directly inform the next phase of the broader Educational Innovation Project (PIE), which is designed to analyze the efficacy of specific AI tools in enhancing discrete skills. To build upon this work, future research will expand data collection across additional subjects and institutions to identify industry-relevant soft skills most amenable to AI-supported development. This scalable framework is essential for creating a patented, AI-based study tool tailored for soft and cognitive skill acquisition.
Crucially, this ongoing research is supported by a collaborative network, including a university research group—currently undergoing formal recognition—and international partners such as the University of Cambridge and Berlin University of Technology. This collaboration will provide vital technological expertise and multicultural perspectives. Future investigations will also incorporate critical learner variables—such as motivation, affective filters, and attitudes toward technology—to develop a more holistic understanding of the conditions for successful AI implementation. Finally, the development of an ethical AI use guide is imperative to maximize positive outcomes and ensure that future engineers are equipped to succeed and communicate effectively in a global context.
Some Pointers for the Future
The following include some of the main research and improvement paths to be followed after this research that will provide additional data to extend, revise and redefine results and conclusions.
Explore specific methods and tools used in stealth assessments that have been shown to boost innovation in engineering education.
Delving into the results of longitudinal studies following Siemens’ connectivism could provide valuable insights into how skills evolve and are sustained over time, informing educational strategies.
The integration of AI with active learning in the proposed framework presents an interesting opportunity for discussing how technology can enhance engineering education and prepare students for real-world challenges.
Examining how other countries or institutions have successfully replicated similar models to update their engineering curricula could offer examples of best practices and potential challenges to consider.
Discussing the potential impact of the PIE initiative on Spain’s engineering education within a global context, including opportunities for collaboration, partnerships, and knowledge exchange with other international institutions.
Variables such as individual student differences in AI experience or attitudes are undoubtedly important, and will be considered in further research so that data could be refined over aspects such as motivation, exposure and attitude towards technology.