1. Introduction
The introduction is organized into four interconnected subsections, each building a coherent rationale for the study.
Section 1.1. Understanding AI: Skills, Ethics, and Responsibility opens the discussion by framing AI not merely as a technical skill but as a responsibility with significant ethical and societal dimensions. It highlights challenges such as algorithmic bias, opaque decision-making, and accountability, emphasizing that technical competence must be combined with ethical literacy, critical thinking, and epistemological reasoning. This subsection lays the foundation for the need for interdisciplinary approaches that enable learners to anticipate potential harms, engage in reflective decision-making, and act responsibly in AI-mediated contexts.
Section 1.2. The Italian Istituti Tecnici Superiori (ITS) System provides an overview of the ITS Academies, including their structure, program areas, student demographics, and typical curricula. While ITS programs offer hands-on experience in digital technologies, this subsection underscores the limited integration of AI ethics and responsible governance. By presenting this context, the study highlights the gap between technical training and the development of broader ethical and professional competencies, which motivates the design of the proposed intervention.
Section 1.3. Ethical Literacy in AI for ITS Professionals clarifies the specific role of ITS students: they are not developers or “guardians” of AI systems, but informed users who must understand the ethical, social, and legal implications of intelligent technologies. This section stresses that ethical literacy is a transversal skill—essential for recognizing risks, making responsible decisions, and interacting safely and fairly with AI in professional environments.
Section 1.4. Contribution of This Study describes the practical implementation of the Rome Technopole Spoke 4 initiative. It emphasizes the co-design, multi-stakeholder, and interdisciplinary approach used to develop a modular AI and Ethics training program. This subsection articulates the study’s contributions: theoretically, by defining the role of ITS participants in AI ethics; practically, by providing a model for interdisciplinary training; and methodologically, by demonstrating how collaborative frameworks can generate flexible, context-aware curricula. Together, these contributions illustrate how ITS programs can prepare professionals to navigate AI responsibly, integrating technical skills with ethical reflection and lifelong learning.
1.1. Understanding AI: Skills, Ethics, and Responsibility
In a world increasingly shaped by algorithms and intelligent systems, understanding AI is no longer just a technical skill—it is a responsibility. As artificial intelligence becomes deeply embedded in daily life—from healthcare and transportation to manufacturing and education—it carries not only great potential but also important ethical and societal implications. AI is not a neutral tool: it inherently reflects the values, assumptions, and decisions of its creators and users, influencing fairness, privacy, and social equity across diverse populations (
Mittelstadt et al., 2016).
These realities present challenges that go beyond coding or system design. By coding we mean the ability to write, read, and understand software instructions that tell a computer or AI system what to do. Algorithmic bias, opaque decision-making processes, and unclear accountability require AI professionals to develop ethical literacy: the ability to critically evaluate potential risks, unintended consequences, and societal impacts of AI technologies, while actively engaging in responsible and transparent decision-making throughout development and deployment (
Arbelaez Ossa et al., 2024;
Selbst et al., 2019;
Floridi, 2019). Such literacy is essential not only to reduce harm but also to ensure that AI systems can serve all societal groups fairly, avoiding reinforcement of existing inequalities and promoting inclusivity (
Morley et al., 2020). Promoting ethical awareness could also build trust and legitimacy in AI applications, supporting their sustainable adoption in professional and social contexts.
At the same time, AI could transform sectors directly relevant to Italian ITS programs, including Industry 4.0 manufacturing, digital health services, and smart transportation systems (
Brettel et al., 2014). Yet many curricula remain fragmented, often focused only on technical skills, and may not pay enough attention to ethical considerations or applied learning approaches that contextualize AI’s broader impact on business, work, and society (
Jobin et al., 2019). A translational educational approach—integrating technical competencies with real-world case studies, ethical dilemmas, and experiential simulations—could help students develop both practical and ethical insights, as well as adaptability and critical thinking skills needed to navigate AI-driven environments (
Vieriu & Petrea, 2025;
Mena-Guacas et al., 2025).
Integrating ethics into AI education could also foster broader professional and civic skills, including employability and digital citizenship. Ethical literacy could allow graduates to anticipate and critically assess unintended consequences of technological innovation, such as data misuse, job displacement due to automation, or privacy risks from surveillance technologies (
Slavković et al., 2024;
Bednarowska-Michaiel & Uprichard, 2025). These skills go beyond coding or data analysis, requiring the ability to frame problems from multiple perspectives, engage in interdisciplinary dialogue, and balance efficiency with fairness and accountability (
Dignum, 2019). As AI increasingly mediates interactions between individuals, organizations, and infrastructure, professionals may need to anticipate ethical tensions, communicate value-driven decisions, and integrate normative considerations into technical workflows. This holistic approach could strengthen adaptability, critical thinking, and collaborative skills—core elements of employability and responsible digital citizenship (
Wong, 2024).
Labor market trends suggest growing importance for these capabilities. Employers may increasingly seek talent capable of combining technical expertise with human-centered, ethically informed reasoning, particularly in sectors undergoing rapid digital transformation such as healthcare, finance, manufacturing, logistics, and public administration (
Murire, 2024). Embedding ethics into ITS curricula could therefore be not only an educational enhancement but a strategic necessity, supporting career readiness, lifelong learning, and workforce resilience (
Tang & Lee, 2025).
To achieve this, AI education should be interdisciplinary, integrating technology, social sciences, and the humanities to provide a nuanced understanding of AI’s societal impact (
Smith & Chen, 2025). Interdisciplinarity develops skills such as systems thinking and encourages dialogue across different ways of understanding and producing knowledge—how we learn, interpret, and validate information. In technical areas, this includes understanding how data is collected, models are built, and decisions are derived, as well as assessing the reliability, biases, and limits of these processes.
Mastery of coding, AI literacy, and epistemological reasoning equips students in scientific fields to implement and interpret models, evaluate data critically, and understand how algorithmic outputs relate to real-world outcomes (
Tang & Lee, 2025;
Wang & Martinez, 2024;
Lin & Dai, 2025). By coding we mean the practical ability to write, read, and debug software instructions, design algorithms, and translate real-world problems into step-by-step computational procedures (
Tang & Lee, 2025;
Wang & Martinez, 2024). Coding is not only a technical skill: it trains students to think logically, structure solutions, and understand how computers and AI systems execute instructions.
AI literacy (
Lin & Dai, 2025) refers to the broader ability to understand, evaluate, and interact responsibly with AI systems. It includes knowing how AI processes data, recognizing limitations and biases in models, interpreting outputs in context, and reflecting on the ethical, social, and practical consequences of AI decisions (
Lin & Dai, 2025). Epistemological reasoning—the study of how knowledge is created, validated, and applied—helps students critically assess data sources, model assumptions, and decision-making processes, cultivating a reflective mindset essential for responsible scientific and technological practice (
Lin & Dai, 2025).
Integrating these three domains in AI education allows students to move beyond mere technical proficiency. By developing computational skills alongside critical reflection on how knowledge is produced and applied, learners are better positioned to anticipate unintended consequences, engage in ethical decision-making, and communicate value-driven choices in real-world contexts. Embedding such a framework within ITS programs could help cultivate adaptive expertise, preparing students not only to use AI effectively but also to contribute responsibly to the evolving digital society (
Lin & Dai, 2025).
Beyond interdisciplinarity, curriculum design could involve multi-stakeholder co-creation, engaging educators, industry representatives, regulatory bodies, universities, and community organizations (
Levett-Jones et al., 2010;
Daly-Smith et al., 2020). This collaborative approach would ensure that educational content remains current, relevant, and aligned with labour market needs, while integrating technical, ethical, social, and normative perspectives. Co-design could also help address complex “wicked” problems, strengthen links to internships and employment pathways, and allow curricula to remain agile in response to technological, regulatory, and ethical developments (
Parvatha, 2024;
Ertelt et al., 2021).
Even though ITS programs are not advanced AI or engineering degrees, students can benefit from understanding the ethical dimensions of AI. This includes recognizing that AI systems are not neutral: they reflect the values, assumptions, and choices of their developers and users. Ethical literacy in AI helps students identify potential biases, fairness issues, and privacy risks, and encourages responsible use of technology in real-world contexts (
Lin & Dai, 2025).
By introducing these concepts in a practical and accessible way, ITS students can develop a mindset that combines technical awareness with ethical reasoning. They learn to ask critical questions about AI applications in sectors like manufacturing, digital health, and smart services, understanding how decisions made by AI systems can affect people, organizations, and society. This foundation supports thoughtful, responsible action in professional environments without requiring deep technical mastery (
Wong, 2024;
Murire, 2024).
1.2. The Italian Istituti Tecnici Superiori (ITS) System: Overview, Student Profile, Key Statistics, and AI Considerations
The Italian Istituti Tecnici Superiori (ITS), also known as ITS Academies, provide highly specialized post-secondary education aimed at directly meeting labour market demands (
Ministero dell’Istruzione e del Merito, n.d.;
INDIRE, n.d.-b). Established in 2010, ITS Academies are Italy’s pioneering model of professionalized tertiary education, inspired by similar programs successfully implemented in other European countries. They train highly skilled technicians in sectors critical for economic development, including digital technologies, automation, energy, tourism, healthcare, and more. These institutions are closely linked to the industrial system, preparing mid-level professionals who can help companies harness and manage advanced technologies, including Industry 4.0 solutions.
1.2.1. Institutional Structure and Program Areas
There are 147 ITS Academies distributed across Italy, organized into ten strategic technological areas (
INDIRE, n.d.-a): Energy (17), Sustainable Mobility and Logistics (21), Chemistry and Life Sciences Technologies (11), Agro-Food Systems (24), Housing and Built Environment (4), Mechatronics (14), Fashion System (10), Services for Enterprises and Non-Profit Organizations (9), Technologies for Cultural, Artistic, and Tourism Activities (18), and Information, Communication, and Data Technologies (19).
Geographically, Lombardy hosts the largest number of ITS Academies (25), followed by Campania (16), Lazio (16), Sicily (11), Puglia and Tuscany (10), Calabria and Campania (9), Veneto (8), Emilia-Romagna and Piedmont (7), Liguria and Abruzzo (6), Sardinia (5), and Marche and Friuli Venezia Giulia (4), with single institutions in Molise, Umbria, and Basilicata.
ITS Academies operate under a Foundation of Participation model, which brings together public and private stakeholders, including companies, universities, research centers, local authorities, and the education and training system. This model allows for effective collaboration, combining expertise and resources to deliver high-quality professional education (
INDIRE, n.d.-a,
n.d.-b).
1.2.2. Admission, Program Structure, and Curriculum
ITS programs are open to young people and adults who have completed secondary education, including traditional high school diploma holders, four-year vocational diploma holders with an additional year of technical education, and a small proportion of university graduates (
INDIRE, n.d.-c;
TuttoITS, 2024).
Courses typically last two or three years, comprising 4–6 semesters and totalling 1800–2000 hours of study. Internships are mandatory for at least 30% of total hours, and at least 50% of instructors are professionals from the labour market. Students may also complete part of their training through high-level apprenticeship contracts. Each program ends with a final assessment, conducted by commissions composed of school, university, vocational training, and industry representatives (
TuttoITS, 2024;
INDIRE, 2025).
The curriculum covers both technical skills and professional practices, including digital literacy, data protection, privacy, risk assessment, and safe use of advanced technologies. This ensures graduates are prepared to operate responsibly in advanced technological environments.
1.2.3. Artificial Intelligence and Ethical Considerations
While ITS programs provide students with hands-on experience in digital technologies, including tools powered by artificial intelligence, it is important to note that the curricula do not focus specifically on AI ethics or responsible AI governance. The emphasis is primarily on practical technical skills, industry-relevant applications, and digital literacy. Students gain familiarity with AI-enabled systems in professional contexts, but structured training on the ethical, legal, or societal implications of AI is limited.
This situation is not unique to ITS programs; even many university degree courses do not systematically cover AI ethics, highlighting a broader gap in higher education. Addressing this gap could ensure that graduates across educational pathways are not only technically competent but also equipped to understand and navigate ethical challenges in increasingly automated and AI-driven workplaces (
INDIRE, 2025).
1.2.4. Student Profile and Background
According to the 2025 National Monitoring Report, the 450 ITS courses completed by December 2023 involved 11,834 students, with 8588 graduating (72.6%). Graduates found employment at a high rate: 84% (7212) were employed within a year, and 93% of these (6698) in jobs directly aligned with their training (
INDIRE, n.d.-c,
2025;
TuttoITS, 2024).
The ITS student body remains predominantly male (73%), aged 18–25 and largely holding a technical diploma (55.1%). About a quarter of students have a high school diploma (24.3%), 14.5% a vocational diploma, and 3.5% already hold a university degree. Approximately 40% of students were unemployed or seeking their first job before enrolment. Dropout rates are 24.3% overall, higher among women (28.1%) and older students (43.1% over age 30;
INDIRE, 2025).
1.2.5. Qualifications and Recognition
Graduates earn a Higher Technical Diploma, corresponding to Level 5 of the European Qualifications Framework (EQF). The diploma is accompanied by a EUROPASS Diploma Supplement, enabling recognition and mobility within Italy and across Europe.
1.2.6. Legal Framework
The ITS system is governed by a series of Italian laws and decrees spanning from 1999 to 2022. These regulations establish the design, evaluation, monitoring, and certification of ITS programs. A comprehensive overview of these laws is provided in
Table A1 (
Appendix A) (
INDIRE, n.d.-d).
1.3. Ethical Literacy in AI for ITS Professionals: Awareness, Not Guardianship
It is important to emphasize that students and professionals involved in ITS programs are not “guardians” of AI ethics, nor are they responsible for developing or designing intelligent systems. These activities fall within the domain of experts in data science, computer engineering, and software development, who possess the technical skills necessary to create, test, and optimize algorithms (
INDIRE, n.d.-a).
What ITS students are expected to do is interact with AI systems, manage their daily use, and understand their ethical, social, and legal implications within the context of their work (
INDIRE, n.d.-c). They do not need to know the technical details of code or models, but they must be aware of how technology can influence people, processes, and decisions, and act in an informed and responsible manner (
TuttoITS, 2024).
This awareness is crucial to protect them from professional and social vulnerabilities (
Rome Technopole, n.d.). Without the ability to recognize risks and consequences, workers may passively experience the impact of AI, exposing themselves to mistakes, uninformed decisions, or unintended effects on people and the organizations they serve. Ethical literacy, therefore, does not require engineering or development skills but entails the ability to reflect on the consequences of one’s actions, assess risks, responsibly manage interactions with intelligent systems, and clearly communicate decisions, even in complex or uncertain contexts (
INDIRE, 2025).
In practice, this means being able to interpret real-world scenarios, identify ethical and social issues, balance practical objectives with principles of fairness and safety, and recognize when AI might produce problematic or unfair outcomes. ITS students must be capable of managing their interactions with intelligent systems responsibly, contributing to an ethical and inclusive use of technology without assuming roles that require advanced technical expertise.
In summary, a critical understanding of AI ethical implications becomes a fundamental transversal skill: it is not a formal objective of the ITS curriculum, but an essential prerequisite for future professionals to operate with safety, autonomy, and responsibility, protecting themselves from undesired effects of AI and contributing to an ethically aware work environment. Possessing this knowledge allows them to address the challenges of interacting with intelligent systems, recognize risks and consequences, and participate responsibly in digitally complex professional contexts.
1.4. Contribution of This Study
This study originates from the activities of the Rome Technopole, a multidisciplinary innovation hub funded by the EU’s Next Generation EU program and designed to foster scientific excellence and sustainable economic growth in the Lazio region (
Rome Technopole, n.d.). Operating under a Hub & Spoke governance model with six thematic Spokes addressing strategic sectors such as energy, digital transformation, health technologies, and education (see
Appendix A,
Table A2 and
Table A3), Spoke 4 focuses specifically on strengthening higher technical professional education through the ITS Academy system. This system delivers specialized training aligned with emerging labour market needs in areas such as artificial intelligence, digital innovation, and green technologies.
The core mission of Spoke 4 is to develop flexible, modular training programs deployable in both face-to-face and online settings, supporting lifelong learning for first-level university students, ITS participants, and other professionals. The Italian National Institute of Health (ISS) contributes expertise in health sciences, bioethics, and technology to co-design interdisciplinary modules that integrate advanced technical content with reflection on ethical, legal, and social dimensions. Collaborating with the other partners listed in
Table A1, this multi-stakeholder framework ensures that training is relevant, responsive, and grounded in real-world challenges, thereby promoting digital citizenship and responsible engagement with complex socio-technical systems in line with European and UNESCO educational frameworks (
UNESCO, 2021;
European Commission, 2020).
Within this context, the present study reports on the design and development of an AI and Ethics training module created through this co-design, multi-stakeholder approach. While the effectiveness of this model in preparing learners for the evolving demands of digital transformation remains to be fully assessed, the initiative represents an important step toward building a resilient, knowledge-driven regional economy aligned with Italy’s digital and green transition priorities.
This study makes three key contributions. Theoretically, it clarifies the role of ITS participants not as developers or “guardians” of AI ethics, but as informed users and managers of AI systems, emphasizing the importance of awareness and critical comprehension of their social, ethical, and legal implications. Practically, it provides insights into the design of interdisciplinary training modules that equip ITS learners with the capacity to recognize and respond to ethical challenges, fostering responsible decision-making and reducing professional vulnerability in workplace interactions with intelligent systems. Methodologically, it documents a multi-stakeholder co-design process involving academic institutions, industry partners, and the ISS, showing how collaborative frameworks can generate flexible, context-aware training that bridges technical competencies with ethical literacy.
Taken together, these contributions offer a structured perspective on how ITS programs can prepare professionals to engage responsibly with AI technologies. The study demonstrates how a balanced integration of technical skills and ethical reflection can support lifelong learning, workforce resilience, and socially responsible innovation in digitally complex environments.
2. Methods
The methodology of this study is structured into four interconnected components, each addressing a critical phase of the pilot module’s development, deployment, and evaluation. First, within the framework of thematic roundtables involving Spoke 4 stakeholders, discussions were initiated to identify both sector-specific and cross-cutting training needs related to emerging technologies, particularly digital innovation and Artificial Intelligence (AI).
Second, the design and submission of the pilot study outline the creation of the educational module, its theoretical foundations, and the collaborative process with partner institutions. Third, the virtual focus group using a CAWI (Computer-Assisted Web Interviewing) platform describes the data collection strategy, highlighting the tools, question types, and ethical considerations that ensured broad and flexible stakeholder engagement. Finally, the data analysis section details the statistical and qualitative methods employed to rigorously interpret the collected data, combining quantitative rigor with thematic depth. Together, these components offer a comprehensive framework for understanding and validating the pilot educational intervention within the context of vocational and technical education, with a focus on the ITS Academy system in the Lazio Region.
2.1. Training Needs Identification Through Multi-Stakeholder Thematic Roundtables on Emerging Digital Technologies, with a Focus on AI
During the initial phase of the Rome Technopole initiative, thematic roundtables were conducted with representatives from universities, research bodies, public institutions, and industry partners. These discussions, held within Spoke 4, involved a wide range of stakeholders (see
Table A1 and
Table A2 in the
Appendix A), including ISS, Sapienza University of Rome, University of Rome Tor Vergata, Roma Tre University, University of Tuscia, University of Cassino and Southern Lazio, LUISS Guido Carli, Campus Bio-Medico University of Rome, as well as the Lazio Region, the Municipality of Rome, the Chamber of Commerce, and other public bodies and leading national industries such as IBM, ENEL, Engineering Ingegneria Informatica, Leonardo S.p.A., Almaviva, Telecom Italia, and other entities.
The roundtables focused on the implications of emerging technologies—particularly Artificial Intelligence (AI)—for the transformation of technical education. Across diverse disciplinary areas, participants identified a widespread and transversal training gap affecting all ITS Academy sectors, including those not explicitly focused on digital innovation. These include, for example, ITS programs in healthcare technologies, sustainable industry, mobility and logistics, agri-food systems, and cultural heritage.
As part of the preliminary phase, a structured stakeholder consultation was conducted through thematic roundtables and working groups involving diverse representatives from the Rome Technopole Spoke 4 ecosystem. Participants included members from ITS Academies, universities, regional and local authorities, research institutions, public bodies, and industry partners.
The primary aim was to gather detailed insights on gaps and training needs related to emerging technologies, particularly Artificial Intelligence (AI), and their ethical, legal, social, and cybersecurity implications within existing ITS curricula. Discussions highlighted that, although many sector-specific programs integrate digital tools, they often lack a foundational layer of education on how AI systems function, their impact on decision-making, and the broader societal and legal challenges involved.
These observations were consistent across sectors such as automation, digital manufacturing, energy, logistics, healthcare, and public services, where AI is increasingly embedded. Stakeholders underscored the urgent need to incorporate algorethic literacy—a cross-disciplinary competence combining AI general knowledge with ethical, legal, cybersecurity, and societal awareness—to equip learners with the critical skills needed for responsible use of AI technologies in professional settings.
This consultation process is intended to produce a first guiding output for the development of a transversal AI and ethics training module, setting strategic directions to address these identified needs within ITS educational pathways.
2.2. Design of the Pilot Study and Submission to the Partners
Coordinated by the Istituto Superiore di Sanità (ISS) within Spoke 4 of the Rome Technopole project, the pilot educational module was developed to directly reflect the thematic priorities identified during Phase 1, with a strong focus on the ethical and technological dimensions of Artificial Intelligence (AI).
The primary aim was to respond to the rapidly evolving demands of the labour market—especially in sectors deeply impacted by digital transformation—by enriching, harmonizing, and innovating the training offer of ITS Academies in the Lazio Region through a curriculum that integrates both AI knowledge and ethical reasoning.
The methodological approach was based on:
A competence-based framework that emphasizes not only technical and professional skills related to AI but also the development of ethical awareness, critical thinking, and responsible decision-making.
The integration of traditional teaching methods with digital and e-learning tools to encourage active learning, autonomy, and engagement, particularly in exploring the complex social implications of AI.
A design that ensures modularity, scalability, and flexibility, allowing adaptation to different educational settings, teaching styles, and time constraints, while maintaining a clear focus on the interplay between AI technologies and ethical considerations.
This approach aligns with the insights from multi-stakeholder consultations, which highlighted the urgent need for educational pathways that do not isolate technical competencies from their ethical, philosophical, and societal contexts. The module aims to prepare learners not only to understand AI technologies but to critically reflect on their impact and to apply ethical reasoning across disciplines and sectors.
2.3. Virtual Focus Group Using a CAWI
A virtual focus group was conducted using a CAWI (Computer-Assisted Web Interviewing) module developed with Microsoft Forms in collaboration with the other partners of Spoke 4, who had previously reviewed the module before completing the survey. This tool was selected to facilitate the efficient and flexible collection of qualitative data from all stakeholders involved in the project, gathering their opinions, feedback, and evaluations. The CAWI module allowed the structuring of both open-ended and closed-ended questions, enabling quantitative responses as well as in-depth reflections. The use of Microsoft Forms ensured easy accessibility for participants and supported preliminary automated data analysis, thus aiding the validation and improvement phases of the pilot module. We remark that participation in the virtual focus group was strictly voluntary and anonymous, ensuring that all stakeholders could freely choose whether to contribute, without any obligation.
Embedding the virtual focus group within the CAWI platform provided several key advantages:
(1) Comprehensive Data Collection: The integration enabled the simultaneous gathering of quantitative data alongside qualitative insights from open-ended questions. This combination allowed for a holistic understanding of participants’ views and experiences regarding the educational content and delivery.
(2) Participant Convenience and Flexibility: The asynchronous nature of the CAWI-based virtual focus group permitted participants to contribute at their own pace and from any location with internet access. This flexibility encouraged more thoughtful and detailed responses, removing the time constraints typical of traditional synchronous focus groups.
(3) Cost and Time Efficiency: Consolidating data collection into a single online tool minimized the need for in-person meetings or live sessions, reducing logistical complexities and resource expenditure while maintaining data richness and depth.
(4) Streamlined Data Analysis: The unified platform facilitated a smoother data processing workflow. Quantitative responses collected via graduated questions were easily quantified, while qualitative responses were thematically analysed to extract deeper insights. This dual approach strengthened the evidence base for module refinement.
(5) Ethical and Secure Data Handling: Utilizing Microsoft Forms, compliant with institutional IT security standards, ensured data privacy and integrity throughout the research process, addressing concerns related to data protection and regulatory compliance.
The CAWI instrument included a diverse set of question formats tailored to capture a wide range of feedback: (A) multiple-choice questions to collect clear, categorical data; (B) graduated-scale questions with five levels to assess participant opinions; (C) open-ended questions encouraging detailed qualitative reflections; and (D) a Net Promoter Score (NPS) question to evaluate participants’ likelihood of recommending the module to peers.
This integrated and collaborative approach, developed with the participation of Spoke 4 partners, allowed for an effective assessment of the pilot educational module, ensuring its content and delivery methods were well-aligned with the expectations and needs of all involved stakeholders.
Participants from Spoke 4 ISS (developers of the module) were excluded to avoid bias.
2.4. Data Analysis
The quantitative data gathered through the CAWI module were analysed using tailored statistical approaches based on the nature of each question type.
For the graded questions, responses were examined using the one-sample Student’s t-test. This allowed us to determine whether the mean scores significantly differed from a predefined neutral midpoint, providing a rigorous evaluation of participants’ perspectives on various aspects of the pilot module. Values of p < 0.05 were considered statistically significant.
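As an illustration, the one-sample test against the scale midpoint can be sketched in Python. The response vector below is hypothetical, since the study’s raw item responses are not reproduced here:

```python
import math
from statistics import mean, stdev

def one_sample_t(scores, mu0=3.0):
    """t statistic for H0: the population mean equals mu0."""
    n = len(scores)
    return (mean(scores) - mu0) / (stdev(scores) / math.sqrt(n))

# Hypothetical 5-point Likert responses for one graded item.
scores = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
t_stat = one_sample_t(scores)
# With n = 10 (df = 9), the two-tailed 5% critical value is 2.262,
# so |t_stat| > 2.262 corresponds to p < 0.05.
```

In the study itself, n = 37 (df = 36), so the corresponding two-tailed critical value is 2.028.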
Regarding the single multiple-choice question that permitted multiple selections, a descriptive statistical analysis was conducted. Rather than applying inferential tests, the analysis focused on calculating the frequency and percentage of each option selected by participants. This approach effectively captured the distribution of preferences and highlighted the most chosen themes, offering valuable insights into stakeholders’ priorities and interests without imposing assumptions required for inferential testing.
The Net Promoter Score (NPS) was also calculated to assess participants’ overall endorsement of the pilot module. Respondents were categorized into Promoters (scores 9–10), Passives (scores 7–8), and Detractors (scores 0–6), and the NPS was computed by subtracting the percentage of Detractors from the percentage of Promoters. This metric provided a straightforward and widely recognized measure of user satisfaction and likelihood to recommend, complementing the other quantitative and qualitative analyses.
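The categorization and score computation can be expressed compactly. The ratings list below is illustrative, reconstructed from the category counts reported in the Results (29 Promoters, 6 Passives, 2 Detractors), not from the actual response file:

```python
def nps(scores):
    """Net Promoter Score from 0-10 ratings: %Promoters - %Detractors."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Illustrative distribution matching the reported category counts.
ratings = [9] * 29 + [7] * 6 + [5] * 2
score = nps(ratings)  # (29 - 2) / 37 * 100
```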
Qualitative responses collected through open-ended questions were analysed using thematic analysis. This qualitative method enabled the identification and categorization of key themes, patterns, and emerging insights across the dataset. Through iterative coding and careful interpretation, the analysis enriched the quantitative results by revealing deeper participant motivations, concerns, and suggestions related to the module’s content and delivery.
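Theme frequencies of the kind reported in the Results can be tallied once responses have been coded. The theme labels and coded excerpts below are entirely hypothetical and serve only to sketch the counting step that follows iterative coding:

```python
from collections import Counter

# Hypothetical coding output: each open-ended response is tagged with
# one or more theme labels during iterative coding.
coded_responses = [
    ["interdisciplinarity", "ethics"],
    ["case_studies"],
    ["ethics", "historical_context"],
    ["interdisciplinarity"],
]

# Count how many responses mention each theme.
theme_counts = Counter(tag for tags in coded_responses for tag in tags)
```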
Importantly, the data collection process was conducted anonymously, ensuring that all responses were de-identified to protect participants’ privacy and confidentiality. This ethical approach fostered an environment where stakeholders felt comfortable sharing honest and constructive feedback, ultimately supporting the robustness and credibility of the findings.
Overall, this mixed-methods analysis strategy combined statistical rigor with rich qualitative insight, providing a comprehensive understanding of stakeholder views that informed the validation and refinement of the pilot educational module.
3. Results
The evaluation of the pilot educational module followed the four-phase methodology outlined in Sections 2.1, 2.2, 2.3 and 2.4. Phase 1 (Section 3.1) focused on identifying training needs through multi-stakeholder thematic roundtables, which shaped the content and scope of the module. Subsequent Phases 2–4 (Section 3.2) involved the collaborative design of the module, its delivery via a virtual CAWI-based focus group, and a mixed-method evaluation combining quantitative and qualitative data analysis. This phased approach ensured broad stakeholder engagement, encouraged critical reflection, and enabled a structured assessment of the module’s clarity, perceived relevance, and utility for employability.
The results are organized into two main sections, reflecting the dual structure of the methodology: one dedicated to the outputs of the needs assessment (Phase 1), and the other focused on the outcomes of the design, delivery, and evaluation phases (Phases 2–4). This structure supports a coherent presentation of both the foundational insights that informed the module’s development and the feedback generated through its piloting.
3.1. Output (Phase 1): Content Orientation for a Transversal AI and Ethics Framework
The consultation process resulted in a shared recommendation to establish a transversal AI and Ethics content framework to be integrated across ITS pathways. This content orientation, while not intended to replace sector-specific training or to serve as a course for AI developers, aims to complement existing curricula by equipping students with critical awareness of AI systems and their broader ethical, legal, societal, and cybersecurity implications.
Key features of this proposed content framework include (1) foundational understanding of algorithmic systems, their functioning, and their role in automated decision-making; (2) algorethic framing, incorporating ethical, legal, human rights, and cybersecurity considerations; (3) interdisciplinary adaptability, allowing integration across ITS tracks from healthcare and agri-food to smart industry and public services; and (4) scenario-based learning, encouraging engagement with real-life case studies, dilemmas, and future-facing challenges.
Stakeholders unanimously agreed on the relevance of this content direction to support digital resilience, responsible innovation, and employability in AI-driven environments. The proposal aligns with international frameworks, such as the European Commission’s Coordinated Plan on AI and UNESCO’s guidelines for human-centric AI, both of which advocate embedding ethical and societal considerations into technical and vocational education systems.
This trace report reflects a documented and consensus-driven output of the Rome Technopole Spoke 4 process and will guide the design of pilot educational materials and future scale-up strategies. A synthetic structure outlining the thematic areas and pedagogical focus for this content framework is provided in Box 1 to support curriculum development efforts.
Box 1. Preliminary Content Guidelines for a Transversal AI and Algorethics Module.
An AI and Ethics core module must provide all ITS students with a comprehensive, interdisciplinary foundation in Artificial Intelligence, focusing on critical understanding rather than technical programming or AI development. This foundational knowledge is essential to empower students across diverse sectors to responsibly engage with AI-driven systems and technologies in their professional environments.
The module must encompass the following key thematic areas:
Algorethics: Ethical and Philosophical Foundations
Introduce the ethical dimensions of algorithms, revisiting the philosophical origins of algorithmic reasoning and decision-making to frame AI within a human-centered ethical context.
Historical Evolution of Artificial Intelligence
Provide a chronological overview of AI development, from the initial conceptualizations and early breakthroughs (1940–1960), through the AI winters and resurgence (1970–1990), to the deep learning renaissance and current state of AI technologies in the modern era.
Sectoral Applications and Case Studies
Illustrate the role of AI across key economic and social sectors to highlight both opportunities and challenges:
- Industry and Manufacturing: Automation, innovation, and efficiency improvements driven by AI systems.
- Logistics and Transportation: Optimization, predictive analytics, and enhanced mobility solutions.
- Consumer Services: AI-enabled personalization, recommendation systems, and customer experience enhancement.
Healthcare Focus: Specificities and Ethical Challenges
Address AI’s growing role in healthcare, with detailed analysis of:
- Medical diagnosis and decision support, from theory to practical implementation.
- Personalized medicine and AI-assisted treatment plans.
- Robotics in surgery and rehabilitation.
- Data management, clinical analytics, and the handling of sensitive health data.
- Critical issues such as accuracy, safety, bias, transparency, regulation, and privacy protection.
Cross-Cutting Ethical, Legal, and Cybersecurity Considerations (Algorethics)
Explore algorithmic bias, accountability, explainability, data protection, human rights implications, and cybersecurity risks associated with AI deployment, fostering awareness of responsible innovation and societal impact.
Didactic Approach and Learning Methodology
Employ scenario-based learning and real-life case studies to engage students with practical dilemmas, ethical challenges, and future-facing issues, promoting critical thinking and informed decision-making.
Interdisciplinary Adaptability
Ensure the module’s content is adaptable for integration across multiple ITS pathways—from healthcare and agri-food to smart industry, logistics, and public services—supporting broad digital literacy and employability in AI-driven labor markets.
This structured framework is designed to guide the development of detailed curricula, ensuring alignment with international best practices and policy frameworks, such as the European Commission’s Coordinated Plan on AI and UNESCO’s guidelines for human-centric AI education.
Ultimately, the module must foster the digital resilience, ethical awareness, and responsible innovation skills necessary for a future workforce that will increasingly operate alongside AI-driven intelligent systems.
3.2. From Co-Design to Implementation: Pilot Module Development and Assessment
3.2.1. Outcomes of Phase 2: Prototype Module Content Development
The key outcome of Phase 2 was the development of a modular educational resource titled “Artificial Intelligence between History and Innovation: A Multidisciplinary Teaching Approach”. This volume comprises 18 thematic learning units (“learning cards”) designed to blend technical content on AI with a robust integration of ethical inquiry and reflection.
Each card is composed of three sections:
Subtopic Focus: providing detailed knowledge on AI concepts, historical development, technological applications, and associated ethical issues.
Reflective Questions: designed to foster critical discussion and deepen understanding of the ethical, social, and professional implications of AI.
Suggested Research Activities: encouraging autonomous investigation into both the technical and ethical dimensions of AI through current literature, case studies, and digital resources.
The module starts with an in-depth focus on algorethics—the ethical and philosophical principles governing algorithm design—setting the stage for subsequent units that explore AI’s evolution, real-world applications, and challenges, with particular emphasis on healthcare, industry, and consumer sectors.
Suitable for in-person, blended, and online delivery, the module supports both synchronous and asynchronous learning and is compatible with active learning methods such as flipped classroom, project-based, and inquiry-based learning. This flexible structure ensures its applicability in vocational and technical education environments, like ITS Academies, while also offering relevance for broader professional training programs.
The pilot module was formally submitted to the Spoke 4 partners for review and further development, representing a novel educational tool designed to integrate AI literacy and ethical reasoning, thereby empowering learners to navigate and contribute responsibly to the digital and AI-driven world. The evaluation of the module was conducted using a comprehensive, multi-faceted survey designed to capture nuanced perspectives across several dimensions. The survey included multiple types of questions: Likert-scale items for graded assessment of satisfaction and perceived relevance, multiple-choice questions for structured evaluation of knowledge and practical application, and open-ended prompts to explore thematic and qualitative insights. By combining quantitative and qualitative formats, the survey ensured that participants could express both objective judgments and subjective reflections, thereby producing a rich, multidimensional dataset.
A total of 37 participants from the Spoke 4 network contributed to the survey, corresponding to approximately 94% of the members directly involved in this specific activity in Spoke 4. The participant group was relatively homogeneous, predominantly composed of educators and trainers actively engaged in professional development and instructional activities. This group was complemented by a smaller proportion of experts in digital technologies, healthcare innovation, and policy advisory roles, ensuring the inclusion of technical and contextual perspectives without creating overrepresentation. Importantly, members of the Italian National Institute of Health (ISS) were deliberately excluded from the survey to prevent potential bias stemming from their role as developers of the module. Their exclusion safeguarded the independence of responses, ensuring that feedback reflected the unmediated professional judgment of Spoke 4 partners rather than the influence of authorship or authority in content creation.
To further protect the integrity of the evaluation, all survey responses were collected anonymously. Anonymity was a crucial feature, as it mitigated potential social desirability bias, hierarchical pressure, and conformity effects, allowing participants to provide honest, candid, and well-considered feedback. This methodological choice strengthened the reliability of the data by fostering an environment in which participants could freely express both positive and critical perspectives.
Additionally, the diversity of question types played a complementary role in reducing bias. Likert-scale questions enabled straightforward, comparable quantitative assessment; multiple-choice questions provided structured evaluation aligned with learning objectives; and open-ended prompts allowed participants to elaborate on nuanced observations, contextual concerns, and practical suggestions. The combination of these formats ensured that the evaluation captured both breadth and depth of insights, minimizing the limitations that could arise from a single-question type or narrow response framing.
Taken together, the careful selection of participants, the structured multi-format survey design, and the implementation of anonymity collectively minimized potential biases, reinforced the validity of the findings, and provided a robust foundation for interpreting results. These measures ensure that the collected feedback reflects the independent, considered professional insights of Spoke 4 partners, offering a reliable basis for the further refinement of the module and informing its potential broader deployment within the network. The methodological rigor exemplified in this evaluation demonstrates that even in a relatively small-scale pilot, it is possible to obtain high-quality, actionable insights that support both iterative improvement and evidence-based decision-making.
3.2.2. Outcomes of Phases 3 and 4: Module Delivery, Participant Engagement, and Evaluation
A total of 37 participants from Spoke 4 voluntarily participated in the virtual focus group, completing the CAWI-based survey designed to evaluate the relevance, quality, and perceived impact of the pilot educational module.
The quantitative analysis focused on two primary question types: (a) graded questions using a 5-point Likert scale (1 = minimum, 5 = maximum), and (b) a multiple-choice question allowing multiple selections. The responses to graded questions were examined using a one-sample Student’s t-test against a neutral test value of 3, to assess whether stakeholder perceptions deviated significantly from a neutral stance. Frequencies and percentages were calculated for the multiple-choice item to identify areas of perceived applicability.
Perceived Relevance and Impact—Graded Questions
Table 1 summarizes the results for the seven graded items, including mean scores, t-values, p-values, and 95% confidence intervals calculated as:

CI = x̄ ± t(n−1, 0.975) × SD/√n

where x̄ is the mean value, SD is the standard deviation, n is the sample size, and t(n−1, 0.975) is the critical t-value for a two-tailed one-sample t-test. With n = 37 (36 degrees of freedom), the critical t-value at the 95% confidence level is t(36, 0.975) = 2.028.
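Given these definitions, the confidence-interval computation can be sketched in a few lines. The item scores below are hypothetical stand-ins for one graded item, as the study’s raw responses are not published:

```python
import math
from statistics import mean, stdev

T_CRIT_36 = 2.028  # critical t-value for df = 36 at the 95% level

def ci95(scores, t_crit):
    """95% CI for the mean: mean +/- t_crit * SD / sqrt(n)."""
    n = len(scores)
    half_width = t_crit * stdev(scores) / math.sqrt(n)
    m = mean(scores)
    return m - half_width, m + half_width

# Hypothetical responses from 37 participants on one 5-point item.
scores = [4] * 30 + [5] * 7
low, high = ci95(scores, T_CRIT_36)
```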
All items yielded mean scores significantly above the neutral value, with p-values consistently < 0.001, indicating a statistically robust positive assessment of the module.
The highest mean score (4.19) was assigned to the relevance of the module content for ITS Academy students, underscoring a strong alignment with perceived educational needs.
Similarly, participants positively evaluated the clarity and structure (4.08), and the extent to which the module addresses cross-cutting issues in AI training (4.08), suggesting that the pedagogical design was both accessible and well-contextualized.
The quality of examples and case studies (4.05) and students’ understanding of AI applications (4.03) also received favourable scores, confirming the module’s strength in offering concrete, applicable content.
The item on employability scored slightly lower (3.97), though still significantly positive, indicating cautious optimism regarding the module’s impact on job readiness.
Finally, the overall perceived quality and impact of the training module (4.14) consolidates these positive impressions, suggesting that stakeholders recognize the pilot as a valuable and well-designed educational resource.
These results support the module’s alignment with both the didactic expectations of vocational educators and the practical demands of the technological labour market.
The Cronbach’s alpha for the seven Likert-scale items was calculated with the formula:

α = (N / (N − 1)) × (1 − Σσᵢ² / σₓ²)

where α is the internal consistency reliability coefficient, measuring how closely related the items of a test or questionnaire are (typical values range from 0 to 1, with higher values indicating greater reliability); N is the total number of items; σᵢ² is the variance of the i-th item, each item having its own variance because responses differ across participants; σₓ² is the total variance of the overall score (the sum of all items for each participant); and Σσᵢ² is the sum of the item variances, used to compare individual item variance with the total variance. The factor N/(N − 1) corrects for the number of items, since α tends to underestimate true reliability in short tests, while the term 1 − (Σσᵢ²/σₓ²) measures the proportion of total variance attributable to the covariance between items (i.e., internal consistency).

The resulting α = 0.91 indicates excellent internal consistency, confirming that the survey items reliably measure a coherent construct related to module relevance and perceived impact (Gliem & Gliem, 2003). The high alpha value supports the robustness of the quantitative assessment and strengthens confidence in the interpretation of the results.
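The coefficient can be computed directly from per-item responses. The two-item dataset below is a synthetic sanity check (two perfectly correlated items must yield α = 1), not the study’s data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item response lists
    (same respondents, same order in every list)."""
    n_items = len(items)
    item_var = sum(pvariance(item) for item in items)
    totals = [sum(vals) for vals in zip(*items)]  # per-respondent total score
    total_var = pvariance(totals)
    return (n_items / (n_items - 1)) * (1 - item_var / total_var)

# Sanity check: two identical (perfectly correlated) items.
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3]])
```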
All items scored significantly above the neutral midpoint (3), highlighting strong positive perceptions of the module across multiple dimensions, including relevance, clarity, practical examples, and perceived impact on learners’ understanding and employability. The 95% confidence intervals further substantiate the reliability of these findings, demonstrating that even under conservative estimates, participants consistently rated the module positively.
This quantitative evidence complements the qualitative feedback collected through open-ended survey items, providing a comprehensive understanding of participant perceptions. The combination of statistical significance testing, confidence intervals, and reliability analysis ensures a rigorous evaluation framework, suitable for informing further refinement and potential broader deployment of the pilot module.
Sectoral Applicability—Multiple-Choice Responses
Participants were also asked to identify the sectors they believed would benefit most from the module. Multiple selections were allowed, and the frequency of each option is summarized in Table 2 below.
The results highlight a unanimous perception of the module’s relevance for both Healthcare and Industry, which were selected by 100% of respondents. These sectors are emblematic of areas undergoing fast-paced technological change and where AI is already reshaping workflows and job profiles.
At the same time, the high selection rates for Digital Transformation (78%) and Consumer Applications (73%) reinforce the cross-sectoral value of the module. Given that each respondent could indicate multiple sectors, these percentages reflect a broad consensus on the module’s versatility and applicability.
In total, 120 selections were made across 37 participants, yielding an average of 3.24 sector choices per respondent. Under a uniform distribution (i.e., if all five sectors were equally relevant), each sector would have received approximately 24 selections. Notably, Healthcare and Industry exceed this benchmark by over 50%, while Digital Transformation and Consumer Applications also surpass expectations by 21% and 12%, respectively. This suggests a non-random distribution, indicating that the module is not only seen as pertinent to specific domains, but is also recognized as strategically transversal, responding to converging needs across multiple sectors where AI is becoming an operational and innovation-driven asset.
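The benchmark arithmetic described above can be reproduced in a few lines from the reported totals:

```python
# Figures reported above: 120 selections, 5 sectors, 37 respondents.
total_selections, n_sectors, n_respondents = 120, 5, 37

expected_per_sector = total_selections / n_sectors   # uniform benchmark: 24
avg_choices = total_selections / n_respondents       # average picks per respondent

# Healthcare and Industry were each selected by all 37 respondents.
healthcare_selections = 37
excess_over_uniform = (healthcare_selections - expected_per_sector) / expected_per_sector
# excess_over_uniform is just over 0.5, i.e. more than 50% above the benchmark
```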
Thematic Analysis of Open-Ended Responses
A thematic analysis was conducted on the open-ended feedback provided by 33 of the 37 participants (Table 3), following their review of the training module. This analysis highlighted five major dimensions that participants particularly appreciated or suggested for improvement:
1. Interdisciplinary and Sectoral Relevance (16 mentions)
Many respondents emphasized the cross-sectoral applicability of the module, particularly the integration of content across healthcare, industry, ethics, and digital transformation. The multidisciplinary approach, including references to computer science, philosophy, and medicine, was praised for fostering integrated and critical thinking:
“Its multidisciplinary approach is a major strength…”
“Involving instructors from different fields… can enrich the course…”
2. Historical and Ethical Depth (15 mentions)
The inclusion of historical context and the systematic integration of algorethics (ethical reflection on algorithms) were consistently highlighted as strong points. These elements support a responsible and contextualized understanding of AI:
“Exploring AI’s historical development helps learners grasp how foundational concepts evolved…”
“Ethics is embedded, not appended…”
3. Concrete Examples and Case Studies (11 mentions)
Participants appreciated the use of real-world cases, suggesting even more could be added through company partnerships or guest speakers to bridge theory and practice:
“Including concrete case studies promotes experiential learning…”
“Partnering with companies or research centers… can enhance relevance…”
4. Engagement, Collaboration, and Innovation (9 mentions)
Several responses called for the creation of interactive spaces (e.g., online forums), team-based projects, or collaborations with international universities to foster peer learning and innovation:
“Facilitating teamwork on interdisciplinary projects can foster practical skills…”
“Establishing an online forum or community space…”
5. Clarity, Structure, and Pedagogical Design (7 mentions)
Positive comments were made on the clear organization, balanced structure, and progressive content delivery, which helped learners navigate complex issues effectively:
“Clear organization and progressive content delivery help learners…”
“The course is well-balanced between theory and reflection…”
Commentary
The qualitative responses reinforce and enrich the quantitative findings by confirming the module’s multidimensional strength. Not only is it perceived as sector-relevant, but also as pedagogically mature, with ethical and historical depth, strong practical orientation, and openness to collaborative enhancement. Suggestions for improvement—like increasing exposure to real-world cases or enhancing international and peer exchange—should be seen as opportunities to expand an already appreciated and well-received structure.
Overall, this feedback paints a picture of a module that is both didactically robust and conceptually innovative, aligning with the needs of ITS Academy learners and the evolving landscape of AI education.
Net Promoter Score (NPS) Evaluation
To assess overall participant satisfaction with the “AI and Ethics” module delivered within Spoke 4, the Net Promoter Score (NPS) was calculated. Participants were classified into three categories based on their responses: Promoters (scores 9–10), Passives (scores 7–8), and Detractors (scores 0–6). Out of 37 participants, 29 were Promoters, 6 were Passives, and 2 were Detractors.
The proportion of Promoters was pp = 29/37 = 0.7838, and the proportion of Detractors was pd = 2/37 = 0.0541.

The NPS is obtained as the difference:

NPS = (pp − pd) × 100 = (0.7838 − 0.0541) × 100 ≈ 72.97

An NPS above 50 is widely regarded as very good, indicating a high level of participant satisfaction and strong approval of the module (SurveyMonkey, 2025; CustomerGauge, 2025; Qualtrics, 2025). To account for the relatively small sample size in this pilot study (n = 37), a 95% confidence interval was calculated using the standard formula for proportion-based metrics:

NPS ± 1.96 × SE

where SE is the standard error @. The resulting confidence interval ranges from 57.85 to 88.09, demonstrating that even under conservative estimates, participant endorsement remains robust. This supports the interpretation that the module is highly relevant and resonates well with learners, reinforcing the effectiveness of the co-design and ethically informed curriculum approach.
This result demonstrates that the majority of participants perceived the training as valuable and relevant, highlighting effective engagement and positive reception within this pilot implementation.
To further evaluate the distribution of participant responses across the three categories, a χ2 test was conducted, comparing observed frequencies with the expected uniform distribution. With 2 degrees of freedom, the calculated chi-square statistic was 34.44 (p < 0.001), demonstrating a significant deviation from uniformity and confirming that Promoters predominated, while Passives and Detractors were substantially fewer.
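The goodness-of-fit statistic can be verified directly from the category counts. Since the 0.001 critical value for 2 degrees of freedom is 13.82, any statistic above that threshold implies p < 0.001:

```python
def chi_square_uniform(observed):
    """Chi-square statistic against a uniform expected distribution."""
    expected = sum(observed) / len(observed)
    return sum((obs - expected) ** 2 / expected for obs in observed)

# Promoters, Passives, Detractors; expected 37/3 each under uniformity.
stat = chi_square_uniform([29, 6, 2])
# stat is about 34.4, far above the df = 2, p = 0.001 critical value of 13.82
```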
Taken together, these quantitative findings reinforce the positive reception of the pilot module, demonstrating that the co-designed, ethically informed curriculum effectively engaged learners and met their expectations in terms of relevance, clarity, and practical applicability.
@ For NPS, responses are treated as a binary measure (Promoters vs. Detractors). A normal approximation was used to compute the 95% confidence interval, which is appropriate for proportion-based metrics even with a moderate sample size (n = 37). The critical value 1.96 corresponds to a 95% confidence level. This provides an estimate of variability while accounting for the limited sample.
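The normal-approximation interval described in this footnote can be sketched as follows. The exact standard-error formula used in the study is not stated; the sketch below uses one common choice, which scores each response +1 (Promoter), 0 (Passive), or −1 (Detractor) and takes the variance of that scoring. Under this assumption, the interval comes out of similar width to, but not identical with, the reported 57.85–88.09:

```python
import math

def nps_ci95(promoters, detractors, n):
    """Normal-approximation 95% CI for NPS (reported on the -100..100 scale),
    treating responses as a +1/0/-1 scored variable (assumed variance formula)."""
    p_p, p_d = promoters / n, detractors / n
    nps = (p_p - p_d) * 100
    se = math.sqrt((p_p + p_d - (p_p - p_d) ** 2) / n) * 100
    return nps - 1.96 * se, nps + 1.96 * se

# Category counts from this pilot: 29 Promoters, 2 Detractors, n = 37.
low, high = nps_ci95(29, 2, 37)
```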
4. Discussion
The Discussion is structured to highlight the development, significance, and broader implications of the Spoke 4 pilot initiative, moving progressively from the summary of added values to international evidence, market perspectives, and finally, study limitations.
Section 4.1—Summary and Added Values presents the core rationale and achievements of the pilot AI and Ethics module within ITS Academy programs. It emphasizes the growing need for foundational knowledge in AI ethics to navigate the complex ethical, social, and legal challenges across sectors. The section highlights how the multidisciplinary co-design approach, engaging experts, stakeholders, and learners through virtual focus groups and surveys, reinforced relevance, practical utility, and participant engagement. Key added values include the integration of AI ethics across disciplines, multidisciplinary collaboration, innovative use of digital participatory tools, and high acceptance and practical relevance, fostering learner ownership and long-term impact.
Section 4.2—International Evidence Supporting Multidisciplinary and Co-Designed Educational Pathways for the AI Era situates the Spoke 4 initiative within broader global trends. It demonstrates how co-design, multi-stakeholder engagement, and interdisciplinarity are increasingly recognized as essential for future-oriented AI education. This section discusses evidence from the literature showing that relational co-design, systemic integration, and ethical embedding strengthen curriculum adaptability, inclusiveness, and professional relevance. The pilot project exemplifies these principles by combining expertise from biomedical sciences, computer engineering, philosophy, education, and organizational psychology, creating a modular, interdisciplinary, and ethically informed curriculum.
Section 4.3—Market Expansion and the Need for Holistic, Future-Oriented Skill Development contextualizes the pilot initiative within the global and regional growth of the TVET and vocational training sector. The discussion highlights how technological innovation, demographic shifts, and evolving labor market demands are transforming education, emphasizing skills-based, modular, and learner-centered approaches. The section underlines the importance of combining specialized technical capacities with transversal competencies—such as inclusivity, digital literacy, cybersecurity, sustainability, soft skills, and ethical awareness—to prepare professionals capable of responding to complex, interdisciplinary challenges.
Finally, Section 4.4—Limitations outlines the study’s contextual limitations, highlighting that although the Italian ITS system is deeply shaped by local governance, institutional structures, and regional dynamics, it nonetheless provides a rich and adaptable model. The lessons drawn from this experience can inspire the design of multidisciplinary and ethically informed training programs in other settings, demonstrating the potential for broader application beyond the Italian context.
Overall, the Discussion demonstrates how the Spoke 4 pilot contributes to an emerging paradigm in which AI education is not confined to technical expertise but incorporates ethical, social, and interdisciplinary dimensions. By combining co-design, multi-stakeholder collaboration, and holistic skill development, the initiative offers a practical model for responsive, inclusive, and future-oriented training in the AI era.
4.1. Summary and Added Values
Within the ITS Academy training programs, there is a growing and urgent need to provide a solid foundation in Artificial Intelligence (AI) ethics as a fundamental component of all educational modules. This foundational knowledge is crucial for learners to navigate the complex ethical, social, and legal challenges that AI technologies present, not only in healthcare but across multiple sectors. As digital transformation accelerates, professionals across disciplines must be prepared to handle issues related to data privacy, algorithmic transparency, bias mitigation, and responsible AI use, ensuring ethical standards are embedded in all technological applications.
To address these challenges, our approach involved a multidisciplinary co-design process engaging experts, stakeholders, and learners through structured activities such as virtual focus groups and online surveys administered via Computer-Assisted Web Interviewing (CAWI). These collaborative settings facilitated open dialogue, enabling the identification of key competencies and the refinement of content to meet real-world needs. The iterative feedback loops and teamwork fostered innovation and consensus, reinforcing the module’s relevance and practical utility.
The methodology employed combines quantitative and qualitative analysis, highlighting high acceptance and enthusiasm for the module among participants. The co-design framework, enriched by the inclusion of virtual focus groups as a participatory tool, served as an effective means to ensure the training content was aligned with current demands and anticipatory of future trends.
Key added values emerging from this initiative include: (1) Integration of AI Ethics Across Disciplines: providing learners with a comprehensive understanding of AI ethics applicable beyond healthcare, embracing diverse sectors impacted by digital technologies. (2) Multidisciplinary Collaboration: leveraging expertise from various fields to build a well-rounded and applicable curriculum, reflecting the complexity of ethical challenges in AI. (3) Innovative Use of Virtual Focus Groups and CAWI: enhancing participant engagement and inclusiveness through digital tools that facilitate rich qualitative input alongside quantitative data. (4) High Participant Acceptance and Practical Relevance: demonstrated by positive feedback and thematic analysis, confirming the module's alignment with learners' expectations and professional requirements.
The evaluation of participant satisfaction through the Net Promoter Score (NPS) further substantiates these findings. With 29 promoters, 6 passives, and 2 detractors, the calculated NPS is 72.97, indicating an excellent level of endorsement according to established benchmarks (SurveyMonkey, 2025; CustomerGauge, 2025; Qualtrics, 2025). High NPS values have been associated with strong engagement, perceived relevance, and motivation to apply learned skills in professional contexts. This outcome is consistent with prior research showing that participatory and ethically informed curriculum design enhances not only learner satisfaction but also the acquisition of critical competencies, such as ethical literacy, adaptive reasoning, and interdisciplinary collaboration, which are essential for navigating AI-driven professional environments.
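The score above follows the standard NPS formula: the percentage of promoters minus the percentage of detractors, on a scale from −100 to 100. A minimal sketch in Python, applying that formula to the respondent counts reported for this pilot (the function name is illustrative):

```python
def net_promoter_score(promoters: int, passives: int, detractors: int) -> float:
    """Standard NPS: percentage of promoters minus percentage of detractors."""
    total = promoters + passives + detractors
    return (promoters - detractors) / total * 100

# Respondent counts reported for the Spoke 4 pilot module
nps = net_promoter_score(promoters=29, passives=6, detractors=2)
print(round(nps, 2))  # (29 - 2) / 37 * 100
```

By convention, promoters are respondents scoring 9–10, passives 7–8, and detractors 0–6 on the underlying recommendation question.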
This innovative, participatory design process not only strengthens the content quality but also fosters a sense of ownership and motivation among learners, which is vital for effective knowledge transfer and long-term impact.
4.2. International Evidence Supporting Multidisciplinary and Co-Designed Educational Pathways for the AI Era
This Spoke 4 project at Rome Technopole (n.d.) was developed within the broader context of advancing innovative technology education, with a special focus on Artificial Intelligence (AI) and its ethical applications. This initiative stands as an important example of multidisciplinary collaboration, bringing together experts from diverse domains such as computer science, engineering, ethics, healthcare, and policy. The co-design approach at the core of the project actively involves multiple stakeholders, including academic institutions, industry representatives, regulatory agencies, and community organizations, to ensure the training modules meet real-world needs and challenges. By fostering continuous dialogue and joint decision-making, the project not only enhances the technical quality of the educational content but also embeds essential ethical considerations and social responsibility. This holistic method ensures that learners are equipped with comprehensive knowledge and practical skills that reflect the complexity of AI innovation in contemporary society. Spoke 4 thus embodies a cutting-edge model for developing technology curricula that are responsive, inclusive, and future-oriented.
This study aligns strongly with the growing body of scientific literature emphasizing the importance of co-design, multi-stakeholder approaches, and interdisciplinarity in shaping future-oriented educational strategies, particularly in the context of digital innovation and Artificial Intelligence (AI) (Alam & Windiarti, 2025; Mena-Guacas et al., 2023; Khan et al., 2024; Ejjami, 2024; Vidanaralage et al., 2022; Li et al., 2024; Stephens et al., 2023; Jiang et al., 2022). As AI continues to permeate education, healthcare, and labour systems, it becomes increasingly clear that addressing these challenges requires not just technical training, but integrated, ethically informed, and context-responsive approaches.
For instance, Montt-Blanchard et al. (2023) emphasize the importance of relational co-design in tailoring educational content to diverse learner needs, while Kurucz et al. (2025) show that design thinking and developmental evaluation strengthen adaptability and innovation. Bari (2025) and Esangbedo et al. (2024) highlight how multi-stakeholder collaboration between industry and academia is essential for curricular alignment with labour demands. Yam et al. (2025) advocate for systemic co-design models embedded throughout ideation and implementation, and Kerr et al. (2022) demonstrate that embedding ethics and sociotechnical awareness from the outset reinforces multi-stakeholder responsibility in AI literacy.
The pilot initiative developed within Spoke 4 of the Rome Technopole is an example of such an approach. Conceived as a multi-stakeholder platform connecting universities, research institutes, ITS academies, and industry partners, the project was grounded in co-design and ethical foresight. The resulting modular training system on digital skills, AI, and algorethics brought together professionals from biomedical sciences, computer engineering, philosophy, education, and organizational psychology, showcasing full interdisciplinarity.
The need for co-design in educational innovation is well documented. Montt-Blanchard et al. (2023) argue that relational practices among educators, students, and external actors lead to more inclusive and responsive curricula. Kurucz et al. (2025) show that experience-based co-design strengthens trust and creativity across sectors. These methodologies were instrumental in shaping our pilot's structure and content through iterative consultation with ITS trainers, university researchers, and health professionals.
The Rome Technopole pilot explicitly aimed to anticipate the integrative thinking advocated by Yam et al. (2025), who propose an integrated co-design model for educational transformation. This systemic view was mirrored in our project by embedding interdisciplinary dialogue into all phases, from needs analysis to pedagogical design to feedback loops.
In addition, Kurucz et al. (2025) argue that co-design combined with developmental evaluation fosters adaptive curricula responsive to shifting technological and societal conditions. Drawing on this insight, the pilot modules were designed to be flexible, scalable, and context-aware.
The importance of multi-stakeholder collaboration is further evidenced in Esangbedo et al. (2024) and Bari (2025), which show how partnerships with employers and regulators help align educational offerings with fast-evolving labour demands. The Spoke 4 curriculum reflects this by incorporating sector-specific digital use cases (e.g., AI in diagnostics, data interoperability in health records, and algorithmic transparency in public services), making the training both multidisciplinary and professionally actionable.
A central dimension of our project, and a growing concern in the literature, is the integration of ethics into AI education. Kerr et al. (2022) emphasize that inclusivity and social awareness must be embedded from the outset. In line with this, the concept of algorethics in our modules fosters critical thinking around data bias, explainability, human oversight, and accountability, issues at the heart of an interdisciplinary reflection.
This ethical emphasis echoes Allen (2024), who proposes “ED-AI Lit” as a framework that combines computational literacy with civic engagement and social justice. Our curriculum's focus on AI as both a technical and socio-ethical construct reflects this orientation, encouraging learners not just to use AI, but to question and steward its applications.
The literature also points to specific advantages of interdisciplinarity. Sissodia and Dwivedi (2025) show how teams combining engineers, healthcare workers, and ethicists achieve better design outcomes in AI-enabled healthcare. Shi et al. (2025) further demonstrate that students in interdisciplinary AI programs develop stronger problem-solving and communication skills.
Finally, the risks of neglecting ethics and co-design are well illustrated in healthcare. Lastrucci et al. (2024) and Montomoli et al. (2024) argue that deploying AI in critical care without adequate safeguards or interdisciplinary input may cause unintended harms and systemic inequities. Their findings reinforce the idea that education must serve as a preventive infrastructure for responsible innovation.
Overall, this study contributes to an emerging paradigm shift: one in which AI and digital education are no longer the exclusive domain of technologists, but the shared responsibility of educators, policymakers, engineers, social scientists, and citizens. By embedding co-design, multi-stakeholder collaboration, and interdisciplinarity into the very fabric of training modules, the Spoke 4 project offers a prototype for how future educational models can—and must—respond to the complexities of an AI-driven world.
4.3. Market Expansion and the Need for Holistic, Future-Oriented Skill Development
The global TVET sector is undergoing rapid and sustained growth, reflecting the evolving needs of labor markets shaped by technological innovation, demographic shifts, and socio-economic transitions. According to recent data, the global vocational training market was valued at USD 388.1 billion in 2024 and is expected to reach USD 648.9 billion by 2030, with a CAGR of 8.9% (Research and Markets, n.d.). This significant expansion is not merely quantitative, but also qualitative, signaling a paradigmatic shift in how training is designed and delivered.
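As a quick consistency check, the 8.9% figure matches the standard compound annual growth rate formula, CAGR = (end/start)^(1/years) − 1, applied to the 2024 and 2030 valuations cited above. A minimal sketch (function and variable names are illustrative):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Global vocational training market: USD 388.1B (2024) -> USD 648.9B (2030)
growth = cagr(start_value=388.1, end_value=648.9, years=6)
print(f"{growth:.1%}")  # approximately 8.9%
```

The same formula can be used to sanity-check the regional projections discussed below, given a start value and horizon.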
Several interrelated trends are driving this transformation. First, the integration of cutting-edge technologies such as artificial intelligence (AI), virtual reality (VR), and augmented reality (AR) is redefining the educational experience, making it more interactive, personalized, and contextually relevant. Second, there is a pronounced shift from traditional degree-oriented pathways to skills-based education, particularly in sectors such as healthcare, IT, and green construction. Third, modular and stackable credentials (e.g., micro-credentials) are gaining traction, offering more flexible, responsive, and learner-centric routes to employment. Fourth, the exponential growth of the EdTech industry, projected to reach nearly USD 350 billion by 2030, underscores the rising demand for scalable, tech-mediated learning systems that can support continuous upskilling and reskilling.
These global dynamics are mirrored at regional levels. In Europe, the TVET market is projected to reach USD 431.5 million by 2030, with Germany and Switzerland setting benchmarks through dual-education systems that combine academic instruction with apprenticeships. In India, the surge in vocational training demand is especially evident in STEM disciplines, with initiatives like the Symbiosis Artificial Intelligence Institute aiming to democratize AI literacy. In Italy, the sector is expanding robustly under the framework of the Piano Nazionale di Ripresa e Resilienza (PNRR), which allocates targeted investments for digital and green skills. The Italian market alone is projected to grow at a CAGR of 10.2%, reaching USD 52.6 million by 2030 (Grand View Research, n.d.).
This economic and technological momentum calls for a renewed focus on holistic education. While technical education remains essential for developing sector-specific expertise, it must now be complemented by a broader set of transversal competencies. These include not only digital fluency, inclusiveness in technology design, and sustainability, but also soft skills such as adaptability, problem-solving, and ethical awareness. In particular, understanding concepts like cybersecurity, algorithmic fairness, and social responsibility in innovation is critical to ensuring that technological progress aligns with human values and societal wellbeing.
In this evolving context, a new educational paradigm is emerging that embraces a “double” or “multi-track” model of skill development. This approach combines specialized capacities and transversal competencies to prepare learners for complex, technology-driven environments. Specialized capacities provide deep technical knowledge tailored to specific domains, such as healthcare, commerce, logistics, or digital innovation, equipping learners with precise, operational abilities and sector-specific expertise.
At the same time, transversal competencies enable learners to operate effectively across diverse contexts and adapt to rapidly evolving workplaces. These include inclusive and accessible technologies, acknowledging the growing relevance of assistive and adaptive solutions in contemporary work environments, particularly in healthcare and service-oriented sectors. Digitalization and cybersecurity literacy are also critical, as professionals must be capable of maintaining operational integrity in interconnected, data-rich systems. Sustainability, encompassing environmental, social, and health-related considerations, is another essential area, preparing learners to engage responsibly in practices ranging from occupational health to green production and digital medicine.
Complementing these technical and systemic skills, soft skills and relational competencies, such as communication, teamwork, change management, and problem-solving, are vital for navigating high-innovation contexts and collaborative ecosystems. Finally, ethical and social awareness is integral, as professionals must cultivate a critical understanding of the broader implications of emerging technologies, including data privacy, algorithmic bias, and the societal impact of innovation. Such a comprehensive educational approach is critical to bridge the gap between technical specialization and societal needs.
It prepares learners not only to excel in their respective domains but to act as informed, responsible agents in increasingly interdisciplinary and socially conscious work environments. Ultimately, this multi-track model of education enables the workforce to engage meaningfully with the complexities of the digital age, fostering innovation that is not only competitive but also inclusive and sustainable.
4.4. Limitations
This study centers on the Italian ITS system, characterized by a distinctive governance and educational design deeply embedded in Italy’s institutional, economic, and regional landscape. While this specific context shapes the model’s unique strengths and stakeholder dynamics, it also presents an opportunity to consider how such a successful, locally rooted approach can inspire adaptation and innovation in other national and educational settings.
The evolving nature of the ITS framework highlights the project’s capacity for continuous growth and responsiveness to emerging technological and societal trends. Future iterations of the model will benefit from ongoing refinement and scaling, informed by both local experience and international comparative research.
Far from limiting the impact of this study, the Italian case provides a valuable and replicable foundation, offering rich insights for policymakers, educators, and institutions worldwide aiming to integrate transversal and multidisciplinary skills into higher technical education.
5. Future Work
Building on the successful pilot and validation of the initial AI training module within the ITS framework, the project is now positioned for strategic expansion. This next phase aims to broaden the educational offer by incorporating critical competencies that go beyond traditional technical skills to meet the increasingly complex demands of modern labor markets.
Recognizing that learners must be prepared for interdisciplinary challenges and future-oriented professions, upcoming modules will emphasize innovation, inclusivity, and sustainability. These initiatives not only enhance graduate employability but also align the ITS system with global trends such as digital transformation, environmental stewardship, and social inclusion.
Collaboration among regional stakeholders, educational institutions, and industry partners within the Rome Technopole ecosystem ensures co-creation and continuous updating of these modules, guaranteeing their relevance and impact. This dynamic approach supports ITS evolution into a flexible, resilient, and future-proof model responsive to technological progress and socio-economic shifts.
The planned modular expansion covers a comprehensive range of competencies across key areas. It addresses accessibility and assistive technologies, fostering inclusive learning environments and developing the skills needed to design technologies that support people with disabilities. Cybersecurity is emphasized to ensure learners can protect data and digital infrastructures within increasingly interconnected technical domains. The curriculum also integrates sustainability, embedding environmental and social responsibility principles to prepare students for active participation in the green economy. Additionally, it cultivates transversal soft skills, including communication, teamwork, problem-solving, and adaptability, which are essential for fostering innovation and resilience. Finally, the program strengthens digital literacy and emerging technologies, enabling learners to effectively apply AI, IoT, data analytics, and other cutting-edge tools in practical and professional contexts. Supported by Rome Technopole's innovation ecosystem, this modular approach strengthens the ITS model's capacity to respond effectively to evolving labor market demands and to drive inclusive, sustainable, and future-ready technical education.
6. Conclusions
The Rome Technopole, through its Spoke 4, which focuses on education and workforce development, has successfully implemented a pilot multidisciplinary training module on Artificial Intelligence (AI) and AI Ethics within the ITS Academy framework. This initiative responds to the urgent and growing need to embed ethical foundations in AI education, preparing learners to address complex social, legal, and technical challenges posed by AI across diverse sectors beyond healthcare.
A key strength of the project lies in its participatory co-design methodology, engaging experts, stakeholders, and learners in iterative virtual focus groups and surveys. This collaborative approach ensured that the curriculum reflects real-world needs, anticipates future trends, and fosters shared ownership and motivation among participants. The project exemplifies best practices supported by international evidence on the value of multidisciplinary collaboration and stakeholder involvement in educational innovation.
By integrating insights from computer science, ethics, healthcare, policy, and industry, the training modules provide learners with comprehensive, applicable skills and promote critical thinking on algorithmic transparency, data privacy, bias mitigation, and responsible AI use. Spoke 4's model, combining flexible, modular content with ethical foresight and continuous stakeholder dialogue, demonstrates how education can adapt responsively to rapid technological and societal changes. This approach supports the development of a future-ready workforce equipped to navigate AI-enabled transformations responsibly and inclusively.
In conclusion, the Rome Technopole's Spoke 4 initiative sets a benchmark for future-oriented, co-designed, and ethically grounded AI education, highlighting the indispensable role of multidisciplinary and participatory frameworks in preparing professionals for the challenges and opportunities of the AI era.