Article

Impact of a Contextualized AI and Entrepreneurship-Based Training Program on Teacher Learning in the Ecuadorian Amazon

by Luis Quishpe-Quishpe 1, Irene Acosta-Vargas 2,3, Lorena Rodríguez-Rojas 2, Jessica Medina-Arias 2, Daniel Antonio Coronel-Navarro 3, Roldán Torres-Gutiérrez 1 and Patricia Acosta-Vargas 4,5,*
1 Facultad de Ciencias de la Vida, Universidad Regional Amazónica Ikiam, Vía Muyuna—Alto Tena Km 7, Tena 150150, Ecuador
2 Facultad de Ciencias Socio Ambientales, Universidad Regional Amazónica Ikiam, Vía Muyuna—Alto Tena Km 7, Tena 150150, Ecuador
3 Grupo de Investigación de Población y Ambiente, Universidad Regional Amazónica Ikiam, Vía Muyuna—Alto Tena Km 7, Tena 150150, Ecuador
4 Intelligent and Interactive Systems Laboratory, Universidad de Las Américas, Quito 170125, Ecuador
5 Carrera de Ingeniería Industrial, Facultad de Ingeniería y Ciencias Aplicadas, Universidad de Las Américas, Quito 170125, Ecuador
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(19), 8850; https://doi.org/10.3390/su17198850
Submission received: 8 August 2025 / Revised: 28 September 2025 / Accepted: 30 September 2025 / Published: 3 October 2025

Abstract

The integration of emerging technologies is reshaping the teaching skills required in the 21st century, yet little evidence exists on how contextualized training supports rural teachers in adopting active methodologies and critically incorporating AI into entrepreneurship education. This study evaluated the impact of a 40-h professional development program implemented in Educational District 15D01 in the Ecuadorian Amazon. Thirty-nine secondary school teachers participated (mean age = 43.1 years); 36% lacked prior entrepreneurship training, and 44% had not recently mentored student projects. A sequential explanatory mixed-methods design was employed. The quantitative phase used a 22-item questionnaire addressing four dimensions: entrepreneurial knowledge, competencies, methodological strategies, and AI integration. Significant pre–post improvements were found (p < 0.001), with large effects for knowledge (d = 1.43), methodologies (d = 1.39), and AI integration (d = 1.30), and a moderate effect for competencies (d = 0.66). The qualitative phase analyzed 312 open-ended responses, highlighting greater openness to innovation, enhanced teacher agency, and favorable perceptions of AI as a resource for ideation, prototyping, and evaluation. Overall, the findings suggest that situated, contextually aligned training can strengthen digital equity policies, foster pedagogical innovation, and empower educators in underserved rural communities, contributing to sustainable pathways for teacher professional development.

1. Introduction

Strengthening teacher professional development in rural contexts remains a crucial challenge for 21st-century education systems, particularly in historically marginalized territories such as the Latin American Amazon [1]. Despite advances in educational coverage and equity, significant structural, technological, and pedagogical gaps persist, limiting not only teachers’ access to continuous training but also their capacity to adopt innovative, contextually relevant, and critically reflective teaching practices.
Within this landscape, generative artificial intelligence (AI) is emerging as a transformative resource in education. Tools such as ChatGPT 4.0 and DALL-E 3.0 have shown potential for supporting curriculum planning, content generation, personalized tutoring, and formative assessment [2]. However, their effective integration into educational practice depends on more than infrastructure and connectivity. It requires digital literacy, critical thinking, and ethical engagement from educators, especially in regions where digital access remains scarce or unevenly distributed [3].
In parallel, entrepreneurship education has gained attention as a strategy to foster transversal competencies, such as creativity, problem-solving, decision-making, leadership, and collaboration, that respond to the demands of the contemporary world. In this area, frameworks such as EntreComp, promoted by the European Commission, as well as active methodologies like Lean Startup and Design Thinking, have proven effective in cultivating innovation and learner agency in diverse educational settings [4]. Nevertheless, their inclusion in teacher professional development, particularly in rural regions, remains limited due to systemic barriers and the lack of targeted, adaptable training programs [5].
Although both fields, emerging technologies and entrepreneurial education, have garnered growing academic interest, their combined application in teacher training, particularly in rural contexts, remains underexplored [6]. Most existing studies have focused on urban, well-connected environments, without considering the specific characteristics of rural and culturally diverse settings. This lack of evidence limits the possibility of designing contextualized, adaptable, and scalable vocational training policies [7]. The Ecuadorian Amazon offers a valuable setting for investigating such integration. The region is characterized by high cultural and geographic diversity, infrastructure constraints, frequent teacher turnover, and limited access to training opportunities. However, teachers often show resilience and adaptability, crafting pedagogical responses tailored to their local communities despite insufficient institutional support.
In response to this reality, the present study evaluates the cognitive and attitudinal impact of a 40-h teacher training program implemented in Educational District 15D01 (Ecuadorian Amazon). The program combined the EntreComp framework, active methodologies (such as Lean Startup and Design Thinking), and generative AI tools (including ChatGPT and DALL-E) [8]. A sequential explanatory mixed-method design was employed to assess changes in teachers’ knowledge, competencies, methodological strategies, and perceptions regarding the pedagogical use of AI in entrepreneurship-focused teaching.
The central focus of this research is therefore the evaluation of the training program as an educational resource, rather than the assessment of teaching or learning processes in general. By examining both quantitative outcomes and qualitative perceptions, the study seeks to contribute empirical evidence on how integrated and contextualized training experiences can enhance digital equity, promote pedagogical innovation, and empower educators to lead transformative practices in underserved communities. This approach aligns with critical and situated perspectives in education, emphasizing the ethical, territorial, and transformative dimensions of teacher learning.

2. Materials and Methods

2.1. Research Design

This study employed a sequential explanatory mixed-methods design [4,9] to evaluate a contextualized training program as a didactic resource for teacher professional development. The design combined a quantitative quasi-experimental phase with a qualitative interpretative phase, enabling both the measurement of cognitive outcomes and the exploration of teachers’ perceptions [10]. Such integration is widely recommended in educational evaluation where interventions address complex and context-dependent processes [11,12].
The quantitative phase followed a Type A–B pretest–posttest design without a control group, focused on four dimensions: entrepreneurial knowledge, competencies, methodological strategies, and AI integration. Seminal works established this (A–B) structure as a valid alternative when randomized trials are not feasible [4,13,14]. In teacher training and rural education research, withholding potentially beneficial programs is often considered unethical [15], while logistical and cultural constraints further hinder the implementation of random assignment [16]. In these settings, pretest–posttest designs allow defensible causal inference by documenting change over time with appropriate statistical controls [17,18]. Previous educational studies confirm their suitability for evaluating professional development and training interventions [12].
To strengthen internal validity, several methodological safeguards were applied. The training was delivered by the same instructors, using a standardized curriculum, which ensured procedural consistency across all participants. Pretest and posttest data were collected within a fixed timeframe to reduce maturation and history effects, as recommended in quasi-experimental frameworks [13,18,19]. The same validated 22-item instrument was used at both times to prevent instrumentation bias, and the evaluation conditions were kept constant to minimize testing effects. In addition, the use of paired non-parametric tests (Wilcoxon) and dimensionality reduction techniques (PCA) provided statistical control and facilitated the identification of latent patterns, thereby enhancing the plausibility of causal inference in the absence of randomization [17,20]. The quasi-experimental A–B design adopted in this study aligns with ethical and operational constraints commonly encountered in rural education contexts, where random assignment is often infeasible or ethically inappropriate [15,16,21]. Furthermore, the integration of qualitative data within a sequential explanatory mixed-methods design allowed for triangulation and contextual validation, reinforcing internal validity by exploring underlying mechanisms and participant experiences [11,20].

2.2. Participants and Context

The sampling frame included all in-service teachers in District 15D01, located in the Ecuadorian Amazon, who were invited to participate in the training program. Participation was voluntary and supported by the local education authority. All 39 invited teachers consented to participate, completed both the pre- and post-test instruments, and attended the full training schedule. As a result, no participants were excluded, and the final sample (N = 39) represents the total eligible population in the district (see Figure 1). The participants came from diverse disciplinary backgrounds and had an average age of 43.1 years. Notably, 36% had not received prior training in entrepreneurship, and 44% had not mentored student projects in the past three years, indicating limited prior exposure to entrepreneurship-oriented pedagogical practices [21].
The sample size was determined by contextual and ethical considerations rather than statistical sampling logic. Given that the program targeted a specific educational community in a rural setting, a census approach was adopted, working with the entire eligible population. This strategy aligns with recommended practices for educational research involving small-scale or hard-to-reach populations [22].

2.3. Intervention Program

The 40-h training program was implemented in a blended format, combining in-person and virtual components. This modality integrated traditional face-to-face instruction with online learning, enhancing flexibility, accessibility, and personalization [22]. The program was structured into three modules (see Table 1): Module 1 addressed entrepreneurial knowledge and competencies, Module 2 focused on methodological strategies, and Module 3 developed the integration of AI in education. This structure ensured a direct correspondence between the training content and the evaluation instrument, thereby reinforcing the validity of the pre–post comparisons.

2.4. Instruments

A 22-item Likert-scale questionnaire (1 = strongly disagree, 5 = strongly agree) was used to collect quantitative data [23]. The instrument was designed to assess the program’s impact on the four dimensions mentioned in Section 2.1: knowledge of educational entrepreneurship, entrepreneurial competencies, methodological strategies, and the pedagogical use of AI, drawing on the EntreComp framework and AI pedagogical literacy indicators. The full dataset, including the complete 22 questionnaire items in textual form and the corresponding scoring and mapping keys for each analytical dimension, as well as the answers of each participant, has been deposited in the open Mendeley repository for transparency and reproducibility [24].
Each questionnaire item was assigned a short code, allowing the results to be represented in multivariate visualizations; Table 2 details these codes, the associated questions, and their analytical dimensions.
No cultural or linguistic adaptations were necessary, as the instrument was developed initially in the local language and aligned with the sociocultural and educational context of the participant group. Nevertheless, the instrument underwent content validation through expert judgment, followed by a pilot test with an equivalent sample. This process refined item wording, confirmed conceptual clarity, and verified alignment with the program’s objectives, in line with best practices in questionnaire development [25]. Internal reliability was examined using Cronbach’s alpha coefficients, which were calculated separately for the pre- and post-test measurements [26]. To complement the quantitative data, eight open-ended questions were administered as part of both the pretest and posttest. The pretest items invited teachers to reflect on their preparation, challenges, and expectations regarding AI and entrepreneurship. The posttest items explored participants’ perceptions of the training, their preferred uses of AI, perceived barriers, the supportive roles of AI, and their preferred AI applications. Across the 39 participants, this yielded 312 qualitative response units, which were analyzed thematically to enrich the interpretation of quantitative findings and to strengthen methodological triangulation [27].

2.5. Procedure

The study was carried out in three consecutive phases:
  • Pretest: The 22-item questionnaire was administered to the 39 participating teachers to establish a baseline. Additionally, three open-ended diagnostic questions invited reflections on preparation, challenges, and expectations related to entrepreneurship and AI.
  • Implementation of the training program: The three structured modules were developed (see Section 2.3).
  • Post-test: The questionnaire was administered again under equivalent conditions. Five additional open-ended questions were included, focusing on teachers’ perceptions of the training, their preferred uses of AI, perceived barriers, the supportive roles of AI, and their preferred applications of AI tools.
In total, this procedure generated paired quantitative data and 312 codable qualitative responses (eight per teacher: three pretest, five posttest). This design enabled both comparative pre–post analysis and qualitative insights aligned with the training process, thereby strengthening the internal validity and interpretative depth of the findings [25,28].

2.6. Data Analysis

The quantitative analysis was carried out using R software (version 4.2.1). Reliability was assessed with Cronbach’s α for each dimension. Since normality was not met (Shapiro–Wilk test), Wilcoxon signed-rank tests were applied for paired comparisons. The assumption of symmetry in paired differences was assessed via visual inspection of the distributions, which showed approximate symmetry across all four dimensions, thereby supporting the use of the Wilcoxon signed-rank test. To control for Type I error inflation arising from multiple comparisons, Holm–Bonferroni corrections were applied to the p-values; all results remained significant after adjustment. Effect sizes were calculated using Cohen’s d, interpreted according to conventional thresholds (0.2 = small, 0.5 = moderate, 0.8 = large) [29]. To explore latent patterns in responses, a principal component analysis (PCA) was conducted.
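For transparency, the core of this pipeline can be expressed in a few lines of R. The sketch below is illustrative rather than the authors' actual script: the data frame `scores` and its columns (per-teacher dimension means before and after the program) are assumed names, and the paired-sample formulation of Cohen's d (mean difference divided by the standard deviation of the differences) is one common convention, as the paper does not specify the variant used.

```r
# Illustrative sketch of the quantitative pipeline described above (assumed
# object names, not the authors' script). `scores` holds one row per teacher
# and dimension, with columns: teacher, dimension, pre, post.
library(dplyr)

results <- scores %>%
  group_by(dimension) %>%
  summarise(
    shapiro_p  = shapiro.test(post - pre)$p.value,               # normality check on paired differences
    wilcoxon_p = wilcox.test(post, pre, paired = TRUE)$p.value,  # Wilcoxon signed-rank test
    d_paired   = mean(post - pre) / sd(post - pre),              # one paired-sample variant of Cohen's d
    .groups = "drop"
  ) %>%
  mutate(p_adj = p.adjust(wilcoxon_p, method = "holm"))          # Holm correction across the four tests
```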
The qualitative dataset, comprising 312 responses, along with the codebook, has been deposited in the open Mendeley repository for transparency and reproducibility [24]. The dataset was analyzed using an inductive thematic approach [30], supported by MAXQDA 2022. Coding was performed independently by two researchers, with consensus reached in cases of discrepancy. Thematic analysis yielded four analytical panels: (A) perceived barriers, (B) preferred strategies, (C) perception of the role of AI, and (D) projected pedagogical applications. To ensure transparency and replicability, a structured codebook with operational definitions, inclusion/exclusion criteria, and anonymized teacher examples was developed. To assess reliability, a random 25% subset of responses (n = 78) was independently double-coded. Inter-coder agreement was almost perfect (Cohen’s κ = 0.83, 95% CI [0.78–0.87], p < 0.001), confirming the consistency of the coding scheme. To complement coding, exploratory text-mining techniques in R (tidytext, dplyr) were applied to visualize word frequencies and co-occurrences. This triangulation enhanced the robustness of the findings by linking statistical outcomes with interpretative insights [31].
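As a complementary illustration, the reliability and text-mining steps can be sketched in R as follows; `codes` and `responses` are assumed object names, the Spanish stopword list is an assumption based on the locally developed instrument, and the thematic coding itself was performed in MAXQDA rather than R.

```r
# Hedged sketch of the inter-coder reliability check and exploratory text
# mining; object and column names are assumptions, not the authors' code.
library(irr)       # kappa2() computes Cohen's kappa
library(tidytext)  # unnest_tokens(), get_stopwords()
library(dplyr)

# Agreement on the double-coded 25% subset (n = 78); `codes` holds the two
# coders' category labels in columns coder1 and coder2
kappa2(codes[, c("coder1", "coder2")])

# Word frequencies across the 312 response units; `responses` has one row
# per unit with the raw text in a `text` column
word_freq <- responses %>%
  unnest_tokens(word, text) %>%
  anti_join(get_stopwords("es"), by = "word") %>%  # Spanish stopwords assumed
  count(word, sort = TRUE)
```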

3. Results and Discussion

3.1. Validation of the Quantitative Data Collection Instrument

The internal consistency of the questionnaire was confirmed through Cronbach’s alpha coefficients, which indicated excellent reliability across all dimensions both before and after the intervention (see Table 3). The most notable improvement was observed in the Use of AI in Education dimension, which rose from an acceptable to an excellent level, suggesting that participants developed greater conceptual clarity after completing the program. In contrast, the posttest coefficient for Entrepreneurial Competencies exceeded 0.95, which may signal redundancy among some items [32]. This issue has been highlighted in previous validation studies, where overly high coefficients suggest the need to refine indicators to avoid conceptual overlap and preserve instrument parsimony [33]. Future applications of this instrument should therefore consider reducing items or reformulating them to maintain both reliability and discriminant validity.
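To make the reliability computation concrete, per-dimension coefficients such as those in Table 3 can be reproduced with the psych package; the sketch below uses the competence item codes from Table 2, while `post_items` (the 39 × 22 post-test response matrix with columns named by those codes) is an assumed name.

```r
# Hedged sketch of the per-dimension reliability check behind Table 3.
# `post_items` is an assumed name for the post-test matrix (39 teachers x
# 22 items), with columns named by the short codes listed in Table 2.
library(psych)

competence_cols <- c("CID", "CGU", "CSP", "CPR", "CFA", "CWO", "CAU")
alpha_comp <- psych::alpha(post_items[, competence_cols])
alpha_comp$total$raw_alpha  # Table 3 reports 0.962 for this dimension post-test
```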

3.2. Quantitative Effects Analysis

The results demonstrate statistically significant improvements across all four dimensions and their respective items, confirming the effectiveness of the training program as a didactic resource (see Figure 2). The radar chart reveals a substantial increase in mean scores following the intervention, with the most notable gains observed in items such as LMA (use of AI for market analysis), LDE (use of AI for prototyping), and CAU (fostering student autonomy). These results indicate that participants not only expanded their knowledge but also strengthened their confidence in applying innovative tools within their teaching practice, consistent with prior findings on active and experiential methodologies in entrepreneurship education [34].
Additionally, Figure 3 shows comparative boxplots for each of the four global dimensions evaluated. In all cases, the post-test median shifts toward higher values, with a reduction in dispersion. This pattern indicates a more homogeneous appropriation of the content by the participating teachers, suggesting that the combined in-person and online approach was feasible even in a rural setting with limited connectivity [35], and aligns with evidence that digital equity requires not only access but also contextualized pedagogical integration [34].
According to the results presented in Table 4, the mean scores increased significantly across all four dimensions, confirming the program’s effectiveness as a didactic intervention. The most pronounced difference was observed in Entrepreneurial Competencies, with an increase of more than one point (from 2.86 to 3.91) and a large effect size (d = 0.87). This result suggests that the program not only strengthened participants’ theoretical knowledge but also facilitated the development of transferable skills applicable to school-based projects. Comparable outcomes have been reported in studies emphasizing the value of experiential and situated approaches in fostering entrepreneurial thinking and practical competencies among teachers [36].
This finding resonates with situated learning perspectives, which stress that authentic, practice-oriented experiences enable teachers to integrate entrepreneurial competencies more effectively into their classroom practices [35]. It also aligns with research indicating that teacher professional development programs grounded in active and experiential methodologies contribute to greater teacher agency and more sustainable pedagogical change [37].
The increase in the dimension of methodological strategies for AI and entrepreneurship (from 2.73 to 3.76, d = 0.76) suggests that teachers actively adopted approaches such as Design Thinking and Lean Startup, supported by digital tools like empathy maps and the Project Canvas. These findings are consistent with prior research indicating that agile, project-based methodologies, combined with emerging technologies, can enhance teacher autonomy and foster learning grounded in authentic challenges [34]. Similarly, significant improvements were observed in entrepreneurial knowledge (d = 0.75), reinforcing evidence that the EntreComp framework facilitates the translation of abstract entrepreneurial constructs into concrete classroom practices [22]. This suggests that even short, contextually oriented programs can bridge the gap between policy-driven competence models and practical pedagogical applications.
The dimension of pedagogical use of AI exhibited the most substantial relative increase, from 2.41 to 3.59, with a moderate effect (d = 0.64). This improvement is particularly noteworthy given participants’ limited prior familiarity with AI-based tools. However, effective adoption of AI in education requires more than technical proficiency; it demands pedagogical literacy that integrates critical thinking, ethical awareness, and educational acumen [38]. In this study, the combination of situated training and direct experimentation with ChatGPT and DALL·E [39] supported this initial transition, demonstrating the potential of structured, contextualized professional development to introduce AI into rural teaching practices.
In line with broader international debates, the observed transformations can also be interpreted through technology integration frameworks. From the perspective of the SAMR model [40], teachers' reported uses of generative AI tools progressed from augmentation (e.g., using ChatGPT to support idea generation or formative feedback) toward modification and even redefinition (e.g., applying DALL·E to prototype solutions in interdisciplinary projects). This trajectory illustrates how the training went beyond substitutional practices to promote genuinely transformative applications of AI in education. Likewise, the findings align with the TPACK framework, which stresses the intersection of technological, pedagogical, and content knowledge [41]. By coupling entrepreneurial content with active methodologies and emerging digital tools, the program fostered not only technical competence but also the pedagogical literacy required for meaningful and ethical AI integration.

3.3. Multivariate Analysis: Teacher Profiles and Latent Patterns

3.3.1. Emerging Teacher Profiles (K-Means Clustering)

To complement the pre–post analysis, multivariate techniques were applied to identify structural patterns in the post-test responses and characterize emerging teacher profiles. As shown in Figure 4, cluster analysis using the K-means algorithm identified two distinct teacher profiles in the principal component space constructed from the 22 post-test variables. The clustering suggests the existence of distinct trajectories of content appropriation during the training.
  • Group 1 (green circle) is composed of teachers who demonstrate greater development in entrepreneurial skills and pedagogical strategies, but are only just beginning to utilize AI tools.
  • Group 2 (orange triangle) exhibits a greater affinity with the technological dimension, particularly in the use of generative AI for tutoring, market analysis, and prototype design.
Group 1 may represent teachers with prior methodological experience who require greater support in digital literacy, whereas Group 2 exhibits a technological innovation profile with the potential to drive change processes within their institutions.
These findings align with studies showing that teachers’ adoption of innovations depends on prior experience, resource availability, and attitudes toward change, which generate heterogeneous appropriation patterns [34]. Furthermore, differentiated professional learning opportunities have been recommended in literature as a way to accommodate diverse teacher trajectories in similar educational contexts [28].
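A minimal reproduction sketch of this profiling step follows, reusing the assumed `post_items` matrix from above; running K-means on the first two standardized principal components with k = 2 is our reading of the procedure, since the paper specifies the principal component space but not the exact number of components retained for clustering.

```r
# Sketch of the profiling step: PCA on the 22 post-test items, then K-means
# with k = 2 on the first two components (our assumption about the retained
# space). `post_items` is the assumed post-test matrix used earlier.
pca <- prcomp(post_items, center = TRUE, scale. = TRUE)

set.seed(123)  # K-means results depend on random starts; nstart adds stability
km <- kmeans(pca$x[, 1:2], centers = 2, nstart = 25)

# Scores plus cluster labels, ready for plotting the two teacher profiles
profiles <- data.frame(pca$x[, 1:2], profile = factor(km$cluster))
```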

3.3.2. Extreme Cases: Atypical Trajectories

According to Figure 5, six atypical cases or outliers were identified based on Euclidean distance from the multivariate centroid. These teachers’ responses deviated significantly from the general pattern, reflecting both low and high levels of appropriation.
  • Low appropriation cases (T17, T18, T32): located in positions of overall underperformance, possibly linked to previous gaps in continuing education, limited digital literacy, or resistance to change.
  • High appropriation cases (T3, T19, T37): stand out for high scores in AI integration, active strategies, and project leadership.
From a pedagogical perspective, these extreme profiles provide valuable insights for program design. Teachers with lower appropriation may benefit from personalized tutoring, scaffolding, or reinforcement materials, while those with higher appropriation could be mobilized as distributed leaders and peer mentors. Such peer-learning and mentoring practices have been shown to strengthen situated teacher professional development in resource-constrained settings [42].
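The outlier screening described above can be sketched as follows, reusing the `pca` object from the clustering sketch; selecting the six most distant cases mirrors the six reported outliers, since the paper does not state an explicit distance cutoff.

```r
# Sketch of the outlier detection: Euclidean distance of each teacher from
# the centroid of the retained principal-component space (reusing `pca`).
pc_scores <- pca$x[, 1:2]
centroid  <- colMeans(pc_scores)  # approximately zero by construction for PCA scores

d_centroid  <- sqrt(rowSums(sweep(pc_scores, 2, centroid)^2))
outlier_idx <- order(d_centroid, decreasing = TRUE)[1:6]  # six most atypical cases
```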

3.3.3. Factorial Structure of the Instrument (Item PCA)

The results indicate that the first two components explain 74% of the total variance (PC1 = 64.25%, PC2 = 9.69%), as shown in Figure 6. The distribution of items is consistent with their theoretical dimension:
  • Competencies (blue) and entrepreneurial knowledge (green) items tend to align with PC1 and PC2 in a complementary manner.
  • AI items (red) load strongly on PC1, reflecting their conceptual independence from traditional pedagogical dimensions.
  • Methodological strategies (purple) appear more dispersed but still maintain internal coherence, indicating their multidimensional nature.
This factorial structure empirically validates the conceptual organization of the instrument and confirms that the axes used to segment teacher profiles are supported by consistent dimensions. Notably, the grouping of AI items highlights the emergence of AI literacy as a distinct pedagogical dimension, complementary yet independent from entrepreneurial and methodological constructs.
From a methodological perspective, this validation strengthens the robustness of the multivariate analysis. By showing that the instrument measures distinct but complementary constructs, it reinforces the legitimacy of the teacher profiles identified through PCA and K-means. This aligns with recommendations in the literature, which emphasize the combined use of factorial and clustering techniques to interpret multidimensional instruments in educational research [20].
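For completeness, the variance proportions and item loadings underlying Figure 6 can be extracted from the same PCA object; tagging items by the first letter of their Table 2 codes is our shortcut for recovering the four theoretical dimensions.

```r
# Sketch of the item-level summary behind Figure 6, reusing `pca` from above.
# The reported solution corresponds to roughly PC1 = 0.6425, PC2 = 0.0969.
summary(pca)$importance["Proportion of Variance", 1:2]

# Item loadings on the first two components, tagged by code prefix
# (K = knowledge, C = competence, S = strategies, L = AI integration)
loadings <- as.data.frame(pca$rotation[, 1:2])
loadings$dimension <- substr(rownames(loadings), 1, 1)
```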

3.4. Qualitative Analysis: Perceptions, Barriers, and Appropriations

The analysis of the 312 qualitative response units revealed four central panels: (A) perceived barriers, (B) preferred strategies, (C) perception of the role of AI, and (D) projected pedagogical applications (see Figure 7). These categories, coded according to the structured codebook deposited in the open Mendeley repository [24], provide a situated perspective that complements the quantitative findings by illustrating how teachers adapted the program content in relation to their contextual realities [43].
From these results, teachers consistently identified structural barriers, such as limited connectivity, scarce institutional support, and insufficient prior training, as obstacles to appropriation. These challenges align with the literature on digital equity, which posits that infrastructural and institutional constraints are central mediators of innovation in rural education [43]. As one participant noted,
“Lack of internet, lack of access to technological devices and connectivity in students”.
In contrast, participants highlighted active, student-centered strategies, including project canvases, empathy maps, and the EntreComp framework, as particularly valuable for designing meaningful classroom activities. This aligns with research on situated and experiential learning, which emphasizes the role of authentic tasks and reflective structures in professional learning [44]. One teacher summarized it this way:
“I would like to apply to the Simulation of an Entrepreneurship with Lean Startup”.
Regarding AI integration, teachers expressed favorable attitudes toward generative tools for idea generation, formative feedback, and project mentoring, while also noting ethical concerns and the risk of overreliance. As one participant commented,
“Well, the use of AIs is a great help, and adapting to them is a great help for tutoring”.
These findings align with calls in the literature for AI literacy that extends beyond technical skills to encompass ethical, critical, and pedagogical dimensions [38]. Finally, the proposed applications demonstrate a creative appropriation of the tools, ranging from interdisciplinary project design to the use of DALL·E for visual prototyping and ChatGPT for classroom ideation. Such proposals highlight teachers’ agency in contextualizing AI for low-resource settings, consistent with recommendations for empowering educators as co-designers of innovation [45]. As one participant stated,
“AI helps to a large extent to generate innovative ideas and correctly use resources and tools”.

4. Conclusions

This study demonstrates that a short training program focused on entrepreneurship, active methodologies, and generative AI can generate measurable pedagogical and cognitive gains in rural contexts. Statistically significant improvements were observed across all four dimensions: Entrepreneurial Competencies (d = 0.87), Entrepreneurial Knowledge (d = 0.75), Teaching Strategies (d = 0.76), and Use of AI in Education (d = 0.64). These findings confirm the value of situated, practice-based approaches adapted to territorial needs.
Multivariate and qualitative analyses revealed differentiated appropriation patterns, shaped by prior training, digital access, and professional identity. Teachers reported greater openness to innovation and a stronger sense of pedagogical agency, even in the presence of structural barriers. Importantly, AI was not conceived as an end in itself, but rather as a pedagogical catalyst embedded in ethical and contextual reflection, thereby reinforcing the centrality of teacher agency in advancing digital inclusion.
Despite certain limitations, such as the absence of a control group, the narrow focus on a single district, and the exploratory nature of the qualitative analysis, the study provides meaningful evidence of the program's effectiveness. While the qualitative analysis yielded valuable insights, it warrants deeper examination in future research to capture richer narrative patterns and contextual nuances. Further studies should also assess long-term sustainability, explore impacts on student learning, and test scalable strategies such as peer mentoring and localized support systems.

Author Contributions

Conceptualization, L.Q.-Q. and P.A.-V.; methodology, L.Q.-Q. and P.A.-V.; software, L.Q.-Q.; validation, L.Q.-Q. and P.A.-V.; formal analysis, L.Q.-Q. and P.A.-V.; investigation, L.Q.-Q., I.A.-V., L.R.-R., J.M.-A., D.A.C.-N., R.T.-G. and P.A.-V.; resources, L.Q.-Q. and P.A.-V.; data curation, L.Q.-Q. and P.A.-V.; writing—original draft preparation, L.Q.-Q., I.A.-V., L.R.-R., J.M.-A., D.A.C.-N., R.T.-G. and P.A.-V.; writing—review and editing, L.Q.-Q., I.A.-V., L.R.-R., J.M.-A., D.A.C.-N., R.T.-G. and P.A.-V.; visualization, L.Q.-Q. and P.A.-V.; supervision, L.Q.-Q. and P.A.-V.; project administration, L.Q.-Q. and P.A.-V.; funding acquisition, P.A.-V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was jointly funded by the Universidad de las Américas, Ecuador (internal project 518.A.XV.24), and the JILLAY Project (POA-011-2023), supported by Universidad Regional Amazónica Ikiam.

Institutional Review Board Statement

The study was evaluated and certified as ethically compliant by the Comité de Ética en Investigación e Innovación de la Universidad Regional Amazónica IKIAM (protocol CEII-IKIAM-INF-2025-001, approved 10 March 2025). The committee classified the project as a low-risk educational intervention, with adequate safeguards in place for confidentiality, informed consent, and the ethical use of data.

Informed Consent Statement

Informed consent was obtained from all participants before the commencement of data collection. Consent was documented digitally through an introductory statement in the questionnaires, where participants confirmed their voluntary participation, awareness of the study objectives, and acceptance of anonymized use of their responses for academic purposes. All data were anonymized (i.e., identifiers were removed) and securely stored on the institutional server, accessible only to the research team.

Data Availability Statement

The data used in this study have been deposited in the open Mendeley repository, where they are available for review and reproduction [24].

Acknowledgments

The authors are grateful for the support provided by the JILLAY Project, “Assessment of Climate Risks and Vulnerabilities in Indigenous Communities of the RBCC through Analysis of Green, Blue, and Socioeconomic Infrastructure” (Code POA-011-2023), which provided the operational and logistical framework for the development of this research. Special recognition is also extended to Education District 15D01 for facilitating access to participating teachers and supporting the implementation of the training program on the ground.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Montoya, G.; Valencia, L.; Vargas, L.; García, J.; Franco, J.; Calderón, H. Ruralidad, educación rural e identidad profesional de maestras y maestros rurales. Prax. Saber 2022, 13, e13323. [Google Scholar] [CrossRef]
  2. Guárdia Ortiz, L.; Bekerman, Z.; Zapata Ros, M. Presentación del número especial “IA generativa, ChatGPT y Educación. Consecuencias para el Aprendizaje Inteligente y la Evaluación Educativa.”. Rev. Educ. Distancia 2024, 5, 12–44. [Google Scholar] [CrossRef]
  3. Carranza Alcántar, M. del R. Use of Active Methodologies with ICT in Teacher Training. Salud Cienc. Tecnol.-Ser. Conf. 2024, 3, 702. [Google Scholar] [CrossRef]
  4. Bardales-Cárdenas, M.; Cervantes-Ramón, E.F.; Gonzales-Figueroa, I.K.; Farro-Ruiz, L.M. Entrepreneurship skills in university students to improve local economic development. J. Innov. Entrep. 2024, 13, 55. [Google Scholar] [CrossRef]
  5. Ávalos, C.; Pérez-Escoda, A.; Monge, L. Lean Startup as a Learning Methodology for Developing Digital and Research Competencies. J. New Approaches Educ. Res. 2019, 8, 227–242. [Google Scholar] [CrossRef]
  6. Ibrahim, U. Integration of Emerging Technologies in Teacher Education for Global Competitiveness. Int. J. Educ. Life Sci. 2024, 2, 127–138. [Google Scholar] [CrossRef]
  7. Sandirasegarane, S.; Sutermaster, S.; Gill, A.; Volz, J.; Mehta, K. Context-Driven Entrepreneurial Education in Vocational Schools. Int. J. Res. Vocat. Educ. Train. 2016, 3, 106–126. [Google Scholar] [CrossRef]
  8. Uchima-Marin, C.; Murillo, J.; Salvador-Acosta, L.; Acosta-Vargas, P. Integration of Technological Tools in Teaching Statistics: Innovations in Educational Technology for Sustainable Education. Sustainability 2024, 16, 8344. [Google Scholar] [CrossRef]
  9. Fissore, C.; Floris, F.; Conte, M.M.; Sacchet, M. Teacher training on artificial intelligence in education. In Proceedings of the Smart Learning Environments in the Post Pandemic Era: Selected Papers from the CELDA 2022 Conference, Lisbon, Portugal, 8–10 November 2022; Springer: Berlin/Heidelberg, Germany, 2024; pp. 227–244. [Google Scholar]
  10. Bell, R.; Bell, H. Entrepreneurship education in the era of generative artificial intelligence. Entrep. Educ. 2023, 6, 229–244. [Google Scholar] [CrossRef]
  11. Wipulanusat, W.; Panuwatwanich, K.; Stewart, R.A.; Sunkpho, J. Applying Mixed Methods Sequential Explanatory Design to Innovation Management. In Proceedings of the 10th International Conference on Engineering, Project, and Production Management, Berlin, Germany, 2–4 September 2019; Panuwatwanich, K., Ko, C.-H., Eds.; Springer: Singapore, 2020; pp. 485–495. [Google Scholar]
  12. Prashar, A.; Gupta, P.; Dwivedi, Y.K. Plagiarism awareness efforts, students’ ethical judgment and behaviors: A longitudinal experiment study on ethical nuances of plagiarism in higher education. Stud. High. Educ. 2023, 49, 929–955. [Google Scholar] [CrossRef]
  13. Campbell, D.T.; Stanley, J.C. Experimental and Quasi-Experimental Designs for Research. In Handbook of Research on Teaching; Houghton Mifflin Company: Boston, MA, USA, 1963. [Google Scholar]
  14. Kefalis, C.; Skordoulis, C.; Drigas, A. Digital Simulations in STEM Education: Insights from Recent Empirical Studies, a Systematic Review. Encyclopedia 2025, 5, 10. [Google Scholar] [CrossRef]
  15. Reid, J.-A. The Politics of Ethics in Rural Social Research: A Cautionary Tale. In Ruraling Education Research: Connections Between Rurality and the Disciplines of Educational Research; Roberts, P., Fuqua, M., Eds.; Springer: Singapore, 2021; pp. 247–263. ISBN 978-981-16-0131-6. [Google Scholar]
  16. Troyer, M. The gold standard for whom? Schools’ experiences participating in a randomised controlled trial. J. Res. Read. 2022, 45, 406–424. [Google Scholar] [CrossRef]
  17. Kim, H.; Clasing-Manquian, P. Quasi-Experimental Methods: Principles and Application in Higher Education Research. In Theory and Method in Higher Education Research; Emerald Publishing: Leeds, UK, 2023; Volume 9. [Google Scholar] [CrossRef]
  18. Denny, M.; Denieffe, S.; O’Sullivan, K. Non-equivalent Control Group Pretest–Posttest Design in Social and Behavioral Research. In The Cambridge Handbook of Research Methods and Statistics for the Social and Behavioral Sciences: Volume 1: Building a Program of Research; Nichols, A.L., Edlund, J., Eds.; Cambridge Handbooks in Psychology; Cambridge University Press: Cambridge, UK, 2023; pp. 314–332. ISBN 9781316518526. [Google Scholar]
  19. Slocum, T.A.; Pinkelman, S.E.; Joslyn, P.R.; Nichols, B. Threats to Internal Validity in Multiple-Baseline Design Variations. Perspect. Behav. Sci. 2022, 45, 619–638. [Google Scholar] [CrossRef]
  20. De Santis, A.; Sannicandro, K.; Bellini, C.; Minerva, T. Trends in the use of Multivariate Analysis in Educational Research: A review of methods and applications in 2018–2022. J. E-Learn. Knowl. Soc. 2024, 20, 47–55. [Google Scholar] [CrossRef]
  21. Downes, N.; Marsh, J.; Roberts, P.; Reid, J.-A.; Fuqua, M.; Guenther, J. Valuing the Rural: Using an Ethical Lens to Explore the Impact of Defining, Doing and Disseminating Rural Education Research. In Ruraling Education Research: Connections Between Rurality and the Disciplines of Educational Research; Roberts, P., Fuqua, M., Eds.; Springer: Singapore, 2021; pp. 265–285. ISBN 978-981-16-0131-6. [Google Scholar]
  22. Seikkula-Leino, J.; Salomaa, M.; Jónsdóttir, S.R.; McCallum, E.; Israel, H. EU Policies Driving Entrepreneurial Competences—Reflections from the Case of EntreComp. Sustainability 2021, 13, 8178. [Google Scholar] [CrossRef]
  23. Kusmaryono, I.; Wijayanti, D.; Risqi, H. Number of Response Options, Reliability, Validity, and Potential Bias in the Use of the Likert Scale in Education and Social Science Research: A Literature Review. Int. J. Educ. Methodol. 2022, 8, 625–637. [Google Scholar] [CrossRef]
  24. Quishpe-Quishpe, L.; Acosta-Vargas, I.; Rodríguez-Rojas, L.; Medina-Arias, J.; Coronel-Navarro, D.; Torres-Gutiérrez, R. Dataset AI and Entrepreneurship in Amazon Teacher Training 2025. Mendeley Data 2025, V1. Available online: https://data.mendeley.com/datasets/9rmdx6z785/2 (accessed on 7 August 2025).
  25. Gehlbach, H.; Brinkworth, M.E. Measure Twice, Cut down Error: A Process for Enhancing the Validity of Survey Scales. Rev. Gen. Psychol. 2011, 15, 380–387. [Google Scholar] [CrossRef]
  26. Wang, D.; Dong, X.; Zhong, J. Enhance College AI Course Learning Experience with Constructivism-Based Blog Assignments. Educ. Sci. 2025, 15, 217. [Google Scholar] [CrossRef]
  27. Teemant, A.; Sherman, B.; Dagli, C. A Mixed Method Study of a Critical Sociocultural Coaching Intervention. In Handbook of Critical Coaching Research; Routledge: London, UK, 2024; pp. 145–162. [Google Scholar]
  28. Abulibdeh, A. A systematic and bibliometric review of artificial intelligence in sustainable education: Current trends and future research directions. Sustain. Futures 2025, 10, 101033. [Google Scholar] [CrossRef]
  29. Ulferts, H.; Wolf, K.M.; Anders, Y. Impact of Process Quality in Early Childhood Education and Care on Academic Outcomes: Longitudinal Meta-Analysis. Child. Dev. 2019, 90, 1474–1489. [Google Scholar] [CrossRef] [PubMed]
  30. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
  31. Borges, E.M. Hypothesis Tests and Exploratory Analysis Using R Commander and Factoshiny. J. Chem. Educ. 2023, 100, 267–278. [Google Scholar] [CrossRef]
  32. Hussey, I.; Alsalti, T.; Bosco, F.; Elson, M.; Arslan, R. An aberrant abundance of Cronbach's alpha values at .70. Adv. Methods Pract. Psychol. Sci. 2025, 8, 25152459241287124. [Google Scholar] [CrossRef]
  33. Govindasamy, P.; Cumming, T.M.; Abdullah, N. Validity and reliability of a needs analysis questionnaire for the development of a creativity module. J. Res. Spec. Educ. Needs 2024, 24, 637–652. [Google Scholar] [CrossRef]
  34. Collin, S.; Brotcorne, P. Capturing digital (in)equity in teaching and learning: A sociocritical approach. Int. J. Inf. Learn. Technol. 2019, 36, 169–180. [Google Scholar] [CrossRef]
  35. Segal, A. Rethinking Collective Reflection in Teacher Professional Development. J. Teach. Educ. 2023, 75, 155–167. [Google Scholar] [CrossRef]
  36. Rodrigues, A.L. Entrepreneurship Education Pedagogical Approaches in Higher Education. Educ. Sci. 2023, 13, 940. [Google Scholar] [CrossRef]
  37. Mohammad Nezhad, P.; Stolz, S.A. Unveiling teachers’ professional agency and decision-making in professional learning: The illusion of choice. Prof. Dev. Educ. 2024, 1–21. [Google Scholar] [CrossRef]
  38. Franco D’Souza, R.; Mathew, M.; Mishra, V.; Surapaneni, K.M. Twelve tips for addressing ethical concerns in the implementation of artificial intelligence in medical education. Med. Educ. Online 2024, 29, 2330250. [Google Scholar] [CrossRef] [PubMed]
  39. Ruiz-Rojas, L.I.; Salvador-Ullauri, L.; Acosta-Vargas, P. Collaborative Working and Critical Thinking: Adoption of Generative Artificial Intelligence Tools in Higher Education. Sustainability 2024, 16, 5367. [Google Scholar] [CrossRef]
  40. Cáceres-Nakiche, K.; Carcausto-Calla, W.; Yabar Arrieta, S.R.; Lino Tupiño, R.M. The SAMR Model in Education Classrooms: Effects on Teaching Practice, Facilities, and Challenges. J. High. Educ. Theory Pract. 2024, 24, 160–172. [Google Scholar] [CrossRef]
  41. Mishra, P.; Koehler, M.J. Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge. Teach. Coll. Rec. 2006, 108, 1017–1054. [Google Scholar] [CrossRef]
  42. Petko, D.; Mishra, P.; Koehler, M.J. TPACK in context: An updated model. Comput. Educ. Open 2025, 8, 100244. [Google Scholar] [CrossRef]
  43. Alkan, A. Artificial Intelligence: Its Role and Potential in Education. İnsan Ve Toplum. Bilim. Araştırmaları Derg. 2024, 13, 483–497. [Google Scholar] [CrossRef]
  44. Korthagen, F.A.J. Situated learning theory and the pedagogy of teacher education: Towards an integrative view of teacher behavior and teacher learning. Teach. Teach. Educ. 2010, 26, 98–106. [Google Scholar] [CrossRef]
  45. Gupta, N.; Khatri, K.; Malik, Y.; Lakhani, A.; Kanwal, A.; Aggarwal, S.; Dahuja, A. Exploring prospects, hurdles, and road ahead for generative artificial intelligence in orthopedic education and training. BMC Med. Educ. 2024, 24, 1544. [Google Scholar] [CrossRef]
Figure 1. Sampling frame and response rate.
Figure 2. Radar chart of pre- and post-intervention averages across all questionnaire items.
Figure 3. Boxplot comparisons of pre- and post-intervention scores by pedagogical dimension.
Figure 4. Teacher profiles identified through K-means clustering based on post-intervention data.
Figure 5. Detection of outlier teacher profiles based on PCA space.
Figure 6. PCA biplot of post-intervention items by pedagogical dimension.
Figure 7. Thematic panels from open-ended teacher responses (N = 312 coded units): (A) perceived barriers, (B) preferred strategies, (C) perception of the role of AI, and (D) projected pedagogical applications.
Table 1. Structure of the teacher training program.

Module 1. Entrepreneurial Competencies (2 sessions, 12 h)
Learning objectives: Introduce the EntreComp framework to promote an entrepreneurial mindset, leadership, and reflective practice.
Pedagogical strategies and tasks: Self-assessment activities; case analysis; structured debates on innovation challenges.
Digital/material resources: EntreCompEdu platform; curated case studies; reflection guides.
Assessment criteria: Completion of self-assessment matrix; quality of contributions in debates; reflective notes.

Module 2. Active Methodologies (2 sessions, 14 h)
Learning objectives: Apply Lean Startup and Design Thinking in school contexts; strengthen project-based teaching competencies.
Pedagogical strategies and tasks: Project Canvas design workshop; empathy-mapping exercise; collaborative problem-solving tasks.
Digital/material resources: Project Canvas template; empathy maps; PBL scenarios.
Assessment criteria: Submission of project drafts, peer feedback reports, and instructor evaluation of methodological alignment.

Module 3. AI in Education (3 sessions, 14 h)
Learning objectives: Develop pedagogical use of generative AI (ChatGPT, DALL·E) for ideation, prototyping, and formative feedback.
Pedagogical strategies and tasks: AI-assisted lesson design; guided creation of prototypes; role-play simulations for formative assessment.
Digital/material resources: Generative AI platforms (ChatGPT, DALL·E); guided assignments; evaluation rubrics.
Assessment criteria: Presentation of AI-supported lesson prototype; reflective journal on ethical/critical use of AI.
Table 2. Short codes, questions, and associated dimensions for the 22-item pre–post questionnaire.

Short Code | Item | Dimension
KET | What is your level of knowledge about key entrepreneurship concepts? | Knowledge
KDT | Knowledge about Design Thinking as a methodology for innovation projects? | Knowledge
KLS | Knowledge about Lean Startup and its application in student projects? | Knowledge
KFF | Knowledge about funding sources for innovative projects? | Knowledge
KLA | Knowledge about active learning methodologies for entrepreneurship (e.g., PBL)? | Knowledge
KEP | Knowledge about structuring a compelling Elevator Pitch? | Knowledge
CID | Competence in identifying innovation opportunities in your educational context? | Competence
CGU | Competence in guiding students in generating innovative ideas? | Competence
CSP | Competence in helping students mobilize resources for their projects? | Competence
CPR | Competence in fostering perseverance and motivation in work teams? | Competence
CFA | Competence in facilitating the management of school entrepreneurship projects? | Competence
CWO | Competence in working with other teachers or external actors to support projects? | Competence
CAU | Competence in fostering autonomy and decision-making in students? | Competence
SMA | Application of active methodologies (e.g., PBL, gamification) in tutoring? | Strategies
STD | Use digital tools for prototyping and validation? | Strategies
SEA | Evaluation of projects based on impact and feasibility criteria? | Strategies
SME | Use of mentoring and effective feedback in student projects? | Strategies
LCO | Knowledge and use of AI concepts in education? | AI Integration
LTU | Use of ChatGPT or other generative AI for project tutoring? | AI Integration
LMA | Use of AI tools for market analysis and idea validation? | AI Integration
LDE | Use of AI for prototyping and product development (e.g., DALL·E)? | AI Integration
LRI | Critical reflection on risks and ethics of AI in education? | AI Integration
Table 3. Cronbach's Alpha results.

Dimension | α (Pre) | α (Post)
Entrepreneurial Competencies | 0.946 | 0.962
Entrepreneurial Knowledge | 0.896 | 0.943
Teaching Strategies for AI & Entrepreneurship | 0.896 | 0.944
Use of AI in Education | 0.838 | 0.948
Table 4. Pre–post comparison across pedagogical dimensions (Wilcoxon test and Cohen's d).

Dimension | Pre-Test (M ± SD) | Post-Test (M ± SD) | p-Value (Wilcoxon) | p-adj (Holm–Bonferroni) | Cohen's d
Entrepreneurial Competencies | 2.86 ± 0.70 | 3.91 ± 0.52 | <0.001 | <0.001 | 0.87 (large)
Entrepreneurial Knowledge | 2.91 ± 0.61 | 3.84 ± 0.47 | <0.001 | <0.001 | 0.75 (moderate)
Teaching Strategies for AI & Entrepreneurship | 2.73 ± 0.65 | 3.76 ± 0.58 | <0.001 | <0.001 | 0.76 (moderate)
Use of AI in Education | 2.41 ± 0.71 | 3.59 ± 0.66 | <0.001 | <0.001 | 0.64 (moderate)
Note: Cohen's d thresholds follow conventional interpretations: 0.2 = small, 0.5 = moderate, 0.8 = large. All results remained significant after Holm–Bonferroni correction for multiple comparisons (four dimensions).