1. Introduction
Effective classification and progress monitoring are critical components of inclusive education systems. Classification helps identify students with academic difficulties and guides decisions about interventions and support, while progress monitoring ensures that student growth is tracked and instructional strategies are adjusted accordingly. When implemented consistently and supported by clear criteria, adequate resources, and appropriate tools, these systems make a significant contribution to student success (
Filderman et al., 2023).
Building on this global understanding, it is essential to consider how these principles are applied within specific national contexts, such as Qatar, where inclusive education has become a central policy priority.
In Qatar, inclusion has been a national priority under the broader vision of equitable education for all (
Darwish et al., 2025;
M. K. Al-Hendawi et al., 2022). The Ministry of Education and Higher Education has introduced initiatives to support students with educational challenges and integrate support services within mainstream schools. However, several studies suggest variability in how classification and monitoring practices are applied across schools, as well as challenges related to resource limitations, professional training, and clarity of procedures (
Darwish et al., 2025;
M. Al-Hendawi et al., 2023). Despite advancements in policies, gaps remain in aligning school-level practices with inclusive education goals.
Globally, the integration of AI in education is gaining attention as a means to improve the efficiency and accuracy of educational decision-making. AI-powered tools have the potential to enhance classification processes, personalize learning, and streamline progress monitoring (
Hussein et al., 2025;
Holmes et al., 2019;
Luckin et al., 2016). For example, AI can support early detection of students at risk, adapt instruction to individual needs, and generate real-time insights for teachers and administrators (
UNESCO, 2021;
U.S. Department of Education, 2023). Yet, the successful adoption of AI depends on educators’ readiness, institutional infrastructure, and an understanding of the ethical and practical implications (
Hussein et al., 2025;
Holmes et al., 2019;
UNESCO, 2021).
This study examines stakeholders’ satisfaction with the current classification and progress monitoring systems in Qatar and investigates their perceptions regarding the potential role of AI in enhancing these systems. While classification and monitoring remain foundational practices in inclusive education, stakeholders’ views of AI integration provide insight into the balance between satisfaction with existing approaches and aspirations for innovation. Specifically, this study addresses the following research questions:
1. To what extent are stakeholders satisfied with current classification and monitoring systems, and what factors predict their satisfaction?
2. What are the perceived roles, importance, and anticipated challenges related to integrating AI into these systems?
3. Are there significant differences in stakeholders’ perceptions and satisfaction based on gender, job title, type of institution, years of experience, or educational stage?
2. Literature Review
2.1. Classification and Progress Monitoring
Progress monitoring is a key component of a comprehensive Multi-Tiered System of Support (MTSS), involving systematic and frequent assessments designed to guide instructional decisions and identify students requiring additional support. Within MTSS, progress monitoring is not a one-time event but an ongoing process in which data are regularly collected and analyzed to inform teachers’ instructional strategies. This continuous cycle allows educators to respond promptly to students’ needs, adjust interventions, and prevent learning gaps from widening. A recent meta-analysis confirmed the effectiveness of this approach, finding that progress monitoring significantly improves academic performance compared to traditional instruction by enabling timely instructional adjustments based on structured, ongoing assessment data (
Fuchs et al., 2025). These findings strengthen the argument that data-driven decision-making is an essential element of inclusive education.
One of the most widely used tools in this context is Curriculum-Based Measurement (CBM), which is frequently implemented within MTSS frameworks. CBM offers technically robust and user-friendly tools for tracking academic growth across domains such as reading, mathematics, and writing (
Deno, 1985;
Stecker et al., 2005). The method is particularly valuable because it provides quick, reliable indicators of student performance that can be administered and scored efficiently, making it practical for classroom use. In the domain of writing, for example, CBM has been shown to demonstrate strong reliability and validity, supporting its utility in capturing student progress in complex skills such as organization, fluency, and expression (
Hampton & Lembke, 2016;
McMaster & Espin, 2007).
These findings also align with broader trends in educational assessment that emphasize the importance of formative, real-time progress tracking to improve learning outcomes (
Black & Wiliam, 1998;
Luckin et al., 2016). Formative assessment, when carried out consistently, enables teachers to provide feedback that is immediate, targeted, and actionable, which in turn supports deeper learning and greater student engagement. This approach contrasts with summative assessments, which provide valuable information about achievement but are less effective in shaping day-to-day instructional practices.
Despite these clear benefits, the implementation of progress monitoring remains inconsistent. In many contexts, schools and teachers struggle to integrate progress data into instructional planning due to factors such as insufficient professional development, lack of familiarity with assessment tools, and the time required to collect and interpret data effectively (
Gebhardt et al., 2023). Even when tools like CBM are available, their use is sometimes limited to compliance purposes rather than being fully embedded in instructional decision-making. These challenges highlight the importance of having clear classification criteria, effective tools that are accessible to teachers, and adequate resources to ensure that progress monitoring can function as intended to support all learners.
2.2. Artificial Intelligence in Education
The emergence of AI in education presents new opportunities for enhancing classification and monitoring systems. AI is increasingly recognized as one of the most transformative technologies in the field of education because of its ability to process large volumes of data, identify patterns in student learning, and support real-time decision-making (
Hussein et al., 2025;
Holmes et al., 2019;
Luckin et al., 2016). Within the context of student classification and monitoring, AI has been highlighted as particularly valuable for its potential to personalize learning pathways, automate time-consuming progress tracking tasks, and provide early alerts for students at risk of falling behind (
Al-Thani, 2025;
Luckin et al., 2016). These functions can reduce teacher workload, increase the efficiency of data analysis, and enhance the accuracy of interventions by basing decisions on continuous streams of data rather than periodic assessments.
AI can also contribute to adaptive learning systems that tailor instruction to each learner’s strengths and needs, thereby supporting inclusive education practices. By analyzing behavioral, cognitive, and performance data, AI-driven platforms can generate customized recommendations for teaching strategies, remedial activities, or enrichment opportunities (
Hussein et al., 2025;
Zawacki-Richter et al., 2019). Additionally, AI-powered predictive analytics can help schools and policymakers identify systemic trends, enabling early interventions that improve equity and resource allocation.
However, the successful adoption of AI depends on a range of enabling conditions. Educators’ readiness to engage with AI technologies is a central factor, as teachers and administrators must have both the technical competence and the confidence to interpret AI outputs and integrate them meaningfully into their instructional decision-making (
Holmes et al., 2019). Adequate infrastructure, including reliable digital systems, high-quality data management, and technical support, is also critical to ensure that AI solutions can function effectively in school settings (
Chen et al., 2020).
At the same time, AI raises important ethical considerations. Concerns about fairness, transparency, and accountability in algorithmic decision-making are particularly salient in education, where classification and monitoring decisions can have long-term consequences for students (
Williamson & Eynon, 2020). Without careful design, AI systems risk reinforcing existing biases, misclassifying students, or undermining trust in educational processes. A mixed-methods study by
Karran et al. (
2024) found that perceptions of AI in education vary across stakeholder groups, mainly depending on trust in data privacy, clarity of system processes, and the perceived usefulness of AI tools in addressing real instructional needs. This suggests that technical innovation must be matched by efforts to ensure stakeholder trust and responsible governance if AI is to play a sustainable role in education.
2.3. Contextualizing in Qatar
In Qatar, stakeholders have reported variability in the implementation of inclusive education policies, particularly in terms of engaging educators in decision-making (
Al-Thani, 2025). Over the past two decades, Qatar has introduced a series of reforms aligned with the national vision for educational excellence and equity, seeking to strengthen inclusion within mainstream schools. The Ministry of Education and Higher Education has issued guidelines requiring schools to adopt classification systems to identify students with educational challenges and provide targeted interventions. These policies are designed to align with global frameworks for inclusive education, ensuring that students with diverse needs receive support within general education classrooms.
Although these national guidelines emphasize inclusion, practical barriers have persisted at the school level. Teachers and administrators have frequently cited insufficient professional training on how to apply classification criteria, resulting in inconsistent practices across institutions (
M. Al-Hendawi et al., 2023). Additionally, progress monitoring tools and resources are not always readily available, which limits schools’ ability to track student development systematically and adjust instructional strategies accordingly (
Darwish et al., 2025). These challenges mirror international findings that highlight how policy ambitions often exceed implementation capacity when resources, training, and monitoring systems are lacking.
Another issue raised in the Qatari context relates to the degree of educator involvement in shaping inclusive practices.
Al-Thani (
2025) noted that while stakeholders are often consulted in policy development, their engagement is sometimes limited, leading to a sense that reforms are “top-down” rather than co-constructed with teachers and practitioners. This disconnect can reduce the effectiveness of inclusion initiatives, as educators may feel less ownership of policies and less prepared to enact them in daily practice.
These gaps create a space for exploring innovative solutions. AI-driven tools could potentially provide more consistent criteria for classification, offer data-driven insights to support monitoring, and reduce the burden on teachers by automating routine tasks. However, as highlighted in global research, successful adoption depends on both stakeholder acceptance and adequate preparation. For Qatar, this means that AI solutions must be carefully aligned with local needs, address existing gaps in training and resources, and ensure that educators are active partners in the implementation process. In summary, while prior studies have explored inclusive education reforms, the use of progress monitoring tools, and the potential of AI in education, few have examined how key stakeholders perceive these systems or the feasibility of integrating AI to address existing gaps. This study extends previous work by focusing on stakeholder perceptions of classification, progress monitoring, and AI integration within Qatar’s inclusive education framework, providing context-specific insights to inform both national policy and international discussions on inclusive education and technology adoption.
2.4. The Present Study
The reviewed literature highlights two central themes. First, while student classification and progress monitoring are widely recognized as foundational components of inclusive education, their implementation remains uneven due to barriers such as limited training, insufficient resources, and inconsistent integration into instructional planning. Second, AI has been identified as a promising avenue for strengthening these practices by enabling more accurate classification, automating monitoring tasks, and supporting personalized instruction. At the same time, concerns related to ethics, fairness, data privacy, and educator readiness point to the need for careful and context-sensitive adoption.
In Qatar, these issues are reflected in local practice. Although national policies emphasize inclusive education, schools face challenges in consistently applying classification criteria and sustaining systematic progress monitoring due to resource and training limitations. Furthermore, stakeholder engagement in shaping inclusive practices has often been limited, which may reduce the effectiveness of policy implementation at the classroom level. Despite these challenges, little empirical evidence exists on how key stakeholders, including teachers, administrators, and service providers, perceive the adequacy of current systems and the potential of AI to address existing gaps (
Darwish et al., 2025;
Holmes et al., 2019;
Zawacki-Richter et al., 2019).
To address this gap, the present study examines stakeholders’ satisfaction with the current classification and progress monitoring systems in Qatar and explores their expectations for integrating AI into these practices.
3. Method
3.1. Research Design
This study adopted a quantitative, cross-sectional survey design to investigate stakeholders’ satisfaction with current classification and progress monitoring systems in Qatar and to examine their perceptions regarding the integration of AI into these processes. A cross-sectional survey design was considered particularly appropriate because it enables the collection of data from a relatively large and diverse group of participants at a single point in time, thereby providing a comprehensive snapshot of prevailing attitudes and perceptions (
Bryman, 2016;
Creswell & Creswell, 2017). Moreover, the survey method is well established in educational and social science research for examining stakeholders’ attitudes toward systems, policies, and innovations (
Dillman et al., 2014). In the present study, it was deemed the most effective means to capture the voices of professionals working at various levels of the educational system, including those directly involved in classroom instruction and those engaged in administrative or policy-related roles. This breadth of participation is significant in the Qatari context, where inclusive education reforms are ongoing and require both top-down and bottom-up perspectives to assess alignment between policy intentions and practice (
Darwish et al., 2025;
Jabri et al., 2025).
3.2. Participants
A total of 313 stakeholders participated in the study, representing four main groups: teachers (
n = 246), educational/administrative decision-makers (
n = 42), educational service providers (
n = 10), and other staff members (
n = 15). Participants also varied in terms of gender (male = 145, female = 168) and years of experience in the educational field (<5 years = 28, 5–10 years = 67, >10 years = 218). The sample reflects the diversity of professionals involved in implementing and overseeing inclusive education policies and practices in Qatar. Participation was voluntary, and respondents were assured of anonymity and confidentiality.
Table 1 shows the demographic information of the participants.
Participants were selected using a purposive sampling approach, targeting individuals directly involved in student classification, progress monitoring, and inclusive education practices. This method was appropriate because the study sought perspectives from stakeholders with specific professional experience relevant to the research focus. The target population included teachers, administrators, and service providers working in public and private schools, ministry centers, and related educational institutions across Qatar. Invitations were distributed electronically through Ministry channels, and participation was voluntary.
3.3. Instrument
Data were collected using a questionnaire specifically designed for this study to assess stakeholders’ perceptions of student classification, progress monitoring, and the integration of AI in educational contexts. The development of original instruments tailored to study objectives is common in educational research, particularly when existing tools do not adequately capture the cultural or contextual dimensions under investigation (
DeVellis, 2016;
Mertler & Reinhart, 2016). The instrument was designed to align directly with the study’s research questions and was structured into four domains:
Perceptions of classification practices (8 items; e.g., “How clear are the criteria used in your classification system?”).
Perceptions of progress monitoring practices (5 items; e.g., “To what extent is progress data used to adjust teaching strategies?”).
Perceptions of the role and importance of AI integration (3 items; e.g., “How important is the integration of artificial intelligence systems with current educational systems?”). These items also assessed expectations for AI functionality (e.g., prediction, reporting, personalization) and anticipated challenges (e.g., data privacy, system compatibility, user acceptance).
Overall satisfaction with classification and monitoring systems (1 outcome item measured on a 4-point Likert scale, 1 = not satisfied at all to 4 = very satisfied).
Items were written as closed-ended questions with Likert-type responses to ensure comparability across participants, a method considered reliable and efficient for measuring attitudes and perceptions (
Boone & Boone, 2012).
To establish content validity, the draft instrument was reviewed by 12 experts in special education and educational psychology drawn from universities and schools in Qatar. Experts evaluated each item for clarity, relevance, and alignment with the study objectives using a four-point agreement scale (1 = not relevant, 4 = highly relevant). The overall agreement rate was 91.7%, exceeding the minimum threshold of 80% commonly recommended for expert validation (
Lynn, 1986). Based on expert feedback, minor revisions were made to improve item clarity and contextual appropriateness, and the research team prepared a final draft.
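To make the expert-review step concrete, the sketch below shows how an item-level agreement check of this kind can be computed. It is illustrative only: the ratings matrix and random seed are hypothetical rather than the study’s actual expert data, the item count of 17 assumes the 8 + 5 + 3 + 1 items described above, and the item-level content validity index (I-CVI) convention follows Lynn (1986).

```python
# Minimal sketch of the expert content-validity check described above.
# The ratings matrix is hypothetical: 12 experts rating each draft item
# for relevance on the 4-point scale (1 = not relevant, 4 = highly relevant).
import numpy as np

rng = np.random.default_rng(seed=1)
n_experts, n_items = 12, 17  # 17 = 8 + 5 + 3 + 1 items across the four domains
ratings = rng.integers(1, 5, size=(n_experts, n_items))  # placeholder data

# Item-level content validity index (I-CVI): share of experts rating an
# item 3 or 4 (the "relevant" side of the scale), following Lynn (1986).
i_cvi = (ratings >= 3).mean(axis=0)

# Overall agreement rate across all expert-by-item judgments,
# compared against the 80% threshold cited in the text.
overall_agreement = (ratings >= 3).mean() * 100
print(f"I-CVI per item: {np.round(i_cvi, 2)}")
print(f"Overall agreement: {overall_agreement:.1f}% "
      f"({'meets' if overall_agreement >= 80 else 'below'} the 80% threshold)")
```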
The revised survey was piloted with 25 participants representing teachers and administrators who were not included in the main sample. The pilot tested clarity, timing, and reliability. Respondents confirmed that the instrument was clear and user-friendly. Reliability analysis of the pilot data yielded strong internal consistency across subscales, with Cronbach’s alpha values ranging from 0.82 to 0.88. No further modifications were deemed necessary after the pilot, and the final version of the questionnaire, as summarized in
Table 2, was then administered in the main study.
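As a minimal illustration of the pilot reliability check, the following sketch computes Cronbach’s alpha for one subscale. The response matrix is fabricated purely for demonstration (the study’s actual analysis used the real pilot responses); only the formula itself is standard, alpha = k/(k − 1) × (1 − Σ item variances / variance of total scores).

```python
# Minimal sketch of the pilot reliability check: Cronbach's alpha for one
# subscale. The response matrix is hypothetical (25 pilot respondents x
# 8 classification items on the 4-point scale); the study's analyses were
# run on real pilot data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = subscale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(seed=7)
base = rng.integers(2, 5, size=(25, 1))        # shared trait component
noise = rng.integers(-1, 2, size=(25, 8))      # item-level noise
pilot = np.clip(base + noise, 1, 4)            # 4-point Likert responses

print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")  # illustrative value
```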
Because all items were self-developed to align with the specific objectives and cultural context of the study, no adaptation from existing instruments was required. To ensure validity and reliability, the instrument underwent a rigorous expert review and pilot testing process. These procedures established strong content validity and internal consistency, confirming that the items effectively represented the intended constructs. Although Exploratory and Confirmatory Factor Analyses were not conducted in this study, future research is planned to further examine the instrument’s factorial structure using larger and more diverse samples.
The questionnaire was organized into four main sections: Section A collected demographic information (e.g., gender, role, and years of experience); Section B focused on perceptions of current classification practices; Section C addressed progress monitoring practices; and Section D examined perceptions of AI integration and overall satisfaction with current systems. Each section was clearly labeled to facilitate respondent understanding and ensure logical flow.
Within Section D, items addressed stakeholder satisfaction and AI perceptions, including:
Satisfaction with the current system (4-point Likert scale: 1 = not satisfied at all, 4 = very satisfied).
Perceived importance of AI integration (4-point Likert scale: 1 = not important, 4 = very important).
Expected functions of AI systems and anticipated challenges, assessed through multiple-choice items.
3.4. Procedure
The study adhered to rigorous ethical and procedural protocols to ensure the protection of participants and the integrity of the data collected. Ethical approval was obtained from the Qatar University Institutional Review Board (QU-IRB 070/2025-EA), and authorization was secured from the Ministry of Education and Higher Education (MOEHE) to permit access to schools and educational offices nationwide. The survey was administered electronically through a secure online platform, which facilitated wide accessibility across public schools, private schools, ministry centers, and other institutions. An electronic format was selected not only for its efficiency and cost-effectiveness but also because it reduces potential barriers to participation by enabling respondents to complete the survey at their convenience and in a location of their choice (
Dillman et al., 2014). The platform incorporated security measures to safeguard participant confidentiality and prevent unauthorized access, consistent with best practices for online educational research (
Evans & Mathur, 2018).
The research team collaborated closely with MOEHE officials, who assisted in disseminating invitations to schools, educational offices, and service centers. The invitation email included a cover letter and a link to the survey, which contained an introductory statement clearly explaining the purpose of the study, the voluntary nature of participation, and the approximate time required for completion. This statement emphasized the anonymity and confidentiality of responses, assuring participants that no identifying information would be collected or reported. Informed consent was obtained electronically prior to the start of the questionnaire. Participants were required to read the consent statement and indicate agreement before gaining access to the survey items.
The survey took approximately 15 min to complete. The relatively short duration was intentional to reduce response fatigue and increase completion rates, a recommended practice in online survey research (
Fan & Yan, 2010). Responses were automatically recorded by the secure survey platform, minimizing researcher bias and ensuring the integrity of the data collection process.
3.5. Data Analysis
Descriptive and inferential statistical techniques were applied to address the study’s research questions and provide a comprehensive understanding of stakeholders’ perceptions and satisfaction. All analyses were conducted using IBM SPSS Statistics, Version 29.
Each statistical technique was selected to align with a specific research question. Descriptive statistics addressed Research Questions 1 and 2 by summarizing stakeholders’ overall perceptions of classification, progress monitoring, and AI integration. Multiple regression analysis addressed the predictive component of Research Question 1, examining how perceptions of classification and monitoring predict overall satisfaction. Independent-samples t-tests and one-way ANOVA addressed Research Question 3, examining group differences (e.g., by gender, job title, type of institution, years of experience, and educational stage) and providing insight into how stakeholder perceptions may vary across demographic and professional characteristics.
To begin, descriptive statistics, including means, standard deviations, frequencies, and percentages, were calculated to summarize participants’ responses. These analyses provided an initial overview of satisfaction levels, perceptions of classification and progress monitoring practices, and expectations regarding the integration of AI. Together, they created a general profile of stakeholders’ views across the sample, providing a foundation for subsequent inferential analyses.
Building on these descriptive findings, multiple regression analysis was employed to examine the predictive roles of stakeholders’ perceptions of classification and progress monitoring practices in shaping their overall satisfaction with existing systems, as outlined in Research Question 1.
To address Research Question 3, independent-samples t-tests were used to evaluate gender-based differences in perceptions, while one-way ANOVA tests compared responses across job title, type of institution, years of experience, and educational stage. Where statistically significant differences emerged, post hoc analyses were conducted to explore the nature and direction of these differences. Complementing these analyses, frequencies and percentages were calculated to capture stakeholders’ expectations regarding AI integration, including both anticipated functions (e.g., prediction, reporting, and personalization) and perceived challenges (e.g., data privacy, system compatibility, and user acceptance).
To interpret perception levels, mean scores for each item and domain were classified as low (1.00–2.00), moderate (2.01–3.00), or high (3.01–4.00), dividing the four-point Likert scale into equal bands. Overall mean scores for each domain were calculated by averaging the item means within that domain. These overall means were used primarily for Research Questions 1–3 to summarize the general trends in stakeholder perceptions.
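Read as pseudocode, the scoring and banding rule above amounts to the short sketch below; the DataFrame and item names are hypothetical, and the cut-offs simply restate the bands given in the text.

```python
# Minimal sketch of the domain scoring and banding step: item responses are
# averaged into a domain mean, which is then labeled low / moderate / high
# using the cut-offs above. Data and column names are hypothetical.
import pandas as pd

def band(mean_score: float) -> str:
    """Map a 4-point domain mean onto the interpretive bands in the text."""
    if mean_score <= 2.00:
        return "low"
    if mean_score <= 3.00:
        return "moderate"
    return "high"

# Hypothetical responses: 5 participants x 3 classification items (1-4 scale).
df = pd.DataFrame({
    "clf_item1": [3, 3, 4, 2, 3],
    "clf_item2": [3, 4, 3, 3, 3],
    "clf_item3": [2, 3, 3, 3, 4],
})
domain_mean = df.mean(axis=1).mean()  # average of per-respondent item means
print(f"Domain mean = {domain_mean:.2f} -> {band(domain_mean)}")
```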
Finally, a threshold of
p < 0.05 was adopted to determine statistical significance. Where appropriate, effect sizes (Cohen’s
d, eta squared) were reported to provide additional information about the magnitude and practical significance of observed differences, thereby complementing significance testing and enhancing the interpretability of results (
Field, 2018).
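The full inferential sequence described in this section can be sketched as follows. This is an illustrative Python equivalent, not the SPSS procedures actually used; the column names (satisfaction, clf_perception, pm_perception, gender, job_title) are hypothetical, and the synthetic data exist only so the example runs end to end.

```python
# Illustrative sketch of the analysis plan: regression predicting satisfaction,
# a gender t-test with Cohen's d, and a one-way ANOVA with eta squared.
# Column names and data are hypothetical; the study used IBM SPSS Statistics 29.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

def run_analyses(df: pd.DataFrame) -> None:
    # RQ1: multiple regression predicting satisfaction; variables are
    # standardized first so the coefficients are beta weights.
    z = df[["satisfaction", "clf_perception", "pm_perception"]].apply(stats.zscore)
    model = smf.ols("satisfaction ~ clf_perception + pm_perception", data=z).fit()
    print(model.summary())  # reports R-squared, F, and standardized betas

    # RQ3: gender difference on classification perceptions, with Cohen's d.
    m = df.loc[df.gender == "male", "clf_perception"]
    f = df.loc[df.gender == "female", "clf_perception"]
    t, p = stats.ttest_ind(m, f)
    pooled_sd = np.sqrt(((len(m) - 1) * m.var(ddof=1) + (len(f) - 1) * f.var(ddof=1))
                        / (len(m) + len(f) - 2))
    print(f"t = {t:.3f}, p = {p:.3f}, d = {(m.mean() - f.mean()) / pooled_sd:.2f}")

    # RQ3: one-way ANOVA across job titles, with eta squared as effect size.
    groups = [g["clf_perception"].to_numpy() for _, g in df.groupby("job_title")]
    F, p = stats.f_oneway(*groups)
    grand = df["clf_perception"].mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_total = ((df["clf_perception"] - grand) ** 2).sum()
    print(f"F = {F:.3f}, p = {p:.3f}, eta^2 = {ss_between / ss_total:.3f}")

# Synthetic data shaped like the study sample (n = 313), for demonstration only.
rng = np.random.default_rng(0)
n = 313
df = pd.DataFrame({
    "clf_perception": rng.normal(3.03, 0.40, n),
    "pm_perception": rng.normal(3.07, 0.43, n),
    "gender": rng.choice(["male", "female"], n),
    "job_title": rng.choice(["teacher", "decision-maker", "provider", "other"], n),
})
df["satisfaction"] = (0.5 * df["clf_perception"] + 0.3 * df["pm_perception"]
                      + rng.normal(0, 0.3, n))
run_analyses(df)
```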
4. Results
4.1. Perceptions of Current Classification Practices
Stakeholders’ perceptions of existing classification practices in Qatari schools were examined through eight survey items addressing clarity, effectiveness, resources, and the success of innovative strategies. To explore these perceptions, means and standard deviations were calculated for teachers, administrators, and decision-makers.
Table 3 presents the results, ranking the items in descending order according to the mean scores.
As the table indicates, the descriptive results show that participants generally expressed positive views, with an overall mean of 3.03 (SD = 0.40) on a four-point scale, corresponding to a high degree of agreement. This finding suggests that respondents broadly recognize the utility of current classification systems while also identifying specific areas that require further development.
Among the individual items, the highest-rated aspect was the perceived success of innovative strategies implemented in schools (M = 3.15, SD = 0.50), which indicates that recent efforts to introduce new practices and approaches in student classification are viewed favorably and inspire confidence among stakeholders. The impact of classification-related challenges on support for students with educational difficulties ranked second (M = 3.12, SD = 0.56), reflecting respondents’ awareness that such challenges directly influence the adequacy of support services provided to students.
Perceptions of the clarity of classification criteria (M = 3.05, SD = 0.62) and the availability of necessary resources for accurate implementation (M = 3.03, SD = 0.54) also received high levels of agreement, suggesting that stakeholders generally view current guidelines and resource provision as adequate. However, moderate ratings were observed for the effectiveness of the classification system itself (M = 3.00, SD = 0.52) and for the effectiveness of the tools used to classify students (M = 3.00, SD = 0.54). Similarly, the accuracy of classifying students according to specific disabilities received only a moderate level of agreement (M = 2.96, SD = 0.63), and the lowest mean score was reported for the sufficiency of current resources and policies to support classification and monitoring (M = 2.92, SD = 0.62).
4.2. Perceptions of Current Practices in Progress Monitoring
The second research question examined how teachers, administrators, and decision-makers perceive the current practices of monitoring the progress of students with academic challenges in schools. As shown in
Table 4, descriptive statistics were calculated to assess stakeholders’ evaluations of the effectiveness of existing tools and systems, the diversity and frequency of assessments, and the extent to which progress data inform teaching strategies.
The findings shown in
Table 4 indicate that stakeholders generally view current progress-monitoring practices positively. The overall mean score of 3.07 (
SD = 0.43) indicates broad agreement that these practices are effective. The highest-rated item was the perceived effectiveness of the tools used to track student progress (
M = 3.11,
SD = 0.55), demonstrating stakeholders’ confidence in the usefulness of existing monitoring tools. Two items received the second-highest rating: the availability of data analysis systems for evaluating student progress (
M = 3.09,
SD = 0.57) and the extent to which progress data are used to adjust teaching strategies (
M = 3.09,
SD = 0.55). These results suggest that stakeholders see the importance of data-driven methods in improving instructional practices. Finally, perceptions of the diversity of assessments (
M = 3.06,
SD = 0.56), the effectiveness of data analysis systems (
M = 3.04,
SD = 0.55), and the frequency of assessments for monitoring purposes (
M = 3.03,
SD = 0.54) were all rated highly, although with slightly lower mean scores compared to other items. These results illustrate a consistent endorsement of the current monitoring practices across various aspects, with modest variation in the degree of agreement.
4.3. Perceptions of AI Integration
The third research question focused on stakeholders’ perceptions of the role and importance of integrating AI into student classification and progress-monitoring systems. To address this, three areas were analyzed: (a) the functions stakeholders expect from a new classification and monitoring system, (b) the perceived importance of integrating AI into current educational systems, and (c) the anticipated challenges of implementation.
4.4. Expected Functions of a New Classification and Monitoring System
Table 5 presents the frequencies and percentages of participants’ responses regarding the most important functions they expect from a new system for classifying and monitoring students with academic challenges.
The results presented in
Table 5 indicate that stakeholders expressed varied yet generally positive expectations regarding the functions of a new classification and monitoring system in education. The most frequently endorsed function was predicting students’ educational challenges (
n = 104, 33.2%), suggesting that early identification and proactive intervention are regarded as central priorities for supporting learners with diverse needs. Nearly the same proportion of respondents (
n = 103, 32.9%) emphasized the importance of data analysis and detailed reporting, underscoring stakeholders’ recognition of the critical role that robust data systems play in informing instructional decisions and enhancing accountability.
A notable proportion of participants (n = 71, 22.7%) emphasized the need for customization of individual educational plans, which reflects an interest in tailoring educational strategies to students’ unique strengths and challenges. In contrast, providing immediate feedback to teachers and students was selected by a smaller share of respondents (n = 35, 11.2%), ranking lowest among the listed options. While this function was not prioritized to the same extent, its selection highlights an acknowledgment of the value of real-time feedback for instructional improvement.
Overall, the findings emphasize that stakeholders expect AI-based classification and monitoring systems to move beyond administrative functions toward predictive analytics and personalized learning support. At the same time, the relatively lower emphasis on immediate feedback indicates a potential area for further awareness-raising regarding its educational benefits.
4.5. Expected Importance of AI Integration
Stakeholders were also asked to evaluate the overall importance of integrating AI into existing educational systems. Their responses are summarized in
Table 6.
As shown in
Table 6, the majority of respondents considered the integration of AI systems into existing educational frameworks to be highly significant. Nearly half of the participants (45.0%,
n = 141) rated AI integration as important, while an additional one-third (33.9%,
n = 106) considered it very important. These results indicate a growing recognition of the potential of AI to enhance educational quality and support critical processes, such as student classification, progress monitoring, and data-driven instructional decision-making.
A smaller proportion of respondents (18.5%, n = 58) adopted a neutral stance, which may indicate uncertainty, limited exposure, or insufficient knowledge regarding the educational applications of AI. Only a very small minority (2.6%, n = 8) considered AI integration to be unimportant.
These findings reflect broad and substantial support for the advancement of AI integration in education, while simultaneously highlighting the importance of ensuring robust digital infrastructure, adequate professional training, and clear guidance to facilitate effective and sustainable implementation.
4.6. Expected Challenges of AI Integration
Participants also identified potential challenges to adopting AI systems for supporting students with educational challenges. The frequencies and percentages of responses are presented in
Table 7.
Table 7 indicates that the most frequently cited challenge in implementing AI systems to support students with educational difficulties was the acceptance and adaptation of teachers and students to new technologies, reported by 40.9% (
n = 128) of participants. This underscores the central role of human factors and highlights the extent to which resistance to change may hinder the effective adoption of AI in educational contexts.
Concerns regarding integration with existing systems, as well as data privacy and security, were also widely acknowledged, with each cited by 27.8% (n = 87) of respondents. These findings highlight concerns not only about technical interoperability but also about the protection of sensitive student data, reflecting legitimate regulatory and ethical considerations.
By contrast, implementation and maintenance costs were identified by only 3.5% (n = 11) of participants as the primary obstacle, suggesting that financial constraints are perceived as less pressing than issues of human readiness, system compatibility, and data governance.
Taken together, these findings emphasize the importance of investing in awareness-building initiatives, providing targeted technical training for educational staff, strengthening cybersecurity measures, and ensuring seamless system integration. Addressing these areas will be essential for achieving effective and sustainable implementation of AI systems in support of inclusive education.
4.7. Satisfaction with Current Classification and Monitoring Systems
The final research question examined the extent to which stakeholders are satisfied with the current classification and progress monitoring systems, as well as the factors that predict their satisfaction. A multiple regression analysis was conducted to assess the predictive roles of perceptions of classification practices and perceptions of progress monitoring practices.
Table 8 presents the results of the regression analysis.
The analysis revealed that stakeholders’ satisfaction was strongly predicted by their perceptions of classification and monitoring practices. The model accounted for 54.2% of the variance in satisfaction (R² = 0.542) and was statistically significant,
F(2, 310) = 183.29,
p < 0.001. Among the predictors, perceptions of current classification practices were the most influential factor (β = 0.473,
p < 0.001), followed by perceptions of progress monitoring practices (β = 0.315,
p < 0.001). These results indicate that satisfaction increases substantially when stakeholders view classification and monitoring systems as effective and transparent, with classification practices exerting the greater influence. This underscores the importance of strengthening classification mechanisms as a foundation for building trust and satisfaction among teachers, administrators, and decision-makers.
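As a quick arithmetic check, the reported F statistic is consistent with the reported R² and sample size under the standard identity for an OLS model with k predictors, shown in the sketch below (only the reported values are used).

```python
# Consistency check of the reported regression statistics: for OLS with
# k predictors and n cases, F = (R^2 / k) / ((1 - R^2) / (n - k - 1)).
r2, k, n = 0.542, 2, 313
F = (r2 / k) / ((1 - r2) / (n - k - 1))
print(f"F({k}, {n - k - 1}) = {F:.2f}")  # ~183.4, matching 183.29 within rounding
```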
Table 9 provides the distribution of responses regarding satisfaction with the current system.
As shown in
Table 9, the majority of respondents expressed positive views of the current system, with 67.4% (
n = 211) being satisfied and 17.6% (
n = 55) being very satisfied. A smaller proportion reported dissatisfaction (14.1%,
n = 44), and only 1.0% (
n = 3) indicated that they were not satisfied at all. The overall mean satisfaction score was 3.02 (
SD = 0.60), reflecting a generally high level of satisfaction. Nonetheless, the presence of a dissatisfied minority suggests that there remains room for targeted improvements to further enhance stakeholder confidence in the system.
4.8. Differences in Perceptions by Job Title
To explore stakeholders’ perceptions of classification and progress monitoring practices according to job title, a one-way ANOVA was used. Job categories included teachers, educational and administrative decision-makers, educational service providers, and others. Descriptive statistics and test results are presented in
Table 10.
As displayed in
Table 10, the mean scores for perceptions of classification practices ranged from 2.86 to 3.04 across the four job categories, while perceptions of progress monitoring practices ranged from 2.88 to 3.08. Although teachers and educational/administrative decision-makers tended to report slightly higher perceptions than service providers and other staff, these differences were not statistically significant for either domain (classification practices:
F = 0.649,
p = 0.584; progress monitoring practices:
F = 1.350,
p = 0.258). These results suggest that job title does not significantly influence stakeholders’ perceptions of current classification and progress monitoring practices.
4.9. Differences in Perceptions by Gender
To determine whether gender influences staff perceptions of classification and progress monitoring practices, independent samples
t-tests were conducted to compare mean scores between male and female participants. The results are presented in
Table 11.
The t-test results indicated statistically significant gender-based differences in perceptions of both classification and progress monitoring practices. Male participants reported slightly higher mean scores than female participants in both areas. For classification practices, the difference reached significance (t = 2.037, p = 0.042), and for progress monitoring practices, the effect was slightly stronger (t = 2.169, p = 0.031). These findings suggest that gender may be a factor influencing staff perceptions of classification and monitoring practices in educational settings, albeit with modest effect sizes.
4.10. Differences in Perceptions by Type of Institution
To examine whether staff perceptions of classification and progress monitoring practices varied according to the type of institution, institutions were categorized into four groups: public schools, private schools, ministry centers/offices, and other institutions. A one-way ANOVA was conducted to test for differences among these groups, and the results are displayed in
Table 12.
As shown in
Table 12, mean scores for perceptions of both classification practices (ranging from 2.85 to 3.04) and progress monitoring practices (ranging from 2.90 to 3.09) were highly similar across all institution types. The ANOVA results indicated no statistically significant differences among groups for either classification practices (
F = 0.492,
p = 0.804) or progress monitoring practices (
F = 1.139,
p = 0.334). These findings suggest that institutional type does not significantly shape staff perceptions, which may reflect the influence of standardized national policies and common training frameworks applied across institutions.
4.11. Differences in Perceptions by Years of Experience
The study also explored whether perceptions of classification and progress monitoring practices varied according to years of experience in the educational field. Participants were grouped into three categories: less than 5 years, 5–10 years, and more than 10 years. One-way ANOVA results are presented in
Table 13.
Table 13. The results revealed that staff with fewer than five years of experience reported slightly higher mean scores for both classification (
M = 3.11) and progress monitoring (
M = 3.18) practices compared to those with more years of experience. However, these differences were not statistically significant (
F = 0.943,
p = 0.390 for classification;
F = 1.589,
p = 0.206 for monitoring). These results indicate that years of professional experience do not meaningfully affect perceptions of classification and monitoring, suggesting consistency of views across experience levels.
4.12. Differences in Perceptions by Educational Stage
Finally, the study investigated whether staff perceptions differed across the educational stage in which they worked: primary, intermediate, secondary, or other. One-way ANOVA results are shown in
Table 14.
As shown in
Table 14, mean perceptions of both classification (ranging from 2.93 to 3.05) and progress monitoring (ranging from 2.96 to 3.10) practices were relatively similar across educational stages. The ANOVA results indicated no statistically significant differences among groups for either classification practices (
F = 0.914,
p = 0.434) or progress monitoring practices (
F = 1.038,
p = 0.376). These findings suggest that staff perceptions are consistent across primary, intermediate, and secondary stages, as well as in other contexts, indicating a general uniformity in views regardless of educational stage.
5. Discussion
This study addressed three main research questions: (1) stakeholders’ satisfaction with current student classification and progress monitoring systems and the factors predicting their satisfaction; (2) stakeholders’ perceptions of the role, importance, and anticipated challenges of integrating AI into these systems; and (3) differences in perceptions and satisfaction across demographic and professional groups. Overall, the findings indicate relatively high satisfaction with existing systems but also strong support for AI integration, suggesting that stakeholders recognize both the value and the limitations of current practices. This apparent contradiction, in which high satisfaction with current practices is accompanied by strong support for integrating AI, does not imply that the current system is ideal; rather, it suggests that stakeholders tend to adapt their practices to the tools that are available and permitted (
Darwish et al., 2025). Furthermore, it appears that stakeholders have learned that “top-down” reform of the Qatari educational system has negative impacts on educators, leading them to accept the current situation (
M. Al-Hendawi et al., 2017;
Al-Thani, 2025), and to work within the limitations of the system, including a lack of resources and inconsistent application of standards (
M. Al-Hendawi et al., 2023). This finding supports earlier research suggesting that satisfaction within centralized education systems often reflects professional adaptability rather than full endorsement of system adequacy (
M. Al-Hendawi et al., 2023). In addition, the literature suggests that stakeholders’ demand for integrating AI reflects a need to enhance human decision-making in response to complex data rather than to replace it (
Holmes et al., 2019). This aligns with international findings emphasizing AI as a supportive tool that strengthens, rather than substitutes, professional judgment (
Williamson & Eynon, 2020).
Regarding the second research question, which examined the perceived role and importance of AI integration, data analysis revealed that the top expected functions were prediction and data analysis/reporting. In fact, these can be considered priorities for stakeholders who have shown an evident need to use data-based and proactive approaches to inform effective early interventions for students at risk of disability (
Luckin et al., 2016;
Holmes et al., 2019). Moreover, using AI for prediction and reporting assists teachers by automating tasks, freeing time for them to teach their students rather than spend it on lengthy bureaucratic processes. On the other hand, findings show that stakeholders placed the lowest emphasis on immediate feedback, which may reflect limited awareness of its importance (
Karran et al., 2024). In the Qatari context, this may reflect limited professional exposure to formative data-use practices, underscoring the need for capacity-building and training on digital feedback systems. Findings highlight the need for a clear and consistent classification standard (
Filderman et al., 2023). However, insofar as satisfaction reflects acceptance, this acceptance raises ethical issues, including bias stemming from a lack of training and concerns over privacy (
Williamson & Eynon, 2020). These issues warrant in-depth attention.
Concerning the factors that predict satisfaction (the first research question), the findings indicate that perceptions of classification practices were a stronger predictor of overall satisfaction than perceptions of progress monitoring, although both were significant. This indicates that when AI assists in the precise early identification of potential disabilities, stakeholders are more likely to support integrating it into their system. Moreover, it suggests that the potential value of AI likely depends on its ability to provide highly accurate identification and data-driven decisions. Classification accuracy is crucial in the Qatari context, where the literature indicates resource limitations and inconsistent practices in schools (
Darwish et al., 2025;
M. K. Al-Hendawi et al., 2022). This reinforces that effective classification systems remain a foundation for inclusion efforts in Qatar, and AI may strengthen this foundation by addressing issues of consistency and data management. Turning to the third research question on group differences, the findings indicate that males held more positive perceptions than females, which may reflect local cultural perspectives and differing experiences and expectations within the Qatari educational system.
Notably, perceptions did not differ significantly by job title, institution type, years of experience, or educational stage, demonstrating widespread agreement throughout the educational landscape. Additionally, the descriptively lower perceptions found among non-teaching staff and mid-career professionals, though not statistically significant, may point to one of the inclusion challenges in Qatar (
Jabri et al., 2025): non-teaching staff are more likely to have a broader view of the system and to be more vulnerable to accountability or criticism if a classification turns out to be false. Similar gender-based perception differences have been reported in other Gulf education studies, often linked to structural role differences within schools (
Jabri et al., 2025).
Taken together, the findings across all three research questions highlight that stakeholders in Qatar generally express confidence in existing classification and monitoring systems but see significant promise in AI to enhance efficiency, accuracy, and inclusivity. These results align with international literature emphasizing that AI adoption in education is most effective when integrated to complement human expertise (
Holmes et al., 2019;
Williamson & Eynon, 2020). Within Qatar’s rapidly developing education landscape, the findings underscore the importance of combining technological innovation with contextual understanding and ongoing professional development to sustain inclusive and ethical practices.
6. Implications for Policy and Practice
Policymakers should exercise caution when interpreting the high level of satisfaction shown in the current study. This satisfaction should be interpreted as evidence of adaptation to the existing limitations imposed by the system, rather than as an indicator of the system’s effectiveness. Furthermore, policymakers may consider that this level of satisfaction indicates acceptance rather than alignment with the requirements of inclusive education objectives. Thus, educational reform should focus on strengthening the foundations of current classification practices by providing adequate resources, establishing clear standards, and ensuring more consistent application across educational settings. Ultimately, this reform may lead to rebuilding confidence in the identification and classification system and its practices, and provide an opportunity for more advanced monitoring processes. In addition, given that stakeholders prioritized prediction of student challenges (33.2%) and the generation of detailed analyses and reports (32.9%), policymakers should consider a targeted AI procurement strategy that focuses initially on these high-demand functions. Concentrating on the tools stakeholders value most can ensure immediate utility, maximize buy-in, and demonstrate early success, rather than implementing a broad, generic AI framework.
Furthermore, policymakers may also pave the way for systematically integrating AI into the system by facilitating the implementation of a national framework that effectively responds to stakeholders’ priorities and requirements. In the Qatari context, these applications primarily include AI-driven early identification of learning challenges, predictive analytics for student progress, and automated generation of progress reports, all of which directly support teachers, administrators, and service providers in inclusive education settings. This can be practically achieved by providing schools with resources (tools and staff) and support (training) for consistent implementation. This includes providing them with a unified classification protocol across the country and ensuring that detection and classification processes are ethical, fair, accurate, and transparent. This is confirmed by the fact that perceptions did not significantly differ among institution types, supporting the need for unified protocols. Moreover, since acceptance and adaptation by teachers and students emerged as the most anticipated challenges (40.9%), reforms should incorporate a comprehensive change management plan. This goes beyond conventional training and includes co-design initiatives, pilot programs, and participatory implementation involving educators and students. Such collaborative approaches are more likely to foster ownership, reduce resistance, and encourage sustained engagement with new AI systems.
Implementing these recommendations requires collaboration with teachers, administrators, and service providers. This collaboration can be achieved by moving beyond the “top-down” approach, addressing any potential dissatisfaction among non-teaching staff, mid-career professionals, and female staff, and encouraging these groups to participate in the initiatives, thereby reducing resistance to change. Moreover, there is an evident need to ensure that schools have the necessary infrastructure in place before introducing AI, address privacy concerns, and provide ongoing training and technical support. Stakeholders should eventually be able to monitor not only satisfaction but also the impact of integrating AI on the actual detection and classification of at-risk students.
Finally, with 85.0% of respondents expressing satisfaction with current systems and nearly 80% rating AI integration as important or very important, these findings can inform a strategic communication plan. Messaging should frame reform as a transition from an acceptable system to an optimal, future-ready one, leveraging the existing satisfaction to build enthusiasm for innovation. This “acceptance paradox” can thus serve as a foundation for motivating educators and the wider community toward excellence in inclusive education.
7. Limitations and Future Research
This cross-sectional survey design does not allow for causal conclusions, and satisfaction may have been influenced by social desirability bias. Further, the study was conducted within the Qatari context, which may limit generalizability to other educational systems. Although the overall sample size was adequate, the predominance of teachers among respondents may reduce the reliability of subgroup comparisons and suggests caution when interpreting group differences. Future research could explore longitudinal changes in satisfaction as AI tools are introduced, examine differences across stakeholder subgroups, and assess the impact of AI adoption on student outcomes.
Future research needs to employ qualitative approaches, such as interviews or focus groups, to explore in depth how different stakeholder groups interpret their satisfaction with existing systems while still expressing a need for AI. Such studies could provide rich insights into whether satisfaction reflects an acceptance of current limitations, as well as cultural or institutional factors that shape responses. By combining quantitative breadth with qualitative depth, future research would build a more comprehensive understanding of both satisfaction and the perceived role of AI in education.
8. Conclusions
The current study aimed to explore stakeholder perceptions of student classification and progress monitoring in Qatar’s schools, as well as the potential use of AI. Findings based on survey data from 313 stakeholders revealed high satisfaction with the existing systems alongside strong demand for integrating AI. However, policymakers may consider this paradox an opportunity for systematic improvement of the system rather than a justification for leaving it unchanged. Moreover, the potential of AI to support the early detection of disabilities underscores strengthening classification as the top policy priority, reflecting the need for effective monitoring and intervention.
Stakeholders also need to be aware that specific requirements must be met for AI integration to be effective and efficient. These requirements concern not only the introduction of technology but also staff readiness, systematic support, and binding ethical standards. In addition, emerging concerns must be addressed, including privacy, fairness, transparency, and accountability in algorithmic decision-making. Without careful attention to these requirements and concerns, the goals of inclusion are likely to be undermined.
In conclusion, the current study goes beyond revealing the paradox of accepting current practices while demanding AI integration. Beyond demonstrating Qatar’s readiness to address policy-practice gaps and to adopt AI in inclusive education, it illustrates that educational systems and practices can be accepted and critiqued for improvement at the same time. These findings may spark a discussion within the Qatari education system regarding the safe and privacy-sensitive integration of AI into the country’s special education system, and contribute to the global debate on the most suitable models of AI for detecting, classifying, and monitoring students at risk of disabilities.
Author Contributions
M.A.-H.: conceptualization, methodology, validation, and writing (literature review); A.A.: methodology, analysis, results, and writing (original draft preparation); N.A.-Z.: discussion, writing, review, and editing. All authors have read and agreed to the published version of the manuscript.
Funding
This publication was supported by Qatar University Internal Grant No. QUCG-CED-25/26-770. The findings achieved herein are solely the responsibility of the authors.
Institutional Review Board Statement
This study was conducted in accordance with the Declaration of Helsinki and received approval from the Qatar University Institutional Review Board (Approval Number: QU-IRB 070/2025-EA; Approval Date: 4 April 2025).
Informed Consent Statement
The research involving human participants was reviewed and approved by the Qatar University Institutional Review Board. All participants provided written informed consent prior to participating in the study.
Data Availability Statement
The authors will make the raw data supporting the conclusions of this article available upon request.
Acknowledgments
The authors would like to thank all the participants of this study for their time and valuable contributions.
Conflicts of Interest
The authors declare that the research was conducted without any commercial or financial relationships that could be interpreted as potential conflicts of interest.
References
- Al-Hendawi, M., Keller, C., & Khair, M. S. (2023). Special education in the Arab Gulf countries: An analysis of ideals and realities. International Journal of Educational Research Open, 11, 371.
- Al-Hendawi, M., Khair, M. S. B. M., & Keller, C. E. (2017). Qatar. In Praeger international handbook of special education (Vol. 3: Asia and Oceania, pp. 224–237). Praeger.
- Al-Hendawi, M. K., Keller, C., & Alqahwaji, A. (2022). Qatar: Expanding services for quality education for students with dyslexia. In Routledge international handbook of dyslexia in education (pp. 250–260). Routledge.
- Al-Thani, G. (2025). Beyond consultation: Rethinking stakeholder engagement in Qatar’s public education policymaking. Education Sciences, 15(6), 769.
- Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74.
- Boone, H., & Boone, D. (2012). Analyzing Likert data. The Journal of Extension, 50(2), 48.
- Bryman, A. (2016). Social research methods. Oxford University Press.
- Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264–75278.
- Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches. SAGE Publications.
- Darwish, S., Alodat, A., Al-Hendawi, M., & Ianniello, A. (2025). General education teachers’ perspectives on challenges to the inclusion of students with intellectual disabilities in Qatar. Education Sciences, 15(7), 908.
- Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52(3), 219–232.
- DeVellis, R. F. (2016). Scale development: Theory and applications. SAGE Publications.
- Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). John Wiley & Sons.
- Evans, J. R., & Mathur, A. (2018). The value of online surveys: A look back and a look ahead. Internet Research, 28(4), 854–887.
- Fan, W., & Yan, Z. (2010). Factors affecting response rates of the web survey: A systematic review. Computers in Human Behavior, 26(2), 132–139.
- Field, A. (2018). Discovering statistics using IBM SPSS statistics (5th ed.). SAGE Publications.
- Filderman, M. J., McKown, C., Bailey, P., Benner, G. J., & Smolkowski, K. (2023). Assessment for effective screening and progress monitoring of social and emotional learning skills. Beyond Behavior, 32(1), 15–23.
- Fuchs, A., Radkowitsch, A., & Sommerhoff, D. (2025). Using learning progress monitoring to promote academic performance? A meta-analysis of the effectiveness. Educational Research Review, 46, 100648.
- Gebhardt, M., Blumenthal, S., Scheer, D., Blumenthal, Y., Powell, S., & Lembke, E. (2023). Editorial: Progress monitoring and data-based decision-making in inclusive schools. Frontiers in Education, 8, 1186326.
- Hampton, D. D., & Lembke, E. S. (2016). Examining the technical adequacy of progress monitoring using early writing curriculum-based measures. Reading & Writing Quarterly, 32(4), 336–352.
- Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.
- Hussein, E., Hussein, M., & Al-Hendawi, M. (2025). Investigation into the applications of artificial intelligence (AI) in special education: A literature review. Social Sciences, 14(5), 288.
- Jabri, A., Alodat, A. M., Al-Hendawi, M., & Ianniello, A. (2025). Challenges facing general education teachers in inclusive classrooms in Qatar. Frontiers in Education, 10, 1623453.
- Karran, A. J., Charland, P., Martineau, J.-T., Ortiz de Guinea López de Arana, A., Lesage, A. M., Senecal, S., & Leger, P.-M. (2024). Multi-stakeholder perspectives on responsible AI and acceptability in education. arXiv:2402.15027. Available online: https://arxiv.org/abs/2402.15027 (accessed on 22 July 2025).
- Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson Education. Available online: https://oro.open.ac.uk/50104/ (accessed on 18 July 2025).
- Lynn, M. R. (1986). Determination and quantification of content validity. Nursing Research, 35(6), 382.
- McMaster, K. L., & Espin, C. A. (2007). Technical features of curriculum-based measurement in writing: A literature review. The Journal of Special Education, 41(2), 68–84.
- Mertler, C. A., & Reinhart, R. V. (2016). Advanced and multivariate statistical methods: Practical application and interpretation (6th ed.). Routledge.
- Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to improve student achievement: Review of research. Psychology in the Schools, 42(8), 795–819.
- UNESCO. (2021). AI and education: Guidance for policy-makers. UNESCO. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000376709 (accessed on 22 July 2025).
- U.S. Department of Education. (2023). Artificial intelligence and the future of teaching and learning: Insights and recommendations. U.S. Department of Education.
- Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235.
- Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16, 39.
Table 1.
Demographic Characteristics of the Participants.
| Variable | Type | Frequency | Percent |
|---|---|---|---|
| Job Title | Teacher | 246 | 78.6 |
| | Decision-Maker | 42 | 13.4 |
| | Service Provider | 10 | 3.2 |
| | Other | 15 | 4.8 |
| | Total | 313 | 100.0 |
| Gender | Female | 168 | 53.7 |
| | Male | 145 | 46.3 |
| | Total | 313 | 100.0 |
| Type of Institution | Public School | 268 | 85.6 |
| | Private School | 8 | 2.6 |
| | Ministry Center/Office | 32 | 10.2 |
| | Other | 5 | 1.6 |
| | Total | 313 | 100.0 |
| Years of Experience | Less than 5 years | 28 | 8.9 |
| | 5–10 years | 67 | 21.4 |
| | More than 10 years | 218 | 69.6 |
| | Total | 313 | 100.0 |
| Educational Stage | Primary Stage | 161 | 51.4 |
| | Intermediate Stage | 48 | 15.3 |
| | Secondary Stage | 73 | 23.3 |
| | Other | 31 | 9.9 |
| | Total | 313 | 100.0 |
Table 2.
Research Questions and Corresponding Survey Items.
| Research Question | Aligned Survey Items |
|---|---|
| RQ1: Perceptions of current classification practices | Q1, Q2, Q3, Q4, Q5, Q14, Q15, Q16 |
| RQ2: Perceptions of progress monitoring practices | Q6, Q7, Q8, Q9, Q10 |
| RQ3: Perceived role of AI in classification and monitoring | Q11, Q12, Q13 |
| RQ4: Satisfaction with current systems and predictors | Q17 (predicted by Q1–Q16) |
Table 3.
Means and Standard Deviations of Stakeholders’ Perceptions of Current Student Classification Practices (N = 313).
| Rank | Item No. | Item | Mean | Std. Deviation | Degree of Agreement |
|---|---|---|---|---|---|
| 1 | 17 | How successful are the innovative strategies implemented in your institution? | 3.15 | 0.50 | high |
| 2 | 15 | To what extent do classification-related challenges affect support for students with educational challenges? | 3.12 | 0.56 | high |
| 3 | 2 | How clear are the criteria used in your classification system? | 3.05 | 0.62 | high |
| 4 | 4 | To what extent are the necessary resources available to implement classification accurately? | 3.03 | 0.54 | high |
| 5 | 3 | How effective is the current classification system? | 3.00 | 0.52 | moderate |
| 5 | 5 | How would you evaluate the effectiveness of the current tools used to classify students based on their needs? | 3.00 | 0.54 | moderate |
| 7 | 1 | How would you describe the accuracy of student classification according to their specific disabilities? | 2.96 | 0.63 | moderate |
| 8 | 16 | To what extent are the current resources or policies sufficient for classifying, tracking, and supporting students? | 2.92 | 0.62 | moderate |
| | | Perceptions of current classification practices | 3.03 | 0.40 | high |
Table 4.
Means and Standard Deviations of Stakeholders’ Perceptions of Current Practices in Monitoring the Progress of Students with Academic Challenges (N = 313).
| Rank | Item No. | Item | Mean | Std. Deviation | Degree of Agreement |
|---|---|---|---|---|---|
| 1 | 8 | How effective are the tools used to track student progress? | 3.11 | 0.55 | high |
| 2 | 6 | Are data analysis systems available for evaluating student progress? | 3.09 | 0.57 | high |
| 2 | 11 | To what extent is progress data used to adjust teaching strategies? | 3.09 | 0.55 | high |
| 4 | 9 | How diverse are the assessments used to monitor student progress? | 3.06 | 0.56 | high |
| 5 | 7 | How would you describe the effectiveness of data analysis systems in supporting the evaluation of student progress? | 3.04 | 0.55 | high |
| 6 | 10 | How frequently are the assessments conducted to ensure effective follow-up? | 3.03 | 0.54 | high |
| | | Perceptions of progress monitoring practices | 3.07 | 0.43 | high |
Table 5.
Frequencies and Percentages of Key Functions Expected from a New Classification and Monitoring System in Education (N = 313).
| Category | Frequency | Percentage |
|---|---|---|
| Predicting students’ educational challenges | 104 | 33.2% |
| Data analysis and detailed reporting | 103 | 32.9% |
| Customizing individual educational plans | 71 | 22.7% |
| Providing immediate feedback to teachers and students | 35 | 11.2% |
| Total | 313 | 100.0% |
Table 6.
Frequencies and Percentages Regarding the Importance of Integrating AI Systems with Current Educational Systems (N = 313).
| Category | Frequency | Percentage |
|---|---|---|
| Not important | 8 | 2.6% |
| Neutral | 58 | 18.5% |
| Important | 141 | 45.0% |
| Very important | 106 | 33.9% |
| Total | 313 | 100.0% |
Table 7.
Frequencies and Percentages of Anticipated Challenges in Implementing AI Systems for Students with Educational Challenges (N = 313).
| Rank | Anticipated Challenge | Frequency | Percentage |
|---|---|---|---|
| 1 | Acceptance and adaptation of teachers and students to new technology | 128 | 40.9% |
| 2 | Integration with existing systems | 87 | 27.8% |
| 2 | Data privacy and security | 87 | 27.8% |
| 4 | Implementation and maintenance cost | 11 | 3.5% |
| | Total | 313 | 100.0% |
Table 8.
Results of Multiple Regression Analysis Predicting Stakeholders’ Satisfaction.
| Indicator/Variable | R | R² | Adjusted R² | Std. Error | F(2, 310) | Sig. | B | Beta | t | Sig. |
|---|---|---|---|---|---|---|---|---|---|---|
| Model | 0.736 | 0.542 | 0.539 | 0.405 | 183.29 | 0.000 | – | – | – | – |
| (Constant) | – | – | – | – | – | – | −0.462 | – | −2.523 | 0.012 |
| Perceptions of current classification practices | – | – | – | – | – | – | 0.707 | 0.473 | 8.347 | 0.000 |
| Perceptions of progress monitoring practices | – | – | – | – | – | – | 0.436 | 0.315 | 5.563 | 0.000 |
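For readers who wish to check or replicate the model in Table 8, the analysis is a standard ordinary least squares regression of satisfaction (Q17) on the two composite perception scores. The sketch below is illustrative only: the file name and column names are hypothetical, and the original analysis appears to have been conducted in SPSS (Field, 2018).

```python
# Illustrative sketch of the multiple regression in Table 8.
# "survey_responses.csv", "classification", "monitoring", and
# "satisfaction" are hypothetical names, not the authors' actual data.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")

# Unstandardized model: satisfaction ~ classification + monitoring
X = sm.add_constant(df[["classification", "monitoring"]])
model = sm.OLS(df["satisfaction"], X).fit()
print(model.summary())  # reports R-squared, F(2, 310), B, t, and p-values

# Standardized coefficients (the Beta column): z-score all variables,
# then refit without an intercept.
cols = ["satisfaction", "classification", "monitoring"]
z = (df[cols] - df[cols].mean()) / df[cols].std()
beta = sm.OLS(z["satisfaction"], z[["classification", "monitoring"]]).fit()
print(beta.params)
```

With N = 313 and two predictors, the residual degrees of freedom are 313 − 2 − 1 = 310, matching the F(2, 310) reported above.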
Table 9.
Frequencies and Percentages of Satisfaction with the Current System for Classifying and Monitoring Student Progress (N = 313).
| Category | Frequency | Percent | Mean | Std. Deviation | Level |
|---|---|---|---|---|---|
| Very satisfied | 55 | 17.6% | 3.02 | 0.60 | High |
| Satisfied | 211 | 67.4% | – | – | – |
| Dissatisfied | 44 | 14.1% | – | – | – |
| Not satisfied at all | 3 | 1.0% | – | – | – |
| Total | 313 | 100.0% | – | – | – |
Table 10.
Means and Standard Deviations of Stakeholders’ Perceptions of Classification and Progress Monitoring Practices by Job Title (N = 313).
| Variable | Job Title | N | Mean | Std. Deviation | F | Sig. |
|---|---|---|---|---|---|---|
| Perceptions of classification practices | Teacher | 246 | 3.03 | 0.386 | 0.649 | 0.584 |
| | Decision-Maker | 42 | 3.04 | 0.454 | | |
| | Service Provider | 10 | 2.86 | 0.466 | | |
| | Other | 15 | 2.99 | 0.410 | | |
| | Total | 313 | 3.03 | 0.399 | | |
| Perceptions of progress monitoring | Teacher | 246 | 3.08 | 0.427 | 1.350 | 0.258 |
| | Decision-Maker | 42 | 3.07 | 0.420 | | |
| | Service Provider | 10 | 2.95 | 0.343 | | |
| | Other | 15 | 2.88 | 0.551 | | |
| | Total | 313 | 3.07 | 0.431 | | |
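The group comparisons in Tables 10 and 12–14 are one-way ANOVAs on the composite scores. A minimal sketch, under the same hypothetical file and column names as above, follows.

```python
# Illustrative one-way ANOVA for Table 10 (perceptions of classification
# practices by job title); swapping the grouping column reproduces the
# structure of Tables 12-14. Names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")

# One array of composite scores per job-title group
groups = [g["classification"].to_numpy() for _, g in df.groupby("job_title")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")  # Table 10 reports F = 0.649, p = 0.584
```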
Table 11.
Independent Samples t-Test Results for Perceptions of Classification and Progress Monitoring Practices by Gender.
| Variable | Gender | N | Mean | Std. Deviation | t | df | Sig. (2-Tailed) |
|---|---|---|---|---|---|---|---|
| Perceptions of classification practices | Male | 145 | 3.08 | 0.356 | 2.037 | 311 | 0.042 |
| | Female | 168 | 2.99 | 0.428 | | | |
| Perceptions of progress monitoring | Male | 145 | 3.13 | 0.396 | 2.169 | 311 | 0.031 |
| | Female | 168 | 3.02 | 0.455 | | | |
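The gender comparison in Table 11 is an independent-samples t-test; with N = 313 and two groups, df = 311, which corresponds to a pooled-variance (equal variances assumed) test. A sketch under the same hypothetical names:

```python
# Illustrative independent-samples t-test for Table 11 (equal variances
# assumed, giving df = 313 - 2 = 311). Names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")

male = df.loc[df["gender"] == "Male", "classification"]
female = df.loc[df["gender"] == "Female", "classification"]
t_stat, p_value = stats.ttest_ind(male, female, equal_var=True)
print(f"t(311) = {t_stat:.3f}, p = {p_value:.3f}")  # Table 11: t = 2.037, p = 0.042
```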
Table 12.
Means and Standard Deviations of Perceptions of Classification and Progress Monitoring Practices by Type of Institution (N = 313).
| Variable | Institution Type | N | Mean | Std. Deviation | F | Sig. |
|---|---|---|---|---|---|---|
| Perceptions of classification practices | Public School | 268 | 3.04 | 0.416 | 0.492 | 0.804 |
| | Private School | 8 | 2.88 | 0.341 | | |
| | Ministry Center/Office | 32 | 3.01 | 0.254 | | |
| | Other | 5 | 2.85 | 0.240 | | |
| | Total | 313 | 3.03 | 0.399 | | |
| Perceptions of progress monitoring | Public School | 268 | 3.09 | 0.448 | 1.139 | 0.334 |
| | Private School | 8 | 2.90 | 0.235 | | |
| | Ministry Center/Office | 32 | 2.99 | 0.315 | | |
| | Other | 5 | 2.90 | 0.279 | | |
| | Total | 313 | 3.07 | 0.431 | | |
Table 13.
Means and Standard Deviations of Perceptions of Classification and Progress Monitoring Practices by Years of Experience (N = 313).
| Variable | Years of Experience | N | Mean | Std. Deviation | F | Sig. |
|---|---|---|---|---|---|---|
| Perceptions of classification practices | Less than 5 years | 28 | 3.11 | 0.475 | 0.943 | 0.390 |
| | 5–10 years | 67 | 2.99 | 0.349 | | |
| | More than 10 years | 218 | 3.03 | 0.403 | | |
| | Total | 313 | 3.03 | 0.399 | | |
| Perceptions of progress monitoring | Less than 5 years | 28 | 3.18 | 0.554 | 1.589 | 0.206 |
| | 5–10 years | 67 | 3.01 | 0.418 | | |
| | More than 10 years | 218 | 3.07 | 0.417 | | |
| | Total | 313 | 3.07 | 0.431 | | |
Table 14.
Means and Standard Deviations of Perceptions of Classification and Progress Monitoring Practices by Educational Stage (N = 313).
| Variable | Educational Stage | N | Mean | Std. Deviation | F | Sig. |
|---|---|---|---|---|---|---|
| Perceptions of classification practices | Primary | 161 | 3.05 | 0.432 | 0.914 | 0.434 |
| | Intermediate | 48 | 3.05 | 0.335 | | |
| | Secondary | 73 | 3.00 | 0.402 | | |
| | Other | 31 | 2.93 | 0.281 | | |
| | Total | 313 | 3.03 | 0.399 | | |
| Perceptions of progress monitoring | Primary | 161 | 3.10 | 0.445 | 1.038 | 0.376 |
| | Intermediate | 48 | 3.04 | 0.422 | | |
| | Secondary | 73 | 3.06 | 0.448 | | |
| | Other | 31 | 2.96 | 0.312 | | |
| | Total | 313 | 3.07 | 0.431 | | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).