Systematic Review

Implementation of Artificial Intelligence Technologies for the Assessment of Students’ Attentional State: A Scoping Review

by
Rosabel Roig-Vila
1,*,
Paz Prendes-Espinosa
2 and
Miguel Cazorla
3
1
Department of General Teaching and Specific Teaching Methods, University of Alicante, 03690 Alicante, Spain
2
Department of Teaching and School Organisation, University of Murcia, 30100 Murcia, Spain
3
Department of Computer Science and Artificial Intelligence, University of Alicante, 03690 Alicante, Spain
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(11), 5990; https://doi.org/10.3390/app15115990
Submission received: 7 April 2025 / Revised: 12 May 2025 / Accepted: 14 May 2025 / Published: 26 May 2025

Abstract

Artificial intelligence (AI) has recently erupted into the field of education, offering novel opportunities, particularly in the analysis of student behaviour. There is a lack of knowledge on the use of AI in assessing attention; hence, a scoping review (ScR) is proposed. The aim is to explore and analyse the scientific literature related to such implementations in educational settings. We included empirical studies published in English between 2017 and 2023, focusing on the application of AI in formal learning environments. Theoretical reviews and studies conducted outside the field of education were excluded. The databases consulted were Scopus, Web of Science, and APA PsycInfo. The studies were selected by three independent reviewers using Rayyan, and the data were organised with predefined forms and analysed using VOSviewer. A total of 26 studies were identified. Research conducted in Asia (China) was predominant, although we found significant contributions from Europe and America. The methodological approaches were primarily experimental, focusing on mechanical observation and AI-based analytical techniques. The approaches adopted and the elements common to AI applications are discussed, highlighting implications for researchers, professionals and teachers.

1. Introduction

In the current educational scenario, the integration of artificial intelligence (AI) technologies has become a crucial area of research, driven by the growing number of applications and the interest they have generated, although this interest is, in some cases, accompanied by concern, worry, or even fear. In 2023, in his strikingly titled blog “The Age of AI has Begun”, Bill Gates defended the idea that the scientific fields most likely to see the advances and applications of AI were health and education [1].
This study addresses the potential applications of AI at its intersection with student attention, exploring the various tools and experiences shaping this line of research. The use of AI technologies to analyse students’ attention is a growing phenomenon. This approach not only involves a technological revolution, but also raises fundamental questions about the effectiveness, ethics, and pedagogical implications of such implementations. In this context, we seek to understand the adoption of these technologies and their impact on learning strategies and situations, as is the case in certain studies [2], but specifically, we are interested in identifying attention patterns.
This study delves into the theoretical basis underpinning initiatives and practices that seek to assess and improve student attention through the use of AI. In addition, the methodology and experimental designs used in these experiences play an essential role in validating the results. The different methodological approaches and types of experiments are explored. The various specific AI techniques and tools used to measure students’ attention levels are also examined. From image processing algorithms to machine learning models, we address the breadth and specialisation of technologies used in the assessment of attentional state.
Further analysis of these AI systems designed to detect attentional patterns in students is needed, and a systematic review of the work hitherto conducted is hence of great interest. This necessity justifies our scoping review, with the main aim being to analyse and discuss the literature on artificial intelligence technologies and their use in assessing students’ attentional state. The article is organised as follows: Section 2 presents the theoretical background of the research conducted; Section 3 covers the detailed methodology of the systematic review carried out; Section 4 presents the results obtained; Section 5 discusses these results; Section 6 presents the conclusions; and Section 7 describes the limitations.

2. Literature Review

Numerous applications of AI exist in all areas of our society, but its use in education requires particular exploration. AI has been shown to have interesting applications in the design and production of educational content [2], student assessment [3], designing innovative educational experiences incorporating AI [4], personalising training [5], and even promoting student engagement [6]. Changes are becoming particularly evident in higher education, making it crucial to examine the possibilities and risks of AI in this context [7], as well as to critically analyse the myths emerging around its use [8,9].
One of the ongoing lines of research is focused on analysing the use of AI either to monitor or to improve students’ attention. In this sense, attention is a basic element in the construction of learning and to ensure the effectiveness of communication in the educational process. As reported by Arciniegas et al. [10], students with a higher level of attention increase their retention capacity and therefore improve their learning, which is why they tend to be more academically successful. Studies have tracked students’ gaze to analyse what attracts their attention [11].
The research most closely linked to our study is that which uses AI as a tool to analyse students’ attention, both in face-to-face classroom situations and in e-learning models. Research has analysed how students express their attention levels and emotions through the analysis of facial expressions or body positions while in the classroom [12]. In this paper, the authors demonstrate how AI can help improve teaching strategies, as AI can determine the emotions that students reflect in their facial expressions and thus detect whether they are paying sufficient attention.
Along the same lines, other authors [13] complement the use of AI with biometric analysis tools (smart watches, sensors, cameras, etc.) to monitor students in online teaching systems, also suggesting that the resulting attention indicators can be used by teachers to redefine their teaching strategies. The research by Hou et al. [14] was also carried out in online teaching contexts, with the authors coinciding on the need for further research in this area, as the results show the effectiveness of identifying emotions. Another study focusing on these objectives is that by Trabelsi et al. [15], who also use AI to analyse emotions and attention through students’ facial expressions, reporting that the training of the AI system performed correctly in 76% of the situations. Other work shows similar results [16] and connect these technologies with “smart classroom” environments [17,18,19], with that by Hou et al. [14] also being of interest. Meanwhile, new environments have been tested [20,21,22] in order to analyse their educational possibilities.
Our research is based on the scoping review (ScR) method, with the aim of exploring the relevant literature on the use of AI technologies for identifying students’ attentional state during an educational activity and to describe the experiences being developed in this field of study. To this end, the following research questions are posed:
  • Question 1 (RQ1): Are AI technologies being used to analyse students’ attention or concentration levels?
  • Question 2 (RQ2): What theoretical frameworks underpin the experiences analysed on the use of artificial intelligence in educational settings?
  • Question 3 (RQ3): What methodologies and types of interventions have been used in studies implementing artificial intelligence in education?
  • Question 4 (RQ4): What types of AI techniques and tools are used to measure the attention span of learners?

3. Methodology

The ScR has been proposed as a method to explore, identify, evaluate and synthesise the relevant literature on a given research topic in a systematic, structured and reproducible way [23]. The methodology emerged in scientific disciplines in the 1980s with the aim of providing objectivity and rigour when developing the literature review process [24] and is used largely in health-related areas. However, it is becoming increasingly relevant in the social sciences [25] and, specifically, in studies on technology-related education [3,26,27].
This scoping review was conducted in accordance with the PRISMA 2020 statement [28], which updates the original PRISMA statement introduced by Moher et al. in 2009 [29], and its extension for scoping reviews, the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) [30]. The corresponding checklist was completed, and a flow diagram describing the identification, screening, and inclusion process of the studies is included. The review was not registered prior to being conducted, as there are currently no official platforms for registering scoping reviews.

3.1. Inclusion Criteria and Sample Selection

The research corpus focused on original double-blind peer-reviewed publications, following the criteria detailed below (Table 1):
  • (a) the research period was confined to the last 6 years (2017–2023), given the novelty of the topic under study;
  • (b) queries were performed on the title, abstract, and keywords, as they are considered to represent the essence of scholarly publications [31];
  • (c) the Scopus and WoS databases were used, as they are the main databases of quality scientific production [32,33], together with APA PsycInfo, one of the most extensive databases in the field of psychology [34];
  • (d) publications in English were selected, since English is the language of more than 75% of scientific production in the social sciences and 90% in the natural sciences [35,36], and the use of a single language facilitates the identification of lexical relationships in the research corpus [31];
  • (e) only papers presenting empirical primary research were considered;
  • (f) the subject of study had to be related to the use of artificial intelligence to measure students’ attention and concentration;
  • (g) the scope of the studies was education, considering all levels of education (primary, secondary, and higher).

3.2. Search Strategy

In accordance with the objective and the questions posed, the following keywords were considered for inclusion in the ScR search strategy: ‘Education’, ‘Artificial Intelligence or AI’, ‘Attention Level’ or ‘Concentration’, and ‘Students’. With these terms, and in line with the research period indicated in the previous subsection, the search protocol for each database was established as shown below (Table 2).
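As an illustration of how such a Boolean search string is typically assembled from these keyword blocks: the field codes `TITLE-ABS-KEY` (Scopus) and `TS` (Web of Science) are the databases' standard advanced-search syntax, but the exact grouping below is an assumption for the sketch, not the authors' protocol in Table 2.

```python
# Hedged sketch: OR within each concept block, AND between blocks.
blocks = [
    ['"artificial intelligence"', '"AI"'],
    ['"attention level"', '"concentration"'],
    ['"education"'],
    ['"students"'],
]

def build_query(field: str, blocks) -> str:
    # Join alternatives with OR inside each block, then AND the blocks.
    joined = [" OR ".join(b) for b in blocks]
    return field + "(" + " AND ".join(f"({g})" for g in joined) + ")"

scopus_query = build_query("TITLE-ABS-KEY", blocks)  # Scopus syntax
wos_query = build_query("TS", blocks)                # Web of Science syntax
```

The same block structure is reused per database; only the field code changes.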

3.3. Study Selection

The articles retrieved using the above keywords were screened independently by two reviewers, based on the content of the titles and abstracts, against the criteria described in the previous two subsections. A third reviewer intervened to make the final decision in the event of discrepancies. For this purpose, we used the artificial intelligence tool Rayyan [37], which facilitates selection and blind review between researchers.

3.4. Coding, Data Extraction, and Analysis

The process of coding and labelling the items was also supported by the Rayyan tool, as it allows codes to be assigned to items for categorisation. Subsequently, Excel 2021 software (Microsoft: Redmond, WA, USA) was used for creating spreadsheets and graphs. Meanwhile, VOSviewer software (version 1.6.20) [38] was used for creating and visualising the networks underpinning the bibliometric mapping analysis, supporting the exploration of relationships between elements, given its widespread use in multiple systematic reviews [39,40].
Finally, a critical assessment of the studies was conducted using the Joanna Briggs Institute (JBI) critical appraisal tools for cross-sectional and quasi-experimental studies [41]. The aim was to determine the reliability, relevance, and findings of the articles selected in the ScR. To this end, the three participating researchers jointly evaluated the studies identified in the scoping process.

4. Results

Following the indications of the PRISMA-ScR protocol (Figure 1), the inclusion and exclusion criteria, and the search strategy, the final phase of sampling and selection of the research corpus yielded a total of 26 publications.
The first search was conducted in December 2023 and yielded 229 initial results. After eliminating non-article records and duplicates, 177 articles were analysed.

4.1. A General Bibliometric Overview

Table 3 shows the main data of the selected studies: authors, year of publication, journal, authors’ country of origin, and professional affiliation.
Table 4 complements Table 3 with information on the type of study and a summary of the work conducted, including the theoretical foundations and the types of instruments applied. The table also includes an appraisal using the JBI tools [41] for each corresponding study type, along with a brief description and the score obtained for each study, according to the items fulfilled and the percentage of fulfilment. We accepted only the studies that scored over 60% on appraisal, with the review finally covering 26 studies.
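The appraisal cut-off can be sketched as a simple percentage-of-items-fulfilled filter; a minimal illustration in Python, where the study names and item counts are hypothetical rather than taken from Table 4:

```python
# Hedged sketch of the JBI appraisal cut-off: score = percentage of
# checklist items fulfilled; only studies scoring over 60% are retained.
def jbi_score(items_met: int, items_total: int) -> float:
    return 100 * items_met / items_total

# Hypothetical (items_met, items_total) per study, for illustration only.
studies = {"S1": (7, 8), "S2": (4, 9), "S3": (6, 9)}

accepted = {name for name, (met, tot) in studies.items()
            if jbi_score(met, tot) > 60}
# S1 scores 87.5% and S3 scores 66.7%, so both pass; S2 (44.4%) is excluded.
```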
Furthermore, the time distribution of the selected publications (Figure 2) shows that the turning point in the growth of interest in using AI tools to analyse student attention span came in 2021, after which it has continued to rise.
The changes in education generated by the enforced confinement during the COVID-19 pandemic highlighted the importance of digital and distance education, and, with it, the study of tools that can measure the attention span of students to improve their motivation and learning.
There is interest in studying the subject under analysis in both Europe and America. However, the majority of the publications identified in this review and for this subject are concentrated among research teams from Asia, particularly from China (Figure 3).
A few journals account for the largest number of studies on this topic, such as Sensors and Sustainability (Figure 4). MDPI is the main publisher of journals that include this type of study, followed by Springer and IEEE, an aspect that may be related to the total volume of publications of these publishing groups in recent years.

4.2. Keyword Analysis

To analyse the keywords in the selected articles, we first carried out a standardisation process based on the following criteria: (a) standardising singular and plural forms (e.g., intelligence and intelligences); (b) unifying synonyms under the same concept and refining terms to avoid potential semantic conflicts or conceptual redundancies; (c) eliminating neutral words not directly related to the topic, such as “article”, while keeping others that in their context did have a related meaning, such as “cognition” [66]. Once the keywords had been standardised and cleaned, the final list was extracted using the VOSviewer software [38], and the keywords are listed according to their relevance in the light of their frequency/presence in the corpus (Table 5).
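The standardisation steps (a)–(c) can be sketched as a small routine; this is an illustrative approximation, with a hypothetical synonym map and neutral-word list rather than the authors' actual criteria:

```python
# Hedged sketch of keyword standardisation:
# (a) merge singular/plural variants, (b) map synonyms to one concept,
# (c) drop neutral words unrelated to the topic.
SYNONYMS = {
    "intelligences": "intelligence",          # (a) plural -> singular
    "ai": "artificial intelligence",          # (b) synonym unification
}
NEUTRAL = {"article", "paper", "study"}       # (c) illustrative stop list

def standardise(keywords):
    out = []
    for kw in keywords:
        kw = kw.strip().lower()
        kw = SYNONYMS.get(kw, kw)             # apply (a) and (b)
        if kw in NEUTRAL:                     # apply (c)
            continue
        out.append(kw)
    return out
```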
The item “learning” clearly stands out among the main keywords in the analysis, given that the main objective of the use of AI in the studies was to improve learning. In second place comes the term “attention”, which highlights its importance in the studies analysed. This is followed by the item “indicator recognition”, which brings together various keywords in the standardisation process and refers to the measurement and recognition of various aspects and indicators for subsequent analysis, such as facial or postural recognition, gaze, actions, etc. “Intelligence” and “artificial intelligence”, in fourth and fifth place, are key terms for the analysis insofar as their use formed part of the methodology of the research selected. “Student behaviour” is also shown to be a relevant term, being the object of the research analysed. “Adaptive learning” is also prominent, as the aim of the studies was to improve learning systems based on measurements of students’ attention, with “data analysis” being central to their methodology. The extracted keywords can be contextualised using Figure 5, which illustrates their co-occurrence network, underlining both their incidence and interrelationships [67,68].
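A co-occurrence network of the kind VOSviewer renders rests on a simple count: two keywords are linked whenever they appear in the same article's keyword set, and the link weight is the number of such articles. A minimal sketch, using hypothetical keyword sets:

```python
from itertools import combinations
from collections import Counter

def cooccurrence(keyword_sets):
    # Count, for every keyword pair, how many articles contain both.
    counts = Counter()
    for kws in keyword_sets:
        for a, b in combinations(sorted(set(kws)), 2):
            counts[(a, b)] += 1
    return counts

# Hypothetical per-article keyword sets, for illustration only.
articles = [
    {"learning", "attention"},
    {"learning", "attention", "artificial intelligence"},
]
edges = cooccurrence(articles)
# ("attention", "learning") appears in both articles, so its weight is 2.
```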
This analysis uses 30 nodes with their corresponding terms extracted from the titles, keywords, and abstracts of the articles selected from the WoS, Scopus, and APA PsycInfo databases. As a result, four distinct clusters were established (Figure 5).
The first cluster, in blue, refers to artificial intelligence and several related technologies used in the studies, such as XGBoost [58], as well as online and adaptive learning. Cluster 2, in yellow, shows the relationship between the concepts of learning and attention, as well as the terms game-based learning and one of the techniques used in several studies to measure the level of attention, namely electroencephalography [42,52]. The green cluster relates the concept of intelligence and the recognition of attention indicators to cognition, theoretical models, teaching, machine learning, and the human–computer relationship. Finally, the red cluster relates student behaviour in the context of education to emotions, data analysis, object recognition, and privacy protection methods, among other keywords.
In short, the analysis of the co-occurrences of the keywords used in the studies selected provides an insight into how the research corpus is structured at the micro level in order to interpret the object of study.

4.3. Use of AI Technologies to Analyse Students’ Attention or Concentration Levels (RQ1)

In response to research question 1 (RQ1), it has been observed that numerous studies have been conducted to analyse and improve students’ attention or concentration, in the fields of both education and psychology, with the aim of improving educational quality and student performance [66]. Different techniques have traditionally been used for this purpose, most prominently observation and the application of various tests. The d2 Attention Test is one of the best known and most widely applied since its creation by Brickenkamp and Zillmer [69].
Technological advances in recent years, especially in artificial intelligence and its application in the field of education [70], have aroused researchers’ interest in studying student attention and concentration using new technological tools [71], among the many applications and developments of AI under way in education. The studies presented in this review are proof of this, especially considering the sharp growth in publications in the final year of the review period, 2023.

4.4. Theoretical Basis for the Experiments Being Developed (RQ2)

The main objective of the selected research is to analyse students’ attention and/or concentration level in order to improve learning, as stated in RQ2. For this reason, the studies are grounded in established theories of attention and learning.
Durães et al. [43], for example, build on Keefe [44], who argues that learning styles can be defined as cognitive, affective, and physiological characteristics that indicate how individuals interact with and respond to learning environments. Tau and Acebedo base their theoretical framework on Winne and Hadwin’s [45] information processing theory of self-regulated learning (SRL). Liang et al. propose that the use of an interactive learner feedback system can help increase learner concentration [46]. Behera et al. [47] base their work on the possibility of recognising students’ affective states in order to accordingly adapt teaching methodologies, as do Serrano-Mamolar et al. [48] and Hou et al. [14], who specifically focus on emotions. This is also the case in the study by Hossen and Uddin [58], who also refer to cognitive states.

4.5. Methodological Designs and Types of Experiences Proposed (RQ3)

In answer to RQ3, it has been observed that the methodological designs used were fundamentally experimental and grounded in mechanical observation through technological elements, with data analysis based on artificial intelligence. The participants (students) were exposed to various educational activities in order to measure the indicators that allow their levels of attention and/or concentration to be estimated and predicted.
On the other hand, it has also been noted that several studies were applied in specifically designed contexts, either because the technology used for measurement required it, as with electroencephalography or other elements [42,52], or to facilitate the performance of the experiments [45]. Other investigations were carried out in real classroom conditions with the intention of collecting information during regular lessons [49], mostly with small samples ranging from three or four cases [51] to classrooms of thirty to fifty students [49], although some studies used larger samples [61].
Similarly, it can be seen that several studies also combined data collection using technological devices with various questionnaires and tests to cross-check the data [14].
Finally, and again in response to RQ3, it should be noted that research has been carried out at different educational levels, from primary [48,50,63,65] and secondary [43,44,46] to university level [45,47], in some cases also involving teachers [50,58].

4.6. AI Techniques and Tools Used to Measure Learners’ Attention Level (RQ4)

The analysis of the selected studies has enabled us to answer RQ4, showing that the techniques and technological tools used to measure the level of student attention are varied, including the use of electroencephalograms [42,52], video cameras to capture body and/or facial images [15,45,47,48,49,52,58,60,65], analysis of mouse and keyboard use [41,42], the use of wristbands or bracelets to measure biometric indicators [45,54], physiological sensors [48], and the recording of students’ hand gestures [44].
Depending on the research approach and the techniques and tools used, different types of information were collected. These included eye tracking or tracking of facial expressions [45,47,49,60] and biometric data [45,54], which were analysed with artificial intelligence systems for two purposes. On the one hand, the intention was to determine students’ levels of attention and concentration and, on the other, some of the research aimed to generate and enrich datasets for future use in AI applications in the classroom [48,63].

5. Discussion

The results obtained in this systematic review show a growing interest in the implementation of AI technologies for assessing students’ attentional state. The literature reviewed shows that most studies adopted experimental methodological approaches, based on automated observation using AI technologies and data analysis through machine learning algorithms [10,12]. These findings are in line with previous research highlighting the potential of AI in education for measuring attention and recognising behavioural patterns [11,14].
One of the main aspects identified is the predominance of studies conducted in Asia, especially in China, suggesting a consolidation of research lines in this geographical context [14,15,17]. Although there is interest in Europe and America, scientific production in these regions is comparatively lower, suggesting the need to extend the analysis to different educational and cultural environments to validate the applicability of these technologies in different contexts [13,19].
From a methodological perspective, most of the studies used face and body recognition techniques to analyse students’ attentional state in the classroom [12,15,19]. This trend is consistent with previous studies demonstrating the effectiveness of these methods in inferring students’ attentional state and engagement in real time [14,16]. However, authors have warned of the need to consider ethical issues, such as student privacy and informed consent in the use of these technologies [9,17]. The studies reviewed also reflect a diversification in the tools used. While some studies used video cameras and computer vision algorithms to assess attention [12,15], others combined these methods with biometric sensors, such as smart bracelets or electroencephalograms, in order to obtain more accurate measurements [13,18]. This combination of techniques allows for a triangulation of data that can improve the reliability of results but poses additional challenges in terms of implementation in real educational settings [16,19].
The findings also highlight the need to establish clear policy frameworks for the integration of AI in the assessment of students’ attentional state. While some studies highlighted the potential of these technologies to optimise pedagogical strategies and personalise teaching [5,7], others warned about the risk of over-reliance on automated systems, which could affect the teacher–student relationship and autonomy in the learning process [8,9].
In terms of effectiveness, the studies report accuracy levels in detecting students’ attentional state ranging from 70% to 90%, depending on the technology used and the AI model implemented [14,15,16]. However, the variability in the results suggests the need to standardise assessment procedures and to develop more robust models capable of adapting to different educational settings [12,19].
The results of this systematic review have important implications for both educational practice and research. In practical terms, the use of artificial intelligence technologies to measure student attention can promote personalised teaching. These tools allow teachers to adapt their teaching strategies based on real-time indicators of the group’s level of engagement [14,15,46]. However, it is essential to define ethical protocols and regulatory frameworks that protect privacy and ensure informed consent, especially in contexts involving minors [9,50].
Several future lines of research are emerging. These include the design of more adaptive and less invasive AI models, the validation of these technologies in diverse educational settings, and the analysis of their impact on students’ academic performance and emotional well-being [57,59].
It is also important to emphasise the urgent need to study the balance between automation and teacher autonomy. It is essential that these tools support the educational task without replacing professional judgement [5,7]. Although AI technologies show great technical potential, their pedagogical implications require attention. Over-reliance on these systems can simplify the complexity of student behaviour, displace pedagogical decision-making and encourage behaviourist approaches [7]. In addition, algorithmic biases that disadvantage students with non-normative attention styles or diverse cultural backgrounds have been identified, which can lead to unfair educational decisions [8,9]. For all these reasons, it is essential to consolidate the role of teachers as critical mediators of learning and to ensure that these tools support their work without replacing contextualised pedagogical understanding [5].

6. Conclusions

The analysis confirms the growing interest in the application of artificial intelligence for the assessment of students’ attentional state and its potential to improve teaching and pedagogical decision-making. Most of the studies reviewed employed experimental methodologies based on automated observation and data analysis using advanced algorithms, allowing for more accurate identification of attention patterns. However, there remain challenges related to privacy, implementation in real educational settings, and the validation of these technologies in different contexts. Scientific production is concentrated in certain regions, which highlights the need to extend studies to other educational systems. Despite the progress made, the diversity of approaches and tools used hinders comparison of the results, highlighting the need for standardised models to enhance their applicability.
In addition to the findings obtained, this review identifies relevant gaps in the literature and formulates useful recommendations for advancing the use of AI in education, such as the following. Firstly, there is a need to design more accessible and ethically sustainable artificial intelligence systems. These systems must be able to be effectively integrated into real classrooms without compromising student privacy or autonomy. Secondly, it is recommended that teacher training in the critical and pedagogical use of these technologies be strengthened. This preparation is key to avoiding excessive dependence on automated systems and to maintaining the role of teachers as mediators of learning. Thirdly, it is suggested that future research adopt a comparative and interdisciplinary approach in order to analyse the effectiveness of these systems at different educational levels and in different sociocultural contexts.
In summary, the application of AI technologies to measure student attention represents a promising avenue for personalising teaching and improving educational processes. However, their effective integration requires ensuring ethics, equity, and the active participation of teachers. In the short term, pilot projects should be launched in real environments, and collaboration between disciplines should be promoted to ensure responsible development. In addition, future research should consider the technical effectiveness as well as the emotional, social, and educational impact of these tools in diverse contexts.
Similarly, recommendations can be made for different stakeholders. It is a priority for education professionals to receive training in the critical and pedagogical use of AI technologies, integrating them as a support for teaching judgement and not as a substitute for it. Policy makers should promote clear regulatory frameworks that ensure privacy protection and fairness in the implementation of these tools, especially in contexts involving minors. Finally, the research community is encouraged to expand studies in diverse contexts, explore the longitudinal impact of these technologies on learning, and move towards more transparent, ethical, and culturally sensitive models.

7. Limitations

Although the review followed rigorous procedures and relied on three of the most comprehensive and authoritative academic databases (Scopus, Web of Science, and APA PsycInfo), it is possible that some relevant studies were not identified due to being published in other languages or indexed in less accessible or less widely used sources. In addition, literature not formally published, such as grey literature, was not included, which may have limited the overall breadth of the evidence base.
In this regard, the decision to include only English-language publications, taken to make the review process more efficient, may have introduced a linguistic bias affecting the geographical representativeness of the findings. This limitation may have reduced the visibility of relevant research published in other languages, especially in regions such as Latin America, Eastern Europe, and French-speaking Africa. Such bias is a recognised concern in contemporary systematic reviews and should be taken into consideration [35,36].

Author Contributions

Conceptualization, R.R.-V.; research design, R.R.-V. and P.P.-E.; data curation, R.R.-V. and M.C.; formal analysis, R.R.-V. and P.P.-E.; writing—original draft preparation, R.R.-V.; writing—review and editing, P.P.-E. and M.C.; funding acquisition, M.C. and R.R.-V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital, Generalitat Valenciana, grant number CIPROM/2021/017 (PROMETEU 2022).

Data Availability Statement

All data extracted and analysed during this study are available from the corresponding author upon reasonable request.

Acknowledgments

We would like to thank the “Direcció General de Ciència i Investigació de la Generalitat Valenciana” for awarding us the grant ID CIPROM/2021/017 within the Prometheus programme for research groups of excellence, thanks to which we have been able to present this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gates, B. Gates Notes. Available online: https://www.gatesnotes.com/The-Age-of-AI-Has-Begun (accessed on 20 March 2025).
  2. Parraga, J.A.; Zambrano, R.M.M.; Cevallos, L.A.T. La personalización del aprendizaje: Estrategias de adaptación de contenido con inteligencia artificial en entornos educativos. Educ. Vínculos Rev. Estud. Interdiscip. Educ. 2024, 13, 64–77. [Google Scholar] [CrossRef]
  3. González-Calatayud, V.; Prendes-Espinosa, P.; Roig-Vila, R. Artificial Intelligence for Student Assessment: A Systematic review. Appl. Sci. 2021, 11, 5467. [Google Scholar] [CrossRef]
  4. Valeri, F.; Nilsson, P.; Cederqvist, A.-M. Exploring students’ experience of CHATGPT in STEM education. Comput. Educ. Artif. Intell. 2024, 8, 100360. [Google Scholar] [CrossRef]
  5. Parra-Sánchez, J.S. Potencialidades de la Inteligencia Artificial en Educación Superior: Un Enfoque desde la Personalización. Rev. Docentes 2.0 2022, 14, 19–27. [Google Scholar] [CrossRef]
  6. Chen, X.; Xie, H.; Qin, S.J.; Wang, F.L.; Hou, Y. Artificial Intelligence-Supported Student Engagement Research: Text mining and Systematic analysis. Eur. J. Educ. 2025, 60, e70008. [Google Scholar] [CrossRef]
  7. García, O.C. Inteligencia Artificial en Educación Superior: Oportunidades y Riesgos. Rev. Interuniv. Investig. Tecnol. Educ. 2023, 15, 16–27. [Google Scholar] [CrossRef]
  8. Giray, L. Ten Myths about Artificial Intelligence in Education. High. Learn. Res. Commun. 2024, 14. [Google Scholar] [CrossRef]
  9. Prendes-Espinosa, M.P. La revolución de la Inteligencia Artificial en tiempos de negacionismo tecnológico. Rev. Interuniv. Investig. Tecnol. Educ. 2023, 15, 1–15. [Google Scholar] [CrossRef]
  10. Arciniegas, D.F.T.; Amaya, M.; Carvajal, A.P.; Rodriguez-Marin, P.A.; Duque-Muñoz, L.; Martinez-Vargas, J.D. Students’ Attention Monitoring System in Learning Environments based on Artificial Intelligence. IEEE Lat. Am. Trans. 2021, 20, 126–132. [Google Scholar] [CrossRef]
  11. Xu, H.; Zhang, J.; Sun, H.; Qi, M.; Kong, J. Analyzing students’ attention by gaze tracking and object detection in classroom teaching. Data Technol. Appl. 2023, 57, 643–667. [Google Scholar] [CrossRef]
  12. Marquez-Carpintero, L.; Pina-Navarro, M.; Suescun-Ferrandiz, S.; Escalona, F.; Gomez-Donoso, F.; Roig-Vila, R.; Cazorla, M. Artificial intelligence-based system for detecting attention levels in students. J. Vis. Exp. 2023, 202, e65931. [Google Scholar] [CrossRef] [PubMed]
  13. Alsayigh, H.K.S.; Khidhir, A.S.M. Using IoT to predict student attention levels in e-learning classes: A review. AIP Conf. Proc. 2024, 3091, 040009. [Google Scholar] [CrossRef]
  14. Hou, C.; Ai, J.; Lin, Y.; Guan, C.; Li, J.; Zhu, W. Evaluation of online teaching quality based on facial expression recognition. Future Internet 2022, 14, 177. [Google Scholar] [CrossRef]
  15. Trabelsi, Z.; Alnajjar, F.; Parambil, M.M.A.; Gochoo, M.; Ali, L. Real-Time Attention Monitoring System for Classroom: A deep learning approach for student’s behavior recognition. Big Data Cogn. Comput. 2023, 7, 48. [Google Scholar] [CrossRef]
  16. Nguyen, D.D.; Nguyen, X.H.; Than, T.; Nguyen, M.S. Automated attendance system in the classroom using artificial intelligence and internet of things technology. In Proceedings of the 2021 8th NAFOSTED Conference on Information and Computer Science (NICS), Hanoi, Vietnam, 21–22 December 2021; pp. 531–536. [Google Scholar] [CrossRef]
  17. Parambil, M.M.A.; Ali, L.; Alnajjar, F.; Gochoo, M. Smart Classroom: A Deep Learning Approach towards Attention Assessment through Class Behavior Detection. In Proceedings of the 2022 Advances in Science and Engineering Technology International Conferences (ASET), Dubai, United Arab Emirates, 21–24 February 2022; pp. 1–6. [Google Scholar] [CrossRef]
  18. Martínez-Ballesté, A.; Batista, E.; Figueroa, E.; Torruella, G.F.; Llurba, C.; Quiles-Rodríguez, J.; Unciti, O.; Palau, R. A proposal for the smart classroom infrastructure using IoT and artificial intelligence. In Proceedings of the 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC), Osaka, Japan, 2–4 July 2024; pp. 109–114. [Google Scholar] [CrossRef]
  19. Llurba, C.; Fretes, G.; Palau, R. Classroom emotion monitoring based on image processing. Sustainability 2024, 16, 916. [Google Scholar] [CrossRef]
  20. Jeong, M.; Park, J.; Oh, S.H. Cyber Environment Test Framework for Simulating Command and Control Attack Methods with Reinforcement Learning. Appl. Sci. 2025, 15, 2120. [Google Scholar] [CrossRef]
  21. Cazorla, M.; Roig Vila, R. Interdisciplinary collaboration in the elaboration of datasets on Artificial Intelligence and Education. The case of MEEBAI. In Proceedings of the 3rd International Congress: Education and Knowledge, Alicante, Spain, 6–7 June 2024; Octaedro: Barcelona, Spain, 2024; p. 20. [Google Scholar]
  22. Martinez-Roig, R. Robots sociales, música y movimiento: Percepciones de las personas mayores sobre el robot Pepper para su formación (Social robots, music and movement: Older people’s perceptions of the Pepper training robot). Pixel-Bit Rev. Medios Educ. 2024, 70, 25–41. [Google Scholar] [CrossRef]
  23. Gough, D.; Oliver, S.; Thomas, J. Introduction to Systematic Reviews, 2nd ed.; Sage: London, UK, 2017. [Google Scholar]
  24. Koseoglu, M.A.; Rahimi, R.; Okumus, F.; Liu, J. Bibliometric studies in tourism. Ann. Tour. Res. 2016, 61, 180–198. [Google Scholar] [CrossRef]
  25. Chapman, K. Characteristics of systematic reviews in the social sciences. J. Acad. Libr. 2021, 47, 102396. [Google Scholar] [CrossRef]
  26. Hamilton, D.; McKechnie, J.; Edgerton, E.; Wilson, C. Immersive virtual reality as a pedagogical tool in education: A systematic literature review of quantitative learning outcomes and experimental design. J. Comput. Educ. 2021, 8, 1–32. [Google Scholar] [CrossRef]
  27. Chu, S.-T.; Hwang, G.-J.; Tu, Y.-F. Artificial intelligence-based robots in education: A systematic review of selected SSCI publications. Comput. Educ. Artif. Intell. 2022, 3, 100091. [Google Scholar] [CrossRef]
  28. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar]
  29. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef]
  30. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.; Horsley, T.; Weeks, L.; et al. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef]
  31. Dogan, M.E.; Goru Dogan, T.; Bozkurt, A. The Use of Artificial Intelligence (AI) in Online Learning and Distance Education Processes: A Systematic Review of Empirical Studies. Appl. Sci. 2023, 13, 3056. [Google Scholar] [CrossRef]
  32. Zhu, J.; Liu, W. A tale of two databases: The use of Web of Science and Scopus in academic papers. Scientometrics 2020, 123, 321–335. [Google Scholar] [CrossRef]
  33. Martín-Martín, A.; Thelwall, M.; Orduna-Malea, E.; López-Cózar, E.D. Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations’ COCI: A multidisciplinary comparison of coverage via citations. Scientometrics 2021, 126, 871–906. [Google Scholar] [CrossRef]
  34. VandenBos, G.R. From print to digital (1985–2015): APA’s evolving role in psychological publishing. Am. Psychol. 2017, 72, 837–847. [Google Scholar] [CrossRef]
  35. Hamel, R.E. The dominance of English in the international scientific periodical literature and the future of language use in science. Aila Rev. 2007, 20, 53–71. [Google Scholar] [CrossRef]
  36. Ramírez-Castañeda, V. Disadvantages in preparing and publishing scientific papers caused by the dominance of the English language in science: The case of Colombian researchers in biological sciences. PLoS ONE 2020, 15, e0238372. [Google Scholar] [CrossRef]
  37. Ouzzani, M.; Hammady, H.; Fedorowicz, Z.; Elmagarmid, A. Rayyan—A web and mobile app for systematic reviews. Syst. Rev. 2016, 5, 210. [Google Scholar] [CrossRef] [PubMed]
  38. Van Eck, N.J.; Waltman, L. Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 2010, 84, 523–538. [Google Scholar] [CrossRef] [PubMed]
  39. Yu, L.H.; Yu, Z.G. Qualitative and quantitative analyses of artificial intelligence ethics in education using VOSviewer and CitNetExplorer. Front. Psychol. 2023, 14, 1061778. [Google Scholar] [CrossRef]
  40. Danese, M.; Gioia, D. Spatial Analysis for Landscape Changes: A Bibliometric Review. Appl. Sci. 2021, 11, 10078. [Google Scholar] [CrossRef]
  41. Barker, T.H.; Habibi, N.; Aromataris, E.; Stone, J.C.; Leonardi-Bee, J.; Sears, K.; Hasanoff, S.; Klugar, M.; Tufanaru, C.; Moola, S.; et al. The revised JBI critical appraisal tool for the assessment of risk of bias quasi-experimental studies. JBI Evid. Synth. 2024, 22, 378–388. [Google Scholar] [CrossRef]
  42. Chen, C.H. Measuring the differences between traditional learning and game-based learning using electroencephalography (EEG) physiologically based methodology. J. Interact. Learn. Res. 2017, 28, 221–233. [Google Scholar]
  43. Durães, D.; Carneiro, D.; Jiménez, A.; Novais, P. Characterizing attentive behavior in intelligent environments. Neurocomputing 2018, 272, 46–54. [Google Scholar] [CrossRef]
  44. Durães, D.; Toala, R.; Gonçalves, F.; Novais, P. Intelligent tutoring system to improve learning outcomes. AI Commun. 2019, 32, 161–174. [Google Scholar] [CrossRef]
  45. Taub, M.; Azevedo, R. How Does Prior Knowledge Influence Eye Fixations and Sequences of Cognitive and Metacognitive SRL Processes during Learning with an Intelligent Tutoring System? Int. J. Artif. Intell. Educ. 2019, 29, 1–28. [Google Scholar] [CrossRef]
  46. Liang, J.M.; Su, W.C.; Chen, Y.L.; Wu, S.L.; Chen, J.J. Smart Interactive Education System Based on Wearable Devices. Sensors 2019, 19, 3260. [Google Scholar] [CrossRef]
  47. Behera, A.; Matthew, P.; Keidel, A.; Vangorp, P.; Fang, H.; Canning, S. Associating Facial Expressions and Upper-Body Gestures with Learning Tasks for Enhancing Intelligent Tutoring Systems. Int. J. Artif. Intell. Educ. 2020, 30, 236–270. [Google Scholar] [CrossRef]
  48. Serrano-Mamolar, A.; Arevalillo-Herráez, M.; Chicote-Huete, G.; Boticario, J.G. An Intra-Subject Approach Based on the Application of HMM to Predict Concentration in Educational Contexts from Nonintrusive Physiological Signals in Real-World Situations. Sensors 2021, 21, 1777. [Google Scholar] [CrossRef] [PubMed]
  49. Zhao, J.; Li, J.; Jia, J. A study on posture-based teacher-student behavioral engagement pattern. Sust. Cities Soc. 2021, 67, 102749. [Google Scholar] [CrossRef]
  50. Araya, R.; Sossa-Rivera, J. Automatic Detection of Gaze and Body Orientation in Elementary School Classrooms. Front. Robot. AI 2021, 8, 729832. [Google Scholar] [CrossRef]
  51. Gao, Q.; Tan, Y. Impact of Different Styles of Online Course Videos on Students’ Attention During the COVID-19 Pandemic. Front. Public Health 2022, 10, 858780. [Google Scholar] [CrossRef]
  52. Shen, Y. Analysis and Research on the Characteristics of Modern English Classroom. Sci. Program. 2022, 2022, 2211468. [Google Scholar] [CrossRef]
  53. Villegas-Ch., W.; García-Ortiz, J.; Urbina-Camacho, I.; Mera-Navarrete, A. Proposal for a System for the Identification of the Concentration of Students Who Attend Online Educational Models. Computers 2023, 12, 74. [Google Scholar] [CrossRef]
  54. Södergård, C.; Laakko, T. Inferring Students’ Self-Assessed Concentration Levels in Daily Life Using Biosignal Data From Wearables. IEEE Access 2023, 11, 30308–30323. [Google Scholar] [CrossRef]
  55. Ma, Y.; Wei, Y.; Shi, Y.; Li, X.; Tian, Y.; Zhao, Z. Online Learning Engagement Recognition Using Bidirectional Long-Term Recurrent Convolutional Networks. Sustainability 2023, 15, 198. [Google Scholar] [CrossRef]
  56. Dimitriadou, E.; Lanitis, A. Student Action Recognition for Improving Teacher Feedback During Tele-Education. IEEE Trans. Learn. Technol. 2023, 17, 569–584. [Google Scholar] [CrossRef]
  57. Kim, S.; Kim, J.-H.; Hyung, W.; Shin, S.; Choi, M.J.; Kim, D.H.; Im, C.-H. Characteristic Behaviors of Elementary Students in a Low Attention State During Online Learning Identified Using Electroencephalography. IEEE Trans. Learn. Technol. 2023, 17, 619–628. [Google Scholar] [CrossRef]
  58. Hossen, M.K.; Uddin, M.S. Attention monitoring of students during online classes using XGBoost classifier. Comput. Educ. Artif. Intell. 2023, 5, 100191. [Google Scholar] [CrossRef]
  59. Simonetti, I.; Tamborra, L.; Giorgi, A.; Ronca, V.; Vozzi, A.; Aricò, P.; Borghini, G.; Sciaraffa, N.; Trettel, A.; Babiloni, F.; et al. Neurophysiological Evaluation of Students’ Experience during Remote and Face-to-Face Lessons: A Case Study at Driving School. Brain Sci. 2023, 13, 95. [Google Scholar] [CrossRef]
  60. Alkabbany, I.; Ali, A.M.; Foreman, C.; Tretter, T.; Hindy, N.; Farag, A. An Experimental Platform for Real-Time Students Engagement Measurements from Video in STEM Classrooms. Sensors 2023, 23, 1614. [Google Scholar] [CrossRef]
  61. Zhu, Z.; Zheng, X.Q.; Ke, T.P.; Chai, G.F. Emotion recognition in learning scenes supported by smart classroom and its application. Trait. Signal 2023, 40, 751–758. [Google Scholar] [CrossRef]
  62. Hasnine, M.N.; Nguyen, H.T.; Tran, T.T.T.; Bui, H.T.T.; Akçapınar, G.; Ueda, H. A Real-Time Learning Analytics Dashboard for Automatic Detection of Online Learners’ Affective States. Sensors 2023, 23, 4243. [Google Scholar] [CrossRef]
  63. Wang, Z.; Yao, J.; Zeng, C.; Li, L.; Tan, C. Students’ Classroom Behavior Detection System Incorporating Deformable DETR with Swin Transformer and Light-Weight Feature Pyramid Network. Systems 2023, 11, 372. [Google Scholar] [CrossRef]
  64. Pi, Z.; Zhang, Y.; Yu, Q.; Yang, J. Difficulty level moderates the effects of another’s presence as spectator or co-actor on learning from video lectures. Educ. Technol. Res. Dev. 2023, 71, 1887–1915. [Google Scholar] [CrossRef]
  65. Mudawi, N.A.; Pervaiz, M.; Alabduallah, B.I.; Alazeb, A.; Alshahrani, A.; Alotaibi, S.S.; Jalal, A. Predictive Analytics for Sustainable E-Learning: Tracking Student Behaviors. Sustainability 2023, 15, 14780. [Google Scholar] [CrossRef]
  66. Bhanji, F.; Gottesman, R.; de Grave, W.; Steinert, Y.; Winer, L.R. The retrospective pre-post: A practical method to evaluate learning from an educational program. Acad. Emerg. Med. 2012, 19, 189–194. [Google Scholar] [CrossRef]
  67. Montoya Restrepo, I.; Valencia Arias, A.; Montoya Restrepo, A. Field mapping of knowledge in entrepreneurial intentions through analysis of social knowledge network. Ingeniare Rev. Chil. Ing. 2016, 24, 337–350. [Google Scholar] [CrossRef]
  68. Su, H.N.; Lee, P.C. Mapping knowledge structure by keyword co-occurrence: A first look at journal papers in Technology Foresight. Scientometrics 2010, 85, 65–79. [Google Scholar] [CrossRef]
  69. Brickenkamp, R.; Zilmer, E. The d2 Test of Attention (d2); Hogrefe and Huber Publishers: Newburyport, MA, USA, 1998. [Google Scholar] [CrossRef]
  70. Ouyang, R.; Jiao, P. Artificial intelligence in education: The three paradigms. Comput. Educ. Artif. Intell. 2021, 2, 100020. [Google Scholar] [CrossRef]
  71. Chen, X.; Zou, D.; Xie, H.; Cheng, G.; Liu, C. Two Decades of Artificial Intelligence in Education: Contributors, Collaborations, Research Topics, Challenges, and Future Directions. Educ. Technol. Soc. 2022, 25, 28–47. Available online: https://www.jstor.org/stable/48647028 (accessed on 19 May 2025).
Figure 1. PRISMA flow diagram of the scoping review [28,30].
Figure 2. Time trend of AI in publications on measuring level of attention in education.
Figure 3. Distribution of publications by countries (one publication can be coded for more than one country).
Figure 4. List of principal journals.
Figure 5. Keyword co-occurrence network with 30 items, created using the VOSviewer software [38].
Table 1. Inclusion and exclusion criteria.
Inclusion Criteria | Exclusion Criteria
Published 2017–2023 | Published before 2017
Original papers in peer-reviewed journals with a double-blind review process | Not original papers (review articles, protocols, conference papers, reports, etc.); not a peer-reviewed journal
English language | Not in English
Empirical and primary research | Not empirical and primary (e.g., reviews)
Studies in the field of education (primary, secondary, and higher) | Other fields of study (e.g., health)
Object of study: use of artificial intelligence to measure students' level of attention or concentration | No artificial intelligence; no measurement of attention or concentration level
Table 2. Information on the research corpus and search queries for the inclusion criteria.
Research Corpus
Databases | Web of Science (WOS), Scopus, and APA PsycInfo
Period | 2017–2023 (inclusive)
Search Queries
WOS | (((ALL = (EDUCATION)) AND ALL = (ARTIFICIAL INTELLIGENCE OR AI)) AND ALL = (attention level OR concentration)) AND ALL = (STUDENT)
Scopus | (TITLE-ABS-KEY (education) AND TITLE-ABS-KEY (artificial AND intelligence OR AI) AND TITLE-ABS-KEY (level AND attention OR concentration) AND TITLE-ABS-KEY (student)) AND PUBYEAR > 2016
APA PsycInfo | Education AND "artificial intelligence OR AI" AND attention level OR concentration AND student
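As an illustration of the screening pipeline, the sketch below shows one way records exported from the three databases could be merged and deduplicated before being uploaded to Rayyan for independent screening. This is not the procedure reported by the authors; the record fields, sample titles, and DOI values are hypothetical placeholders.

```python
def normalise_title(title: str) -> str:
    """Lower-case and strip punctuation/whitespace so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    """Keep the first occurrence of each record, matching by DOI when present,
    otherwise by normalised title."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalise_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical exports: the same study often appears in more than one database.
wos = [{"title": "Smart Interactive Education System Based on Wearable Devices",
        "doi": "10.1000/example-1"}]
scopus = [{"title": "Smart Interactive Education System Based on Wearable Devices",
           "doi": "10.1000/example-1"},
          {"title": "Evaluation of online teaching quality based on facial expression recognition",
           "doi": None}]
psycinfo = [{"title": "Evaluation of Online Teaching Quality Based on Facial Expression Recognition",
             "doi": None}]

merged = deduplicate(wos + scopus + psycinfo)
print(len(merged))  # → 2 unique records
```

Matching by DOI first and falling back to a normalised title is a common heuristic because databases export the same article with differing capitalisation and punctuation.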
Table 3. Main data of the selected articles.
Authors | Year | Title | Journal | Country | Author Affiliation
Chen, C.H. [42] | 2017 | Measuring the differences between traditional learning and game-based learning using electroencephalography (EEG) physiologically based methodology | Journal of Interactive Learning Research | Taiwan | Education
Durães et al. [43] | 2018 | Characterising attentive behaviour in intelligent environments | Neurocomputing | Portugal | Computer Science
Durães et al. [44] | 2019 | Intelligent tutoring system to improve learning outcomes | AI Communications | Portugal | Computer Science
Taub and Azevedo [45] | 2019 | How Does Prior Knowledge Influence Eye Fixations and Sequences of Cognitive and Metacognitive SRL Processes during Learning with an Intelligent Tutoring System? | International Journal of Artificial Intelligence in Education | USA | Psychology
Liang et al. [46] | 2019 | Smart Interactive Education System Based on Wearable Devices | Sensors | Taiwan | Computer Science, Engineering, Medicine
Behera et al. [47] | 2020 | Associating Facial Expressions and Upper-Body Gestures with Learning Tasks for Enhancing Intelligent Tutoring Systems | International Journal of Artificial Intelligence in Education | UK | Computer Science
Serrano-Mamolar et al. [48] | 2021 | An Intra-Subject Approach Based on the Application of HMM to Predict Concentration in Educational Contexts from Nonintrusive Physiological Signals in Real-World Situations | Sensors | Spain | Artificial Intelligence and Computer Science
Zhao et al. [49] | 2021 | A study on posture-based teacher–student behavioural engagement pattern | Sustainable Cities and Society | China | Information Science and Technology
Araya and Sossa-Rivera [50] | 2021 | Automatic Detection of Gaze and Body Orientation in Elementary School Classrooms | Frontiers in Robotics and AI | Chile | Education
Gao and Tan [51] | 2022 | Impact of Different Styles of Online Course Videos on Students' Attention During the COVID-19 Pandemic | Frontiers in Public Health | China | Economics and Artificial Intelligence
Shen [52] | 2022 | Analysis and Research on the Characteristics of Modern English Classroom Learners' Concentration Based on Deep Learning | Scientific Programming | China, Spain | Science and Technology; Linguistics, Literature and Translation
Hou et al. [14] | 2022 | Evaluation of Online Teaching Quality Based on Facial Expression Recognition | Future Internet | China | Information and Communication Engineering
Villegas et al. [53] | 2023 | Proposal for a System for the Identification of the Concentration of Students Who Attend Online Educational Models | Computers | Ecuador | Information Technology Engineering; Philosophy, Arts and Educational Sciences
Södergård and Laakko [54] | 2023 | Inferring Students' Self-Assessed Concentration Levels in Daily Life Using Biosignal Data From Wearables | IEEE Access | Finland | Technical Research
Trabelsi et al. [15] | 2023 | Real-Time Attention Monitoring System for Classroom: A Deep Learning Approach for Student's Behaviour Recognition | Big Data and Cognitive Computing | United Arab Emirates | Computer Science and Software Engineering
Ma et al. [55] | 2023 | Online Learning Engagement Recognition Using Bidirectional Long-Term Recurrent Convolutional Networks | Sustainability | China | Educational Technology, Artificial Intelligence in Education
Dimitriadou and Lanitis [56] | 2023 | Student Action Recognition for Improving Teacher Feedback During Tele-Education | IEEE Transactions on Learning Technologies | Cyprus | Technology
Kim et al. [57] | 2023 | Characteristic Behaviors of Elementary Students in a Low Attention State During Online Learning Identified Using Electroencephalography | IEEE Transactions on Learning Technologies | Republic of Korea | Education, Artificial Intelligence, Electronic Engineering, Biomedical
Hossen and Uddin [58] | 2023 | Attention monitoring of students during online classes using XGBoost classifier | Computers and Education: Artificial Intelligence | Bangladesh | Computer Science and Engineering
Simonetti et al. [59] | 2023 | Neurophysiological Evaluation of Students' Experience during Remote and Face-to-Face Lessons: A Case Study at Driving School | Brain Sciences | Italy | Computer Science and Technology; Anatomical, Histological, Forensic and Orthopaedic Sciences; Industrial Neuroscience
Alkabbany et al. [60] | 2023 | An Experimental Platform for Real-Time Students Engagement Measurements from Video in STEM Classrooms | Sensors | USA | Computer Engineering; Education and Human Development; Psychology
Zhu et al. [61] | 2023 | Emotion Recognition in Learning Scenes Supported by Smart Classroom and Its Application | Traitement du Signal | China | Information Engineering
Hasnine et al. [62] | 2023 | A Real-Time Learning Analytics Dashboard for Automatic Detection of Online Learners' Affective States | Sensors | Türkiye, Japan | Computing and Multimedia; Computer Education and Instructional Technology
Wang et al. [63] | 2023 | Students' Classroom Behaviour Detection System Incorporating Deformable DETR with Swin Transformer and Light-Weight Feature Pyramid Network | Systems | China | Technology, Education
Pi et al. [64] | 2023 | Difficulty level moderates the effects of another's presence as spectator or co-actor on learning from video lectures | Educational Technology Research and Development | China | Teaching Technology, Artificial Intelligence in Education
Mudawi et al. [65] | 2023 | Predictive Analytics for Sustainable E-Learning: Tracking Student Behaviors | Sustainability | Pakistan, Saudi Arabia | Computer Science and Artificial Intelligence
Table 4. Description of the studies and critical appraisal using JBI.
AuthorsType of StudySummaryDetailed JBI Appraisal% JBI
Chen, C.H. (2017) [42]Quasi-experimentalStudy comparing traditional vs. game-based learning using EEG. Grounded in engagement theory. EEG was used to measure attention and relaxation.Comparative design to examine traditional learning vs. game-based learning. Despite there being no standard control group, attentional differences are identified using EEG. The measurements are valid, although certain details are lacking in the analysis.6/9 (66%)
Durães et al. (2018) [43]Cross-sectionalObservation of attentional behaviour in smart classrooms. Based on human–computer interaction. Uses non-invasive sensor and neural networks.Observational study in smart environment. Despite not addressing confounding factors, attention is measured using sensors and neural networks, with the results being consistent.6/8 (75%)
Durães et al. (2019) [44]Cross-sectionalDevelopment of an intelligent tutoring system. Based on learning personalisation theories. Sensors and interaction analysis are used to monitor attention.Development of an intelligent tutor with no comparison between groups. Although the study is cross-sectional, the AI techniques and application justify its inclusion.6/8 (75%)
Taub and Azevedo (2019) [45]Quasi-experimentalExplores how prior knowledge affects attention and self-regulated processes. Grounded in self-regulated learning (SRL) theory. Uses eye-tracking and recordings from the system.Assessment of the effect of prior knowledge using two different groups. Good quasi-experimental design, with objective measures and adequate analysis.8/9 (88%)
Liang et al. (2019) [46]Quasi-experimentalProposal of an interactive system harnessing wearable devices to monitor attention. Based on neuroeducation. Uses wearable devices with biometric sensors.Wearable device system applied in an educational setting. Practical assessment using objective techniques, albeit without controlling for conditions or direct comparison.6/9 (66%)
Behera et al. (2020) [47]Cross-sectionalAssociates facial expressions and gestures using learning tasks. Based on intelligent tutoring systems (ITS). Uses computer vision and facial analysis to measure engagement.Association between facial expressions and learning tasks. Analysis using computer vision. Good internal validity despite being a cross-sectional study.6/8 (75%)
Serrano-Mamolar et al. (2021) [48]Cross-sectionalPrediction of concentration using non-intrusive physiological signals. Based on Hidden Markov Models (HMM). The use of biometric sensors and statistical analysis in real-world setting.Prediction of concentration through biometric signals, individual analysis, but rigorous use of statistical techniques. Confounding factors not controlled for.7/8 (87%)
Zhao et al. (2021) [49]Cross-sectionalPostural and behavioural engagement patterns in teacher–student interactions. Based on video analysis. Applies posture recognition techniques.Detection of postural patterns. Cross-sectional study, with a certain experimental design. Clear findings albeit without external validation for extrapolation. 6/8 (75%)
Araya and Sossa-Rivera (2021) [50]Cross-sectionalSystem to detect gaze and body orientation in primary school classrooms. Based on shared attention. Uses computer vision in video. Automatic orientation and gaze detection system. Practical application in classrooms. Reliable technical assessment.7/8 (87%)
Gao and Tan (2022) [51]Quasi-experimentalAssesses the impact of video styles on attention. Based on cognitive load theory. Uses EEG to measure attention during online classes.Comparison of video styles using EEG. Good quasi-experimental design but lacks details on sampling procedure.7/9 (77%)
Shen (2022) [52]Quasi-experimentalAnalysis of attention in modern English learning classrooms. Grounded in edge computing system architecture. Uses Multi-task Cascaded Convolutional Networks (MTCNN) for facial recognition.Attention assessed using deep neural networks. Despite the lack of a control group, the study employed robust analysis with facial recognition.6/9 (66%)
Hou et al. (2022) [14]Quasi-experimentalEvaluation of online teaching quality using facial recognition. Based on mood analysis. Uses VGG16 network and ECA-Net for facial expression.Facial recognition applied to online classes. Good design. The system is only applied and the accuracy results in technical tests are described. No comparisons with control group.7/9 (77%)
Villegas et al. (2023) [53] | Quasi-experimental | System to identify concentration levels and emotional states in online educational models. Electroencephalography (EEG) and skin conductance sensors were used. | Proposal of a system to measure concentration and attentional states in online training. Despite the lack of a control group, the study is validated using objective measures and appropriate methodology. | 7/9 (77%)
Södergård and Laakko (2023) [54] | Quasi-experimental | Estimation of attention using biosensors and daily self-assessment. Based on self-regulated learning. Use of smart wristbands and machine learning. | Self-assessed attention using biosensors in daily life. Experimental study using mechanical observation. | 7/8 (87%)
Trabelsi et al. (2023) [15] | Quasi-experimental | Real-time attention monitoring with behaviour detection. Based on pattern recognition. Use of neural networks and YOLO. | Real-time attention monitoring system in the classroom. Modern vision and AI techniques. No formal comparison applied. | 6/9 (66%)
Ma et al. (2023) [55] | Quasi-experimental | Recognition of online learning engagement using BiLRCN. Grounded in deep learning. Use of bidirectional and convolutional networks. | Engagement recognition using BiLRCN. Good technical analysis, despite the lack of clear pre/post measurements. | 6/9 (66%)
Dimitriadou and Lanitis (2023) [56] | Quasi-experimental | Evaluation of a system to recognise student actions during tele-education. Based on privacy and participation. Use of computer vision and real-time feedback. | Action recognition in tele-education. Good use of AI and validation by stakeholders. No control group, but a detailed approach to the development of the experiment. | 6/9 (66%)
Kim et al. (2023) [57] | Quasi-experimental | EEG study in primary school students to identify low-attention behaviours. Based on biomarkers and machine learning. Combination of signal processing and video analysis. | Identification of behaviours using EEG. Good experimental control and application with participants. | 8/9 (88%)
Hossen and Uddin (2023) [58] | Quasi-experimental | Classification of attention levels during online classes using XGBoost. Draws on digital participation patterns. Uses behavioural visual characteristics and machine learning. | XGBoost model to monitor attention. Technical analysis presents high reliability. No clear between-groups comparison, but results are well presented. | 7/9 (77%)
Simonetti et al. (2023) [59] | Quasi-experimental | Comparison between remote and face-to-face classes using EEG and physiological signals. Focused on student experience. Uses wearable devices to obtain neurophysiological measures. | Comparison of remote vs. face-to-face classes using EEG. Robust design and use of objective measures. | 8/9 (88%)
Alkabbany et al. (2023) [60] | Quasi-experimental | Real-time engagement measurement system using video in STEM classrooms. Based on computational perception and active education. AI used for face and posture detection. | Evaluation of attention in STEM classrooms using video analysis to capture students’ head poses, gaze, body movements and facial emotions. Experimental application extensively detailed. | 7/9 (77%)
Zhu et al. (2023) [61] | Quasi-experimental | Emotion recognition in smart classrooms. Grounded in emotion and attention models. Uses convolutional neural networks and NetVLAD. | Emotion recognition in smart classrooms. AI tools implemented to evaluate various scenarios. | 7/9 (77%)
Hasnine et al. (2023) [62] | Quasi-experimental | Real-time analytics dashboard to detect affective states. Based on webcam data. Uses face detection and automatic emotion classification. | Affective-states dashboard. Good technical validation. No comparison group in the experiment. | 6/9 (66%)
Wang et al. (2023) [63] | Quasi-experimental | Behaviour detection system with Transformers and FPN. Based on objective behaviour evaluation. Uses Swin Transformer and Deformable DETR. | Classroom behaviour detection using Transformers. Extensive description of the technical application and implementation of the experiment. | 7/9 (77%)
Pi et al. (2023) [64] | Quasi-experimental | Effects of the social environment on learning from videos. Based on social presence theories. Uses EEG and cognitive performance measures. | Assessment of the impact of social presence on learning. Experimental design using EEG. Includes between-groups comparisons. | 8/9 (88%)
Mudawi et al. (2023) [65] | Cross-sectional | Attention prediction system in e-learning. Based on machine learning and behaviour detection. Uses Viola–Jones and genetic algorithms. | The study presents a good technical foundation and uses adequate data but does not validate the system in real e-learning situations, limiting its practical applicability. It recognises this limitation and proposes future improvements. | 7/8 (87%)
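Several of the reviewed studies, such as Hossen and Uddin (2023) [58], train gradient-boosted classifiers on behavioural visual features to estimate attention levels. A minimal sketch of that general approach is shown below. All feature names and data are synthetic illustrations, and scikit-learn’s GradientBoostingClassifier stands in for the XGBoost library used in the original study; this is not a reconstruction of any reviewed system.

```python
# Sketch: attention-level classification from behavioural features,
# in the spirit of gradient-boosting approaches such as [58].
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
# Hypothetical per-student features: gaze-on-screen ratio, blink rate,
# head-pose deviation, and keyboard/mouse activity rate (all in [0, 1]).
X = rng.random((n, 4))
# Synthetic label: "attentive" when the gaze ratio is high
# and the head-pose deviation is low.
y = ((X[:, 0] > 0.5) & (X[:, 2] < 0.6)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

In practice, the features would come from a face- and posture-detection pipeline rather than a random generator, and validation against human-coded attention labels would be essential.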
Table 5. Frequency of topics related to artificial intelligence, education, and level of attention or concentration [38].
Rank | Keywords | Occurrences | Total Link Strength
1 | Learning | 13 | 55
2 | Attention | 6 | 44
3 | Indicator recognition | 6 | 42
4 | Intelligence | 6 | 27
5 | Artificial intelligence | 5 | 35
6 | Student behaviours | 5 | 37
7 | Education | 4 | 34
8 | Adaptive learning | 3 | 24
9 | Data analytics | 3 | 23
10 | Machine learning | 3 | 26
11 | Online learning | 3 | 25
12 | Behavioural research | 2 | 15
13 | Cognition | 2 | 12
14 | Computer vision | 2 | 10
15 | Electroencephalography | 2 | 11
16 | Emotions | 2 | 8
17 | Human computer interaction | 2 | 17
18 | Privacy protection methods | 2 | 15
19 | Teaching | 2 | 18
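In VOSviewer’s keyword co-occurrence analysis, as used for Table 5, “occurrences” counts the documents in which a keyword appears and “total link strength” sums, over all other keywords, the number of documents in which the pair co-occurs. The sketch below illustrates these two metrics on invented keyword lists; the data are not the corpus analysed in the review.

```python
# Sketch of VOSviewer-style co-occurrence metrics on toy data.
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword sets for three documents.
documents = [
    {"learning", "attention", "artificial intelligence"},
    {"learning", "machine learning", "education"},
    {"attention", "computer vision", "learning"},
]

# Occurrences: number of documents containing each keyword.
occurrences = Counter(kw for doc in documents for kw in doc)

# Total link strength: for each keyword, the number of
# (keyword, co-keyword, document) co-occurrence pairs.
link_strength = Counter()
for doc in documents:
    for a, b in combinations(sorted(doc), 2):
        link_strength[a] += 1
        link_strength[b] += 1

# "learning" appears in all three documents and co-occurs with
# two other keywords in each, so its link strength is 6.
print(occurrences["learning"], link_strength["learning"])  # → 3 6
```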
