Societies | Concept Paper | Open Access | 4 December 2025

The Use of Artificial Intelligence (AI) in Early Childhood Education

1 Department of Dynamic, Clinical and Health Psychology, Sapienza University of Rome, 00185 Rome, Italy
2 Department of Health Sciences, UniCamillus—Saint Camillus International Medical University, 00131 Rome, Italy
3 Faculty of Psychology, International Telematic University Uninettuno, 00186 Rome, Italy
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Digital Learning, Ethics and Pedagogies

Abstract

The integration of Artificial Intelligence (AI) into early childhood education presents new opportunities and challenges in fostering cognitive, social, and emotional development. This theoretical discussion synthesizes recent research on AI’s role in personalized learning, educational robotics, gamified learning, and social-emotional development. The study explores theoretical frameworks such as Vygotsky’s Sociocultural Theory, Distributed Cognition, and the Five Big Ideas Framework to understand AI’s impact on young learners. AI-powered personalized learning platforms enhance engagement and adaptability, while robotics and gamification foster problem-solving and collaboration. Additionally, AI tools support children with disabilities, promoting inclusivity and accessibility. However, ethical concerns related to privacy, bias, and teacher preparedness pose challenges to effective AI integration. Furthermore, the long-term effects of AI on children’s social skills and emotional intelligence require further investigation. This theoretical discussion emphasizes the need for interdisciplinary collaboration to develop AI-driven educational strategies that prioritize developmental appropriateness, equity, and ethical considerations. The findings highlight AI’s potential as a transformative educational tool, provided it is implemented thoughtfully and responsibly. The paper aims to address the following research question: How can artificial intelligence (AI) be meaningfully and ethically integrated into early childhood education to enhance learning, while preserving developmental and relational values?

1. Introduction

The integration of Artificial Intelligence (AI) into early childhood education has become an increasingly discussed topic, given its potential to transform both teaching and learning practices. Accordingly, this paper examines how AI can be meaningfully and ethically integrated into early childhood education, analyzing its potential benefits and limitations from theoretical, pedagogical, and ethical perspectives. Moreover, this manuscript responds to the growing need for a theoretically grounded and critically balanced synthesis of the role of AI in early childhood education. While previous reviews have largely emphasized either technological innovation or developmental risks, this study aims to integrate both perspectives through a multidimensional framework. The novelty of this contribution lies in its focus on ethical mediation and the alignment of AI systems with pedagogical and relational principles that are central to early learning.

1.1. Theoretical Frameworks Supporting the Authors’ Considerations

Theoretical frameworks are critical to understanding how Artificial Intelligence (AI) impacts the cognitive, social, and emotional development of young children. Two prominent frameworks, Vygotsky’s Sociocultural Theory and the Human–Computer Interaction (HCI) framework, provide foundational lenses for this exploration. Additionally, more recent approaches, such as Distributed Cognition Theory and the Five Big Ideas Framework, offer nuanced perspectives on how children interact with and learn through AI technologies.
Vygotsky’s Sociocultural Theory emphasizes the role of tools and social interactions in shaping a child’s cognitive development. AI systems, such as adaptive learning platforms and collaborative robotics, can be seen as cultural tools that mediate learning experiences. These technologies provide scaffolding for tasks within a child’s Zone of Proximal Development (ZPD), enabling them to achieve learning outcomes they could not accomplish independently. For example, AI-powered educational robots often guide children through problem-solving activities, offering incremental support that aligns with their developmental level. These interactions not only build cognitive skills but also foster collaboration and social engagement when children work in teams to solve AI-driven challenges [1]. In line with Vygotsky’s Sociocultural Theory, this study views learning as a mediated activity structured by tools, language, and social participation. From this perspective, Human–Computer Interaction (HCI) becomes a critical field for understanding how AI technologies act as cultural mediators that transform the child’s zone of proximal development. The convergence between sociocultural and HCI paradigms allows for a nuanced reading of the child–technology–teacher triad in contemporary education.
The Human–Computer Interaction (HCI) framework evaluates how children engage with AI tools, focusing on usability, user experience, and design principles that prioritize children’s needs. This framework highlights the importance of intuitive and age-appropriate AI systems. Research shows that child-friendly interfaces with simple navigation and engaging visuals enhance learning outcomes by fostering sustained attention and active participation. For instance, studies have underscored that AI-based platforms designed with HCI principles improve user satisfaction and reduce cognitive overload, making them more effective for young learners [2,3].
Distributed Cognition Theory offers an additional lens by viewing AI as part of an extended cognitive system. This framework posits that cognition is not confined to the individual but is distributed across people, tools, and environments. AI technologies, such as smart toys and virtual assistants, function as cognitive partners, helping children offload memory and computation tasks. For example, when children interact with AI systems that adaptively recommend problem-solving strategies, AI becomes an integral part of their cognitive process. This shared cognitive load allows children to focus on higher-order thinking skills, such as reasoning and creativity.
The Five Big Ideas Framework, tailored for young learners, simplifies complex AI concepts into accessible, play-based activities. By emphasizing storytelling, role-play, and tangible interactions with robotics, this framework ensures that children engage with AI in developmentally appropriate ways [4]. For instance, children might program a robot to navigate a maze, learning foundational coding concepts while also developing spatial reasoning and teamwork skills.
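To make this kind of activity concrete, the sketch below shows, in plain Python, a maze-programming exercise of the sort described above: a child composes a sequence of movement commands and checks whether the "robot" reaches the goal. The grid layout, command names, and success check are illustrative assumptions and are not tied to any particular robotics kit.
```python
# Minimal sketch of a maze-navigation exercise like the one described above.
# The grid and command names are illustrative, not taken from any specific kit.

MAZE = [
    "S.#",
    ".#.",
    "..G",
]  # 'S' start, 'G' goal, '#' wall, '.' open cell

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def run_program(maze, program):
    """Execute a child's list of moves and report whether the robot reaches the goal."""
    rows, cols = len(maze), len(maze[0])
    row, col = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "S")
    for step in program:
        dr, dc = MOVES[step]
        nr, nc = row + dr, col + dc
        if 0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] != "#":
            row, col = nr, nc  # move only if the target cell is open
        if maze[row][col] == "G":
            return True
    return False

# A child might try: down, down, right, right
print(run_program(MAZE, ["down", "down", "right", "right"]))  # True
```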

1.2. Why It Is Important to Reflect on Young Children’s Use of Artificial Intelligence

Studying the role of Artificial Intelligence (AI) in early childhood is essential due to its profound implications for cognitive development, future technological fluency, and equity in education.
Firstly, the early years are critical for cognitive development, as foundational skills in reasoning, communication, and emotional regulation are formed during this period. AI systems, such as interactive robots and adaptive learning platforms, can support these developmental milestones by providing personalized feedback and fostering problem-solving skills [5]. For instance, AI-based tools like emotion recognition systems enable children to practice empathy and social interaction in structured environments, which are essential for lifelong learning [6].
Secondly, early exposure to AI cultivates technological fluency, preparing children for a world increasingly driven by technology. Introducing AI concepts, such as machine learning and robotics, through age-appropriate activities ensures that children develop confidence and curiosity in engaging with complex technologies. Studies show that children who engage with AI early on are more likely to pursue STEM fields and develop digital literacy skills that are crucial for future employability [7].
Finally, studying AI’s impact on children is vital for addressing equity and inclusivity in education. AI has the potential to bridge gaps for marginalized groups by providing accessible, personalized learning experiences. For children with disabilities, AI-driven tools such as adaptive interfaces and virtual assistants offer tailored support that enhances their educational opportunities [8]. However, ethical considerations, such as data privacy and bias, must be addressed to ensure that AI promotes equitable outcomes for all children.
In conclusion, investigating the intersection of AI and early childhood development is not only beneficial but necessary. It equips children with foundational skills, prepares them for technological advancements, and fosters inclusive learning environments. By doing so, society can ensure that AI serves as a tool for empowerment rather than a source of disparity.
This paper advances a conceptual perspective in which AI-mediated learning is interpreted as a dynamic interplay among technological mediation, developmental adaptation, and pedagogical intentionality. Such an approach positions AI not merely as an instructional aid but as a socio-technical environment that reshapes the conditions of early learning. On this basis, the study outlines a forward-looking agenda calling for interdisciplinary inquiry and policy design that aligns innovation with developmental and ethical standards.

1.3. Personalized Learning Platforms

One of the most significant advancements in artificial intelligence (AI) for early childhood education is the development of personalized learning platforms. These platforms leverage sophisticated algorithms to analyze a child’s learning style, pace, and specific needs. By continuously assessing data such as performance metrics, response times, and engagement patterns, AI systems dynamically tailor content delivery to optimize learning. This personalized approach ensures that children are neither overwhelmed by tasks that are too difficult nor disengaged by material that is too simple. For example, adaptive learning systems can identify when a child struggles with a specific concept, such as basic arithmetic, and provide targeted exercises to address the gap. On the other hand, advanced learners can be presented with more challenging problems to keep them engaged.
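As a purely illustrative sketch of the adaptation logic just described, and not a description of any specific commercial platform, the code below raises or lowers the difficulty of the next exercise based on a rolling window of recent answers; the class name, difficulty levels, window size, and thresholds are all arbitrary assumptions.
```python
# Illustrative sketch of adaptive difficulty: the level rises when recent answers
# are mostly correct and falls when the child struggles. Window size and
# thresholds are arbitrary assumptions for demonstration only.
from collections import deque

class AdaptiveTutor:
    def __init__(self, levels=("easy", "medium", "hard"), window=5):
        self.levels = levels
        self.level = 0                      # start with the easiest material
        self.recent = deque(maxlen=window)  # rolling record of correct/incorrect answers

    def record_answer(self, correct: bool):
        self.recent.append(correct)
        if len(self.recent) == self.recent.maxlen:
            accuracy = sum(self.recent) / len(self.recent)
            if accuracy >= 0.8 and self.level < len(self.levels) - 1:
                self.level += 1             # challenge an advanced learner
                self.recent.clear()
            elif accuracy <= 0.4 and self.level > 0:
                self.level -= 1             # step back and re-scaffold
                self.recent.clear()

    def next_difficulty(self):
        return self.levels[self.level]

tutor = AdaptiveTutor()
for answer in [True, True, True, True, True]:
    tutor.record_answer(answer)
print(tutor.next_difficulty())  # "medium" after a run of correct answers
```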
Research underscores the effectiveness of personalized AI platforms in fostering engagement and improving comprehension. Empirical evidence increasingly shows that early exposure to digital technologies can affect the development of attention, executive functioning, and emotional regulation, and it has been observed that children who used AI-driven platforms showed higher levels of motivation and retention compared to those in traditional classrooms [9]. The personalized feedback provided by AI systems also plays a critical role in sustaining a child’s attention and encouraging positive learning behaviors. Furthermore, these platforms enable educators to monitor progress in real time, offering insights that can guide instructional strategies and interventions. In this way, personalized learning platforms not only enhance individual learning experiences but also support educators in making data-driven decisions.

1.4. Educational Robotics and Machine Learning

Another key application of AI in early childhood education is the use of educational robotics and machine learning tools to introduce children to the basics of artificial intelligence and computational thinking. Robots equipped with AI capabilities provide children with hands-on learning experiences that make abstract concepts tangible. For instance, programmable robots allow young learners to engage in problem-solving activities, such as coding the robot to perform specific tasks. These activities not only teach foundational programming skills but also encourage critical thinking and creativity.
It has been highlighted [10] how educational robotics has been successfully integrated into curricula in several countries, helping children as young as five years old grasp concepts like cause-and-effect relationships, sequencing, and logical reasoning. Beyond technical skills, these activities foster collaboration and communication, as children often work in teams to solve challenges presented by the robots. Such group interactions also help develop social-emotional skills, including patience, teamwork, and conflict resolution.
Machine learning is another domain where AI tools are being utilized to teach children about the core principles of data-driven decision-making. Simplified machine learning models, adapted for early education, allow children to explore how AI systems “learn” from data. By participating in these activities, children gain a foundational understanding of how AI impacts the world around them, preparing them for a future increasingly shaped by technology.
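A hedged sketch of such a simplified "teach the computer" activity is shown below: children supply a few labeled examples described by two toy features, and a nearest-centroid rule classifies new objects, making visible how the system "learns" from data. The features, labels, and numbers are invented for illustration and do not correspond to any particular classroom tool.
```python
# Sketch of a "teach the computer" activity: children provide labeled examples
# described by two simple features (size and roundness on a 0-10 scale), and a
# nearest-centroid rule classifies new objects. All values are invented.
from statistics import mean

def train(examples):
    """examples: list of (features, label). Returns one averaged point per label."""
    centroids = {}
    for label in {lbl for _, lbl in examples}:
        points = [feat for feat, lbl in examples if lbl == label]
        centroids[label] = tuple(mean(dim) for dim in zip(*points))
    return centroids

def predict(centroids, features):
    """Pick the label whose averaged example is closest to the new object."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], features))

examples = [((8, 9), "ball"), ((7, 8), "ball"), ((3, 2), "block"), ((2, 3), "block")]
model = train(examples)
print(predict(model, (6, 7)))  # "ball": the system has "learned" from the examples
```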

1.5. Gamified and Embodied Learning

Gamification and embodied learning are innovative approaches that utilize AI to create immersive and engaging educational experiences for young learners. Gamified platforms integrate game design elements such as points, badges, and leaderboards into educational content, making learning feel more like play. These platforms often use AI to adapt game difficulty and content to the learner’s progress, ensuring a balance between challenge and skill. By embedding educational objectives within interactive games, these systems make learning both enjoyable and effective.
It has been illustrated [11] how gamified AI platforms promote creativity and soft skill development in young children. For instance, games that involve storytelling or character creation encourage children to think imaginatively, while problem-solving games improve critical thinking and decision-making skills. Additionally, gamification motivates children by rewarding progress, helping to build self-esteem and a positive attitude toward learning.
Embodied learning takes this interactivity a step further by incorporating physical movement and real-world interactions into the learning process. In these environments, children engage with AI-enabled tools such as motion-capture devices, augmented reality systems, or humanoid robots that respond to their movements and actions. These experiences are particularly effective for kinesthetic learners, who benefit from combining physical activity with cognitive tasks. For example, embodied learning platforms may involve children guiding a robot through a maze by physically moving objects or using gestures, reinforcing spatial reasoning and motor skills. It has also been noted [12] that these tools encourage emotional engagement, as children often form connections with the AI agents they interact with, further enhancing the learning experience.
Comparing evidence across AI modalities shows that robotics-based activities often outperform gamified systems in promoting collaboration, spatial reasoning, and tangible problem-solving, whereas gamified environments tend to sustain engagement and intrinsic motivation over longer periods [12]. Equity-oriented interventions present mixed outcomes: while they increase accessibility and participation for underrepresented groups, they sometimes lack the pedagogical depth achieved through direct human facilitation. These contrasts suggest that combining AI modalities within a blended learning design may yield the most inclusive and effective results.
The theoretical foundation of this paper integrates sociocultural and cognitive perspectives to interpret the educational use of AI. Vygotsky’s notion of mediated learning emphasizes the role of tools and interaction in cognitive development, while Human–Computer Interaction and Distributed Cognition provide frameworks for analyzing how AI functions as an external cognitive aid. The Five Big Ideas Framework complements these approaches by defining educational principles such as collaboration, creativity, and ethical responsibility. Together, these perspectives form the evaluative criteria adopted in this study: (a) the capacity of AI to scaffold learning processes; (b) its compatibility with socio-emotional development; and (c) its contribution to equitable and ethical education.
Building on these foundations, although these frameworks arise from distinct disciplinary traditions, they are conceptually compatible. All four emphasize interaction, mediation, and the distributed nature of cognition. Vygotsky’s and HCI perspectives share the view that learning occurs through culturally mediated tools, while Distributed Cognition and the Five Big Ideas Framework both stress collaboration, problem-solving, and ethical reflection as essential learning processes. This shared orientation allows for an integrative evaluation of AI applications that considers both cognitive outcomes and socio-ethical implications.

2. Benefits of AI in Early Childhood Education

Enhanced Learning Outcomes

Artificial intelligence (AI) has demonstrated its potential to significantly enhance learning outcomes for young children by making the educational process more interactive, adaptive, and engaging. AI-powered tools and platforms cater to diverse learning styles by tailoring content to the individual needs of learners. These systems analyze a child’s interactions, comprehension levels, and pace of learning, then dynamically adjust the material to ensure optimal engagement. By providing personalized feedback and encouraging active participation, AI fosters deeper understanding and retention of concepts.
These findings can be interpreted within an integrated conceptual model of AI–child learning. In this model, AI operates through three mutually reinforcing dimensions: cognitive scaffolding, affective engagement, and social interaction. Cognitive scaffolding allows the system to adapt feedback and challenges to the learner’s competence level; affective engagement promotes curiosity and motivation through personalized, emotionally responsive tasks; and social interaction, either real or simulated, supports communication and cooperation. Taken together, these elements illustrate how AI functions not as a replacement for the educator but as a mediating agent embedded within a sociocultural learning ecology.
Previous literature [13] reports that the interactive nature of AI-enabled learning tools significantly improves material understanding in young children. For instance, interactive simulations and visual aids supported by AI allow children to grasp abstract concepts more effectively. Additionally, AI-driven platforms often employ gamification techniques, where educational content is delivered in the form of games, puzzles, and challenges. This approach not only holds a child’s attention but also motivates them to stay engaged with learning tasks for extended periods, thereby reinforcing knowledge acquisition. Moreover, AI systems provide immediate and accurate feedback, enabling students to identify and correct mistakes in real time, which accelerates their learning progress. These tools have proven particularly effective in enhancing foundational skills such as numeracy and literacy.

3. Support for Special Needs Education

AI has brought transformative advancements to special needs education by offering tools that address the unique challenges faced by children with disabilities. Adaptive and assistive technologies, powered by AI, provide personalized support to children with physical, cognitive, or sensory impairments, enabling them to participate meaningfully in the learning process. Speech recognition and text-to-speech technologies, for example, assist children with speech delays or dyslexia by enhancing their ability to communicate and engage with educational materials.
Some scholars emphasized the role of AI in creating inclusive learning environments for children with disabilities [14]. Tools such as AI-driven virtual assistants, wearable devices, and brain–computer interfaces have made it possible for children with limited mobility or sensory impairments to interact with educational content independently. For children on the autism spectrum, AI-enabled tools like emotion recognition systems provide tailored interventions that help them develop social and emotional skills. These systems can detect emotional cues and adjust interactions accordingly, fostering a supportive environment that accommodates their needs. By improving accessibility and promoting inclusivity, AI ensures that children with diverse abilities have equitable opportunities to learn and thrive in educational settings.

Cultivation of Digital Literacy

In an increasingly digital world, fostering digital literacy from an early age has become essential. AI tools in early childhood education provide children with an opportunity to develop critical technological skills that are vital for navigating a technologically advanced future. These tools introduce children to the basics of technology use, programming, and problem-solving, equipping them with foundational knowledge that supports future learning and innovation.
It has been argued [15] that early exposure to AI fosters curiosity and confidence in technology while simultaneously promoting critical thinking and creativity. Through interactive activities involving AI, children learn how technology functions and how to use it responsibly. For example, educational robotics platforms allow children to engage in coding tasks that teach them logical reasoning and algorithmic thinking. Such experiences cultivate a deeper understanding of how digital systems work, preparing children to adapt to evolving technological landscapes. Furthermore, early exposure to AI helps demystify complex technological concepts, making them more approachable and less intimidating as children grow.
By embedding digital literacy into early education, AI ensures that children are better prepared to participate in and contribute to a future shaped by technological advancements [16]. This foundational skill set is not only important for academic success but also essential for fostering adaptability and lifelong learning in an increasingly interconnected world.

4. Challenges in Integrating AI into Early Education

4.1. Ethical and Privacy Concerns

The use of artificial intelligence (AI) in early childhood education introduces significant ethical and privacy challenges, particularly regarding the collection and management of sensitive student data. AI-powered educational tools often rely on gathering vast amounts of personal data to tailor learning experiences effectively. This data includes behavioral patterns, engagement metrics, and, in some cases, biometric data such as facial expressions and emotional cues. While this information can enhance personalization and educational outcomes, it raises critical concerns about data security and potential misuse.
Previous studies [17] highlighted that one of the most pressing ethical dilemmas lies in ensuring that children’s data is protected against breaches or unauthorized access. The involvement of third-party AI providers exacerbates these concerns, as the lack of transparency in how data is stored, shared, or used can undermine trust among parents and educators. Moreover, AI algorithms are often trained on datasets that may inadvertently incorporate biases, leading to discriminatory outcomes that disproportionately affect vulnerable populations. For instance, children from underrepresented groups may encounter less accurate predictions or recommendations, further widening existing educational disparities. These issues underscore the need for robust regulatory frameworks and ethical guidelines to govern AI usage in educational settings, ensuring that children’s privacy and rights are safeguarded.
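One way such disparities can be made visible, sketched below under purely hypothetical data, is to audit a system's logged predictions by computing accuracy separately for each demographic group; the group names, prediction labels, and records are placeholders rather than real data from any platform.
```python
# Minimal sketch of a fairness audit: compare a model's accuracy separately for
# each demographic group. The records below are entirely hypothetical stand-ins
# for logged predictions from an educational AI system.
from collections import defaultdict

records = [
    # (group, predicted_level, true_level)
    ("group_a", "on_track", "on_track"),
    ("group_a", "needs_support", "needs_support"),
    ("group_a", "on_track", "on_track"),
    ("group_b", "needs_support", "on_track"),
    ("group_b", "on_track", "on_track"),
    ("group_b", "needs_support", "on_track"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, predicted, actual in records:
    totals[group] += 1
    hits[group] += int(predicted == actual)

for group in sorted(totals):
    print(f"{group}: accuracy {hits[group] / totals[group]:.2f}")
# A large gap between groups (here 1.00 vs. 0.33) signals that the training data
# or model may serve one population less accurately.
```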

4.2. Curriculum and Teacher Readiness

A significant barrier to the effective integration of AI in early education is the lack of preparedness among educators and the underdevelopment of AI-compatible curricula. Many teachers lack the technical knowledge and confidence required to use AI tools effectively, creating a gap between the technology’s potential and its practical application in classrooms. Furthermore, curricula that incorporate AI are often designed without sufficient input from educators, leading to a mismatch between the tools’ capabilities and the pedagogical goals they aim to support.
It has been emphasized [18] that while teachers are generally enthusiastic about incorporating AI into their classrooms, they frequently cite insufficient training and professional development as major challenges. Educators need tailored training programs that address not only the technical aspects of AI but also its pedagogical integration. Without this support, teachers may struggle to align AI technologies with their instructional methods, diminishing the tools’ impact. Additionally, many AI-driven curricula are not designed with early childhood education in mind, focusing instead on older age groups or advanced technical skills. This lack of alignment makes it difficult for teachers to adapt AI to the developmental and cognitive needs of young children. Addressing these gaps requires collaboration among policymakers, educators, and technology developers to create training programs and curricula that are both accessible and contextually appropriate.

4.3. Technology Accessibility Gaps

Despite the transformative potential of AI in education, its benefits are not evenly distributed due to socioeconomic disparities that limit access to AI-powered tools. Many schools and families, particularly in low-income or rural areas, lack the infrastructure and resources necessary to adopt AI technologies. These disparities create a digital divide, where children from disadvantaged backgrounds miss out on the opportunities afforded by AI-enhanced learning environments.
It has been noted [19] that the cost of AI-enabled devices, software, and reliable internet connectivity often places these tools beyond the reach of underfunded schools and marginalized communities. This inequity not only limits access to personalized learning experiences but also exacerbates existing educational inequalities. For instance, children from well-resourced schools may gain early exposure to digital literacy and advanced technologies, while their peers in underserved areas struggle with basic educational resources.
Bridging this accessibility gap requires targeted interventions, such as government subsidies, public–private partnerships, and community-driven initiatives to ensure equitable access to AI technologies. Additionally, designing cost-effective and offline-compatible AI tools can help extend their reach to underserved populations, enabling a more inclusive approach to AI integration in early education.

4.4. Curriculum Development

The development of age-appropriate curricula that effectively integrate artificial intelligence (AI) is a critical area for advancing its application in early childhood education. To harness the potential of AI, policymakers and educators must collaborate to design educational frameworks that align with the developmental needs of young learners while incorporating essential ethical considerations and fostering critical thinking. AI’s incorporation into curricula should go beyond technical training, aiming instead to cultivate foundational cognitive and socio-emotional skills through engaging and interactive methods.
One study [20] emphasized the importance of introducing AI literacy at an early age, including basic concepts such as how AI systems operate and influence everyday life. By simplifying complex AI topics through playful and relatable activities, educators can ensure that children develop a sense of curiosity and familiarity with the technology. At the same time, curricula must include discussions on ethical considerations, such as privacy, data security, and algorithmic bias, to foster responsible and informed interactions with AI tools. Collaborative efforts between policymakers, technologists, and educators will be crucial to creating curricula that are both developmentally appropriate and socially responsible.

4.5. Teacher Training

The successful integration of AI in early education also depends on the preparedness of teachers to utilize these tools effectively. Professional development programs must focus on equipping educators with AI literacy and the technical skills required to implement AI technologies in their classrooms. Beyond technical proficiency, such training should address pedagogical strategies for integrating AI into teaching methods and leveraging its capabilities to enhance learning outcomes.
Research in the field underscores the need for ongoing and accessible teacher training initiatives to build confidence in using AI tools [21]. Many educators remain hesitant about adopting AI due to a lack of familiarity or fear of being replaced by technology. Comprehensive training programs should demystify AI, emphasizing its role as a complement to, rather than a replacement for, human instruction. These programs should include hands-on experience with AI platforms, guidance on interpreting AI-driven insights, and strategies for maintaining a balance between technology and traditional teaching methods. In addition, training should address how to identify and mitigate potential biases in AI tools, ensuring equitable learning experiences for all students.

4.6. Holistic Approaches

As AI continues to transform educational practices, a holistic approach that blends technological advancements with traditional methods is essential to preserving the human-centric values of education. While AI can provide personalized learning experiences and support diverse student needs, it cannot replicate the emotional connection and mentorship that educators offer. Combining AI’s capabilities with the strengths of traditional teaching methods ensures that children benefit from both innovation and the personal touch of human interaction.
The importance of maintaining this balance has been highlighted, with the emphasis that AI should augment rather than replace traditional educational practices [22]. For instance, while AI can automate repetitive tasks such as grading or lesson customization, educators should focus on fostering creativity, critical thinking, and social-emotional learning through collaborative activities and discussions. A holistic approach also involves recognizing the limitations of AI, such as its inability to understand nuanced emotional expressions or cultural contexts, and using these insights to guide its implementation.
By integrating AI thoughtfully and ethically, educational systems can leverage its potential while preserving the core values of human connection, empathy, and collaboration. Holistic practices will ensure that AI serves as a tool to enhance, rather than overshadow, the rich, multidimensional experience of early childhood education.

5. Exploring Emotional and Social Intelligence Development Through AI Interactions in Early Childhood Education

AI as a Social Partner for Emotional Development

The potential of Artificial Intelligence (AI) to foster emotional and social intelligence in young children represents a novel area of exploration. While much research has focused on the cognitive benefits of AI, its applications in supporting socio-emotional growth, empathy, and interpersonal skills remain underutilized. AI-enabled systems, such as emotionally responsive chatbots and robots, can serve as valuable tools for developing children’s emotional intelligence, particularly by simulating emotional scenarios and providing real-time emotional feedback during activities.
Emotionally interactive AI tools, such as those incorporating sentiment analysis and emotion recognition, offer structured environments where children can learn to identify, interpret, and respond to emotional cues. For example, some research [23] highlights the application of AI visual transfer technology to recognize facial expressions and emotional states in educational settings, enhancing children’s ability to perceive and regulate emotions. Such tools allow children to engage in guided emotional learning exercises, helping them develop empathy and self-regulation. This aligns with findings by [24], who explored AI-driven interventions tailored for children with autism spectrum disorder (ASD). These tools helped children practice emotional regulation and social skills, emphasizing the importance of individualized, AI-enhanced learning experiences for socio-emotional development.
In addition to simulations, AI systems can provide real-time feedback to help children regulate their emotions. AI-powered robots and virtual agents can detect emotional states through facial recognition and voice tone analysis, then respond appropriately to guide children toward constructive behaviors. For instance, AI systems that use the Facial Action Coding System (FACS) can analyze micro-expressions and suggest calming strategies when a child displays signs of frustration or anxiety. Such real-time interventions support emotional self-awareness and regulation, as noted by [25], who emphasized the role of conversational AI in engaging young learners through emotionally responsive interactions.
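As a hedged illustration of this feedback loop, the sketch below maps a detected emotion label to a supportive response and defers to the educator when confidence is low. The detector itself is assumed rather than implemented (a stand-in for a FACS-style recognizer), and the strategies are placeholders, not a validated intervention protocol.
```python
# Hedged sketch of the real-time feedback loop described above. The detector is
# a hypothetical stand-in for a FACS-style emotion recognizer; the strategies
# are placeholders, not a validated intervention protocol.

CALMING_STRATEGIES = {
    "frustration": "Suggest a short breathing exercise and offer an easier step.",
    "anxiety": "Slow the pace, use a reassuring tone, and alert the teacher.",
    "boredom": "Introduce a more challenging or novel activity.",
}

def respond_to_emotion(label: str, confidence: float, threshold: float = 0.7) -> str:
    """Return a supportive response only when the detector is reasonably confident;
    otherwise defer to the human educator."""
    if confidence < threshold or label not in CALMING_STRATEGIES:
        return "No automatic intervention; defer to the educator."
    return CALMING_STRATEGIES[label]

# Example: a (hypothetical) detector reports frustration with 0.85 confidence.
print(respond_to_emotion("frustration", 0.85))
```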
Despite these advancements, long-term effects remain underexplored. For instance, there is limited understanding of how long-term interactions with AI impact a child’s ability to regulate and express emotions autonomously. While short-term studies show promising results, longitudinal research is necessary to determine whether skills learned through AI interactions transfer to real-world settings. Furthermore, ethical considerations such as data privacy and algorithmic biases must be addressed to ensure these tools are safe and effective. One author has warned of potential pitfalls, including AI’s limitations in understanding non-linear, playful communication styles typical of young children, and its susceptibility to biases embedded in training datasets [26].
In conclusion, AI-enabled systems offer exciting possibilities for fostering emotional intelligence in young children. By simulating emotional scenarios and providing personalized feedback, these tools can help children develop empathy, self-regulation, and social skills. However, addressing research gaps and ethical challenges is critical to fully realize AI’s potential in this domain. Continued interdisciplinary research and collaboration between educators, psychologists, and technologists will be essential to ensure that AI serves as a meaningful and responsible social partner for children’s emotional development.

6. AI’s Role in Developing Social Skills Through Collaboration

6.1. AI as a Facilitator of Teamwork and Collaborative Problem-Solving

Artificial Intelligence (AI) has shown promise as a tool to foster social skills and collaborative problem-solving in young children. Through interactive and adaptive technologies, AI can act as a mediator and facilitator in group activities, encouraging equitable participation, teamwork, and the development of critical social behaviors. These AI tools provide structured opportunities for children to engage in collaborative tasks, improving their ability to navigate social contexts, such as conflict resolution and cooperative problem-solving.
AI-powered tools like collaborative robots and interactive systems provide children with opportunities to practice teamwork by engaging in tasks that require shared responsibility and coordination. For instance, it has been demonstrated [27] how a tablet-based robot, “Surfacebot,” encouraged children to take on collaborative roles and provide feedback during problem-solving activities. The use of reinforcement learning within the robot fostered active engagement and role negotiation among children, helping them develop perspective-taking skills and promoting a mutual exchange of information.
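The reinforcement-learning component of such robots is not detailed here; as a heavily simplified, assumption-laden illustration (explicitly not the Surfacebot implementation), the sketch below uses an epsilon-greedy bandit to decide which child to prompt next, where the reward can be defined to favor balanced turn-taking, for example 1 when a previously quiet child contributes an idea.
```python
# Heavily simplified illustration (not the Surfacebot system) of how a
# reinforcement-learning component might choose prompts during group work:
# an epsilon-greedy bandit whose reward is defined by the designer, e.g. 1 when
# the prompted child, previously quiet, contributes an idea.
import random

class PromptSelector:
    def __init__(self, children, epsilon=0.2):
        self.values = {child: 0.0 for child in children}  # estimated prompt value
        self.counts = {child: 0 for child in children}
        self.epsilon = epsilon

    def choose(self):
        if random.random() < self.epsilon:                 # explore occasionally
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)        # otherwise exploit

    def update(self, child, reward):
        """Update the running mean of the reward observed after prompting a child."""
        self.counts[child] += 1
        n = self.counts[child]
        self.values[child] += (reward - self.values[child]) / n

selector = PromptSelector(["ava", "ben", "chris"])
child = selector.choose()
selector.update(child, reward=1)  # the prompted child joined the discussion
```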
Research also shows that robotics and game-based collaborative tasks improve children’s ability to cooperate and communicate effectively. In a study [28], a robotics-based curriculum was tested with kindergarten-aged children to assess the impact of structured and unstructured collaborative tasks. The findings revealed that a less structured, learn-by-doing approach was more effective in fostering peer-to-peer collaboration than heavily guided activities. These results emphasize the importance of designing AI-enabled collaborative systems that balance autonomy with guidance, encouraging children to actively participate and make shared decisions.
For children with Autism Spectrum Disorder (ASD), AI-driven interventions have shown notable success in improving social skills. One study [29] introduced a tablet-based game, “StarRescue,” which facilitated turn-taking and task-sharing among autistic children. The game’s design emphasized role interdependence and mutual planning, leading to significant improvements in social communication and collaborative behaviors. Such tools demonstrate how AI can scaffold children’s development of social skills by providing environments where collaborative interactions are modeled and practiced.
Despite these promising applications, further empirical validation is required. For example, while short-term studies indicate that AI can enhance collaborative behavior, longitudinal studies are needed to determine whether these skills transfer to real-world social settings. Moreover, questions about the scalability of such systems in diverse educational contexts, as well as their impact on long-term social development, require further exploration.
One critical unanswered question is whether AI interventions can improve children’s ability to navigate complex social contexts such as conflict resolution and cooperative tasks. While evidence suggests that AI tools enhance specific social skills in controlled environments, their ability to foster nuanced, context-sensitive social behaviors over time is still unclear.

6.2. Prolonged Interaction with AI and Its Effects on Children’s Relationships

The long-term influence of artificial intelligence (AI) on children’s social development is a critical area of investigation, particularly as AI becomes increasingly integrated into their daily lives through education and interactive tools. While AI technologies have the potential to complement human relationships by reinforcing positive social behaviors, they may also inadvertently reduce direct peer interactions or modify the dynamics of relationships with teachers and caregivers. Understanding these developmental trade-offs is essential to designing AI systems that foster holistic growth.
One of the key benefits of AI is its capacity to support social-emotional learning (SEL) and improve social adaptability in children. Research by [30] found that AI-assisted educational tools can positively influence adolescents’ social adaptability by fostering interpersonal relationships, peer collaboration, and emotional awareness. However, the same study highlighted potential risks, such as increased reliance on AI for social interactions, which might hinder the natural development of negotiation and conflict resolution skills that arise in peer-based contexts.
In terms of relational dynamics, studies suggest that prolonged interaction with AI systems may shape how children perceive social hierarchies and authority [31]. Some authors [32] examined AI’s role in attachment formation and social skill acquisition, identifying that while AI can act as a supportive tool for children, overdependence on AI-mediated interactions could disrupt traditional social bonds with caregivers and teachers. Similarly, other scholars [33] emphasized that while children benefit from personalized AI interfaces, these technologies may inadvertently reduce their opportunities for unstructured, spontaneous social interactions with peers. Such interactions are crucial for developing interpersonal competencies such as empathy, cooperation, and problem-solving.
Concerns about diminished peer interactions are echoed in findings by [34], who explored the social effects of AI in education. They noted that while AI tools foster engagement and self-regulation, excessive reliance on AI-mediated tasks could reduce opportunities for collaborative, face-to-face learning experiences. This shift may lead to a narrower range of social competencies, particularly those developed in diverse, group-based settings [35,36].
Another dimension is the potential of AI to mediate rather than replace human interactions. Hybrid environments, in which children interact with both AI and human facilitators, have been studied [37]. That study found that while AI enhanced social responsiveness in children with autism, the presence of human practitioners was critical for reinforcing real-world social behaviors. This highlights the importance of designing AI systems that complement human relationships rather than supplant them [38].
Despite these insights, significant research gaps remain. For instance, longitudinal studies are necessary to determine whether children’s reliance on AI affects their ability to independently navigate complex social contexts over time. Additionally, ethical questions about the extent to which AI should mediate children’s social experiences remain unresolved. As it has been suggested [39], interdisciplinary collaborations are essential to understanding the nuanced impacts of AI on social development, particularly during critical developmental periods.

6.3. Adapting AI to Cultural and Societal Norms

The development of culturally sensitive AI socio-emotional learning (SEL) tools represents a critical, yet relatively unexplored, dimension of AI in education. Socio-emotional development in children is profoundly influenced by cultural and societal norms, which shape how emotions are expressed, understood, and managed. Ensuring that AI tools align with these norms is essential for their effectiveness and inclusivity. However, there is a significant research gap in assessing whether existing AI tools are adaptable to diverse cultural contexts and how they can incorporate varied emotional and social norms effectively.
Research [40] on AI-enhanced cross-cultural competence in STEM education demonstrates that AI can foster cultural sensitivity by integrating cultural-historical activity theories. Tools such as AI-powered language translators and culturally specific simulations have been shown to reduce stereotypes and promote cross-cultural understanding. These findings suggest that similar frameworks could be applied to SEL tools to make them more culturally relevant and inclusive [41].
AI-driven SEL tools could incorporate cultural norms by adjusting emotional recognition models to account for culturally specific expressions of emotion. For example, a dialogue system has been proposed that embeds humor, empathy, and cultural sensitivity into educational interactions. This system dynamically adapts to cultural contexts by integrating external knowledge and analyzing linguistic and emotional variations across regions. The study highlights that such nuanced AI systems can promote culturally appropriate emotional and social learning while maintaining high accuracy in recognizing cultural cues [42].
Additionally, AI systems can enhance cultural adaptability by leveraging participatory design. Co-design sessions conducted with children from diverse cultural backgrounds have found that children were more engaged when AI tools aligned with their cultural context. These findings underscore the importance of designing culturally sensitive AI tools that resonate with learners’ socio-cultural realities [43].
Despite these advancements, significant challenges remain. A systematic review [44] highlights the lack of cross-cultural validity in current AI models used for emotion recognition in education. Ethical concerns such as data privacy, algorithmic bias, and equitable access further complicate efforts to create universally adaptable SEL tools [45].

6.4. Critical Evaluation of Ethical Implications

The ethical implications of teaching socio-emotional skills using artificial intelligence (AI) represent a significant and underexplored area in educational research. While AI-driven tools offer the potential to support emotional development by fostering empathy, self-regulation, and interpersonal skills, they also raise profound ethical questions about bias, transparency, and the definition of “appropriate” emotions and behaviors.
One of the core ethical concerns revolves around the origins of emotional programming in AI systems. As highlighted by [46], AI-driven emotional education technologies often rely on facial coding systems and algorithms to recognize and simulate emotions. However, these systems are subject to the biases inherent in the data used for their training, which frequently reflects limited cultural or demographic diversity. This can lead to the reinforcement of stereotypes and the imposition of narrow emotional standards that do not account for the diverse ways in which emotions are expressed across cultures. Such biases can marginalize students who do not conform to these predefined norms, as they may be labeled as less emotionally competent due to the system’s lack of inclusivity [47].
Another critical issue is the ethical oversight of emotional lessons programmed into AI tools. Who decides which emotions and behaviors are desirable, and what framework is used to determine these standards? It has been argued that while AI models for emotion recognition show promise in personalizing emotional learning, they often lack transparency in how they classify and respond to emotions. This opacity raises concerns about the potential for AI systems to dictate or enforce normative emotional behavior, effectively shaping students’ emotional development in ways that may conflict with individual or cultural values [48].
Moreover, there is a risk of over-reliance on AI for emotional development, which could diminish the role of human educators and caregivers in fostering nuanced socio-emotional skills. Scholars have emphasized the importance of balancing AI technologies with human-centric educational practices to ensure that emotional learning retains its depth and adaptability. While AI tools can provide personalized support and real-time feedback, they cannot replicate the complexity of human emotional interactions, which are essential for developing empathy and understanding in real-world contexts [49].
Finally, there is a broader ethical question about the societal implications of delegating emotional education to AI systems. It has been warned that relying on AI to mediate human emotions risks creating a dependency on technology for interpersonal skills, which could erode the authenticity of human relationships. To mitigate these risks, AI systems must be designed with robust ethical guidelines that prioritize inclusivity, transparency, and accountability [50].
Despite growing interest in this area, further research is needed. Longitudinal studies are required to evaluate the long-term impact of AI-mediated emotional development, particularly in diverse cultural settings. Furthermore, interdisciplinary collaborations between educators, psychologists, and technologists are essential to develop frameworks for ethical AI deployment in socio-emotional learning. These frameworks should address how AI systems can foster emotional growth without imposing biased or reductive definitions of appropriate emotions and behaviors.
In light of these ethical concerns, a practical roadmap for responsible AI integration is essential. Policymakers should establish transparent data governance and equity-oriented funding schemes to prevent systemic bias. Educators require structured professional development that combines technical competence with ethical reflection and relational pedagogy. Developers are encouraged to adopt co-design methods involving teachers, parents, and child development experts to ensure that AI applications respect developmental boundaries and promote human-centered values. Through this shared responsibility, AI in early education can evolve as both innovative and socially accountable.

7. Conclusions

This paper reviewed the pedagogical, cognitive, and ethical dimensions of AI use in early childhood education. It highlights both opportunities for personalized and adaptive learning and risks related to equity, ethics, and over-technologization. Synthesizing four complementary frameworks—Sociocultural Theory, Human–Computer Interaction, Distributed Cognition, and the Five Big Ideas—the study proposes an integrative perspective on AI as a mediating cultural tool. Future research should address longitudinal and cross-cultural dimensions of AI-mediated learning, and policy initiatives should aim to align innovation with the developmental and relational values of early education.
Ultimately, the meaningful integration of AI in early education depends on maintaining a continuous dialogue between technological innovation and pedagogical ethics.

Author Contributions

Conceptualization, S.C. and L.C.; Methodology, A.G.I.M.; Validation, A.G.I.M.; Writing—Original Draft Preparation, S.C. and L.C.; Writing—Review & Editing, A.G.I.M.; Supervision, L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

During the preparation of this paper, the authors employed ChatGPT 5.0 to enhance linguistic clarity and readability, rectify grammatical errors, and refine academic expressions in non-native English sections of the manuscript. The AI-generated suggestions were meticulously reviewed, modified, and validated by the authors to ensure adherence to scholarly standards. The authors confirm that no AI-generated interpretations, conclusions, or data analyses were incorporated into the final content, and they assume full responsibility for the accuracy and originality of the research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Eguchi, A. AI-Powered Educational Robotics as a Learning Tool to Promote Artificial Intelligence and Computer Science Education. In Robotics in Education; Springer International Publishing: Cham, Switzerland, 2022; pp. 279–287. ISBN 9783030825430. [Google Scholar]
  2. Algahtani, A. A Comparative Study of Ai-Based Educational Tools: Evaluating User Interface Experience and Educational Impact. J. Theor. Appl. Inf. Technol. 2024, 102, 1746–1758. [Google Scholar]
  3. Troussas, C.; Krouska, A.; Sgouropoulou, C. A Novel Framework of Human-Computer Interaction and Human-Centered Artificial Intelligence in Learning Technology. In Human-Computer Interaction and Augmented Intelligence: The Paradigm of Interactive Machine Learning in Educational Software; Springer Nature: Cham, Switzerland, 2025; pp. 387–431. [Google Scholar]
  4. Su, J.; Yang, W. Artificial Intelligence and Robotics for Young Children: Redeveloping the Five Big Ideas Framework. ECNU Rev. Educ. 2023, 7, 685–698. [Google Scholar] [CrossRef]
  5. Liu, J.; Chen, M.; He, H.; Liu, H.; Luo, W.; Li, H. Forms and Functions Innovation: A Scoping Review of Digital and Intelligence Technologies in Early Childhood Education Practice. AI Brain Child 2025, 1, 8. [Google Scholar] [CrossRef]
  6. Halkiopoulos, C.; Gkintoni, E. Leveraging AI in E-Learning: Personalized Learning and Adaptive Assessment through Cognitive Neuropsychology-A Systematic Analysis. Electronics 2024, 13, 3762. [Google Scholar] [CrossRef]
  7. Abisoye, A. AI Literacy in STEM Education: Policy Strategies for Preparing the Future Workforce. J. Front. Multidiscip. Res. 2023, 4, 17–24. [Google Scholar] [CrossRef]
  8. Das, S.; Mutsuddi, I.; Ray, N. Artificial Intelligence in Adaptive Education: A Transformative Approach. In Advancing Adaptive Education: Technological Innovations for Disability Support; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 21–50. [Google Scholar]
  9. Alenezi, A. Teacher Perspectives on AI-Driven Gamification: Impact on Student Motivation, Engagement, and Learning Outcomes. Inf. Technol. Learn. Tools 2023, 97, 138. [Google Scholar] [CrossRef]
  10. Kerimbayev, N.; Beisov, N.; Kovtun, A.; Nurym, N.; Akramova, A. Robotics in the International Educational Space: Integration and the Experience. Educ. Inf. Technol. 2020, 25, 5835–5851. [Google Scholar] [CrossRef]
  11. Gómez Niño, J.R.; Delgado, L.P.; Chiappe, A.; Ortega González, E. Gamifying Learning with AI: A Pathway to 21st-Century Skills. J. Res. Child. Educ. 2025, 39, 735–750. [Google Scholar] [CrossRef]
  12. Vartiainen, H.; Tedre, M.; Valtonen, T. Learning Machine Learning with Very Young Children: Who Is Teaching Whom? Int. J. Child Comput. Interact. 2020, 25, 100182. [Google Scholar] [CrossRef]
  13. Arif, M.; Ismail, A.; Irfan, S. AI-Powered Approaches for Sustainable Environmental Education in the Digital Age: A Study of Chongqing International Kindergarten. Int. J. Environ. Eng. Educ. 2025, 7, 35–47. [Google Scholar] [CrossRef]
  14. Ahmed, S.; Rahman, M.S.; Kaiser, M.S.; Hosen, A.S.M.S. Advancing Personalized and Inclusive Education for Students with Disability through Artificial Intelligence: Perspectives, Challenges, and Opportunities. Digital 2025, 5, 11. [Google Scholar] [CrossRef]
  15. Muthmainnah; Ibna Seraj, P.M.; Oteir, I. Playing with AI to Investigate Human-Computer Interaction Technology and Improving Critical Thinking Skills to Pursue 21st Century Age. Educ. Res. Int. 2022, 2022, 6468995. [Google Scholar] [CrossRef]
  16. Iivari, N. Empowering Children to Make and Shape Our Digital Futures-from Adults Creating Technologies to Children Transforming Cultures. Int. J. Inf. Learn. Technol. 2020, 37, 279–293. [Google Scholar] [CrossRef]
  17. Indriasari, D.T.; Karman, K. Privacy, Confidentiality, and Data Protection: Ethical Considerations in the Use of the Internet. Int. J. Islam. Educ. Res. Multicult. IJIERM 2023, 5, 431–450. [Google Scholar] [CrossRef]
  18. Lee, I.; Perret, B. Preparing High School Teachers to Integrate AI Methods into STEM Classrooms. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 22 February–1 March 2022; Volume 36, pp. 12783–12791. [Google Scholar] [CrossRef]
  19. Maggo, J.; Maggo, T. Disparities in Educational Opportunities Between Urban and Rural Regions Across the Globe and the Potential of AI to Bridge the Divide. Int. J. Bus. Anal. Intell. IJBAI 2025, 13, 44. [Google Scholar] [CrossRef]
  20. Lim, E.M. The Effects of Pre-Service Early Childhood Teachers’ Digital Literacy and Self-Efficacy on Their Perception of AI Education for Young Children. Educ. Inf. Technol. 2023, 28, 12969–12995. [Google Scholar] [CrossRef]
  21. Alwaqdani, M. Investigating Teachers’ Perceptions of Artificial Intelligence Tools in Education: Potential and Difficulties. Educ. Inf. Technol. 2024, 30, 2737–2755. [Google Scholar] [CrossRef]
  22. Alam, A. Should Robots Replace Teachers? Mobilisation of AI and Learning Analytics in Education. In Proceedings of the 2021 International Conference on Advances in Computing, Communication, and Control (ICAC3), Mumbai, India, 3–4 December 2021; IEEE: New York, NY, USA, 2021. [Google Scholar]
  23. Salloum, S.A.; Alomari, K.M.; Alfaisal, A.M.; Aljanada, R.A.; Basiouni, A. Emotion Recognition for Enhanced Learning: Using AI to Detect Students’ Emotions and Adjust Teaching Methods. Smart Learn. Environ. 2025, 12, 21. [Google Scholar] [CrossRef]
  24. Wankhede, N.; Kale, M.; Shukla, M.; Nathiya, D.; Roopashree, R.; Kaur, P.; Goyanka, B.; Rahangdale, S.; Taksande, B.; Upaganlawar, A.; et al. Leveraging AI for the Diagnosis and Treatment of Autism Spectrum Disorder: Current Trends and Future Prospects. Asian J. Psychiatr. 2024, 101, 104241. [Google Scholar] [CrossRef]
  25. Morris, M.E.; Kathawala, Q.; Leen, T.K.; Gorenstein, E.E.; Guilak, F.; Labhard, M.; Deleeuw, W. Mobile Therapy: Case Study Evaluations of a Cell Phone Application for Emotional Self-Awareness. J. Med. Internet Res. 2010, 12, e10. [Google Scholar] [CrossRef]
  26. Cimino, S.; Carola, V.; Cerniglia, L.; Bussone, S.; Bevilacqua, A.; Tambelli, R. The μ-Opioid Receptor Gene A118G Polymorphism Is Associated with Insecure Attachment in Children with Disruptive Mood Regulation Disorder and Their Mothers. Brain Behav. 2020, 10, e01659. [Google Scholar] [CrossRef]
  27. Kaag, W.; Theune, M.; Huibers, T. Designing a Learning Robot to Encourage Collaboration between Children. In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering; Springer International Publishing: Cham, Switzerland, 2021; pp. 148–168. ISBN 9783030784478. [Google Scholar]
  28. Su, J.; Yang, W.; Yim, I.H.Y.; Li, H.; Hu, X. Early Artificial Intelligence Education: Effects of Cooperative Play and Direct Instruction on Kindergarteners’ Computational Thinking, Sequencing, Self-regulation and Theory of Mind Skills. J. Comput. Assist. Learn. 2024, 40, 2917–2925. [Google Scholar] [CrossRef]
  29. Huang, Y.; Wang, Y.; Xiao, T.; Bei, R.; Zhao, Y.; Lu, Z.; Tong, X. StarRescue: Transforming A Pong Game to Visually Convey the Concept of Turn-Taking to Children with Autism. In Extended Abstracts of the 2022 Annual Symposium on Computer-Human Interaction in Play; Association for Computing Machinery: New York, NY, USA, 2022; pp. 246–252. [Google Scholar]
  30. Duan, S. AI-Assisted Social Skills Training for Preschool Children with ASD: A Study on Early Psychological Interventions. J. Mind Behav. 2025, 46, 436–454. [Google Scholar]
  31. Hofstede, G.J.; Student, J.; Kramer, M.R. The Status–Power Arena: A Comprehensive Agent-Based Model of Social Status Dynamics and Gender in Groups of Children. AI Soc. 2018, 38, 2511–2531. [Google Scholar] [CrossRef]
  32. Fulmer, R.; Zhai, Y. Artificial Intelligence in Human Growth and Development: Applications through the Lifespan. Fam. J. 2025, 33, 5–13. [Google Scholar] [CrossRef]
  33. Neugnot-Cerioli, M.; Laurenty, O.M. The Future of Child Development in the AI Era. Cross-Disciplinary Perspectives between AI and Child Development Experts. arXiv 2024, arXiv:2405.19275. [Google Scholar] [CrossRef]
  34. Williamson, B. The Social Life of AI in Education. Int. J. Artif. Intell. Educ. 2024, 34, 97–104. [Google Scholar] [CrossRef]
  35. Afsharnejad, B.; Whitehorne Smith, P.; Bölte, S.; Milbourn, B.; Girdler, S. A Systematic Review of Implicit versus Explicit Social Skills Group Programs in Different Settings for School-Aged Autistic Children and Adolescents. J. Autism Dev. Disord. 2024; online ahead of print. [Google Scholar] [CrossRef]
  36. Cerniglia, L.; Cimino, S.; Ammaniti, M. What are the effects of screen time on emotion regulation and academic achievements? A three-wave longitudinal study on children from 4 to 8 years of age. J. Early Child. Res. 2021, 19, 145–160. [Google Scholar] [CrossRef]
  37. Heinrich, A.J.; Heitmayer, M.; Smith, E.; Zhang, Y. Experiencing Hybrid Spaces: A Scoping Literature Review of Empirical Studies on Human Experiences in Cyber-Physical Environments. Comput. Hum. Behav. 2025, 164, 108502. [Google Scholar] [CrossRef]
  38. Zhou, L.; Paul, S.; Demirkan, H.; Yuan, L.; Spohrer, J.; Zhou, M.; Basu, J. Intelligence Augmentation: Towards Building Human-Machine Symbiotic Relationship. AIS Trans. Hum.-Comput. Interact. 2021, 13, 243–264. [Google Scholar] [CrossRef]
  39. Cerniglia, L.; Cimino, S.; Marzilli, E.; Pascale, E.; Tambelli, R. Associations among Internet Addiction, Genetic Polymorphisms, Family Functioning, and Psychopathological Risk: Cross-Sectional Exploratory Study. JMIR Ment. Health 2020, 7, e17341. [Google Scholar] [CrossRef] [PubMed]
  40. Patel, A.U.; Gu, Q.; Esper, R.; Maeser, D.; Maeser, N. The Crucial Role of Interdisciplinary Conferences in Advancing Explainable AI in Healthcare. BioMedInformatics 2024, 4, 1363–1383. [Google Scholar] [CrossRef]
  41. McCall, C.S.; Romero, M.E.; Yang, W.; Weigand, T. A Call for Equity-Focused Social-Emotional Learning. School Psych. Rev. 2022, 52, 586–607. [Google Scholar] [CrossRef]
  42. Javed, S.; Zehra, S.R.; Ullah, H.; Naveed, M. How AI Can Detect Emotional Cues in Students, Improving Virtual Learning Environments by Providing Personalized Support and Enhancing Social-Emotional Learning. Rev. Appl. Manag. Soc. Sci. 2025, 8, 665–682. [Google Scholar] [CrossRef]
  43. Barnes, E.; Hutson, J. Developing Empathetic AI: Exploring the Potential of Artificial Intelligence to Understand and Simulate Family Dynamics and Cultural Identity. J. Artif. Intell. Robot. 2024, 2, 1–24. [Google Scholar]
  44. Shan, X.; Xu, Y.; Wang, Y.; Lin, Y.S.; Bao, Y. Cross-Cultural Implications of Large Language Models: An Extended Comparative Analysis. In International Conference on Human-Computer Interaction; Springer Nature: Cham, Switzerland, 2024; pp. 106–118. [Google Scholar]
  45. Henriksen, D.; Creely, E.; Gruber, N.; Leahy, S. Social-Emotional Learning and Generative AI: A Critical Literature Review and Framework for Teacher Education. J. Teach. Educ. 2025, 76, 312–328. [Google Scholar] [CrossRef]
  46. Vistorte, A.O.R.; Deroncele-Acosta, A.; Ayala, J.L.M.; Barrasa, A.; López-Granero, C.; Martí-González, M. Integrating Artificial Intelligence to Assess Emotions in Learning Environments: A Systematic Literature Review. Front. Psychol. 2024, 15, 1387089. [Google Scholar] [CrossRef]
  47. Llorent, V.J.; Núñez-Flores, M.; Kaakinen, M. Inclusive Education by Teachers to the Development of the Social and Emotional Competencies of Their Students in Secondary Education. Learn. Instr. 2024, 91, 101892. [Google Scholar] [CrossRef]
  48. Calandri, E.; Mastrokoukou, S.; Marchisio, C.; Monchietto, A.; Graziano, F. Teacher Emotional Competence for Inclusive Education: A Systematic Review. Behav. Sci. 2025, 15, 359. [Google Scholar] [CrossRef]
  49. Alanazi, S.A.; Shabbir, M.; Alshammari, N.; Alruwaili, M.; Hussain, I.; Ahmad, F. Prediction of Emotional Empathy in Intelligent Agents to Facilitate Precise Social Interaction. Appl. Sci. 2023, 13, 1163. [Google Scholar] [CrossRef]
  50. Hosseini Tabaghdehi, S.A.; Ayaz, Ö. AI Ethics in Action: A Circular Model for Transparency, Accountability and Inclusivity. J. Manag. Psychol. 2025; preprint. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
