Search Results (35)

Search Parameters:
Keywords = teaching gesture

31 pages, 1367 KiB  
Review
Embodied Learning Through Immersive Virtual Reality: Theoretical Perspectives for Art and Design Education
by Albert L. Lehrman
Behav. Sci. 2025, 15(7), 917; https://doi.org/10.3390/bs15070917 - 7 Jul 2025
Viewed by 411
Abstract
The integration of Immersive Virtual Reality (IVR) into art and design education represents a significant development in pedagogical strategies built on the principles of embodied cognition. This theoretical study investigates how IVR-mediated embodiment enhances spatial thinking and creative problem-solving in art and design education by examining the taxonomy of embodied learning and principles of embodied cognition. The pedagogical affordances and limitations of IVR for creative learning are analyzed through a combination of empirical research and case studies, such as the Tangible and Embodied Spatial Cognition (TASC) system and Tilt Brush studies. Through gesture, spatial navigation, and environmental manipulation, IVR provides numerous possibilities for externalizing creative ideation; however, its implementation requires negotiating contradictions between virtual and physical materiality. IVR-based educational technologies have the potential to revolutionize teaching and learning. The goal of this paper is to provide educators with a theoretically grounded framework for applying embodied practices in IVR-based learning environments, while also acknowledging the current limitations of this technology.
(This article belongs to the Special Issue Neurocognitive Foundations of Embodied Learning)

18 pages, 4185 KiB  
Article
An Empirical Study on Pointing Gestures Used in Communication in Household Settings
by Tymon Kukier, Alicja Wróbel, Barbara Sienkiewicz, Julia Klimecka, Antonio Galiza Cerdeira Gonzalez, Paweł Gajewski and Bipin Indurkhya
Electronics 2025, 14(12), 2346; https://doi.org/10.3390/electronics14122346 - 8 Jun 2025
Viewed by 404
Abstract
Gestures play an integral role in human communication. Our research aims to develop a gesture understanding system that allows for better interpretation of human instructions in household robotics settings. We conducted an experiment with 34 participants who used pointing gestures to teach concepts to an assistant. Gesture data were analyzed using manual annotations (MAXQDA) and the computational methods of pose estimation and k-means clustering. The study revealed that participants tend to maintain consistent pointing styles, with one-handed pointing and index finger gestures being the most common. Gaze and pointing often co-occur, as do leaning forward and pointing. Using our gesture categorization algorithm, we analyzed gesture information values. As the experiment progressed, the information value of gestures remained stable, although the trends varied between participants and were associated with factors such as age and gender. These findings underscore the need for gesture recognition systems to balance generalization with personalization for more effective human–robot interaction.
(This article belongs to the Special Issue Applications of Computer Vision, 3rd Edition)
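
A minimal illustrative sketch (not the authors' code) of the kind of pipeline named above: keypoints from a pose estimator, clustered with k-means. The array shape, feature choice, file name, and cluster count are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Assumed input: (n_frames, n_keypoints, 2) array of upper-body (x, y)
# keypoints produced by a pose estimator; the file name is a placeholder.
keypoints = np.load("pointing_frames.npy")

# Center each frame on its own centroid so clusters reflect pose shape,
# not where the person stands in the image.
centered = keypoints - keypoints.mean(axis=1, keepdims=True)
features = centered.reshape(len(centered), -1)  # flatten to (n_frames, n_keypoints*2)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
print("frames per gesture cluster:", np.bincount(kmeans.labels_))
```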

28 pages, 3886 KiB  
Article
Assessment and Improvement of Avatar-Based Learning System: From Linguistic Structure Alignment to Sentiment-Driven Expressions
by Aru Ukenova, Gulmira Bekmanova, Nazar Zaki, Meiram Kikimbayev and Mamyr Altaibek
Sensors 2025, 25(6), 1921; https://doi.org/10.3390/s25061921 - 19 Mar 2025
Viewed by 931
Abstract
This research investigates the improvement of learning systems that utilize avatars by shifting from elementary language compatibility to emotion-driven interactions. An assessment of various instructional approaches indicated marked differences in overall effectiveness, with the system showing steady but slight improvements and little variation, suggesting it has the potential for consistent use. Analysis through one-way ANOVA identified noteworthy disparities in post-test results across different teaching strategies. However, the pairwise comparisons with Tukey’s HSD did not reveal significant group differences. Within-group variation and limited sample sizes likely reduced statistical power. Evaluation of effect size demonstrated that the traditional approach had an edge over the avatar-based method, with lessons recorded on video displaying more moderate distinctions. The innovative nature of the system might account for its initial lower effectiveness, as students could need some time to adjust. Participants emphasized the importance of emotional authenticity and cultural adaptation, including incorporating a Kazakh accent, to boost the system’s success. In response, the system was designed with sentiment-driven gestures and facial expressions to improve engagement and personalization. These findings show the potential of emotionally intelligent avatars to encourage more profound learning experiences and the significance of fine-tuning the system for widespread adoption in a modern educational context.
(This article belongs to the Section Sensing and Imaging)
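
For readers unfamiliar with the analysis pipeline reported above (one-way ANOVA followed by Tukey's HSD), here is a hedged sketch with invented scores; the group names and all numbers are placeholders, not the study's data.

```python
# Illustrative only: one-way ANOVA across teaching conditions, then
# Tukey's HSD pairwise comparisons. All numbers are made up.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

avatar      = np.array([62, 70, 65, 68, 71])   # hypothetical post-test scores
video       = np.array([72, 75, 69, 74, 78])
traditional = np.array([76, 80, 74, 79, 83])

f_stat, p_val = f_oneway(avatar, video, traditional)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

scores = np.concatenate([avatar, video, traditional])
groups = ["avatar"] * 5 + ["video"] * 5 + ["traditional"] * 5
# With small samples, ANOVA may be significant while no Tukey pair is,
# mirroring the pattern the abstract reports.
print(pairwise_tukeyhsd(scores, groups))
```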

16 pages, 5234 KiB  
Article
Application of Virtual Reality Technology in Enhancing the Teaching Effectiveness of Coal Mine Disaster Prevention
by Xuelong Li, Shuaifeng Song, Shumin Liu, Dawei Yin, Rui Wang and Bin Gong
Sustainability 2025, 17(1), 79; https://doi.org/10.3390/su17010079 - 26 Dec 2024
Cited by 4 | Viewed by 1282
Abstract
Coal mine disaster prevention is a fundamental course within mining engineering and coal mine safety engineering curricula. Given the complexity and variability of coal mine disasters, it is crucial to cultivate students’ practical awareness to address the challenges encountered in this field. Virtual reality (VR) technology, with its highly realistic and reusable virtual environments, reduces the resource consumption required for on-site training. Additionally, it offers an effective solution for students to safely and efficiently understand coal mine disasters, master the common types of disasters and their causes, and enhance immersive learning, practical skills, and emergency response capabilities. This study integrates virtual simulation experiments with course content and utilizes VR technology to simulate mine environments and disaster processes, which allows students to experience disaster events in a safe virtual setting. By incorporating embodied cognition theory and VR gesture technology, an interactive learning system is developed to improve students’ learning efficiency and engagement. The results indicate that applying VR technology to teaching coal mine disaster prevention and control significantly stimulates students’ interest and facilitates a comprehensive, intuitive understanding of the causes, characteristics, and prevention measures associated with coal mine disasters. Employing virtual reality technology in education not only enhances the students’ awareness of coal mine safety but also provides strong support for the sustainable development of coal mine enterprises.

19 pages, 15889 KiB  
Article
SIGNIFY: Leveraging Machine Learning and Gesture Recognition for Sign Language Teaching Through a Serious Game
by Luca Ulrich, Giulio Carmassi, Paolo Garelli, Gianluca Lo Presti, Gioele Ramondetti, Giorgia Marullo, Chiara Innocente and Enrico Vezzetti
Future Internet 2024, 16(12), 447; https://doi.org/10.3390/fi16120447 - 1 Dec 2024
Cited by 1 | Viewed by 2056
Abstract
Italian Sign Language (LIS) is the primary form of communication for many members of the Italian deaf community. Despite being recognized as a fully fledged language with its own grammar and syntax, LIS still faces challenges in gaining widespread recognition and integration into public services, education, and media. In recent years, advancements in technology, including artificial intelligence and machine learning, have opened up new opportunities to bridge communication gaps between the deaf and hearing communities. This paper presents a novel educational tool designed to teach LIS through SIGNIFY, a Machine Learning-based interactive serious game. The game incorporates a tutorial section, guiding users to learn the sign alphabet, and a classic hangman game that reinforces learning through practice. The developed system employs advanced hand gesture recognition techniques for learning and perfecting sign language gestures. The proposed solution detects and overlays 21 hand landmarks and a bounding box on live camera feeds, making use of an open-source framework to provide real-time visual feedback. Moreover, the study compares the effectiveness of two camera systems: the Azure Kinect, which provides RGB-D information, and a standard RGB laptop camera. Results highlight both systems’ feasibility and educational potential, showcasing their respective advantages and limitations. Evaluations with primary school children demonstrate the tool’s ability to make sign language education more accessible and engaging. This article emphasizes the work’s contribution to inclusive education, highlighting the integration of technology to enhance learning experiences for deaf and hard-of-hearing individuals.
(This article belongs to the Special Issue Advances in Extended Reality for Smart Cities)
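
The 21-landmark overlay on a live camera feed described above matches what the open-source MediaPipe Hands framework provides. Assuming that framework, a minimal sketch of the detection-and-overlay loop (bounding box omitted):

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # standard RGB webcam, as in the paper's laptop setup
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                # Draw the 21 landmarks and their connections on the live feed.
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("landmarks", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```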

14 pages, 6903 KiB  
Communication
Development of Dual-Arm Human Companion Robots That Can Dance
by Joonyoung Kim, Taewoong Kang, Dongwoon Song, Gijae Ahn and Seung-Joon Yi
Sensors 2024, 24(20), 6704; https://doi.org/10.3390/s24206704 - 18 Oct 2024
Viewed by 1518
Abstract
As gestures play an important role in human communication, a number of service robots have been equipped with a pair of human-like arms for gesture-based human–robot interaction. However, the arms of most human companion robots are limited to slow and simple gestures due to the low maximum velocity of the arm actuators. In this work, we present the JF-2 robot, a mobile home service robot equipped with a pair of torque-controlled anthropomorphic arms. Thanks to the low-inertia design of the arm, responsive Quasi-Direct Drive (QDD) actuators, and active compliant control of the joints, the robot can replicate fast human dance motions while remaining safe in its environment. In addition to the JF-2 robot, we also present the JF-mini robot, a scaled-down, low-cost version of the JF-2 robot mainly targeted for commercial use at kindergartens and childcare facilities. The suggested system is validated through three experiments: a safety test, teaching children how to dance along to music, and bringing a requested item to a human subject.
(This article belongs to the Special Issue Intelligent Social Robotic Systems)

18 pages, 7008 KiB  
Article
Could You Say [læp˺ tɒp˺]? Acquisition of Unreleased Stops by Advanced French Learners of English Using Spectrograms and Gestures
by Maelle Amand and Zakaria Touhami
Languages 2024, 9(8), 257; https://doi.org/10.3390/languages9080257 - 25 Jul 2024
Viewed by 1853
Abstract
The present study analyses the production rates of stop-unrelease amongst advanced French learners of English before and after training. Although stop-unrelease may be regarded as a minor issue in English pronunciation teaching, it has received some attention in recent years. Earlier studies showed that amongst “phonetically naive English listeners”, the lack of release of /p/, /t/ and /k/ leads to lower identification scores. The present study analyses the speech of 31 French university students majoring in English to measure the efficiency of an awareness approach on the production of stop-unrelease. The experiment comprised three phases with a test and a control group. During Phase 1, both groups were asked to read pairs of words and sentences containing medial and final voiceless stops. We chose combinations of two identical stops (homorganic) or stops with different places of articulation (heterorganic), as well as stops in utterance-final position, namely “wait for me at that table over there”, “that pan”, or “I like that truck”. In Phase 2, one group watched an explanatory video to increase awareness of stop-unrelease in English before reading the Phase 1 words and sentences a second time. The remaining group was the control group and did not receive any training. Among the participants, 17 read a French text containing pairs of stops in positions similar to those in the English one, which served as an L1 baseline. In total, six students continued until Phase 3 (reading the same stimuli a month later; three in the control group and three in the test group). The results showed that sentence-final stops were overwhelmingly released (above 90%) in both English and French in Phase 1. Training had a significant impact on sentence-final stop-unrelease (p < 0.001), which rose from 9.65% to 72.2%. Progress was also visible in other contexts, as in heterorganic pairs of stops. Based on these results, we strongly recommend the combined use of spectrograms and gestures to raise awareness in a classroom or for online learning, so as to reach multiple learner profiles and further increase efficiency in pronunciation learning.
(This article belongs to the Special Issue Speech Analysis and Tools in L2 Pronunciation Acquisition)
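
Since the recommended training hinges on showing learners spectrograms, here is a minimal sketch of generating one from a recording; the file name is a placeholder, and on such a display an unreleased final stop shows closure silence with no release burst.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("that_pan.wav")   # placeholder file name
if audio.ndim > 1:                           # keep one channel if stereo
    audio = audio[:, 0]

f, t, Sxx = spectrogram(audio, fs=rate, nperseg=512, noverlap=384)

# A released stop shows a brief broadband burst after the closure silence;
# an unreleased one does not.
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram: inspect the stop closure for a release burst")
plt.show()
```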

13 pages, 2390 KiB  
Article
Continuous Recognition of Teachers’ Hand Signals for Students with Attention Deficits
by Ivane Delos Santos Chen, Chieh-Ming Yang, Shang-Shu Wu, Chih-Kang Yang, Mei-Juan Chen, Chia-Hung Yeh and Yuan-Hong Lin
Algorithms 2024, 17(7), 300; https://doi.org/10.3390/a17070300 - 7 Jul 2024
Cited by 3 | Viewed by 1498
Abstract
In the era of inclusive education, students with attention deficits are integrated into the general classroom. To help students keep their focus on the teacher’s instruction throughout the course and follow the teaching pace, this paper proposes a continuous recognition algorithm for capturing teachers’ dynamic gesture signals. This algorithm aims to offer instructional attention cues for students with attention deficits. Using the body landmarks of the teacher’s skeleton extracted by the vision- and machine learning-based MediaPipe BlazePose, the proposed method applies simple rules to detect the teacher’s hand signals dynamically and provides three kinds of attention cues (Pointing to left, Pointing to right, and Non-pointing) during class. Experimental results show that the average accuracy, sensitivity, specificity, precision, and F1 score reached 88.31%, 91.03%, 93.99%, 86.32%, and 88.03%, respectively. By analyzing non-verbal behavior, our method performs well enough to replace verbal reminders from the teacher and can help students with attention deficits in inclusive education.
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
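
The abstract names simple rules over MediaPipe BlazePose landmarks but does not spell them out, so the rule set and thresholds below are illustrative assumptions, not the authors' published algorithm.

```python
from collections import namedtuple

# Hypothetical rule-based classifier in the spirit of the paper: decide
# "Pointing to left", "Pointing to right", or "Non-pointing" from BlazePose
# landmarks (indices 11/12 = shoulders, 15/16 = wrists). x and y are
# normalized image coordinates, so "left"/"right" are image-relative.
def classify_hand_signal(lm, margin=0.15):
    mid_x = (lm[11].x + lm[12].x) / 2       # midpoint between the shoulders
    shoulder_y = min(lm[11].y, lm[12].y)    # higher shoulder (smaller y is up)
    for wrist in (15, 16):
        if lm[wrist].y < shoulder_y:        # arm raised above shoulder level
            if lm[wrist].x < mid_x - margin:
                return "Pointing to left"
            if lm[wrist].x > mid_x + margin:
                return "Pointing to right"
    return "Non-pointing"

# Tiny self-contained demo with fake landmarks; a real run would pass
# results.pose_landmarks.landmark from MediaPipe.
P = namedtuple("P", "x y")
fake = {11: P(0.45, 0.40), 12: P(0.55, 0.40),
        15: P(0.10, 0.25), 16: P(0.60, 0.80)}
print(classify_hand_signal(fake))           # -> Pointing to left
```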

14 pages, 3852 KiB  
Article
Real-Time Visual Feedback Based on MIMUs Technology Reduces Bowing Errors in Beginner Violin Students
by Cecilia Provenzale, Francesco Di Tommaso, Nicola Di Stefano, Domenico Formica and Fabrizio Taffoni
Sensors 2024, 24(12), 3961; https://doi.org/10.3390/s24123961 - 19 Jun 2024
Cited by 3 | Viewed by 1395
Abstract
The violin is one of the most complex musical instruments to learn. The learning process requires constant training and many hours of exercise and is primarily based on a student–teacher interaction in which the teacher guides the beginner through verbal instructions, visual demonstrations, and physical guidance. The teacher’s instruction and practice allow the student to learn gradually how to perform the correct gesture autonomously. Unfortunately, these traditional teaching methods require the constant supervision of a teacher and the interpretation of non-real-time feedback provided after the performance. To address these limitations, this work presents a novel interface (Visual Interface for Bowing Evaluation—VIBE) to facilitate the student’s progression throughout the learning process, even in the absence of direct teacher intervention. The proposed interface allows two key parameters of bowing movements to be monitored, namely the angle between the bow and the string (the α angle) and the bow tilt (the β angle), providing real-time visual feedback on how to correctly move the bow. Results collected from 24 beginners (12 exposed to visual feedback, 12 in a control group) showed a positive effect of the real-time visual feedback on the improvement of bow control. Moreover, the subjects exposed to visual feedback judged it useful for correcting their movement and clear in its presentation of data. Although the task was rated as harder when performed with the additional feedback, the subjects did not perceive the presence of a violin teacher as essential to interpret the feedback.
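
Conceptually, the monitored α angle is the angle between the bow axis and the string axis. A minimal sketch of that computation from two 3D direction vectors follows; the vectors here are invented, and the actual MIMU-to-axis calibration used by VIBE is not shown.

```python
import numpy as np

def angle_between(u, v):
    """Angle in degrees between two 3D direction vectors."""
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

bow_axis = np.array([0.98, 0.05, 0.17])     # assumed, from a bow-mounted MIMU
string_axis = np.array([0.02, 0.99, 0.10])  # assumed string direction
alpha = angle_between(bow_axis, string_axis)
print(f"alpha = {alpha:.1f} deg (90 deg ~ bow perpendicular to the string)")
```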

24 pages, 5405 KiB  
Article
Nonverbal Communication in Classroom Interaction and Its Role in Italian Foreign Language Teaching and Learning
by Pierangela Diadori
Languages 2024, 9(5), 164; https://doi.org/10.3390/languages9050164 - 1 May 2024
Cited by 1 | Viewed by 4035
Abstract
The purpose of this study is to present the state of the art of recent research on nonverbal communication in L2 classroom interaction, in particular on teachers’ and students’ gestures, and then to focus on a case of gestures in an L2 Italian classroom. A corpus of video-recorded interactions (CLODIS) was analyzed to answer the following research questions: How do L2 Italian native teachers behave when addressing international students? Are there differences from what has been observed in other foreign language (L2) teaching contexts? Both previous data-based research on multimodality in L2 classes and the analysis of CLODIS show that teachers select and coordinate multiple semiotic modes as interactional resources to complete various teaching tasks. Furthermore, Italian native teachers use not only the typical pedagogical gestures (both iconic and metaphorical), but also culturally specific emblems that may cause misunderstandings or inappropriate mirroring effects. For these reasons, it is important that L2 teachers develop good multimodal awareness, especially if they teach their mother tongue to foreign students and if they belong to a “contact culture”, as is the case observed in L2 Italian classes.
(This article belongs to the Special Issue Advances in Non-Verbal Communication in the 21st Century)

19 pages, 11345 KiB  
Article
ST-TGR: Spatio-Temporal Representation Learning for Skeleton-Based Teaching Gesture Recognition
by Zengzhao Chen, Wenkai Huang, Hai Liu, Zhuo Wang, Yuqun Wen and Shengming Wang
Sensors 2024, 24(8), 2589; https://doi.org/10.3390/s24082589 - 18 Apr 2024
Cited by 15 | Viewed by 2080
Abstract
Teaching gesture recognition is a technique used to recognize the hand movements of teachers in classroom teaching scenarios. This technology is widely used in education, including for classroom teaching evaluation, enhancing online teaching, and assisting special education. However, current research on gesture recognition in teaching mainly focuses on detecting the static gestures of individual students and analyzing their classroom behavior. To analyze the teacher’s gestures and mitigate the difficulty of single-target dynamic gesture recognition in multi-person teaching scenarios, this paper proposes skeleton-based teaching gesture recognition (ST-TGR), which learns through spatio-temporal representation. This method mainly uses the human pose estimation technique RTMPose to extract the coordinates of the keypoints of the teacher’s skeleton and then inputs the recognized sequence of the teacher’s skeleton into the MoGRU action recognition network for classifying gesture actions. The MoGRU action recognition module mainly learns the spatio-temporal representation of target actions by stacking a multi-scale bidirectional gated recurrent unit (BiGRU) and using improved attention mechanism modules. To validate the generalization of the action recognition network model, we conducted comparative experiments on datasets including NTU RGB+D 60, UT-Kinect Action3D, SBU Kinect Interaction, and Florence 3D. The results indicate that, compared with most existing baseline models, the model proposed in this article exhibits better performance in recognition accuracy and speed.
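
As a rough illustration of the recognition stage, a bidirectional GRU over skeleton keypoint sequences can be sketched as follows; MoGRU's multi-scale stacking and improved attention modules are omitted, and all layer sizes are assumptions.

```python
# Minimal PyTorch sketch of the idea behind the BiGRU classifier: a
# sequence of per-frame skeleton keypoints in, a gesture class out.
import torch
import torch.nn as nn

class SkeletonBiGRU(nn.Module):
    def __init__(self, n_keypoints=17, hidden=128, n_classes=10):
        super().__init__()
        self.gru = nn.GRU(input_size=n_keypoints * 2, hidden_size=hidden,
                          num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(hidden * 2, n_classes)

    def forward(self, x):                   # x: (batch, frames, keypoints*2)
        out, _ = self.gru(x)
        return self.head(out.mean(dim=1))   # temporal average pooling

model = SkeletonBiGRU()
dummy = torch.randn(8, 60, 34)              # 8 clips, 60 frames, 17 (x, y) keypoints
print(model(dummy).shape)                   # torch.Size([8, 10])
```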

10 pages, 204 KiB  
Article
Animal Pneuma: Reflections on Environmental Respiratory Phenomenology
by Lenart Škof
Philosophies 2024, 9(2), 33; https://doi.org/10.3390/philosophies9020033 - 5 Mar 2024
Viewed by 2088
Abstract
This essay is an attempt to propose an outline of a new respiratory animal philosophy. Based on an analysis of the forgetting of breath in Western philosophy, it aims to gesture towards a future, breathful and compassionate world of co-sharing and co-breathing. In the first part, the basic features of the forgetting of breath are explained on the basis of David Abram’s work in respiratory ecophilosophy. This part also introduces an important contribution to modern philosophy by Ludwig Klages. The second part is dedicated to reflections on what I understand as an unfortunate transition from soul and pneuma to spirit and Geist. Based on these analyses, I proceed towards an idiosyncratic thought on the nocturnal mystery of pneuma, with references to ancient Upanishadic and 20th-century phenomenological Levinasian thought. Based on these teachings, I argue that, at the bottom of her existence, the subject is a lung partaking in an immense external lung (Merleau-Ponty). In the fourth part of the essay, I extend my reflections toward comparative animal respiratory phenomenology and argue for immense compassion for all our fellow breathing beings. Finally, in the concluding, fifth part of this essay, I argue for a future biocentric and breathful environment, signifying and bringing a new compassionate-respiratory alliance into the world.
(This article belongs to the Special Issue Environmental Philosophy and Ecological Thought)
15 pages, 2625 KiB  
Article
Exploring the Acquisition of Social Communication Skills in Children with Autism: Preliminary Findings from Applied Behavior Analysis (ABA), Parent Training, and Video Modeling
by Daniela Bordini, Ana Cláudia Moya, Graccielle Rodrigues da Cunha Asevedo, Cristiane Silvestre Paula, Décio Brunoni, Helena Brentani, Sheila Cavalcante Caetano, Jair de Jesus Mari and Leila Bagaiolo
Brain Sci. 2024, 14(2), 172; https://doi.org/10.3390/brainsci14020172 - 9 Feb 2024
Cited by 4 | Viewed by 7427
Abstract
Social communication skills, especially eye contact and joint attention, are frequently impaired in autism spectrum disorder (ASD) and predict functional outcomes. Applied behavior analysis is one of the most common evidence-based treatments for ASD, but it is not accessible to most families in low- and middle-income countries (LMICs), as it is an expensive and intensive treatment that needs to be delivered by highly specialized professionals. Parental training has emerged as an effective alternative. This exploratory study assesses a group parental intervention that uses video modeling to teach the acquisition of eye contact and joint attention. Four graded measures of eye contact and joint attention (full physical prompt, partial physical prompt, gestural prompt, and independent) were assessed in 34 children with ASD and intellectual disability (ID). There was a progressive reduction in the level of prompting required over time to acquire eye contact and joint attention, as well as a positive correlation between the time of exposure to the intervention and the acquisition of abilities. This kind of parent training, using video modeling to teach eye contact and joint attention skills to children with ASD and ID, is a low-cost intervention that can be applied in low-resource settings.

15 pages, 5164 KiB  
Article
Gifted Students’ Actualization of a Rich Task’s Mathematical Potential When Working in Small Groups
by Anita Movik Simensen and Mirjam Harkestad Olsen
Educ. Sci. 2024, 14(2), 151; https://doi.org/10.3390/educsci14020151 - 31 Jan 2024
Cited by 2 | Viewed by 2027
Abstract
This article examines gifted students’ (ages 13–16) groupwork on a rich task in mathematics. This study was conducted in Norway, which has an inclusive education system that does not allow fixed-ability grouping. The purpose of this study was to better understand how to cultivate mathematical learning opportunities for gifted learners in inclusive education systems. The analysis was conducted from a multimodal perspective, in which students’ coordination of speech, gestures, and artifact use was viewed as part of their learning process. The findings contribute to discussions on gifted students as a heterogeneous group. Moreover, our analysis illustrates how giftedness can be invisible, leading to unrealized potential and low achievement. We suggest that more attention be paid to teaching by adapting to gifted students’ individual needs, particularly if the intention is to provide high-quality learning opportunities for gifted students in inclusive settings.
(This article belongs to the Special Issue Teaching and Learning for Gifted and Advanced Learners)

20 pages, 3648 KiB  
Article
Unlocking the Power of Gesture: Using Movement-Based Instruction to Improve First Grade Children’s Spatial Unit Misconceptions
by Eliza L. Congdon and Susan C. Levine
J. Intell. 2023, 11(10), 200; https://doi.org/10.3390/jintelligence11100200 - 13 Oct 2023
Cited by 1 | Viewed by 2790
Abstract
Gestures are hand movements that are produced simultaneously with spoken language and can supplement it by representing semantic information, emphasizing important points, or showing spatial locations and relations. Gestures’ specific features make them a promising tool to improve spatial thinking. Yet, there is recent work showing that not all learners benefit equally from gesture instruction and that this may be driven, in part, by children’s difficulty understanding what an instructor’s gesture is intended to represent. The current study directly compares instruction with gestures to instruction with plastic unit chips (Action) in a linear measurement learning paradigm aimed at teaching children the concept of spatial units. Some children performed only one type of movement, and some children performed both: Action-then-Gesture [AG] or Gesture-then-Action [GA]. Children learned most from the Gesture-then-Action [GA] and Action only [A] training conditions. After controlling for initial differences in learning, the gesture-then-action condition outperformed all three other training conditions on a transfer task. While gesture is cognitively challenging for some learners, that challenge may be desirable: immediately following gesture with a concrete representation to clarify that gesture’s meaning is an especially effective way to unlock the power of this spatial tool and lead to deep, generalizable learning.
(This article belongs to the Special Issue Spatial Intelligence and Learning)