Search Results (4)

Search Parameters:
Keywords = poses, gestures and voice

10 pages, 1379 KiB  
Proceeding Paper
Recognizing Human Emotions Through Body Posture Dynamics Using Deep Neural Networks
by Arunnehru Jawaharlalnehru, Thalapathiraj Sambandham and Dhanasekar Ravikumar
Eng. Proc. 2025, 87(1), 49; https://doi.org/10.3390/engproc2025087049 - 16 Apr 2025
Viewed by 937
Abstract
Body posture dynamics have garnered significant attention in recent years due to their critical role in understanding the emotional states conveyed through human movements during social interactions. Emotions are typically expressed through facial expressions, voice, gait, posture, and overall body dynamics. Among these, body posture provides subtle yet essential cues about emotional states. However, predicting an individual’s gait and posture dynamics poses challenges, given the complexity of human body movement, which involves far more degrees of freedom than facial expressions. Moreover, unlike static facial expressions, body dynamics are inherently fluid and continuously evolving. This paper presents an effective method for recognizing 17 micro-emotions by analyzing kinematic features from the GEMEP dataset using video-based motion capture. We specifically focus on upper-body posture dynamics (skeleton points and angles), capturing movement patterns and their dynamic range over time. Our approach addresses the complexity of recognizing emotions from posture and gait by focusing on key elements of kinematic gesture analysis. The experimental results demonstrate the effectiveness of the proposed model, which uses a deep neural network (DNN) to achieve accuracy rates of 91.48% with angle features and 93.89% with distance features on the GEMEP dataset. These findings highlight the potential of our model to advance posture-based emotion recognition, particularly in applications where the distance and angle dynamics of the human body are key indicators of emotional states.
(This article belongs to the Proceedings of The 5th International Electronic Conference on Applied Sciences)
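To make the feature pipeline concrete, the following is a minimal sketch of the kind of computation the abstract describes: deriving joint angles and inter-keypoint distances from upper-body skeleton points and classifying them with a small DNN. The keypoint names, feature set, and network shape are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: keypoint layout, features, and architecture are assumptions.
import numpy as np
import tensorflow as tf

def angle(a, b, c):
    """Angle at joint b (degrees) formed by keypoints a-b-c, each a length-2 array (x, y)."""
    ba, bc = a - b, c - b
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-8)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def frame_features(kp):
    """kp: dict of upper-body keypoints -> np.array([x, y]). Returns angle + distance features."""
    feats = [
        angle(kp["shoulder_l"], kp["elbow_l"], kp["wrist_l"]),  # left elbow flexion
        angle(kp["shoulder_r"], kp["elbow_r"], kp["wrist_r"]),  # right elbow flexion
        np.linalg.norm(kp["wrist_l"] - kp["wrist_r"]),          # hand-to-hand distance
        np.linalg.norm(kp["wrist_l"] - kp["head"]),             # left hand-to-head distance
        np.linalg.norm(kp["wrist_r"] - kp["head"]),             # right hand-to-head distance
    ]
    return np.array(feats, dtype=np.float32)

def build_dnn(num_features, num_classes=17):
    """Small fully connected classifier over per-frame (or per-sequence) feature vectors."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(num_features,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

# Example with dummy keypoints (real values would come from a pose estimator):
kp = {name: np.random.rand(2) for name in
      ["head", "shoulder_l", "elbow_l", "wrist_l", "shoulder_r", "elbow_r", "wrist_r"]}
x = frame_features(kp)
model = build_dnn(num_features=x.shape[0])
```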

15 pages, 2087 KiB  
Article
Exploring Data Input Problems in Mixed Reality Environments: Proposal and Evaluation of Natural Interaction Techniques
by Jingzhe Zhang, Tiange Chen, Wenjie Gong, Jiayue Liu and Jiangjie Chen
Future Internet 2024, 16(5), 150; https://doi.org/10.3390/fi16050150 - 27 Apr 2024
Cited by 3 | Viewed by 1987
Abstract
Data input within mixed reality environments poses significant interaction challenges, notably in immersive visual analytics applications. This study assesses five numerical input techniques: three benchmark methods (Touch-Slider, Keyboard, Pinch-Slider) and two innovative multimodal techniques (Bimanual Scaling, Gesture and Voice). An experimental design was employed to compare these techniques’ input efficiency, accuracy, and user experience across varying precision and distance conditions. The findings reveal that multimodal techniques surpass slider methods in input efficiency yet are comparable to keyboards; the voice method excels in reducing cognitive load but falls short in accuracy; and the scaling method marginally leads in user satisfaction but imposes a higher physical load. Furthermore, this study outlines these techniques’ pros and cons and offers design guidelines and future research directions.

27 pages, 1622 KiB  
Article
A Data-Driven Approach to Quantify and Measure Students’ Engagement in Synchronous Virtual Learning Environments
by Xavier Solé-Beteta, Joan Navarro, Brigita Gajšek, Alessandro Guadagni and Agustín Zaballos
Sensors 2022, 22(9), 3294; https://doi.org/10.3390/s22093294 - 25 Apr 2022
Cited by 21 | Viewed by 4671
Abstract
In face-to-face learning environments, instructors (sub)consciously measure student engagement to obtain immediate feedback regarding the training they are leading. This constant monitoring process enables instructors to dynamically adapt the training activities according to the perceived student reactions, which aims to keep them engaged in the learning process. However, when shifting from face-to-face to synchronous virtual learning environments (VLEs), assessing to what extent students are engaged in the training process during the lecture has become a challenging and arduous task. Typical indicators such as students’ faces, gestural poses, or even their voices can be easily masked by the intrinsic nature of the virtual domain (e.g., cameras and microphones can be turned off). The purpose of this paper is to propose a methodology, and its associated model, for measuring student engagement in VLEs based on the systematic analysis of more than 30 types of digital interactions and events during a synchronous lesson. To validate the feasibility of this approach, a software prototype has been implemented to measure student engagement in two different learning activities in a synchronous learning session: a masterclass and a hands-on session. The obtained results aim to help those instructors who feel that the connection with their students has weakened due to the virtuality of the learning environment.
(This article belongs to the Special Issue From Sensor Data to Educational Insights)
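As a rough illustration of the idea of scoring engagement from digital interactions, the sketch below aggregates timestamped interaction events into a per-student score. The event types and weights are hypothetical assumptions; the paper's actual model analyzes more than 30 interaction types.

```python
# Hypothetical sketch: aggregate timestamped interaction events into a per-student
# engagement score. Event kinds and weights are illustrative, not the paper's model.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    student_id: str
    kind: str        # e.g. "chat_message", "poll_answer", "camera_on", "hand_raised"
    timestamp: float # seconds since lesson start

# Relative weight of each interaction type (assumed values).
WEIGHTS = {"chat_message": 1.0, "poll_answer": 2.0, "camera_on": 0.5, "hand_raised": 1.5}

def engagement_scores(events, lesson_minutes):
    """Return a length-normalized engagement score per student for one session."""
    totals = defaultdict(float)
    for ev in events:
        totals[ev.student_id] += WEIGHTS.get(ev.kind, 0.0)
    # Normalize by lesson length so sessions of different duration are comparable.
    return {sid: score / lesson_minutes for sid, score in totals.items()}

# Example usage with a few synthetic events from a 60-minute lesson:
events = [Event("s1", "chat_message", 300.0), Event("s1", "poll_answer", 900.0),
          Event("s2", "camera_on", 10.0)]
print(engagement_scores(events, lesson_minutes=60))
```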

28 pages, 2871 KiB  
Article
Adding Pluggable and Personalized Natural Control Capabilities to Existing Applications
by Fabrizio Lamberti, Andrea Sanna, Gilles Carlevaris and Claudio Demartini
Sensors 2015, 15(2), 2832-2859; https://doi.org/10.3390/s150202832 - 28 Jan 2015
Cited by 6 | Viewed by 6169
Abstract
Advancements in input device and sensor technologies led to the evolution of the traditional human-machine interaction paradigm based on the mouse and keyboard. Touch-, gesture- and voice-based interfaces are integrated today in a variety of applications running on consumer devices (e.g., gaming consoles and smartphones). However, to allow existing applications running on desktop computers to utilize natural interaction, significant re-design and re-coding efforts may be required. In this paper, a framework designed to transparently add multi-modal interaction capabilities to applications to which users are accustomed is presented. Experimental observations confirmed the effectiveness of the proposed framework and led to a classification of those applications that could benefit more from the availability of natural interaction modalities.
(This article belongs to the Special Issue HCI In Smart Environments)
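As a loose illustration of the general idea (not the paper's framework), the sketch below maps recognized voice and gesture events onto synthetic keyboard input so an unmodified desktop application can be controlled naturally; pyautogui is an assumed stand-in for the input-injection layer.

```python
# Hypothetical illustration only: translate recognized voice/gesture events into
# synthetic keyboard input so a legacy desktop application can be driven without
# re-coding. pyautogui is an assumed choice of input-injection library.
import pyautogui

# User-editable mapping from natural-interaction events to application commands.
COMMAND_MAP = {
    "voice:save":          lambda: pyautogui.hotkey("ctrl", "s"),
    "voice:undo":          lambda: pyautogui.hotkey("ctrl", "z"),
    "gesture:swipe_right": lambda: pyautogui.press("right"),  # e.g. next slide
    "gesture:swipe_left":  lambda: pyautogui.press("left"),   # e.g. previous slide
}

def dispatch(event_name):
    """Forward a recognized multimodal event to the legacy application, if mapped."""
    action = COMMAND_MAP.get(event_name)
    if action:
        action()

# Example: a speech recognizer detected the spoken command "save" (injects Ctrl+S).
dispatch("voice:save")
```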