Human Motion Perception for Intelligent Systems Awareness and Robotic Control

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Applications".

Deadline for manuscript submissions: 30 April 2026 | Viewed by 953

Special Issue Editors


Guest Editor
Information Technologies Institute of the Centre for Research and Technology Hellas (CERTH), 6th km Charilaou-Thermi, 57001 Thessaloniki, Greece
Interests: robotic applications; human motion analysis; collaborative robots

Guest Editor
Information Technologies Institute of the Centre for Research and Technology Hellas (CERTH), 6th km Charilaou-Thermi, 57001 Thessaloniki, Greece
Interests: human–robot interaction; robot vision; service robot perception and cognition; activity and behavior analysis and modeling; safe and socially aware robot navigation

Guest Editor
Center for Assistive Rehabilitation and Robotics Technologies (CARRT), Mechanical Engineering, University of South Florida, Tampa, FL 33620, USA
Interests: robotics; assistive technologies; virtual reality; user interfaces

Special Issue Information

Dear Colleagues,

A wealth of information is transmitted with every motion of the human body. From the subtle adjustments during simple tasks that ensure efficient adaptation and robustness to high-intensity actions during sports, the characteristics that can be extracted carry multi-faceted information. Detecting meaningful information through motion capture systems, wearables, and, more recently, AI methods has enabled seamless communication between humans and intelligent systems. This has been transformative for healthcare, sports science, the social sciences, rehabilitation, and industrial applications. Amid this abundance of motion sensing, the most fundamental question remains unanswered: which information is meaningful, and how can it be extracted? Indeed, there is no shortage of creative approaches to perceiving human motion, isolating gestures, and encoding them as inputs to be processed. However, given the staggering versatility of the human kinematic chain, the biological differences between individuals, and the different information each task requires, there is no one-size-fits-all method to detect motions or to convey them to an intelligent system. For example, involuntary small motions during post-stroke rehabilitation contain crucial data, while, in an industrial context, cobots focused on intention prediction treat small motions as noise to be discarded.

The goal of this Special Issue is to explore and disseminate novel and creative approaches that extract meaningful information from voluntary and involuntary motions of the human body, tailored to the application, and incorporate it into advanced intelligent systems in an ethical way, as well as to map future directions and identify knowledge gaps. The scope includes, but is not limited to, the following:

  • Computer vision for joint detection
  • Novel methods for modelling the human body 
  • Novel sensors/methods to perceive motion
  • AI systems for semantic perception of motion
  • Deep-learning methods for motion prediction and intention classification
  • Intelligent control of prostheses
  • Control of supernumerary robotic limbs
  • AI-powered personalized rehabilitation/performance analysis
  • Collaborative robots (cobots)
  • Haptic interfaces

Dr. Dimitrios Menychtas
Dr. Dimitrios Giakoumis
Dr. Redwan Alqasemi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human motion detection
  • computer vision
  • artificial intelligence
  • supernumerary robotic limbs
  • cobots

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

14 pages, 2366 KB  
Article
Validating the Performance of VR Headset Eye-Tracking Using Gold Standard Eye-Tracker and MoCap System
by Russell Nathan Todd, Jian Gong, Amy Catherine Banic and Qin Zhu
Information 2026, 17(2), 143; https://doi.org/10.3390/info17020143 - 2 Feb 2026
Viewed by 697
Abstract
The integration of eye-tracking into consumer-grade virtual reality (VR) headsets presents a transformative opportunity for assessing user mental states within simulated, immersive environments. However, the validity of this built-in technology must be established against gold-standard real-world eye-tracking systems. This study employs a novel paradigm using a physically moving object to evaluate the accuracy of dynamic smooth pursuit, a key oculomotor function in mental state assessment. We rigorously validated the performance of the HTC Vive Pro Eye’s integrated eye-tracker against the Tobii Pro Glasses 3 using a high-precision OptiTrack motion capture system as ground-truth for object position. Eight participants completed both 2D and 3D gaze-tracking tasks. In the 2D condition, they tracked a dot on a screen, while in the 3D condition, they tracked a physically moving object. The real-world object trajectories captured by OptiTrack were replicated within a VR environment. Gaze data from both the VR headset and the Tobii glasses were recorded simultaneously and compared to the OptiTrack baseline using Dynamic Time Warping (DTW) to quantify accuracy. Results revealed a task-dependent performance. In the 2D task, the Tobii glasses demonstrated significantly lower DTW distances, indicating superior accuracy. Conversely, in the 3D task, the VR headset significantly outperformed the glasses, showing a closer match to the real object trajectory. This suggests that while traditional eye-trackers excel in constrained 2D contexts, integrated VR eye-tracking is more accurate for naturalistic 3D gaze pursuit. We conclude that VR headset eye-tracking is not only a reliable but also a cost-effective tool for research, particularly offering enhanced performance for studies conducted within immersive 3D simulations.
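The DTW comparison the abstract describes can be sketched as follows. This is a minimal illustration only: the trajectories are synthetic stand-ins (not the study's gaze or OptiTrack data), and the study itself may use a different DTW variant or library.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic Time Warping distance between two trajectories,
    each an array of shape (n_samples, n_dims)."""
    n, m = len(a), len(b)
    # D[i, j] = minimal accumulated cost aligning a[:i] with b[:j]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # Euclidean point cost
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return float(D[n, m])

# Toy example: a "gaze" trace that lags a reference object path still
# scores a small DTW distance, because DTW aligns samples in time.
t = np.linspace(0, 2 * np.pi, 50)
reference = np.column_stack([np.cos(t), np.sin(t)])          # object path
gaze = np.column_stack([np.cos(t - 0.2), np.sin(t - 0.2)])   # lagging gaze
print(dtw_distance(reference, gaze))
```

Because DTW allows non-linear temporal alignment, a lower distance indicates a gaze trace that follows the object's trajectory more closely even when sampling rates or reaction latencies differ, which is why it suits comparing eye-trackers against a motion-capture baseline.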