Innovative Sensing Methods for Motion and Behavior Analysis

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 20 July 2026 | Viewed by 6077

Special Issue Editors


Prof. Dr. Koichi Hashimoto
Guest Editor
Graduate School of Information Sciences, Tohoku University, Sendai, Japan
Interests: visual servoing; image processing; robot control; systems biology

Dr. Naoya Chiba
Guest Editor
Graduate School of Information Sciences, Tohoku University, Sendai, Japan
Interests: light transport in projector–camera systems; point cloud processing; applications of point cloud processing methods to biochemistry

Dr. Shohei Ochi
Guest Editor
Graduate School of Medicine, Tohoku University, Sendai, Miyagi 980-8575, Japan
Interests: RFID; animal tracking; autism spectrum disorder; multi-animal behavioral analysis; behavioral neuroscience

Special Issue Information

Dear Colleagues,

This Special Issue of Sensors, entitled "Innovative Sensing Methods for Motion and Behavior Analysis", will bring together cutting-edge research on sensor technologies and methodologies for capturing and interpreting dynamic behaviors in humans and animals. With the proliferation of advanced sensing modalities—ranging from optical systems and high-speed cameras to ultrasonic sensors, RFID tags, GPS modules, and hyperspectral imaging—novel approaches to motion tracking, localization, and behavioral inference are emerging across disciplines. We welcome original articles and comprehensive reviews on topics such as simultaneous localization and mapping (SLAM); behavior digitization using multi-camera setups, body-worn cameras, camera-equipped eyeglasses, and UAV cameras combined with deep-learning algorithms; sensor fusion techniques; vision-based tracking under challenging conditions; and bio-inspired sensing systems. Particular emphasis will be placed on dynamic, time-dependent behaviors and on human–object and animal–environment interactions. Contributions exploring applications in navigation, behavioral neuroscience, sports analytics, animal ecology, robotics, and smart environments are highly encouraged. We also invite submissions on emerging fields such as wearable inertial measurement units (IMUs), soft sensors for biomechanical monitoring, and sensor-based multimodal data analytics for behavioral modeling. This Special Issue aligns with the scope of Sensors by addressing both the development of innovative sensor solutions and their application to real-world motion and behavior analysis.

Prof. Dr. Koichi Hashimoto
Dr. Naoya Chiba
Dr. Shohei Ochi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • motion analysis
  • behavior sensing
  • sensor fusion
  • SLAM
  • hyperspectral imaging
  • RFID
  • high-speed vision
  • animal tracking
  • human–machine interaction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (7 papers)

Research

16 pages, 3108 KB  
Article
A Biomechanical Analysis of Two-Person Emergency Patient Lifting Techniques Using Motion Capture and Ergonomic Assessment
by Xiaoxu Ji, Xin Gao, Paige L. Johnson and Isaac Wheeler
Sensors 2026, 26(9), 2747; https://doi.org/10.3390/s26092747 - 29 Apr 2026
Viewed by 346
Abstract
Emergency responders face a high risk of musculoskeletal disorders (MSDs), particularly lower back injuries, due to frequent patient-handling tasks performed in awkward and dynamic postures. The aim of this study is to utilize dual motion capture systems integrated with a digital human modeling (DHM) ergonomics tool to evaluate the biomechanical effects of two common two-person carrying techniques: facing forward and facing each other. Twenty-two participants lifted a 25 kg mannequin while wearing Xsens motion sensors, and lumbar forces and joint angles were analyzed using Siemens Jack software (v9.0). Peak compressive and anterior–posterior (AP) shear forces, along with trunk, hip, and knee joint angles, were examined. Compressive forces ranged from approximately 948.6 to 2955.6 N, and AP shear forces ranged from 286.0 to 827.0 N. Mean compressive and AP shear forces were higher during the facing-each-other task (1977.3 N and 595.0 N) than during the facing-forward task (1596.0 N and 462.0 N). Males experienced higher spinal loads than females across both tasks. The facing-each-other technique was associated with greater hip flexion, lower knee flexion, and reduced trunk flexion, whereas the facing-forward technique resulted in less hip flexion, greater knee flexion, and greater trunk flexion. Overall, under the conditions of the present study, the facing-forward technique was associated with lower lumbar loading indicators. Integrating motion capture with DHM offers a valuable approach for evaluating realistic rescue tasks and can inform ergonomic training strategies for emergency responders. Full article
(This article belongs to the Special Issue Innovative Sensing Methods for Motion and Behavior Analysis)
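
The per-technique force comparison reported above is straightforward to reproduce once peak lumbar loads have been exported per participant. The sketch below is a minimal, hypothetical Python example of such a summary; the array values and the paired t-test are illustrative stand-ins, not the authors' pipeline (the study derived forces from Xsens capture processed through Siemens Jack).

```python
import numpy as np
from scipy import stats

# Placeholder per-participant peak L4/L5 compressive forces (N). Real values
# came from Xsens capture processed through Siemens Jack, not from this code.
facing_forward = np.array([1450.0, 1620.5, 1580.2, 1710.8, 1495.3])
facing_each_other = np.array([1880.4, 2010.9, 1955.7, 2100.1, 1890.6])

print(f"facing-forward mean:    {facing_forward.mean():.1f} N")
print(f"facing-each-other mean: {facing_each_other.mean():.1f} N")

# Paired comparison: each participant performed both carrying techniques.
t_stat, p_val = stats.ttest_rel(facing_each_other, facing_forward)
print(f"paired t = {t_stat:.2f}, p = {p_val:.4f}")
```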

51 pages, 55716 KB  
Article
A Novel Method for Motion Blur Detection and Quantification Using Signal Analysis on a Controlled Empirical Image Dataset
by Woottichai Nonsakhoo and Saiyan Saiyod
Sensors 2026, 26(8), 2360; https://doi.org/10.3390/s26082360 - 11 Apr 2026
Viewed by 357
Abstract
Motion blur degrades single-frame imaging when relative motion occurs during sensor exposure, yet quantitative validation is difficult because ground-truth motion parameters are rarely available in real images. This paper presents an interpretable, measure-first framework for detecting, localizing, and quantifying motion blur in single-frame grayscale images under a validated operating condition of one-dimensional horizontal uniform motion. The method analyzes each image row as a one-dimensional spatial signal, where Movement Artifact denotes the scanline-level imprint of motion blur retained in the legacy algorithm names MAPE and MAQ. The pipeline combines three stages: Movement Artifact Position Estimation (MAPE) using scanline self-similarity, Reference Origin Point Estimation (ROPE) using robust structural trends, and Movement Artifact Quantification (MAQ), which summarizes blur magnitude as an average horizontal spatial displacement after adaptive filtering. The pipeline is evaluated on a controlled empirical dataset of 110 images of a high-contrast marker acquired at known tangential velocities from 0.0 to 1.0 m/s in 0.1 m/s increments (10 images per level). MAPE achieves 70–90% detection rates across velocities, and ROPE localizes reference origins with 97–99% detection. An empirical polynomial mapping from MAQ to velocity attains R² = 0.9900 with RMSE 0.0229 m/s and MAE 0.0221 m/s over 0.0–0.7 m/s, enabling calibrated velocity estimates from blur measurements within the validated regime. An extended additive-noise robustness analysis further shows that severe perturbation can preserve candidate self-similarity responses while progressively destabilizing reference-origin localization and MAQ pairing, thereby clarifying the empirical boundary of the current controlled single-marker regime. The approach is not claimed to generalize to uncontrolled scenes, non-uniform blur, or multi-dimensional and non-rigid motion. Full article
(This article belongs to the Special Issue Innovative Sensing Methods for Motion and Behavior Analysis)
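
The row-as-signal idea is easy to prototype. The sketch below is a loose analogue, not the published MAPE/ROPE/MAQ algorithms: it estimates horizontal blur extent on a synthetic scanline as the gradient-support width of a blurred step edge, then shows the kind of polynomial calibration from displacement to velocity the paper describes. All data values and thresholds here are invented for illustration.

```python
import numpy as np

def ramp_width(row, thresh_frac=0.25):
    """Blur extent on one scanline: a motion-blurred step edge becomes a
    ramp, so the width of the strong-gradient region ~ blur length (px)."""
    g = np.abs(np.diff(row.astype(float)))
    return int((g > thresh_frac * g.max()).sum())

# Synthetic scanline: a step edge blurred by a 20-px horizontal box kernel.
row = np.convolve(np.r_[np.zeros(100), np.ones(100)], np.ones(20) / 20, mode="same")
print("estimated blur extent:", ramp_width(row), "px")   # ~20

# Hypothetical calibration, displacement (px) -> velocity (m/s), mimicking
# the paper's polynomial mapping fit on known-velocity captures.
disp = np.array([0.0, 6.1, 12.3, 18.0, 24.2, 30.1, 36.3, 42.0])  # made-up
vel = np.linspace(0.0, 0.7, 8)
coeffs = np.polyfit(disp, vel, deg=2)
print("velocity at 20 px:", round(float(np.polyval(coeffs, 20.0)), 3), "m/s")
```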

39 pages, 96608 KB  
Article
Multi-Modal Feature Fusion and Hierarchical Classification for Automated Equine–Human Interaction Behavior Recognition
by Samierra Arora, Emily Kieson, Christine Rudd and Peter A. Gloor
Sensors 2026, 26(7), 2202; https://doi.org/10.3390/s26072202 - 2 Apr 2026
Viewed by 1773
Abstract
Automated recognition of equine–human interaction behaviors from video represents a significant challenge in computational ethology, with critical applications spanning animal welfare assessment, equine-assisted services evaluation, and safety monitoring in equestrian environments. Existing approaches to animal behavior recognition typically focus on single species in isolation, rely solely on facial expression analysis while ignoring full-body posture, or employ flat classification architectures that fail under the severe class imbalances characteristic of naturalistic behavioral datasets. Furthermore, no prior framework integrates simultaneous analysis of both human and equine body language for cross-species interaction classification. This paper presents a novel hierarchical classification framework integrating multi-modal computer vision features to distinguish behavioral states during horse–human encounters. Our methodology employs three complementary feature extraction pipelines: YOLOv8 for spatial relationship modeling, MediaPipe for human postural analysis, and AP-10K for equine body language interpretation. From 28 annotated interaction videos comprising 50,270 temporal samples across five horse breeds, we extract 35 discriminative features capturing proximity dynamics, body orientation, and species-specific behavioral indicators. To address severe class imbalance (18.3:1 ratio between affiliative and avoidant categories), we implement cost-sensitive gradient boosting with automatic class weight optimization within a two-stage hierarchical architecture. The first stage classifies interactions into three parent categories (affiliative, neutral, avoidant) achieving 73.2% balanced accuracy, while stage two discriminates six fine-grained sub-behaviors achieving 88.5% balanced accuracy (under oracle parent-category routing; cascaded end-to-end performance is 62.9% balanced accuracy due to Stage 1 error propagation, identifying parent classification as the primary bottleneck). Notably, our system achieves 85.0% recall on safety-critical avoidant behaviors despite their representation of only 3.8% of the dataset. Extensive ablation studies demonstrate that equine pose features contribute most critically to classification performance, while comprehensive cross-validation analysis confirms model robustness across diverse interaction contexts. The proposed framework establishes the first systematic multimodal cross-species behavioral assessment pipeline in human–animal interaction research, with direct implications for improving equine welfare monitoring and rider safety protocols. Full article
(This article belongs to the Special Issue Innovative Sensing Methods for Motion and Behavior Analysis)
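
To make the two-stage design concrete, here is a minimal, hypothetical sketch in Python/scikit-learn: cost-sensitive gradient boosting via balanced sample weights for the imbalanced parent categories, then per-parent sub-classifiers with cascaded routing at inference. The feature matrix, labels, and class proportions are toy stand-ins, not the paper's data or exact models.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.utils.class_weight import compute_sample_weight

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 35))      # toy stand-in for the 35 fused features
parent = rng.choice(["affiliative", "neutral", "avoidant"],
                    size=600, p=[0.60, 0.35, 0.05])
sub = np.array([f"{p}_{i % 2}" for i, p in enumerate(parent)])  # toy sub-behaviors

# Stage 1: parent category, cost-sensitive via balanced sample weights.
stage1 = HistGradientBoostingClassifier().fit(
    X, parent, sample_weight=compute_sample_weight("balanced", parent))

# Stage 2: one fine-grained classifier per parent category.
stage2 = {}
for cat in np.unique(parent):
    m = parent == cat
    stage2[cat] = HistGradientBoostingClassifier().fit(
        X[m], sub[m], sample_weight=compute_sample_weight("balanced", sub[m]))

# Cascaded inference: Stage 1 errors propagate into Stage 2 routing,
# which is exactly the bottleneck the abstract quantifies.
pred_parent = stage1.predict(X[:5])
pred_sub = [stage2[p].predict(X[i:i + 1])[0] for i, p in enumerate(pred_parent)]
print(list(zip(pred_parent.tolist(), pred_sub)))
```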

19 pages, 1217 KB  
Article
Talking with Actionbits—A Part-Enhanced VLM for Action and Interaction Recognition in Animals
by Yang Yang, Ren Nakagawa, Risa Shinoda, Hiroaki Santo, Kenji Oyama, Takenao Ohkawa and Fumio Okura
Sensors 2026, 26(6), 1969; https://doi.org/10.3390/s26061969 - 21 Mar 2026
Viewed by 494
Abstract
Understanding animal actions and interactions is essential for behavior analysis and ecological monitoring. Although large-scale in-the-wild datasets have advanced animal action recognition, existing methods still struggle with fine-grained motion, spatial relations, and multi-individual interactions. To address these challenges, we introduce AIRA, a unified framework for Action and Interaction Recognition in Animals. Built upon a vision–language model (VLM), AIRA learns in an action-centered representation space defined by body parts and their corresponding motions, thereby improving robustness to background noise and enabling cross-species generalization via a unified mammal-centric part ontology. To model actions, we treat body parts and motion as primary cues and introduce Actionbit tokens—compact representations for parts and motions generated by a large language model (LLM) that encode which parts move and how. We further propose Part-Enhanced Prompt Fine-tuning (PEPF) to make the VLM explicitly sensitive to part and pose cues. Within PEPF, the Action–actionbit Alignment (AbA) module enriches action representations with fine-grained part–motion semantics, and Part-Vision Prompting (PVP) extracts keyframes through action-aware prompting. Experiments across multiple benchmarks show consistent improvements in both action and interaction recognition, highlighting the importance of action-centered adaptation and relational reasoning for understanding animal behavior in the wild. Full article
(This article belongs to the Special Issue Innovative Sensing Methods for Motion and Behavior Analysis)
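
As a rough illustration of the actionbit idea, the sketch below scores a single frame against text prompts composed from (body part, motion) pairs using a stock CLIP model. This is a zero-shot analogue only: AIRA fine-tunes a VLM with learned Actionbit tokens and part-aware prompting (PEPF), none of which is reproduced here, and the part–motion pairs, prompt template, and image path are all invented.

```python
import torch
import clip                     # OpenAI CLIP; stand-in for AIRA's VLM backbone
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hand-written (part, motion) pairs; AIRA generates actionbits with an LLM.
actionbits = [("head", "lowered toward the ground"),
              ("legs", "striking out rapidly"),
              ("ears", "pinned flat back")]
prompts = [f"a photo of an animal with its {part} {motion}"
           for part, motion in actionbits]

# "frame.jpg" is a placeholder keyframe path.
image = preprocess(Image.open("frame.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    img_f = model.encode_image(image)
    txt_f = model.encode_text(text)
    img_f = img_f / img_f.norm(dim=-1, keepdim=True)
    txt_f = txt_f / txt_f.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_f @ txt_f.T).softmax(dim=-1)

for (part, motion), p in zip(actionbits, probs[0].tolist()):
    print(f"{part} / {motion}: {p:.2f}")
```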

16 pages, 1100 KB  
Article
Balance Assessments Using Smartphone Sensor Systems and a Clinician-Led Modified BESS Test in Soccer Athletes with Hip-Related Pain: An Exploratory Cross-Sectional Study
by Alexander Puyol, Matthew King, Charlotte Ganderton, Shuwen Hu and Oren Tirosh
Sensors 2026, 26(3), 1061; https://doi.org/10.3390/s26031061 - 6 Feb 2026
Viewed by 723
Abstract
Background: The Balance Error Scoring System (BESS) is the most widely used static postural balance assessment tool; it relies on visual observation and has been adopted as the gold standard in both clinical and field settings. However, because of its low inter-rater reliability and limited sensitivity, the BESS can miss subtle balance deficits and lead to missed or inaccurate diagnoses, particularly in athletic populations. Smartphone technology using motion sensors may offer an alternative means of providing quantitative feedback to healthcare clinicians performing balance assessments. The primary aim of this study was to explore the discriminative validity of a novel smartphone-based cloud system for measuring balance remotely in soccer athletes with and without hip pain. Methods: This was an exploratory cross-sectional study. A total of 64 Australian soccer athletes (128 hips, 28% female) aged between 18 and 40 years completed single-leg and tandem stance balance tests that were scored using the modified BESS test and quantified using a smartphone attached to the lower back. An Exploratory Factor Analysis (EFA) and a clustered Receiver Operating Characteristic (ROC) analysis using the Area Under the Curve (AUC) were used to explore the discriminative validity of the smartphone sensor system relative to the modified BESS test. A Linear Mixed-Effects Analysis of Covariance (ANCOVA) was used to determine statistical differences in static balance measures between individuals with and without hip-related pain. Results: EFA revealed that the first factor primarily captured variance related to the smartphone measurements, while the second factor was associated with modified BESS test scores. The ROC and AUC analysis showed that smartphone sway measurements in the anterior–posterior and mediolateral directions during single-leg stance had an acceptable to excellent level of accuracy in distinguishing between individuals with and without hip-related pain (AUC = 0.72–0.80). The Linear Mixed-Effects ANCOVA found that individuals with hip-related pain had significantly less single-leg balance variability and magnitude in the anteroposterior and mediolateral directions than individuals without hip-related pain (p < 0.05). Conclusion: Given the ability of smartphone technology to discriminate between individuals with and without hip-related pain during single-leg static balance tasks, it is recommended that the technology be used in addition to the modified BESS test to optimise a clinician-led assessment and to further guide clinical balance decision-making. While the study supports smartphone technology as a method of assessing static balance, its use in measuring balance during dynamic movements needs further research. Full article
(This article belongs to the Special Issue Innovative Sensing Methods for Motion and Behavior Analysis)
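
Sway metrics of the kind used above are simple to compute from a lower-back accelerometer trace. The sketch below is a hypothetical Python example: RMS sway along the anterior–posterior axis per athlete, then a ROC AUC against pain labels. Axis conventions, trace shapes, and labels are all made up, and note the study actually found less sway variability in the hip-pain group, so the discriminative direction may be inverted relative to this toy.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def sway_rms(trace, axis=0):
    """RMS sway along one axis of a lower-back accelerometer trace."""
    a = trace[:, axis] - trace[:, axis].mean()
    return float(np.sqrt(np.mean(a ** 2)))

rng = np.random.default_rng(1)
# 40 toy athletes, each a (samples x 3 axes) trace; axis 0 = anterior-posterior.
traces = [rng.normal(0.0, 0.05 + 0.03 * (i % 2), size=(3000, 3)) for i in range(40)]
hip_pain = np.array([i % 2 for i in range(40)])     # invented labels

ap_rms = np.array([sway_rms(t, axis=0) for t in traces])
print("toy AUC (AP sway, single-leg):", round(roc_auc_score(hip_pain, ap_rms), 2))
```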

18 pages, 3369 KB  
Article
3D Local Feature Learning and Analysis on Point Cloud Parts via Momentum Contrast
by Xuanmeng Sha, Tomohiro Mashita, Naoya Chiba and Liyun Zhang
Sensors 2026, 26(3), 1007; https://doi.org/10.3390/s26031007 - 3 Feb 2026
Viewed by 626
Abstract
Self-supervised contrastive learning has demonstrated remarkable effectiveness in learning visual representations without labeled data, yet its application to 3D local feature learning from point clouds remains underexplored. Existing methods predominantly focus on complete object shapes, neglecting the critical challenge of recognizing partial observations commonly encountered in real-world 3D perception. We propose a momentum contrastive learning framework specifically designed to learn discriminative local features from randomly sampled point cloud regions. By adapting the MoCo architecture with PointNet++ as the feature backbone, our method treats local parts of point clouds as the fundamental contrastive learning units, combined with carefully designed augmentation strategies including random dropout and translation. Experiments on ShapeNet demonstrate that our approach effectively learns transferable local features, that approximately 30% of an object's local geometry represents a practical threshold for effective learning when simulating real-world occlusion scenarios, and that it achieves comparable downstream classification accuracy while reducing training time by 16%. Full article
(This article belongs to the Special Issue Innovative Sensing Methods for Motion and Behavior Analysis)
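
The momentum-contrast recipe described above is compact enough to sketch. The PyTorch example below uses a tiny stand-in encoder in place of PointNet++ and a random queue in place of accumulated negatives; every shape, augmentation parameter, and hyperparameter here is illustrative, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

class TinyPointEncoder(torch.nn.Module):
    """Stand-in for the PointNet++ backbone used in the paper."""
    def __init__(self, dim=128):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, dim))
    def forward(self, pts):                          # pts: (B, N, 3)
        return F.normalize(self.mlp(pts).max(dim=1).values, dim=-1)

def augment(parts):
    """Random point dropout plus a small random translation."""
    keep = torch.rand(parts.shape[1]) > 0.2
    return parts[:, keep] + 0.05 * torch.randn(1, 1, 3)

q_enc, k_enc = TinyPointEncoder(), TinyPointEncoder()
k_enc.load_state_dict(q_enc.state_dict())            # key encoder starts as a copy
queue = F.normalize(torch.randn(4096, 128), dim=-1)  # stand-in negative queue
m, tau = 0.999, 0.07                                 # momentum, temperature

parts = torch.randn(8, 256, 3)                       # batch of sampled local parts
q = q_enc(augment(parts))                            # query view
with torch.no_grad():
    for pq, pk in zip(q_enc.parameters(), k_enc.parameters()):
        pk.mul_(m).add_((1.0 - m) * pq)              # momentum update
    k = k_enc(augment(parts))                        # key view

l_pos = (q * k).sum(dim=-1, keepdim=True)            # positive logits (8, 1)
l_neg = q @ queue.T                                  # negative logits (8, 4096)
logits = torch.cat([l_pos, l_neg], dim=1) / tau
loss = F.cross_entropy(logits, torch.zeros(8, dtype=torch.long))
print(float(loss))                                   # InfoNCE loss for this batch
```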

32 pages, 37329 KB  
Article
Movement Artifact Direction Estimation Based on Signal Processing Analysis of Single-Frame Images
by Woottichai Nonsakhoo and Saiyan Saiyod
Sensors 2025, 25(24), 7487; https://doi.org/10.3390/s25247487 - 9 Dec 2025
Cited by 1 | Viewed by 1096
Abstract
Movement artifact direction and magnitude are critical parameters in noise detection and image analysis, especially for single-frame images where temporal information is unavailable. This paper introduces the Movement Artifact Direction Estimation (MADE) algorithm, a signal processing-based approach that performs 3D geometric analysis to estimate both the direction (in degrees) and weighted quantity (in pixels) of movement artifacts. Motivated by computational challenges in medical image quality assessment systems such as LUIAS, this work investigates directional multiplicative noise characterization using controlled experimental conditions with optical camera imaging. The MADE algorithm operates on multi-directional quantification outputs from a preprocessing pipeline—MAPE, ROPE, and MAQ. The methodology is designed for computational efficiency and instantaneous processing, providing interpretable outputs. Experimental results using precision-controlled apparatus demonstrate robust estimation of movement artifact direction and magnitude across a range of image shapes and velocities, with principal outputs aligning closely to ground truth parameters. The proposed MADE algorithm offers a methodological proof of concept for movement artifact analysis in single-frame images, emphasizing both directional accuracy and quantitative assessment under controlled imaging conditions. Full article
(This article belongs to the Special Issue Innovative Sensing Methods for Motion and Behavior Analysis)
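
The final direction estimate can be thought of as a weighted average over per-direction artifact magnitudes. The sketch below is a simplified planar analogue, not the paper's 3D geometric analysis: it combines hypothetical MAQ-style magnitudes along four analysis directions using the standard axial (mod-180°) circular mean.

```python
import numpy as np

# Hypothetical MAQ-style artifact magnitudes (px) along four analysis
# directions; the real MADE algorithm performs a 3D geometric analysis.
dirs_deg = np.array([0.0, 45.0, 90.0, 135.0])
magnitudes = np.array([18.2, 11.5, 2.1, 3.4])

# Axial (mod-180 deg) weighted mean: double the angles so a blur axis and
# its opposite reinforce, sum, then halve the resulting angle.
theta = np.deg2rad(dirs_deg)
z = (magnitudes * np.exp(2j * theta)).sum()
est_dir = (np.rad2deg(np.angle(z)) / 2.0) % 180.0
concentration = np.abs(z) / magnitudes.sum()         # 1.0 = perfectly aligned
print(f"estimated blur axis ~ {est_dir:.1f} deg (concentration {concentration:.2f})")
```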
