
Robust Motion Recognition Based on Sensor Technology

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Communications".

Deadline for manuscript submissions: 31 October 2024 | Viewed by 4327

Special Issue Editors


Guest Editor
Grupo de Tecnología del Habla y Aprendizaje Automático (T.H.A.U. Group), Information Processing and Telecommunications Center, E.T.S.I. de Telecomunicación, Universidad Politécnica de Madrid, 28040 Madrid, Spain
Interests: artificial intelligence; machine learning; deep learning; neural networks; activity recognition; wearable computing; computer vision; biometrics; motion health applications

Guest Editor
Speech Technology Group, Universidad Politecnica de Madrid, 28040 Madrid, Spain
Interests: human activity recognition; speech technology; signal processing; biosignals

Guest Editor
Department of Electronic Engineering, Polytechnic University of Madrid, Madrid, Spain
Interests: artificial intelligence; machine learning; multimedia processing and retrieval; speech technology; affective computing

Special Issue Information

Dear colleagues,

The Special Issue "Robust Motion Recognition Based on Sensor Technology" aims to bring together recent research on motion recognition using sensor technology. Sensor-based motion recognition has become increasingly important in domains such as healthcare, sports, and robotics, as it enables the collection and analysis of accurate and reliable motion data. Motion can be modelled through different sensor technologies, including signals from inertial and physiological sensors embedded in wearables or smart devices, as well as images and video frames from cameras.

This Special Issue covers a wide range of topics, including descriptions of new datasets, signal processing techniques, architectures, learning algorithms, intelligent sensing systems, wearable sensors, and machine/deep learning and artificial intelligence in sensing and imaging, as well as their application to motion modelling and recognition.

Both review articles and original research papers related to motion modelling and recognition are welcome.

Dr. Manuel Gil-Martín
Dr. Rubén San-Segundo
Dr. Fernando Fernández-Martínez
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • pattern recognition
  • sensor technology
  • wearable devices
  • computer vision
  • multi-sensor fusion
  • machine/deep learning
  • signal processing
  • activity recognition
  • biometrics systems
  • healthcare applications

Published Papers (5 papers)


Research

20 pages, 4515 KiB  
Article
Playing Flappy Bird Based on Motion Recognition Using a Transformer Model and LIDAR Sensor
by Iveta Dirgová Luptáková, Martin Kubovčík and Jiří Pospíchal
Sensors 2024, 24(6), 1905; https://doi.org/10.3390/s24061905 - 16 Mar 2024
Viewed by 456
Abstract
A transformer neural network is employed in the present study to predict Q-values in a simulated environment using reinforcement learning techniques. The goal is to teach an agent to navigate and excel in the Flappy Bird game, which became a popular model for control in machine learning approaches. Unlike most top existing approaches that use the game’s rendered image as input, our main contribution lies in using sensory input from LIDAR, which is represented by the ray casting method. Specifically, we focus on understanding the temporal context of measurements from a ray casting perspective and optimizing potentially risky behavior by considering the degree of the approach to objects identified as obstacles. The agent learned to use the measurements from ray casting to avoid collisions with obstacles. Our model substantially outperforms related approaches. Going forward, we aim to apply this approach in real-world scenarios. Full article
(This article belongs to the Special Issue Robust Motion Recognition Based on Sensor Technology)
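The ray-casting sensory input described in this abstract can be illustrated with a minimal 2D sketch: distances from the agent to the nearest obstacle along evenly spaced rays, normalized to [0, 1], form exactly the kind of LIDAR-like feature vector a reinforcement-learning agent can consume. This is an illustrative reconstruction, not the authors' code; the function name, the axis-aligned rectangle obstacle format, and the step-based ray marching are all our own assumptions.

```python
import math

def cast_rays(agent_pos, obstacles, n_rays=8, max_range=100.0, step=1.0):
    """Cast n_rays evenly spaced rays from agent_pos and return, per ray,
    the normalized distance to the first obstacle hit (1.0 = nothing hit
    within max_range). Obstacles are axis-aligned rects (x0, y0, x1, y1)."""
    ax, ay = agent_pos
    distances = []
    for i in range(n_rays):
        angle = 2.0 * math.pi * i / n_rays
        dx, dy = math.cos(angle), math.sin(angle)
        d = step
        while d < max_range:
            px, py = ax + d * dx, ay + d * dy
            if any(x0 <= px <= x1 and y0 <= py <= y1
                   for x0, y0, x1, y1 in obstacles):
                break  # ray blocked at distance d
            d += step
        distances.append(min(d, max_range) / max_range)
    return distances
```

Stacking a few consecutive frames of such vectors would give the temporal context the abstract emphasizes.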

17 pages, 10025 KiB  
Article
The Development of a Stereo Vision System to Study the Nutation Movement of Climbing Plants
by Diego Rubén Ruiz-Melero, Aditya Ponkshe, Paco Calvo and Ginés García-Mateos
Sensors 2024, 24(3), 747; https://doi.org/10.3390/s24030747 - 24 Jan 2024
Viewed by 597
Abstract
Climbing plants, such as common beans (Phaseolus vulgaris L.), exhibit complex motion patterns that have long captivated researchers. In this study, we introduce a stereo vision machine system for the in-depth analysis of the movement of climbing plants, using image processing and computer vision. Our approach involves two synchronized cameras, one lateral to the plant and the other overhead, enabling the simultaneous 2D position tracking of the plant tip. These data are then leveraged to reconstruct the 3D position of the tip. Furthermore, we investigate the impact of external factors, particularly the presence of support structures, on plant movement dynamics. The proposed method is able to extract the position of the tip in 86–98% of cases, achieving an average reprojection error below 4 px, which means an approximate error in the 3D localization of about 0.5 cm. Our method makes it possible to analyze how the plant nutation responds to its environment, offering insights into the interplay between climbing plants and their surroundings. Full article
(This article belongs to the Special Issue Robust Motion Recognition Based on Sensor Technology)
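The two-camera setup (one lateral, one overhead) lends itself to a very simple 3D reconstruction when the views are treated as orthographic projections onto perpendicular planes. The sketch below assumes that toy geometry with already-calibrated world units; the paper's actual pipeline uses full stereo calibration and reports reprojection error in pixels.

```python
import numpy as np

def reconstruct_tip_3d(lateral_yz, overhead_xy):
    """Combine synchronized 2D tip tracks into a 3D track.
    Assumed toy geometry: the lateral camera observes the (y, z) plane and
    the overhead camera the (x, y) plane, both in the same world units."""
    lateral = np.asarray(lateral_yz, dtype=float)   # columns: (y, z)
    overhead = np.asarray(overhead_xy, dtype=float) # columns: (x, y)
    x = overhead[:, 0]
    y = (overhead[:, 1] + lateral[:, 0]) / 2.0      # y is seen by both views: average
    z = lateral[:, 1]
    return np.column_stack([x, y, z])
```

The redundant y coordinate is what makes a consistency check (and hence an error estimate) possible in a real calibrated system.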

13 pages, 2290 KiB  
Article
Enhancing Smart Building Surveillance Systems in Thin Walls: An Efficient Barrier Design
by Taewoo Lee and Hyunbum Kim
Sensors 2024, 24(2), 595; https://doi.org/10.3390/s24020595 - 17 Jan 2024
Viewed by 555
Abstract
This paper introduces an efficient barrier model for enhancing smart building surveillance in harsh environments with thin walls and structures. After formulating the main research problem of minimizing the total number of wall-recognition surveillance barriers, we propose two distinct algorithms, Centralized Node Deployment and Adaptation Node Deployment, designed to address the challenge through strategic placement of surveillance nodes within the smart building. Centralized Node Deployment aligns nodes along the thin walls, ensuring consistent communication coverage and effectively countering potential disruptions. Conversely, Adaptation Node Deployment begins with random node placement that adapts over time to ensure efficient communication across the building. The novelty of this work lies in designing a barrier system that achieves energy efficiency and reinforced surveillance in a thin-wall environment. Instead of a real environment, we use an ad hoc server to run simulations with various scenarios and parameters, through which the two algorithms are executed. We also provide a detailed performance analysis, which shows that both algorithms deliver similar performance metrics over extended periods, indicating their suitability for long-term operation in smart infrastructure. Full article
(This article belongs to the Special Issue Robust Motion Recognition Based on Sensor Technology)
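The coverage arithmetic behind a centralized, wall-aligned deployment can be illustrated as follows. This is a minimal sketch of even node spacing along a single wall segment under a circular sensing-range assumption, not the paper's Centralized Node Deployment algorithm.

```python
import math

def centralized_deployment(wall_length, sensing_range):
    """Place the minimum number of nodes along a thin wall so that their
    sensing disks (radius sensing_range) cover the whole segment.
    Each node covers 2 * sensing_range of wall, so n = ceil(L / (2r))."""
    n = math.ceil(wall_length / (2.0 * sensing_range))
    spacing = wall_length / n
    # Centre each node in its own sub-segment of the wall.
    return [spacing * (i + 0.5) for i in range(n)]
```

An adaptive variant, as described in the abstract, would instead start from random positions and iteratively move nodes until the same coverage condition holds.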

19 pages, 6117 KiB  
Article
Feasibility of 3D Body Tracking from Monocular 2D Video Feeds in Musculoskeletal Telerehabilitation
by Carolina Clemente, Gonçalo Chambel, Diogo C. F. Silva, António Mesquita Montes, Joana F. Pinto and Hugo Plácido da Silva
Sensors 2024, 24(1), 206; https://doi.org/10.3390/s24010206 - 29 Dec 2023
Cited by 1 | Viewed by 1061
Abstract
Musculoskeletal conditions affect millions of people globally; however, conventional treatments pose challenges concerning price, accessibility, and convenience. Many telerehabilitation solutions offer an engaging alternative but rely on complex hardware for body tracking. This work explores the feasibility of a model for 3D Human Pose Estimation (HPE) from monocular 2D videos (MediaPipe Pose) in a physiotherapy context, by comparing its performance to ground truth measurements. MediaPipe Pose was investigated in eight exercises typically performed in musculoskeletal physiotherapy sessions, where the Range of Motion (ROM) of the human joints was the evaluated parameter. This model showed the best performance for shoulder abduction, shoulder press, elbow flexion, and squat exercises. Results have shown a MAPE ranging between 14.9% and 25.0%, Pearson’s coefficient ranging between 0.963 and 0.996, and cosine similarity ranging between 0.987 and 0.999. Some exercises (e.g., seated knee extension and shoulder flexion) posed challenges due to unusual poses, occlusions, and depth ambiguities, possibly related to a lack of training data. This study demonstrates the potential of HPE from monocular 2D videos as a markerless, affordable, and accessible solution for musculoskeletal telerehabilitation approaches. Future work should focus on exploring variations of the 3D HPE models trained on physiotherapy-related datasets, such as the Fit3D dataset, and post-processing techniques to enhance the model’s performance. Full article
(This article belongs to the Special Issue Robust Motion Recognition Based on Sensor Technology)
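The quantities evaluated here, joint Range of Motion and error metrics such as MAPE, reduce to short vector computations over 3D landmarks. The helper names below are our own; MediaPipe Pose itself returns 33 landmarks per frame, from which three points per joint (e.g., shoulder, elbow, wrist for elbow flexion) would be selected.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def mape(truth, pred):
    """Mean Absolute Percentage Error between reference and estimated ROM."""
    truth = np.asarray(truth, float)
    pred = np.asarray(pred, float)
    return 100.0 * np.mean(np.abs((truth - pred) / truth))
```

Tracking `joint_angle` over the frames of an exercise gives a ROM curve, and `mape` against a ground-truth system yields the kind of percentage error the abstract reports.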

15 pages, 1893 KiB  
Article
Reducing the Impact of Sensor Orientation Variability in Human Activity Recognition Using a Consistent Reference System
by Manuel Gil-Martín, Javier López-Iniesta, Fernando Fernández-Martínez and Rubén San-Segundo
Sensors 2023, 23(13), 5845; https://doi.org/10.3390/s23135845 - 23 Jun 2023
Cited by 2 | Viewed by 988
Abstract
Sensor orientation is a critical aspect in a Human Activity Recognition (HAR) system based on tri-axial signals (such as accelerations); different sensor orientations introduce important errors into the activity recognition process. This paper proposes a new preprocessing module to reduce the negative impact of sensor-orientation variability in HAR. Firstly, this module estimates a consistent reference system; then, the tri-axial signals recorded from sensors with different orientations are transformed into this consistent reference system. This new preprocessing has been evaluated to mitigate the effect of different sensor orientations on classification accuracy in several state-of-the-art HAR systems. The experiments were carried out using a subject-wise cross-validation methodology over six different datasets, including movements and postures. The new preprocessing module provided robust HAR performance even when sudden sensor orientation changes were included during data collection in the six datasets. As an example, for the WISDM dataset, sensors with different orientations provoked a significant reduction in the classification accuracy of the state-of-the-art system (from 91.57 ± 0.23% to 89.19 ± 0.26%). This reduction was recovered with the proposed algorithm, increasing the accuracy to 91.46 ± 0.30%, i.e., the same result obtained when all sensors had the same orientation. Full article
(This article belongs to the Special Issue Robust Motion Recognition Based on Sensor Technology)
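A simplified version of the idea of transforming tri-axial signals into a consistent reference system is to rotate each recording so that the mean acceleration (dominated by gravity in mostly static segments) points along a fixed axis. The sketch below uses a single Rodrigues rotation and is a stand-in for the paper's preprocessing module, not its actual algorithm.

```python
import numpy as np

def align_to_gravity(acc):
    """Rotate a tri-axial accelerometer recording (N x 3) so that its mean
    acceleration vector points along +z, making recordings from differently
    oriented sensors comparable. Uses the Rodrigues rotation formula."""
    acc = np.asarray(acc, dtype=float)
    g = acc.mean(axis=0)
    g = g / np.linalg.norm(g)
    target = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, target)
    c = np.dot(g, target)
    if np.isclose(c, 1.0):           # already aligned with +z
        return acc
    if np.isclose(c, -1.0):         # anti-parallel: rotate 180 deg about x
        return acc * np.array([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    R = np.eye(3) + vx + vx @ vx / (1.0 + c)
    return acc @ R.T
```

Gravity fixes only one axis; a full consistent reference system, as in the paper, also needs a heading convention (e.g., from the dominant horizontal motion direction) to pin down the remaining rotation about z.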
