Search Results (61)

Search Parameters:
Keywords = virtual IMU

19 pages, 1363 KB  
Article
Evaluation Study of Pavement Condition Using Digital Twins and Deep Learning on IMU Signals
by Luis-Dagoberto Gurrola-Mijares, José-Manuel Mejía-Muñoz, Oliverio Cruz-Mejía, Abraham-Leonel López-León and Leticia Ortega-Máynez
Future Internet 2025, 17(10), 436; https://doi.org/10.3390/fi17100436 - 26 Sep 2025
Viewed by 494
Abstract
Traditional road asset management relies on periodic, often inefficient, inspections. Digital Twins offer a paradigm shift towards proactive, data-driven maintenance by creating a real-time virtual replica of physical infrastructure. This paper proposes a comprehensive, formalized framework for a highway Digital Twin, structured into three integrated components: a Physical Space, which defines key performance indicators through mathematical state vectors; a Data Interconnection layer for real-time data processing; and a Virtual Space equipped with hybrid models. We provide a formal definition of these state vectors and a dynamic synchronization mechanism between the physical and virtual spaces. In this study, we focused on pavement condition assessment using a data-driven component built on accessible technology. This study shows the synergy between the Digital Twin and deep learning, specifically by integrating advanced analytical models within the Virtual Space for intelligent pavement condition assessment. To validate this approach, a case study was conducted to classify road surface anomalies using low-cost Inertial Measurement Unit (IMU) data. We evaluated several machine learning classifiers and introduced a novel parallel Gated Recurrent Unit network. The results demonstrate that our proposed architecture achieved superior performance, with an accuracy of 89.5% and an F1-score of 0.875, significantly outperforming traditional methods. The findings validate the viability of the proposed Digital Twin framework and highlight its potential to achieve high-precision pavement monitoring using low-cost sensor data, a critical step towards intelligent road infrastructure management. Full article
(This article belongs to the Special Issue Advances in Smart Environments and Digital Twin Technologies)
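The abstract above describes a "parallel" Gated Recurrent Unit network for classifying IMU signals but gives no architectural details. The sketch below shows one plausible reading of the idea in plain NumPy: two independent GRU branches run over the same IMU sequence and their final states are concatenated into a feature vector. All dimensions, weights, and the two-branch layout are illustrative assumptions, not the paper's model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: x is the input sample, h the previous hidden state."""
    z = sigmoid(Wz @ x + Uz @ h)                # update gate
    r = sigmoid(Wr @ x + Ur @ h)                # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))    # candidate state
    return (1 - z) * h + z * h_tilde

def parallel_gru(seq, params_a, params_b, hidden=8):
    """Run two independent GRU branches over the same IMU sequence and
    concatenate their final hidden states (one reading of 'parallel')."""
    ha = np.zeros(hidden)
    hb = np.zeros(hidden)
    for x in seq:
        ha = gru_step(x, ha, *params_a)
        hb = gru_step(x, hb, *params_b)
    return np.concatenate([ha, hb])

rng = np.random.default_rng(0)
d, hdim = 3, 8                                  # 3 accelerometer axes (assumed)
make = lambda: tuple(rng.normal(scale=0.1, size=s)
                     for s in [(hdim, d), (hdim, hdim)] * 3)
seq = rng.normal(size=(20, d))                  # 20 synthetic IMU samples
feat = parallel_gru(seq, make(), make(), hidden=hdim)
print(feat.shape)
```

In a full classifier, a dense softmax layer over `feat` would produce the road-anomaly class probabilities.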

28 pages, 3409 KB  
Article
Research on GNSS/MEMS IMU Array Fusion Localization Method Based on Improved Grey Prediction Model
by Yihao Chen, Jieyu Liu, Weiwei Qin and Can Li
Micromachines 2025, 16(9), 1040; https://doi.org/10.3390/mi16091040 - 11 Sep 2025
Viewed by 2795
Abstract
To address the issue of decreased positioning accuracy caused by interference or blockage of GNSS signals in vehicle navigation systems, this paper proposes a GNSS/MEMS IMU array fusion localization method based on an improved grey prediction model. First, a multi-feature fusion GNSS confidence evaluation algorithm is designed to assess the reliability of GNSS data in real time using indicators such as signal strength, satellite visibility, and solution consistency; second, to overcome the limitations of traditional grey prediction models in processing vehicle complex motion data, two key improvements are proposed: (1) a dynamic background value optimization method based on vehicle motion characteristics, which dynamically adjusts the weight coefficients in the background value construction according to vehicle speed, acceleration, and road curvature, enhancing the model’s sensitivity to changes in vehicle motion state; (2) a residual sequence compensation mechanism, which analyzes the variation patterns of historical residual sequences to accurately correct the prediction results, significantly improving the model’s prediction accuracy in nonlinear motion scenarios; finally, an adaptive fusion framework under normal and denied GNSS conditions is constructed, which directly fuses data when GNSS is reliable, and uses the improved grey model prediction results as virtual measurements for fusion during signal denial. Simulation and vehicle experiments verify that, compared to the traditional GM(1,1) model, the proposed method improves prediction accuracy by 31%, 52%, and 45% in straight, turning, and acceleration scenarios, respectively; in a 30-s GNSS denial scenario, the accuracy is improved by over 79% compared to pure INS methods. Full article

41 pages, 4171 KB  
Article
Development of a System for Recognising and Classifying Motor Activity to Control an Upper-Limb Exoskeleton
by Artem Obukhov, Mikhail Krasnyansky, Yaroslav Merkuryev and Maxim Rybachok
Appl. Syst. Innov. 2025, 8(4), 114; https://doi.org/10.3390/asi8040114 - 19 Aug 2025
Viewed by 941
Abstract
This paper addresses the problem of recognising and classifying hand movements to control an upper-limb exoskeleton. To solve this problem, a multisensory system based on the fusion of data from electromyography (EMG) sensors, inertial measurement units (IMUs), and virtual reality (VR) trackers is proposed, which provides highly accurate detection of users’ movements. Signal preprocessing (noise filtering, segmentation, normalisation) and feature extraction were performed to generate input data for regression and classification models. Various machine learning algorithms are used to recognise motor activity, ranging from classical algorithms (logistic regression, k-nearest neighbors, decision trees) and ensemble methods (random forest, AdaBoost, eXtreme Gradient Boosting, stacking, voting) to deep neural networks, including convolutional neural networks (CNNs), gated recurrent units (GRUs), and transformers. The algorithm for integrating machine learning models into the exoskeleton control system is considered. In experiments aimed at abandoning proprietary tracking systems (VR trackers), absolute position regression was performed using data from IMU sensors with 14 regression algorithms; the random forest ensemble provided the best accuracy (mean absolute error = 0.0022 metres). The task of classifying nine categories of activity is then considered. Ablation analysis showed that IMU and VR trackers provide a sufficiently informative minimum, while adding EMG introduces noise, which degrades the performance of simpler models but is successfully compensated for by deep networks. In the classification task using all signals, the maximum result (99.2%) was obtained on Transformer; the fully connected neural network generated slightly worse results (98.4%). When using only IMU data, fully connected neural network, Transformer, and CNN–GRU networks provide 100% accuracy.
Experimental results confirm the effectiveness of the proposed architectures for motor activity classification, as well as the use of a multi-sensor approach that allows one to compensate for the limitations of individual types of sensors. The obtained results make it possible to continue research in this direction towards the creation of control systems for upper exoskeletons, including those used in rehabilitation and virtual simulation systems. Full article

20 pages, 2437 KB  
Article
A Skill-Inspired Adaptive Fuzzy Control Framework for Symmetric Gait Tracking with Sparse Sensor Fusion in Lower-Limb Exoskeletons
by Loqmane Bencharif, Abderahim Ibset, Hanbing Liu, Wen Qi, Hang Su and Samer Alfayad
Symmetry 2025, 17(8), 1265; https://doi.org/10.3390/sym17081265 - 7 Aug 2025
Viewed by 798
Abstract
This paper presents a real-time framework for bilateral gait reconstruction and adaptive joint control using sparse inertial sensing. The system estimates full lower-limb motion from a single-side inertial measurement unit (IMU) by applying a pipeline that includes signal smoothing, temporal alignment via Dynamic Time Warping (DTW), and motion modeling using Gaussian Mixture Models with Regression (GMM-GMR). Contralateral leg trajectories are inferred using both ideal and adaptive symmetry-based models to capture inter-limb variations. The reconstructed motion serves as reference input for joint-level control. A classical Proportional–Integral–Derivative (PID) controller is first evaluated, demonstrating satisfactory results under simplified dynamics but notable performance loss when virtual stiffness and gravity compensation are introduced. To address this, an adaptive fuzzy PID controller is implemented, which dynamically adjusts control gains based on real-time tracking error through a fuzzy inference system. This approach enhances control stability and motion fidelity under varying conditions. The combined estimation and control framework enables accurate bilateral gait tracking and smooth joint control using minimal sensing, offering a practical solution for wearable robotic systems such as exoskeletons or smart prosthetics. Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Fuzzy Control)
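The pipeline above uses Dynamic Time Warping (DTW) for temporal alignment of gait signals. DTW itself is standard; the pure-Python sketch below computes the cumulative cost matrix and backtracks the optimal alignment path between two 1-D signals. The sample sequences are invented, and the paper's smoothing and GMM-GMR stages are not shown.

```python
import math

def dtw_path(a, b):
    """Dynamic Time Warping between two 1-D sequences; returns the
    cumulative alignment cost and the optimal path as (i, j) pairs."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    # Backtrack from (n, m) to (0, 0), always taking the cheapest predecessor.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((D[i - 1][j - 1], i - 1, j - 1),
                      (D[i - 1][j], i - 1, j),
                      (D[i][j - 1], i, j - 1))
    return D[n][m], path[::-1]

# Invented hip-angle traces: the right-leg cycle lags and stretches the left's.
left = [0.0, 0.2, 0.9, 1.0, 0.4]
right = [0.0, 0.9, 1.0, 1.0, 0.5, 0.1]
cost, path = dtw_path(left, right)
print(round(cost, 2), path[0], path[-1])
```

The returned `path` gives the index pairing used to align one leg's cycle to the other before symmetry-based inference.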

18 pages, 1941 KB  
Article
Design of Virtual Sensors for a Pyramidal Weathervaning Floating Wind Turbine
by Hector del Pozo Gonzalez, Magnus Daniel Kallinger, Tolga Yalcin, José Ignacio Rapha and Jose Luis Domínguez-García
J. Mar. Sci. Eng. 2025, 13(8), 1411; https://doi.org/10.3390/jmse13081411 - 24 Jul 2025
Viewed by 562
Abstract
This study explores virtual sensing techniques for the Eolink floating offshore wind turbine (FOWT), which features a pyramidal platform and a single-point mooring system that enables weathervaning to maximize power production and reduce structural loads. To address the challenges and costs associated with monitoring submerged components, virtual sensors are investigated as an alternative to physical instrumentation. The main objective is to design a virtual sensor of mooring hawser loads using a reduced set of input features from GPS, anemometer, and inertial measurement unit (IMU) data. A virtual sensor is also proposed to estimate the bending moment at the joint of the pyramid masts. The FOWT is modeled in OrcaFlex, and a range of load cases is simulated for training and testing. Under defined sensor sampling conditions, both supervised and physics-informed machine learning algorithms are evaluated. The models are tested under aligned and misaligned environmental conditions, as well as across operating regimes below and above rated conditions. Results show that mooring tensions can be estimated with high accuracy, while bending moment predictions also perform well, though with lower precision. These findings support the use of virtual sensing to reduce instrumentation requirements in critical areas of the floating wind platform. Full article
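A virtual sensor of the kind described above is, at its simplest, a regression model trained on simulator output: measured features in, unmeasured load out. The sketch below fits a linear least-squares baseline on synthetic data standing in for OrcaFlex output; the feature names, dimensions, and data are invented, and the paper evaluates richer supervised and physics-informed models than this.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for simulator output: 4 input features (e.g. wind
# speed, platform pitch/roll, GPS surge) and one hawser-load channel.
X = rng.normal(size=(500, 4))
true_w = np.array([3.0, -1.5, 0.8, 2.2])       # hidden "physics" (invented)
y = X @ true_w + 0.05 * rng.normal(size=500)   # load with sensor noise

# Training: ridge-regularised least squares for the linear virtual sensor.
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)

# Testing: predict mooring loads on unseen samples.
X_test = rng.normal(size=(100, 4))
y_pred = X_test @ w
print(np.round(w, 2))
```

The recovered weights approximate the hidden coefficients, which is the whole premise of replacing a physical load cell with a model.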

26 pages, 9462 KB  
Article
A Framework for Autonomous UAV Navigation Based on Monocular Depth Estimation
by Jonas Gaigalas, Linas Perkauskas, Henrikas Gricius, Tomas Kanapickas and Andrius Kriščiūnas
Drones 2025, 9(4), 236; https://doi.org/10.3390/drones9040236 - 23 Mar 2025
Cited by 3 | Viewed by 4632
Abstract
UAVs are vastly used in practical applications such as reconnaissance and search and rescue or other missions which typically require experienced operators. Autonomous drone navigation could aid in situations where the environment is unknown, GPS or radio signals are unavailable, and there are no existing 3D models to preplan a trajectory. Traditional navigation methods employ multiple sensors: LiDAR, sonar, inertial measurement units (IMUs), and cameras. This increases the weight and cost of such drones. This work focuses on autonomous drone navigation from point A to point B using visual information obtained from a monocular camera in a simulator. The solution utilizes a depth image estimation model to create an occupancy grid map of the surrounding area and uses an A* path planning algorithm to find optimal paths to end goals while navigating around the obstacles. The simulation is conducted using AirSim in Unreal Engine. With this work, we propose a framework and scenarios in three different open-source virtual environments, varying in complexity, to test and compare autonomous UAV navigation methods based on vision. In this study, fine-tuned models using synthetic RGB and depth image data were used for each environment, demonstrating a noticeable improvement in depth estimation accuracy, with reductions in Mean Absolute Percentage Error (MAPE) from 120.45% to 33.41% in AirSimNH, from 70.09% to 8.04% in Blocks, and from 121.94% to 32.86% in MSBuild2018. While the proposed UAV autonomous navigation framework utilizing depth images directly from AirSim achieves 38.89%, 87.78%, and 13.33% success rates of reaching goals in AirSimNH, Blocks, and MSBuild2018 environments, respectively, the method with pre-trained depth estimation models fails to reach any end points of the scenarios. The fine-tuned depth estimation models enhance performance, increasing the number of reached goals by 3.33% for AirSimNH and 72.22% for Blocks. 
These findings highlight the benefits of adapting vision-based models to specific environments, improving UAV autonomy in visually guided navigation tasks. Full article
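The navigation step the abstract describes — A* path planning over an occupancy grid built from estimated depth — can be sketched compactly. Below is a standard 4-connected A* with a Manhattan heuristic on a toy grid; the grid, start, and goal are invented, and the depth-to-grid projection is not shown.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2-D occupancy grid (0 = free, 1 = occupied), 4-connected,
    Manhattan-distance heuristic. Returns the cell path or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]    # (f, g, cell, parent)
    came, g = {}, {start: 0}
    while open_set:
        _, gc, cur, parent = heapq.heappop(open_set)
        if cur in came:                         # lazy deletion of stale entries
            continue
        came[cur] = parent
        if cur == goal:                         # reconstruct path via parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = gc + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None

grid = [[0, 0, 0, 0],          # toy occupancy grid from estimated depth
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = astar(grid, (0, 0), (3, 3))
print(path)
```

In the framework above, the grid cells would be populated from the monocular depth estimates before planning.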

24 pages, 3414 KB  
Article
RL-Based Vibration-Aware Path Planning for Mobile Robots’ Health and Safety
by Sathian Pookkuttath, Braulio Felix Gomez and Mohan Rajesh Elara
Mathematics 2025, 13(6), 913; https://doi.org/10.3390/math13060913 - 10 Mar 2025
Cited by 3 | Viewed by 1603
Abstract
Mobile robots are widely used, with research focusing on autonomy and functionality. However, long-term deployment requires their health and safety to be ensured. Terrain-induced vibrations accelerate wear. Hence, self-awareness and optimal path selection, avoiding such terrain anomalies, are essential. This study proposes an RL-based vibration-aware path planning framework, incorporating terrain roughness level classification, a vibration cost map, and an optimized vibration-aware path planning strategy. Terrain roughness is classified into four levels using IMU sensor data, achieving an average prediction accuracy of 97% with a 1D CNN model. A vibration cost map is created by assigning vibration costs to each predicted class on a 2D occupancy grid, incorporating obstacles, vibration-prone areas, and the robot’s pose for navigation. An RL model is applied that adapts to changing terrain for path planning. The RL agent uses an MDP-based policy and a deep RL training model with PPO, taking the vibration cost map as input. Finally, the RL-based vibration-aware path planning framework is validated through virtual and real-world experiments using an in-house mobile robot. The proposed approach is compared with the A* path planning algorithm using a performance index that assesses movement and the terrain roughness level. The results show that it effectively avoids rough areas while maintaining the shortest distance. Full article
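The vibration cost map described above assigns a cost to each grid cell from its predicted roughness class and marks obstacles as impassable. A minimal NumPy sketch of that mapping, with illustrative per-class costs rather than the paper's values:

```python
import numpy as np

def vibration_cost_map(roughness, obstacles, class_cost=(0.0, 1.0, 3.0, 9.0)):
    """Build a cost map from per-cell roughness classes (0-3, as predicted
    by a classifier) and an obstacle mask; obstacle cells get infinite
    cost. The class_cost values are illustrative, not the paper's."""
    costs = np.asarray(class_cost, dtype=float)[roughness]  # class -> cost
    costs[obstacles.astype(bool)] = np.inf                  # impassable cells
    return costs

roughness = np.array([[0, 1, 2],    # toy 1D-CNN predictions per cell
                      [0, 3, 1],
                      [0, 0, 0]])
obstacles = np.array([[0, 0, 0],
                      [0, 0, 1],
                      [0, 0, 0]])
cmap = vibration_cost_map(roughness, obstacles)
print(cmap)
```

A planner (the paper's PPO agent, or A* for comparison) then minimises accumulated cost over this map instead of plain path length.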

21 pages, 4434 KB  
Article
Scenario Generation and Autonomous Control for High-Precision Vineyard Operations
by Carlos Ruiz Mayo, Federico Cheli, Stefano Arrigoni, Francesco Paparazzo, Simone Mentasti and Marco Ezio Pezzola
AgriEngineering 2025, 7(2), 46; https://doi.org/10.3390/agriengineering7020046 - 18 Feb 2025
Viewed by 996
Abstract
Precision Farming (PF) in vineyards represents an innovative approach to vine cultivation that leverages the advantages of the latest technologies to optimize resource use and improve overall field management. This study investigates the application of PF techniques in a vineyard, focusing on sensor-based decision-making for autonomous driving. The goal of this research is to define a repeatable methodology for virtual testing of autonomous driving operations in a vineyard, considering realistic scenarios, efficient control architectures, and reliable sensors. The simulation scenario was created to replicate the conditions of a real vineyard, including elevation, banking profiles, and vine positioning. This provides a safe environment for training operators and testing tools such as sensors, algorithms, or controllers. This study also proposes an efficient control scheme, implemented as a state machine, to autonomously drive the tractor during two distinct phases of the navigation process: between rows and out of the field. The implementation demonstrates improvements in trajectory-following precision while reducing the intervention required by the farmer. The proposed system was extensively tested in a virtual environment, with a particular focus on evaluating the effects of micro and macro terrain irregularities on the results. A key feature of the control framework is its ability to achieve adequate accuracy while minimizing the number of sensors used, relying on a configuration of a Global Navigation Satellite System (GNSS) and an Inertial Measurement Unit (IMU) as a cost-effective solution. This minimal-sensor approach, which includes a state machine designed to seamlessly transition between in-field and out-of-field operations, balances performance and cost efficiency. The system was validated through a wide range of simulations, highlighting its robustness and adaptability to various terrain conditions. 
The main contributions of this work include the high fidelity of the simulation scenario, the efficient integration of the control algorithm and sensors for the two navigation phases, and the detailed analysis of terrain conditions. Together, these elements form a robust framework for testing autonomous tractor operations in vineyards. Full article
(This article belongs to the Section Sensors Technology and Precision Agriculture)

29 pages, 4682 KB  
Article
LSAF-LSTM-Based Self-Adaptive Multi-Sensor Fusion for Robust UAV State Estimation in Challenging Environments
by Mahammad Irfan, Sagar Dalai, Petar Trslic, James Riordan and Gerard Dooly
Machines 2025, 13(2), 130; https://doi.org/10.3390/machines13020130 - 9 Feb 2025
Cited by 3 | Viewed by 2911
Abstract
Unmanned aerial vehicle (UAV) state estimation is fundamental across applications like robot navigation, autonomous driving, virtual reality (VR), and augmented reality (AR). This research highlights the critical role of robust state estimation in ensuring safe and efficient autonomous UAV navigation, particularly in challenging environments. We propose a deep learning-based adaptive sensor fusion framework for UAV state estimation, integrating multi-sensor data from stereo cameras, an IMU, two 3D LiDARs, and GPS. The framework dynamically adjusts fusion weights in real time using a long short-term memory (LSTM) model, enhancing robustness under diverse conditions such as illumination changes, structureless environments, degraded GPS signals, or complete signal loss where traditional single-sensor SLAM methods often fail. Validated on an in-house integrated UAV platform and evaluated against high-precision RTK ground truth, the algorithm incorporates deep learning-predicted fusion weights into an optimization-based odometry pipeline. The system delivers robust, consistent, and accurate state estimation, outperforming state-of-the-art techniques. Experimental results demonstrate its adaptability and effectiveness across challenging scenarios, showcasing significant advancements in UAV autonomy and reliability through the synergistic integration of deep learning and sensor fusion. Full article
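The core fusion step described above — combining per-sensor state estimates with dynamically adjusted weights — reduces to a weighted average once the weights are normalised. In the paper the weights come from an LSTM; in the sketch below they are hand-set confidence scores passed through a softmax, and the sensor estimates are invented.

```python
import numpy as np

def fuse_states(estimates, scores):
    """Fuse per-sensor state estimates with softmax-normalised confidence
    scores. In the paper these scores are LSTM-predicted; here they are
    fixed for illustration."""
    w = np.exp(scores - np.max(scores))   # stable softmax
    w /= w.sum()
    return w, (w[:, None] * estimates).sum(axis=0)

# Hypothetical 3-D position estimates from stereo VO, LiDAR odometry, GPS.
estimates = np.array([[1.02, 0.98, 5.01],
                      [1.00, 1.00, 5.00],
                      [1.30, 0.70, 5.20]])   # GPS estimate is degraded
scores = np.array([2.0, 2.5, -1.0])          # low confidence for GPS
w, fused = fuse_states(estimates, scores)
print(np.round(w, 3), np.round(fused, 2))
```

Because the degraded GPS channel receives a near-zero weight, the fused state stays close to the agreeing visual and LiDAR estimates.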

20 pages, 8888 KB  
Article
E2-VINS: An Event-Enhanced Visual–Inertial SLAM Scheme for Dynamic Environments
by Jiafeng Huang, Shengjie Zhao and Lin Zhang
Appl. Sci. 2025, 15(3), 1314; https://doi.org/10.3390/app15031314 - 27 Jan 2025
Cited by 3 | Viewed by 3013
Abstract
Simultaneous Localization and Mapping (SLAM) technology has garnered significant interest in the robotic vision community over the past few decades. The rapid development of SLAM technology has resulted in its widespread application across various fields, including autonomous driving, robot navigation, and virtual reality. Although SLAM, especially Visual–Inertial SLAM (VI-SLAM), has made substantial progress, most classic algorithms in this field are designed based on the assumption that the observed scene is static. In complex real-world environments, the presence of dynamic objects such as pedestrians and vehicles can seriously affect the robustness and accuracy of such systems. Event cameras, recently introduced motion-sensitive biomimetic sensors, efficiently capture scene changes (referred to as “events”) with high temporal resolution, offering new opportunities to enhance VI-SLAM performance in dynamic environments. Integrating this kind of innovative sensor, we propose the first event-enhanced Visual–Inertial SLAM framework specifically designed for dynamic environments, termed E2-VINS. Specifically, the system uses a visual–inertial alignment strategy to estimate IMU biases and correct IMU measurements. The calibrated IMU measurements are used to assist in motion compensation, achieving spatiotemporal alignment of events. Event-based dynamicity metrics, which measure the dynamicity of each pixel, are then generated on these aligned events. Based on these metrics, the visual residual terms of different pixels are adaptively assigned weights, namely, dynamicity weights. Subsequently, E2-VINS jointly and alternately optimizes the system state (camera poses and map points) and dynamicity weights, effectively filtering out dynamic features through a soft-threshold mechanism.
Our scheme enhances the robustness of classic VI-SLAM against dynamic features, significantly improving VI-SLAM performance in dynamic environments and yielding an average improvement of 1.884% in the mean position error compared to state-of-the-art methods. The superior performance of E2-VINS is validated through both qualitative and quantitative experimental results. To ensure that our results are fully reproducible, all the relevant data and codes have been released. Full article
(This article belongs to the Special Issue Advances in Audio/Image Signals Processing)
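The soft-threshold down-weighting of dynamic pixels described above can be illustrated with a smooth sigmoid gate: pixels with a low dynamicity metric keep a residual weight near 1, while highly dynamic pixels are suppressed. The threshold and steepness values below are illustrative, not the paper's parameters.

```python
import numpy as np

def dynamicity_weights(metric, tau=0.5, k=10.0):
    """Map a per-pixel event-based dynamicity metric to a residual weight
    in (0, 1) via a smooth soft threshold: static pixels (metric << tau)
    keep weight ~1, dynamic pixels (metric >> tau) are down-weighted.
    tau and k are illustrative, not the paper's values."""
    return 1.0 / (1.0 + np.exp(k * (metric - tau)))

metric = np.array([0.05, 0.3, 0.5, 0.8, 0.95])  # invented per-pixel metrics
wts = dynamicity_weights(metric)
print(np.round(wts, 3))
```

Each visual residual would then be multiplied by its weight inside the joint optimization, so moving pedestrians contribute almost nothing to the pose estimate.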

18 pages, 11743 KB  
Article
The Design and Validation of an Open-Palm Data Glove for Precision Finger and Wrist Tracking
by Olivia Hosie, Mats Isaksson, John McCormick, Oren Tirosh and Chrys Hensman
Sensors 2025, 25(2), 367; https://doi.org/10.3390/s25020367 - 9 Jan 2025
Cited by 1 | Viewed by 2734
Abstract
Wearable motion capture gloves enable the precise analysis of hand and finger movements for a variety of uses, including robotic surgery, rehabilitation, and most commonly, virtual augmentation. However, many motion capture gloves restrict natural hand movement with a closed-palm design, including fabric over the palm and fingers. In order to alleviate slippage, improve comfort, reduce sizing issues, and eliminate movement restrictions, this paper presents a new low-cost data glove with an innovative open-palm and finger-free design. The new design improves usability and overall functionality by addressing the limitations of traditional closed-palm designs. It is especially beneficial in capturing movements in fields such as physical therapy and robotic surgery. The new glove incorporates resistive flex sensors (RFSs) at each finger and an inertial measurement unit (IMU) at the wrist joint to measure wrist flexion, extension, ulnar and radial deviation, and rotation. Initially, the sensors were tested individually for drift, synchronisation delays, and linearity. The results show a drift of 6.60°/h in the IMU and no drift in the RFSs. There was a 0.06 s delay in the data captured by the IMU compared to the RFSs. The glove’s performance was tested with a collaborative robot test setup. In static conditions, it was found that the IMU had a worst-case error across three trials of 7.01° and a mean absolute error (MAE) averaged over three trials of 4.85°, while the RFSs had a worst-case error of 3.77° and an MAE of 1.25° averaged over all five RFSs used. There was no clear correlation between measurement error and speed. Overall, the new glove design proved to accurately measure joint angles. Full article

20 pages, 11540 KB  
Article
Autonomous Landing Strategy for Micro-UAV with Mirrored Field-of-View Expansion
by Xiaoqi Cheng, Xinfeng Liang, Xiaosong Li, Zhimin Liu and Haishu Tan
Sensors 2024, 24(21), 6889; https://doi.org/10.3390/s24216889 - 27 Oct 2024
Viewed by 2107
Abstract
Positioning and autonomous landing are key technologies for implementing autonomous flight missions across various fields in unmanned aerial vehicle (UAV) systems. This research proposes a visual positioning method based on mirrored field-of-view expansion, providing a visual-based autonomous landing strategy for quadrotor micro-UAVs (MAVs). The forward-facing camera of the MAV obtains a top view through a view transformation lens while retaining the original forward view. Subsequently, the MAV camera captures the ground landing markers in real-time, and the pose of the MAV camera relative to the landing marker is obtained through a virtual-real image conversion technique and the R-PnP pose estimation algorithm. Then, using a camera-IMU external parameter calibration method, the pose transformation relationship between the UAV camera and the MAV body IMU is determined, thereby obtaining the position of the landing marker’s center point relative to the MAV’s body coordinate system. Finally, the ground station sends guidance commands to the UAV based on the position information to execute the autonomous landing task. The indoor and outdoor landing experiments with the DJI Tello MAV demonstrate that the proposed forward-facing camera mirrored field-of-view expansion method and landing marker detection and guidance algorithm successfully enable autonomous landing with an average accuracy of 0.06 m. The results show that this strategy meets the high-precision landing requirements of MAVs. Full article
(This article belongs to the Section Navigation and Positioning)

25 pages, 1761 KB  
Review
Efficacy of Sensor-Based Training Using Exergaming or Virtual Reality in Patients with Chronic Low Back Pain: A Systematic Review
by Giovanni Morone, Foivos Papaioannou, Alberto Alberti, Irene Ciancarelli, Mirjam Bonanno and Rocco Salvatore Calabrò
Sensors 2024, 24(19), 6269; https://doi.org/10.3390/s24196269 - 27 Sep 2024
Cited by 9 | Viewed by 6903
Abstract
In its chronic and non-specific form, low back pain is experienced by a large percentage of the population; its persistence impacts the quality of life and increases costs to the health care system. In recent years, the scientific literature has highlighted how treatment based on assessment and functional recovery is effective through IMU technology with biofeedback or exergaming as part of the tools available to assist the evaluation and treatment of these patients, who present not only with symptoms affecting the lumbar spine but often also incorrect postural attitudes. Aim: to evaluate the impact of technology based on inertial sensors with biofeedback or exergaming in patients with chronic non-specific low back pain. A systematic review of clinical studies obtained from the PubMed, Scopus, Science Direct, and Web of Science databases from 1 January 2016 to 1 July 2024 was conducted, with the search string developed from keywords and combinations of terms with Boolean AND/OR operators; inclusion and exclusion criteria were then applied to the retrieved articles. The publication selection procedure is represented with a PRISMA diagram, the risk of bias was assessed with the RoB 2 scale, and methodological validity with the PEDro scale. Eleven articles were included, all RCTs, and most of the publications use technology with exergaming over about 1–2 months. Of the outcomes measured, improvements were reported in pain, disability, and function, as well as in neuropsychological outcomes related to experiencing the pathology. The results obtained support the efficacy of technology based on exergames and inertial sensors in patients with chronic non-specific low back pain. Further clinical studies are required to achieve more uniformity in the proposed treatment so as to create a common guideline for health care providers. Full article

10 pages, 1512 KB  
Communication
Errors in Estimating Lower-Limb Joint Angles and Moments during Walking Based on Pelvic Accelerations: Influence of Virtual Inertial Measurement Unit’s Frontal Plane Misalignment
by Takuma Inai, Yoshiyuki Kobayashi, Motoki Sudo, Yukari Yamashiro and Tomoya Ueda
Sensors 2024, 24(16), 5096; https://doi.org/10.3390/s24165096 - 6 Aug 2024
Cited by 1 | Viewed by 2200
Abstract
The accurate estimation of lower-limb joint angles and moments is crucial for assessing the progression of orthopedic diseases, with continuous monitoring during daily walking being essential. An inertial measurement unit (IMU) attached to the lower back has been used for this purpose, but the effect of IMU misalignment in the frontal plane on estimation accuracy remains unclear. This study investigated the impact of virtual IMU misalignment in the frontal plane on estimation errors of lower-limb joint angles and moments during walking. Motion capture data were recorded from 278 healthy adults walking at a comfortable speed. An estimation model was developed using principal component analysis and linear regression, with pelvic accelerations as independent variables and lower-limb joint angles and moments as dependent variables. Virtual IMU misalignments of −20°, −10°, 0°, 10°, and 20° in the frontal plane (five conditions) were simulated. The joint angles and moments were estimated and compared across these conditions. The results indicated that increasing virtual IMU misalignment in the frontal plane led to greater errors in the estimation of pelvis and hip angles, particularly in the frontal plane. For misalignments of ±20°, the errors in pelvis and hip angles were significantly amplified compared to well-aligned conditions. These findings underscore the importance of accounting for IMU misalignment when estimating these variables. Full article
(This article belongs to the Special Issue Wearable Sensors for Biomechanics Applications—2nd Edition)
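The misalignment simulation and estimation pipeline described in the abstract can be sketched as follows: a frontal-plane misalignment corresponds to a rotation of the mediolateral and vertical acceleration axes about the anterior-posterior axis, and the estimation model combines PCA with linear regression. This is a minimal numpy-only sketch of that general approach, not the authors' implementation; the function names, component count, and feature layout are illustrative assumptions.

```python
import numpy as np

def rotate_frontal_plane(acc, angle_deg):
    """Simulate a virtual IMU frontal-plane misalignment by rotating
    Nx3 accelerations about the anterior-posterior (x) axis."""
    th = np.deg2rad(angle_deg)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(th), -np.sin(th)],
                  [0.0, np.sin(th), np.cos(th)]])
    return acc @ R.T

def fit_pca_regression(X, Y, n_components):
    """PCA on acceleration features X, then linear regression to targets Y
    (e.g., joint angles/moments); returns the fitted model parameters."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T            # principal directions
    Z = Xc @ P                         # PCA scores
    A = np.c_[np.ones(len(Z)), Z]      # design matrix with intercept
    W, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return mean, P, W

def predict(X, mean, P, W):
    Z = (X - mean) @ P
    return np.c_[np.ones(len(Z)), Z] @ W
```

Fitting on well-aligned data and then predicting from accelerations passed through `rotate_frontal_plane` with, say, ±20° reproduces the kind of misalignment-induced error the study quantifies.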

17 pages, 4444 KB  
Article
A Study on Graph Optimization Method for GNSS/IMU Integrated Navigation System Based on Virtual Constraints
by Haiyang Qiu, Yun Zhao, Hui Wang and Lei Wang
Sensors 2024, 24(13), 4419; https://doi.org/10.3390/s24134419 - 8 Jul 2024
Cited by 1 | Viewed by 3038
Abstract
In GNSS/IMU integrated navigation systems, factors like satellite occlusion and non-line-of-sight propagation can degrade satellite positioning accuracy, thereby impacting the overall navigation results. To tackle this challenge and leverage historical pseudorange information effectively, this paper proposes a graph optimization-based GNSS/IMU model with virtual constraints. These virtual constraints in the graph model are derived from the satellite’s position at the previous time step, the rate of change of the pseudoranges, and ephemeris data. A virtual constraint serves as an alternative measurement for an individual satellite when its signal is anomalous, thereby ensuring the integrity and continuity of the graph optimization model. Additionally, this paper analyzes the graph optimization model based on these virtual constraints, comparing it with traditional GNSS/IMU and SLAM graph models, and then examines the marginalization of the graph model involving virtual constraints. The experiment was conducted on a set of real-world data, and the results of the proposed method were compared with tightly coupled Kalman filtering and the original graph optimization method. In instantaneous performance testing, the method keeps the RMSE within 5% of that obtained with real pseudorange measurements, while in a continuous performance test with no available GNSS signal, the method shows approximately a 30% improvement in horizontal RMSE accuracy over the traditional graph optimization method during a 10-second period. This demonstrates the method’s potential for practical applications. Full article
(This article belongs to the Special Issue INS/GNSS Integrated Navigation Systems)
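The virtual-constraint idea in the abstract can be illustrated numerically: when a satellite's signal drops out, the last valid pseudorange is propagated forward with its measured rate of change (from Doppler), and the propagated value enters the graph as a residual against the geometric range plus receiver clock bias. This is a hedged first-order sketch of that concept, not the paper's formulation; the function names and the simple linear propagation are illustrative assumptions.

```python
import numpy as np

def virtual_pseudorange(rho_prev, rho_dot_prev, dt):
    """First-order propagation of the last valid pseudorange using its
    rate of change, used as a virtual measurement during signal outage."""
    return rho_prev + rho_dot_prev * dt

def pseudorange_residual(rx_pos, clk_bias, sat_pos, rho_virtual):
    """Residual of a virtual-constraint factor in the graph:
    predicted range (geometry + receiver clock bias, in meters)
    minus the propagated virtual pseudorange."""
    predicted = np.linalg.norm(np.asarray(sat_pos) - np.asarray(rx_pos)) + clk_bias
    return predicted - rho_virtual
```

In a full factor-graph back end, this residual would be weighted by an inflated covariance (the virtual measurement is less trustworthy than a real one) and marginalized together with the IMU preintegration factors as the sliding window advances.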
