Search Results (493)

Search Parameters:
Keywords = motion perception

24 pages, 5578 KiB  
Article
Adaptive Covariance Matrix for UAV-Based Visual–Inertial Navigation Systems Using Gaussian Formulas
by Yangzi Cong, Wenbin Su, Nan Jiang, Wenpeng Zong, Long Li, Yan Xu, Tianhe Xu and Paipai Wu
Sensors 2025, 25(15), 4745; https://doi.org/10.3390/s25154745 (registering DOI) - 1 Aug 2025
Abstract
In a variety of UAV applications, visual–inertial navigation systems (VINSs) play a crucial role in providing accurate positioning and navigation solutions. However, traditional VINSs struggle to adapt flexibly to varying environmental conditions due to fixed covariance matrix settings. This limitation becomes especially acute during high-speed drone operations, where motion blur and fluctuating image clarity can significantly compromise navigation accuracy and system robustness. To address these issues, we propose an innovative adaptive covariance matrix estimation method for UAV-based VINS using Gaussian formulas. Our approach enhances the accuracy and robustness of the navigation system by dynamically adjusting the covariance matrix according to image quality. Using the Laplacian operator, detailed assessments of image blur are performed, providing precise perception of image quality. Based on these assessments, a novel mechanism is introduced that dynamically adjusts the visual covariance matrix through a Gaussian model according to the clarity of images in the current environment. Extensive simulation experiments on the EuRoC and TUM VI datasets, as well as field tests, validate our method, demonstrating significant improvements in drone navigation accuracy in scenarios with motion blur. Our algorithm achieves significantly higher accuracy than the well-known VINS-Mono framework, outperforming it by 18.18% on average, with an RMS improvement rate of 65.66% on the F1 dataset and 41.74% on F2 in the outdoor field tests. Full article
(This article belongs to the Section Navigation and Positioning)
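The blur-adaptive covariance idea above can be sketched in a few lines: score each frame's sharpness with the variance of the Laplacian, then map low sharpness to an inflated measurement-noise multiplier through a Gaussian falloff. This is only an illustration under assumed names and constants (`sigma`, `max_scale`, `visual_noise_scale`), not the authors' implementation.

```python
import numpy as np

# 3x3 Laplacian kernel, a standard sharpness probe.
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def laplacian_variance(img):
    """Variance of the Laplacian response — low for blurred images."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

def visual_noise_scale(img, sigma=200.0, base=1.0, max_scale=10.0):
    """Gaussian mapping from sharpness to a measurement-noise multiplier:
    blurry frames push the visual covariance toward max_scale."""
    lv = laplacian_variance(img)
    return base + (max_scale - base) * np.exp(-lv / (2.0 * sigma**2))
```

A perfectly flat (fully blurred) frame has zero Laplacian variance and therefore receives the maximum noise multiplier, while a high-contrast frame stays near the base value.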

22 pages, 1470 KiB  
Article
An NMPC-ECBF Framework for Dynamic Motion Planning and Execution in Vision-Based Human–Robot Collaboration
by Dianhao Zhang, Mien Van, Pantelis Sopasakis and Seán McLoone
Machines 2025, 13(8), 672; https://doi.org/10.3390/machines13080672 (registering DOI) - 1 Aug 2025
Abstract
To enable safe and effective human–robot collaboration (HRC) in smart manufacturing, it is critical to seamlessly integrate sensing, cognition, and prediction into the robot controller for real-time awareness, response, and communication inside a heterogeneous environment (robots, humans, and equipment). The proposed approach takes advantage of the prediction capabilities of nonlinear model predictive control (NMPC) to execute safe path planning based on feedback from a vision system. To satisfy the requirements of real-time path planning, an embedded solver based on a penalty method is applied. However, due to tight sampling times, NMPC solutions are approximate; therefore, the safety of the system cannot be guaranteed. To address this, we formulate a novel safety-critical paradigm that uses an exponential control barrier function (ECBF) as a safety filter. Several common human–robot assembly subtasks have been integrated into a real-life HRC assembly task to validate the performance of the proposed controller and to investigate whether integrating human pose prediction can help with safe and efficient collaboration. The robot uses OptiTrack cameras for perception and dynamically generates collision-free trajectories to the predicted target interactive position. Results for a number of different configurations confirm the efficiency of the proposed motion planning and execution framework, with a 23.2% reduction in execution time achieved for the HRC task compared to an implementation without human motion prediction. Full article
(This article belongs to the Special Issue Visual Measurement and Intelligent Robotic Manufacturing)
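The barrier-function safety filter can be illustrated in a far simpler setting than the paper's NMPC-ECBF pipeline: a first-order CBF on a single integrator, where the minimally invasive filter has a closed form instead of requiring a QP. The gains, names, and the Euler rollout below are illustrative assumptions, not the authors' controller.

```python
def cbf_filter(x, u_nom, x_min=0.0, alpha=1.0):
    """Safety filter for the single integrator x' = u with barrier
    h(x) = x - x_min. The CBF condition h' >= -alpha * h reduces to
    u >= -alpha * (x - x_min), so the filter simply clamps u_nom."""
    return max(u_nom, -alpha * (x - x_min))

def simulate(x0, u_nom, steps=500, dt=0.01):
    """Euler rollout of the filtered closed loop."""
    xs, x = [x0], x0
    for _ in range(steps):
        x = x + dt * cbf_filter(x, u_nom)
        xs.append(x)
    return xs
```

Even with a nominal command that drives straight through the constraint, the filtered trajectory decays toward the boundary without crossing it — the "safety filter" behavior the abstract describes, in miniature.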

19 pages, 1517 KiB  
Article
Continuous Estimation of sEMG-Based Upper-Limb Joint Angles in the Time–Frequency Domain Using a Scale Temporal–Channel Cross-Encoder
by Xu Han, Haodong Chen, Xinyu Cheng and Ping Zhao
Actuators 2025, 14(8), 378; https://doi.org/10.3390/act14080378 (registering DOI) - 31 Jul 2025
Abstract
Surface electromyographic (sEMG) signal-driven joint-angle estimation plays a critical role in intelligent rehabilitation systems, as its accuracy directly affects both control performance and rehabilitation efficacy. This study proposes a continuous elbow joint angle estimation method based on time–frequency domain analysis. Raw sEMG signals were processed using the Short-Time Fourier Transform (STFT) to extract time–frequency features. A Scale Temporal–Channel Cross-Encoder (STCCE) network was developed, integrating temporal and channel attention mechanisms to enhance feature representation and establish the mapping from sEMG signals to elbow joint angles. The model was trained and evaluated on a dataset comprising approximately 103,000 samples collected from seven subjects. In the single-subject test set, the proposed STCCE model achieved an average Mean Absolute Error (MAE) of 2.96±0.24, Root Mean Square Error (RMSE) of 4.41±0.45, Coefficient of Determination (R²) of 0.9924±0.0020, and Correlation Coefficient (CC) of 0.9963±0.0010. It achieved a MAE of 3.30, RMSE of 4.75, R² of 0.9915, and CC of 0.9962 on the multi-subject test set, and an average MAE of 15.53±1.80, RMSE of 21.72±2.85, R² of 0.8141±0.0540, and CC of 0.9100±0.0306 on the inter-subject test set. These results demonstrated that the STCCE model enabled accurate joint-angle estimation in the time–frequency domain, contributing to better motion intent perception for upper-limb rehabilitation. Full article
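The STFT front end described above can be sketched with a plain sliding-window FFT. The window and hop sizes are placeholders, and `dominant_freq` is only a sanity-check helper for the sketch — it is not part of the STCCE model.

```python
import numpy as np

def stft_features(sig, fs, win=64, hop=32):
    """Hann-windowed magnitude spectrogram: rows = frames, cols = frequency bins."""
    w = np.hanning(win)
    frames = [sig[i:i + win] * w for i in range(0, len(sig) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

def dominant_freq(sig, fs, win=64, hop=32):
    """Frequency of the strongest bin, averaged over frames (sanity check only)."""
    spec = stft_features(sig, fs, win, hop)
    return spec.mean(axis=0).argmax() * fs / win
```

The resulting (frames × bins) magnitude array is the kind of time–frequency feature map a downstream network would consume.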

15 pages, 2290 KiB  
Article
Research on Automatic Detection Method of Coil in Unmanned Reservoir Area Based on LiDAR
by Yang Liu, Meiqin Liang, Xiaozhan Li, Xuejun Zhang, Junqi Yuan and Dong Xu
Processes 2025, 13(8), 2432; https://doi.org/10.3390/pr13082432 - 31 Jul 2025
Abstract
The detection of coils in reservoir areas is part of the environmental perception technology of unmanned cranes. To improve the ability of unmanned cranes to perceive environmental information in reservoir areas, an automatic coil detection method based on two-dimensional LiDAR dynamic scanning is proposed, which detects the position and attitude of coils in reservoir areas. The algorithm reconstructs a 3D point cloud map by fusing LiDAR point cloud data with the motion position information of intelligent cranes. Additionally, a processing method based on histogram statistical analysis and 3D normal curvature estimation is proposed to solve the problems of over-segmentation and under-segmentation in 3D point cloud segmentation. Finally, for the segmented point cloud clusters, coil models are fitted by the RANSAC method to identify their position and attitude. The accuracy, recall, and F1 score of the detection model are all higher than 0.91, indicating that the model has a good recognition effect. Full article
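RANSAC model fitting of the kind used for the coils can be illustrated in 2D: fitting a circle to a noisy cross-section contaminated with outliers. The full method fits coil models to 3D point cloud clusters; the sample count, iteration budget, and inlier tolerance below are all illustrative assumptions.

```python
import numpy as np

def fit_circle_3pts(p):
    """Circle through three points: |x - c|^2 = r^2 is linear in c and r^2 - |c|^2."""
    A = np.column_stack([2 * p, np.ones(3)])
    b = (p ** 2).sum(axis=1)
    sol = np.linalg.solve(A, b)
    c = sol[:2]
    return c, np.sqrt(sol[2] + c @ c)

def ransac_circle(pts, iters=200, tol=0.05, rng=None):
    """Keep the 3-point circle hypothesis with the most inliers."""
    rng = rng or np.random.default_rng(0)
    best, best_inliers = None, 0
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        try:
            c, r = fit_circle_3pts(sample)
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample
        d = np.abs(np.linalg.norm(pts - c, axis=1) - r)
        n = int((d < tol).sum())
        if n > best_inliers:
            best, best_inliers = (c, r), n
    return best
```

The same consensus loop generalizes directly to cylinder models over 3D clusters: only the minimal sample and the point-to-model distance change.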

15 pages, 2538 KiB  
Article
Dynamic Obstacle Perception Technology for UAVs Based on LiDAR
by Wei Xia, Feifei Song and Zimeng Peng
Drones 2025, 9(8), 540; https://doi.org/10.3390/drones9080540 (registering DOI) - 31 Jul 2025
Abstract
With the widespread application of small quadcopter drones in the military and civilian fields, the security challenges they face are gradually becoming apparent. Especially in dynamic environments, the rapidly changing conditions make the flight of drones more complex. To address the computational limitations of small quadcopter drones and meet the demands of obstacle perception in dynamic environments, a LiDAR-based obstacle perception algorithm is proposed. First, accumulation, filtering, and clustering processes are carried out on the LiDAR point cloud data to complete the segmentation and extraction of point cloud obstacles. Then, an obstacle motion/static discrimination algorithm based on three-dimensional point motion attributes is developed to classify dynamic and static point clouds. Finally, oriented bounding box (OBB) detection is employed to simplify the representation of the spatial position and shape of dynamic point cloud obstacles, and motion estimation is achieved by tracking the OBB parameters using a Kalman filter. Simulation experiments demonstrate that this method can ensure a dynamic obstacle detection frequency of 10 Hz and successfully detect multiple dynamic obstacles in the environment. Full article
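Tracking OBB parameters with a Kalman filter can be sketched with a constant-velocity model over the box centre (the full method would also track extent and orientation). The process and measurement noise settings are placeholder assumptions.

```python
import numpy as np

def make_cv_kalman(dt, q=0.1, r=0.5):
    """Constant-velocity model over an OBB centre: state (x, y, vx, vy)."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.0]])
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.0]])   # we only measure position
    return F, H, q * np.eye(4), r * np.eye(2)

def kf_step(x, P, z, F, H, Q, R):
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)                        # update with OBB centre z
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Feeding the filter successive box centres yields the velocity estimate used for motion estimation of dynamic obstacles.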

31 pages, 11649 KiB  
Article
Development of Shunt Connection Communication and Bimanual Coordination-Based Smart Orchard Robot
by Bin Yan and Xiameng Li
Agronomy 2025, 15(8), 1801; https://doi.org/10.3390/agronomy15081801 - 25 Jul 2025
Abstract
This research addresses the enhancement of operational efficiency in apple-picking robots through the design of a bimanual spatial configuration enabling obstacle avoidance in contemporary orchard environments. A parallel coordinated harvesting paradigm for dual-arm systems was introduced, leading to the construction and validation of a six-degree-of-freedom bimanual apple-harvesting robot. Leveraging the kinematic architecture of the AUBO-i5 manipulator, three spatial layout configurations for dual-arm systems were evaluated, culminating in the adoption of a “workspace-overlapping Type B” arrangement. A functional prototype of the bimanual apple-harvesting system was subsequently fabricated. The study further involved developing control architectures for two end-effector types: a compliant gripper and a vacuum-based suction mechanism, with corresponding operational protocols established. A networked communication framework for parallel arm coordination was implemented via Ethernet switching technology, enabling both independent and synchronized bimanual operation. Additionally, an intersystem communication protocol was formulated to integrate the robotic vision system with the dual-arm control architecture, establishing a modular parallel execution model between visual perception and motion control modules. A coordinated bimanual harvesting strategy was formulated, incorporating real-time trajectory and pose monitoring of the manipulators. Kinematic simulations were executed to validate the feasibility of this strategy. Field evaluations in modern Red Fuji apple orchards assessed multidimensional harvesting performance, revealing 85.6% and 80% success rates for the suction and gripper-based arms, respectively. Single-fruit retrieval averaged 7.5 s per arm, yielding an overall system efficiency of 3.75 s per fruit. 
These findings advance the technological foundation for intelligent apple-harvesting systems, offering methodologies for the evolution of precision agronomic automation. Full article
(This article belongs to the Special Issue Smart Farming: Advancing Techniques for High-Value Crops)

25 pages, 8282 KiB  
Article
Performance Evaluation of Robotic Harvester with Integrated Real-Time Perception and Path Planning for Dwarf Hedge-Planted Apple Orchard
by Tantan Jin, Xiongzhe Han, Pingan Wang, Yang Lyu, Eunha Chang, Haetnim Jeong and Lirong Xiang
Agriculture 2025, 15(15), 1593; https://doi.org/10.3390/agriculture15151593 - 24 Jul 2025
Abstract
Apple harvesting faces increasing challenges owing to rising labor costs and the limited seasonal workforce availability, highlighting the need for robotic harvesting solutions in precision agriculture. This study presents a 6-DOF robotic arm system designed for harvesting in dwarf hedge-planted orchards, featuring a lightweight perception module, a task-adaptive motion planner, and an adaptive soft gripper. A lightweight approach was introduced by integrating the Faster module within the C2f module of the You Only Look Once (YOLO) v8n architecture to optimize the real-time apple detection efficiency. For motion planning, a Dynamic Temperature Simplified Transition Adaptive Cost Bidirectional Transition-Based Rapidly Exploring Random Tree (DSA-BiTRRT) algorithm was developed, demonstrating significant improvements in the path planning performance. The adaptive soft gripper was evaluated for its detachment and load-bearing capacities. Field experiments revealed that the direct-pull method at 150 mN·m torque outperformed the rotation-pull method at both 100 mN·m and 150 mN·m. A custom control system integrating all components was validated in partially controlled orchards, where obstacle clearance and thinning were conducted to ensure operation safety. Tests conducted on 80 apples showed a 52.5% detachment success rate and a 47.5% overall harvesting success rate, with average detachment and full-cycle times of 7.7 s and 15.3 s per apple, respectively. These results highlight the system’s potential for advancing robotic fruit harvesting and contribute to the ongoing development of autonomous agricultural technologies. Full article
(This article belongs to the Special Issue Agricultural Machinery and Technology for Fruit Orchard Management)

22 pages, 4827 KiB  
Article
Development of a Multifunctional Mobile Manipulation Robot Based on Hierarchical Motion Planning Strategy and Hybrid Grasping
by Yuning Cao, Xianli Wang, Zehao Wu and Qingsong Xu
Robotics 2025, 14(7), 96; https://doi.org/10.3390/robotics14070096 - 15 Jul 2025
Abstract
A mobile manipulation robot combines the navigation capability of unmanned ground vehicles and manipulation advantage of robotic arms. However, the development of a mobile manipulation robot is challenging due to the integration requirement of numerous heterogeneous subsystems. In this paper, we propose a multifunctional mobile manipulation robot by integrating perception, mapping, navigation, object detection, and grasping functions into a seamless workflow to conduct search-and-fetch tasks. To realize navigation and collision avoidance in complex environments, a new hierarchical motion planning strategy is proposed by fusing global and local planners. Control Lyapunov Function (CLF) and Control Barrier Function (CBF) are employed to realize path tracking and to guarantee safety during navigation. The convolutional neural network and the gripper’s kinematic constraints are adopted to construct a learning-optimization hybrid grasping algorithm to generate precise grasping poses. The efficiency of the developed mobile manipulation robot is demonstrated by performing indoor fetching experiments, showcasing its promising capabilities in real-world applications. Full article
(This article belongs to the Section Sensors and Control in Robotics)

19 pages, 26396 KiB  
Article
Development of a Networked Multi-Participant Driving Simulator with Synchronized EEG and Telemetry for Traffic Research
by Poorendra Ramlall, Ethan Jones and Subhradeep Roy
Systems 2025, 13(7), 564; https://doi.org/10.3390/systems13070564 - 10 Jul 2025
Abstract
This paper presents a multi-participant driving simulation framework designed to support traffic experiments involving the simultaneous collection of vehicle telemetry and cognitive data. The system integrates motion-enabled driving cockpits, high-fidelity steering and pedal systems, immersive visual displays (monitor or virtual reality), and the Assetto Corsa simulation engine. To capture cognitive states, dry-electrode EEG headsets are used alongside a custom-built software tool that synchronizes EEG signals with vehicle telemetry across multiple drivers. The primary contribution of this work is the development of a modular, scalable, and customizable experimental platform with robust data synchronization, enabling the coordinated collection of neural and telemetry data in multi-driver scenarios. The synchronization software developed through this study is freely available to the research community. This architecture supports the study of human–human interactions by linking driver actions with corresponding neural activity across a range of driving contexts. It provides researchers with a powerful tool to investigate perception, decision-making, and coordination in dynamic, multi-participant traffic environments. Full article
(This article belongs to the Special Issue Modelling and Simulation of Transportation Systems)
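The core of such synchronization — putting two independently sampled streams on one clock — can be sketched as resampling one stream onto the other's timestamps. The described tool's actual format and alignment policy are not given here, so this is only an assumed linear-interpolation scheme.

```python
import numpy as np

def align_on_eeg_clock(t_eeg, eeg, t_tel, tel):
    """Resample telemetry onto the EEG timestamps by linear interpolation,
    yielding one synchronized row (time, eeg, telemetry) per EEG sample."""
    return np.column_stack([t_eeg, eeg, np.interp(t_eeg, t_tel, tel)])
```

Because `np.interp` clamps outside the telemetry range, a real pipeline would also trim the streams to their overlapping time window before alignment.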

40 pages, 2250 KiB  
Review
Comprehensive Comparative Analysis of Lower Limb Exoskeleton Research: Control, Design, and Application
by Sk Hasan and Nafizul Alam
Actuators 2025, 14(7), 342; https://doi.org/10.3390/act14070342 - 9 Jul 2025
Abstract
This review provides a comprehensive analysis of recent advancements in lower limb exoskeleton systems, focusing on applications, control strategies, hardware architecture, sensing modalities, human-robot interaction, evaluation methods, and technical innovations. The study spans systems developed for gait rehabilitation, mobility assistance, terrain adaptation, pediatric use, and industrial support. Applications range from sit-to-stand transitions and post-stroke therapy to balance support and real-world navigation. Control approaches vary from traditional impedance and fuzzy logic models to advanced data-driven frameworks, including reinforcement learning, recurrent neural networks, and digital twin-based optimization. These controllers support personalized and adaptive interaction, enabling real-time intent recognition, torque modulation, and gait phase synchronization across different users and tasks. Hardware platforms include powered multi-degree-of-freedom exoskeletons, passive assistive devices, compliant joint systems, and pediatric-specific configurations. Innovations in actuator design, modular architecture, and lightweight materials support increased usability and energy efficiency. Sensor systems integrate EMG, EEG, IMU, vision, and force feedback, supporting multimodal perception for motion prediction, terrain classification, and user monitoring. Human–robot interaction strategies emphasize safe, intuitive, and cooperative engagement. Controllers are increasingly user-specific, leveraging biosignals and gait metrics to tailor assistance. Evaluation methodologies include simulation, phantom testing, and human–subject trials across clinical and real-world environments, with performance measured through joint tracking accuracy, stability indices, and functional mobility scores. 
Overall, the review highlights the field’s evolution toward intelligent, adaptable, and user-centered systems, offering promising solutions for rehabilitation, mobility enhancement, and assistive autonomy in diverse populations. Following a detailed review of current developments, strategic recommendations are made to enhance and evolve existing exoskeleton technologies. Full article
(This article belongs to the Section Actuators for Robotics)

29 pages, 1184 KiB  
Article
Perception-Based H.264/AVC Video Coding for Resource-Constrained and Low-Bit-Rate Applications
by Lih-Jen Kau, Chin-Kun Tseng and Ming-Xian Lee
Sensors 2025, 25(14), 4259; https://doi.org/10.3390/s25144259 - 8 Jul 2025
Abstract
With the rapid expansion of Internet of Things (IoT) and edge computing applications, efficient video transmission under constrained bandwidth and limited computational resources has become increasingly critical. In such environments, perception-based video coding plays a vital role in maintaining acceptable visual quality while minimizing bit rate and processing overhead. Although newer video coding standards have emerged, H.264/AVC remains the dominant compression format in many deployed systems, particularly in commercial CCTV surveillance, due to its compatibility, stability, and widespread hardware support. Motivated by these practical demands, this paper proposes a perception-based video coding algorithm specifically tailored for low-bit-rate H.264/AVC applications. By targeting regions most relevant to the human visual system, the proposed method enhances perceptual quality while optimizing resource usage, making it particularly suitable for embedded systems and bandwidth-limited communication channels. In general, regions containing human faces and those exhibiting significant motion are of primary importance for human perception and should receive higher bit allocation to preserve visual quality. To this end, macroblocks (MBs) containing human faces are detected using the Viola–Jones algorithm, which leverages AdaBoost for feature selection and a cascade of classifiers for fast and accurate detection. This approach is favored over deep learning-based models due to its low computational complexity and real-time capability, making it ideal for latency- and resource-constrained IoT and edge environments. Motion-intensive macroblocks were identified by comparing their motion intensity against the average motion level of preceding reference frames. Based on these criteria, a dynamic quantization parameter (QP) adjustment strategy was applied to assign finer quantization to perceptually important regions of interest (ROIs) in low-bit-rate scenarios. 
The experimental results show that the proposed method achieves superior subjective visual quality and objective Peak Signal-to-Noise Ratio (PSNR) compared to the standard JM software and other state-of-the-art algorithms under the same bit rate constraints. Moreover, the approach introduces only a marginal increase in computational complexity, highlighting its efficiency. Overall, the proposed algorithm offers an effective balance between visual quality and computational performance, making it well suited for video transmission in bandwidth-constrained, resource-limited IoT and edge computing environments. Full article
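The ROI-driven QP adjustment can be sketched as a per-macroblock map: face macroblocks (from the detector) and macroblocks whose motion exceeds the frame average receive a finer (lower) QP. The base QP and offset below are illustrative assumptions; only the 0–51 QP range is fixed by H.264/AVC.

```python
import numpy as np

def roi_qp_map(mb_motion, face_mask, base_qp=32, delta=6, qp_min=0, qp_max=51):
    """Per-macroblock QP grid: ROI macroblocks (faces, or motion above the
    frame-average motion) get base_qp - delta; the rest keep base_qp."""
    roi = face_mask | (mb_motion > mb_motion.mean())
    qp = np.full(mb_motion.shape, base_qp)
    qp[roi] -= delta
    return np.clip(qp, qp_min, qp_max)
```

Under a fixed bit budget, the bits saved by coarser quantization of the background are effectively reallocated to the perceptually important regions.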

13 pages, 531 KiB  
Article
Adaptive Motion Planning Leveraging Speed-Differentiated Prediction for Mobile Robots in Dynamic Environments
by Tengfei Liu, Zihe Wang, Jiazheng Hu, Shuling Zeng, Xiaoxu Liu and Tan Zhang
Appl. Sci. 2025, 15(13), 7551; https://doi.org/10.3390/app15137551 - 4 Jul 2025
Abstract
This paper presents a novel motion planning framework for mobile robots operating in dynamic and uncertain environments, with an emphasis on accurate trajectory prediction and safe, efficient obstacle avoidance. The proposed approach integrates search-based planning with deep learning techniques to improve both robustness and interpretability. A multi-sensor perception module is designed to classify obstacles as either static or dynamic, thereby enhancing environmental awareness and planning reliability. To address the challenge of motion prediction, we introduce the K-GRU Kalman method, which first applies K-means clustering to distinguish between high-speed and low-speed dynamic obstacles, then models their trajectories using a combination of Kalman filtering and gated recurrent units (GRUs). Compared to state-of-the-art RNN and LSTM-based predictors, the proposed method achieves superior accuracy and generalization. Extensive experiments in both simulated and real-world scenarios of varying complexity demonstrate the effectiveness of the framework. The results show an average planning success rate exceeding 60%, along with notable improvements in path safety and smoothness, validating the contribution of each module within the system. Full article
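The speed-differentiation step — K-means splitting dynamic obstacles into low- and high-speed groups before choosing a predictor — reduces in one dimension to a tiny two-cluster k-means. The initialization and iteration count below are assumptions for illustration.

```python
import numpy as np

def split_by_speed(speeds, iters=20):
    """1-D two-cluster k-means: label 0 = low-speed, 1 = high-speed obstacles."""
    speeds = np.asarray(speeds, dtype=float)
    c = np.array([speeds.min(), speeds.max()])        # spread-out initial centres
    for _ in range(iters):
        labels = np.abs(speeds[:, None] - c).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = speeds[labels == k].mean()
    return labels, c
```

Each group can then be routed to its own predictor, as in the abstract: a lighter model for slow obstacles and the GRU-augmented filter for fast ones.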

36 pages, 4138 KiB  
Article
Shoulder and Scapular Function Before and After a Scapular Therapeutic Exercise Program for Chronic Shoulder Pain and Scapular Dyskinesis: A Pre–Post Single-Group Study
by Ana S. C. Melo, Ana L. Soares, Catarina Castro, Ricardo Matias, Eduardo B. Cruz, J. Paulo Vilas-Boas and Andreia S. P. Sousa
J. Pers. Med. 2025, 15(7), 285; https://doi.org/10.3390/jpm15070285 - 2 Jul 2025
Abstract
Background/Objectives: Scapular adaptations have been associated with shoulder pain. However, conflicting findings have been reported after scapular-focused interventions. The present study aims to evaluate scapula-related outcomes before and after a scapular therapeutic exercise program. Methods: Eighteen adult volunteers with chronic shoulder pain participated in an 8-week scapular therapeutic exercise program that was personalized according to their pain condition and the presence of scapular dyskinesis. This program included preparation and warm-up, scapular neuromotor control, and strengthening and stretching exercises. Both self-reported (shoulder pain and function, psychosocial factors, and self-impression of change) and performance-based outcomes (scapular muscular stiffness and activity level, tridimensional motion, rhythm, and movement quality, measured while participants drank a bottle of water) were used for analysis. Results: After the intervention, participants presented reduced shoulder pain (p < 0.0001) and pain catastrophizing (p = 0.004) and increased shoulder function (p < 0.0001). Additionally, the participants presented changes in scapular winging (p < 0.0001 to p = 0.043), increased scapular downward rotation (p < 0.0001) and depression (p = 0.038), and decreased global movement smoothness (p = 0.003). These were associated with changes in serratus anterior activity (p = 0.016 to p = 0.035), decreased middle (p < 0.0001 to p = 0.002) and lower trapezius (p < 0.0001) and levator scapulae (p = 0.048) activity levels, and decreased middle trapezius muscle stiffness (p = 0.014). Patients’ self-perception of change was rated favorably. Conclusions: After a scapular therapeutic exercise program, changes were observed in both self-reported and performance-based outcomes. These results need to be confirmed by a randomized controlled trial. Full article

16 pages, 1671 KiB  
Article
How Does the Number of Small Goals Affect National-Level Female Soccer Players in Game-Based Situations? Effects on Technical–Tactical, Physical, and Physiological Variables
by Dovydas Alaune, Audrius Snieckus, Bruno Travassos, Paweł Chmura, David Pizarro and Diogo Coutinho
Sensors 2025, 25(13), 4035; https://doi.org/10.3390/s25134035 - 28 Jun 2025
Abstract
This study investigated the impact of varying the number of small goals on elite female soccer players’ decision-making, technical–tactical skills, running performance, and perceived exertion during game-based situations (GBSs). Sixteen national-level female players (aged 22.33 ± 2.89 years) participated in three conditions within an 8vs8 game without a goalkeeper (45 × 40 m), each featuring a different number of small goals (1.2 × 0.8 m): (i) 1 small goal (1G); (ii) 2 small goals (2G); and (iii) 3 small goals (3G). Player performance was evaluated using positional tracking sensors, ratings of perceived exertion, and notational analysis. The results indicated that players covered a greater distance at low intensity during the 2G condition compared to both 1G (p = 0.024) and 3G (p ≤ 0.05). Conversely, the 3G condition promoted a higher distance covered at high intensity compared to 2G (p ≤ 0.05). The 1G condition resulted in fewer accelerations (2G, p = 0.003; 3G, p < 0.001) and decelerations (2G, p = 0.012) compared to conditions with additional goals. However, there were no statistically significant effects on technical–tactical actions. Notably, a trend toward improved decision-making was observed in the 1G condition compared to 2G (ES = −0.64 [−1.39; 0.11]) and a longer ball possession duration compared to 3G (ES = −0.28 [−0.71; 0.16]). In conclusion, coaches working with elite female soccer players can strategically vary the number of goals to achieve specific physical aims (i.e., using 2G to emphasize acceleration and deceleration or 3G to promote high-intensity distance) with minimal effects on their perceived fatigue, technical–tactical variables, and decision-making. Full article
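The effect sizes (ES) reported above with confidence intervals are consistent with a standardized mean difference such as Cohen's d; a minimal pooled-standard-deviation sketch (the exact estimator used in the study is an assumption here):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled sample SD.
    Negative values mean group_a scored lower than group_b, matching the
    sign convention in the reported ES values."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    # Sample variances (n - 1 denominator)
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd
```

Variants such as Hedges' g apply a small-sample correction, so the study's values may differ slightly from this formula.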
(This article belongs to the Section Wearables)
22 pages, 5516 KiB  
Article
Technology and Method Optimization for Foot–Ground Contact Force Detection in Wheel-Legged Robots
by Chao Huang, Meng Hong, Yaodong Wang, Hui Chai, Zhuo Hu, Zheng Xiao, Sijia Guan and Min Guo
Sensors 2025, 25(13), 4026; https://doi.org/10.3390/s25134026 - 27 Jun 2025
Abstract
Wheel-legged robots combine the advantages of both wheeled robots and traditional quadruped robots, enhancing terrain adaptability but posing higher demands on the perception of foot–ground contact forces. However, existing approaches still suffer from limited accuracy in estimating contact positions and three-dimensional contact forces when dealing with flexible tire–ground interactions. To address this challenge, this study proposes a foot–ground contact state detection technique and optimization method based on multi-sensor fusion and intelligent modeling for wheel-legged robots. First, finite element analysis (FEA) is used to simulate strain distribution under various contact conditions. Combined with global sensitivity analysis (GSA), the optimal placement of PVDF sensors is determined and experimentally validated. Subsequently, under dynamic gait conditions, data collected from the PVDF sensor array are used to predict three-dimensional contact forces through Gaussian process regression (GPR) and artificial neural network (ANN) models. A custom experimental platform is developed to replicate variable gait frequencies and collect dynamic contact data for validation. The results demonstrate that both GPR and ANN models achieve high accuracy in predicting dynamic 3D contact forces, with normalized root mean square error (NRMSE) as low as 8.04%. The models exhibit reliable repeatability and generalization to novel inputs, providing robust technical support for stable contact perception and motion decision-making in complex environments. Full article
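For context, the NRMSE figure quoted above is conventionally the root mean square error normalized by the range of the measured values; a minimal sketch (an assumption, since the abstract does not state which normalization the authors used):

```python
import numpy as np

def nrmse(y_true, y_pred):
    """RMSE normalized by the range of the observations, as a percentage."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / (y_true.max() - y_true.min())
```

Normalizing by the mean or the standard deviation instead of the range yields different percentages, so comparisons across papers should check which definition is in use.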
(This article belongs to the Section Sensors and Robotics)
