Search Results (513)

Search Parameters:
Keywords = mobile robotic platform

22 pages, 4788 KB  
Article
Enhanced Indoor Mobile Robot Localization via Lie-Group IMU–UWB Fusion and Dual-Stage Kalman Filtering
by Zhengyang He, Xiaojie Tang, Muzi Li and Fengyun Zhang
Sensors 2026, 26(9), 2686; https://doi.org/10.3390/s26092686 - 26 Apr 2026
Abstract
Indoor mobile robots often experience degraded localization accuracy and robustness when relying on a single positioning modality. In addition, conventional pose computation based on Euler-parameterized transformations can be computationally involved and susceptible to singularities, while practical fusion schemes may not adequately suppress measurement errors. This paper proposes an indoor robot localization method, termed IMU_UWB_ESKF, which tightly fuses inertial and UWB measurements using a Lie-group state representation. IMU- and UWB-derived quantities are formulated on the associated Lie algebra, enabling numerically stable pose propagation and measurement updates. To mitigate sensor noise and reduce drift, a dual-stage Kalman filtering strategy is adopted: an EKF-based measurement correction is first performed, followed by a multi-dimensional error-state Kalman filter for refined fusion. The proposed pipeline is implemented on a wheeled-robot platform under ROS, integrating real-time IMU/UWB parameter extraction, pose transformation, and online state estimation. Experimental results demonstrate stable real-time localization with improved robustness and accuracy under dynamic motion, indicating the method’s applicability to indoor navigation tasks. Full article
(This article belongs to the Section Sensors and Robotics)
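The dual-stage filtering idea in the abstract above can be illustrated with a minimal error-state Kalman filter sketch. This is not the paper's Lie-group formulation: the 1D constant-velocity model, state layout, and noise values below are illustrative assumptions only.

```python
import numpy as np

# Minimal error-state Kalman filter sketch for IMU/UWB fusion (illustrative
# assumptions: 1D constant-velocity model, UWB provides a noisy position fix).

dt = 0.01                      # IMU sample period [s] (assumed)
F = np.array([[1.0, dt],       # error-state transition (position, velocity)
              [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])      # process noise (assumed)
H = np.array([[1.0, 0.0]])     # UWB observes position only
R = np.array([[0.05**2]])      # UWB noise, 5 cm std (assumed)

x_nom = np.array([0.0, 1.0])   # nominal state: pos = 0 m, vel = 1 m/s
dx = np.zeros(2)               # error state
P = np.eye(2) * 0.1

def propagate(x_nom, dx, P, accel):
    """Propagate nominal state with IMU, error state and covariance with F."""
    x_nom = x_nom + np.array([x_nom[1] * dt, accel * dt])
    dx = F @ dx
    P = F @ P @ F.T + Q
    return x_nom, dx, P

def uwb_update(x_nom, dx, P, z):
    """Correct with a UWB position fix, then fold the error into the nominal."""
    y = z - H @ (x_nom + dx)            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    dx = dx + K @ y
    P = (np.eye(2) - K @ H) @ P
    x_nom = x_nom + dx                  # inject error state into nominal
    return x_nom, np.zeros(2), P        # reset error state after injection

for k in range(100):                    # 1 s of IMU propagation at 100 Hz
    x_nom, dx, P = propagate(x_nom, dx, P, accel=0.0)
x_nom, dx, P = uwb_update(x_nom, dx, P, z=np.array([1.02]))
```

The inject-then-reset step at the end of `uwb_update` is what distinguishes the error-state form from a plain EKF.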
26 pages, 73077 KB  
Article
Design and Integration of Autonomous Robotic Platform for In Situ Measurement of Soil Organic Carbon and Soil Respiration
by Josip Spudić, Ana Šelek, Matija Rizvan, Ivan Hrabar, Saša Šteković, Stjepan Flegarić, Boris Đurđević, Irena Jug, Danijel Jug, Nikica Perić, Goran Vasiljević and Zdenko Kovačić
Actuators 2026, 15(5), 233; https://doi.org/10.3390/act15050233 - 23 Apr 2026
Viewed by 139
Abstract
The continuous and reliable monitoring of soil organic carbon and soil respiration is vital for sustainable agricultural and environmental management. However, current manual methods are labor-intensive and time-consuming. This work focuses on the development of a fully automated robotic platform for in situ measurement of Soil Organic Carbon (SOC) and Soil Respiration (Rs). The system consists of a four-wheeled mobile platform, equipped with a robotic arm, and custom sampling and measurement tools. The platform is designed with a protected central opening that houses an on-board laboratory, enabling automated surface cleaning, soil drilling, sample collection and homogenization, SOC spectroscopy analysis, and chamber-based soil respiration measurement. The platform is equipped with a high-force mechanical insertion mechanism capable of operating a range of tools designed for soil treatment and penetration. These tools include a soil surface scraper, a soil respiration chamber, and a soil drilling unit. The mobile robotic laboratory system enables the sequential deployment of these tools in any desired order, providing flexible and efficient in-field operation. Full article
(This article belongs to the Special Issue Design and Control of Agricultural Robotics)
35 pages, 8415 KB  
Article
Research on Three-Dimensional Positioning Method for Automatic Strawberry Fruit Picking Based on Vision–IMU Fusion
by Bowen Liu, Chuhan Chen, Junqiu Li, Qinghui Zhang and Yinghao Meng
Agriculture 2026, 16(8), 893; https://doi.org/10.3390/agriculture16080893 - 17 Apr 2026
Viewed by 332
Abstract
Accurate fruit localization and efficient harvesting are key challenges for agricultural robots, especially in dynamic orchard environments, where platform vibration, fruit occlusion, and computational resource limitations of embedded devices significantly impact system performance. To address these issues, this paper proposes a lightweight “fruit detection + harvesting” framework. First, by integrating MobileNetV4 and Triplet Attention mechanisms, an improved YOLOv8n network is designed, with the improved YOLOv8n Precision reaching 98.148% and FPS reaching 30 FPS on Jetson Nano, achieving a good balance between detection accuracy and computational efficiency suitable for edge deployment. Second, a strawberry three-dimensional coordinate reconstruction method based on weighted 3D centroid reconstruction is proposed, utilizing depth bias adjustment coefficients to improve spatial accuracy. Third, to address localization errors caused by vibration and platform motion, a dynamic compensation and temporal fusion strategy based on an Inertial Measurement Unit (IMU) is proposed. The rotation matrix estimated from IMU data is first used to correct camera pose variations. Then, an adaptive sliding window is employed to smooth the coordinate sequence. Finally, an Extended Kalman Filter (EKF) is applied to further refine the fused results by incorporating temporal dynamics, ensuring that the reconstructed three-dimensional coordinates in the robotic arm reference frame achieve higher stability and continuity. Experimental results in orchard scenarios show that compared with traditional methods, the system has higher localization accuracy, stronger robustness to dynamic disturbances, and higher harvesting efficiency. This work provides a practical and deployable solution for advancing intelligent fruit-harvesting robots. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
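Two steps of the pipeline sketched in the abstract above — the weighted 3D centroid with a depth bias coefficient, and sliding-window smoothing of the coordinate sequence — can be illustrated as follows. The inverse-depth weighting, bias value, and window size are assumptions for illustration, not the paper's formulation.

```python
import numpy as np
from collections import deque

def weighted_centroid(points, depths, bias=0.97):
    """Weighted 3D centroid of mask pixels: nearer returns weighted more,
    with a depth bias adjustment coefficient on the fused depth (assumed)."""
    w = 1.0 / (depths + 1e-6)              # assumed inverse-depth weighting
    w /= w.sum()
    cx, cy = (points * w[:, None]).sum(axis=0)
    cz = float((depths * w).sum()) * bias  # bias coefficient (assumed value)
    return np.array([cx, cy, cz])

class SlidingWindowSmoother:
    """Fixed-size sliding-window mean over a 3D coordinate sequence."""
    def __init__(self, size=5):
        self.buf = deque(maxlen=size)
    def update(self, xyz):
        self.buf.append(np.asarray(xyz, dtype=float))
        return np.mean(self.buf, axis=0)

pts = np.array([[320.0, 240.0], [322.0, 241.0], [318.0, 239.0]])  # pixel coords
depths = np.array([0.50, 0.52, 0.49])                             # meters
c = weighted_centroid(pts, depths)
sm = SlidingWindowSmoother(size=3)
for _ in range(3):
    c = sm.update(c)
```

In the paper the window is adaptive and an EKF refines the smoothed sequence; the fixed window here only shows the basic structure.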
42 pages, 8791 KB  
Article
Integrating Adaptive Constraints with an Enhanced Metaheuristic for Zero-Latency Trajectory Planning in Robotic Manufacturing Processes
by Houxue Xia, Zhenyu Sun, Huagang Tong and Liusan Wu
Processes 2026, 14(8), 1282; https://doi.org/10.3390/pr14081282 - 17 Apr 2026
Viewed by 169
Abstract
In flexible manufacturing systems, the composite mobile manipulator (CMM) is subject to nonlinear inertial disturbances arising from the dynamic coupling between the mobile platform and the robotic arm. These disturbances significantly impair positioning precision during grasping tasks. This paper addresses the dynamic decoupling of multi-body nonlinear inertial disturbances within CMM systems. Departing from the conventional “stop-then-plan” serial execution paradigm, we propose a full-cycle spatiotemporally coupled trajectory optimization method. The operation cycle is bifurcated into two synergistic stages: “dynamic calibration” and “static execution.” The dynamic calibration trajectory is pre-planned and executed synchronously during platform movement to actively compensate for inertial-induced pose deviations. Concurrently, the static execution trajectory is optimized and then triggered immediately upon platform standstill, ensuring a seamless and precise transition to the “Grasping Pose”. It is worth noting that the temporal characteristic central to this framework lies in the concurrent execution of static trajectory optimization and platform transit: by the time the platform reaches its destination, the pre-planned trajectory is already available for immediate triggering, achieving zero task-switching wait time at the planning layer. The term “zero-latency” here does not imply a fixed-cycle real-time response at the control layer, but rather the complete elimination of decision latency afforded by the parallel planning architecture. This framework eliminates computational latency, markedly enhancing operational efficiency. Key innovations include two novel constraints. First, the Adaptive Task-space Bounded Search Constraint (ATBSC) framework restricts optimization to a geometry-inspired search region, thereby enhancing search efficiency and ensuring controllable deviations. 
Second, the Multi-Rigid-Body Coupling Constraint (MRBCC) system explicitly models inertial transmission across motion phases to suppress pose fluctuations. The proposed framework is developed and validated within an obstacle-free workspace. In simulation-based validation on a UR10 6 degree-of-freedom manipulator model, experimental results indicate that ATBSC increases valid solution density to 84.7% and reduces average deviation by 72.8%. Furthermore, under the tested conditions, MRBCC mitigates end-effector position errors by 79.7–81.0% with a 97.5% constraint satisfaction rate. The improved Cuckoo Search algorithm (ICSA), serving as the solver component of the proposed framework, achieves an 11.9% lower fitness value and a 13.1% faster convergence rate compared to the standard Cuckoo Search algorithm in the tested scenarios, suggesting its effectiveness as a reliable solver for the constrained multi-objective trajectory optimisation problem. Full article
(This article belongs to the Section AI-Enabled Process Engineering)
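For context on the solver family the ICSA above improves on, here is a minimal standard Cuckoo Search (Lévy flights plus nest abandonment) on a toy objective. The parameter values are common textbook defaults, not the paper's settings, and none of the ATBSC/MRBCC constraints are modeled.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def levy(dim, beta=1.5):
    """Mantegna's algorithm for Levy-stable step sizes."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n_nests=15, iters=200, pa=0.25, alpha=0.01):
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fit = np.array([f(x) for x in nests])
    for _ in range(iters):
        best = nests[np.argmin(fit)].copy()
        for i in range(n_nests):
            cand = nests[i] + alpha * levy(dim) * (nests[i] - best)
            j = rng.integers(n_nests)          # random nest to compare against
            if f(cand) < fit[j]:
                nests[j], fit[j] = cand, f(cand)
        worst = np.argsort(fit)[-int(pa * n_nests):]   # abandon worst fraction
        nests[worst] = rng.uniform(-5, 5, (len(worst), dim))
        fit[worst] = [f(x) for x in nests[worst]]
    i = np.argmin(fit)
    return nests[i], fit[i]

sphere = lambda x: float(np.sum(x ** 2))       # toy objective, minimum at 0
x_best, f_best = cuckoo_search(sphere)
```

The paper's improvement additionally restricts candidates to a geometry-inspired search region (ATBSC), which is what drives the reported gains in valid-solution density.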
44 pages, 24044 KB  
Review
Ground Mobile Robots for High-Throughput Plant Phenotyping: A Review from the Closed-Loop Perspective of Perception, Decision, and Action
by Heng-Wei Zhang, Yi-Ming Qin, An-Qi Wu, Xi Xi, Pingfan Hu and Rui-Feng Wang
Plants 2026, 15(8), 1218; https://doi.org/10.3390/plants15081218 - 16 Apr 2026
Viewed by 632
Abstract
High-throughput plant phenotyping (HTPP) is increasingly limited by the mismatch between the need for field-relevant, fine-grained phenotypic information and the restricted capability of conventional observation platforms under complex agricultural conditions. Ground mobile robots are emerging as the key carrier for resolving this gap because they combine close-range sensing, autonomous mobility, and physical interaction within real field environments. In this paper, a structured scoping review is presented using a closed-loop perception–decision–action pipeline as the organizing principle. Within this framework, recent advances are synthesized from the perspectives of multimodal fusion, localization-aware sensing, motion planning, deep-learning-based phenotypic analysis, active observation, robotic intervention, and edge deployment. The review further clarifies the complementary roles of Unmanned Aerial Vehicles (UAVs), Unmanned Ground Vehicles (UGVs), and air–ground collaboration in multiscale phenotyping workflows. Beyond summarizing technologies, the article provides three concrete deliverables: a structured taxonomy of mobile phenotyping systems; comparative tables covering sensing modalities, localization/navigation methods, and AI models; and a research agenda linking technical progress to field deployability. The synthesis highlights four persistent bottlenecks, namely environmental generalization, annotation scarcity, limited standardization and reproducibility, and the gap between advanced models and agricultural edge hardware. Overall, ground robots are identified not merely as sensing platforms, but as the central system architecture for advancing mobile phenotyping toward autonomous, fine-grained, and field-deployable operation. Full article
(This article belongs to the Special Issue Advanced Remote Sensing and AI Techniques in Agriculture and Forestry)
28 pages, 3527 KB  
Article
Autonomous Tomato Harvesting System Integrating AI-Controlled Robotics in Greenhouses
by Mihai Gabriel Matache, Florin Bogdan Marin, Catalin Ioan Persu, Robert Dorin Cristea, Florin Nenciu and Atanas Z. Atanasov
Agriculture 2026, 16(8), 847; https://doi.org/10.3390/agriculture16080847 - 11 Apr 2026
Viewed by 994
Abstract
Labor shortages and the need for increased productivity have accelerated the development of robotic harvesting systems for greenhouse crops; however, reliable operation under fruit occlusion and clustered arrangements remains a major challenge, particularly due to the limited integration between perception and motion planning modules. The paper presents the design and experimental validation of an autonomous robotic system for greenhouse tomato harvesting. The proposed platform integrates a rail-guided mobile base, a six-degrees-of-freedom robotic manipulator, and an adaptive end effector with a hybrid vision framework that combines convolutional neural networks and watershed-based segmentation to enable robust fruit detection and localization under occluded conditions. The proposed approach enables improved separation of overlapping fruits and provides accurate spatial localization through stereo vision combined with IMU-assisted camera-to-robot coordinate transformation. An occlusion-aware trajectory planning strategy was developed to generate collision-free manipulation paths in the presence of leaves and stems, enhancing harvesting safety and reliability. The system was trained and evaluated using a dataset of real greenhouse images supplemented with synthetic data augmentation. Experimental trials conducted under practical greenhouse conditions demonstrated a fruit detection precision of 96.9%, recall of 93.5%, and mean Intersection-over-Union of 79.2%. The robotic platform achieved an overall harvesting success rate of 78.5%, reaching 85% for unobstructed fruits, with an average cycle time of 15 s per fruit in direct harvesting scenarios. The rail-guided mobility significantly improved positioning stability and repeatability during manipulation compared with fully mobile platforms. 
The results confirm that integrating hybrid perception with occlusion-aware motion planning can substantially improve the functionality of robotic harvesting systems in protected cultivation environments. The proposed solution contributes to the advancement of automation technologies for greenhouse vegetable production and supports the transition toward more sustainable and labor-efficient agricultural practices. Full article
26 pages, 6023 KB  
Article
Comparative Modeling and Experimental Validation of Two Four-Wheel Omnidirectional Locomotion Architectures for a Modular Mobile Robot
by Iosif-Adrian Maroșan, Alexandru Bârsan, George Constantin, Sever-Gabriel Racz, Radu-Eugen Breaz, Claudia-Emilia Gîrjob, Mihai Crenganiș and Cristina-Maria Biriș
Appl. Sci. 2026, 16(8), 3646; https://doi.org/10.3390/app16083646 - 8 Apr 2026
Viewed by 319
Abstract
This paper presents a comparative modeling and experimental validation study for a modular four-wheel omnidirectional mobile robot, focusing on two locomotion architectures implemented on the same platform: four omni wheels (90° rollers) and four Mecanum wheels (45° rollers). Both configurations were evaluated under identical benchmark conditions on a 1 m × 1 m square path (4 m total path length), using the same nominal 12 V supply and the same test duration, in order to ensure a fair and reproducible cross-architecture comparison. A MATLAB/Simulink–Simscape dynamic model was developed for both architectures, while experimental validation was performed using Hall-effect current sensors integrated into the drive modules. Based on the measured and simulated motor currents, a 12 V-based electrical input-power estimate was evaluated at both motor and robot level. For the considered benchmark, the four-Mecanum configuration exhibited a lower measured input-power estimate than the four-omni configuration (17.88 W vs. 25.75 W), corresponding to an approximate reduction of 30.6% under the adopted assumptions. At robot level, the deviation between simulated and measured total input-power estimate was 3.70% for the four-omni architecture and 21.42% for the four-Mecanum architecture, indicating higher predictive agreement for the omni-wheel model in its present form. The comparative analysis also suggests that wheel–ground interaction and roller geometry influence not only the measured current demand but also the level of agreement between simulation and experiment. Although the present study is limited to a single standardized benchmark and nominal-voltage conditions, it provides a controlled basis for comparing the two locomotion solutions and for identifying directions for further model refinement. 
The findings should therefore be interpreted as benchmark-specific comparative results offering practical guidance for locomotion architecture selection and for future refinement of friction-aware omnidirectional robot models. Full article
(This article belongs to the Special Issue Kinematics, Motion Planning and Control of Robotics)
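The 12 V-based input-power estimate and the quoted ~30.6% reduction follow directly from P = V × I summed over the drive motors, using the paper's robot-level figures. The per-motor currents in the helper call are invented illustrative values.

```python
V_SUPPLY = 12.0  # nominal supply voltage [V]

def robot_input_power(motor_currents_a, v=V_SUPPLY):
    """Robot-level electrical input-power estimate: sum of V*I per motor."""
    return sum(v * i for i in motor_currents_a)

# Robot-level measured estimates reported in the abstract above [W]:
p_mecanum = 17.88
p_omni = 25.75
reduction = (p_omni - p_mecanum) / p_omni * 100
print(f"{reduction:.1f}% lower input power for the Mecanum configuration")  # → 30.6%
```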
20 pages, 3653 KB  
Article
Constrained Multibody Dynamic Modeling and Power Benchmarking of a Three-Omni-Wheel Mobile Robot
by Iosif-Adrian Maroșan, Sever-Gabriel Racz, Radu-Eugen Breaz, Alexandru Bârsan, Claudia-Emilia Gîrjob, Mihai Crenganiș, Cristina-Maria Biriș and Anca-Lucia Chicea
Machines 2026, 14(4), 398; https://doi.org/10.3390/machines14040398 - 5 Apr 2026
Viewed by 407
Abstract
Omnidirectional mobile robots are increasingly used in industrial and service applications due to their high maneuverability and ability to perform combined translational and rotational motions in confined spaces. However, these locomotion advantages are often accompanied by additional wheel–ground interaction losses, making power consumption an important criterion in the design of efficient mobile platforms. This study presents a dynamic modeling and experimental-power benchmarking framework for a modular mobile robot equipped with three omnidirectional wheels, using a four-omni-wheel configuration as a baseline reference for comparison. A CAD-consistent multibody dynamic model of the three-wheel architecture is developed in the MATLAB/Simulink–Simscape Multibody R2024b environment to estimate motor currents and electrical-power demand during motion. Experimental validation is carried out on the physical prototype using Hall-effect current sensors integrated into the drive modules, enabling real-time current acquisition for each motor. Both the simulation and experiments are performed on a standardized 1 m square-path benchmark at a constant 12 V supply. The results show that the proposed three-omni-wheel configuration reaches a total measured power of 14.43 W and a simulated power of 12.72 W, corresponding to a robot-level deviation of 11.85%. By comparison, the four-omni-wheel baseline exhibits a total measured power of 25.75 W and a simulated power of 24.92 W. Therefore, the proposed three-wheel architecture reduces the measured power demand by approximately 43.96% relative to the baseline, while the four-wheel configuration provides higher model fidelity. The proposed methodology supports power-oriented evaluation and informed design selection of omnidirectional locomotion architectures for modular mobile robots intended for industrial applications. Full article
(This article belongs to the Special Issue New Trends in Industrial Robots)
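The class of kinematic model underlying such a platform can be sketched with the standard inverse kinematics of a three-omni-wheel base (wheels at 0°/120°/240°, mounted at radius L). This is the textbook relation, not the paper's CAD-consistent multibody model, and L is an assumed value.

```python
import numpy as np

L = 0.15  # wheel mounting radius [m] (assumed)
ANGLES = np.deg2rad([0.0, 120.0, 240.0])  # wheel placement around the chassis

def wheel_speeds(vx, vy, omega):
    """Map a body twist (vx, vy, omega) to the three wheel surface speeds:
    v_i = -sin(a_i)*vx + cos(a_i)*vy + L*omega."""
    return np.array([-np.sin(a) * vx + np.cos(a) * vy + L * omega
                     for a in ANGLES])

v = wheel_speeds(vx=0.0, vy=0.3, omega=0.0)   # pure sideways translation
```

For the pure translation above, one wheel carries the full speed and the other two split the opposing component, which is one source of the extra wheel–ground losses discussed in the abstract.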
38 pages, 3132 KB  
Article
Lightweight Semantic-Aware Route Planning on Edge Hardware for Indoor Mobile Robots: Monocular Camera–2D LiDAR Fusion with Penalty-Weighted Nav2 Route Server Replanning
by Bogdan Felician Abaza, Andrei-Alexandru Staicu and Cristian Vasile Doicin
Sensors 2026, 26(7), 2232; https://doi.org/10.3390/s26072232 - 4 Apr 2026
Viewed by 1120
Abstract
The paper introduces a computationally efficient semantic-aware route planning framework for indoor mobile robots, designed for real-time execution on resource-constrained edge hardware (Raspberry Pi 5, CPU-only). The proposed architecture fuses monocular object detection with 2D LiDAR-based range estimation and integrates the resulting semantic annotations into the Nav2 Route Server for penalty-weighted route selection. Object localization in the map frame is achieved through the Angular Sector Fusion (ASF) pipeline, a deterministic geometric method requiring no parameter tuning. The ASF projects YOLO bounding boxes onto LiDAR angular sectors and estimates the object range using a 25th-percentile distance statistic, providing robustness to sparse returns and partial occlusions. All intrinsic and extrinsic sensor parameters are resolved at runtime via ROS 2 topic introspection and the URDF transform tree, enabling platform-agnostic deployment. Detected entities are classified according to mobility semantics (dynamic, static, and minor) and persistently encoded in a GeoJSON-based semantic map, with these annotations subsequently propagated to navigation graph edges as additive penalties and velocity constraints. Route computation is performed by the Nav2 Route Server through the minimization of a composite cost functional combining geometric path length with semantic penalties. A reactive replanning module monitors semantic cost updates during execution and triggers route invalidation and re-computation when threshold violations occur. Experimental evaluation over 115 navigation segments (legs) on three heterogeneous robotic platforms (two single-board RPi5 configurations and one dual-board setup with inference offloading) yielded an overall success rate of 97% (baseline: 100%, adaptive: 94%), with 42 replanning events observed in 57% of adaptive trials. 
Navigation time distributions exhibited statistically significant departures from normality (Shapiro–Wilk, p < 0.005). While central tendency differences between the baseline and adaptive modes were not significant (Mann–Whitney U, p = 0.157), the adaptive planner reduced temporal variance substantially (σ = 11.0 s vs. 31.1 s; Levene’s test W = 3.14, p = 0.082), primarily by mitigating AMCL recovery-induced outliers. On-device YOLO26n inference, executed via the NCNN backend, achieved 5.5 ± 0.7 FPS (167 ± 21 ms latency), and distributed inference reduced the average system CPU load from 85% to 48%. The study further reports deployment-level observations relevant to the Nav2 ecosystem, including GeoJSON metadata persistence constraints, graph discontinuity (“path-gap”) artifacts, and practical Route Server configuration patterns for semantic cost integration. Full article
(This article belongs to the Special Issue Advances in Sensing, Control and Path Planning for Robotic Systems)
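Penalty-weighted route selection of the kind described above reduces to shortest-path search over a graph whose edge cost is geometric length plus an additive semantic penalty. The graph, penalty values, and node names below are invented examples, not the Nav2 Route Server API.

```python
import heapq

def dijkstra(edges, start, goal):
    """Shortest path where edges maps node -> [(neighbor, length, penalty)]
    and edge cost = geometric length + additive semantic penalty."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length, penalty in edges.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + length + penalty, nxt, path + [nxt]))
    return float("inf"), []

graph = {
    "A": [("B", 2.0, 0.0), ("C", 1.0, 0.0)],
    "B": [("D", 2.0, 0.0)],
    "C": [("D", 1.0, 5.0)],   # geometrically shorter leg, but penalized
}
cost, path = dijkstra(graph, "A", "D")
```

The penalized A–C–D route (cost 7) loses to the longer but unpenalized A–B–D route (cost 4), mirroring how a semantic penalty diverts the robot around a flagged edge.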
25 pages, 4371 KB  
Article
GTS-SLAM: A Tightly-Coupled GICP and 3D Gaussian Splatting Framework for Robust Dense SLAM in Underground Mines
by Yi Liu, Changxin Li and Meng Jiang
Vehicles 2026, 8(4), 79; https://doi.org/10.3390/vehicles8040079 - 3 Apr 2026
Viewed by 525
Abstract
To address unstable localization and sparse mapping for autonomous vehicles operating in GPS-denied and low-visibility environments, this paper proposes GTS-SLAM, a tightly coupled dense visual SLAM framework integrating Generalized Iterative Closest Point (GICP) and 3D Gaussian Splatting (3DGS). The system is designed for intelligent driving platforms such as underground mining vehicles, inspection robots, and tunnel autonomous navigation systems. The front-end performs covariance-aware point-cloud registration using GICP to achieve robust pose estimation under low texture, dust interference, and dynamic disturbances. The back-end employs probabilistic dense mapping based on 3DGS, combined with scale regularization, scale alignment, and keyframe factor-graph optimization, enabling synchronized optimization of localization and mapping. A Compact-3DGS compression strategy further reduces memory usage while maintaining real-time performance. Experiments on public datasets and real underground-like scenarios demonstrate centimeter-level trajectory accuracy, high-quality dense reconstruction, and real-time rendering. The system provides reliable perception capability for vehicle autonomous navigation, obstacle avoidance, and path planning in confined and weak-light environments. Overall, the proposed framework offers a deployable solution for autonomous driving and mobile robots requiring accurate localization and dense environmental understanding in challenging conditions. Full article
(This article belongs to the Special Issue AI-Empowered Assisted and Autonomous Driving)
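At the core of any ICP-family front-end, GICP included, is a rigid alignment solve. Below is the plain point-to-point Kabsch/SVD step for known correspondences; GICP additionally weights the residuals by per-point covariances, which is omitted here for brevity, and the synthetic cloud is an invented test case.

```python
import numpy as np

def rigid_align(src, dst):
    """Return rotation R and translation t minimizing ||R @ src + t - dst||
    for corresponding point sets, via the Kabsch/SVD construction."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # fix an improper (reflected) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(1)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.1])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)               # recovers R_true, t_true exactly
```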
31 pages, 7864 KB  
Article
Development of a General-Purpose AI-Powered Robotic Platform for Strawberry Harvesting
by Muhammad Tufail, Jamshed Iqbal and Rafiq Ahmad
Agriculture 2026, 16(7), 769; https://doi.org/10.3390/agriculture16070769 - 31 Mar 2026
Viewed by 576
Abstract
The integration of emerging technologies such as robotics and artificial intelligence (AI) has the potential to transform agricultural harvesting by improving efficiency, reducing waste, lowering labor dependency, and enhancing produce quality. This paper presents the development of an intelligent robotic berry harvesting system that combines deep learning–based perception with autonomous robotic manipulation for real-time strawberry harvesting. A computer vision pipeline based on the YOLOv11 segmentation model was developed and integrated into a Smart Mobile Manipulator (SMM) equipped with autonomous navigation, a 6-degree-of-freedom (6-DoF) xArm 6 robotic arm, and ROS middleware to enable real-time operation. Using a publicly available strawberry dataset comprising 2,800 images collected under ridge-planted cultivation conditions, the proposed YOLOv11-small segmentation model achieved 84.41% mAP@0.5, outperforming YOLOv11 object detection, Faster R-CNN, and RT-DETR in segmentation quality while maintaining real-time performance at 10 FPS on an NVIDIA Jetson Orin Nano edge GPU. A PCA-based fruit orientation and geometric analysis method achieved 86.5% localization accuracy on 200 test images. Controlled indoor harvesting experiments using synthetic strawberries demonstrated an overall harvesting success rate of 72% across 50 trials. The proposed system provides a general-purpose platform for berry harvesting in controlled environments, offering a scalable and efficient solution for autonomous harvesting. Full article
(This article belongs to the Special Issue Advances in Robotic Systems for Precision Orchard Operations)
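The PCA-based orientation step mentioned above amounts to taking the principal axis of the segmentation-mask pixel cloud as the fruit's in-plane orientation. The synthetic elongated blob below stands in for a real mask; the paper's full geometric analysis is not reproduced.

```python
import numpy as np

def principal_axis_angle(points):
    """Angle (radians) of the first principal component of a 2D point cloud."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]   # eigenvector of largest eigenvalue
    return np.arctan2(major[1], major[0])

rng = np.random.default_rng(2)
t = rng.normal(size=500)
# Elongated blob oriented along the (3, 1) direction (~18.4 deg), plus noise:
blob = np.c_[3.0 * t, 1.0 * t + 0.2 * rng.normal(size=500)]
angle = np.degrees(principal_axis_angle(blob))
```

The eigenvector sign is arbitrary, so downstream code should treat the angle modulo 180°.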
37 pages, 6251 KB  
Article
Research on Intelligent Path Planning and Management of X-Type Mecanum-Wheeled Mobile Robot Based on Improved Proximal Policy Optimization–Gated Recurrent Unit Model
by Ning An, Songlin Yang and Shihan Kong
Machines 2026, 14(4), 382; https://doi.org/10.3390/machines14040382 - 30 Mar 2026
Viewed by 422
Abstract
To enhance the navigation efficiency and obstacle avoidance capability of omnidirectional mobile robots in unstructured and complex environments, this paper conducts research on intelligent path planning and management for X-type Mecanum-wheeled mobile robots with the improved Proximal Policy Optimization–Gated Recurrent Unit (PPO-GRU) model on the basis of robot kinematics modeling and deep reinforcement learning. First, by performing kinematic modeling of the X-type Mecanum-wheeled chassis and designing a high-dimensional state space along with a multi-factor composite reward function, the agent training environment for the robot–environment interaction control is established, laying the environmental foundation for in-depth research on path planning. Second, based on the construction of a Proximal Policy Optimization (PPO) path planning model, the PPO model is integrated with Gated Recurrent Units (GRUs) to form an improved PPO-GRU path planning model, thereby achieving an end-to-end path planning strategy. Finally, using a self-developed kinematic simulation platform for the X-type Mecanum-wheeled robot, the rationality and robustness of the proposed path planning model are investigated through ablation experiments, comparative experiments, dynamic environment tests, and tests considering key real-world phenomena. The research results indicate that the improved PPO-GRU path planning model increases the path planning success rate to 96%, reduces the average number of collisions by 82.7%, and achieves an average linear velocity reaching 84.5% of the maximum speed set in the environment. While attaining high-precision and robust planning management for autonomous navigation paths, it significantly improves the response speed of the agent’s autonomous navigation path planning. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
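The abstract above builds on kinematic modeling of an X-type Mecanum-wheeled chassis. As an illustration of what that modeling step involves, here is a minimal sketch of the standard inverse-kinematics map from a body twist to the four wheel speeds; the wheel radius and half-dimensions are illustrative defaults, not values from the paper.

```python
def mecanum_wheel_speeds(vx, vy, wz, r=0.05, lx=0.2, ly=0.15):
    """Inverse kinematics of an X-type Mecanum chassis.

    Maps a body twist (vx forward, vy lateral, wz yaw rate) to the
    angular velocities of the four wheels, ordered front-left,
    front-right, rear-left, rear-right. r is the wheel radius;
    lx and ly are half the wheelbase and half the track width.
    """
    k = lx + ly
    w_fl = (vx - vy - k * wz) / r
    w_fr = (vx + vy + k * wz) / r
    w_rl = (vx + vy - k * wz) / r
    w_rr = (vx - vy + k * wz) / r
    return w_fl, w_fr, w_rl, w_rr
```

A quick sanity check: pure forward motion drives all four wheels at the same speed, while pure lateral motion drives the diagonals in opposite senses, which is what lets the Mecanum chassis translate sideways.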
18 pages, 5105 KB  
Article
Lightweight Visual Localization of Steel Surface Defects for Autonomous Inspection Robots Based on Improved YOLOv10n
by Jinwu Tong, Xin Zhang, Xinyun Lu, Han Cao, Lengtao Yao and Bingbing Gao
Sensors 2026, 26(7), 2132; https://doi.org/10.3390/s26072132 - 30 Mar 2026
Viewed by 525
Abstract
To address the challenges of steel surface defect detection—characterized by fine-grained textures, substantial scale variations, and complex background interference—conventional lightweight detectors often struggle to balance real-time navigation requirements with high-precision spatial localization on mobile inspection platforms. In this work, we propose KDM-YOLO, a lightweight visual localization and detection method built upon YOLOv10n, designed to provide an efficient perception engine for autonomous inspection robots. The proposed approach enhances the baseline through three key perspectives: feature extraction, context modeling, and multi-scale fusion. Specifically, KWConv is introduced to strengthen the representation of fine-grained texture and edge cues; C2f-DRB is employed to enlarge the effective receptive field and improve long-range dependency perception to reduce missed detections; and a multi-scale attention fusion (MSAF) module is inserted before the detection head to adaptively integrate spatial details with semantic context while suppressing redundant background responses. Ablation studies confirm that each module contributes to performance gains, and their combination yields the best overall results. Comparative experiments further demonstrate that KDM-YOLO significantly improves detection performance while retaining a compact model size and high inference speed. Compared with the YOLOv10n baseline, Precision, Recall and mAP@50 are increased to 91.0%, 93.9%, and 95.4%, respectively, with a parameter count of 3.29 M and an inference speed of 155.6 f/s. These results indicate that KDM-YOLO achieves an ideal balance between the accuracy and computational efficiency required for embedded navigation platforms, providing an effective solution for online autonomous inspection and real-time localization of steel surface defects. Full article
(This article belongs to the Special Issue Deep Learning Based Intelligent Fault Diagnosis)
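KDM-YOLO, like most YOLO-family detectors, ultimately prunes overlapping candidate boxes with IoU-based non-maximum suppression before reporting defect locations. The following is a minimal, generic sketch of that post-processing step, not code from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)  # highest-scoring remaining box wins
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thr]
    return keep
```

For example, two heavily overlapping boxes collapse to the higher-scoring one, while a distant box survives untouched.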
22 pages, 4435 KB  
Article
Semantic Mapping in Public Indoor Environments Using Improved Instance Segmentation and Continuous-Frame Dynamic Constraint
by Yumin Lu, Xueyu Feng, Zonghuan Guo, Jianchao Wang, Lin Zhou and Yingcheng Lin
Electronics 2026, 15(7), 1392; https://doi.org/10.3390/electronics15071392 - 26 Mar 2026
Viewed by 436
Abstract
Reliable semantic perception is crucial for service robots operating in complex public indoor environments. However, existing semantic mapping approaches often face the dual challenges of high computational overhead and semantic redundancy in maps. To address these limitations, this paper proposes a low-resource semantic mapping framework based on improved instance segmentation and dynamic constraints from consecutive frames. First, we design the lightweight model MS-YOLO, which adopts MobileNetV4 as its backbone network and incorporates the SHViT neck module, effectively optimizing the balance between detection accuracy and computational cost. Second, we propose a consecutive-frame dynamic constraint method that eliminates redundant object annotations through consecutive-frame stability verification. Experimental results on both fused and custom datasets demonstrate that, compared to YOLOv8n-seg, MS-YOLO achieves improvements in accuracy, recall, and mAP@0.5, while reducing the number of parameters by 11.7% and floating-point operations (FLOPs) by 32.2%. Furthermore, compared to YOLOv11n-seg and YOLOv5n-seg, its FLOPs are reduced by 17.2% and 25.5%, respectively. Finally, the successful deployment and field validation of this system on the Jetson Orin NX platform demonstrate its real-time capability and engineering practicality for edge computing in public indoor service robots. Full article
(This article belongs to the Section Artificial Intelligence)
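The consecutive-frame dynamic constraint admits an object annotation into the map only after it has been observed stably across several frames. A toy sketch of that verification logic follows; the class name, thresholds, and distance test are all hypothetical, chosen only to illustrate the idea:

```python
class FrameStabilityFilter:
    """Admit an object annotation only after it is observed in
    `min_frames` consecutive frames within `dist_thr` metres of its
    previous position -- a simple stand-in for consecutive-frame
    stability verification (illustrative, not the paper's code)."""

    def __init__(self, min_frames=3, dist_thr=0.3):
        self.min_frames = min_frames
        self.dist_thr = dist_thr
        self.tracks = {}  # label -> (last_position, consecutive streak)

    def update(self, label, position):
        """Record one observation; return True once the label is stable."""
        last = self.tracks.get(label)
        if last is not None and self._close(last[0], position):
            streak = last[1] + 1  # consistent re-observation
        else:
            streak = 1            # new object or inconsistent position
        self.tracks[label] = (position, streak)
        return streak >= self.min_frames

    def _close(self, p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 <= self.dist_thr
```

A detection that jumps position (a moving person, a segmentation flicker) keeps resetting its streak and never reaches the map, which is how this kind of constraint suppresses redundant annotations.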
21 pages, 6191 KB  
Article
Mechanically Decoupled Rolling and Turning Design for Pendulum-Driven Unmanned Spherical Robots
by Jiahao Wu, Shiva Raut, Qiqi Xia and Zelin Huang
Actuators 2026, 15(4), 181; https://doi.org/10.3390/act15040181 - 26 Mar 2026
Viewed by 453
Abstract
Unmanned spherical robots are autonomous mobile platforms with a fully enclosed spherical shell, providing high stability and strong adaptability to complex terrains. However, existing pendulum or flywheel spherical robots often suffer from limited maneuverability, whereas complex hybrid actuation schemes tend to compromise system stability. To address these issues, this study proposes an improved pendulum-driven spherical robot with a mechanically decoupled actuation design, integrating a pendulum system and a circular gear rack turning mechanism. This design enables smooth linear rolling as well as rapid in-place rotation, significantly enhancing maneuverability and motion flexibility on complex terrains. A dynamic model of the spherical robot is established to describe the decoupled actuation mechanism, and a fuzzy proportional–derivative (PD) control strategy is designed for rolling and steering control. Simulation and prototype experiments were conducted to evaluate trajectory tracking, steering response, and terrain adaptability. The results demonstrate that the proposed spherical robot achieves path following and in-place turning with robust mobility. Full article
(This article belongs to the Section Actuators for Robotics)
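Fuzzy PD control of the kind mentioned above typically blends between gain sets according to fuzzy memberships over the tracking error. A minimal gain-scheduled sketch under that reading; the membership shapes and gain values are illustrative, not the paper's:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

class FuzzyPD:
    """PD controller whose gains are blended between a 'small error'
    rule and a 'large error' rule via triangular memberships."""

    def __init__(self, kp=(2.0, 6.0), kd=(0.5, 1.5), e_max=1.0):
        self.kp_small, self.kp_large = kp
        self.kd_small, self.kd_large = kd
        self.e_max = e_max   # error magnitude treated as "fully large"
        self.prev_e = 0.0

    def step(self, error, dt):
        """One control update; returns the blended PD command."""
        e = min(abs(error), self.e_max) / self.e_max  # normalise to [0, 1]
        mu_small = tri(e, -1.0, 0.0, 1.0)  # peaks at zero error
        mu_large = tri(e, 0.0, 1.0, 2.0)   # peaks at saturated error
        s = mu_small + mu_large            # > 0 for any e in [0, 1]
        kp = (mu_small * self.kp_small + mu_large * self.kp_large) / s
        kd = (mu_small * self.kd_small + mu_large * self.kd_large) / s
        de = (error - self.prev_e) / dt
        self.prev_e = error
        return kp * error + kd * de
```

Small errors thus see gentle gains (limiting pendulum overshoot), while large errors see stiff gains for fast convergence, which is the usual motivation for fuzzy gain scheduling on a pendulum-driven platform.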