Search Results (63)

Search Parameters:
Keywords = unstructured environment perception

35 pages, 8987 KB  
Article
A Method for UAV Path Planning Based on G-MAPONet Reinforcement Learning
by Jian Deng, Honghai Zhang, Yuetan Zhang, Mingzhuang Hua and Yaru Sun
Drones 2025, 9(12), 871; https://doi.org/10.3390/drones9120871 - 17 Dec 2025
Viewed by 265
Abstract
To address the issues of efficiency and robustness in UAV trajectory planning under complex environments, this paper proposes a Graph Multi-Head Attention Policy Optimization Network (G-MAPONet) algorithm that integrates Graph Attention (GAT), Multi-Head Attention (MHA), and Group Relative Policy Optimization (GRPO). The algorithm adopts a three-layer architecture in which a GAT layer handles local feature perception, MHA performs global semantic reasoning, and GRPO drives policy optimization, achieving dynamic graph convolution quantization and globally adaptive, parallel, decoupled policy adjustment. Comparative experiments in multi-dimensional spatial environments demonstrate that the combined GAT-MHA mechanism is significantly superior to single attention mechanisms, verifying the efficient representation capability of the dual-layer hybrid attention mechanism in capturing environmental features. Ablation experiments over the GAT, MHA, and GRPO components further confirm that the dual-layer fusion of GAT and MHA yields the larger improvement. Finally, comparisons with traditional reinforcement learning algorithms across multiple performance metrics show that G-MAPONet reduces the number of convergence episodes (NCE) by an average of more than 19.14%, increases the average reward (AR) by over 16.20%, and successfully completes all dynamic path planning tasks (PPTC); its reward values and obstacle avoidance success rate are also significantly higher than those of the other algorithms. Compared with the baseline APF algorithm, its reward value improves by 8.66%, and its obstacle avoidance success rate is likewise enhanced, further verifying the effectiveness of the improved G-MAPONet algorithm. In summary, through the dual-layer complementary operation of GAT and MHA, the G-MAPONet algorithm overcomes the bottlenecks of traditional dynamic environment modeling and multi-scale optimization, enhances the decision-making capability of UAVs in unstructured environments, and provides a new technical solution for trajectory planning in intelligent logistics and distribution.
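As an illustration of the three-layer layout described in this abstract, here is a minimal PyTorch sketch that chains a single-head graph attention layer into multi-head attention and a policy head. All dimensions, the toy graph, and the policy-head design are assumptions made for illustration; this is not the authors' network, and the GRPO training loop is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """Single-head graph attention layer (standard GAT formulation)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (N, in_dim) node features, adj: (N, N) adjacency with self-loops
        h = self.W(x)
        N = h.size(0)
        hi = h.unsqueeze(1).expand(N, N, -1)
        hj = h.unsqueeze(0).expand(N, N, -1)
        e = F.leaky_relu(self.a(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))   # attend to neighbours only
        alpha = torch.softmax(e, dim=-1)
        return F.elu(alpha @ h)

class PolicyNet(nn.Module):
    """GAT (local) -> MHA (global) -> action logits; sizes are guesses."""
    def __init__(self, in_dim=8, hid=64, n_actions=6, heads=4):
        super().__init__()
        self.gat = GATLayer(in_dim, hid)
        self.mha = nn.MultiheadAttention(hid, heads, batch_first=True)
        self.head = nn.Linear(hid, n_actions)

    def forward(self, x, adj):
        h = self.gat(x, adj)                     # local structural features
        g, _ = self.mha(h.unsqueeze(0), h.unsqueeze(0), h.unsqueeze(0))
        return self.head(g.squeeze(0))           # per-node action logits

nodes = torch.randn(5, 8)    # 5 toy graph nodes (e.g. waypoints/obstacles)
adj = torch.ones(5, 5)       # fully connected toy graph
print(PolicyNet()(nodes, adj).shape)   # torch.Size([5, 6])
```

In a GRPO-style setup, these logits would be sampled per rollout group and each trajectory's advantage computed relative to the group's mean reward; that optimization step is outside this sketch.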

27 pages, 6674 KB  
Article
Design and Development of an Autonomous Mobile Robot for Unstructured Indoor Environments
by Ameur Gargouri, Mohamed Karray, Bechir Zalila and Mohamed Ksantini
Machines 2025, 13(11), 1044; https://doi.org/10.3390/machines13111044 - 12 Nov 2025
Viewed by 1954
Abstract
This research work presents the design and development of a cost-effective autonomous mobile robot for locating misplaced objects within unstructured indoor environments. For perception and localization, the proposed system integrates a hardware architecture equipped with LiDAR, an inertial measurement unit (IMU), and wheel encoders. The system also includes an ROS2-based software stack enabling autonomous navigation via the NAV2 framework and Adaptive Monte Carlo Localization (AMCL). For real-time object detection, a lightweight YOLO11n model is developed and deployed on a Raspberry Pi 4, enabling the robot to identify common household items. The robot’s motion control is achieved by a fuzzy logic-enhanced PID controller that dynamically adjusts gain values based on navigation conditions. Remote supervision, task management, and real-time status monitoring are provided by a user-friendly Flutter-based mobile application. Simulations and real-world experiments demonstrate the robustness, modularity, and responsiveness of the robot in dynamic environments. The robot achieves a 3 cm localization error and a 95% task execution success rate.
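The fuzzy logic-enhanced PID idea can be sketched as gain scheduling: membership functions over the tracking error blend between conservative and aggressive proportional gains. A minimal sketch follows, assuming triangular memberships; the breakpoints, gains, and error scale are invented and are not the paper's tuned values.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_kp(err, kp_small=0.8, kp_large=2.5):
    """Blend Kp between presets by fuzzy error magnitude (hypothetical sets)."""
    e = abs(err)
    mu_small = tri(e, -0.5, 0.0, 0.5)   # "error is small"
    mu_large = tri(e, 0.2, 1.0, 1.8)    # "error is large"
    w = mu_small + mu_large
    if w == 0.0:                        # outside both sets: saturate
        return kp_large
    return (mu_small * kp_small + mu_large * kp_large) / w

class FuzzyPID:
    def __init__(self, ki=0.1, kd=0.05, dt=0.05):
        self.ki, self.kd, self.dt = ki, kd, dt
        self.integral, self.prev = 0.0, 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev) / self.dt
        self.prev = err
        return fuzzy_kp(err) * err + self.ki * self.integral + self.kd * deriv

pid = FuzzyPID()
print(pid.step(0.3))   # modest error -> blended proportional gain
print(pid.step(1.2))   # large error -> stiffer proportional action
```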

20 pages, 8109 KB  
Article
Development of an Orchard Inspection Robot: A ROS-Based LiDAR-SLAM System with Hybrid A*-DWA Navigation
by Jiwei Qu, Yanqiu Gu, Zhinuo Qiu, Kangquan Guo and Qingzhen Zhu
Sensors 2025, 25(21), 6662; https://doi.org/10.3390/s25216662 - 1 Nov 2025
Viewed by 1140
Abstract
The application of orchard inspection robots has become increasingly widespread. However, achieving autonomous navigation in unstructured environments continues to present significant challenges. This study investigates the Simultaneous Localization and Mapping (SLAM) navigation system of an orchard inspection robot and evaluates its performance using Light Detection and Ranging (LiDAR) technology. A mobile robot that tightly integrates multiple sensors is developed and implemented. The integration of LiDAR and Inertial Measurement Units (IMUs) enables the perception of environmental information. Moreover, the robot’s kinematic model is established, and coordinate transformations are performed based on the Unified Robot Description Format (URDF). The URDF facilitates the visualization of robot features within the Robot Operating System (ROS). ROS navigation nodes are configured for path planning, where an improved A* algorithm, combined with the Dynamic Window Approach (DWA), is introduced to achieve efficient global and local path planning. Comparison of the simulation results with classical algorithms demonstrates that the implemented algorithm exhibits superior search efficiency and smoothness. The robot’s navigation performance is rigorously tested, focusing on navigation accuracy and obstacle avoidance capability. Results show that, during temporary stops at waypoints, the robot exhibits an average lateral deviation of 0.163 m and a longitudinal deviation of 0.282 m from the target point. The average braking time and startup time of the robot at the four waypoints are 0.46 s and 0.64 s, respectively. In obstacle avoidance tests, optimal performance is observed with an expansion radius of 0.4 m across various obstacle sizes. The proposed combined method achieves efficient and stable global and local path planning, serving as a reference for future applications of mobile inspection robots in autonomous navigation.
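For context on the hybrid planner, a plain 8-connected grid A* (the kind of baseline the paper improves on) can be written compactly. The grid, unit costs, and Euclidean heuristic below are generic choices, not the article's orchard-specific tuning, and the DWA local planner is not shown.

```python
import heapq, math
from itertools import count

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col)."""
    h = lambda p: math.hypot(p[0] - goal[0], p[1] - goal[1])
    tie = count()   # tie-breaker so the heap never compares parent entries
    open_set = [(h(start), next(tie), 0.0, start, None)]
    came, gbest = {}, {start: 0.0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came:                    # already expanded at lower cost
            continue
        came[cur] = parent
        if cur == goal:                    # walk parents back to the start
            path = [cur]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nxt = (cur[0] + dx, cur[1] + dy)
                if (dx, dy) == (0, 0) or nxt in came:
                    continue
                if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                    continue
                if grid[nxt[0]][nxt[1]]:   # blocked cell
                    continue
                ng = g + math.hypot(dx, dy)
                if ng < gbest.get(nxt, float("inf")):
                    gbest[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None                            # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```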

28 pages, 2676 KB  
Article
Multi-Aspect Sentiment Classification of Arabic Tourism Reviews Using BERT and Classical Machine Learning
by Samar Zaid, Amal Hamed Alharbi and Halima Samra
Data 2025, 10(11), 168; https://doi.org/10.3390/data10110168 - 23 Oct 2025
Viewed by 1104
Abstract
Understanding visitor sentiment is essential for developing effective tourism strategies, particularly as Google Maps reviews have become a key channel for public feedback on tourist attractions. Yet the unstructured format and dialectal diversity of Arabic reviews pose significant challenges for extracting actionable insights at scale. This study evaluates the performance of traditional machine learning and transformer-based models for aspect-based sentiment analysis (ABSA) on Arabic Google Maps reviews of tourist sites across Saudi Arabia. A manually annotated dataset of more than 3500 reviews was constructed to assess model effectiveness across six tourism-related aspects: price, cleanliness, facilities, service, environment, and overall experience. Experimental results demonstrate that multi-head BERT architectures, particularly AraBERT, consistently outperform traditional classifiers in identifying aspect-level sentiment. AraBERT achieved an F1-score of 0.97 for the cleanliness aspect, compared with 0.91 for the best-performing classical model (LinearSVC), a substantial improvement. The proposed ABSA framework facilitates automated, fine-grained analysis of visitor perceptions, enabling data-driven decision-making for tourism authorities and contributing to the strategic objectives of Saudi Vision 2030.
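A multi-head ABSA model of the kind evaluated here is typically a shared encoder with one classification head per aspect. The sketch below assumes the aubmindlab/bert-base-arabertv2 checkpoint, three sentiment classes, and the six aspect names from the abstract; none of this is taken from the authors' code.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

ASPECTS = ["price", "cleanliness", "facilities", "service", "environment", "overall"]

class MultiHeadABSA(nn.Module):
    """Shared AraBERT encoder with one 3-class sentiment head per aspect
    (negative / neutral / positive) -- an illustrative layout, not the
    authors' implementation."""
    def __init__(self, name="aubmindlab/bert-base-arabertv2"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        dim = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict({a: nn.Linear(dim, 3) for a in ASPECTS})

    def forward(self, **inputs):
        cls = self.encoder(**inputs).last_hidden_state[:, 0]   # [CLS] token
        return {a: head(cls) for a, head in self.heads.items()}

tok = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv2")
model = MultiHeadABSA()
batch = tok(["الخدمة ممتازة والمكان نظيف"], return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**batch)
print({a: int(l.argmax()) for a, l in logits.items()})  # untrained: arbitrary
```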

21 pages, 5019 KB  
Article
Real-Time Parking Space Detection Based on Deep Learning and Panoramic Images
by Wu Wei, Hongyang Chen, Jiayuan Gong, Kai Che, Wenbo Ren and Bin Zhang
Sensors 2025, 25(20), 6449; https://doi.org/10.3390/s25206449 - 18 Oct 2025
Viewed by 1835
Abstract
In the domain of automatic parking systems, parking space detection and localization are fundamental challenges: as a core research focus within intelligent automatic parking, they constitute the essential prerequisite for fully autonomous parking, and accurate, effective detection of parking spaces remains the core problem to be solved. In this study, building upon existing public parking space datasets, a comprehensive panoramic parking space dataset named PSEX (Parking Slot Extended), with complex environmental diversity, was constructed by applying GAN (Generative Adversarial Network)-based image style transfer. Meanwhile, an improved algorithm based on PP-YOLOE (PaddlePaddle YOLOE) detects the state (free or occupied) and angle (T-shaped or L-shaped) of each parking space in real time. To handle the numerous small parking-space labels, the ResSpp module is replaced by a ResSimSppf module, the SimSppf structure is introduced at the neck, SiLU is replaced by ReLU in the basic CBS (Conv-BN-SiLU) structure, and an auxiliary detection head is added at the prediction head. Experimental results show that the proposed SimSppf_mepre-Yoloe model achieves an average improvement of 4.5% in mAP50 and 2.95% in mAP50:95 over the baseline PP-YOLOE across various parking space detection tasks. In terms of efficiency, the model maintains inference latency comparable to the baseline, reaching up to 33.7 FPS on the Jetson AGX Xavier platform under TensorRT optimization, and the improved enhancement algorithm greatly enriches the diversity of parking space data. These results demonstrate that the proposed model achieves a better balance between detection accuracy and real-time performance, making it suitable for deployment in intelligent vehicle and robotic perception systems.
(This article belongs to the Special Issue Robot Swarm Collaboration in the Unstructured Environment)
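For readers unfamiliar with the modules named above, a SimSppf-style block is essentially the serial SPPF pyramid with ReLU in place of SiLU inside the Conv-BN-activation unit. The PyTorch sketch below shows one common formulation; the channel split and pooling kernel are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ConvBNReLU(nn.Module):
    """CBS-style block with SiLU swapped for ReLU, as the abstract describes."""
    def __init__(self, c_in, c_out, k=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class SimSPPF(nn.Module):
    """Serial spatial pyramid pooling (SPPF) with ReLU activations."""
    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        c_mid = c_in // 2
        self.cv1 = ConvBNReLU(c_in, c_mid)
        self.pool = nn.MaxPool2d(k, stride=1, padding=k // 2)
        self.cv2 = ConvBNReLU(c_mid * 4, c_out)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)          # three serial poolings emulate
        y2 = self.pool(y1)         # progressively larger receptive fields
        y3 = self.pool(y2)
        return self.cv2(torch.cat([x, y1, y2, y3], dim=1))

feat = torch.randn(1, 256, 20, 20)       # a neck feature map
print(SimSPPF(256, 256)(feat).shape)      # torch.Size([1, 256, 20, 20])
```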

19 pages, 4834 KB  
Article
Continuous Picking Path Planning Based on Lightweight Marigold Corollas Recognition in the Field
by Baojian Ma, Zhenghao Wu, Yun Ge, Bangbang Chen, Jijing Lin, He Zhang and Hao Xia
Biomimetics 2025, 10(10), 648; https://doi.org/10.3390/biomimetics10100648 - 26 Sep 2025
Cited by 1 | Viewed by 539
Abstract
This study addresses the core challenges of precise marigold corolla recognition and efficient continuous path planning under complex natural conditions (strong illumination, occlusion, adhesion) by proposing an integrated lightweight visual recognition and real-time path planning framework. We introduce MPD-YOLO, an optimized model based on YOLOv11n, incorporating (1) a Multi-scale Information Enhancement Module (MSEE) to boost feature extraction; (2) structured pruning for significant model compression (final size: 2.1 MB, 39.6% of the original); and (3) knowledge distillation to recover the accuracy lost in pruning. The resulting model achieves high precision (P: 89.8%, mAP@0.5: 95.1%) at reduced computational load (3.2 GFLOPs) while demonstrating enhanced robustness in challenging scenarios; recall increases significantly, by 6.8%, versus YOLOv11n. Leveraging these recognition outputs, an adaptive ant colony algorithm featuring dynamic parameter adjustment and an improved pheromone strategy reduces average path planning time to 2.2 s, a 68.6% speedup over benchmark methods. This integrated approach significantly enhances perception accuracy and operational efficiency for automated marigold harvesting in unstructured environments, providing robust technical support for continuous automated operations.
(This article belongs to the Special Issue Biomimicry for Optimization, Control, and Automation: 3rd Edition)
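To make the path-planning step concrete, the sketch below runs a textbook ant-colony tour over a handful of detected corolla coordinates. The paper's adaptive parameter adjustment and improved pheromone strategy are not reproduced; all parameters and points are invented.

```python
import math, random

def aco_tour(pts, ants=20, iters=50, alpha=1.0, beta=3.0, rho=0.5, Q=1.0):
    """Textbook ant-colony optimization over a set of picking targets."""
    n = len(pts)
    d = [[math.dist(a, b) or 1e-9 for b in pts] for a in pts]
    tau = [[1.0] * n for _ in range(n)]          # pheromone matrix
    best, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            tour, seen = [0], {0}
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in seen]
                w = [tau[i][j] ** alpha * (1 / d[i][j]) ** beta for j in cand]
                j = random.choices(cand, weights=w)[0]   # probabilistic step
                tour.append(j); seen.add(j)
            length = sum(d[tour[k]][tour[k + 1]] for k in range(n - 1))
            if length < best_len:
                best, best_len = tour, length
        tau = [[(1 - rho) * t for t in row] for row in tau]   # evaporation
        for k in range(n - 1):                                # reinforce best tour
            a, b = best[k], best[k + 1]
            tau[a][b] += Q / best_len; tau[b][a] += Q / best_len
    return best, best_len

corollas = [(0, 0), (1.2, 0.4), (0.5, 1.6), (2.1, 1.1), (1.8, 2.3)]  # toy detections
print(aco_tour(corollas))
```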

20 pages, 5335 KB  
Article
LiGaussOcc: Fully Self-Supervised 3D Semantic Occupancy Prediction from LiDAR via Gaussian Splatting
by Zhiqiang Wei, Tao Huang and Fengdeng Zhang
Sensors 2025, 25(18), 5889; https://doi.org/10.3390/s25185889 - 20 Sep 2025
Viewed by 1399
Abstract
Accurate 3D semantic occupancy perception is critical for autonomous driving, enabling robust navigation in unstructured environments. While vision-based methods suffer from depth inaccuracies and lighting sensitivity, LiDAR-based approaches face challenges due to sparse data and dependence on expensive manual annotations. This work proposes LiGaussOcc, a novel self-supervised framework for dense LiDAR-based 3D semantic occupancy prediction. Our method first encodes LiDAR point clouds into voxel features and addresses sparsity via an Empty Voxel Inpainting (EVI) module, refined by an Adaptive Feature Fusion (AFF) module. During training, a Gaussian Primitive from Voxels (GPV) module generates parameters for 3D Gaussian Splatting, enabling efficient rendering of 2D depth and semantic maps. Supervision is achieved through photometric consistency across adjacent camera views and pseudo-labels from vision–language models, eliminating manual 3D annotations. Evaluated on the nuScenes-OpenOccupancy benchmark, LiGaussOcc achieved competitive performance, with 30.4% Intersection over Union (IoU) and 14.1% mean Intersection over Union (mIoU), reaching 91.6% of the performance of the fully supervised LiDAR-based L-CONet while completely eliminating costly and labor-intensive manual 3D annotation. It excelled particularly in static environmental classes, such as drivable surfaces and man-made structures. This work presents a scalable, annotation-free solution for LiDAR-based 3D semantic occupancy perception.
(This article belongs to the Section Radar Sensors)
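At its simplest, the Empty Voxel Inpainting idea is to let empty voxels borrow features from occupied neighbours. The toy sketch below uses a single masked 3D convolution to that end; the real EVI and AFF modules are certainly more elaborate, and every size and threshold here is invented.

```python
import torch
import torch.nn as nn

class EmptyVoxelInpaint(nn.Module):
    """Toy stand-in for EVI: propagate occupied-voxel features into
    empty cells via one 3D convolution, keeping originals where occupied."""
    def __init__(self, c=16):
        super().__init__()
        self.conv = nn.Conv3d(c, c, kernel_size=3, padding=1)

    def forward(self, feat, occ):
        # feat: (B, C, X, Y, Z) voxel features; occ: (B, 1, X, Y, Z) 0/1 mask
        filled = self.conv(feat * occ)                 # neighbour aggregation
        return torch.where(occ.bool(), feat, filled)   # inpaint only empties

feat = torch.randn(1, 16, 8, 8, 4)
occ = (torch.rand(1, 1, 8, 8, 4) > 0.7).float()        # sparse LiDAR occupancy
print(EmptyVoxelInpaint()(feat, occ).shape)            # torch.Size([1, 16, 8, 8, 4])
```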

23 pages, 1660 KB  
Article
Soundtalking: Extending Soundscape Practice Through Long-Term Participant-Led Sound Activities in the Dee Estuary
by Neil Spencer Bruce
Sustainability 2025, 17(17), 7904; https://doi.org/10.3390/su17177904 - 2 Sep 2025
Cited by 1 | Viewed by 1089
Abstract
This study explores the practice of “soundtalking”, a novel method of participant-led sound practice, across the Dee Estuary in the UK. Over the course of twelve months, the Our Dee Estuary Project facilitated monthly meetings where participants engaged in sound workshops, in-depth discussions, and sound-making activities, with the aim of fostering a deeper connection with both their local and sonic environments. This longitudinal practice-based research study created an environment of sonic learning and listening development, documenting how participants’ interactions and narratives both shape and are shaped by the estuarial environment, its soundscape, and their sense of place. Participant-led conversations formed the basis of the methodology, providing rich qualitative data on how individuals perceive, interpret, and interact with their surroundings and on the impact that the soundscape has on the individual. The regular, unstructured discussions revealed the intrinsic value of soundscapes in participants’ lives, emphasising themes of memory, reflection, place attachment, environmental awareness, and well-being. The collaborative nature of the project allowed for the co-creation of a film and a radio soundscape, both of which serve as significant outputs encapsulating the auditory and emotional essence of the estuary. The study’s initial findings indicate that soundtalking not only enhances participants’ auditory perception but also fosters a sense of community and belonging. The regularity of monthly meetings facilitated the development of a shared acoustic vocabulary and experience among participants, which in turn enriched their collective and individual experiences of the estuary. Soundtalking is proposed as an additional tool in the study of soundscapes, complementing and extending more commonly implemented methods such as soundwalking and soundsitting; it demonstrates the efficacy of longitudinal, participant-led approaches in capturing the dynamic, lived experience of soundscapes and their associated environments, in contrast to methods that create only fleeting, short-term engagements. In conclusion, the Our Dee Estuary Project demonstrates the transformative potential of soundtalking in deepening our understanding of human–environment interactions and shows that health and well-being benefits arise from the practice. Beyond this, the project has produced a film and a radio sound piece that not only document but also celebrate the intricate, evolving relationship between the participants and the estuarine soundscape, offering valuable insights for future soundscape research and community engagement initiatives.
(This article belongs to the Special Issue Urban Noise Control, Public Health and Sustainable Cities)

16 pages, 11849 KB  
Article
A Modular Soft Gripper with Embedded Force Sensing and an Iris-Type Cutting Mechanism for Harvesting Medium-Sized Crops
by Eduardo Navas, Kai Blanco, Daniel Rodríguez-Nieto and Roemi Fernández
Actuators 2025, 14(9), 432; https://doi.org/10.3390/act14090432 - 2 Sep 2025
Cited by 1 | Viewed by 1700
Abstract
Agriculture is facing increasing challenges due to labor shortages, rising productivity demands, and the need to operate in unstructured environments. Robotics, particularly soft robotics, offers promising solutions for automating delicate tasks such as fruit harvesting. While numerous soft grippers have been proposed, most focus on grasping and lack the capability to detach fruits with rigid peduncles, which require cutting. This paper presents a novel modular hexagonal soft gripper that integrates soft pneumatic actuators, embedded mechano-optical force sensors for real-time contact monitoring, and a self-centering iris-type cutting mechanism. The entire system is 3D-printed, enabling low-cost fabrication and rapid customization. Experimental validation demonstrates successful harvesting of bell peppers and identifies cutting limitations in tougher crops such as aubergine, primarily due to material constraints in the actuation system. This dual-capability design contributes to the development of multifunctional robotic harvesters capable of adapting to a wide range of fruit types with minimal requirements for perception and mechanical reconfiguration.
(This article belongs to the Special Issue Soft Actuators and Robotics—2nd Edition)

22 pages, 3513 KB  
Article
Tightly-Coupled Air-Ground Collaborative System for Autonomous UGV Navigation in GPS-Denied Environments
by Jiacheng Deng, Jierui Liu and Jiangping Hu
Drones 2025, 9(9), 614; https://doi.org/10.3390/drones9090614 - 31 Aug 2025
Viewed by 1582
Abstract
Autonomous navigation for unmanned vehicles in complex, unstructured environments remains challenging, especially in GPS-denied or obstacle-dense scenarios, limiting their practical deployment in logistics, inspection, and emergency response applications. To overcome these limitations, this paper presents a tightly integrated air-ground collaborative system comprising three key components: (1) an aerial perception module employing a YOLOv8-based vision system onboard the UAV to generate real-time global obstacle maps; (2) a low-latency communication module utilizing FAST DDS middleware for reliable air-ground data transmission; and (3) a ground navigation module implementing an A* algorithm for optimal path planning coupled with closed-loop control for precise trajectory execution. The complete system was physically implemented using cost-effective hardware and experimentally validated in cluttered environments. Results demonstrated successful UGV autonomous navigation and obstacle avoidance relying exclusively on UAV-provided environmental data. The proposed framework offers a practical, economical solution for enabling robust UGV operations in challenging real-world conditions, with significant potential for diverse industrial applications.
(This article belongs to the Section Artificial Intelligence in Drones (AID))
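The ground navigation module couples a global A* path with closed-loop execution; a minimal stand-in for the execution half is a unicycle-model waypoint tracker like the one below. The gains, tolerances, and kinematic model are our assumptions, not the paper's controller.

```python
import math

def follow(waypoints, pose, v_max=0.4, k_ang=1.5, tol=0.05, dt=0.1):
    """Drive a unicycle-model UGV through waypoints with proportional
    heading control; forward speed drops while the robot is misaligned."""
    x, y, th = pose
    for wx, wy in waypoints:
        for _ in range(5000):                   # bounded loop, in case of orbiting
            if math.hypot(wx - x, wy - y) <= tol:
                break
            err = math.atan2(wy - y, wx - x) - th
            err = math.atan2(math.sin(err), math.cos(err))   # wrap to [-pi, pi]
            v = v_max * max(0.0, math.cos(err)) # slow down when facing away
            x += v * math.cos(th) * dt          # integrate unicycle kinematics
            y += v * math.sin(th) * dt
            th += k_ang * err * dt
    return x, y, th

# e.g. waypoints handed over from the A* global planner
print(follow([(1.0, 0.0), (1.0, 1.0)], pose=(0.0, 0.0, 0.0)))
```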

21 pages, 812 KB  
Review
A Frontier Review of Semantic SLAM Technologies Applied to the Open World
by Le Miao, Wen Liu and Zhongliang Deng
Sensors 2025, 25(16), 4994; https://doi.org/10.3390/s25164994 - 12 Aug 2025
Cited by 1 | Viewed by 3640
Abstract
With the growing demand for autonomous robotic operations in complex and unstructured environments, traditional semantic SLAM systems—which rely on closed-set semantic vocabularies—are increasingly limited in their ability to robustly perceive and understand diverse and dynamic scenes. This paper focuses on the paradigm shift toward open-world semantic scene understanding in SLAM and provides a comprehensive review of the technological evolution from closed-world assumptions to open-world frameworks. We survey the current state of research in open-world semantic SLAM, highlighting key challenges and frontiers. In particular, we conduct an in-depth analysis of three critical areas: zero-shot open-vocabulary understanding, dynamic semantic expansion, and multimodal semantic fusion. These capabilities are examined for their crucial roles in unknown class identification, incremental semantic updates, and multisensor perceptual integration. Our main contribution is presenting the first systematic algorithmic benchmarking and performance comparison of representative open-world semantic SLAM systems, revealing the potential of these core techniques to enhance semantic understanding in complex environments. Finally, we propose several promising directions for future research, including lightweight model deployment, real-time performance optimization, and collaborative multimodal perception, and offer a systematic reference and methodological guidance for continued advancements in this emerging field.
(This article belongs to the Section Sensors and Robotics)

22 pages, 11043 KB  
Article
Digital Twin-Enabled Adaptive Robotics: Leveraging Large Language Models in Isaac Sim for Unstructured Environments
by Sanjay Nambiar, Rahul Chiramel Paul, Oscar Chigozie Ikechukwu, Marie Jonsson and Mehdi Tarkian
Machines 2025, 13(7), 620; https://doi.org/10.3390/machines13070620 - 17 Jul 2025
Cited by 2 | Viewed by 5280
Abstract
As industrial automation evolves towards human-centric, adaptable solutions, collaborative robots must overcome challenges in unstructured, dynamic environments. This paper extends our previous work on developing a digital shadow for industrial robots by introducing a comprehensive framework that bridges the gap between physical systems and their virtual counterparts. The proposed framework advances toward a fully functional digital twin by integrating real-time perception and intuitive human–robot interaction capabilities. The framework is applied to a hospital test lab scenario, where a YuMi robot automates the sorting of microscope slides. The system incorporates a RealSense D435i depth camera for environment perception, Isaac Sim for virtual environment synchronization, and a locally hosted large language model (Mistral 7B) for interpreting user voice commands. These components work together to achieve bi-directional synchronization between the physical and digital environments. The framework was evaluated through 20 test runs under varying conditions. A validation study measured the performance of the perception module, simulation, and language interface, with a 60% overall success rate. Additionally, synchronization accuracy between the simulated and physical robot joint movements reached 98.11%, demonstrating strong alignment between the digital and physical systems. By combining local LLM processing, real-time vision, and robot simulation, the approach enables untrained users to interact with collaborative robots in dynamic settings. The results highlight its potential for improving flexibility and usability in industrial automation.
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)
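A voice-command interface of this kind typically asks the local LLM to map a transcribed utterance to a structured robot command. In the sketch below, llm_generate is a placeholder for however Mistral 7B is hosted (for example, an HTTP endpoint), and the JSON schema is our assumption, not the paper's.

```python
import json

SYSTEM = (
    "Convert the user's request into JSON with keys "
    '"action" (pick|place|sort), "object", and "target_tray". '
    "Reply with JSON only."
)

def llm_generate(prompt: str) -> str:
    # Stub: replace with a real call to the locally hosted model.
    return '{"action": "sort", "object": "microscope slide", "target_tray": "B"}'

def parse_command(utterance: str) -> dict:
    """Prompt the LLM, parse its JSON reply, and validate the action."""
    raw = llm_generate(f"{SYSTEM}\nUser: {utterance}")
    cmd = json.loads(raw)
    assert cmd["action"] in {"pick", "place", "sort"}, "unknown action"
    return cmd

print(parse_command("Please put the stained slides into tray B"))
```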

18 pages, 12097 KB  
Article
Adaptive Outdoor Cleaning Robot with Real-Time Terrain Perception and Fuzzy Control
by Raul Fernando Garcia Azcarate, Akhil Jayadeep, Aung Kyaw Zin, James Wei Shung Lee, M. A. Viraj J. Muthugala and Mohan Rajesh Elara
Mathematics 2025, 13(14), 2245; https://doi.org/10.3390/math13142245 - 10 Jul 2025
Cited by 2 | Viewed by 1811
Abstract
Outdoor cleaning robots must operate reliably across diverse and unstructured surfaces, yet many existing systems lack the adaptability to handle terrain variability. This paper proposes a terrain-aware cleaning framework that dynamically adjusts robot behavior based on real-time surface classification and slope estimation. A 128-channel LiDAR sensor captures signal intensity images, which are processed by a ResNet-18 convolutional neural network to classify floor types as wood, smooth, or rough. Simultaneously, pitch angles from an onboard IMU detect terrain inclination. These inputs are transformed into fuzzy sets and evaluated using a Mamdani-type fuzzy inference system. The controller adjusts brush height, brush speed, and robot velocity through 81 rules derived from 48 structured cleaning experiments across varying terrain and slopes. Validation was conducted in low-light (night-time) conditions, leveraging LiDAR’s lighting-invariant capabilities. Field trials confirm that the robot responds effectively to environmental conditions, such as reducing speed on slopes or increasing brush pressure on rough surfaces. The integration of deep learning and fuzzy control enables safe, energy-efficient, and adaptive cleaning in complex outdoor environments. This work demonstrates the feasibility and real-world applicability of combining perception and inference-based control in terrain-adaptive robotic systems.
(This article belongs to the Special Issue Research and Applications of Neural Networks and Fuzzy Logic)
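A Mamdani controller like the one described can be hand-rolled in a few lines: fuzzify the input, clip each rule's output set by its firing strength, aggregate, and take the centroid. The single-input sketch below uses IMU pitch only; the paper's 81-rule base over floor class and slope, and its actual membership functions, are not reproduced.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical fuzzy sets (degrees -> speed in m/s) and a 3-rule base.
PITCH = {"flat": (-5, 0, 5), "mild": (2, 8, 14), "steep": (10, 20, 30)}
SPEED = {"slow": (0.0, 0.2, 0.4), "medium": (0.3, 0.6, 0.9), "fast": (0.7, 1.0, 1.3)}
RULES = [("flat", "fast"), ("mild", "medium"), ("steep", "slow")]

def infer_speed(pitch_deg, steps=200):
    """Mamdani inference: fire rules, clip output sets, aggregate, centroid."""
    strengths = {out: tri(pitch_deg, *PITCH[inp]) for inp, out in RULES}
    num = den = 0.0
    for i in range(steps + 1):
        v = 1.3 * i / steps                           # sample the speed universe
        mu = max(min(s, tri(v, *SPEED[o])) for o, s in strengths.items())
        num += mu * v
        den += mu
    return num / den if den else 0.0

for pitch in (0, 8, 18):
    print(f"pitch {pitch:2d} deg -> speed {infer_speed(pitch):.2f} m/s")
```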

23 pages, 16570 KB  
Article
Mobile Ground-Truth 3D Detection Environment for Agricultural Robot Field Testing
by Daniel Barrelmeyer, Stefan Stiene, Jannik Jose and Mario Porrmann
Sensors 2025, 25(13), 4103; https://doi.org/10.3390/s25134103 - 30 Jun 2025
Viewed by 1189
Abstract
Safety and performance validation of autonomous agricultural robots is critically dependent on realistic, mobile test environments that provide high-fidelity ground truth. Existing infrastructures focus on either component-level sensor evaluation in fixed setups or system-level black-box testing under constrained conditions, lacking true mobility, multi-object capability, and the ability to track or detect objects in multiple degrees of freedom (DOFs) in unstructured fields. In this paper, we present a sensor station network designed to overcome these limitations. Our mobile testbed consists of self-powered stations, each equipped with a high-resolution 3D Light Detection and Ranging (LiDAR) sensor, dual-antenna Global Navigation Satellite System (GNSS) receivers, and on-board edge computers. By synchronising over GNSS time and calibrating rigid LiDAR-to-LiDAR transformations, we fuse point clouds from multiple stations into a coherent geometric representation of a real agricultural environment, sampled at up to 20 Hz. We demonstrate the performance of the system in field experiments with an autonomous robot traversing a 26,000 m² area at up to 20 km/h. Our results show continuous and consistent detection of the robot even at the field boundaries. This work enables a comprehensive evaluation of geofencing and environmental perception capabilities, paving the way for safety and performance benchmarking of agricultural robot systems.
(This article belongs to the Collection Sensors and Robotics for Digital Agriculture)
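The core of the multi-station fusion step is applying the calibrated rigid transform to bring one LiDAR's points into a common frame. The numpy sketch below assumes a yaw-plus-translation extrinsic and random stand-in scans; a real calibration would supply the full 6-DOF transform.

```python
import numpy as np

def make_T(yaw_deg, tx, ty, tz):
    """4x4 homogeneous transform: yaw rotation plus translation
    (placeholder for calibrated LiDAR-to-LiDAR extrinsics)."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [tx, ty, tz]
    return T

def fuse(cloud_a, cloud_b, T_ab):
    """Map station B's points into station A's frame and stack them."""
    homo = np.hstack([cloud_b, np.ones((len(cloud_b), 1))])
    b_in_a = (T_ab @ homo.T).T[:, :3]
    return np.vstack([cloud_a, b_in_a])

cloud_a = np.random.rand(100, 3) * 20          # station A scan (metres)
cloud_b = np.random.rand(80, 3) * 20           # station B scan
T_ab = make_T(yaw_deg=90, tx=35.0, ty=0.0, tz=0.2)
print(fuse(cloud_a, cloud_b, T_ab).shape)      # (180, 3)
```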

28 pages, 3163 KB  
Review
Review on Key Technologies for Autonomous Navigation in Field Agricultural Machinery
by Hongxuan Wu, Xinzhong Wang, Xuegeng Chen, Yafei Zhang and Yaowen Zhang
Agriculture 2025, 15(12), 1297; https://doi.org/10.3390/agriculture15121297 - 17 Jun 2025
Cited by 7 | Viewed by 4802
Abstract
Autonomous navigation technology plays a crucial role in advancing smart agriculture by enhancing operational efficiency, optimizing resource utilization, and reducing labor dependency. With the rapid integration of information technology, modern agricultural machinery increasingly incorporates advanced techniques such as high-precision positioning, environmental perception, path planning, and path-tracking control. This paper presents a comprehensive review of recent advancements in these core technologies, systematically analyzing their methodologies, advantages, and application scenarios. Despite notable progress, considerable challenges persist, primarily due to the unstructured nature of farmland, varying terrain conditions, and the demand for robust and adaptive control strategies. This review also discusses current limitations and outlines prospective research directions, aiming to provide valuable insights for the future development and practical deployment of autonomous navigation systems in agricultural machinery. Future research is expected to focus on enhancing multi-modal perception under occlusion and variable lighting conditions, developing terrain-aware path planning algorithms that adapt to irregular field boundaries and elevation changes, and designing robust control strategies that integrate model-based and learning-based approaches to manage disturbances and non-linearity. Furthermore, tighter integration among perception, planning, and control modules will be crucial for improving system-level intelligence and coordination in real-world agricultural environments.
(This article belongs to the Section Agricultural Technology)
