Search Results (14)

Search Parameters:
Keywords = agricultural visual SLAM

27 pages, 5618 KB  
Article
Real-Time Semantic Reconstruction and Semantically Constrained Path Planning for Agricultural Robots in Greenhouses
by Tianrui Quan, Junjie Luo, Shuxin Xie, Xuesong Ren and Yubin Miao
Agronomy 2025, 15(12), 2696; https://doi.org/10.3390/agronomy15122696 - 23 Nov 2025
Viewed by 711
Abstract
To address perception and navigation challenges in precision agriculture caused by GPS signal loss and weakly structured environments in greenhouses, this study proposes an integrated framework for real-time semantic reconstruction and path planning. This framework comprises three core components: First, it introduces a semantic segmentation method tailored for greenhouse environments, enhancing recognition accuracy of key navigable areas such as furrows. Second, it designs a visual-semantic fusion SLAM point cloud reconstruction algorithm and proposes a semantic point cloud rasterization method. Finally, it develops a semantically constrained A* path planning algorithm adapted for semantic maps. We collected a segmentation dataset (1083 images, 4 classes) and a reconstruction dataset from greenhouses in Shanghai. Experiments demonstrate that the segmentation algorithm achieves 95.44% accuracy and 87.93% mIoU, with a 3.9% improvement in furrow category recognition accuracy. The reconstructed point cloud exhibits an average relative error of 7.37% on furrows. In practical greenhouse validation, single-frame point cloud fusion took approximately 0.35 s, while path planning was completed in under 1 s. Feasible paths avoiding crops were successfully generated across three structurally distinct greenhouses. The results demonstrate that this framework can accomplish semantic mapping and path planning stably and in real time, providing effective technical support for digital agriculture.
(This article belongs to the Section Precision and Digital Agriculture)
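
The semantically constrained A* step can be illustrated compactly. Below is a minimal sketch of running A* over a rasterized class-label grid in which crop cells are blocked and furrow cells are preferred; the class ids and traversal costs are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of semantically constrained A* on a rasterized class-label grid.
# Assumed label convention: 0 = furrow, 1 = ground, 2 = crop.
import heapq
import itertools
import numpy as np

TRAVERSAL_COST = {0: 1.0, 1: 3.0}   # furrows are cheap, bare ground is penalized
BLOCKED = {2}                        # crop cells are never expanded

def semantic_astar(grid, start, goal):
    """Plan a path on a 2D semantic grid; returns a list of (row, col) cells."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])   # admissible heuristic
    tie = itertools.count()
    open_set = [(h(start), next(tie), 0.0, start, None)]
    came_from, g_cost = {}, {start: 0.0}
    while open_set:
        _, _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue
        came_from[cell] = parent
        if cell == goal:                       # walk the parents back to the start
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if not (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]):
                continue
            label = int(grid[nxt])
            if label in BLOCKED:
                continue
            ng = g + TRAVERSAL_COST[label]
            if ng < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, cell))
    return None                                # no crop-free path exists

# Toy map: a crop row (label 2) splits the field, so the planner detours around it.
demo = np.zeros((5, 5), dtype=int)
demo[1:4, 2] = 2
print(semantic_astar(demo, (2, 0), (2, 4)))
```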

19 pages, 3577 KB  
Article
Orchard Robot Navigation via an Improved RTAB-Map Algorithm
by Jinxing Niu, Le Zhang, Tao Zhang, Jinpeng Guan and Shuheng Shi
Appl. Sci. 2025, 15(21), 11673; https://doi.org/10.3390/app152111673 - 31 Oct 2025
Viewed by 1236
Abstract
To address issues such as low visual SLAM (Simultaneous Localization and Mapping) positioning accuracy and poor map construction robustness caused by light variations, foliage occlusion, and texture repetition in unstructured orchard environments, this paper proposes an orchard robot navigation method based on an improved RTAB-Map algorithm. By integrating ORB-SLAM3 as the visual odometry module within the RTAB-Map framework, the system achieves significantly improved accuracy and stability in pose estimation. During the post-processing stage of map generation, a height filtering strategy is proposed to effectively filter out point clouds of low-hanging branches, thereby generating raster maps that better meet navigation requirements. The navigation layer integrates the ROS (Robot Operating System) Navigation framework, employing the A* algorithm for global path planning while incorporating the TEB (Timed Elastic Band) algorithm to achieve real-time local obstacle avoidance and dynamic adjustment. Experimental results demonstrate that the improved system exhibits higher mapping consistency in simulated orchard environments, with the odometry’s absolute trajectory error reduced by approximately 45.5%. The robot can reliably plan paths and traverse areas with low-hanging branches. This study provides a solution for autonomous navigation in agricultural settings that balances precision with practicality.
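
The height filtering and rasterization step described above can be sketched in a few lines. The band limits, grid resolution, and frame convention (z up) below are illustrative assumptions rather than the paper's parameters.

```python
# Minimal sketch of height-band filtering a mapped point cloud before projecting it
# to a 2D raster map: points above the robot body (overhanging branches) are dropped
# so they do not appear as obstacles. All thresholds are assumptions, not paper values.
import numpy as np

def height_filter(points, z_ground=0.05, z_robot=0.60):
    """Keep only points that could actually collide with the robot body."""
    z = points[:, 2]
    return points[(z > z_ground) & (z < z_robot)]

def rasterize(points, resolution=0.10, size=(200, 200)):
    """Project filtered points onto a 2D grid: 1 = occupied, 0 = free/unknown."""
    grid = np.zeros(size, dtype=np.uint8)
    ij = np.floor(points[:, :2] / resolution).astype(int) + np.array(size) // 2
    keep = (ij >= 0).all(axis=1) & (ij < np.array(size)).all(axis=1)
    grid[ij[keep, 0], ij[keep, 1]] = 1
    return grid

# The branch at 1.8 m is dropped, so the cell below it stays free in the raster map.
cloud = np.array([[1.0, 0.0, 0.02],   # ground return
                  [1.0, 0.5, 0.30],   # trunk at robot height -> obstacle
                  [1.0, 1.0, 1.80]])  # overhanging branch -> filtered out
print(rasterize(height_filter(cloud)).sum())  # -> 1 occupied cell
```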

14 pages, 831 KB  
Article
Migratory Bird-Inspired Adaptive Kalman Filtering for Robust Navigation of Autonomous Agricultural Planters in Unstructured Terrains
by Zijie Zhou, Yitao Huang and Jiyu Sun
Biomimetics 2025, 10(8), 543; https://doi.org/10.3390/biomimetics10080543 - 19 Aug 2025
Viewed by 747
Abstract
This paper presents a bionic extended Kalman filter (EKF) state estimation algorithm for agricultural planters, inspired by the mechanism by which migratory birds navigate complex environments, fusing multi-sensory information (e.g., geomagnetic field, visual landmarks, and somatosensory balance) to achieve precise localization. The algorithm mimics the migratory bird's ability to integrate multimodal information by fusing laser SLAM, inertial measurement unit (IMU), and GPS data to estimate the position, velocity, and attitude of the planter in real time. Adopting a nonlinear processing approach, the EKF effectively handles nonlinear dynamic characteristics in complex terrain, similar to the adaptive response of a biological nervous system to environmental perturbations. The algorithm demonstrates bio-inspired robustness through the derivation of the nonlinear dynamic model and measurement model and is able to provide high-precision state estimation in complex environments such as mountainous or hilly terrain. Simulation results show that the algorithm significantly improves the navigation accuracy of the planter in unstructured environments, providing a new method of bio-inspired adaptive state estimation.
(This article belongs to the Special Issue Computer-Aided Biomimetics: 3rd Edition)
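
As a structural illustration of the sensor-fusion idea (IMU-driven prediction corrected by absolute position fixes from GPS or laser SLAM), here is a generic predict/update skeleton on a planar constant-velocity model; with this linear model the EKF reduces to the standard Kalman equations, and the noise values are illustrative assumptions rather than the paper's bionic adaptation.

```python
# Generic Kalman predict/update skeleton illustrating multi-sensor fusion:
# IMU acceleration drives the prediction, GPS/SLAM position fixes correct the state.
import numpy as np

class PlanarEKF:
    def __init__(self):
        self.x = np.zeros(4)            # state: [px, py, vx, vy]
        self.P = np.eye(4)              # state covariance
        self.Q = np.diag([0.01, 0.01, 0.1, 0.1])   # process noise (assumed)
        self.R = np.diag([0.5, 0.5])                # position-fix noise (assumed)

    def predict(self, accel_xy, dt):
        """Propagate the state with IMU acceleration as the control input."""
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt
        B = np.array([[0.5 * dt**2, 0], [0, 0.5 * dt**2], [dt, 0], [0, dt]])
        self.x = F @ self.x + B @ accel_xy
        self.P = F @ self.P @ F.T + self.Q

    def update_position(self, z_xy):
        """Correct the state with an absolute position fix (GPS or laser SLAM)."""
        H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
        y = z_xy - H @ self.x                       # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

ekf = PlanarEKF()
ekf.predict(np.array([0.2, 0.0]), dt=0.1)
ekf.update_position(np.array([0.01, -0.02]))
print(ekf.x)
```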

22 pages, 7705 KB  
Article
Implementation of SLAM-Based Online Mapping and Autonomous Trajectory Execution in Software and Hardware on the Research Platform Nimbulus-e
by Thomas Schmitz, Marcel Mayer, Theo Nonnenmacher and Matthias Schmitz
Sensors 2025, 25(15), 4830; https://doi.org/10.3390/s25154830 - 6 Aug 2025
Viewed by 1151
Abstract
This paper presents the design and implementation of a SLAM-based online mapping and autonomous trajectory execution system for the Nimbulus-e, a concept vehicle designed for agile maneuvering in confined spaces. The Nimbulus-e uses individual steer-by-wire corner modules with in-wheel motors at all four corners. The associated eight joint variables serve as control inputs, allowing precise trajectory following. These control inputs can be derived from the vehicle’s trajectory using nonholonomic constraints. A LiDAR sensor is used to map the environment and detect obstacles. The system processes LiDAR data in real time, continuously updating the environment map and enabling localization within the environment. The inclusion of vehicle odometry data significantly reduces computation time and improves accuracy compared to a purely visual approach. The A* and Hybrid A* algorithms are used for trajectory planning and optimization, ensuring smooth vehicle movement. The implementation is validated through both full vehicle simulations using an ADAMS Car–MATLAB co-simulation and a scaled physical prototype, demonstrating the effectiveness of the system in navigating complex environments. This work contributes to the field of autonomous systems by demonstrating the potential of combining advanced sensor technologies with innovative control algorithms to achieve reliable and efficient navigation. Future developments will focus on improving the robustness of the system by implementing a robust closed-loop controller and exploring additional applications in dense urban traffic and agricultural operations.
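
The mapping from a planned body trajectory to the eight joint variables can be sketched from rigid-body kinematics: each corner's contact-point velocity follows from the body twist, giving that module's steer angle and wheel speed. The corner positions and wheel radius below are illustrative assumptions, not Nimbulus-e parameters.

```python
# Hedged sketch of deriving per-corner steer angle and wheel speed from a planar
# body twist for a four-wheel steer-by-wire vehicle.
import numpy as np

WHEEL_RADIUS = 0.15                      # [m], assumed
MODULES = {                              # corner positions in the body frame [m], assumed
    "FL": ( 0.6,  0.4), "FR": ( 0.6, -0.4),
    "RL": (-0.6,  0.4), "RR": (-0.6, -0.4),
}

def corner_commands(vx, vy, omega):
    """Return {corner: (steer_angle_rad, wheel_speed_rad_s)} for a body twist."""
    cmds = {}
    for name, (px, py) in MODULES.items():
        # rigid-body velocity at the contact point: v_i = v_body + omega x r_i
        vix = vx - omega * py
        viy = vy + omega * px
        steer = np.arctan2(viy, vix)                     # module steering angle
        speed = np.hypot(vix, viy) / WHEEL_RADIUS        # wheel angular velocity
        cmds[name] = (steer, speed)
    return cmds

# Pure in-place rotation: all wheels point tangentially and spin at the same rate.
for corner, (delta, w) in corner_commands(0.0, 0.0, 0.5).items():
    print(f"{corner}: steer {np.degrees(delta):6.1f} deg, wheel {w:5.2f} rad/s")
```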

26 pages, 14214 KB  
Article
Stereo Visual Odometry and Real-Time Appearance-Based SLAM for Mapping and Localization in Indoor and Outdoor Orchard Environments
by Imran Hussain, Xiongzhe Han and Jong-Woo Ha
Agriculture 2025, 15(8), 872; https://doi.org/10.3390/agriculture15080872 - 16 Apr 2025
Cited by 4 | Viewed by 4918
Abstract
Agricultural robots can mitigate labor shortages and advance precision farming. However, the dense vegetation canopies and uneven terrain in orchard environments reduce the reliability of traditional GPS-based localization, thereby reducing navigation accuracy and making autonomous navigation challenging. Moreover, inefficient path planning and an increased risk of collisions affect the robot’s ability to perform tasks such as fruit harvesting, spraying, and monitoring. To address these limitations, this study integrated stereo visual odometry with real-time appearance-based mapping (RTAB-Map)-based simultaneous localization and mapping (SLAM) to improve mapping and localization in both indoor and outdoor orchard settings. The proposed system leverages stereo image pairs for precise depth estimation while utilizing RTAB-Map’s graph-based SLAM framework with loop-closure detection to ensure global map consistency. In addition, an incorporated inertial measurement unit (IMU) enhances pose estimation, thereby improving localization accuracy. Substantial improvements in both mapping and localization performance over the traditional approach were demonstrated, with an average error of 0.018 m against the ground truth for outdoor mapping and a consistent average error of 0.03 m for indoor trials, with a 20.7% reduction in visual odometry trajectory deviation compared to traditional methods. Localization performance remained robust across diverse conditions, with a low RMSE of 0.207 m. Our approach provides critical insights into developing more reliable autonomous navigation systems for agricultural robots.
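
The depth estimation underlying the stereo visual odometry follows the standard triangulation relation Z = f * B / d. A minimal back-projection sketch, with assumed intrinsics and baseline, is shown below.

```python
# Small sketch of stereo triangulation: depth from disparity, then back-projection
# to a 3D point. Intrinsics and baseline are illustrative assumptions, not the paper's rig.
import numpy as np

FX, FY, CX, CY = 700.0, 700.0, 320.0, 240.0   # pinhole intrinsics [px], assumed
BASELINE = 0.12                                # stereo baseline [m], assumed

def backproject(u, v, disparity):
    """Return the 3D point (in the left-camera frame) for one stereo match."""
    z = FX * BASELINE / disparity              # metric depth from disparity
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# A feature 25 px right of the principal point with 14 px of disparity: ~6 m away.
print(backproject(345.0, 240.0, 14.0))
```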

27 pages, 14033 KB  
Article
MOLO-SLAM: A Semantic SLAM for Accurate Removal of Dynamic Objects in Agricultural Environments
by Jinhong Lv, Beihuo Yao, Haijun Guo, Changlun Gao, Weibin Wu, Junlin Li, Shunli Sun and Qing Luo
Agriculture 2024, 14(6), 819; https://doi.org/10.3390/agriculture14060819 - 24 May 2024
Cited by 6 | Viewed by 3531
Abstract
Visual simultaneous localization and mapping (VSLAM) is a foundational technology that enables robots to achieve fully autonomous locomotion, exploration, inspection, and more within complex environments. Its applicability also extends significantly to agricultural settings. While numerous impressive VSLAM systems have emerged, a majority of them rely on static world assumptions. This reliance constrains their use in real dynamic scenarios and leads to increased instability when applied to agricultural contexts. To address the problem of detecting and eliminating slow dynamic objects in outdoor forest and tea garden agricultural scenarios, this paper presents a dynamic VSLAM innovation called MOLO-SLAM (mask ORB label optimization SLAM). MOLO-SLAM merges the ORBSLAM2 framework with the Mask-RCNN instance segmentation network, utilizing masks and bounding boxes to enhance the accuracy and cleanliness of 3D point clouds. Additionally, we used the BundleFusion reconstruction algorithm for 3D mesh model reconstruction. By comparing our algorithm with various dynamic VSLAM algorithms on the TUM and KITTI datasets, the results demonstrate significant improvements, with enhancements of up to 97.72%, 98.51%, and 28.07% relative to the original ORBSLAM2 on the three datasets. This showcases the outstanding advantages of our algorithm.
(This article belongs to the Special Issue Advanced Image Processing in Agricultural Applications)
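
The core idea of discarding features that fall on dynamic objects can be sketched independently of the full pipeline. The snippet below culls keypoints that land inside instance masks of assumed dynamic classes; Mask-RCNN inference and the label optimization itself are not reproduced.

```python
# Hedged sketch of mask-based feature culling: keypoints inside masks of classes
# treated as dynamic are dropped before tracking/mapping. Class list and mask
# format (H x W boolean arrays per instance) are illustrative assumptions.
import numpy as np

DYNAMIC_CLASSES = {"person", "cow", "car"}     # assumed set of movable classes

def cull_dynamic_keypoints(keypoints, instances):
    """keypoints: (N, 2) array of (u, v); instances: list of (class_name, mask)."""
    keep = np.ones(len(keypoints), dtype=bool)
    u = keypoints[:, 0].astype(int)
    v = keypoints[:, 1].astype(int)
    for class_name, mask in instances:
        if class_name not in DYNAMIC_CLASSES:
            continue
        keep &= ~mask[v, u]                    # drop points landing on the mask
    return keypoints[keep]

# Toy example: one 10x10 "person" mask covering the image's left half.
mask = np.zeros((10, 10), dtype=bool); mask[:, :5] = True
kps = np.array([[2.0, 3.0], [8.0, 3.0]])       # the first point lies on the person
print(cull_dynamic_keypoints(kps, [("person", mask)]))   # -> only [[8., 3.]]
```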

18 pages, 17778 KB  
Article
A Compact Handheld Sensor Package with Sensor Fusion for Comprehensive and Robust 3D Mapping
by Peng Wei, Kaiming Fu, Juan Villacres, Thomas Ke, Kay Krachenfels, Curtis Ryan Stofer, Nima Bayati, Qikai Gao, Bill Zhang, Eric Vanacker and Zhaodan Kong
Sensors 2024, 24(8), 2494; https://doi.org/10.3390/s24082494 - 12 Apr 2024
Cited by 5 | Viewed by 4002
Abstract
This paper introduces an innovative approach to 3D environmental mapping through the integration of a compact, handheld sensor package with a two-stage sensor fusion pipeline. The sensor package, incorporating LiDAR, IMU, RGB, and thermal cameras, enables comprehensive and robust 3D mapping of various environments. By leveraging Simultaneous Localization and Mapping (SLAM) and thermal imaging, our solution offers good performance in conditions where global positioning is unavailable and in visually degraded environments. The sensor package runs a real-time LiDAR-Inertial SLAM algorithm, generating a dense point cloud map that accurately reconstructs the geometric features of the environment. Following the acquisition of that point cloud, we post-process these data by fusing them with images from the RGB and thermal cameras and produce a detailed, color-enriched 3D map that is useful and adaptable to different mission requirements. We demonstrated our system in a variety of scenarios, from indoor to outdoor conditions, and the results showcased the effectiveness and applicability of our sensor package and fusion pipeline. This system can be applied in a wide range of applications, from autonomous navigation to smart agriculture, and has the potential to provide substantial benefits across diverse fields.
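
The colorization stage of the post-processing step amounts to projecting map points into a calibrated camera and sampling the pixel they land on. A minimal sketch with assumed intrinsics and extrinsics follows; occlusion handling and the thermal channel are omitted.

```python
# Hedged sketch of point-cloud colorization: project map points through assumed
# extrinsics/intrinsics and take the RGB value they hit. Calibration is illustrative.
import numpy as np

K = np.array([[600.0, 0.0, 320.0],             # pinhole intrinsics, assumed
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
T_CAM_MAP = np.eye(4)                           # map -> camera extrinsics, assumed

def colorize(points_map, image):
    """Attach an RGB value to every map point that projects inside the image."""
    pts_h = np.hstack([points_map, np.ones((len(points_map), 1))])
    pts_cam = (T_CAM_MAP @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    h, w = image.shape[:2]
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points_map), 3), dtype=np.uint8)
    colors[valid] = image[v[valid], u[valid]]
    return np.hstack([points_map, colors.astype(float)])

img = np.full((480, 640, 3), 127, dtype=np.uint8)
print(colorize(np.array([[0.0, 0.0, 2.0]]), img))   # point ahead of camera -> grey
```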

14 pages, 4857 KB  
Article
Pruning Points Detection of Sweet Pepper Plants Using 3D Point Clouds and Semantic Segmentation Neural Network
by Truong Thi Huong Giang and Young-Jae Ryoo
Sensors 2023, 23(8), 4040; https://doi.org/10.3390/s23084040 - 17 Apr 2023
Cited by 13 | Viewed by 3129
Abstract
Automation in agriculture can save labor and raise productivity. Our research aims to have robots prune sweet pepper plants automatically in smart farms. In previous research, we studied detecting plant parts with a semantic segmentation neural network. In this research, we additionally detect the pruning points of leaves in 3D space by using 3D point clouds. Robot arms can move to these positions and cut the leaves. We proposed a method to create 3D point clouds of sweet peppers by applying semantic segmentation neural networks, the ICP algorithm, and ORB-SLAM3, a visual SLAM application with a LiDAR camera. This 3D point cloud consists of plant parts that have been recognized by the neural network. We also present a method to detect the leaf pruning points in 2D images and 3D space by using 3D point clouds. Furthermore, the PCL library was used to visualize the 3D point clouds and the pruning points. Many experiments were conducted to show the method's stability and correctness.
(This article belongs to the Special Issue Deep Neural Networks Sensing for RGB-D Motion Recognition)
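
The ICP registration named in the abstract can be sketched with a brute-force nearest-neighbour variant built on the Kabsch/SVD alignment; this toy version assumes well-overlapping clouds and ignores the semantic labels attached to the points.

```python
# Tiny point-to-point ICP sketch: nearest-neighbour correspondences plus SVD alignment.
# Illustration only; not the paper's exact registration pipeline.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t aligning paired points src -> dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # fix an improper reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=20):
    """Iteratively match each src point to its nearest dst point and realign."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        R, t = best_rigid_transform(cur, dst[d.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

# Recover a small translation between two copies of the same 4-point cloud.
ref = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
print(np.round(icp(ref + [0.05, -0.02, 0.01], ref), 3))
```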

17 pages, 20517 KB  
Article
Place Recognition with Memorable and Stable Cues for Loop Closure of Visual SLAM Systems
by Rafiqul Islam and Habibullah Habibullah
Robotics 2022, 11(6), 142; https://doi.org/10.3390/robotics11060142 - 4 Dec 2022
Cited by 7 | Viewed by 4359
Abstract
Visual Place Recognition (VPR) is a fundamental yet challenging task in Visual Simultaneous Localization and Mapping (V-SLAM) problems. VPR works as a subsystem of V-SLAM and is the task of retrieving images upon revisiting the same place under different conditions. The problem is even more difficult for agricultural and all-terrain autonomous mobile robots that work in different scenarios and weather conditions. Over the last few years, many state-of-the-art methods have been proposed to overcome the limitations of existing VPR techniques. VPR using bag-of-words obtained from local features works well for large-scale image retrieval problems. However, aggregating local features arbitrarily produces a large bag-of-words vector database, limits efficient feature learning, and slows the aggregation and querying of candidate images. Moreover, aggregating arbitrary features is inefficient, as not all local features contribute equally to long-term place recognition tasks. Therefore, a novel VPR architecture is proposed that is suitable for efficient place recognition with semantically meaningful local features and their 3D geometrical verification. The proposed end-to-end architecture is fueled by a deep neural network, a bag-of-words database, and 3D geometrical verification for place recognition. The method is aware of meaningful and informative image features for better scene understanding. The 3D geometrical information from the corresponding meaningful features is then computed and utilised to verify correct place recognition. The proposed method is tested on four well-known public datasets and a Micro Aerial Vehicle (MAV) dataset recorded in Victoria Park, Adelaide, Australia, for experimental validation. Extensive experimental results based on standard VPR evaluation metrics show that the proposed method outperforms the available state-of-the-art methods.
(This article belongs to the Section Agricultural and Field Robotics)
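
The bag-of-words retrieval stage can be illustrated with a toy vocabulary: descriptors are quantized to visual words, images become tf-idf histograms, and candidate places are ranked by cosine similarity. The random vocabulary below is purely illustrative, and the learned features and 3D geometric verification stage of the paper are not reproduced.

```python
# Hedged sketch of bag-of-words place retrieval with tf-idf weighting.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = rng.normal(size=(50, 32))               # 50 visual words, 32-D descriptors (toy)

def bow_vector(descriptors, idf):
    """Quantize descriptors to the nearest word and return a tf-idf histogram."""
    words = np.linalg.norm(descriptors[:, None, :] - VOCAB[None], axis=2).argmin(axis=1)
    tf = np.bincount(words, minlength=len(VOCAB)).astype(float)
    v = tf * idf
    return v / (np.linalg.norm(v) + 1e-12)

# Build a toy database of 5 "places", then query with a noisy copy of place 3.
db_desc = [rng.normal(size=(80, 32)) for _ in range(5)]
df = sum((bow_vector(d, np.ones(len(VOCAB))) > 0).astype(float) for d in db_desc)
idf = np.log((1 + len(db_desc)) / (1.0 + df)) + 1.0   # smoothed inverse document frequency
database = np.stack([bow_vector(d, idf) for d in db_desc])
query = bow_vector(db_desc[3] + 0.05 * rng.normal(size=(80, 32)), idf)
print("best match:", int((database @ query).argmax()))    # expected: 3
```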

14 pages, 4321 KB  
Article
C2VIR-SLAM: Centralized Collaborative Visual-Inertial-Range Simultaneous Localization and Mapping
by Jia Xie, Xiaofeng He, Jun Mao, Lilian Zhang and Xiaoping Hu
Drones 2022, 6(11), 312; https://doi.org/10.3390/drones6110312 - 23 Oct 2022
Cited by 8 | Viewed by 4359
Abstract
Collaborative simultaneous localization and mapping has a great impact on various applications such as search-and-rescue and agriculture. For each agent, the key to performing collaboration is to measure its motion relative to other participants or external anchors; currently, this is mainly accomplished by (1) matching against the shared maps from other agents or (2) measuring the range to anchors with UWB devices. Since requiring multiple agents to visit the same area decreases task efficiency and anchors require a deployment process, this paper proposes to use a monocular camera, an inertial measurement unit (IMU), and a UWB device as the onboard sensors on each agent to build an accurate and efficient centralized collaborative SLAM system. For each participant, visual-inertial odometry is adopted to estimate the motion parameters and build a local map of the explored areas. The agent-to-agent range is measured by the onboard UWB and is published to the central server together with the estimated motion parameters and the reconstructed maps. We designed a global optimization algorithm that uses the cross-agent map-match information detected by a visual place recognition technique, together with the agent-to-agent range information, to optimize the motion parameters of all the participants and merge the local maps into a global map. Compared with existing collaborative SLAM systems, the proposed system can perform collaboration with onboard UWB measurements only, vision only, or a combination of these, which greatly improves the adaptiveness and robustness of the collaborative system. We also present an in-depth analysis of C2VIR-SLAM on multiple UAV real-flight datasets.
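
To illustrate how agent-to-agent ranges can constrain a collaborative estimate, the toy below recovers only the 2D translation between two agents' map origins from UWB ranges via Gauss-Newton; the actual system jointly optimizes full poses and map matches, which this sketch does not attempt.

```python
# Hedged sketch: estimate the 2D offset between two agents' map frames from
# agent-to-agent range measurements taken along their trajectories.
import numpy as np

def align_by_ranges(traj_a, traj_b_local, ranges, iters=15):
    """Find offset t so that || traj_a - (traj_b_local + t) || matches the ranges."""
    t = np.zeros(2)
    for _ in range(iters):
        diff = traj_a - (traj_b_local + t)               # vectors between the agents
        pred = np.maximum(np.linalg.norm(diff, axis=1), 1e-9)
        r = pred - ranges                                 # range residuals
        J = -diff / pred[:, None]                         # d(pred)/dt
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]     # Gauss-Newton step
        t += delta
    return t

# Ground truth: agent B's map origin is offset by (3, -1) from agent A's.
rng = np.random.default_rng(1)
traj_a = rng.uniform(0, 10, size=(30, 2))
traj_b_world = rng.uniform(0, 10, size=(30, 2))
true_offset = np.array([3.0, -1.0])
ranges = np.linalg.norm(traj_a - traj_b_world, axis=1) + 0.02 * rng.normal(size=30)
print(align_by_ranges(traj_a, traj_b_world - true_offset, ranges))  # ~ [3, -1]
```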

20 pages, 7505 KB  
Article
Real-Time Localization and Mapping Utilizing Multi-Sensor Fusion and Visual–IMU–Wheel Odometry for Agricultural Robots in Unstructured, Dynamic and GPS-Denied Greenhouse Environments
by Yaxuan Yan, Baohua Zhang, Jun Zhou, Yibo Zhang and Xiao’ang Liu
Agronomy 2022, 12(8), 1740; https://doi.org/10.3390/agronomy12081740 - 23 Jul 2022
Cited by 72 | Viewed by 11905
Abstract
Autonomous navigation in greenhouses requires agricultural robots to localize and generate a globally consistent map of surroundings in real-time. However, accurate and robust localization and mapping are still challenging for agricultural robots due to the unstructured, dynamic and GPS-denied environmental conditions. In this study, a state-of-the-art real-time localization and mapping system was presented to achieve precise pose estimation and dense three-dimensional (3D) point cloud mapping in complex greenhouses by utilizing multi-sensor fusion and Visual–IMU–Wheel odometry. In this method, measurements from wheel odometry, an inertial measurement unit (IMU) and a tightly coupled visual–inertial odometry (VIO) are integrated into a loosely coupled framework based on the Extended Kalman Filter (EKF) to obtain a more accurate state estimation of the robot. In the multi-sensor fusion algorithm, the pose estimations from the wheel odometry and IMU are treated as predictions and the localization results from VIO are used as observations to update the state vector. Simultaneously, the dense 3D map of the greenhouse is reconstructed in real-time by employing the modified ORB-SLAM2. The performance of the proposed system was evaluated in modern standard solar greenhouses with harsh environmental conditions. Taking advantage of measurements from individual sensors, our method is robust enough to cope with various challenges, as shown by extensive experiments conducted in the greenhouses and outdoor campus environment. Additionally, the results show that our proposed framework can improve the localization accuracy of the visual–inertial odometry, demonstrating the satisfactory capability of the proposed approach and highlighting its promising applications in autonomous navigation of agricultural robots.
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
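
The loosely coupled scheme (wheel odometry and IMU as the prediction, VIO pose as the observation) can be sketched with a unicycle-model EKF. The motion model and noise values below are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a loosely coupled EKF: wheel speed + IMU yaw rate predict the pose,
# the VIO pose estimate corrects it.
import numpy as np

Q = np.diag([0.02, 0.02, 0.01])                  # prediction noise (odom + IMU), assumed
R = np.diag([0.05, 0.05, 0.02])                  # VIO measurement noise, assumed

def predict(x, P, v, yaw_rate, dt):
    """Propagate pose [px, py, yaw] with wheel speed v and IMU yaw rate (unicycle)."""
    th = x[2]
    x = x + np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, yaw_rate * dt])
    F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                  [0.0, 1.0,  v * np.cos(th) * dt],
                  [0.0, 0.0, 1.0]])
    return x, F @ P @ F.T + Q

def update_vio(x, P, pose_vio):
    """Correct the predicted pose with a full pose observation from the VIO."""
    H = np.eye(3)                                # the VIO observes the state directly
    y = pose_vio - x
    y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi  # wrap the yaw innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ y, (np.eye(3) - K @ H) @ P

x, P = np.zeros(3), np.eye(3) * 0.1
x, P = predict(x, P, v=0.5, yaw_rate=0.1, dt=0.1)
x, P = update_vio(x, P, np.array([0.048, 0.001, 0.012]))
print(x)
```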

21 pages, 2235 KB  
Article
RTSDM: A Real-Time Semantic Dense Mapping System for UAVs
by Zhiteng Li, Jiannan Zhao, Xiang Zhou, Shengxian Wei, Pei Li and Feng Shuang
Machines 2022, 10(4), 285; https://doi.org/10.3390/machines10040285 - 18 Apr 2022
Cited by 12 | Viewed by 4688
Abstract
Intelligent drones or flying robots play a significant role in serving our society in applications such as rescue, inspection, agriculture, etc. Understanding the surrounding scene is an essential capability for further autonomous tasks. Intuitively, knowing the self-location of the UAV and creating a semantic 3D map is significant for fully autonomous tasks. However, integrating simultaneous localization, 3D reconstruction, and semantic segmentation together is a huge challenge for power-limited systems such as UAVs. To address this, we propose a real-time semantic mapping system that can help a power-limited UAV system to understand its location and surroundings. The proposed approach includes a modified visual SLAM with the direct method to accelerate the computationally intensive feature matching process and a real-time semantic segmentation module at the back end. The semantic module runs a lightweight network, BiSeNetV2, and performs segmentation only at key frames from the front-end SLAM task. Considering fast navigation and the on-board memory resources, we provide a real-time dense-map-building module to generate an OctoMap with the segmented semantic map. The proposed system is verified in real-time experiments on a UAV platform with a Jetson TX2 as the computation unit. A frame rate of around 12 Hz, with a semantic segmentation accuracy of around 89%, demonstrates that our proposed system is computationally efficient while providing sufficient information for fully autonomous tasks such as rescue, inspection, etc.
(This article belongs to the Topic Motion Planning and Control for Robotics)
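
The keyframe-level semantic fusion can be illustrated with a plain voxel dictionary standing in for OctoMap: each segmented keyframe votes class labels into the voxels its points fall in. Resolution and class ids are illustrative assumptions.

```python
# Hedged sketch of fusing per-keyframe semantic labels into a voxel map
# (a dict of voxel -> label votes stands in for OctoMap).
from collections import Counter, defaultdict
import numpy as np

VOXEL = 0.10                                     # voxel edge length [m], assumed
semantic_map = defaultdict(Counter)              # voxel index -> votes per class id

def insert_keyframe(points_world, labels):
    """Accumulate one keyframe's labelled points into the voxel map."""
    keys = np.floor(points_world / VOXEL).astype(int)
    for key, label in zip(map(tuple, keys), labels):
        semantic_map[key][int(label)] += 1

def voxel_class(voxel_key):
    """Majority vote over all observations of this voxel."""
    return semantic_map[voxel_key].most_common(1)[0][0]

pts = np.array([[1.02, 0.51, 0.0], [1.04, 0.53, 0.0], [3.0, 0.0, 0.2]])
insert_keyframe(pts, labels=[2, 2, 5])           # e.g., 2 = crop, 5 = ground (assumed ids)
print(voxel_class((10, 5, 0)))                   # the two crop points agree -> 2
```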

15 pages, 15831 KB  
Article
Visual SLAM for Indoor Livestock and Farming Using a Small Drone with a Monocular Camera: A Feasibility Study
by Sander Krul, Christos Pantos, Mihai Frangulea and João Valente
Drones 2021, 5(2), 41; https://doi.org/10.3390/drones5020041 - 19 May 2021
Cited by 78 | Viewed by 13925
Abstract
Real-time data collection and decision making with drones will play an important role in precision livestock and farming. Drones are already being used in precision agriculture. Nevertheless, this is not the case for indoor livestock and farming environments due to several challenges and constraints. These indoor environments are limited in physical space and pose a localization problem due to GPS unavailability. Therefore, this work aims to take a step toward the usage of drones for indoor farming and livestock management. To investigate drone positioning in these workspaces, two visual simultaneous localization and mapping (VSLAM) algorithms, LSD-SLAM and ORB-SLAM, were compared using a monocular camera onboard a small drone. Several experiments were carried out in a greenhouse and a dairy farm barn, and the absolute trajectory error and the relative pose error were analyzed. It was found that the approach that best suits these workspaces is ORB-SLAM. This algorithm was tested by performing waypoint navigation and generating maps from the clustered areas. It was shown that aerial VSLAM could be achieved within these workspaces and that plant and cattle monitoring could benefit from using affordable and off-the-shelf drone technology.
(This article belongs to the Special Issue Advances in Civil Applications of Unmanned Aircraft Systems)
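
The absolute trajectory error used to compare the two algorithms is computed by rigidly aligning the estimated trajectory to ground truth and taking the RMSE of the residual positions. A minimal sketch (assuming timestamps are already associated) is shown below.

```python
# Hedged sketch of the ATE metric: Kabsch/Umeyama alignment (no scale) followed by RMSE.
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Align estimated positions to ground truth, then return the RMSE [m]."""
    mu_e, mu_g = estimated.mean(axis=0), ground_truth.mean(axis=0)
    H = (estimated - mu_e).T @ (ground_truth - mu_g)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # fix an improper reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    aligned = (estimated - mu_e) @ R.T + mu_g
    return float(np.sqrt(np.mean(np.sum((aligned - ground_truth) ** 2, axis=1))))

# A drifting copy of a quarter-circle trajectory gives a small but non-zero ATE.
theta = np.linspace(0, np.pi / 2, 50)
gt = np.column_stack([np.cos(theta), np.sin(theta), np.zeros(50)])
est = gt.copy()
est[:, 0] += 0.01 * np.linspace(0, 1, 50)        # simulated drift along x
print(round(ate_rmse(est, gt), 4))
```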

22 pages, 28290 KB  
Article
Occupancy Grid and Topological Maps Extraction from Satellite Images for Path Planning in Agricultural Robots
by Luís Carlos Santos, André Silva Aguiar, Filipe Neves Santos, António Valente and Marcelo Petry
Robotics 2020, 9(4), 77; https://doi.org/10.3390/robotics9040077 - 24 Sep 2020
Cited by 35 | Viewed by 7487
Abstract
Robotics will significantly impact large sectors of the economy with relatively low productivity, such as Agri-Food production. Deploying agricultural robots on the farm is still a challenging task. When it comes to localising the robot, there is a need for a preliminary map, which is obtained from a first robot visit to the farm. Mapping is a semi-autonomous task that requires a human operator to drive the robot throughout the environment using a control pad. Visual and geometric features are used by Simultaneous Localisation and Mapping (SLAM) algorithms to model and recognise places and to track the robot's motion. In agricultural fields, this represents a time-consuming operation. This work proposes a novel solution, called AgRoBPP-bridge, to autonomously extract Occupancy Grid and Topological maps from satellite images. These preliminary maps are used by the robot in its first visit, reducing the need for human intervention and making the path planning algorithms more efficient. AgRoBPP-bridge consists of two stages: vineyard row detection and topological map extraction. For vineyard row detection, we explored two approaches: one based on a conventional machine learning technique, a Support Vector Machine (SVM) with Local Binary Pattern-based features, and another based on deep learning techniques (ResNet and DenseNet). From the vineyard row detection, we extracted an occupancy grid map and, by applying advanced image processing techniques and the Voronoi diagram concept, we obtained a topological map. Our results demonstrated an overall accuracy higher than 85% for detecting vineyards and free paths for robot navigation. The SVM-based approach demonstrated the best performance in terms of precision and computational resource consumption. AgRoBPP-bridge proves to be a relevant contribution to simplifying the deployment of robots in agriculture.
(This article belongs to the Special Issue Advances in Agriculture and Forest Robotics)
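
The Local Binary Pattern feature used in the SVM branch can be sketched directly: each pixel is encoded by thresholding its eight neighbours, and a patch is described by the histogram of the resulting codes. Patch sampling, SVM training, and the deep-learning branch are omitted.

```python
# Hedged sketch of a basic 8-neighbour Local Binary Pattern (LBP) histogram, the kind
# of texture descriptor that would feed the SVM classifier.
import numpy as np

def lbp_histogram(patch):
    """Return the normalized 256-bin LBP code histogram of a 2D grayscale patch."""
    c = patch[1:-1, 1:-1]                        # centre pixels (borders skipped)
    neighbours = [patch[0:-2, 0:-2], patch[0:-2, 1:-1], patch[0:-2, 2:],
                  patch[1:-1, 2:],   patch[2:,   2:],   patch[2:,   1:-1],
                  patch[2:,   0:-2], patch[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbours):        # set a bit when neighbour >= centre
        codes |= ((nb >= c).astype(np.uint8) << bit)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# Two synthetic patches: a striped, row-like texture vs. flat soil-like grey.
rows = np.tile(np.array([0, 255], dtype=np.uint8).repeat(4), (32, 4))
flat = np.full((32, 32), 128, dtype=np.uint8)
print(np.linalg.norm(lbp_histogram(rows) - lbp_histogram(flat)))  # clearly non-zero
```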
