Perception and AI for Field Robotics

A special issue of Robotics (ISSN 2218-6581). This special issue belongs to the section "Agricultural and Field Robotics".

Deadline for manuscript submissions: 30 June 2026 | Viewed by 2491

Special Issue Editors


Dr. Cédric Pradalier
Guest Editor
International Research Lab, Georgia Tech-CNRS UMI 2958, Metz, France
Interests: mobile robots; field robotics; environmental monitoring

Dr. Tito Arevalo
Guest Editor
Department of Electrical Engineering, Pontifical Catholic University of Chile, Santiago, Chile
Interests: remote sensing; forest and agricultural environments; digital terrain modeling; LiDAR odometry and mapping; visual-inertial odometry

Special Issue Information

Dear Colleagues,

In recent decades, rapid advances in sensing technologies and artificial intelligence (AI) have significantly expanded robots’ capabilities to perceive, reason, and operate autonomously or semi-autonomously in complex, dynamic, and unstructured outdoor environments. The deployment of robots in such real-world scenarios—agricultural fields, mines, forests, construction sites, oceans, and critical infrastructure—is commonly referred to as field robotics.

Operating safely and efficiently in harsh, cluttered, and often unpredictable conditions requires robust perception pipelines built on advanced sensing systems (e.g., LiDAR, cameras, radar, GNSS/INS, and hyperspectral sensors). These systems must support reliable object detection, tracking, localization, mapping, and high-level scene understanding under varying illumination, weather, and terrain. In parallel, modern AI methods—ranging from deep learning to probabilistic reasoning and planning—have become central to many field robotics tasks, enabling robots to make informed decisions, handle uncertainty, and adapt to environmental changes over long time scales.

Despite substantial progress, perception and AI for field robotics still face critical challenges. These include dealing with sparse and noisy sensor data, extreme domain shifts between lab and field conditions, long-term autonomy, reliable multi-sensor fusion, limited onboard computation, safety and interpretability requirements, and the scarcity of standardized datasets and benchmarks for outdoor environments.

This Special Issue, titled “Perception and AI for Field Robotics”, aims to bring together scientists, researchers, and practitioners from academia, industry, and government to present original contributions that advance the state of the art. This issue will serve as a platform to showcase cutting-edge research, recent developments, emerging trends, and open challenges in perception and AI for field robotics.

Topics of interest include, but are not limited to, the following:

  • Field robotics applications in agriculture, environmental monitoring, exploration, infrastructure inspection, public services, construction, mining, energy, and logistics;
  • Multi-sensor fusion (e.g., LiDAR–camera–radar–GNSS/INS and multispectral/hyperspectral fusion) for robust perception;
  • Sensor calibration, synchronization, and self-calibration in the field;
  • In-field object detection, recognition, and classification;
  • In-field object tracking and pose estimation under occlusions and changing environments;
  • Localization and mapping (SLAM, semantic mapping, and multi-robot mapping) in outdoor and GPS-challenged environments;
  • AI-based decision-making and planning for autonomous and semi-autonomous field robots;
  • Learning under domain shift and adverse conditions (weather, lighting, and seasonal changes);
  • Uncertainty modeling, safety, and interpretability in perception and decision-making pipelines;
  • Resource-aware and real-time perception on embedded and edge computing platforms;
  • Simulation, digital twins, and virtual testing for field robotics perception and AI;
  • Datasets, benchmarks, and evaluation protocols for perception and AI in field robotics.

Dr. Cédric Pradalier
Dr. Tito Arevalo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form to submit your manuscript. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Publisher’s Notice

The Special Issue has been shifted from Section AI in Robotics to Section Agricultural and Field Robotics on 25 December 2025. At the time of the move, there were no publications in this Special Issue.

Keywords

  • field robotics
  • multi-sensor fusion
  • object detection
  • localization and mapping
  • AI-based decision-making and planning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research


22 pages, 39829 KB  
Article
Dual-Detector Vision and Depth-Aware Back-Projection for Accurate Apple Detection and 3D Localisation for Robotic Harvesting
by Tagor Hossain, Peng Shi and Levente Kovacs
Robotics 2026, 15(2), 47; https://doi.org/10.3390/robotics15020047 - 22 Feb 2026
Viewed by 688
Abstract
Accurate apple detection and precise three-dimensional (3D) localisation are essential for autonomous robotic harvesting in orchard environments, where occlusion, illumination variation, depth noise, and the similar colour appearance of fruits and surrounding leaves present significant challenges. This paper proposes a dual-detector vision framework combined with depth-aware back-projection to achieve robust apple detection and metric 3D localisation in real time. The method integrates the complementary strengths of YOLOv8 and Mask R-CNN through confidence-weighted fusion of bounding boxes and pixel-wise union of segmentation masks, producing stabilised two-dimensional (2D) apple representations under visually ambiguous conditions. The fusion results are converted into dense 3D representations through depth-guided projection within the camera coordinate system representing the visible fruit surface. A depth-consistency weighting strategy assigns higher influence to depth-reliable pixels during centroid computation, thereby suppressing noisy or occluded depth measurements and improving the stability of 3D fruit centre estimation, while local intensity normalisation standardises neighbourhood-level pixel intensities to reduce the impact of shadows, highlights, and uneven lighting, enabling more consistent segmentation and detection across varying illumination conditions. Experimental results demonstrate an accuracy of 98.9%, an mAP of 94.2%, an F1-score of 93.3%, and a recall of 92.8%, while achieving real-time performance at 86.42 FPS, confirming the suitability of the proposed method for robotic harvesting in challenging orchard environments.
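The depth-aware localisation step described in this abstract — back-projecting masked pixels and computing a depth-consistency weighted centroid — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, the window size, and the use of inverse local depth variance as the reliability score are all assumptions made for the example.

```python
import numpy as np

def backproject(mask, depth, fx, fy, cx, cy):
    """Back-project masked pixels into camera coordinates via the pinhole model."""
    v, u = np.nonzero(mask)                 # pixel rows/cols inside the fruit mask
    z = depth[v, u]                         # metric depth at those pixels
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)      # (N, 3) points on the visible surface

def depth_consistency_weights(mask, depth, win=2):
    """Weight each masked pixel by inverse local depth variance, so pixels in
    smooth depth regions dominate and noisy/occluded ones are suppressed."""
    v, u = np.nonzero(mask)
    w = np.empty(len(v))
    for i, (r, c) in enumerate(zip(v, u)):
        patch = depth[max(r - win, 0): r + win + 1,
                      max(c - win, 0): c + win + 1]
        w[i] = 1.0 / (np.var(patch) + 1e-6)
    return w / w.sum()                      # normalise to a convex combination

def fruit_centre(mask, depth, fx, fy, cx, cy):
    """Depth-consistency weighted 3D centroid of the masked fruit surface."""
    pts = backproject(mask, depth, fx, fy, cx, cy)
    w = depth_consistency_weights(mask, depth)
    return (w[:, None] * pts).sum(axis=0)
```

With a flat synthetic depth map the weights are uniform and the centroid reduces to the plain mean of the back-projected points; on real sensor data the variance term down-weights pixels near depth discontinuities such as occlusion boundaries.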
(This article belongs to the Special Issue Perception and AI for Field Robotics)

18 pages, 37791 KB  
Article
Instance Segmentation in Autonomous Log Grasping Using EfficientViT-SAM MP-Former
by Sayan Mandal, Stefan Ainetter and Friedrich Fraundorfer
Robotics 2026, 15(2), 44; https://doi.org/10.3390/robotics15020044 - 15 Feb 2026
Viewed by 686
Abstract
Segmenting individual timber logs in robotic grasping scenarios poses significant challenges due to cluttered arrangements, overlapping geometries, and visually uniform textures, requiring instance segmentation models that balance accuracy and computational efficiency. In this work, we study the integration of the EfficientViT-SAM backbone into the MP-Former framework to analyze its impact on segmentation accuracy, inference speed, and cross-dataset generalization in autonomous forestry applications. Our contributions are threefold: (1) we benchmark Mask2Former and MP-Former with different variants of Swin Transformer as backbones on the TimberSeg 1.0 dataset, (2) we study the use of the EfficientViT-SAM-XL architecture as an alternative encoder backbone to analyze its impact on inference speed and segmentation accuracy, and (3) we use an in-house dataset as a hold-out test set, comprising 113 images and 923 annotations in the annotated subset and 50 images in the unannotated subset, for evaluating model generalization under real-world deployment scenarios. On the TimberSeg 1.0 dataset, our top-performing model, EfficientViT-SAM-XL1 MP-Former, achieves an mAP of 61.05, outperforming the Swin-B Mask2Former of the TimberSeg 1.0 paper by +3.52 mAP, while running at 12 FPS (+3.53 FPS gain). When tested on our in-house dataset, the model attains an mAP of 67.06. Notably, it matches the memory efficiency of TimberSeg’s strongest baseline, despite having nearly double the number of parameters, demonstrating its practical viability for robotic applications in forestry environments.
(This article belongs to the Special Issue Perception and AI for Field Robotics)

Review


37 pages, 4888 KB  
Review
Robotics in Precision Agriculture: Task-, Platform-, and Evaluation-Oriented Review
by Natheer Almtireen and Mutaz Ryalat
Robotics 2026, 15(4), 81; https://doi.org/10.3390/robotics15040081 - 20 Apr 2026
Viewed by 589
Abstract
Robotics is increasingly positioned as an enabling technology for precision agriculture, where management actions must be spatially and temporally targeted under constraints on labour, input use, safety, and environmental impact. This review synthesises studies on agricultural field robotics and organises the literature along four complementary axes: task (monitoring, weeding, spraying, and harvesting), platform (UGV, UAV, gantry/fixed-structure, greenhouse robot, and hybrid systems), autonomy-stack module (perception, localisation, planning, control, actuation, safety, and human–robot interaction), and evaluation setting (lab, greenhouse, open-field single season, and open-field multi-season/multi-site). Across these dimensions, this review analyses how platform constraints shape sensing geometry, actuation capability, localisation reliability, energy/endurance, supervision burden, and safety requirements. It further examines enabling technologies that recur across tasks, including vision and multimodal perception under occlusion and illumination variability, localisation and mapping under weak or denied GNSS, uncertainty-aware planning in deformable and partially observed environments, and compliant end-effectors for contact-rich operations. Beyond cataloguing systems, this paper emphasises evaluation practice by synthesising core task-relevant metrics, comparing laboratory and field validation settings, and proposing a reporting checklist and benchmark ladder to improve reproducibility and cross-study comparability. This review identifies recurring bottlenecks in domain shift, long-term autonomy, calibration robustness, crop-safe actuation, and safety assurance near humans, and it concludes with a staged research roadmap linking near-term evaluation reform to longer-term credible multi-site autonomy. 
Overall, this paper provides a structured framework for interpreting agricultural robotic systems not only by application but also by deployment context, system maturity, and evaluation credibility.
(This article belongs to the Special Issue Perception and AI for Field Robotics)
