Cutting-Edge Technology in Agricultural Robotics: Sensing and Actuation

A special issue of Agriculture (ISSN 2077-0472). This special issue belongs to the section "Agricultural Technology".

Deadline for manuscript submissions: 30 October 2025

Special Issue Editor


Prof. Dr. Jin Yuan
Guest Editor
School of Mechanical and Electronic Engineering, Shandong Agricultural University, Taian 271018, China
Interests: selective harvesting; robotic manipulator; novel robotic applications; machine vision

Special Issue Information

Dear Colleagues,

The rapid advancement in agricultural robotics is reshaping the future of farming, enabling precise, efficient, and sustainable agricultural practices. This Special Issue aims to gather the latest research and innovations at the intersection of robotics, automation, and agriculture, focusing on the core areas of sensing and actuation.

We invite submissions that explore the following topics:

Vision-Based Sensing and Perception: Innovative methods for visual sensing, including image processing, machine learning, and AI-driven approaches, tailored for diverse agricultural environments. Topics of interest include crop and weed detection, disease monitoring, and environmental perception.

Precision Actuation: Research on advanced actuation techniques that allow for precise interaction with crops and plants, ensuring minimal resource use while maximizing yield. This includes innovations in robotic arms, cutting tools, and other actuators specifically designed for agricultural tasks.

Hand-Eye Coordination Control: Novel approaches to integrating visual perception with the real-time control of robotic end-effectors, improving the efficiency and accuracy of agricultural robots. This includes algorithms and systems that enhance the hand-eye coordination of robots in dynamic and unstructured environments.

Design and Control of Novel End-Effectors: Development and control of new end-effectors, such as soft robotics and biomimetic designs, that can handle delicate agricultural products with care and precision. Emphasis is placed on the application of these technologies in tasks like harvesting, pruning, and planting.

Multi-Arm Coordination: Innovations in the design and control of multi-arm robotic systems for agriculture, focusing on cooperative task execution and coordination among multiple robotic arms for complex agricultural operations.

Sensing and Reconstruction: Advanced techniques in sensing and environmental reconstruction, enabling robots to build accurate models of their surroundings for better navigation and decision-making. This includes 3D mapping, sensor fusion, and data-driven modeling approaches.

Motion Planning: Cutting-edge research in motion planning for agricultural robots, addressing challenges such as path optimization, obstacle avoidance, and safe navigation in dynamic farm environments.

We welcome both theoretical and applied research, including experimental studies, case studies, and reviews that provide insights into the future of agricultural robotics. This Special Issue seeks to bridge the gap between emerging technologies and their practical applications, fostering innovation in sustainable agriculture.

Prof. Dr. Jin Yuan
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agriculture is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • agricultural robotics
  • vision-based sensing
  • precision actuation
  • hand-eye coordination
  • end-effector design
  • multi-arm coordination
  • environmental perception
  • sensing and reconstruction
  • motion planning
  • soft robotics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (3 papers)

Research

30 pages, 24057 KiB  
Article
Enhancing Autonomous Orchard Navigation: A Real-Time Convolutional Neural Network-Based Obstacle Classification System for Distinguishing ‘Real’ and ‘Fake’ Obstacles in Agricultural Robotics
by Tabinda Naz Syed, Jun Zhou, Imran Ali Lakhiar, Francesco Marinello, Tamiru Tesfaye Gemechu, Luke Toroitich Rottok and Zhizhen Jiang
Agriculture 2025, 15(8), 827; https://doi.org/10.3390/agriculture15080827 - 10 Apr 2025
Abstract
Autonomous navigation in agricultural environments requires precise obstacle classification to ensure collision-free movement. This study proposes a convolutional neural network (CNN)-based model designed to enhance obstacle classification for agricultural robots, particularly in orchards. Building upon a previously developed YOLOv8n-based real-time detection system, the model incorporates Ghost Modules and Squeeze-and-Excitation (SE) blocks to enhance feature extraction while maintaining computational efficiency. Obstacles are categorized as "Real" (those that physically impact navigation, such as tree trunks and persons) and "Fake" (those that do not, such as tall weeds and tree branches), allowing for precise navigation decisions. The model was trained on separate orchard and campus datasets, fine-tuned using Hyperband optimization, and evaluated on an external test set to assess generalization to unseen obstacles. Its robustness was tested under varied lighting conditions, including low-light scenarios, to ensure real-world applicability, and computational efficiency was analyzed in terms of inference speed, memory consumption, and hardware requirements. Comparative analysis against state-of-the-art classification models (VGG16, ResNet50, MobileNetV3, DenseNet121, EfficientNetB0, and InceptionV3) confirmed the proposed model's superior precision, recall, and F1-score, particularly in complex orchard scenarios. The model maintained strong generalization across diverse environmental conditions, including varying illumination and previously unseen obstacles. Computational analysis showed that the orchard-combined model achieved the highest inference speed at 2.31 FPS while maintaining a strong balance between accuracy and efficiency. When deployed in real time, the model achieved 95.0% classification accuracy in orchards and 92.0% in campus environments, with a false positive rate of 8.0% on campus and 2.0% in the orchard and a consistent false negative rate of 8.0% in both. These results validate the model's effectiveness for real-time obstacle differentiation in agricultural settings; its strong generalization, robustness to unseen obstacles, and computational efficiency make it well suited for deployment in precision agriculture. Future work will focus on enhancing inference speed, improving performance under occlusion, and expanding dataset diversity to further strengthen real-world applicability.
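As a concrete illustration of the two building blocks named in the abstract, here is a minimal PyTorch sketch of a Ghost Module and a Squeeze-and-Excitation block. The channel sizes, reduction ratio, and placement inside the YOLOv8n backbone are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels by global context."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pooling
        return x * w.view(b, c, 1, 1)     # excite: per-channel rescaling

class GhostModule(nn.Module):
    """Ghost convolution: a full conv produces half the feature maps; a
    cheap depthwise conv generates the remaining 'ghost' maps, cutting
    FLOPs relative to a full convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        primary = out_ch // 2
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary, 1, bias=False),
            nn.BatchNorm2d(primary), nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(primary, out_ch - primary, 3, padding=1,
                      groups=primary, bias=False),
            nn.BatchNorm2d(out_ch - primary), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

# Usage: a Ghost stage followed by channel recalibration (hypothetical shapes)
feat = SEBlock(64)(GhostModule(32, 64)(torch.randn(1, 32, 80, 80)))
```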

38 pages, 15114 KiB  
Article
YS3AM: Adaptive 3D Reconstruction and Harvesting Target Detection for Clustered Green Asparagus
by Si Mu, Jian Liu, Ping Zhang, Jin Yuan and Xuemei Liu
Agriculture 2025, 15(4), 407; https://doi.org/10.3390/agriculture15040407 - 14 Feb 2025
Abstract
Green asparagus grows in clusters, which can cause overlaps with weeds and immature stems, making it difficult to identify suitable harvest targets and cutting points; extracting precise stem details in such complex spatial arrangements is a challenge. This paper explores the YS3AM (Yolo-SAM-3D-Adaptive-Modeling) method for detecting green asparagus and performing 3D adaptive-section modeling with a depth camera, which can benefit harvesting path planning for selective harvesting robots. Firstly, the detection model was developed and deployed to extract bounding boxes for individual asparagus stems within clusters. Secondly, the stems inside these bounding boxes were segmented and binary masks were generated. Thirdly, high-quality depth images were obtained through pixel block completion. Finally, a novel 3D reconstruction method based on adaptive section modeling, combining the mask and depth data, was proposed, together with an evaluation method to assess modeling accuracy. Experimental validation showed high-performance detection on 1095 field images (Precision: 98.75%, Recall: 95.46%, F1: 0.97) and robust 3D modeling of 103 asparagus stems (average RMSE: 0.74 for length, 1.105 for depth) under varying illumination conditions. The system processed each stem in 22 ms, enabling real-time operation. The results demonstrate that the 3D model accurately represents the spatial distribution of clustered green asparagus, enabling precise identification of harvest targets and cutting points, and provides the spatial pathways essential for end-effector path planning, thereby fulfilling the operational requirements of efficient green asparagus harvesting robots.
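To make the final reconstruction step more concrete, below is a minimal NumPy sketch of section-wise stem modeling from a binary mask and an aligned depth image: the mask is sliced into horizontal bands, and each band is back-projected to a 3D centre and radius through a pinhole camera model. The section count, per-band median depth, and width-based radius estimate are assumptions for illustration; the paper's adaptive-section formulation may differ.

```python
import numpy as np

def reconstruct_stem_sections(mask, depth, fx, fy, cx, cy, n_sections=20):
    """Section-wise 3D stem model from a binary mask and aligned depth (m).

    Slices the stem mask into horizontal bands; each band yields a 3D centre
    (camera frame, pinhole model) and a width-based radius estimate.
    Returns a list of (centre_xyz, radius) pairs, top to bottom.
    """
    rows = np.where(mask.any(axis=1))[0]
    if rows.size == 0:
        return []
    bands = np.array_split(np.arange(rows.min(), rows.max() + 1), n_sections)
    sections = []
    for band in bands:
        if band.size == 0:
            continue
        ys, xs = np.nonzero(mask[band[0]:band[-1] + 1])
        if xs.size == 0:
            continue
        ys = ys + band[0]
        z = np.median(depth[ys, xs])           # robust depth for this band
        u, v = xs.mean(), ys.mean()            # pixel centroid of the band
        centre = np.array([(u - cx) * z / fx,  # back-project to camera frame
                           (v - cy) * z / fy,
                           z])
        radius = (xs.max() - xs.min()) / 2 * z / fx  # pixel width -> metres
        sections.append((centre, radius))
    return sections
```

Stacking the per-section centres traces the stem's spatial axis, from which a cutting point at a chosen height could be selected for end-effector path planning.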

19 pages, 16555 KiB  
Article
WED-YOLO: A Detection Model for Safflower Under Complex Unstructured Environment
by Zhenguo Zhang, Yunze Wang, Peng Xu, Ruimeng Shi, Zhenyu Xing and Junye Li
Agriculture 2025, 15(2), 205; https://doi.org/10.3390/agriculture15020205 - 18 Jan 2025
Cited by 2
Abstract
Accurate safflower recognition is a critical research challenge in the field of automated safflower harvesting. The growing environment of safflowers, including variable weather in unstructured settings, varying shooting distances, and diverse morphological characteristics, presents significant difficulties for detection. To address these challenges and enable precise safflower recognition in complex environments, this study proposes an improved detection model, WED-YOLO, based on YOLOv8n. Firstly, the original bounding box loss function is replaced with Wise Intersection over Union (WIoU), a dynamic non-monotonic focusing mechanism, which enhances the model's bounding box fitting ability and accelerates network convergence. Then, the upsampling module in the network's neck is substituted with the more efficient and versatile dynamic upsampling module DySample to improve the precision of feature map upsampling. Meanwhile, the EMA attention mechanism is integrated into the C2f module of the backbone network to strengthen feature extraction. Finally, a small-target detection layer is added to the detection head, enabling the model to focus on small safflower targets. The model is trained and validated on a custom-built safflower dataset. Experimental results show that the improved model achieves Precision (P), Recall (R), mean Average Precision (mAP), and F1 score values of 93.15%, 86.71%, 95.03%, and 89.64%, respectively, improvements of 2.9%, 6.69%, 4.5%, and 6.22% over the baseline model. Compared with Faster R-CNN, YOLOv5, YOLOv7, and YOLOv10, WED-YOLO achieves the highest mAP, outperforming these models by 13.06%, 4.85%, 4.86%, and 4.82%, respectively. The enhanced model exhibits superior precision and a lower missed-detection rate in safflower recognition tasks, providing a robust algorithmic foundation for the intelligent harvesting of safflowers.
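The bounding-box loss named in the abstract can be made concrete with a short sketch. Below is a hedged PyTorch implementation of a Wise-IoU (v3) style loss for axis-aligned boxes in (x1, y1, x2, y2) form; the hyperparameters alpha and delta and the externally tracked running mean of the IoU loss follow the original WIoU formulation, but WED-YOLO's exact settings are an assumption here.

```python
import torch

def wiou_v3_loss(pred, target, iou_mean, alpha=1.9, delta=3.0):
    """Sketch of a Wise-IoU v3 loss: distance-based attention (v1) scaled by
    a dynamic non-monotonic focusing gain (v3). Boxes are (x1, y1, x2, y2);
    iou_mean is a running mean of the IoU loss maintained by the caller."""
    # IoU term
    lt = torch.max(pred[..., :2], target[..., :2])
    rb = torch.min(pred[..., 2:], target[..., 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    l_iou = 1 - inter / (area_p + area_t - inter + 1e-7)

    # v1 attention: centre distance over the enclosing-box diagonal (detached)
    c_pred = (pred[..., :2] + pred[..., 2:]) / 2
    c_tgt = (target[..., :2] + target[..., 2:]) / 2
    enc = (torch.max(pred[..., 2:], target[..., 2:])
           - torch.min(pred[..., :2], target[..., :2]))
    diag2 = (enc ** 2).sum(-1).detach()
    r_wiou = torch.exp(((c_pred - c_tgt) ** 2).sum(-1) / (diag2 + 1e-7))

    # v3 focusing: the outlier degree beta gives average-quality boxes the
    # largest gradient gain (non-monotonic in beta)
    beta = l_iou.detach() / (iou_mean + 1e-7)
    gain = beta / (delta * alpha ** (beta - delta))
    return (gain * r_wiou * l_iou).mean()
```

During training, iou_mean would typically be updated as an exponential moving average of l_iou, so the focusing gain adapts as overall box quality improves.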
