Search Results (879)

Search Parameters:
Keywords = robotics in agriculture

44 pages, 2025 KB  
Review
Precision Farming with Smart Sensors: Current State, Challenges and Future Outlook
by Bonface O. Manono, Boniface Mwami, Sylvester Mutavi and Faith Nzilu
Sensors 2026, 26(3), 882; https://doi.org/10.3390/s26030882 - 29 Jan 2026
Abstract
The agricultural sector, a vital industry for human survival and a primary source of food and raw materials, faces increasing pressure due to global population growth and environmental strains. Productivity, efficiency, and sustainability constraints are preventing traditional farming methods from adequately meeting the growing demand for food. Precision farming has emerged as a transformative paradigm to address these issues. It integrates advanced technologies to improve decision making, optimize yield, and conserve resources. This approach leverages technologies such as wireless sensor networks, the Internet of Things (IoT), robotics, drones, artificial intelligence (AI), and cloud computing to provide effective and cost-efficient agricultural services. Smart sensor technologies are foundational to precision farming. They offer crucial information regarding soil conditions, plant growth, and environmental factors in real time. This review explores the status, challenges, and prospects of smart sensor technologies in precision farming. The integration of smart sensors with the IoT and AI has significantly transformed how agricultural data is collected, analyzed, and utilized to optimize yield, conserve resources, and enhance overall farm efficiency. The review delves into various types of smart sensors used, their applications, and emerging technologies that promise to further innovate data acquisition and decision making in agriculture. Despite progress, challenges persist. They include sensor calibration, data privacy, interoperability, and adoption barriers. To fully realize the potential of smart sensors in ensuring global food security and promoting sustainable farming, the challenges need to be addressed.
(This article belongs to the Section Smart Agriculture)

33 pages, 10103 KB  
Article
A Visual Navigation Path Extraction Method for Complex and Variable Agricultural Scenarios Based on AFU-Net and Key Contour Point Constraints
by Jin Lu, Zhao Wang, Jin Wang, Zhongji Cao, Jia Zhao and Minjie Zhang
Agriculture 2026, 16(3), 324; https://doi.org/10.3390/agriculture16030324 - 28 Jan 2026
Abstract
In intelligent unmanned agricultural machinery research, navigation line extraction in natural field/orchard environments is critical for autonomous operation. Existing methods still face two prominent challenges: (1) Dynamic shooting perspective shifts caused by natural environmental interference lead to geometric distortion of image features, making it difficult to acquire high-precision navigation features; (2) Symmetric distribution of crop row boundaries hinders traditional algorithms from accurately extracting effective navigation trajectories, resulting in insufficient accuracy and reliability. To address these issues, this paper proposes an environment-adaptive navigation path extraction method for multi-type agricultural scenarios, consisting of two core components: an Attention-Feature-Enhanced U-Net (AFU-Net) for semantic segmentation of navigation feature regions, and a key-point constraint-based adaptive navigation line extraction algorithm. AFU-Net improves the U-Net framework by embedding Efficient Channel Attention (ECA) modules at the ends of Encoders 1–3 to enhance feature expression, and replacing Encoder 4 with a cascaded Semantic Aware Multi-scale Enhancement (SAME) module. Trained and tested on both our KVW dataset and Yu’s field dataset, our method achieves outstanding performance: On the KVW dataset, AFU-Net attains a Mean Intersection over Union (MIoU) of 97.55% and a real-time inference speed of 32.60 FPS with only 3.95 M parameters, outperforming state-of-the-art models. On Yu’s field dataset, it maintains an MIoU of 95.20% and 16.30 FPS. Additionally, compared with traditional navigation line extraction algorithms, the proposed adaptive algorithm reduces the mean absolute yaw angle error (mAYAE) to 2.06° in complex scenarios. This research exhibits strong adaptability and robustness, providing reliable technical support for the precise navigation of intelligent agricultural machinery across multiple agricultural scenarios.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
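
The ECA modules mentioned in the abstract above are a standard channel-attention design; the snippet below is a minimal sketch of such a block, assuming a PyTorch-style implementation with an illustrative kernel size — it is not the authors' code.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: channel weights from a 1-D convolution
    over globally pooled channel descriptors (no dimensionality reduction)."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                           # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                      # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1))               # 1-D conv across channels -> (B, 1, C)
        w = self.sigmoid(y).squeeze(1)              # channel weights in (0, 1)
        return x * w.unsqueeze(-1).unsqueeze(-1)    # rescale each feature map
```
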
24 pages, 5280 KB  
Article
MA-DeepLabV3+: A Lightweight Semantic Segmentation Model for Jixin Fruit Maturity Recognition
by Leilei Deng, Jiyu Xu, Di Fang and Qi Hou
AgriEngineering 2026, 8(2), 40; https://doi.org/10.3390/agriengineering8020040 - 23 Jan 2026
Abstract
Jixin fruit (Malus domestica ‘Jixin’) is a high-value specialty fruit of significant economic importance in northeastern and northwestern China. Automatic recognition of fruit maturity is a critical prerequisite for intelligent harvesting. However, challenges inherent to field environments—including heterogeneous ripeness levels among fruits on the same plant, gradual color transitions during maturation that result in ambiguous boundaries, and occlusion by branches and foliage—render traditional image recognition methods inadequate for simultaneously achieving high recognition accuracy and computational efficiency. Although existing deep learning models can improve recognition accuracy, their substantial computational demands and high hardware requirements preclude deployment on resource-constrained embedded devices such as harvesting robots. To achieve the rapid and accurate identification of Jixin fruit maturity, this study proposes Multi-Attention DeepLabV3+ (MA-DeepLabV3+), a streamlined semantic segmentation framework derived from an enhanced DeepLabV3+ model. First, a lightweight backbone network is adopted to replace the original complex structure, substantially reducing computational burden. Second, a Multi-Scale Self-Attention Module (MSAM) is proposed to replace the traditional Atrous Spatial Pyramid Pooling (ASPP) structure, reducing network computational cost while enhancing the model’s perception capability for fruits of different scales. Finally, an Attention and Convolution Fusion Module (ACFM) is introduced in the decoding stage to significantly improve boundary segmentation accuracy and small target recognition ability. Experimental results on a self-constructed Jixin fruit dataset demonstrated that the proposed MA-DeepLabV3+ model achieves an mIoU of 86.13%, mPA of 91.29%, and F1 score of 90.05%, while reducing the number of parameters by 89.8% and computational cost by 55.3% compared to the original model. The inference speed increased from 41 frames per second (FPS) to 81 FPS, representing an approximately two-fold improvement. The model memory footprint is only 21 MB, demonstrating potential for deployment on embedded devices such as harvesting robots. Experimental results demonstrate that the proposed model achieves significant reductions in computational complexity while maintaining high segmentation accuracy, exhibiting robust performance particularly in complex scenarios involving color gradients, ambiguous boundaries, and occlusion. This study provides technical support for the development of intelligent Jixin fruit harvesting equipment and offers a valuable reference for the application of lightweight deep learning models in smart agriculture.

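For reference, the mIoU and mPA figures reported above are conventional segmentation metrics; the sketch below shows how they are typically computed from a per-class confusion matrix (illustrative only; the paper's evaluation code is not given in the abstract).

```python
import numpy as np

def segmentation_metrics(conf):
    """conf[i, j] = pixels of ground-truth class i predicted as class j."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp                 # wrongly predicted as the class
    fn = conf.sum(axis=1) - tp                 # pixels of the class that were missed
    iou = tp / np.maximum(tp + fp + fn, 1e-12)
    pa = tp / np.maximum(tp + fn, 1e-12)       # per-class pixel accuracy
    return {"mIoU": iou.mean(), "mPA": pa.mean()}

# toy 3-class example (e.g., background / unripe / ripe)
print(segmentation_metrics([[950, 30, 20],
                            [25, 880, 15],
                            [10, 20, 900]]))
```
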
25 pages, 4225 KB  
Article
Proactive Path Planning Using Centralized UAV-UGV Coordination in Semi-Structured Agricultural Environments
by Dimitris Katikaridis, Lefteris Benos, Dimitrios Kateris, Elpiniki Papageorgiou, George Karras, Ioannis Menexes, Remigio Berruto, Claus Grøn Sørensen and Dionysis Bochtis
Appl. Sci. 2026, 16(2), 1143; https://doi.org/10.3390/app16021143 - 22 Jan 2026
Abstract
Unmanned ground vehicles (UGVs) in agriculture face challenges in navigating complex environments due to the presence of dynamic obstacles. This causes several practical problems including mission delays, higher energy consumption, and potential safety risks. This study addresses the challenge by shifting path planning from reactive local avoidance to proactive global optimization. To that end, it integrates aerial imagery from an unmanned aerial vehicle (UAV) to identify dynamic obstacles using a low-latency YOLOv8 detection pipeline. These are translated into georeferenced exclusion zones for the UGV. The UGV follows the optimized path while relying on a LiDAR-based reactive protocol to autonomously detect and respond to any missed obstacles. A farm management information system is used as the central coordinator. The system was tested in 30 real-field trials in a walnut orchard for two distinct scenarios with varying worker and vehicle loads. The system achieved high mission success, with the UGV completing all tasks safely; four partial successes were caused by worker detection failures under afternoon shadows. UAV energy consumption remained stable, while UGV energy and mission time increased during reactive maneuvers. Communication latency was low and consistent. This enabled timely execution of both proactive and reactive navigation protocols. In conclusion, the present UAV–UGV system ensured efficient and safe navigation, demonstrating practical applicability in real orchard conditions.
(This article belongs to the Special Issue The Use of Evolutionary Algorithms in Robotics)

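One way to picture the georeferenced exclusion zones described above is as inflated circles in a local metric frame that candidate waypoints must clear; the sketch below illustrates that check under assumed zone geometry — the paper's actual zone representation and planner are not specified in the abstract.

```python
import math

def violates_exclusion(waypoint, zones, clearance=1.5):
    """waypoint: (x, y) in metres in a local frame; zones: list of (x, y, radius)
    circles built from georeferenced detections. Returns True if the waypoint
    lies inside any inflated exclusion zone."""
    wx, wy = waypoint
    for zx, zy, r in zones:
        if math.hypot(wx - zx, wy - zy) < r + clearance:
            return True
    return False

# hypothetical detections: a worker and a parked vehicle
zones = [(12.0, 4.5, 0.8), (30.2, -2.0, 2.5)]
path = [(0, 0), (10, 3), (12.5, 4.0), (25, -1), (40, 0)]
blocked = [wp for wp in path if violates_exclusion(wp, zones)]
print(blocked)   # waypoints that would have to be re-planned around
```
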
14 pages, 4270 KB  
Article
Dual-Arm Coordination of a Tomato Harvesting Robot with Subtask Decoupling and Synthesizing
by Binhao Chen, Liang Gong, Shenghan Xie, Xuhao Zhao, Peixin Gao, Hefei Luo, Cheng Luo, Yanming Li and Chengliang Liu
Agriculture 2026, 16(2), 267; https://doi.org/10.3390/agriculture16020267 - 21 Jan 2026
Abstract
Robotic harvesters have the potential to substantially reduce the physical workload of agricultural laborers. However, in complex agricultural environments, traditional single-arm robot path planning methods often struggle to accomplish fruit harvesting tasks due to the presence of collision avoidance requirements and orientation constraints during grasping. In this work, we design a dual-arm tomato harvesting robot and propose a reinforcement learning-based cooperative control algorithm tailored to the dual-arm system. First, a deep learning-based semantic segmentation network is employed to extract the spatial locations of tomatoes and branches from sensory data. Building upon this perception module, we develop a reinforcement learning-based cooperative path planning approach to address inter-arm collision avoidance and end-effector orientation constraints during the harvesting process. Furthermore, a task-driven policy network architecture is introduced to decouple the complex harvesting task into structured subproblems, thereby enabling more efficient learning and improved performance. Simulation and experimental results demonstrate that the proposed method can generate collision-free harvesting trajectories that satisfy dual-arm orientation constraints, significantly improving the tomato harvesting success rate.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

27 pages, 1619 KB  
Article
Uncertainty-Aware Multimodal Fusion and Bayesian Decision-Making for DSS
by Vesna Antoska Knights, Marija Prchkovska, Luka Krašnjak and Jasenka Gajdoš Kljusurić
AppliedMath 2026, 6(1), 16; https://doi.org/10.3390/appliedmath6010016 - 20 Jan 2026
Abstract
Uncertainty-aware decision-making increasingly relies on multimodal sensing pipelines that must fuse correlated measurements, propagate uncertainty, and trigger reliable control actions. This study develops a unified mathematical framework for multimodal data fusion and Bayesian decision-making under uncertainty. The approach integrates adaptive Covariance Intersection (aCI) for correlation-robust sensor fusion, a Gaussian state–space backbone with Kalman filtering, heteroskedastic Bayesian regression with full posterior sampling via an affine-invariant MCMC sampler, and a Bayesian likelihood-ratio test (LRT) coupled to a risk-sensitive proportional–derivative (PD) control law. Theoretical guarantees are provided by bounding the state covariance under stability conditions, establishing convexity of the aCI weight optimization on the simplex, and deriving a Bayes-risk-optimal decision threshold for the LRT under symmetric Gaussian likelihoods. A proof-of-concept agro-environmental decision-support application is considered, where heterogeneous data streams (IoT soil sensors, meteorological stations, and drone-derived vegetation indices) are fused to generate early-warning alarms for crop stress and to adapt irrigation and fertilization inputs. The proposed pipeline reduces predictive variance and sharpens posterior credible intervals (up to 34% narrower 95% intervals and 44% lower NLL/Brier score under heteroskedastic modeling), while a Bayesian uncertainty-aware controller achieves 14.2% lower water usage and 35.5% fewer false stress alarms compared to a rule-based strategy. The framework is mathematically grounded yet domain-independent, providing a probabilistic pipeline that propagates uncertainty from raw multimodal data to operational control actions, and can be transferred beyond agriculture to robotics, signal processing, and environmental monitoring applications.
(This article belongs to the Section Probabilistic & Statistical Mathematics)

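Covariance Intersection, the fusion rule behind the adaptive CI step above, combines two estimates with unknown cross-correlation via P⁻¹ = ω P₁⁻¹ + (1 − ω) P₂⁻¹. The sketch below selects ω by minimising the trace of the fused covariance over a grid, a common criterion used here as an assumption; the paper's adaptive weighting scheme is not detailed in the abstract.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two estimates with unknown cross-correlation via Covariance
    Intersection, choosing the weight that minimises trace(P_fused)."""
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        info = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        x = P @ (w * np.linalg.inv(P1) @ x1 + (1.0 - w) * np.linalg.inv(P2) @ x2)
        if best is None or np.trace(P) < best[2]:
            best = (x, P, np.trace(P))
    return best[0], best[1]

# toy example: soil-moisture estimate from an IoT probe vs. a drone-derived index
x1, P1 = np.array([0.31]), np.array([[0.004]])
x2, P2 = np.array([0.27]), np.array([[0.010]])
x, P = covariance_intersection(x1, P1, x2, P2)
print(x, P)
```
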
26 pages, 925 KB  
Review
Integrating Artificial Intelligence and Machine Learning for Sustainable Development in Agriculture and Allied Sectors of the Temperate Himalayas
by Arnav Saxena, Mir Faiq, Shirin Ghatrehsamani and Syed Rameem Zahra
AgriEngineering 2026, 8(1), 35; https://doi.org/10.3390/agriengineering8010035 - 19 Jan 2026
Abstract
The temperate Himalayan states of Jammu and Kashmir, Himachal Pradesh, Uttarakhand, Ladakh, Sikkim, and Arunachal Pradesh in India face unique agro-ecological challenges across agriculture and allied sectors, including pest and disease pressures, inefficient resource use, post-harvest losses, and fragmented supply chains. This review systematically examines 21 critical problem areas, with three key challenges identified per sector across agriculture, agricultural engineering, fisheries, forestry, horticulture, sericulture, and animal husbandry. Artificial Intelligence (AI) and Machine Learning (ML) interventions, including computer vision, predictive modeling, Internet of Things (IoT)-based monitoring, robotics, and blockchain-enabled traceability, are evaluated for their regional applicability, pilot-level outcomes, and operational limitations under temperate Himalayan conditions. The analysis highlights that AI-enabled solutions demonstrate strong potential for early pest and disease detection, improved resource-use efficiency, ecosystem monitoring, and market integration. However, large-scale adoption remains constrained by limited digital infrastructure, data scarcity, high capital costs, low digital literacy, and fragmented institutional frameworks. The novelty of this review lies in its cross-sectoral synthesis of AI/ML applications tailored to the Himalayan context, combined with a sector-wise revenue-loss assessment to quantify economic impacts and guide prioritization. Based on the identified gaps, the review proposes feasible, context-aware strategies, including lightweight edge-AI models, localized data platforms, capacity-building initiatives, and policy-aligned implementation pathways. Collectively, these recommendations aim to enhance sustainability, resilience, and livelihood security across agriculture and allied sectors in the temperate Himalayan region.

17 pages, 2852 KB  
Article
A Lightweight Edge-AI System for Disease Detection and Three-Level Leaf Spot Severity Assessment in Strawberry Using YOLOv10n and MobileViT-S
by Raikhan Amanova, Baurzhan Belgibayev, Madina Mansurova, Madina Suleimenova, Gulshat Amirkhanova and Gulnur Tyulepberdinova
Computers 2026, 15(1), 63; https://doi.org/10.3390/computers15010063 - 16 Jan 2026
Abstract
Mobile edge-AI plant monitoring systems enable automated disease control in greenhouses and open fields, reducing dependence on manual inspection and the variability of visual diagnostics. This paper proposes a lightweight two-stage edge-AI system for strawberries, in which a YOLOv10n detector on board a mobile agricultural robot locates leaves affected by seven common diseases (including Leaf Spot) with real-time capability on an embedded platform. Patches are then automatically extracted for leaves classified as Leaf Spot and transmitted to the second module—a compact MobileViT-S-based classifier with ordinal output that assesses the severity of Leaf Spot on three levels (S1—mild, S2—moderate, S3—severe) on a specialised set of 373 manually labelled leaf patches. In a comparative experiment with lightweight architectures ResNet-18, EfficientNet-B0, MobileNetV3-Small and Swin-Tiny, the proposed Ordinal MobileViT-S demonstrated the highest accuracy in assessing the severity of Leaf Spot (accuracy ≈ 0.97 with 4.9 million parameters), surpassing both the baseline models and the standard MobileViT-S with a cross-entropy loss function. On the original image set, the YOLOv10n detector achieves an mAP@0.5 of 0.960, an F1 score of 0.93 and a recall of 0.917, ensuring reliable detection of affected leaves for subsequent Leaf Spot severity assessment. The results show that the “YOLOv10n + Ordinal MobileViT-S” cascade provides practical severity-aware Leaf Spot diagnosis on a mobile agricultural robot and can serve as the basis for real-time strawberry crop health monitoring systems.

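The abstract does not specify the ordinal output used for the three severity levels; a widely used formulation is a CORAL-style head with K − 1 shared-weight binary thresholds, sketched below in PyTorch purely as an assumed illustration rather than the authors' design.

```python
import torch
import torch.nn as nn

class OrdinalHead(nn.Module):
    """CORAL-style ordinal head: one shared logit plus K-1 learnable biases,
    giving monotone probabilities P(y > k) for severity levels S1 < S2 < S3."""
    def __init__(self, in_features: int, num_classes: int = 3):
        super().__init__()
        self.fc = nn.Linear(in_features, 1, bias=False)    # shared projection
        self.biases = nn.Parameter(torch.zeros(num_classes - 1))

    def forward(self, feats):                    # feats: (B, in_features)
        logits = self.fc(feats) + self.biases    # (B, K-1)
        return torch.sigmoid(logits)             # P(y > S1), P(y > S2)

def decode(probs, threshold=0.5):
    """Predicted level = number of thresholds exceeded (0 -> S1, 1 -> S2, 2 -> S3)."""
    return (probs > threshold).sum(dim=1)
```
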
27 pages, 4407 KB  
Systematic Review
Artificial Intelligence in Agri-Robotics: A Systematic Review of Trends and Emerging Directions Leveraging Bibliometric Tools
by Simona Casini, Pietro Ducange, Francesco Marcelloni and Lorenzo Pollini
Robotics 2026, 15(1), 24; https://doi.org/10.3390/robotics15010024 - 15 Jan 2026
Abstract
Agricultural robotics and artificial intelligence (AI) are becoming essential to building more sustainable, efficient, and resilient food systems. As climate change, food security pressures, and labour shortages intensify, the integration of intelligent technologies in agriculture has gained strategic importance. This systematic review provides a consolidated assessment of AI and robotics research in agriculture from 2000 to 2025, identifying major trends, methodological trajectories, and underexplored domains. A structured search was conducted in the Scopus database—which was selected for its broad coverage of engineering, computer science, and agricultural technology—and records were screened using predefined inclusion and exclusion criteria across title, abstract, keywords, and eligibility levels. The final dataset was analysed through descriptive statistics and science-mapping techniques (VOSviewer, SciMAT). Out of 4894 retrieved records, 3673 studies met the eligibility criteria and were included. As with all bibliometric reviews, the synthesis reflects the scope of indexed publications and available metadata, and potential selection bias was mitigated through a multi-stage screening workflow. The analysis revealed four dominant research themes: deep-learning-based perception, UAV-enabled remote sensing, data-driven decision systems, and precision agriculture. Several strategically relevant but underdeveloped areas also emerged, including soft manipulation, multimodal sensing, sim-to-real transfer, and adaptive autonomy. Geographical patterns highlight a strong concentration of research in China and India, reflecting agricultural scale and investment dynamics. Overall, the field appears technologically mature in perception and aerial sensing but remains limited in physical interaction, uncertainty-aware control, and long-term autonomous operation. These gaps indicate concrete opportunities for advancing next-generation AI-driven robotic systems in agriculture. Funding sources are reported in the full manuscript.
(This article belongs to the Special Issue Smart Agriculture with AI and Robotics)

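Science-mapping tools such as VOSviewer and SciMAT, mentioned above, are driven by keyword co-occurrence counts; the sketch below shows the basic construction from screened records as background only — it is not the authors' pipeline.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(records):
    """records: list of keyword lists, one per screened publication.
    Returns a Counter mapping unordered keyword pairs to co-occurrence counts."""
    pairs = Counter()
    for kws in records:
        for a, b in combinations(sorted(set(k.lower() for k in kws)), 2):
            pairs[(a, b)] += 1
    return pairs

records = [
    ["deep learning", "UAV", "precision agriculture"],
    ["UAV", "remote sensing", "precision agriculture"],
    ["deep learning", "robotics", "harvesting"],
]
print(cooccurrence(records).most_common(3))
```
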
5 pages, 1197 KB  
Proceeding Paper
Experimental Assessment of Autonomous Fleet Operations for Precision Viticulture Under Real Vineyard Conditions
by Gavriela Asiminari, Vasileios Moysiadis, Dimitrios Kateris, Aristotelis C. Tagarakis, Athanasios Balafoutis and Dionysis Bochtis
Proceedings 2026, 134(1), 47; https://doi.org/10.3390/proceedings2026134047 - 14 Jan 2026
Abstract
The increase in global population and climatic instability places unprecedented demands on agricultural productivity. Autonomous robotic systems, specifically unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), provide potential solutions by enhancing precision viticulture operations. This work presents the experimental evaluation of a heterogeneous robotic fleet composed of Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs), operating autonomously under real-world vineyard conditions. Over the course of a full growing season, the fleet demonstrated effective autonomous navigation, environment sensing, and data acquisition. More than 4 UGV missions and 10 UAV flights were successfully completed, achieving a 95% data acquisition rate and mapping resolution of 2.5 cm/pixel. Vegetation indices and thermal imagery enabled accurate detection of water stress and crop vigor. These capabilities enabled high-resolution mapping and agricultural task execution, contributing significantly to operational efficiency and sustainability in viticulture.

28 pages, 30101 KB  
Article
Machine Learning-Driven Soil Fungi Identification Using Automated Imaging Techniques
by Karol Struniawski, Ryszard Kozera, Aleksandra Konopka, Lidia Sas-Paszt and Agnieszka Marasek-Ciolakowska
Appl. Sci. 2026, 16(2), 855; https://doi.org/10.3390/app16020855 - 14 Jan 2026
Abstract
Soilborne fungi (Fusarium, Trichoderma, Verticillium, Purpureocillium) critically impact agricultural productivity, disease dynamics, and soil health, requiring rapid identification for precision agriculture. Current diagnostics require labor-intensive microscopy or expensive molecular assays (up to 10 days), while existing ML studies suffer from small datasets (<500 images), expert selection bias, and lack of public availability. A fully automated identification system integrating robotic microscopy (Keyence VHX-700) with deep learning was developed. The Soil Fungi Microscopic Images Dataset (SFMID) comprises 20,151 images (11,511 no-water, 8640 water-based)—the largest publicly available soil fungi dataset. Four CNN architectures (InceptionResNetV2, ResNet152V2, DenseNet121, DenseNet201) were evaluated with transfer learning and three-shot majority voting. Grad-CAM analysis validated biological relevance. ResNet152V2 conv2 achieved optimal SFMID-NW performance (precision: 0.6711; AUC: 0.8031), with real-time inference (20 ms, 48–49 images/second). Statistical validation (McNemar’s test: χ² = 27.34, p < 0.001) confirmed that three-shot classification significantly outperforms single-image prediction. Confusion analysis identified Fusarium–Trichoderma (no-water) and Fusarium–Verticillium (water-based) challenges, indicating morphological ambiguities. The publicly available SFMID provides a scalable foundation for AI-enhanced agricultural diagnostics.
(This article belongs to the Special Issue Latest Research on Computer Vision and Image Processing)

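Two evaluation details mentioned above — three-shot majority voting and McNemar's test — are standard procedures; the sketch below illustrates both with hypothetical counts (the continuity-corrected McNemar statistic is one common variant, assumed rather than confirmed by the abstract).

```python
from collections import Counter

def three_shot_vote(predictions):
    """Majority vote over per-image class predictions for one specimen,
    e.g. ['Fusarium', 'Fusarium', 'Trichoderma'] -> 'Fusarium'."""
    return Counter(predictions).most_common(1)[0][0]

def mcnemar_chi2(b, c):
    """Continuity-corrected McNemar statistic from the discordant counts:
    b = single-shot correct & three-shot wrong, c = the reverse."""
    return (abs(b - c) - 1) ** 2 / (b + c)

print(three_shot_vote(["Fusarium", "Fusarium", "Trichoderma"]))
print(round(mcnemar_chi2(40, 110), 2))   # hypothetical discordant counts
```
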
18 pages, 11774 KB  
Article
Retrieval Augment: Robust Path Planning for Fruit-Picking Robot Based on Real-Time Policy Reconstruction
by Binhao Chen, Shuo Zhang, Zichuan He and Liang Gong
Sustainability 2026, 18(2), 829; https://doi.org/10.3390/su18020829 - 14 Jan 2026
Abstract
The working environment of fruit-picking robots is highly complex, involving numerous obstacles such as branches. Sampling-based algorithms like Rapidly Exploring Random Trees (RRTs) are fast but suffer from low success rates and poor path quality. Deep reinforcement learning (DRL) has excelled in high-degree-of-freedom (DOF) robot path planning, but typically requires substantial computational resources and long training cycles, which limits its applicability in resource-constrained and large-scale agricultural deployments. However, picking robot agents trained by DRL underperform because of the complexity and dynamics of the picking scenes. We propose a real-time policy reconstruction method based on experience retrieval to augment an agent trained by DRL. The key idea is to optimize the agent’s policy during inference rather than retraining, thereby reducing training cost, energy consumption, and data requirements, which are critical factors for sustainable agricultural robotics. We first use Soft Actor–Critic (SAC) to train the agent with simple picking tasks and fewer episodes. When faced with complex picking tasks, instead of retraining the agent, we reconstruct its policy by retrieving experience from similar tasks and revising actions in real time, which is implemented specifically by real-time action evaluation and rejection sampling. Overall, the agent evolves into an augmented agent through policy reconstruction, enabling it to perform much better in complex tasks with narrow passages and dense obstacles than the original agent. We test our method both in simulation and in the real world. Results show that the augmented agent outperforms the original agent and sampling-based algorithms such as BIT* and AIT* in terms of success rate (+133.3%) and path quality (+60.4%), demonstrating its potential to support reliable, scalable, and sustainable fruit-picking automation.
(This article belongs to the Section Sustainable Agriculture)

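A rough reading of the "real-time action evaluation and rejection sampling" step described above: sample candidate actions from the trained policy, score them with the learned critic, discard low-value candidates, and execute the best survivor. The sketch below is an assumed illustration with stand-in policy/critic interfaces, not the authors' implementation.

```python
import numpy as np

def revise_action(policy_sample, q_value, state, n_candidates=32, keep_quantile=0.8):
    """policy_sample(state) -> action drawn from the SAC policy;
    q_value(state, action) -> critic estimate. Returns the highest-value
    candidate after rejecting those below the keep_quantile threshold."""
    candidates = [policy_sample(state) for _ in range(n_candidates)]
    scores = np.array([q_value(state, a) for a in candidates])
    threshold = np.quantile(scores, keep_quantile)       # rejection step
    survivors = [(s, a) for s, a in zip(scores, candidates) if s >= threshold]
    return max(survivors, key=lambda sa: sa[0])[1]

# toy stand-ins for the learned policy and critic
rng = np.random.default_rng(0)
policy = lambda s: rng.normal(size=6)                    # 6-DOF joint increments
critic = lambda s, a: -np.linalg.norm(a - 0.1)           # prefers small, near-zero moves
print(revise_action(policy, critic, state=None))
```
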
17 pages, 11104 KB  
Article
Lightweight Improvements to the Pomelo Image Segmentation Method for Yolov8n-seg
by Zhen Li, Baiwei Cao, Zhengwei Yu, Qingting Jin, Shilei Lyu, Xiaoyi Chen and Danting Mao
Agriculture 2026, 16(2), 186; https://doi.org/10.3390/agriculture16020186 - 12 Jan 2026
Abstract
Instance segmentation in agricultural robotics requires a balance between real-time performance and accuracy. This study proposes a lightweight pomelo image segmentation method based on the YOLOv8n-seg model integrated with the RepGhost module. A pomelo dataset consisting of 5076 samples was constructed through systematic image acquisition, annotation, and data augmentation. The RepGhost architecture was incorporated into the C2f module of the YOLOv8-seg backbone network to enhance feature reuse capabilities while reducing computational complexity. Experimental results demonstrate that the YOLOv8-seg-RepGhost model enhances efficiency without compromising accuracy: parameter count is reduced by 16.5% (from 3.41 M to 2.84 M), computational load decreases by 14.8% (from 12.8 GFLOPs to 10.9 GFLOPs), and inference time is shortened by 6.3% (to 15 ms). The model maintains excellent detection performance with bounding box mAP50 at 97.75% and mask mAP50 at 97.51%. The research achieves both high segmentation efficiency and detection accuracy, offering core support for developing visual systems in harvesting robots and providing an effective solution for deep learning-based fruit target recognition and automated harvesting applications.
(This article belongs to the Special Issue Advances in Precision Agriculture in Orchard)

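RepGhost builds on the Ghost-module idea of producing part of the feature maps with cheap depthwise operations; the sketch below shows the plain Ghost variant (concatenation of a primary 1×1 convolution and a cheap depthwise branch) as background — the re-parameterised, addition-based RepGhost block used in the paper is not reproduced here.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost-style block: half of the output channels come from a normal 1x1
    conv, the other half from a cheap depthwise 3x3 applied to that output.
    Assumes an even number of output channels."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        primary = out_ch // 2
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary, 1, bias=False),
            nn.BatchNorm2d(primary), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(primary, out_ch - primary, 3, padding=1, groups=primary, bias=False),
            nn.BatchNorm2d(out_ch - primary), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)   # (B, out_ch, H, W)
```
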
28 pages, 9738 KB  
Article
Design and Evaluation of an Underactuated Rigid–Flexible Coupled End-Effector for Non-Destructive Apple Harvesting
by Zeyi Li, Zhiyuan Zhang, Jingbin Li, Gang Hou, Xianfei Wang, Yingjie Li, Huizhe Ding and Yufeng Li
Agriculture 2026, 16(2), 178; https://doi.org/10.3390/agriculture16020178 - 10 Jan 2026
Abstract
In response to the growing need for efficient, stable, and non-destructive gripping in apple harvesting robots, this study proposes a novel rigid–flexible coupled end-effector. The design integrates an underactuated mechanism with a real-time force feedback control system. First, compression tests on ‘Red Fuji’ apples determined the minimum damage threshold to be 24.33 N. A genetic algorithm (GA) was employed to optimize the geometric parameters of the finger mechanism for uniform force distribution. Subsequently, a rigid–flexible coupled multibody dynamics model was established to simulate the grasping of small (70 mm), medium (80 mm), and large (90 mm) apples. Additionally, a harvesting experimental platform was constructed to verify the performance. Results demonstrated that by limiting the contact force of the distal phalange region silicone (DPRS) to 24 N via active feedback, the peak contact forces on the proximal phalange region silicone (PPRS) and middle phalange region silicone (MPRS) were effectively maintained below the damage threshold across all three sizes. The maximum equivalent stress remained significantly below the fruit’s yield limit, ensuring no mechanical damage occurred, with an average enveloping time of approximately 1.30 s. The experimental data showed strong agreement with the simulation, with a mean absolute percentage error (MAPE) of 5.98% for contact force and 5.40% for enveloping time. These results confirm that the proposed end-effector successfully achieves high adaptability and reliability in non-destructive harvesting, offering a valuable reference for agricultural robotics.
(This article belongs to the Section Agricultural Technology)

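The active force-feedback behaviour described above (capping the distal-pad contact force at 24 N) can be pictured as a simple closed-loop clamp on the gripper drive. The sketch below is a schematic control tick with assumed sensor and motor interfaces, not the authors' controller.

```python
FORCE_LIMIT_N = 24.0          # distal-pad limit below the 24.33 N damage threshold

def grasp_step(read_dprs_force, command_motor, closing_rate=0.5, gain=0.05):
    """One control tick: keep closing until the distal pad reaches the limit,
    then back off proportionally to the overshoot. Sensor and motor functions
    are assumed interfaces (force in N, command in normalised velocity)."""
    f = read_dprs_force()
    if f < FORCE_LIMIT_N:
        command_motor(closing_rate)                  # continue enveloping the fruit
    else:
        command_motor(-gain * (f - FORCE_LIMIT_N))   # relieve excess contact force
    return f

# example tick with stubbed hardware
print(grasp_step(lambda: 18.2, lambda v: None))
```
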
28 pages, 9392 KB  
Article
Analysis Method and Experiment on the Influence of Hard Bottom Layer Contour on Agricultural Machinery Motion Position and Posture Changes
by Tuanpeng Tu, Xiwen Luo, Lian Hu, Jie He, Pei Wang, Peikui Huang, Runmao Zhao, Gaolong Chen, Dawen Feng, Mengdong Yue, Zhongxian Man, Xianhao Duan, Xiaobing Deng and Jiajun Mo
Agriculture 2026, 16(2), 170; https://doi.org/10.3390/agriculture16020170 - 9 Jan 2026
Abstract
The hard bottom layer in paddy fields significantly impacts the driving stability, operational quality, and efficiency of agricultural machinery. Continuously improving the precision and efficiency of unmanned, precision operations for paddy field machinery is essential for realizing unmanned smart rice farms. Addressing the unclear influence patterns of hard bottom contours on typical scenarios of agricultural machinery motion and posture changes, this paper employs a rice transplanter chassis equipped with GNSS and AHRS. It proposes methods for acquiring motion state information and hard bottom contour data during agricultural operations, establishing motion state expression models for key points on the machinery antenna, bottom of the wheel, and rear axle center. A correlation analysis method between motion state and hard bottom contour parameters was established, revealing the influence mechanisms of typical hard bottom contours on machinery trajectory deviation, attitude response, and wheel trapping. Results indicate that hard bottom contour height and local roughness exert extremely significant effects on agricultural machinery heading deviation and lateral movement. Heading variation positively correlates with ridge height and negatively with wheel diameter. The constructed mathematical model for heading variation based on hard bottom contour height difference and wheel diameter achieves a coefficient of determination R² of 0.92. The roll attitude variation in agricultural machinery is primarily influenced by the terrain characteristics encountered by rear wheels. A theoretical model was developed for the offset displacement of the antenna position relative to the horizontal plane during roll motion. The accuracy of lateral deviation detection using the posture-corrected rear axle center and bottom of the wheel center improved by 40.7% and 39.0%, respectively, compared to direct measurement using the positioning antenna. During typical vehicle-trapping events, a segmented discrimination function for trapping states is developed when the terrain profile steeply declines within 5 s and roughness increases from 0.008 to 0.012. This method for analyzing how hard bottom terrain contours affect the position and attitude changes in agricultural machinery provides theoretical foundations and technical support for designing wheeled agricultural robots, path-tracking control for unmanned precision operations, and vehicle-trapping early warning systems. It holds significant importance for enhancing the intelligence and operational efficiency of paddy field machinery.

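The R² = 0.92 quoted above is the usual coefficient of determination; the sketch below computes it for an illustrative linear fit of heading change against contour height difference and wheel diameter, on synthetic data — the paper's actual model form and data are not given in the abstract.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# illustrative fit: heading change vs. contour height difference and wheel diameter
rng = np.random.default_rng(1)
dh = rng.uniform(0.0, 0.15, 200)             # contour height difference (m)
dw = rng.uniform(0.6, 1.0, 200)              # wheel diameter (m)
heading = 8.0 * dh + 2.0 / dw + rng.normal(0, 0.2, 200)   # synthetic response (deg)
X = np.column_stack([dh, 1.0 / dw, np.ones_like(dh)])
coef, *_ = np.linalg.lstsq(X, heading, rcond=None)
print(round(r_squared(heading, X @ coef), 3))
```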