Search Results (869)

Search Parameters:
Keywords = agricultural robot

28 pages, 30103 KB  
Article
Machine Learning-Driven Soil Fungi Identification Using Automated Imaging Techniques
by Karol Struniawski, Ryszard Kozera, Aleksandra Konopka, Lidia Sas-Paszt and Agnieszka Marasek-Ciolakowska
Appl. Sci. 2026, 16(2), 855; https://doi.org/10.3390/app16020855 - 14 Jan 2026
Abstract
Soilborne fungi (Fusarium, Trichoderma, Verticillium, Purpureocillium) critically impact agricultural productivity, disease dynamics, and soil health, requiring rapid identification for precision agriculture. Current diagnostics require labor-intensive microscopy or expensive molecular assays (up to 10 days), while existing ML studies suffer from small datasets (<500 images), expert selection bias, and lack of public availability. A fully automated identification system integrating robotic microscopy (Keyence VHX-700) with deep learning was developed. The Soil Fungi Microscopic Images Dataset (SFMID) comprises 20,151 images (11,511 no-water, 8640 water-based)—the largest publicly available soil fungi dataset. Four CNN architectures (InceptionResNetV2, ResNet152V2, DenseNet121, DenseNet201) were evaluated with transfer learning and three-shot majority voting. Grad-CAM analysis validated biological relevance. ResNet152V2 conv2 achieved optimal SFMID-NW performance (precision: 0.6711; AUC: 0.8031), with real-time inference (20 ms, 48–49 images/second). Statistical validation (McNemar’s test: χ² = 27.34, p < 0.001) confirmed that three-shot classification significantly outperforms single-image prediction. Confusion analysis identified Fusarium–Trichoderma (no-water) and Fusarium–Verticillium (water-based) challenges, indicating morphological ambiguities. The publicly available SFMID provides a scalable foundation for AI-enhanced agricultural diagnostics. Full article
(This article belongs to the Special Issue Latest Research on Computer Vision and Image Processing)
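
For readers who want to see what the three-shot majority-voting step looks like in practice, the sketch below shows a minimal version: three images of the same colony are classified independently, and the class that wins at least two votes is returned, falling back to the most confident shot on a three-way tie. The classifier `model`, the class list, and the tie-breaking rule are illustrative assumptions, not code from the paper.

```python
import torch

# Hedged sketch of three-shot majority voting over single-image predictions.
# `model` is assumed to be any trained classifier returning per-class logits;
# the 4 classes stand in for Fusarium, Trichoderma, Verticillium, Purpureocillium.
CLASSES = ["Fusarium", "Trichoderma", "Verticillium", "Purpureocillium"]

def three_shot_predict(model, images):
    """images: tensor of shape (3, C, H, W) -- three shots of the same sample."""
    model.eval()
    with torch.no_grad():
        logits = model(images)                 # (3, num_classes)
        votes = logits.argmax(dim=1).tolist()  # one predicted class per shot
    counts = {v: votes.count(v) for v in set(votes)}
    best = max(counts, key=counts.get)         # majority class
    if counts[best] == 1:                      # all three disagree:
        shot = logits.max(dim=1).values.argmax().item()
        best = votes[shot]                     # take the most confident shot
    return CLASSES[best]
```
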
18 pages, 11774 KB  
Article
Retrieval Augment: Robust Path Planning for Fruit-Picking Robot Based on Real-Time Policy Reconstruction
by Binhao Chen, Shuo Zhang, Zichuan He and Liang Gong
Sustainability 2026, 18(2), 829; https://doi.org/10.3390/su18020829 - 14 Jan 2026
Abstract
The working environment of fruit-picking robots is highly complex, involving numerous obstacles such as branches. Sampling-based algorithms like Rapidly Exploring Random Trees (RRTs) are faster but suffer from low success rates and poor path quality. Deep reinforcement learning (DRL) has excelled in high-degree-of-freedom (DOF) robot path planning, but typically requires substantial computational resources and long training cycles, which limits its applicability in resource-constrained and large-scale agricultural deployments. However, picking robot agents trained by DRL underperform because of the complexity and dynamics of the picking scenes. We propose a real-time policy reconstruction method based on experience retrieval to augment an agent trained by DRL. The key idea is to optimize the agent’s policy during inference rather than retraining, thereby reducing training cost, energy consumption, and data requirements, which are critical factors for sustainable agricultural robotics. We first use Soft Actor–Critic (SAC) to train the agent with simple picking tasks and fewer episodes. When faced with complex picking tasks, instead of retraining the agent, we reconstruct its policy by retrieving experience from similar tasks and revising actions in real time, which is implemented specifically by real-time action evaluation and rejection sampling. Overall, the agent evolves into an augment agent through policy reconstruction, enabling it to perform much better in complex tasks with narrow passages and dense obstacles than the original agent. We test our method both in simulation and in the real world. Results show that the augment agent outperforms the original agent and sampling-based algorithms such as BIT* and AIT* in terms of success rate (+133.3%) and path quality (+60.4%), demonstrating its potential to support reliable, scalable, and sustainable fruit-picking automation. Full article
(This article belongs to the Section Sustainable Agriculture)
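
The retrieval-and-rejection idea described in this abstract can be sketched as follows. The snippet assumes a trained policy callable, a buffer of (state, action) pairs gathered on simpler tasks, and an external evaluator that scores candidate actions (returning negative infinity for rejected ones, for example on a predicted collision); all of these names and the noise scale are placeholders, not the authors' implementation.

```python
import numpy as np

# Hedged sketch of inference-time policy reconstruction via experience retrieval
# and rejection sampling. `policy`, `buffer`, and `evaluate` are illustrative
# stand-ins, not names from the paper.
def revise_action(state, policy, buffer, evaluate, k=8, n_candidates=32, sigma=0.05):
    """Return an action for `state`, preferring retrieved and perturbed candidates
    that the evaluator scores higher than the raw policy output.

    buffer   : list of (state, action) numpy-array pairs from simpler picking tasks
    evaluate : callable(state, action) -> score, higher is better; rejected
               candidates (e.g. a predicted collision) score -inf
    """
    base = policy(state)                                    # original SAC action
    # Retrieve the k experiences whose states are closest to the current one.
    dists = [np.linalg.norm(state - s) for s, _ in buffer]
    nearest = [buffer[i][1] for i in np.argsort(dists)[:k]]
    # Candidate set: the base action plus noisy copies of the retrieved actions.
    candidates = [base] + [a + sigma * np.random.randn(*a.shape)
                           for a in nearest for _ in range(max(1, n_candidates // k))]
    # Rejection step: drop candidates the evaluator marks as infeasible.
    scored = [(evaluate(state, a), a) for a in candidates]
    feasible = [(s, a) for s, a in scored if np.isfinite(s)]
    return max(feasible, key=lambda t: t[0])[1] if feasible else base
```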

17 pages, 11104 KB  
Article
Lightweight Improvements to the Pomelo Image Segmentation Method for Yolov8n-seg
by Zhen Li, Baiwei Cao, Zhengwei Yu, Qingting Jin, Shilei Lyu, Xiaoyi Chen and Danting Mao
Agriculture 2026, 16(2), 186; https://doi.org/10.3390/agriculture16020186 - 12 Jan 2026
Viewed by 56
Abstract
Instance segmentation in agricultural robotics requires a balance between real-time performance and accuracy. This study proposes a lightweight pomelo image segmentation method based on the YOLOv8n-seg model integrated with the RepGhost module. A pomelo dataset consisting of 5076 samples was constructed through systematic image acquisition, annotation, and data augmentation. The RepGhost architecture was incorporated into the C2f module of the YOLOv8-seg backbone network to enhance feature reuse capabilities while reducing computational complexity. Experimental results demonstrate that the YOLOv8-seg-RepGhost model enhances efficiency without compromising accuracy: parameter count is reduced by 16.5% (from 3.41 M to 2.84 M), computational load decreases by 14.8% (from 12.8 GFLOPs to 10.9 GFLOPs), and inference time is shortened by 6.3% (to 15 ms). The model maintains excellent detection performance with bounding box mAP50 at 97.75% and mask mAP50 at 97.51%. The research achieves both high segmentation efficiency and detection accuracy, offering core support for developing visual systems in harvesting robots and providing an effective solution for deep learning-based fruit target recognition and automated harvesting applications. Full article
(This article belongs to the Special Issue Advances in Precision Agriculture in Orchard)
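
As a rough illustration of how ghost-style modules cut parameters and FLOPs, the block below produces half of its output channels with a regular 1x1 convolution and the other half with a cheap depthwise convolution. This is a generic sketch in the spirit of RepGhost, not the C2f-RepGhost module used in the paper, which additionally merges branches by reparameterization at inference time.

```python
import torch
import torch.nn as nn

# Simplified ghost-style block: half of the output channels come from a regular
# 1x1 convolution, the other half from a cheap 3x3 depthwise convolution applied
# to those "primary" features. Illustrative stand-in, not the paper's module.
class GhostStyleBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        primary = out_ch // 2
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary, kernel_size=1, bias=False),
            nn.BatchNorm2d(primary), nn.SiLU())
        self.cheap = nn.Sequential(
            nn.Conv2d(primary, out_ch - primary, kernel_size=3, padding=1,
                      groups=primary, bias=False),
            nn.BatchNorm2d(out_ch - primary), nn.SiLU())

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

# Example: GhostStyleBlock(64, 64)(torch.randn(1, 64, 80, 80)).shape -> (1, 64, 80, 80)
```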

28 pages, 9738 KB  
Article
Design and Evaluation of an Underactuated Rigid–Flexible Coupled End-Effector for Non-Destructive Apple Harvesting
by Zeyi Li, Zhiyuan Zhang, Jingbin Li, Gang Hou, Xianfei Wang, Yingjie Li, Huizhe Ding and Yufeng Li
Agriculture 2026, 16(2), 178; https://doi.org/10.3390/agriculture16020178 - 10 Jan 2026
Viewed by 171
Abstract
In response to the growing need for efficient, stable, and non-destructive gripping in apple harvesting robots, this study proposes a novel rigid–flexible coupled end-effector. The design integrates an underactuated mechanism with a real-time force feedback control system. First, compression tests on ‘Red Fuji’ apples determined the minimum damage threshold to be 24.33 N. A genetic algorithm (GA) was employed to optimize the geometric parameters of the finger mechanism for uniform force distribution. Subsequently, a rigid–flexible coupled multibody dynamics model was established to simulate the grasping of small (70 mm), medium (80 mm), and large (90 mm) apples. Additionally, a harvesting experimental platform was constructed to verify the performance. Results demonstrated that by limiting the contact force of the distal phalange region silicone (DPRS) to 24 N via active feedback, the peak contact forces on the proximal phalange region silicone (PPRS) and middle phalange region silicone (MPRS) were effectively maintained below the damage threshold across all three sizes. The maximum equivalent stress remained significantly below the fruit’s yield limit, ensuring no mechanical damage occurred, with an average enveloping time of approximately 1.30 s. The experimental data showed strong agreement with the simulation, with a mean absolute percentage error (MAPE) of 5.98% for contact force and 5.40% for enveloping time. These results confirm that the proposed end-effector successfully achieves high adaptability and reliability in non-destructive harvesting, offering a valuable reference for agricultural robotics. Full article
(This article belongs to the Section Agricultural Technology)
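
The force-limiting behaviour described above can be illustrated with a simple closing loop that stops the finger drive once the distal pad reports the 24 N setpoint, which the authors keep below the measured 24.33 N damage threshold. The hardware hooks `read_distal_force` and `set_motor_velocity`, the loop rate, and the speed-scaling rule are hypothetical placeholders, not the paper's controller.

```python
import time

# Minimal sketch of threshold-based force feedback during fruit enveloping.
# `read_distal_force()` and `set_motor_velocity()` are hypothetical hardware
# hooks; the 24 N setpoint mirrors the limit the authors apply to stay under
# the measured 24.33 N damage threshold.
FORCE_SETPOINT_N = 24.0
CONTROL_PERIOD_S = 0.01   # 100 Hz loop

def envelop_fruit(read_distal_force, set_motor_velocity, close_speed=0.2,
                  timeout_s=3.0):
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        force = read_distal_force()
        if force >= FORCE_SETPOINT_N:        # distal pad reached the limit:
            set_motor_velocity(0.0)          # stop closing, hold the grasp
            return True
        # Slow the drive as the force approaches the setpoint to avoid overshoot.
        scale = max(0.1, 1.0 - force / FORCE_SETPOINT_N)
        set_motor_velocity(close_speed * scale)
        time.sleep(CONTROL_PERIOD_S)
    set_motor_velocity(0.0)
    return False
```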

28 pages, 9392 KB  
Article
Analysis Method and Experiment on the Influence of Hard Bottom Layer Contour on Agricultural Machinery Motion Position and Posture Changes
by Tuanpeng Tu, Xiwen Luo, Lian Hu, Jie He, Pei Wang, Peikui Huang, Runmao Zhao, Gaolong Chen, Dawen Feng, Mengdong Yue, Zhongxian Man, Xianhao Duan, Xiaobing Deng and Jiajun Mo
Agriculture 2026, 16(2), 170; https://doi.org/10.3390/agriculture16020170 - 9 Jan 2026
Viewed by 158
Abstract
The hard bottom layer in paddy fields significantly impacts the driving stability, operational quality, and efficiency of agricultural machinery. Continuously improving the precision and efficiency of unmanned, precision operations for paddy field machinery is essential for realizing unmanned smart rice farms. Addressing the unclear influence patterns of hard bottom contours on typical scenarios of agricultural machinery motion and posture changes, this paper employs a rice transplanter chassis equipped with GNSS and AHRS. It proposes methods for acquiring motion state information and hard bottom contour data during agricultural operations, establishing motion state expression models for key points on the machinery antenna, bottom of the wheel, and rear axle center. A correlation analysis method between motion state and hard bottom contour parameters was established, revealing the influence mechanisms of typical hard bottom contours on machinery trajectory deviation, attitude response, and wheel trapping. Results indicate that hard bottom contour height and local roughness exert extremely significant effects on agricultural machinery heading deviation and lateral movement. Heading variation positively correlates with ridge height and negatively with wheel diameter. The constructed mathematical model for heading variation based on hard bottom contour height difference and wheel diameter achieves a coefficient of determination R2 of 0.92. The roll attitude variation in agricultural machinery is primarily influenced by the terrain characteristics encountered by rear wheels. A theoretical model was developed for the offset displacement of the antenna position relative to the horizontal plane during roll motion. The accuracy of lateral deviation detection using the posture-corrected rear axle center and bottom of the wheel center improved by 40.7% and 39.0%, respectively, compared to direct measurement using the positioning antenna. During typical vehicle-trapping events, a segmented discrimination function for trapping states is developed when the terrain profile steeply declines within 5 s and roughness increases from 0.008 to 0.012. This method for analyzing how hard bottom terrain contours affect the position and attitude changes in agricultural machinery provides theoretical foundations and technical support for designing wheeled agricultural robots, path-tracking control for unmanned precision operations, and vehicle-trapping early warning systems. It holds significant importance for enhancing the intelligence and operational efficiency of paddy field machinery. Full article
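
To make the kind of contour-to-heading model described above concrete, the sketch below fits a least-squares model of heading variation against contour height difference and inverse wheel diameter and reports its coefficient of determination. The linear functional form and the synthetic data are assumptions for illustration only; the R2 of 0.92 quoted in the abstract refers to the paper's own model, not to this sketch.

```python
import numpy as np

# Illustrative regression: heading variation grows with contour height difference
# and falls with wheel diameter, as the abstract reports. Form and data are synthetic.
rng = np.random.default_rng(0)
dh = rng.uniform(0.00, 0.08, 200)      # hard-bottom height difference [m]
D  = rng.uniform(0.60, 0.90, 200)      # wheel diameter [m]
heading = 0.9 * dh + 0.02 / D + 0.005 * rng.standard_normal(200)  # synthetic [rad]

X = np.column_stack([dh, 1.0 / D, np.ones_like(dh)])
coef, *_ = np.linalg.lstsq(X, heading, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((heading - pred) ** 2) / np.sum((heading - heading.mean()) ** 2)
print(f"fitted coefficients: {coef}, R^2 = {r2:.3f}")
```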

31 pages, 21618 KB  
Article
Cohesion-Based Flocking Formation Using Potential Linked Nodes Model for Multi-Robot Agricultural Swarms
by Kevin Marlon Soza-Mamani, Marcelo Saavedra Alcoba, Felipe Torres and Alvaro Javier Prado-Romo
Agriculture 2026, 16(2), 155; https://doi.org/10.3390/agriculture16020155 - 8 Jan 2026
Viewed by 189
Abstract
Accurately modeling and representing the collective dynamics of large-scale robotic systems remains one of the fundamental challenges in swarm robotics. Within the context of agricultural robotics, swarm-based coordination schemes enable scalable and adaptive control of multi-robot teams performing tasks such as crop monitoring and autonomous field maintenance. This paper introduces a cohesive Potential Linked Nodes (PLNs) framework, an adjustable formation structure that employs Artificial Potential Fields (APFs), and virtual node–link interactions to regulate swarm cohesion and coordinated motion (CM). The proposed model governs swarm formation, modulates structural integrity, and enhances responsiveness to external perturbations. The PLN framework facilitates swarm stability, maintaining high cohesion and adaptability while the system’s tunable parameters enable online adjustment of inter-agent coupling strength and formation rigidity. Comprehensive simulation experiments were conducted to assess the performance of the model under multiple swarm conditions, including static aggregation and dynamic flocking behavior using differential-drive mobile robots. Additional tests within a simulated cropping environment were performed to evaluate the framework’s stability and cohesiveness under agricultural constraints. Swarm cohesion and formation stability were quantitatively analyzed using density-based and inter-robot distance metrics. The experimental results demonstrate that the PLN model effectively maintains formation integrity and cohesive stability throughout all scenarios. Full article
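
A minimal artificial-potential-field update conveys the cohesion and separation behaviour that the PLN framework tunes. The gains, reference distance, and potential shapes below are illustrative choices, not the paper's node-link formulation.

```python
import numpy as np

# Minimal APF flocking step: attraction toward neighbours beyond a reference
# distance, repulsion inside it. Gains and ranges are illustrative assumptions.
def apf_step(positions, d_ref=1.0, k_coh=0.4, k_sep=0.6, dt=0.1):
    """positions: (N, 2) array of robot positions; returns updated positions."""
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = positions[j] - positions[i]
            dist = np.linalg.norm(diff) + 1e-9
            unit = diff / dist
            forces[i] += k_coh * (dist - d_ref) * unit       # cohesion toward d_ref
            if dist < d_ref:
                forces[i] -= k_sep * unit / dist**2          # separation when too close
    return positions + dt * forces

swarm = np.random.default_rng(1).uniform(0, 3, size=(8, 2))
for _ in range(100):
    swarm = apf_step(swarm)
```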

20 pages, 59455 KB  
Article
ACDNet: Adaptive Citrus Detection Network Based on Improved YOLOv8 for Robotic Harvesting
by Zhiqin Wang, Wentao Xia and Ming Li
Agriculture 2026, 16(2), 148; https://doi.org/10.3390/agriculture16020148 - 7 Jan 2026
Viewed by 227
Abstract
To address the challenging requirements of citrus detection in complex orchard environments, this paper proposes ACDNet (Adaptive Citrus Detection Network), a novel deep learning framework specifically designed for automated citrus harvesting. The proposed method introduces three key innovations: (1) Citrus-Adaptive Feature Extraction (CAFE) module that combines fruit-aware partial convolution with illumination-adaptive attention mechanisms to enhance feature representation with improved efficiency; (2) Dynamic Multi-Scale Sampling (DMS) operator that adaptively focuses sampling points on fruit regions while suppressing background interference through content-aware offset generation; and (3) Fruit-Shape Aware IoU (FSA-IoU) loss function that incorporates citrus morphological priors and occlusion patterns to improve localization accuracy. Extensive experiments on our newly constructed CitrusSet dataset, which comprises 2887 images capturing diverse lighting conditions, occlusion levels, and fruit overlapping scenarios, demonstrate that ACDNet achieves superior performance with mAP@0.5 of 97.5%, precision of 92.1%, and recall of 92.8%, while maintaining real-time inference at 55.6 FPS. Compared to the baseline YOLOv8n model, ACDNet achieves improvements of 1.7%, 3.4%, and 3.6% in mAP@0.5, precision, and recall, respectively, while reducing model parameters by 11% (to 2.67 M) and computational cost by 20% (to 6.5 G FLOPs), making it highly suitable for deployment in resource-constrained robotic harvesting systems. However, the current study is primarily validated on citrus fruits, and future work will focus on extending ACDNet to other spherical fruits and exploring its generalization under extreme weather conditions. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
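
As a rough sketch of what a shape-aware IoU loss can look like, the function below combines standard axis-aligned IoU with a hypothetical aspect-ratio penalty (similar in spirit to CIoU-style terms). It is not the paper's FSA-IoU, which additionally encodes citrus morphological priors and occlusion patterns.

```python
import numpy as np

# Generic IoU plus a hypothetical aspect-ratio penalty. Illustrative only;
# not the authors' FSA-IoU formulation.
def shape_aware_iou_loss(pred, target, alpha=0.5):
    """pred, target: boxes as (x1, y1, x2, y2) float arrays."""
    ix1, iy1 = np.maximum(pred[:2], target[:2])
    ix2, iy2 = np.minimum(pred[2:], target[2:])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area(pred) + area(target) - inter + 1e-9)
    # Penalise aspect-ratio mismatch (near-circular fruit => ratio close to 1).
    ratio = lambda b: (b[2] - b[0]) / (b[3] - b[1] + 1e-9)
    shape_penalty = (np.arctan(ratio(pred)) - np.arctan(ratio(target))) ** 2
    return 1.0 - iou + alpha * (4 / np.pi**2) * shape_penalty

print(shape_aware_iou_loss(np.array([0, 0, 10, 10.]), np.array([2, 2, 12, 12.])))
```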

13 pages, 1149 KB  
Article
Monitoring IoT and Robotics Data for Sustainable Agricultural Practices Using a New Edge–Fog–Cloud Architecture
by Mohamed El-Ouati, Sandro Bimonte and Nicolas Tricot
Computers 2026, 15(1), 32; https://doi.org/10.3390/computers15010032 - 7 Jan 2026
Viewed by 191
Abstract
Modern agricultural operations generate high-volume and diverse data (historical and stream) from various sources, including IoT devices, robots, and drones. This paper presents a novel smart farming architecture specifically designed to efficiently manage and process this complex data landscape. The proposed architecture comprises five distinct, interconnected layers: the Source Layer, the Ingestion Layer, the Batch Layer, the Speed Layer, and the Governance Layer. The Source Layer serves as the unified entry point, accommodating structured, spatial, and image data from sensors, drones, and ROS-equipped robots. The Ingestion Layer uses a hybrid fog/cloud architecture with Kafka for real-time streams and for batch processing of historical data. Data is then segregated for processing: The cloud-deployed Batch Layer employs a Hadoop cluster, Spark, Hive, and Drill for large-scale historical analysis, while the Speed Layer utilizes Geoflink and PostGIS for low-latency, real-time geovisualization. Finally, the Governance Layer guarantees data quality, lineage, and organization across all components using Open Metadata. This layered, hybrid approach provides a scalable and resilient framework capable of transforming raw agricultural data into timely, actionable insights, addressing the critical need for advanced data management in smart farming. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2025 (ICCSA 2025))
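
A tiny producer illustrates the kind of stream ingestion the Ingestion Layer performs with Kafka. It assumes the kafka-python client and a broker at localhost:9092; the topic name and the soil-moisture message schema are placeholders, not part of the paper's specification.

```python
import json
import time
from kafka import KafkaProducer  # kafka-python client (assumed available)

# Illustrative ingestion-layer producer: pushes simulated soil-sensor readings
# onto a Kafka topic for the Speed Layer to consume. Broker address, topic name
# and message schema are placeholders.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for i in range(10):
    reading = {
        "sensor_id": "soil-007",
        "timestamp": time.time(),
        "moisture_pct": 31.5 + 0.1 * i,
        "lat": 45.76, "lon": 3.08,        # spatial fields for geovisualization
    }
    producer.send("farm.soil.moisture", reading)

producer.flush()
```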

19 pages, 2314 KB  
Article
Occlusion Avoidance for Harvesting Robots: A Lightweight Active Perception Model
by Tao Zhang, Jiaxi Huang, Jinxing Niu, Zhengyi Liu, Le Zhang and Huan Song
Sensors 2026, 26(1), 291; https://doi.org/10.3390/s26010291 - 2 Jan 2026
Viewed by 226
Abstract
Addressing the issue of fruit recognition and localization failures in harvesting robots due to severe occlusion by branches and leaves in complex orchard environments, this paper proposes an occlusion avoidance method that combines a lightweight YOLOv8n model, developed by Ultralytics in the United States, with active perception. Firstly, to meet the stringent real-time requirements of the active perception system, a lightweight YOLOv8n model was developed. This model reduces computational redundancy by incorporating the C2f-FasterBlock module and enhances key feature representation by integrating the SE attention mechanism, significantly improving inference speed while maintaining high detection accuracy. Secondly, an end-to-end active perception model based on ResNet50 and multi-modal fusion was designed. This model can intelligently predict the optimal movement direction for the robotic arm based on the current observation image, actively avoiding occlusions to obtain a more complete field of view. The model was trained using a matrix dataset constructed through the robot’s dynamic exploration in real-world scenarios, achieving a direct mapping from visual perception to motion planning. Experimental results demonstrate that the proposed lightweight YOLOv8n model achieves a mAP of 0.885 in apple detection tasks, a frame rate of 83 FPS, a parameter count reduced to 1,983,068, and a model weight file size reduced to 4.3 MB, significantly outperforming the baseline model. In active perception experiments, the proposed method effectively guided the robotic arm to quickly find observation positions with minimal occlusion, substantially improving the success rate of target recognition and the overall operational efficiency of the system. The current research outcomes provide preliminary technical validation and a feasible exploratory pathway for developing agricultural harvesting robot systems suitable for real-world complex environments. It should be noted that the validation of this study was primarily conducted in controlled environments. Subsequent work still requires large-scale testing in diverse real-world orchard scenarios, as well as further system optimization and performance evaluation in more realistic application settings, which include natural lighting variations, complex weather conditions, and actual occlusion patterns. Full article
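
The SE attention mechanism mentioned above is a standard building block; a generic PyTorch version is shown below for reference. It is the textbook squeeze-and-excitation formulation, not code taken from the paper's modified YOLOv8n.

```python
import torch
import torch.nn as nn

# Standard squeeze-and-excitation (SE) channel attention block of the kind the
# authors integrate into their lightweight detector. Generic formulation only.
class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global channel context
        self.fc = nn.Sequential(                     # excitation: per-channel weights
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight feature channels

# Example: SEBlock(256)(torch.randn(1, 256, 40, 40)).shape -> (1, 256, 40, 40)
```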

21 pages, 19413 KB  
Article
Efficient Real-Time Row Detection and Navigation Using LaneATT for Greenhouse Environments
by Ricardo Navarro Gómez, Joel Milla, Paolo Alfonso Reyes Ramírez, Jesús Arturo Escobedo Cabello and Alfonso Gómez-Espinosa
Agriculture 2026, 16(1), 111; https://doi.org/10.3390/agriculture16010111 - 31 Dec 2025
Viewed by 326
Abstract
This study introduces an efficient real-time lane detection and navigation system for greenhouse environments, leveraging the LaneATT architecture. Designed for deployment on the Jetson Xavier NX edge computing platform, the system utilizes an RGB camera to enable autonomous navigation in greenhouse rows. From real-world agricultural environments, data were collected and annotated to train the model, achieving 90% accuracy, 91% F1 Score, and an inference speed of 48 ms per frame. The LaneATT-based vision system was trained and validated in greenhouse environments under heterogeneous illumination conditions and across multiple phenological stages of crop development. The navigation system was validated using a commercial skid-steering mobile robot operating within an experimental greenhouse environment under actual operating conditions. The proposed solution minimizes computational overhead, making it highly suitable for deployment on edge devices within resource-constrained environments. Furthermore, experimental results demonstrate robust performance, with precise lane detection and rapid response times on embedded systems. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
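
A row-detection output only becomes navigation once it is turned into a steering command. The sketch below shows one simple possibility: average the two detected row lines into a centerline, then steer proportionally on its lateral offset and slope. The gains and the pixel-to-command mapping are illustrative assumptions, not the control law used in the paper.

```python
import numpy as np

# Minimal row-following controller for a skid-steering base. Gains and the
# pixel-to-command mapping are illustrative placeholders.
def steering_command(left_pts, right_pts, img_width, k_offset=0.004, k_heading=0.8):
    """left_pts, right_pts: (N, 2) (x, y) image points along each detected row line.
    Returns an angular-velocity command [rad/s]."""
    center = (np.asarray(left_pts, float) + np.asarray(right_pts, float)) / 2.0
    # Lateral error: centerline x at the bottom of the image vs. the image centre.
    bottom = center[np.argmax(center[:, 1])]
    lateral_err = bottom[0] - img_width / 2.0            # pixels
    # Heading error: slope of the centerline fitted over all points (x = m*y + b).
    fit = np.polyfit(center[:, 1], center[:, 0], 1)
    heading_err = np.arctan(fit[0])                      # rad
    return -(k_offset * lateral_err + k_heading * heading_err)

omega = steering_command(left_pts=[(300, 480), (340, 300)],
                         right_pts=[(420, 480), (380, 300)], img_width=640)
```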

24 pages, 8411 KB  
Article
Vision-Guided Cleaning System for Seed-Production Wheat Harvesters Using RGB-D Sensing and Object Detection
by Junjie Xia, Xinping Zhang, Jingke Zhang, Cheng Yang, Guoying Li, Runzhi Yu and Liqing Zhao
Agriculture 2026, 16(1), 100; https://doi.org/10.3390/agriculture16010100 - 31 Dec 2025
Viewed by 225
Abstract
Residues in the grain tank of seed-production wheat harvesters often cause varietal admixture, challenging seed purity maintenance above 99%. To address this, an intelligent cleaning system was developed for automatic residue recognition and removal. The system utilizes an RGB-D camera and an embedded AI unit paired with an improved lightweight object detection model. This model, enhanced for feature extraction and compressed via LAMP, was successfully deployed on a Jetson Nano, achieving 92.5% detection accuracy and 13.37 FPS for real-time 3D localization of impurities. A D–H kinematic model was established for the 4-DOF cleaning manipulator. By integrating the PSO and FWA models, the motion trajectory was optimized for time-optimality, reducing movement time from 9 s to 5.96 s. Furthermore, a gas–solid coupled simulation verified the separation capability of the cyclone-type dust extraction unit, which prevents motor damage and centralizes residue collection. Field tests confirmed the system’s comprehensive functionality, achieving an average cleaning rate of 92.6%. The proposed system successfully enables autonomous residue cleanup, effectively minimizing the risk of variety mixing and significantly improving the harvest purity and operational reliability of seed-production wheat. It presents a novel technological path for efficient seed production under the paradigm of smart agriculture. Full article
(This article belongs to the Section Agricultural Technology)
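
The D–H kinematic model mentioned above follows the standard construction shown below: one homogeneous transform per joint, chained to give the end-effector pose. The 4-DOF parameter table in the example is a made-up placeholder, not the cleaning manipulator's actual geometry.

```python
import numpy as np

# Forward kinematics from classic Denavit-Hartenberg parameters.
# The DH table below is a placeholder; only the construction is generic.
def dh_transform(theta, d, a, alpha):
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """dh_table rows: (d, a, alpha) per joint; joint_angles: theta per joint."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]            # end-effector position in the base frame

dh_table = [(0.10, 0.00, np.pi / 2), (0.00, 0.30, 0.0),
            (0.00, 0.25, 0.0),       (0.05, 0.00, 0.0)]   # placeholder values
print(forward_kinematics([0.0, np.pi / 4, -np.pi / 6, 0.0], dh_table))
```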

17 pages, 3389 KB  
Article
Offboard Fault Diagnosis for Large UAV Fleets Using Laser Doppler Vibrometer and Deep Extreme Learning
by Mohamed A. A. Ismail, Saadi Turied Kurdi, Mohammad S. Albaraj and Christian Rembe
Automation 2026, 7(1), 6; https://doi.org/10.3390/automation7010006 - 31 Dec 2025
Viewed by 336
Abstract
Unmanned Aerial Vehicles (UAVs) have become integral to modern applications, including smart agricultural robotics, where reliability is essential to ensure safe and efficient operation. It is commonly recognized that traditional fault diagnosis approaches usually rely on vibration and noise measurements acquired via onboard sensors or similar methods, which typically require continuous data acquisition and non-negligible onboard computational resources. This study presents a portable Laser Doppler Vibrometer (LDV)-based system designed for noncontact, offboard, and high-sensitivity measurement of UAV vibration signatures. The LDV measurements are analyzed using a Deep Extreme Learning-based Neural Network (DeepELM-DNN) capable of identifying both propeller fault type and severity from a single 1 s measurement. Experimental validation on a commercial quadcopter using 50 datasets across multiple induced fault types and severity levels demonstrates a classification accuracy of 97.9%. Compared to conventional onboard sensor-based approaches, the proposed framework shows strong potential for reduced computational effort while maintaining high diagnostic accuracy, owing to its short measurement duration and closed-form learning structure. The proposed LDV setup and DeepELM-DNN framework enable noncontact fault inspection while minimizing or eliminating the need for additional onboard sensing hardware. This approach offers a practical and scalable diagnostic solution for large UAV fleets and next-generation smart agricultural and industrial aerial robotics. Full article
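
The closed-form learning structure referred to above is characteristic of extreme learning machines: hidden weights are random and fixed, and only the output weights are solved by least squares. The single-layer sketch below illustrates that idea; a DeepELM stacks several such layers, and the feature and label shapes here are placeholders for vibration-signature features and fault classes.

```python
import numpy as np

# Single-layer extreme learning machine with closed-form output weights.
# Shapes and data are placeholders, not the paper's DeepELM-DNN.
class ELM:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))   # random, never trained
        self.b = rng.standard_normal(n_hidden)
        self.beta = np.zeros((n_hidden, n_out))

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, Y):
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ Y                # closed-form least squares
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

X = np.random.default_rng(1).standard_normal((50, 128))     # 50 LDV feature vectors
Y = np.eye(4)[np.random.default_rng(2).integers(0, 4, 50)]  # 4 fault classes, one-hot
model = ELM(128, 200, 4).fit(X, Y)
pred_class = model.predict(X).argmax(axis=1)
```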

15 pages, 2401 KB  
Review
When Circuits Grow Food: The Ever-Present Analog Electronics Driving Modern Agriculture
by Euzeli C. dos Santos, Josinaldo L. Araujo and Isaac S. de Freitas
Analog 2026, 1(1), 2; https://doi.org/10.3390/analog1010002 - 30 Dec 2025
Viewed by 224
Abstract
Analog electronics, i.e., circuits that process continuously varying signals, have quietly powered the backbone of agricultural automation long before the advent of modern digital technologies. Yet, the accelerating focus on digitalization, IoT, and AI in precision agriculture has largely overshadowed the enduring, indispensable role of analog components in sensing, signal conditioning, power conversion, and actuation. This paper provides a comprehensive state-of-the-art review of analog electronics applied to agricultural systems. It revisits historical milestones, from early electroculture and soil-moisture instrumentation to modern analog front-ends for biosensing and analog electronics for alternative sources of energy and weed control. Emphasis is placed on how analog electronics enable real-time, low-latency, and energy-efficient interfacing with the physical world, a necessity in farming contexts where ruggedness, simplicity, and autonomy prevail. By mapping the trajectory from electroculture experiments of the 18th century to 21st-century transimpedance amplifiers, analog sensor nodes, and low-noise instrumentation amplifiers in agri-robots, this work argues that the true technological revolution in agriculture is not purely digital but lies in the symbiosis of analog physics and biological processes. Full article

39 pages, 3635 KB  
Review
Application of Navigation Path Planning and Trajectory Tracking Control Methods for Agricultural Robots
by Fan Ye, Feixiang Le, Longfei Cui, Shaobo Han, Jingxing Gao, Junzhe Qu and Xinyu Xue
Agriculture 2026, 16(1), 64; https://doi.org/10.3390/agriculture16010064 - 27 Dec 2025
Viewed by 381
Abstract
Autonomous navigation is a core enabler of smart agriculture, where path planning and trajectory tracking control play essential roles in achieving efficient and precise operations. Path planning determines operational efficiency and coverage completeness, while trajectory tracking directly affects task accuracy and system robustness. This paper presents a systematic review of agricultural robot navigation research published between 2020 and 2025, based on literature retrieved from major databases including Web of Science and EI Compendex (ultimately including 95 papers). Research advances in global planning (coverage and point-to-point), local planning (obstacle avoidance and replanning), multi-robot cooperative planning, and classical, advanced, and learning-based trajectory tracking control methods are comprehensively summarized. Particular attention is given to their application and limitations in typical agricultural scenarios such as open-fields, orchards, greenhouses, and hilly slopes. Despite notable progress, key challenges remain, including limited algorithm comparability, weak cross-scenario generalization, and insufficient long-term validation. To address these issues, a scenario-driven “scenario–constraint–performance” adaptive framework is proposed to systematically align navigation methods with environmental and operational conditions, providing practical guidance for developing scalable and engineering-ready agricultural robot navigation systems. Full article
(This article belongs to the Section Agricultural Technology)
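
Among the classical trajectory-tracking methods such a review covers, pure pursuit is a convenient concrete example; a minimal implementation is sketched below. The look-ahead distance, wheelbase, and reference path are illustrative values, not taken from any specific paper in the survey.

```python
import numpy as np

# Classic pure-pursuit tracking step: pick a look-ahead target on the reference
# path, compute the curvature that reaches it, and convert it to a steering angle.
def pure_pursuit_step(pose, path, lookahead=1.5, wheelbase=1.2):
    """pose: (x, y, yaw); path: (N, 2) waypoints. Returns steering angle [rad]."""
    x, y, yaw = pose
    dists = np.hypot(path[:, 0] - x, path[:, 1] - y)
    ahead = np.where(dists >= lookahead)[0]
    target = path[ahead[0]] if len(ahead) else path[-1]
    dx, dy = target[0] - x, target[1] - y
    local_y = -np.sin(yaw) * dx + np.cos(yaw) * dy       # lateral offset in vehicle frame
    curvature = 2.0 * local_y / lookahead**2             # pure-pursuit law
    return np.arctan(wheelbase * curvature)

path = np.column_stack([np.linspace(0, 20, 100), np.sin(np.linspace(0, 20, 100))])
steer = pure_pursuit_step((0.0, 0.5, 0.0), path)
```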

19 pages, 9601 KB  
Article
Lightweight Transformer and Faster Convolution for Efficient Strawberry Detection
by Jieyan Wu, Jinlai Zhang, Liuqi Tan, You Wu and Kai Gao
Appl. Sci. 2026, 16(1), 293; https://doi.org/10.3390/app16010293 - 27 Dec 2025
Viewed by 165
Abstract
The agricultural system faces the formidable challenge of efficiently harvesting strawberries, a labor-intensive process that has long relied on manual labor. The advent of autonomous harvesting robot systems offers a transformative solution, but their success hinges on the accuracy and efficiency of strawberry detection. In this paper, we present DPViT-YOLOV8, a novel approach that leverages advancements in computer vision and deep learning to significantly enhance strawberry detection. DPViT-YOLOV8 integrates the EfficientViT backbone for multi-scale linear attention, the Dynamic Head mechanism for unified object detection heads with attention, and the proposed C2f_Faster module for enhanced computational efficiency into the YOLOV8 architecture. We meticulously curate and annotate a diverse dataset of strawberry images on a farm. A rigorous evaluation demonstrates that DPViT-YOLOV8 outperforms baseline models, achieving superior Mean Average Precision (mAP), precision, and recall. Additionally, an ablation study highlights the individual contributions of each enhancement. Qualitative results showcase the model’s proficiency in locating ripe strawberries in real-world agricultural settings. Notably, DPViT-YOLOV8 maintains computational efficiency, reducing inference time and FLOPS compared to the baseline YOLOV8. Our research bridges the gap between computer vision and agriculture systems, offering a powerful tool to accelerate the adoption of autonomous strawberry harvesting, reduce labor costs, and ensure the sustainability of strawberry farming. Full article
(This article belongs to the Section Agricultural Science and Technology)
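
For reference, the mAP, precision, and recall figures quoted in abstracts like this one rest on the standard average-precision computation sketched below. The confidence scores and match flags are synthetic; a real evaluation would derive the matches from an IoU threshold against ground-truth boxes.

```python
import numpy as np

# Minimal average-precision computation over ranked detections. Inputs are synthetic.
def average_precision(scores, is_true_positive, n_ground_truth):
    order = np.argsort(scores)[::-1]                     # rank detections by confidence
    tp = np.cumsum(np.asarray(is_true_positive)[order])
    fp = np.cumsum(~np.asarray(is_true_positive)[order])
    precision = tp / (tp + fp)
    recall = tp / n_ground_truth
    # 101-point interpolation over the precision-recall curve (COCO-style).
    ap = np.mean([precision[recall >= r].max() if np.any(recall >= r) else 0.0
                  for r in np.linspace(0, 1, 101)])
    return ap

scores = np.array([0.95, 0.9, 0.8, 0.7, 0.6, 0.4])
matches = np.array([True, True, False, True, True, False])
print(f"AP = {average_precision(scores, matches, n_ground_truth=5):.3f}")
```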