Search Results (882)

Search Parameters:
Keywords = agriculture robotics

31 pages, 21618 KB  
Article
Cohesion-Based Flocking Formation Using Potential Linked Nodes Model for Multi-Robot Agricultural Swarms
by Kevin Marlon Soza-Mamani, Marcelo Saavedra Alcoba, Felipe Torres and Alvaro Javier Prado-Romo
Agriculture 2026, 16(2), 155; https://doi.org/10.3390/agriculture16020155 - 8 Jan 2026
Abstract
Accurately modeling and representing the collective dynamics of large-scale robotic systems remains one of the fundamental challenges in swarm robotics. Within the context of agricultural robotics, swarm-based coordination schemes enable scalable and adaptive control of multi-robot teams performing tasks such as crop monitoring and autonomous field maintenance. This paper introduces a cohesive Potential Linked Nodes (PLNs) framework, an adjustable formation structure that employs Artificial Potential Fields (APFs) and virtual node–link interactions to regulate swarm cohesion and coordinated motion (CM). The proposed model governs swarm formation, modulates structural integrity, and enhances responsiveness to external perturbations. The PLN framework facilitates swarm stability, maintaining high cohesion and adaptability, while the system’s tunable parameters enable online adjustment of inter-agent coupling strength and formation rigidity. Comprehensive simulation experiments were conducted to assess the performance of the model under multiple swarm conditions, including static aggregation and dynamic flocking behavior using differential-drive mobile robots. Additional tests within a simulated cropping environment were performed to evaluate the framework’s stability and cohesiveness under agricultural constraints. Swarm cohesion and formation stability were quantitatively analyzed using density-based and inter-robot distance metrics. The experimental results demonstrate that the PLN model effectively maintains formation integrity and cohesive stability throughout all scenarios. Full article
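The abstract's core mechanism, attraction/repulsion potentials that hold a formation near a reference spacing, can be sketched generically. This is a toy artificial-potential-field update, not the authors' PLN model; the gains, reference distance, and time step are invented for illustration:

```python
import math

def apf_step(positions, d_ref=2.0, k_att=0.5, k_rep=0.8, dt=0.1):
    """One synchronous update of a 2-D swarm under a simple
    attract/repel potential: agents closer than d_ref push apart,
    agents farther than d_ref pull together."""
    new = []
    for i, (xi, yi) in enumerate(positions):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            d = math.hypot(dx, dy)
            if d == 0:
                continue
            # positive gain -> attraction toward j, negative -> repulsion
            gain = k_att * (d - d_ref) if d > d_ref else -k_rep * (d_ref - d)
            fx += gain * dx / d
            fy += gain * dy / d
        new.append((xi + dt * fx, yi + dt * fy))
    return new

swarm = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
for _ in range(200):
    swarm = apf_step(swarm)
```

For three agents this settles near an equilateral triangle with side `d_ref`; the PLN framework adds the virtual node–link structure and online parameter tuning on top of this kind of pairwise rule.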

20 pages, 59455 KB  
Article
ACDNet: Adaptive Citrus Detection Network Based on Improved YOLOv8 for Robotic Harvesting
by Zhiqin Wang, Wentao Xia and Ming Li
Agriculture 2026, 16(2), 148; https://doi.org/10.3390/agriculture16020148 - 7 Jan 2026
Abstract
To address the challenging requirements of citrus detection in complex orchard environments, this paper proposes ACDNet (Adaptive Citrus Detection Network), a novel deep learning framework specifically designed for automated citrus harvesting. The proposed method introduces three key innovations: (1) Citrus-Adaptive Feature Extraction (CAFE) module that combines fruit-aware partial convolution with illumination-adaptive attention mechanisms to enhance feature representation with improved efficiency; (2) Dynamic Multi-Scale Sampling (DMS) operator that adaptively focuses sampling points on fruit regions while suppressing background interference through content-aware offset generation; and (3) Fruit-Shape Aware IoU (FSA-IoU) loss function that incorporates citrus morphological priors and occlusion patterns to improve localization accuracy. Extensive experiments on our newly constructed CitrusSet dataset, which comprises 2887 images capturing diverse lighting conditions, occlusion levels, and fruit overlapping scenarios, demonstrate that ACDNet achieves superior performance with mAP@0.5 of 97.5%, precision of 92.1%, and recall of 92.8%, while maintaining real-time inference at 55.6 FPS. Compared to the baseline YOLOv8n model, ACDNet achieves improvements of 1.7%, 3.4%, and 3.6% in mAP@0.5, precision, and recall, respectively, while reducing model parameters by 11% (to 2.67 M) and computational cost by 20% (to 6.5 G FLOPs), making it highly suitable for deployment in resource-constrained robotic harvesting systems. However, the current study is primarily validated on citrus fruits, and future work will focus on extending ACDNet to other spherical fruits and exploring its generalization under extreme weather conditions. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
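FSA-IoU is the paper's own loss function; the plain intersection-over-union it builds on is computed as below (a minimal sketch for axis-aligned `(x1, y1, x2, y2)` boxes, not the paper's shape-aware variant):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap extents clamp to zero when the boxes are disjoint
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

score = iou((0, 0, 2, 2), (1, 1, 3, 3))  # overlap 1, union 7
```

Losses such as FSA-IoU replace `1 - iou` with a penalized version that folds in morphological priors; the base quantity is unchanged.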

13 pages, 1149 KB  
Article
Monitoring IoT and Robotics Data for Sustainable Agricultural Practices Using a New Edge–Fog–Cloud Architecture
by Mohamed El-Ouati, Sandro Bimonte and Nicolas Tricot
Computers 2026, 15(1), 32; https://doi.org/10.3390/computers15010032 - 7 Jan 2026
Abstract
Modern agricultural operations generate high-volume and diverse data (historical and stream) from various sources, including IoT devices, robots, and drones. This paper presents a novel smart farming architecture specifically designed to efficiently manage and process this complex data landscape. The proposed architecture comprises five distinct, interconnected layers: the Source Layer, the Ingestion Layer, the Batch Layer, the Speed Layer, and the Governance Layer. The Source Layer serves as the unified entry point, accommodating structured, spatial, and image data from sensors, drones, and ROS-equipped robots. The Ingestion Layer uses a hybrid fog/cloud architecture with Kafka for real-time streams and for batch processing of historical data. Data is then segregated for processing: the cloud-deployed Batch Layer employs a Hadoop cluster, Spark, Hive, and Drill for large-scale historical analysis, while the Speed Layer utilizes GeoFlink and PostGIS for low-latency, real-time geovisualization. Finally, the Governance Layer guarantees data quality, lineage, and organization across all components using OpenMetadata. This layered, hybrid approach provides a scalable and resilient framework capable of transforming raw agricultural data into timely, actionable insights, addressing the critical need for advanced data management in smart farming. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2025 (ICCSA 2025))
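The speed/batch split at the heart of such layered architectures can be illustrated with a trivial router. This is a toy sketch, not the paper's ingestion logic; the 60-second cutoff and the record shape are invented:

```python
import time

SPEED_WINDOW_S = 60  # assumed cutoff: younger records go to the speed layer

def route(record, now=None):
    """Toy speed/batch split: recent records feed low-latency processing,
    older ones are queued for large-scale batch analysis."""
    now = time.time() if now is None else now
    age = now - record["ts"]
    return "speed" if age < SPEED_WINDOW_S else "batch"

now = 1_000_000.0
fresh = route({"ts": now - 5, "soil_moisture": 0.31}, now)      # -> "speed"
archived = route({"ts": now - 3600, "soil_moisture": 0.28}, now)  # -> "batch"
```

In the actual architecture this decision is made by the ingestion topology (Kafka topics feeding GeoFlink vs. the Hadoop cluster) rather than by application code.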

19 pages, 2314 KB  
Article
Occlusion Avoidance for Harvesting Robots: A Lightweight Active Perception Model
by Tao Zhang, Jiaxi Huang, Jinxing Niu, Zhengyi Liu, Le Zhang and Huan Song
Sensors 2026, 26(1), 291; https://doi.org/10.3390/s26010291 - 2 Jan 2026
Abstract
Addressing the issue of fruit recognition and localization failures in harvesting robots due to severe occlusion by branches and leaves in complex orchard environments, this paper proposes an occlusion avoidance method that combines a lightweight YOLOv8n model, developed by Ultralytics in the United States, with active perception. Firstly, to meet the stringent real-time requirements of the active perception system, a lightweight YOLOv8n model was developed. This model reduces computational redundancy by incorporating the C2f-FasterBlock module and enhances key feature representation by integrating the SE attention mechanism, significantly improving inference speed while maintaining high detection accuracy. Secondly, an end-to-end active perception model based on ResNet50 and multi-modal fusion was designed. This model can intelligently predict the optimal movement direction for the robotic arm based on the current observation image, actively avoiding occlusions to obtain a more complete field of view. The model was trained using a matrix dataset constructed through the robot’s dynamic exploration in real-world scenarios, achieving a direct mapping from visual perception to motion planning. Experimental results demonstrate that the proposed lightweight YOLOv8n model achieves a mAP of 0.885 in apple detection tasks, a frame rate of 83 FPS, a parameter count reduced to 1,983,068, and a model weight file size reduced to 4.3 MB, significantly outperforming the baseline model. In active perception experiments, the proposed method effectively guided the robotic arm to quickly find observation positions with minimal occlusion, substantially improving the success rate of target recognition and the overall operational efficiency of the system. The current research outcomes provide preliminary technical validation and a feasible exploratory pathway for developing agricultural harvesting robot systems suitable for real-world complex environments. 
It should be noted that the validation of this study was primarily conducted in controlled environments. Subsequent work still requires large-scale testing in diverse real-world orchard scenarios, as well as further system optimization and performance evaluation in more realistic application settings, which include natural lighting variations, complex weather conditions, and actual occlusion patterns. Full article

21 pages, 19413 KB  
Article
Efficient Real-Time Row Detection and Navigation Using LaneATT for Greenhouse Environments
by Ricardo Navarro Gómez, Joel Milla, Paolo Alfonso Reyes Ramírez, Jesús Arturo Escobedo Cabello and Alfonso Gómez-Espinosa
Agriculture 2026, 16(1), 111; https://doi.org/10.3390/agriculture16010111 - 31 Dec 2025
Abstract
This study introduces an efficient real-time lane detection and navigation system for greenhouse environments, leveraging the LaneATT architecture. Designed for deployment on the Jetson Xavier NX edge computing platform, the system utilizes an RGB camera to enable autonomous navigation in greenhouse rows. From real-world agricultural environments, data were collected and annotated to train the model, achieving 90% accuracy, 91% F1 Score, and an inference speed of 48 ms per frame. The LaneATT-based vision system was trained and validated in greenhouse environments under heterogeneous illumination conditions and across multiple phenological stages of crop development. The navigation system was validated using a commercial skid-steering mobile robot operating within an experimental greenhouse environment under actual operating conditions. The proposed solution minimizes computational overhead, making it highly suitable for deployment on edge devices within resource-constrained environments. Furthermore, experimental results demonstrate robust performance, with precise lane detection and rapid response times on embedded systems. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
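Once row boundaries are detected, a navigation command can be derived from the lateral offset of the row center in the image. A minimal proportional-steering sketch follows; the gain, sign convention, and pixel coordinates are assumptions, not the paper's controller:

```python
def steering_from_lanes(left_x, right_x, image_width, k_p=0.004):
    """Proportional steering from detected row boundaries (pixel x-coordinates
    at a fixed image row). Positive output steers right, negative steers left
    (sign convention assumed)."""
    lane_center = (left_x + right_x) / 2.0
    error_px = lane_center - image_width / 2.0  # >0: row center is to the right
    return k_p * error_px

# Row center 40 px right of the image center -> steer right
cmd = steering_from_lanes(left_x=280, right_x=440, image_width=640)
```

A deployed system would add smoothing and speed scheduling, but the row-following error signal reduces to this pixel offset.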

24 pages, 8411 KB  
Article
Vision-Guided Cleaning System for Seed-Production Wheat Harvesters Using RGB-D Sensing and Object Detection
by Junjie Xia, Xinping Zhang, Jingke Zhang, Cheng Yang, Guoying Li, Runzhi Yu and Liqing Zhao
Agriculture 2026, 16(1), 100; https://doi.org/10.3390/agriculture16010100 - 31 Dec 2025
Abstract
Residues in the grain tank of seed-production wheat harvesters often cause varietal admixture, challenging seed purity maintenance above 99%. To address this, an intelligent cleaning system was developed for automatic residue recognition and removal. The system utilizes an RGB-D camera and an embedded AI unit paired with an improved lightweight object detection model. This model, enhanced for feature extraction and compressed via LAMP, was successfully deployed on a Jetson Nano, achieving 92.5% detection accuracy and 13.37 FPS for real-time 3D localization of impurities. A D–H kinematic model was established for the 4-DOF cleaning manipulator. By integrating the PSO and FWA models, the motion trajectory was optimized for time-optimality, reducing movement time from 9 s to 5.96 s. Furthermore, a gas–solid coupled simulation verified the separation capability of the cyclone-type dust extraction unit, which prevents motor damage and centralizes residue collection. Field tests confirmed the system’s comprehensive functionality, achieving an average cleaning rate of 92.6%. The proposed system successfully enables autonomous residue cleanup, effectively minimizing the risk of variety mixing and significantly improving the harvest purity and operational reliability of seed-production wheat. It presents a novel technological path for efficient seed production under the paradigm of smart agriculture. Full article
(This article belongs to the Section Agricultural Technology)
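The D–H kinematic model mentioned in the abstract chains one homogeneous transform per joint. A generic sketch using the standard D–H convention, with hypothetical link parameters rather than the paper's 4-DOF manipulator:

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform (4x4 homogeneous matrix)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_rows):
    """Chain the link transforms; returns the base-to-end-effector pose."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for row in dh_rows:
        T = mat_mul(T, dh_matrix(*row))
    return T

# Hypothetical 2-link planar arm (theta, d, a, alpha per joint):
T = forward_kinematics([(math.pi / 2, 0.0, 0.3, 0.0),
                        (0.0,         0.0, 0.2, 0.0)])
x, y = T[0][3], T[1][3]  # end-effector position in the base frame
```

With both links rotated 90° at the base, the 0.3 m + 0.2 m arm ends up at (0, 0.5) in the base frame, which is a quick sanity check for any D–H table.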

17 pages, 3389 KB  
Article
Offboard Fault Diagnosis for Large UAV Fleets Using Laser Doppler Vibrometer and Deep Extreme Learning
by Mohamed A. A. Ismail, Saadi Turied Kurdi, Mohammad S. Albaraj and Christian Rembe
Automation 2026, 7(1), 6; https://doi.org/10.3390/automation7010006 - 31 Dec 2025
Abstract
Unmanned Aerial Vehicles (UAVs) have become integral to modern applications, including smart agricultural robotics, where reliability is essential to ensure safe and efficient operation. It is commonly recognized that traditional fault diagnosis approaches usually rely on vibration and noise measurements acquired via onboard sensors or similar methods, which typically require continuous data acquisition and non-negligible onboard computational resources. This study presents a portable Laser Doppler Vibrometer (LDV)-based system designed for noncontact, offboard, and high-sensitivity measurement of UAV vibration signatures. The LDV measurements are analyzed using a Deep Extreme Learning-based Neural Network (DeepELM-DNN) capable of identifying both propeller fault type and severity from a single 1 s measurement. Experimental validation on a commercial quadcopter using 50 datasets across multiple induced fault types and severity levels demonstrates a classification accuracy of 97.9%. Compared to conventional onboard sensor-based approaches, the proposed framework shows strong potential for reduced computational effort while maintaining high diagnostic accuracy, owing to its short measurement duration and closed-form learning structure. The proposed LDV setup and DeepELM-DNN framework enable noncontact fault inspection while minimizing or eliminating the need for additional onboard sensing hardware. This approach offers a practical and scalable diagnostic solution for large UAV fleets and next-generation smart agricultural and industrial aerial robotics. Full article
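Vibration-signature analysis of the kind described typically starts from the spectrum of a short measurement. Below is a naive DFT peak-frequency extractor over a synthetic half-second signal; this is illustrative only (the paper's DeepELM-DNN classifier and LDV hardware are not reproduced, and the 120 Hz tone is invented):

```python
import cmath
import math

def dominant_frequency(samples, fs):
    """Naive DFT: return the frequency (Hz) of the largest-magnitude bin
    in the positive half-spectrum, DC excluded."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        s = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(samples))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * fs / n

fs = 1000.0  # Hz, assumed sampling rate
# Synthetic propeller-like vibration: a pure 120 Hz tone, 0.5 s window
sig = [math.sin(2 * math.pi * 120.0 * t / fs) for t in range(500)]
peak = dominant_frequency(sig, fs)
```

A fault classifier would consume a vector of such spectral features rather than a single peak, but the short-measurement premise (here 0.5 s; 1 s in the paper) is the same.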

15 pages, 2401 KB  
Review
When Circuits Grow Food: The Ever-Present Analog Electronics Driving Modern Agriculture
by Euzeli C. dos Santos, Josinaldo L. Araujo and Isaac S. de Freitas
Analog 2026, 1(1), 2; https://doi.org/10.3390/analog1010002 - 30 Dec 2025
Abstract
Analog electronics, i.e., circuits that process continuously varying signals, have quietly powered the backbone of agricultural automation since long before the advent of modern digital technologies. Yet, the accelerating focus on digitalization, IoT, and AI in precision agriculture has largely overshadowed the enduring, indispensable role of analog components in sensing, signal conditioning, power conversion, and actuation. This paper provides a comprehensive state-of-the-art review of analog electronics applied to agricultural systems. It revisits historical milestones, from early electroculture and soil-moisture instrumentation to modern analog front-ends for biosensing and analog electronics for alternative sources of energy and weed control. Emphasis is placed on how analog electronics enable real-time, low-latency, and energy-efficient interfacing with the physical world, a necessity in farming contexts where ruggedness, simplicity, and autonomy prevail. By mapping the trajectory from electroculture experiments of the 18th century to 21st-century transimpedance amplifiers, analog sensor nodes, and low-noise instrumentation amplifiers in agri-robots, this work argues that the true technological revolution in agriculture is not purely digital but lies in the symbiosis of analog physics and biological processes. Full article

39 pages, 3635 KB  
Review
Application of Navigation Path Planning and Trajectory Tracking Control Methods for Agricultural Robots
by Fan Ye, Feixiang Le, Longfei Cui, Shaobo Han, Jingxing Gao, Junzhe Qu and Xinyu Xue
Agriculture 2026, 16(1), 64; https://doi.org/10.3390/agriculture16010064 - 27 Dec 2025
Abstract
Autonomous navigation is a core enabler of smart agriculture, where path planning and trajectory tracking control play essential roles in achieving efficient and precise operations. Path planning determines operational efficiency and coverage completeness, while trajectory tracking directly affects task accuracy and system robustness. This paper presents a systematic review of agricultural robot navigation research published between 2020 and 2025, based on literature retrieved from major databases including Web of Science and EI Compendex (ultimately including 95 papers). Research advances in global planning (coverage and point-to-point), local planning (obstacle avoidance and replanning), multi-robot cooperative planning, and classical, advanced, and learning-based trajectory tracking control methods are comprehensively summarized. Particular attention is given to their application and limitations in typical agricultural scenarios such as open fields, orchards, greenhouses, and hilly slopes. Despite notable progress, key challenges remain, including limited algorithm comparability, weak cross-scenario generalization, and insufficient long-term validation. To address these issues, a scenario-driven “scenario–constraint–performance” adaptive framework is proposed to systematically align navigation methods with environmental and operational conditions, providing practical guidance for developing scalable and engineering-ready agricultural robot navigation systems. Full article
(This article belongs to the Section Agricultural Technology)
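Among the classical trajectory-tracking methods such a review covers, pure pursuit is a representative baseline: given a look-ahead point on the reference path, the steering command reduces to a single curvature formula. A generic sketch, not tied to any surveyed system:

```python
import math

def pure_pursuit_curvature(x, y, yaw, goal_x, goal_y):
    """Pure-pursuit steering: curvature that arcs the robot from pose
    (x, y, yaw) through a look-ahead point on the reference path."""
    # Express the look-ahead point in the robot frame
    dx, dy = goal_x - x, goal_y - y
    lx = math.cos(-yaw) * dx - math.sin(-yaw) * dy
    ly = math.sin(-yaw) * dx + math.cos(-yaw) * dy
    ld2 = lx * lx + ly * ly
    # kappa = 2 * lateral offset / look-ahead distance squared
    return 2.0 * ly / ld2 if ld2 > 0 else 0.0

# Robot at the origin facing +x; look-ahead point 2 m ahead, 1 m to the left
kappa = pure_pursuit_curvature(0.0, 0.0, 0.0, 2.0, 1.0)
```

The look-ahead distance is the main tuning knob: short look-aheads track tightly but oscillate, long ones cut corners, which is one reason adaptive and learning-based controllers are surveyed as alternatives.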

19 pages, 9601 KB  
Article
Lightweight Transformer and Faster Convolution for Efficient Strawberry Detection
by Jieyan Wu, Jinlai Zhang, Liuqi Tan, You Wu and Kai Gao
Appl. Sci. 2026, 16(1), 293; https://doi.org/10.3390/app16010293 - 27 Dec 2025
Abstract
The agricultural system faces the formidable challenge of efficiently harvesting strawberries, a labor-intensive process that has long relied on manual labor. The advent of autonomous harvesting robot systems offers a transformative solution, but their success hinges on the accuracy and efficiency of strawberry detection. In this paper, we present DPViT-YOLOV8, a novel approach that leverages advancements in computer vision and deep learning to significantly enhance strawberry detection. DPViT-YOLOV8 integrates the EfficientViT backbone for multi-scale linear attention, the Dynamic Head mechanism for unified object detection heads with attention, and the proposed C2f_Faster module for enhanced computational efficiency into the YOLOV8 architecture. We meticulously curate and annotate a diverse dataset of strawberry images on a farm. A rigorous evaluation demonstrates that DPViT-YOLOV8 outperforms baseline models, achieving superior Mean Average Precision (mAP), precision, and recall. Additionally, an ablation study highlights the individual contributions of each enhancement. Qualitative results showcase the model’s proficiency in locating ripe strawberries in real-world agricultural settings. Notably, DPViT-YOLOV8 maintains computational efficiency, reducing inference time and FLOPS compared to the baseline YOLOV8. Our research bridges the gap between computer vision and agriculture systems, offering a powerful tool to accelerate the adoption of autonomous strawberry harvesting, reduce labor costs, and ensure the sustainability of strawberry farming. Full article
(This article belongs to the Section Agricultural Science and Technology)

29 pages, 5634 KB  
Article
Blueberry Maturity Detection in Natural Orchard Environments Using an Improved YOLOv11n Network
by Xinyang Li, Jinghao Shi, Yunpeng Li, Chuang Wang, Weiqi Sun, Zonghui Zhuo, Xin Yue, Jing Ni and Kezhu Tan
Agriculture 2026, 16(1), 60; https://doi.org/10.3390/agriculture16010060 - 26 Dec 2025
Abstract
To meet the growing demand for automated blueberry harvesting in smart agriculture, this study proposes an improved lightweight detection network, termed M-YOLOv11n, for fast and accurate blueberry maturity detection in complex natural environments. The proposed model enhances feature representation through an improved lightweight multi-scale design, enabling more effective extraction of fruit features under complex orchard conditions. In addition, attention-based feature refinement is incorporated to emphasize discriminative ripeness-related cues while suppressing background interference. These design choices improve robustness to scale variation and occlusion, addressing the limitations of conventional lightweight detectors in detecting small and partially occluded fruits. By incorporating MsBlock and the attention mechanism, M-YOLOv11n achieves improved detection accuracy without significantly increasing computational cost. Experimental results demonstrate that the proposed model attains 97.0% mAP50 on the validation set and maintains robust performance under challenging conditions such as occlusion and varying illumination, achieving 96.5% mAP50. With an inference speed of 176.6 FPS, the model satisfies both accuracy and real-time requirements for blueberry maturity detection. Compared with YOLOv11n, M-YOLOv11n increases the parameter count only marginally from 2.60 M to 2.61 M, while maintaining high inference efficiency. These results indicate that the proposed method is suitable for real-time deployment on embedded vision systems in smart agricultural harvesting robots and supports early yield estimation in complex field environments. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

19 pages, 4080 KB  
Article
Adaptive Path Planning for Robotic Winter Jujube Harvesting Using an Improved RRT-Connect Algorithm
by Anxiang Huang, Meng Zhou, Mengfei Liu, Yunxiao Pan, Jiapan Guo and Yaohua Hu
Agriculture 2026, 16(1), 47; https://doi.org/10.3390/agriculture16010047 - 25 Dec 2025
Abstract
Winter jujube harvesting is traditionally labor-intensive, yet declining labor availability and rising costs necessitate robotic automation to maintain agricultural competitiveness. Path planning for robotic arms in orchards faces challenges due to the unstructured, dynamic environment containing densely packed fruits and branches. To overcome the limitations of existing robotic path planning methods, this research proposes BMGA-RRT Connect (BVH-based Multilevel-step Gradient-descent Adaptive RRT), a novel algorithm integrating adaptive multilevel step-sizing, hierarchical Bounding Volume Hierarchy (BVH)-based collision detection, and gradient-descent path smoothing. Initially, an adaptive step-size strategy dynamically adjusts node expansions, optimizing efficiency and avoiding collisions; subsequently, a hierarchical BVH improves collision-detection speed, significantly reducing computational time; finally, gradient-descent smoothing enhances trajectory continuity and path quality. Comprehensive 2D and 3D simulation experiments, dynamic obstacle validations, and real-world winter jujube harvesting trials were conducted to assess algorithm performance. Results showed that BMGA-RRT Connect significantly reduced average computation time to 2.23 s (2D) and 7.12 s (3D), outperforming traditional algorithms in path quality, stability, and robustness. Specifically, BMGA-RRT Connect achieved 100% path planning success and 90% execution success in robotic harvesting tests. These findings demonstrate that BMGA-RRT Connect provides an efficient, stable, and reliable solution for robotic harvesting in complex, unstructured agricultural settings, offering substantial promise for practical deployment in precision agriculture. Full article
(This article belongs to the Section Agricultural Technology)
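The RRT family this algorithm builds on grows a tree by repeatedly steering from the nearest node toward a random sample. A minimal 2-D, obstacle-free sketch follows; the fixed step size and goal bias are invented parameters, and the paper's contributions (adaptive multilevel steps, BVH collision checks, gradient-descent smoothing) sit on top of exactly this loop:

```python
import math
import random

def steer(from_pt, to_pt, step):
    """Move from from_pt toward to_pt by at most `step`."""
    dx, dy = to_pt[0] - from_pt[0], to_pt[1] - from_pt[1]
    d = math.hypot(dx, dy)
    if d <= step:
        return to_pt
    return (from_pt[0] + step * dx / d, from_pt[1] + step * dy / d)

def rrt(start, goal, step=0.5, goal_bias=0.2, iters=2000, seed=1):
    """Minimal RRT in the obstacle-free square [0, 10]^2; returns the node
    that reached the goal (or None) plus the parent map for path recovery."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        if rng.random() < goal_bias:
            sample = goal  # occasionally pull the tree toward the goal
        else:
            sample = (rng.uniform(0, 10), rng.uniform(0, 10))
        near = min(nodes, key=lambda n: math.hypot(n[0] - sample[0],
                                                   n[1] - sample[1]))
        new = steer(near, sample, step)
        if new not in parent:
            parent[new] = near
            nodes.append(new)
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < 1e-9:
            return new, parent
    return None, parent

reached, parent = rrt((0.0, 0.0), (9.0, 9.0))
```

A real planner inserts a collision check before accepting `new` (the BVH speeds this up) and varies `step` with clearance, which is where the adaptive-step idea enters.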

5 pages, 180 KB  
Editorial
Advanced Autonomous Systems and the Artificial Intelligence Stage
by Liviu Marian Ungureanu and Iulian-Sorin Munteanu
Technologies 2026, 14(1), 9; https://doi.org/10.3390/technologies14010009 - 23 Dec 2025
Abstract
This Editorial presents an integrative overview of the Special Issue “Advanced Autonomous Systems and Artificial Intelligence Stage”, which assembles fifteen peer-reviewed articles dedicated to the recent evolution of AI-enabled and autonomous systems. The contributions span a broad spectrum of domains, including renewable energy and power systems, intelligent transportation, agricultural robotics, clinical and assistive technologies, mobile robotic platforms, and space robotics. Across these diverse applications, the collection highlights core research themes such as robust perception and navigation, semantic and multi-modal sensing, resource-efficient embedded inference, human–machine interaction, sustainable infrastructures, and validation frameworks for safety-critical systems. Several articles demonstrate how physical modeling, hybrid control architectures, deep learning, and data-driven methods can be combined to enhance operational robustness, reliability, and autonomy in real-world environments. Other works address challenges related to fall detection, predictive maintenance, teleoperation safety, and the deployment of intelligent systems in large-scale or mission-critical contexts. Overall, this Special Issue offers a consolidated and rigorous academic synthesis of current advances in Autonomous Systems and Artificial Intelligence, providing researchers and practitioners with a valuable reference for understanding emerging trends, practical implementations, and future research directions. Full article
(This article belongs to the Special Issue Advanced Autonomous Systems and Artificial Intelligence Stage)
25 pages, 5269 KB  
Article
An Earthworm-Inspired Subsurface Robot for Low-Disturbance Mitigation of Grassland Soil Compaction
by Yimeng Cai and Sha Liu
Appl. Sci. 2026, 16(1), 115; https://doi.org/10.3390/app16010115 - 22 Dec 2025
Abstract
Soil compaction in grassland and agricultural soils reduces water infiltration, root growth and ecosystem services. Conventional deep tillage and coring can alleviate compaction but are energy intensive and strongly disturb the turf. This study proposes an earthworm-inspired subsurface robot as a low-disturbance loosening tool for compacted grassland soils. Design principles are abstracted from earthworm body segmentation, anchoring–propulsion peristaltic locomotion and corrugated body surface, and mapped onto a robotic body with anterior and posterior telescopic units, a flexible mid-body segment, a corrugated outer shell and a brace-wire steering mechanism. Kinematic simulations evaluate the peristaltic actuation mechanism and predict a forward displacement of approximately 15 mm/cycle. Using the finite element method and a Modified Cam–Clay soil model, different linkage layouts and outer-shell geometries are compared in terms of radial soil displacement and drag force in cohesive loam. The optimised corrugated outer shell combining circumferential and longitudinal waves lowers drag by up to 20.1% compared with a smooth cylinder. A 3D-printed prototype demonstrates peristaltic locomotion and steering in bench-top tests. The results indicate the potential of earthworm-inspired subsurface robots to provide low-disturbance loosening in conservation agriculture and grassland management, and highlight the need for field experiments to validate performance in real soils. Full article
(This article belongs to the Section Agricultural Science and Technology)
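The anchoring–propulsion gait the abstract describes ideally yields one stroke length of net advance per cycle. A toy two-unit sketch (the positions and body length are illustrative; the 15 mm stroke merely echoes the paper's simulated ~15 mm/cycle displacement):

```python
def peristaltic_cycle(x_rear, x_front, stroke):
    """One anchor-extend / anchor-contract cycle of a two-unit
    earthworm-style body; returns the new (rear, front) positions (mm)."""
    # Phase 1: rear unit anchors in the soil, front unit extends forward
    x_front += stroke
    # Phase 2: front unit anchors, rear unit is pulled forward
    x_rear += stroke
    return x_rear, x_front

rear, front = 0.0, 50.0  # mm, hypothetical 50 mm body
for _ in range(10):
    rear, front = peristaltic_cycle(rear, front, stroke=15.0)
```

In real soil the net advance per cycle is reduced by anchor slip and shell drag, which is what the corrugated-shell optimization in the paper targets.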

23 pages, 7391 KB  
Article
TSE-YOLO: A Model for Tomato Ripeness Segmentation
by Liangquan Jia, Xinhui Yuan, Ze Chen, Tao Wang, Lu Gao, Guosong Gu, Xuechun Wang and Yang Wang
Agriculture 2026, 16(1), 8; https://doi.org/10.3390/agriculture16010008 - 19 Dec 2025
Abstract
Accurate and efficient tomato ripeness estimation is crucial for robotic harvesting and supply chain grading in smart agriculture. However, manual visual inspection is subjective, slow, and difficult to scale, while existing vision models often struggle with cluttered field backgrounds, small targets, and limited throughput. To overcome these limitations, we introduce TSE-YOLO, an improved real-time detector tailored for tomato ripeness estimation with joint detection and segmentation. In the TSE-YOLO model, three key enhancements are introduced. The C2PSA module is improved with ConvGLU, adapted from TransNeXt, to strengthen feature extraction within tomato regions. A novel segmentation head is designed to accelerate ripeness-aware segmentation and improve recall. Additionally, the C3k2 module is augmented with partial and frequency-dynamic convolutions, enhancing feature representation under complex planting conditions. These components enable precise instance-level localization and pixel-wise segmentation of tomatoes at three ripeness stages: green, semi-ripe, and ripe. Experiments on a self-constructed tomato ripeness dataset demonstrate that TSE-YOLO achieves 92.5% mAP@0.5 for detection and 92.2% mAP@0.5 for segmentation with only 9.8 GFLOPs. Deployed on Android via the NCNN inference framework, the model runs at 30 fps on a Dimensity 9300, offering a practical solution for automated tomato harvesting and grading that accelerates the industrial adoption of smart agriculture. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
