Intelligent Sensing and Machine Vision in Precision Agriculture: 2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Smart Agriculture".

Deadline for manuscript submissions: 31 May 2025 | Viewed by 18,587

Special Issue Editors

Guest Editor
College of Engineering, Anhui Agricultural University, Hefei 230036, China
Interests: smart agriculture; intelligent agricultural equipment

Guest Editor
School of Agricultural Engineering, Jiangsu University, Zhenjiang 212013, China
Interests: intelligent agricultural machinery; agricultural robots

Guest Editor
College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou 350002, China
Interests: machine vision; precision agriculture

Special Issue Information

Dear colleagues,

Precision agriculture seeks to employ information technology to support farming operations and management, such as fertilizer inputs, irrigation management, and pesticide application. Temporal, spatial, and individual information related to environmental parameters and crop features is gathered, processed, and analyzed through various intelligent sensing technologies. Among them, machine vision technologies, including 3D/2D imaging, visible/near-infrared imaging, and hyperspectral/multispectral imaging, have been used extensively in precision agriculture for tasks such as plant phenotyping, autonomous navigation, disease detection, and yield prediction. Moreover, deep learning has greatly advanced intelligent sensing technologies, which have a range of potential applications in precision agriculture.

Dr. Lu Liu
Dr. Jianjun Yin
Dr. Haiyong Weng
Dr. Yuwei Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • precision agriculture
  • agricultural robot
  • machine vision
  • image processing
  • multispectral imaging
  • plant phenotyping
  • optical measurement
  • disease detection
  • deep learning
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.


Published Papers (11 papers)


Research


16 pages, 3818 KiB  
Article
Measurement of Maize Leaf Phenotypic Parameters Based on 3D Point Cloud
by Yuchen Su, Ran Li, Miao Wang, Chen Li, Mingxiong Ou, Sumei Liu, Wenhui Hou, Yuwei Wang and Lu Liu
Sensors 2025, 25(9), 2854; https://doi.org/10.3390/s25092854 - 30 Apr 2025
Viewed by 171
Abstract
Plant height (PH), leaf width (LW), and leaf angle (LA) are critical phenotypic parameters in maize that reliably indicate plant growth status, lodging resistance, and yield potential. While various lidar-based methods have been developed for acquiring these parameters, existing approaches face limitations, including low automation, prolonged measurement duration, and weak resistance to environmental interference. This study proposes a novel estimation method for maize PH, LW, and LA based on point cloud projection. The methodology comprises four key stages. First, 3D point cloud data of maize plants are acquired during middle–late growth stages using lidar sensors. Second, a Gaussian mixture model (GMM) is employed for point cloud registration to enhance plant morphological features, resulting in spliced maize point clouds. Third, filtering techniques remove background noise and weeds, followed by a combined point cloud projection and Euclidean clustering approach for stem–leaf segmentation. Finally, PH is determined by calculating the vertical distance from the plant apex to the base, LW is measured through linear fitting of leaf midveins with perpendicular line intersections on projected contours, and LA is derived from plant skeleton diagrams constructed via linear fitting to identify the stem apex, stem–leaf junctions, and midrib points. Field validation demonstrated that the method achieves 99%, 86%, and 97% accuracy for PH, LW, and LA estimation, respectively, enabling rapid automated measurement during critical growth phases and providing an efficient solution for maize cultivation automation.
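Editorial note: the height step alone is easy to make concrete. Once the cloud has been registered and filtered, PH reduces to the vertical extent of the remaining points. A minimal NumPy sketch under that assumption, with percentile clipping standing in for the paper's full filtering and clustering stages:

```python
import numpy as np

def plant_height(points: np.ndarray, clip_pct: float = 0.5) -> float:
    """Estimate plant height (PH) as the vertical extent of a denoised cloud.

    points   : (N, 3) array of x, y, z coordinates in metres, z up.
    clip_pct : percentile trimmed from each end of the z-range to suppress
               stray points (a simplified stand-in for the paper's
               filtering and clustering stages).
    """
    z = points[:, 2]
    z_low = np.percentile(z, clip_pct)         # robust base height
    z_high = np.percentile(z, 100 - clip_pct)  # robust apex height
    return float(z_high - z_low)

# toy cloud: a 1.8 m "plant" plus one sensor-noise spike
rng = np.random.default_rng(0)
cloud = rng.uniform([0, 0, 0], [0.4, 0.4, 1.8], size=(5000, 3))
cloud = np.vstack([cloud, [[0.2, 0.2, 3.5]]])
print(f"estimated PH: {plant_height(cloud):.2f} m")  # ~1.8, spike ignored
```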

18 pages, 5000 KiB  
Article
SAG-YOLO: A Lightweight Real-Time One-Day-Old Chick Gender Detection Method
by Yulong Chang, Rongqian Sun, Zheng Yang, Shijun Li and Qiaohua Wang
Sensors 2025, 25(7), 1973; https://doi.org/10.3390/s25071973 - 21 Mar 2025
Viewed by 464
Abstract
Feather sexing, based on wing feather growth rate, is a widely used method for chick sex identification. However, it still relies on manual sorting, necessitating automation. This study proposes an improved SAG-YOLO method for chick sex detection. First, the model reduces both parameter size and computational complexity by replacing the original feature extraction with the lightweight StarNet backbone. Next, the Additive Convolutional Gated Linear Unit (Additive CGLU) module, incorporated in the neck, enhances multi-scale feature interaction, improving detail capture while maintaining efficiency. Furthermore, the Group Normalization Head (GN Head) decreases parameters and computational overhead while boosting generalization and detection efficiency. Experimental results demonstrate that SAG-YOLO achieves a precision (P) of 90.5%, recall (R) of 90.7%, and mean average precision (mAP) of 97.0%, outperforming YOLOv10n by 1.3%, 2.6%, and 1.5%, respectively. Model parameters and floating-point operations are reduced by 0.8633 M and 2.0 GFLOPs, and GPU inference is 0.2 ms faster per image. In video-stream detection, the model achieves 100% accuracy for female chicks and 96.25% for male chicks, demonstrating strong performance under motion blur and feature fuzziness. The improved model exhibits robust generalization, providing a practical solution for the intelligent sex sorting of day-old chicks.
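Of the three modifications, the GN Head is the simplest to illustrate: batch-dependent normalisation in the detection head is swapped for GroupNorm, whose statistics do not depend on batch size. A schematic PyTorch block under that reading; the SiLU activation and group count are assumptions, not the authors' exact head:

```python
import torch
import torch.nn as nn

class GNConvBlock(nn.Module):
    """Conv + GroupNorm + SiLU: the normalisation swap behind a GN-style head.

    GroupNorm normalises within channel groups of each sample, so behaviour
    is identical for batch size 1 and 64, which suits deployment.
    """
    def __init__(self, c_in: int, c_out: int, groups: int = 16):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False)
        self.norm = nn.GroupNorm(num_groups=groups, num_channels=c_out)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.norm(self.conv(x)))

x = torch.randn(1, 64, 80, 80)        # a single-image batch still normalises fine
print(GNConvBlock(64, 64)(x).shape)   # torch.Size([1, 64, 80, 80])
```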

20 pages, 2674 KiB  
Article
FFAE-UNet: An Efficient Pear Leaf Disease Segmentation Network Based on U-Shaped Architecture
by Wenyu Wang, Jie Ding, Xin Shu, Wenwen Xu and Yunzhi Wu
Sensors 2025, 25(6), 1751; https://doi.org/10.3390/s25061751 - 12 Mar 2025
Cited by 1 | Viewed by 496
Abstract
The accurate prevention and control of pear tree diseases is an urgent need for the realization of smart agriculture, and one of its key challenges is the precise segmentation of pear leaf diseases. However, existing methods show poor segmentation performance due to issues such as the small size of certain disease areas, blurred edge details, and background noise interference. To address these problems, this paper proposes an improved U-Net architecture, FFAE-UNet, for the segmentation of pear leaf diseases. Specifically, two innovative modules are introduced in FFAE-UNet: the Attention Guidance Module (AGM) and the Feature Enhancement Supplementation Module (FESM). The AGM effectively suppresses background noise interference by reconstructing features and accurately capturing spatial and channel relationships, while the FESM enhances the model's responsiveness to disease features at different scales through channel aggregation and feature supplementation mechanisms. Experimental results show that FFAE-UNet achieves 86.60%, 92.58%, and 91.85% on the MIoU, Dice coefficient, and MPA evaluation metrics, respectively, significantly outperforming current mainstream methods. FFAE-UNet can assist farmers and agricultural experts in evaluating and managing diseases more effectively, thereby enabling precise disease control and management.
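For readers new to the reported metrics: the Dice coefficient and IoU both measure mask overlap, and MIoU is IoU averaged over classes. A self-contained NumPy sketch of the binary single-class case, with toy masks that are purely illustrative:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Binary Dice coefficient and IoU for one class.

    pred, gt : boolean masks of identical shape.
    """
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum() + 1e-9)
    iou = inter / (union + 1e-9)
    return float(dice), float(iou)

# toy lesion masks: the prediction misses one edge row of the lesion
gt = np.zeros((64, 64), bool); gt[20:40, 20:40] = True
pred = np.zeros((64, 64), bool); pred[21:40, 20:40] = True
print("Dice %.3f  IoU %.3f" % dice_and_iou(pred, gt))  # Dice 0.974  IoU 0.950
```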

16 pages, 20081 KiB  
Article
YOLO-ACE: Enhancing YOLO with Augmented Contextual Efficiency for Precision Cotton Weed Detection
by Qi Zhou, Huicheng Li, Zhiling Cai, Yiwen Zhong, Fenglin Zhong, Xiaoyu Lin and Lijin Wang
Sensors 2025, 25(5), 1635; https://doi.org/10.3390/s25051635 - 6 Mar 2025
Cited by 1 | Viewed by 743
Abstract
Effective weed management is essential for protecting crop yields in cotton production, yet conventional deep learning approaches often falter in detecting small or occluded weeds and can be restricted by large parameter counts. To tackle these challenges, we propose YOLO-ACE, an advanced extension of YOLOv5s, which was selected for its balance of accuracy and speed, making it well suited to agricultural applications. YOLO-ACE integrates a Context Augmentation Module (CAM) and Selective Kernel Attention (SKAttention) to capture multi-scale features and dynamically adjust the receptive field, while a decoupled detection head separates classification from bounding box regression, enhancing overall efficiency. Experiments on the CottonWeedDet12 (CWD12) dataset show that YOLO-ACE achieves notable mAP@0.5 and mAP@0.5:0.95 scores of 95.3% and 89.5%, respectively, surpassing previous benchmarks. Additionally, we tested the model's transferability and generalization across different crops and environments on the CropWeed dataset, where it achieved a competitive mAP@0.5 of 84.3%, further showcasing its ability to adapt to diverse conditions. These results confirm that YOLO-ACE combines precise detection with parameter efficiency, meeting the exacting demands of modern cotton weed management.
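Among the listed components, the decoupled detection head is simple enough to sketch: classification and box regression get separate convolutional branches instead of sharing one output tensor. A schematic PyTorch version; the branch depth, SiLU activations, and single-anchor layout are illustrative assumptions (12 classes matches CottonWeedDet12):

```python
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    """Separate classification and box-regression branches, in the spirit of
    decoupled YOLO heads; a schematic, not the authors' exact design."""
    def __init__(self, c_in: int, n_classes: int, n_anchors: int = 1):
        super().__init__()
        self.cls_branch = nn.Sequential(
            nn.Conv2d(c_in, c_in, 3, padding=1), nn.SiLU(),
            nn.Conv2d(c_in, n_anchors * n_classes, 1),  # class logits per cell
        )
        self.reg_branch = nn.Sequential(
            nn.Conv2d(c_in, c_in, 3, padding=1), nn.SiLU(),
            nn.Conv2d(c_in, n_anchors * 4, 1),          # x, y, w, h per cell
        )

    def forward(self, feat: torch.Tensor):
        return self.cls_branch(feat), self.reg_branch(feat)

cls_map, box_map = DecoupledHead(128, n_classes=12)(torch.randn(1, 128, 40, 40))
print(cls_map.shape, box_map.shape)  # [1, 12, 40, 40] and [1, 4, 40, 40]
```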

13 pages, 16117 KiB  
Article
A Stride Toward Wine Yield Estimation from Images: Metrological Validation of Grape Berry Number, Radius, and Volume Estimation
by Bernardo Lanza, Davide Botturi, Alessandro Gnutti, Matteo Lancini, Cristina Nuzzi and Simone Pasinetti
Sensors 2024, 24(22), 7305; https://doi.org/10.3390/s24227305 - 15 Nov 2024
Cited by 1 | Viewed by 845
Abstract
Yield estimation is a key topic in precision agriculture, especially for small fruits and in-field scenarios. This paper focuses on the metrological validation of a novel deep-learning model that robustly estimates both the number and the radii of grape berries in vineyards using color images, allowing the computation of the visible (and total) volume of grape clusters, which is necessary to reach the ultimate goal of estimating yield production. The proposed algorithm is validated by analyzing its performance on a custom dataset. The number of berries, their mean radius, and the grape cluster volume are converted to millimeters and compared to reference values obtained through manual measurements. The validation experiment also analyzes the uncertainties of the parameters. Results show that the algorithm can reliably estimate the number (MPE = 5%, σ = 6%) and the radius of the visible portion of the grape clusters (MPE = 0.8%, σ = 7%). The volume estimated in px³, by contrast, results in an MPE of 0.4% with σ = 21%, so the corresponding volume in mm³ is affected by high uncertainty. This analysis highlighted that half of the total uncertainty on the volume is due to the camera–object distance d and the parameter R used to account for the proportion of visible berries relative to the total berries in the grape cluster. This issue is mostly due to the absence of a reliable depth measurement between the camera and the grapes, which could be overcome by using depth sensors in combination with color images. Despite being preliminary, the results prove that the model and the metrological analysis are a remarkable advancement toward a reliable approach for directly estimating yield from 2D pictures in the field.
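The px-to-mm step that drives the volume uncertainty follows the standard pinhole model, r_mm = r_px · d / f, followed by the sphere volume. A sketch with illustrative numbers; the focal length, distance, and visibility ratio R below are assumptions, not the paper's calibration:

```python
import numpy as np

def berry_volume_mm3(radius_px: float, d_mm: float, f_px: float) -> float:
    """Convert a berry radius from pixels to mm with the pinhole model
    (r_mm = r_px * d / f) and return the spherical volume."""
    r_mm = radius_px * d_mm / f_px
    return 4.0 / 3.0 * np.pi * r_mm ** 3

# illustrative numbers, not the paper's calibration
r_px, d_mm, f_px = 14.0, 600.0, 1400.0  # radius, camera-object distance, focal length
v_visible = berry_volume_mm3(r_px, d_mm, f_px)
R = 0.5                                 # assumed visible-to-total berry ratio
print(f"visible berry volume: {v_visible:.0f} mm^3, scale-up factor 1/R = {1 / R:.1f}")
```

Because the estimated volume scales with d³, a relative error in d is roughly tripled in the volume, which is consistent with the high volume uncertainty the authors attribute to the missing depth measurement.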

18 pages, 7770 KiB  
Article
Vision-Based Localization Method for Picking Points in Tea-Harvesting Robots
by Jingwen Yang, Xin Li, Xin Wang, Leiyang Fu and Shaowen Li
Sensors 2024, 24(21), 6777; https://doi.org/10.3390/s24216777 - 22 Oct 2024
Cited by 3 | Viewed by 1448
Abstract
To address the issue of accurately recognizing and locating picking points for tea-picking robots in unstructured environments, a visual positioning method based on RGB-D information fusion is proposed. First, an improved T-YOLOv8n model is proposed, which improves detection and segmentation performance across multi-scale scenes through network architecture and loss function optimizations. On the far-view test set, the detection accuracy of tea buds reached 80.8%; on the near-view test set, the mAP@0.5 values for tea stem detection in bounding boxes and masks reached 93.6% and 93.7%, respectively, improvements of 9.1% and 14.1% over the baseline model. Second, a layered visual servoing strategy for near and far views was designed, integrating the RealSense depth sensor with robotic arm cooperation. This strategy identifies the region of interest (ROI) of the tea bud in the far view and fuses the stem mask information with depth data to calculate the three-dimensional coordinates of the picking point. Experiments show that this method achieved a picking-point localization success rate of 86.4%, with a mean depth measurement error of 1.43 mm. The proposed method improves the accuracy of picking point recognition and reduces depth information fluctuations, providing technical support for the intelligent and rapid picking of premium tea.
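The fusion step at the core of the strategy, turning a mask pixel plus a depth reading into a 3D picking point, is standard pinhole back-projection; librealsense exposes an equivalent as rs2_deproject_pixel_to_point. A plain NumPy sketch with illustrative intrinsics, not the paper's calibration:

```python
import numpy as np

def deproject(u: float, v: float, depth_m: float,
              fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project pixel (u, v) with metric depth into camera coordinates
    using the pinhole model: the core of fusing a stem-mask pixel with
    RealSense depth to obtain a 3D picking point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

fx = fy = 615.0                 # illustrative intrinsics
cx, cy = 320.0, 240.0
print(deproject(350, 260, 0.42, fx, fy, cx, cy))  # metres, camera frame
```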

16 pages, 9089 KiB  
Article
Consecutive Image Acquisition without Anomalies
by Angel Mur, Patrice Galaup, Etienne Dedic, Dominique Henry and Hervé Aubert
Sensors 2024, 24(20), 6608; https://doi.org/10.3390/s24206608 - 14 Oct 2024
Viewed by 1232
Abstract
An image is a visual representation that can be used to obtain information. A camera on a moving platform (e.g., a rover, drone, or quad bike) may acquire images along a controlled trajectory. The maximum visual information is captured during a fixed acquisition time when consecutive images neither overlap nor leave a space (or gap) between them. Image acquisition is said to be anomalous when two consecutive images overlap (overlap anomaly) or have a gap between them (gap anomaly). In this article, we report a new algorithm, named OVERGAP, that removes these two types of anomalies when consecutive images are obtained from an on-board camera on a moving platform. Anomaly detection and correction here use both the Dynamic Time Warping distance and the Wasserstein distance. The proposed algorithm produces consecutive, anomaly-free images of the desired size that can conveniently be used in a machine learning process (mainly deep learning) to create a prediction model for a feature of interest.
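The abstract does not detail how the two distances are applied; purely as an illustration of one plausible building block, the sketch below compares the trailing band of one image with the leading band of the next using SciPy's Wasserstein distance. A near-zero distance suggests the bands depict the same ground (possible overlap), while a large one suggests a gap; the band width, bin count, and any decision threshold are assumptions:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def strip_hist(img: np.ndarray, rows: slice) -> np.ndarray:
    """Grey-level histogram of a horizontal strip, as a 1-D distribution."""
    hist, _ = np.histogram(img[rows].ravel(), bins=32, range=(0, 255),
                           density=True)
    return hist

def seam_distance(img_a: np.ndarray, img_b: np.ndarray, band: int = 20) -> float:
    """Distance between the trailing band of img_a and the leading band of
    img_b; low values hint at overlapping ground coverage."""
    bins = np.arange(32, dtype=float)
    return wasserstein_distance(bins, bins,
                                strip_hist(img_a, slice(-band, None)),
                                strip_hist(img_b, slice(0, band)))

rng = np.random.default_rng(1)
a = rng.integers(0, 256, (240, 320)).astype(np.uint8)
b = rng.integers(0, 256, (240, 320)).astype(np.uint8)
b[:20] = a[-20:]  # simulate an exact overlap of the shared band
print(f"seam distance: {seam_distance(a, b):.4f}")  # ~0 for matching bands
```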

27 pages, 8828 KiB  
Article
Research on Detection Method of Chaotian Pepper in Complex Field Environments Based on YOLOv8
by Yichu Duan, Jianing Li and Chi Zou
Sensors 2024, 24(17), 5632; https://doi.org/10.3390/s24175632 - 30 Aug 2024
Cited by 2 | Viewed by 1318
Abstract
The intelligent detection of chili peppers is crucial for achieving automated operations. In complex field environments, challenges such as overlapping plants, branch occlusions, and uneven lighting make detection difficult. This study conducted comparative experiments to select the optimal detection model based on YOLOv8 and then enhanced it further. The model was optimized by incorporating BiFPN, LSKNet, and FasterNet modules, followed by the addition of attention and lightweight modules such as EMBC, EMSCP, DAttention, MSBlock, and Faster. Adjustments to the CIoU, Inner CIoU, Inner GIoU, and inner_mpdiou loss functions and their scaling factors further improved overall performance. After optimization, the YOLOv8 model achieved precision, recall, and mAP scores of 79.0%, 75.3%, and 83.2%, respectively, increases of 1.1, 4.3, and 1.6 percentage points over the base model. Additionally, GFLOPs were reduced by 13.6%, the model size decreased to 66.7% of the base model, and the FPS reached 301.4. The result is accurate and rapid detection of chili peppers in complex field environments, providing data support and experimental references for the development of intelligent picking equipment.
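All four loss variants named above build on CIoU, which penalises low overlap, centre offset, and aspect-ratio mismatch, with the training loss taken as 1 - CIoU. A minimal reference implementation; the box format and epsilon handling are our choices, not the paper's code:

```python
import math

def ciou(box_a, box_b):
    """Complete IoU between two (x1, y1, x2, y2) boxes:
    CIoU = IoU - centre_dist^2 / enclosing_diag^2 - alpha * v."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection and union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / (union + 1e-9)
    # squared centre distance over squared enclosing-box diagonal
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((bx2 - bx1) / (by2 - by1))
                              - math.atan((ax2 - ax1) / (ay2 - ay1))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return iou - rho2 / (cw ** 2 + ch ** 2 + 1e-9) - alpha * v

print(f"CIoU: {ciou((10, 10, 50, 60), (15, 12, 55, 58)):.3f}")  # loss = 1 - CIoU
```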

19 pages, 3177 KiB  
Article
Developing Machine Vision in Tree-Fruit Applications—Fruit Count, Fruit Size and Branch Avoidance in Automated Harvesting
by Chiranjivi Neupane, Kerry B. Walsh, Rafael Goulart and Anand Koirala
Sensors 2024, 24(17), 5593; https://doi.org/10.3390/s24175593 - 29 Aug 2024
Cited by 7 | Viewed by 1893
Abstract
Recent developments in affordable depth imaging hardware and the use of 2D convolutional neural networks (CNNs) in object detection and segmentation have accelerated the adoption of machine vision in a range of applications, with mainstream models often outperforming previous application-specific architectures. The release of training and test datasets with any work reporting model development is emphasized, to enable the re-evaluation of published work. An additional reporting need is documentation of the performance of re-training a given model, quantifying the impact of stochastic processes in training. Three mango orchard applications were considered: (i) fruit count, (ii) fruit size, and (iii) branch avoidance in automated harvesting. All training and test datasets used in this work are publicly available. The mAP 'coefficient of variation' (standard deviation, SD, divided by the mean of predictions across repeated trainings, × 100) was approximately 0.2% for the fruit detection model and 1% and 2% for the fruit and branch segmentation models, respectively. A YOLOv8m model achieved a mAP50 of 99.3%, outperforming the previous benchmark, the purpose-designed 'MangoYOLO', for the real-time detection of mango fruit in images of tree canopies on an edge computing device as a viable use case. YOLOv8 and v9 models outperformed the benchmark Mask R-CNN model in accuracy and inference time, achieving up to 98.8% mAP50 on fruit predictions and 66.2% on branches in a leafy canopy. For fruit sizing, the accuracy of YOLOv8m-seg was comparable to that achieved using Mask R-CNN, but the inference time was much shorter, again an enabler for field adoption of this technology. A branch avoidance algorithm was proposed, whose real-time implementation on an edge computing device was enabled by the short inference time of a YOLOv8-seg model for branches and fruit. This capability contributes to the development of automated fruit harvesting.
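The repeatability metric is just a coefficient of variation over re-training runs. A two-line worked example with illustrative mAP values, not the paper's measurements:

```python
import numpy as np

# mAP50 of five re-trainings of the same detector (illustrative values)
maps = np.array([0.991, 0.993, 0.992, 0.994, 0.990])
cv = maps.std(ddof=1) / maps.mean() * 100  # SD divided by mean, times 100
print(f"mAP coefficient of variation: {cv:.2f}%")  # ~0.16%
```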

16 pages, 3229 KiB  
Article
Streamlining YOLOv7 for Rapid and Accurate Detection of Rapeseed Varieties on Embedded Device
by Siqi Gu, Wei Meng and Guodong Sun
Sensors 2024, 24(17), 5585; https://doi.org/10.3390/s24175585 - 28 Aug 2024
Viewed by 1031
Abstract
Real-time seed detection on resource-constrained embedded devices is essential for the agriculture industry and crop yield. However, traditional seed variety detection methods either suffer from low accuracy or cannot run directly on embedded devices with acceptable real-time performance. In this paper, we focus on the detection of rapeseed varieties and design a dual-dimensional (spatial and channel) pruning method to lighten YOLOv7, a popular deep-learning object detection model. We design experiments to demonstrate the effectiveness of the spatial pruning strategy, and after evaluating three different channel pruning methods, we select custom-ratio layer-by-layer pruning, which offers the best model performance. Compared to the original YOLOv7 model, this approach increases mAP from 96.68% to 96.89%, reduces the number of parameters from 36.5 M to 9.19 M, and cuts the inference time per image on the Raspberry Pi 4B from 4.48 s to 1.18 s. Overall, our model is suitable for deployment on embedded devices and can perform real-time detection tasks accurately and efficiently in various application scenarios.
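The abstract does not state the ranking criterion; as a hedged sketch of how custom-ratio layer-by-layer channel pruning is commonly done, the snippet below scores a convolution's filters by L1 norm and keeps a per-layer fraction. Rebuilding the slimmer network (slicing this layer's outputs and the next layer's inputs) and fine-tuning are further steps not shown:

```python
import torch
import torch.nn as nn

def keep_channels_by_l1(conv: nn.Conv2d, keep_ratio: float) -> torch.Tensor:
    """Rank a conv layer's output channels by the L1 norm of their filters
    and return sorted indices of the channels to keep (a generic criterion,
    not necessarily the paper's)."""
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # one score per filter
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    return torch.topk(scores, n_keep).indices.sort().values

conv = nn.Conv2d(64, 128, 3)
idx = keep_channels_by_l1(conv, keep_ratio=0.5)  # a custom ratio for this layer
print(f"{len(idx)} of {conv.out_channels} channels kept")
```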

Review


42 pages, 13582 KiB  
Review
A Comprehensive Review of LiDAR Applications in Crop Management for Precision Agriculture
by Sheikh Muhammad Farhan, Jianjun Yin, Zhijian Chen and Muhammad Sohail Memon
Sensors 2024, 24(16), 5409; https://doi.org/10.3390/s24165409 - 21 Aug 2024
Cited by 14 | Viewed by 7501
Abstract
Precision agriculture has revolutionized crop management and agricultural production, with LiDAR technology attracting significant interest among various technological advancements. This extensive review examines the applications of LiDAR in precision agriculture, with a particular emphasis on its function in crop cultivation and harvest. The introduction provides an overview of precision agriculture, highlighting the need for effective agricultural management and the growing significance of LiDAR technology, and discusses LiDAR's prospective advantages for increasing productivity, optimizing resource utilization, managing crop diseases and pesticides, and reducing environmental impact. It also covers LiDAR technology in precision agriculture comprehensively, detailing airborne, terrestrial, and mobile systems along with their specialized applications in the field. The paper then reviews the various uses of LiDAR in crop cultivation, including crop growth and yield estimation, disease detection, weed control, and plant health evaluation. The use of LiDAR for soil analysis and management, including soil mapping and categorization and the measurement of moisture content and nutrient levels, is reviewed. Additionally, the article examines how LiDAR is used in crop harvesting, including autonomous harvesting systems, post-harvest quality evaluation, and the prediction of crop maturity and yield. Future perspectives, emergent trends, and innovative developments in LiDAR technology for precision agriculture are discussed, along with the critical challenges and research gaps that must be filled. The review concludes by emphasizing potential solutions and future directions for maximizing LiDAR's potential in precision agriculture. This in-depth review offers helpful insights for academics, practitioners, and stakeholders interested in using this technology for effective and environmentally friendly crop management, ultimately contributing to the development of precision agricultural methods.
