
Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Smart Agriculture".

Deadline for manuscript submissions: 31 August 2025 | Viewed by 20492

Special Issue Editors


Guest Editor
College of Biological and Agricultural Engineering, Jilin University, 5988 Renmin Street, Changchun 130025, China
Interests: bionic intelligent agricultural machinery; autonomous navigation; target recognition based on visual bionics; agricultural drones; agricultural artificial intelligence; soil and plant sensing; agricultural machinery information collection and control
College of Biological and Agricultural Engineering, Jilin University, 5988 Renmin Street, Changchun 130025, China
Interests: agricultural machinery; conservation tillage; sensors; automation; intelligence; plant protection

Special Issue Information

Dear Colleagues,

In recent years, sensor and artificial intelligence (AI) technologies have attracted increasing interest from both academia and industry and have been extensively applied in intelligent agriculture. Accelerating their application is urgently needed for the development of modern, smart agriculture. This Special Issue aims to showcase excellent implementations of agricultural sensors and AI technologies in intelligent agricultural applications and to give researchers an opportunity to publish their work on this topic. Articles that address agricultural sensors and AI technologies applied to crop and animal production are welcome; both original research articles and reviews are sought. The scope of this Special Issue includes, but is not limited to, the following topics:

  • Crop sensing and sensors;
  • Animal perception and sensors;
  • Environmental information perception and sensors;
  • Agricultural equipment information collection and processing;
  • Key technologies of smart agriculture;
  • Artificial intelligence in agriculture;
  • Intelligent farm equipment;
  • Intelligent orchard equipment;
  • Intelligent garden equipment;
  • Intelligent pasture equipment;
  • Intelligent fishing-ground equipment.

Prof. Dr. Jiangtao Qi
Dr. Gang Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensors
  • artificial intelligence
  • intelligent agriculture

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

Jump to: Review

20 pages, 3178 KiB  
Article
AS-YOLO: Enhanced YOLO Using Ghost Bottleneck and Global Attention Mechanism for Apple Stem Segmentation
by Na Rae Baek, Yeongwook Lee, Dong-hee Noh, Hea-Min Lee and Se Woon Cho
Sensors 2025, 25(5), 1422; https://doi.org/10.3390/s25051422 - 26 Feb 2025
Viewed by 588
Abstract
Stem removal from harvested fruits remains one of the most labor-intensive tasks in fruit harvesting, directly affecting the fruit quality and marketability. Accurate and rapid fruit and stem segmentation techniques are essential for automating this process. This study proposes an enhanced You Only Look Once (YOLO) model, AppleStem (AS)-YOLO, which uses a ghost bottleneck and global attention mechanism to segment apple stems. The proposed model reduces the number of parameters and enhances the computational efficiency using the ghost bottleneck while improving feature extraction capabilities using the global attention mechanism. The model was evaluated using both a custom-built and an open dataset, which will be later released to ensure reproducibility. Experimental results demonstrated that the AS-YOLO model achieved high accuracy, with a mean average precision (mAP)@50 of 0.956 and mAP@50–95 of 0.782 across all classes, along with a real-time inference speed of 129.8 frames per second (FPS). Compared with state-of-the-art segmentation models, AS-YOLO exhibited superior performance. The proposed AS-YOLO model demonstrates the potential for real-time application in automated fruit-harvesting systems, contributing to the advancement of agricultural automation. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
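The mAP@50 and mAP@50–95 figures reported in this abstract are built on intersection-over-union (IoU) between predicted and ground-truth regions. As a rough illustration (not the authors' code), box IoU can be sketched as:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction counts as a true positive at mAP@50 when IoU >= 0.5;
# mAP@50-95 averages precision over thresholds 0.50, 0.55, ..., 0.95.
print(box_iou((0, 0, 10, 10), (5, 0, 15, 10)))  # overlap 50 / union 150 ≈ 0.333
```

Segmentation models such as AS-YOLO apply the same idea to mask pixels rather than box corners.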

18 pages, 6781 KiB  
Article
A Non-Destructive Moisture Detection System for Unshelled Green Tea Seed Kernels Based on Microwave Technology with Multi-Frequency Scanning Signals
by Bo Zhou, Ye Yuan, Zhenbo Wei and Siying Li
Sensors 2025, 25(5), 1324; https://doi.org/10.3390/s25051324 - 21 Feb 2025
Viewed by 334
Abstract
A self-developed microwave moisture detection system (ranging from 2.00 GHz to 10.00 GHz) based on multi-frequency sweep technology was used to quickly determine the moisture content of tea seed kernels without breaking the shells. A multi-frequency evaluation method combining cross-validation and majority voting rules was proposed to select the optimal microwave features from the original microwave signals. Firstly, the moisture content of tea seed kernels was detected by the moisture detection system, and the determination coefficients of the ANN model established based on seven attenuations and six phase shifts were over 0.999. Then, the moisture content of unshelled tea seeds was detected, and the determination coefficients of the ANN model based on 13 preferred frequency features were over 0.995. Moreover, the predicted moisture values of unshelled tea seeds were calibrated accurately by a moisture function (y = −0.017x^2 + 1.431x − 1.019). Overall, the self-developed system could achieve non-destructive moisture content prediction for tea seed kernels. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
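The calibration function quoted in this abstract, y = −0.017x² + 1.431x − 1.019, maps the model's predicted moisture value to a corrected one. A minimal sketch of applying it, with hypothetical input values:

```python
def calibrate_moisture(x):
    """Apply the quadratic calibration y = -0.017x^2 + 1.431x - 1.019
    reported in the abstract to a predicted moisture value x (percent)."""
    return -0.017 * x**2 + 1.431 * x - 1.019

# Hypothetical predicted values from the ANN model (illustrative only):
for x in (5.0, 10.0, 20.0):
    print(f"predicted {x:.1f}% -> calibrated {calibrate_moisture(x):.3f}%")
```

Such a post-hoc polynomial correction is a common way to compensate for a systematic bias (here, attenuation by the shell) in an indirect measurement.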

17 pages, 7209 KiB  
Article
Sorghum Spike Detection Method Based on Gold Feature Pyramid Module and Improved YOLOv8s
by Shujin Qiu, Jian Gao, Mengyao Han, Qingliang Cui, Xiangyang Yuan and Cuiqing Wu
Sensors 2025, 25(1), 104; https://doi.org/10.3390/s25010104 - 27 Dec 2024
Viewed by 549
Abstract
Sorghum spikes in the field are difficult to identify and detect because of high planting density, similar colors, and severe occlusion between spikes, which leads to low accuracy and high false- and missed-detection rates. To address these problems, this study proposes an improved sorghum spike detection method based on YOLOv8s. The method augments the information fusion capability of the YOLOv8 model's neck module by integrating the Gold feature pyramid module. Additionally, the SPPF module is refined with the LSKA attention mechanism to heighten focus on critical features. To tackle class imbalance in sorghum detection and expedite model convergence, a Focal-EIOU loss function is employed. Consequently, the YOLOv8s-Gold-LSKA model, based on the Gold module and LSKA attention mechanism, is developed. Experimental results demonstrate that this improved method significantly enhances sorghum spike detection accuracy in natural field settings. The improved model achieved a precision of 90.72%, recall of 76.81%, mean average precision (mAP) of 85.86%, and an F1-score of 81.19%, outperforming YOLOv5s, SSD, and YOLOv8 in detection performance. This advancement provides technical support for the rapid and accurate recognition of multiple sorghum spike targets against natural field backgrounds, thereby improving sorghum yield estimation accuracy, supporting sorghum production and harvest, and advancing intelligent harvesting equipment for agricultural machinery. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)

23 pages, 3463 KiB  
Article
Can a Light Detection and Ranging (LiDAR) and Multispectral Sensor Discriminate Canopy Structure Changes Due to Pruning in Olive Growing? A Field Experimentation
by Carolina Perna, Andrea Pagliai, Daniele Sarri, Riccardo Lisci and Marco Vieri
Sensors 2024, 24(24), 7894; https://doi.org/10.3390/s24247894 - 10 Dec 2024
Viewed by 880
Abstract
The present research aimed to evaluate whether two sensors, optical and laser, could highlight the change in olive trees' canopy structure due to pruning. Two proximal sensors were mounted on a ground vehicle (Kubota B2420 tractor): a multispectral sensor (OptRx ACS 430 AgLeader) and a 2D LiDAR sensor (Sick TIM 561). The multispectral sensor was used to evaluate the potential effect of biomass variability before pruning on sensor response, and the 2D LiDAR was used to assess its ability to discriminate canopy volume before and after pruning. Data were collected in a traditional olive grove at Tenute di Cesa Farm, in eastern Tuscany, Italy, characterized by a 4 × 6 m planting layout and well-developed plants. LiDAR data were used to measure canopy volumes, height, and diameter, and the generated point cloud was studied to assess the difference in density between treatments. Ten plants were selected for the study. To validate the LiDAR results, manual measurements of the canopy height and diameter of the plants were taken. The pruning weights of the monitored plants were obtained to assess the correlation with the canopy characterization data. The results showed that pruning did not affect the multispectral sensor's output: the potential variation in canopy density and porosity did not lead to different results with this instrument. Plant volumes, heights, and diameters calculated with the LiDAR sensor correlated well with the manual measurements, while volume differences between before and after pruning correlated well with pruning weights (Pearson correlation coefficient: 0.66–0.83). The study of point cloud density across canopy thickness and height showed different shapes before and after pruning, especially for thickness. Correlations between the point cloud density obtained from the LiDAR and the multispectral sensor results were not statistically significant. Although further studies are needed, the results obtained can be of interest in pruning management. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
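The Pearson correlation coefficient cited above (0.66–0.83 between LiDAR volume differences and pruning weights) measures linear association between two paired series. A self-contained sketch; the data values below are hypothetical, for illustration only:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical canopy-volume differences (m^3) vs. pruning weights (kg):
vol_diff = [0.8, 1.2, 0.5, 1.9, 1.4]
weights = [2.1, 3.0, 1.6, 4.2, 3.4]
print(round(pearson(vol_diff, weights), 3))
```

Values near 1 indicate that removed canopy volume tracks removed biomass closely; the 0.66–0.83 range reported in the paper suggests a useful but imperfect proxy.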

15 pages, 5563 KiB  
Article
Calibration and Validation of Simulation Parameters for Maize Straw Based on Discrete Element Method and Genetic Algorithm–Backpropagation
by Fandi Zeng, Hongwei Diao, Yinzeng Liu, Dong Ji, Meiling Dou, Ji Cui and Zhihuan Zhao
Sensors 2024, 24(16), 5217; https://doi.org/10.3390/s24165217 - 12 Aug 2024
Cited by 1 | Viewed by 1252
Abstract
There is a significant difference between the simulation effect and the actual effect in the design process of maize straw-breaking equipment due to the lack of accurate simulation model parameters in the breaking and processing of maize straw. This article used a combination of physical experiments, virtual simulation, and machine learning to calibrate the simulation parameters of maize straw. A bimodal-distribution discrete element model of maize straw was established based on the intrinsic and contact parameters measured via physical experiments. The significance analysis of the simulation parameters was conducted via the Plackett–Burman experiment. The Poisson ratio, shear modulus, and normal stiffness of the maize straw significantly impacted the peak compression force of the maize straw and steel plate. The steepest-climb test was carried out for the significance parameter, and the relative error between the peak compression force in the simulation test and the peak compression force in the physical test was used as the evaluation index. It was found that the optimal range intervals for the Poisson ratio, shear modulus, and normal stiffness of the maize straw were 0.32–0.36, 1.24 × 10^8–1.72 × 10^8 Pa, and 5.9 × 10^6–6.7 × 10^6 N/m^3, respectively. Using the experimental data of the central composite design as the dataset, a GA–BP neural network prediction model for the peak compression force of maize straw was established, analyzed, and evaluated. The GA–BP prediction model’s accuracy was verified via experiments. It was found that the ideal combination of parameters was a Poisson ratio of 0.357, a shear modulus of 1.511 × 10^8 Pa, and a normal stiffness of 6.285 × 10^6 N/m^3 for the maize straw. The results provide a basis for analyzing the damage mechanism of maize straw during the grinding process. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
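As a rough sketch of the genetic-algorithm half of the GA–BP calibration described above, the following toy search explores the reported parameter ranges. The objective function here is a hypothetical stand-in for the trained BP surrogate, which in the paper predicts the relative error between simulated and measured peak compression force:

```python
import random

# Parameter ranges from the steepest-climb test: Poisson ratio,
# shear modulus (Pa), normal stiffness (N/m^3).
BOUNDS = [(0.32, 0.36), (1.24e8, 1.72e8), (5.9e6, 6.7e6)]
TARGET = [0.357, 1.511e8, 6.285e6]  # optimum reported in the abstract

def objective(p):
    # Hypothetical stand-in for the GA-BP surrogate: normalized distance
    # to the reported optimum (the real objective is the relative error
    # of the predicted peak compression force).
    return sum(abs(v - t) / (hi - lo)
               for v, t, (lo, hi) in zip(p, TARGET, BOUNDS))

def ga(pop_size=40, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=objective)
        elite = pop[: pop_size // 4]          # keep the fittest quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # crossover
            i = rng.randrange(3)               # mutate one gene, clamped
            lo, hi = BOUNDS[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) * 0.05)))
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

best = ga()
print(best)
```

The real pipeline replaces `objective` with a backpropagation network trained on central-composite-design simulation runs, so each fitness evaluation is cheap compared with a full DEM simulation.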

17 pages, 20371 KiB  
Article
YOLOv8 Model for Weed Detection in Wheat Fields Based on a Visual Converter and Multi-Scale Feature Fusion
by Yinzeng Liu, Fandi Zeng, Hongwei Diao, Junke Zhu, Dong Ji, Xijie Liao and Zhihuan Zhao
Sensors 2024, 24(13), 4379; https://doi.org/10.3390/s24134379 - 5 Jul 2024
Cited by 3 | Viewed by 2387
Abstract
Accurate weed detection is essential for the precise control of weeds in wheat fields, but weeds and wheat occlude each other and vary widely in size, making weeds in wheat difficult to detect accurately. To achieve precise identification of weeds, wheat weed datasets were constructed, and a wheat field weed detection model, YOLOv8-MBM, based on improved YOLOv8s, was proposed. In this study, a lightweight visual converter (MobileViTv3) was introduced into the C2f module to enhance the detection accuracy of the model by integrating input, local (CNN), and global (ViT) features. Secondly, a bidirectional feature pyramid network (BiFPN) was introduced to enhance the performance of multi-scale feature fusion. Furthermore, to address the weak generalization and slow convergence speed of the CIoU loss function for detection tasks, the bounding box regression loss function MPDIoU was used instead of the CIoU loss function to improve the convergence speed of the model and further enhance detection performance. Finally, the model performance was tested on the wheat weed datasets. The experiments show that the YOLOv8-MBM proposed in this paper is superior to Fast R-CNN, YOLOv3, YOLOv4-tiny, YOLOv5s, YOLOv7, YOLOv9, and other mainstream models in terms of detection performance. The accuracy of the improved model reaches 92.7%. Compared with the original YOLOv8s model, the precision, recall, mAP1, and mAP2 are increased by 10.6%, 8.9%, 9.7%, and 9.3%, respectively. In summary, the YOLOv8-MBM model meets the requirements for accurate weed detection in wheat fields. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
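The MPDIoU bounding-box loss mentioned in this abstract penalizes both low overlap and corner-point distance. A sketch based on the published MPDIoU formulation (not the paper's own implementation), in which the corner-distance penalties are normalized by the squared input-image diagonal:

```python
def mpdiou_loss(pred, gt, img_w, img_h):
    """L_MPDIoU = 1 - IoU + d1^2/(w^2 + h^2) + d2^2/(w^2 + h^2), where d1 and
    d2 are the distances between the top-left and bottom-right corners of the
    predicted and ground-truth boxes (x1, y1, x2, y2), and w, h are the
    input-image width and height."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter)
    diag_sq = img_w ** 2 + img_h ** 2
    d1_sq = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2
    d2_sq = (pred[2] - gt[2]) ** 2 + (pred[3] - gt[3]) ** 2
    return 1 - iou + d1_sq / diag_sq + d2_sq / diag_sq

print(mpdiou_loss((10, 10, 50, 50), (10, 10, 50, 50), 640, 640))  # → 0.0
```

Unlike CIoU, the corner-distance terms keep a useful gradient even when boxes share an aspect ratio or barely overlap, which is the convergence advantage the abstract alludes to.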

12 pages, 2248 KiB  
Communication
Automatic Shrimp Fry Counting Method Using Multi-Scale Attention Fusion
by Xiaohong Peng, Tianyu Zhou, Ying Zhang and Xiaopeng Zhao
Sensors 2024, 24(9), 2916; https://doi.org/10.3390/s24092916 - 2 May 2024
Cited by 1 | Viewed by 2709
Abstract
Shrimp fry counting is an important task for biomass estimation in aquaculture. Accurately counting the shrimp fry in a tank supports not only estimating the production of mature shrimp but also assessing stocking density, which informs subsequent growth monitoring, transportation management, and yield assessment. However, traditional manual counting methods are inefficient and prone to errors, so a more efficient and accurate counting method is urgently needed. In this paper, we first collected and labeled images of shrimp fry in breeding tanks in a constructed experimental environment and generated corresponding density maps using a Gaussian kernel function. We then proposed a multi-scale attention fusion-based shrimp fry counting network, the SFCNet. Experiments showed that the proposed SFCNet outperformed CNN-based baseline counting models, with an MAE of 3.96 and an RMSE of 4.682, providing an effective solution for accurately counting shrimp fry. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
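The Gaussian-kernel density maps described in this abstract turn point annotations into a map whose integral equals the object count, which is what the counting network learns to regress. A minimal, unoptimized sketch with hypothetical fry positions:

```python
from math import exp

def density_map(points, h, w, sigma=2.0):
    """Build a density map where each annotated point contributes a 2-D
    Gaussian normalized to sum to 1, so summing the whole map recovers
    the object count (parameters here are illustrative)."""
    dmap = [[0.0] * w for _ in range(h)]
    for (px, py) in points:
        kernel = {}
        norm = 0.0
        for y in range(h):
            for x in range(w):
                g = exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * sigma ** 2))
                kernel[(y, x)] = g
                norm += g
        for (y, x), g in kernel.items():
            dmap[y][x] += g / norm  # each point integrates to exactly 1
    return dmap

points = [(5, 5), (12, 8), (20, 15)]  # hypothetical annotated fry positions
dmap = density_map(points, h=24, w=24)
count = sum(sum(row) for row in dmap)
print(round(count))  # prints 3
```

Regressing a smooth density map instead of discrete detections is what makes this approach robust to the heavy overlap typical of fry in a tank.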

19 pages, 10732 KiB  
Article
Intrarow Uncut Weed Detection Using You-Only-Look-Once Instance Segmentation for Orchard Plantations
by Rizky Mulya Sampurno, Zifu Liu, R. M. Rasika D. Abeyrathna and Tofael Ahamed
Sensors 2024, 24(3), 893; https://doi.org/10.3390/s24030893 - 30 Jan 2024
Cited by 20 | Viewed by 3290
Abstract
Mechanical weed management is a drudging task that requires manpower and carries risks when conducted within orchard rows. Intrarow weeding must still be performed by manual labor because nets and poles confine the rows and restrict the movement of riding mowers. Autonomous robotic weeders also face challenges in identifying uncut weeds because poles and tree canopies obstruct Global Navigation Satellite System (GNSS) signals. A properly designed intelligent vision system would have the potential to achieve the desired outcome by enabling an autonomous weeder to operate in uncut sections. Therefore, the objective of this study was to develop a vision module, using a custom-trained dataset on YOLO instance segmentation algorithms, to support autonomous robotic weeders in recognizing uncut weeds and obstacles (i.e., fruit tree trunks and fixed poles) within rows. The training dataset was acquired from a pear orchard at the Tsukuba Plant Innovation Research Center (T-PIRC) at the University of Tsukuba, Japan. In total, 5000 images were preprocessed and labeled for training and testing with YOLO models. Four versions of edge-device-dedicated YOLO instance segmentation (YOLOv5n-seg, YOLOv5s-seg, YOLOv8n-seg, and YOLOv8s-seg) were utilized in this research for real-time application with an autonomous weeder. A comparison study evaluated all YOLO models in terms of detection accuracy, model complexity, and inference speed. The smaller YOLOv5-based and YOLOv8-based models were found to be more efficient than the larger models, and YOLOv8n-seg was selected as the vision module for the autonomous weeder. In the evaluation, YOLOv8n-seg had better segmentation accuracy than YOLOv5n-seg, while the latter had the fastest inference time. The performance of YOLOv8n-seg was also acceptable when deployed on a resource-constrained device appropriate for robotic weeders. The results indicate that the proposed deep learning-based approach offers the detection accuracy and inference speed needed for object recognition on edge devices during intrarow weeding operations in orchards. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)

Review

Jump to: Research

36 pages, 5235 KiB  
Review
A Systematic Review on the Advancements in Remote Sensing and Proximity Tools for Grapevine Disease Detection
by Fernando Portela, Joaquim J. Sousa, Cláudio Araújo-Paredes, Emanuel Peres, Raul Morais and Luís Pádua
Sensors 2024, 24(24), 8172; https://doi.org/10.3390/s24248172 - 21 Dec 2024
Cited by 3 | Viewed by 2147
Abstract
Grapevines (Vitis vinifera L.) are one of the most economically relevant crops worldwide, yet they are highly vulnerable to various diseases, causing substantial economic losses for winegrowers. This systematic review evaluates the application of remote sensing and proximal tools for vineyard disease detection, addressing current capabilities, gaps, and future directions in sensor-based field monitoring of grapevine diseases. The review covers 104 studies published between 2008 and October 2024, identified through searches in Scopus and Web of Science, conducted on 25 January 2024, and updated on 10 October 2024. The included studies focused exclusively on the sensor-based detection of grapevine diseases, while excluded studies were not related to grapevine diseases, did not use remote or proximal sensing, or were not conducted in field conditions. The most studied diseases include downy mildew, powdery mildew, Flavescence dorée, esca complex, rots, and viral diseases. The main sensors identified for disease detection are RGB, multispectral, hyperspectral sensors, and field spectroscopy. A trend identified in recent published research is the integration of artificial intelligence techniques, such as machine learning and deep learning, to improve disease detection accuracy. The results demonstrate progress in sensor-based disease monitoring, with most studies concentrating on specific diseases, sensor platforms, or methodological improvements. Future research should focus on standardizing methodologies, integrating multi-sensor data, and validating approaches across diverse vineyard contexts to improve commercial applicability and sustainability, addressing both economic and environmental challenges. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)

37 pages, 2256 KiB  
Review
Internet of Things-Based Automated Solutions Utilizing Machine Learning for Smart and Real-Time Irrigation Management: A Review
by Bryan Nsoh, Abia Katimbo, Hongzhi Guo, Derek M. Heeren, Hope Njuki Nakabuye, Xin Qiao, Yufeng Ge, Daran R. Rudnick, Joshua Wanyama, Erion Bwambale and Shafik Kiraga
Sensors 2024, 24(23), 7480; https://doi.org/10.3390/s24237480 - 23 Nov 2024
Cited by 6 | Viewed by 5181
Abstract
This systematic review critically evaluates the current state and future potential of real-time, end-to-end smart and automated irrigation management systems, focusing on integrating Internet of Things (IoT) and machine learning technologies for enhanced agricultural water use efficiency and crop productivity. The automation of each component of the irrigation management pipeline, from data collection to application, is examined and analyzed for its effectiveness, efficiency, and integration with various precision agriculture technologies. The review also investigates the role of interoperability, standardization, and cybersecurity in IoT-based automated irrigation solutions. Furthermore, existing gaps are identified and solutions proposed for seamless integration across multiple sensor suites, aiming toward fully autonomous and scalable irrigation management. The findings highlight the transformative potential of automated irrigation systems to address global food challenges by optimizing water use and maximizing crop yields. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
