
Intelligent Agricultural Applications with Sensing and Vision System

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (15 November 2023) | Viewed by 6664

Special Issue Editors


Guest Editor
1. College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
2. Key Laboratory of Agricultural Equipment for the Middle and Lower Reaches of the Yangtze River, Ministry of Agriculture, Wuhan 430070, China
Interests: intelligent agricultural vision systems, including X-ray computed tomography imaging; 3D reconstruction based on robotic structured light; deep learning algorithms; cloud computing applications; edge computing applications

Guest Editor
1. College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
2. Key Laboratory of Agricultural Equipment for the Middle and Lower Reaches of the Yangtze River, Ministry of Agriculture, Wuhan 430070, China
Interests: plant phenomics technology; agricultural robot; AI visual inspection technology

Guest Editor
TUM School of Life Sciences, Technical University of Munich, Munich, Germany
Interests: plant phenotyping; precision agriculture

Special Issue Information

Dear Colleagues,

Labor shortages and rising labor costs have become a bottleneck in agriculture, and smart agriculture, an emerging field in which sensor-based technologies play an important role, has been proposed as a solution. With the rapid development of sensing and AI-based vision technologies, intelligent agricultural applications are increasingly being implemented, and advanced sensor technologies are being used to produce novel tools that reduce labor.

This Special Issue, “Intelligent Agricultural Applications with Sensing and Vision System”, aims to bring together recent research and developments concerning novel sensing and vision systems applied in agriculture. We welcome papers addressing sensing and vision systems for a wide range of agricultural tasks, including, but not limited to, the following areas:

  • Advanced imaging technology: hyperspectral, multispectral, fluorescence, thermal, and three-dimensional (3D) imaging;
  • AI-driven phenotypic analysis: crop yield estimation, dynamic growth prediction, health status determination, microorganism and pest identification;
  • High-throughput phenotyping;
  • Airborne sensors (e.g., mounted on UAVs);
  • AI cloud or edge computing applications;
  • Sensors for robotic applications in crop management;
  • Sensors for positioning, navigation and obstacle detection.

Dr. Chenglong Huang
Dr. Shengyong Xu
Prof. Dr. Kang Yu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • high-throughput phenotyping
  • AI-driven phenotypic analysis
  • advanced imaging technology

Published Papers (4 papers)


Research

18 pages, 7540 KiB  
Article
Estimation of Soil Characteristics from Multispectral Sentinel-3 Imagery and DEM Derivatives Using Machine Learning
by Flavio Piccoli, Mirko Paolo Barbato, Marco Peracchi and Paolo Napoletano
Sensors 2023, 23(18), 7876; https://doi.org/10.3390/s23187876 - 14 Sep 2023
Viewed by 783
Abstract
In this paper, different machine learning methodologies have been evaluated for the estimation of multiple soil characteristics over a continental-scale area corresponding to the European region, using multispectral Sentinel-3 satellite imagery and digital elevation model (DEM) derivatives. The results confirm the importance of multispectral imagery in the estimation of soil properties and specifically show that the use of DEM derivatives improves the quality of the estimates, in terms of R2, by about 19% on average. In particular, the estimation of soil texture improves by about 43%, and that of cation exchange capacity (CEC) by about 65%. The contribution of each input source (multispectral and DEM) to the machine learning predictions has also been analyzed; overall, the multispectral features are more important than the DEM derivatives, with an average ratio of 60% versus 40%.
(This article belongs to the Special Issue Intelligent Agricultural Applications with Sensing and Vision System)
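
As a rough sketch of the experimental comparison described in this abstract, the Python snippet below fits a regressor on multispectral features alone and on multispectral plus DEM features, then compares the R2 scores. The random arrays, feature counts, and the random forest model are illustrative placeholders, not the authors' dataset or methodology.

```python
# Hypothetical sketch: compare R^2 of a soil-property regressor trained on
# multispectral features alone vs. multispectral + DEM derivatives.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_samples = 2000
X_spectral = rng.random((n_samples, 21))  # stand-in for Sentinel-3 band reflectances
X_dem = rng.random((n_samples, 4))        # stand-in for elevation, slope, aspect, curvature
y = rng.random(n_samples)                 # stand-in for one soil property, e.g., CEC

def fit_and_score(X, y):
    # Hold out 20% of samples and report test-set R^2.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_tr, y_tr)
    return r2_score(y_te, model.predict(X_te))

r2_spectral = fit_and_score(X_spectral, y)
r2_combined = fit_and_score(np.hstack([X_spectral, X_dem]), y)
print(f"R2 spectral only: {r2_spectral:.3f}, spectral + DEM: {r2_combined:.3f}")
```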

14 pages, 3787 KiB  
Article
A Novel Method for Filled/Unfilled Grain Classification Based on Structured Light Imaging and Improved PointNet++
by Shihao Huang, Zhihao Lu, Yuxuan Shi, Jiale Dong, Lin Hu, Wanneng Yang and Chenglong Huang
Sensors 2023, 23(14), 6331; https://doi.org/10.3390/s23146331 - 12 Jul 2023
Viewed by 1008
Abstract
China is the largest producer and consumer of rice, and the classification of filled/unfilled rice grains is of great significance for rice breeding and genetic analysis. The traditional method for filled/unfilled rice grain identification is generally manual, with the disadvantages of low efficiency, poor repeatability, and low precision. In this study, we propose a novel method for filled/unfilled grain classification based on structured light imaging and Improved PointNet++. First, the 3D point cloud data of rice grains were obtained by structured light imaging. Then, dedicated processing algorithms were developed for single-grain segmentation and for data enhancement with normal vectors. Finally, the PointNet++ network was improved by adding an additional set abstraction layer and combining the maximum pooling of normal vectors to realize filled/unfilled rice grain point cloud classification. To verify the model performance, Improved PointNet++ was compared with six machine learning methods, PointNet, and PointConv. The results showed that the best machine learning model was XGBoost, with a classification accuracy of 91.99%, while the classification accuracy of Improved PointNet++ was 98.50%, outperforming PointNet (93.75%) and PointConv (92.25%). In conclusion, this study demonstrates a novel and effective method for filled/unfilled grain recognition.
(This article belongs to the Special Issue Intelligent Agricultural Applications with Sensing and Vision System)
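
The set abstraction layers that Improved PointNet++ builds on each begin by downsampling the cloud with farthest point sampling. The following NumPy sketch is an illustrative reimplementation of that sampling step on a simulated grain cloud; it is not the authors' code.

```python
# Illustrative farthest point sampling, the downsampling step used inside
# PointNet++ set abstraction layers.
import numpy as np

def farthest_point_sampling(points: np.ndarray, n_samples: int) -> np.ndarray:
    """Return indices of n_samples points chosen to greedily maximize spread.

    points: (N, 3) array of point cloud coordinates.
    """
    n = points.shape[0]
    chosen = np.zeros(n_samples, dtype=np.int64)
    # Each point's squared distance to the nearest already-chosen point.
    dist = np.full(n, np.inf)
    chosen[0] = 0  # start from an arbitrary seed point
    for i in range(1, n_samples):
        diff = points - points[chosen[i - 1]]
        dist = np.minimum(dist, np.einsum("ij,ij->i", diff, diff))
        chosen[i] = int(np.argmax(dist))  # farthest from all chosen so far
    return chosen

# Usage: downsample a simulated grain cloud to 512 points before grouping.
cloud = np.random.default_rng(1).random((4096, 3))
sampled = cloud[farthest_point_sampling(cloud, 512)]
print(sampled.shape)  # (512, 3)
```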

15 pages, 3067 KiB  
Article
Fruit Detection and Counting in Apple Orchards Based on Improved Yolov7 and Multi-Object Tracking Methods
by Jing Hu, Chuang Fan, Zhoupu Wang, Jinglin Ruan and Suyin Wu
Sensors 2023, 23(13), 5903; https://doi.org/10.3390/s23135903 - 25 Jun 2023
Cited by 6 | Viewed by 2474
Abstract
With the increasing popularity of online fruit sales, accurately predicting fruit yields has become crucial for optimizing logistics and storage strategies. However, existing manual vision-based systems and sensor methods have proven inadequate for the complex problem of fruit yield counting, as they struggle with issues such as crop overlap and variable lighting conditions. Recently, CNN-based object detection models have emerged as a promising solution in the field of computer vision, but their effectiveness is limited in agricultural scenarios due to challenges such as occlusion and dissimilarity among fruits of the same kind. To address this issue, we propose a novel variant model that combines the self-attention mechanism of the Vision Transformer, a non-CNN network architecture, with Yolov7, a state-of-the-art object detection model. Our model utilizes two attention mechanisms, CBAM and CA, and is trained and tested on a dataset of apple images. To enable fruit counting across video frames in complex environments, we incorporate two multi-object tracking methods based on Kalman filtering and motion trajectory prediction, namely SORT and Cascade-SORT. Our results show that the Yolov7-CA model achieved a 91.3% mAP and a 0.85 F1 score, representing a 4% improvement in mAP and a 0.02 improvement in F1 score compared to using Yolov7 alone. Furthermore, the multi-object tracking methods demonstrated a significant improvement in MAE for inter-frame counting across all three test videos, with a 0.642 improvement over using Yolov7 alone. These findings suggest that our proposed model has the potential to improve fruit yield assessment methods and could inform decision-making in the fruit industry.
(This article belongs to the Special Issue Intelligent Agricultural Applications with Sensing and Vision System)
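
The counting-by-tracking idea in this abstract can be sketched in a simplified form: a detector proposes boxes in each frame, a tracker links them across frames, and the yield count is the number of distinct track IDs. The Python sketch below substitutes greedy IoU matching for SORT's Kalman prediction and Hungarian assignment, and its detections are made-up placeholders.

```python
# Simplified counting-by-tracking: link per-frame detections by IoU and count
# distinct track IDs. Not SORT itself; tracks unmatched in a frame are dropped.
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def count_fruits(frames: List[List[Box]], iou_thr: float = 0.3) -> int:
    tracks: Dict[int, Box] = {}  # track id -> last matched box
    next_id = 0
    for detections in frames:
        unmatched = dict(tracks)
        new_tracks: Dict[int, Box] = {}
        for det in detections:
            # Greedily match each detection to the best-overlapping live track.
            best = max(unmatched, key=lambda t: iou(unmatched[t], det), default=None)
            if best is not None and iou(unmatched[best], det) >= iou_thr:
                new_tracks[best] = det
                del unmatched[best]
            else:
                new_tracks[next_id] = det  # no overlap: start a new track
                next_id += 1
        tracks = new_tracks
    return next_id  # every id ever created corresponds to one counted fruit

# Two frames of fake detections: the same apple seen twice, plus one new apple.
print(count_fruits([[(0, 0, 10, 10)], [(1, 1, 11, 11), (50, 50, 60, 60)]]))  # 2
```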

17 pages, 4085 KiB  
Article
End-to-End Point Cloud Completion Network with Attention Mechanism
by Yaqin Li, Binbin Han, Shan Zeng, Shengyong Xu and Cao Yuan
Sensors 2022, 22(17), 6439; https://doi.org/10.3390/s22176439 - 26 Aug 2022
Cited by 2 | Viewed by 1607
Abstract
We propose a conceptually simple, general framework and end-to-end approach to point cloud completion, entitled PCA-Net. This approach differs from existing methods in that it does not require a “simple” network, such as multilayer perceptrons (MLPs), to generate a coarse point cloud and then a “complex” network, such as auto-encoders or transformers, to enhance local details. It directly learns the mapping between missing and complete points, ensuring that the structure of the input missing point cloud remains unchanged while accurately predicting the complete points. The approach follows the minimalist design of U-Net. In the encoder, we encode the point clouds into point cloud blocks by iterative farthest point sampling (IFPS) and k-nearest neighbors, and then extract the deep interaction features between the missing point cloud blocks with an attention mechanism. In the decoder, we introduce a new trilinear interpolation method to recover point cloud details, drawing on the coordinate space and feature space of low-resolution point clouds as well as the missing point cloud information. This paper also proposes a method to generate multi-view missing point cloud data using a 3D point cloud hidden point removal algorithm, so that each 3D point cloud model generates a missing point cloud from each of eight uniformly distributed camera poses. Experiments validate the effectiveness and superiority of PCA-Net in several challenging point cloud completion tasks, and PCA-Net also shows great versatility and robustness in real-world missing point cloud completion.
(This article belongs to the Special Issue Intelligent Agricultural Applications with Sensing and Vision System)
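
The multi-view missing-data generation step described above can be approximated with Open3D's hidden point removal, as in the sketch below. The camera circle, distance factor, and projection radius are illustrative guesses, not the paper's exact settings.

```python
# Sketch: generate partial ("missing") point clouds by keeping only the points
# visible from each of several camera poses, via Open3D hidden point removal.
import numpy as np
import open3d as o3d

def partial_views(points: np.ndarray, n_views: int = 8, dist: float = 3.0):
    """Yield one visible-point subset per camera pose on a circle around the cloud."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    diameter = np.linalg.norm(points.max(0) - points.min(0))
    radius = diameter * 100  # spherical-projection radius suggested by Open3D docs
    center = points.mean(0)
    for k in range(n_views):
        angle = 2 * np.pi * k / n_views
        offset = np.array([np.cos(angle), np.sin(angle), 0.0]) * dist * diameter
        camera = (center + offset).tolist()
        _, visible_idx = pcd.hidden_point_removal(camera, radius)
        yield points[np.asarray(visible_idx)]

# Usage: eight partial clouds from a random blob standing in for a 3D model.
cloud = np.random.default_rng(2).random((2048, 3))
for partial in partial_views(cloud):
    print(partial.shape)
```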
