Applications of Deep Learning in Smart Agriculture

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Precision and Digital Agriculture".

Deadline for manuscript submissions: closed (20 August 2022) | Viewed by 47846

Special Issue Editors

Centre Eau Terre Environnement, Institut National de la Recherche Scientifique (INRS), Quebec City, QC G1K 9A9, Canada
Interests: remote sensing; geomatics; analysis of optical, SAR, and UAV Earth observations through artificial intelligence and machine learning approaches for agro-environmental applications
Département de Géomatique Appliquée, Université de Sherbrooke, Sherbrooke, QC J1K 2R1, Canada
Interests: remote sensing; deep learning; precision agriculture
Centre Eau Terre Environnement, INRS, 490 Rue de la Couronne, Québec, QC G1K 9A9, Canada
Interests: remote sensing; precision agriculture; deep learning; geomatics; spatial and temporal variability of water resources; microclimate; UAVs

Special Issue Information

Dear Colleagues,

Smart agriculture, comprising precision agriculture, digital agriculture, and other new concepts in agricultural research and practice, has gained increasing attention in recent years due to the rising importance of sustainable food production and resource management, as well as the opportunities offered by the emergence of several digital hardware and software technologies. Accordingly, the development of geospatial, information technology, Internet of Things, robotics, artificial intelligence, and data analytics applications plays an essential role in modern farm management. Traditional approaches to collecting information and knowledge for the monitoring of agricultural fields are laborious and time-consuming, and may introduce uncertainties. Therefore, technological advances in remote sensing platforms and sensors, digital web applications, and cloud data storage and management centers, as well as the development of intelligent data analysis methods and decision support systems, have improved the quality of agricultural land monitoring to meet agricultural requirements. Smart agriculture, based on today’s variable-rate technology, geospatial technology, sensor technology, the Internet of Things, open-source data and algorithms, machine learning (e.g., deep learning), and high-performance computing, can benefit from these opportunities and address the new food production challenges related to cropping system optimization for improving productivity and reducing environmental impacts.

This is a joint Special Issue of Agronomy and Remote Sensing, titled “Applications of Deep Learning in Smart Agriculture,” that aims to present state-of-the-art and original analytical methods based on deep learning for transforming diverse advanced agro-environmental data from machinery, drone, airborne, and satellite sensors into information relevant to various agronomy applications. Research papers that examine the latest developments in concepts, methods, techniques, and case study applications are welcome. According to the aims and scope of these journals, articles based on the application of deep learning to agricultural remote sensing data can be submitted to Remote Sensing, while articles presenting analyses of other types of data or technologies in smart/precision agriculture can be submitted to Agronomy.

You may choose our Joint Special Issue in Remote Sensing.

Dr. Saeid Homayouni
Dr. Yacine Bouroubi
Dr. Karem Chokmani
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • smart agriculture
  • digital agriculture
  • precision agriculture
  • variable-rate technology
  • automatic agricultural screening
  • deep learning
  • computer vision
  • convolutional neural networks
  • recurrent neural networks
  • data mining
  • data analytics
  • Big Data
  • modeling
  • remote sensing (satellite, airborne, UAV imagery, and proximal sensing)
  • crop monitoring and mapping
  • disease detection
  • phenological characterization
  • global positioning system and geospatial information technology
  • robotics
  • Internet of Things

Published Papers (14 papers)


Research

21 pages, 6074 KiB  
Article
Guided Filtered Sparse Auto-Encoder for Accurate Crop Mapping from Multitemporal and Multispectral Imagery
Agronomy 2022, 12(11), 2615; https://doi.org/10.3390/agronomy12112615 - 24 Oct 2022
Cited by 2 | Viewed by 1192
Abstract
Accurate crop mapping is a fundamental requirement in various agricultural applications, such as inventory, yield modeling, and resource management. However, it is challenging due to crop fields’ high spectral, spatial, and temporal variabilities. New technology in space-borne Earth observation systems has provided high spatial and temporal resolution image data as a valuable source of information, which can produce accurate crop maps through efficient analytical approaches. Spatial information is highly important for accurate crop mapping; a window-based strategy is a common way to extract spatial information by considering neighbourhood information. However, crop field boundaries implicitly exist in image data and can be more helpful in identifying different crop types. This study proposes the Guided Filtered Sparse Auto-Encoder (GFSAE), a deep learning framework guided implicitly by field boundary information to produce accurate crop maps. The proposed GFSAE was evaluated over two time-series datasets of high-resolution PlanetScope (3 m) and RapidEye (5 m) imagery, and the results were compared against the standard Sparse Auto-Encoder (SAE). The results show impressive improvements in all performance metrics for both datasets (namely 3.69% in Overall Accuracy (OA), 0.04 in Kappa (K), and 4.15% in F-score for the PlanetScope dataset, and 3.71% in OA, 0.05 in K, and 1.61% in F-score for the RapidEye dataset). Comparing accuracy metrics in field boundary areas also proves the superiority of GFSAE over the original classifier in these areas, making it suitable for field boundary delineation applications as well.
(This article belongs to the Special Issue Applications of Deep Learning in Smart Agriculture)
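The sparsity constraint that distinguishes a sparse auto-encoder from a plain one is usually a KL-divergence penalty on the average hidden-unit activations. As a hedged illustration (not the paper's actual GFSAE code; the target `rho` and the activation shapes here are assumptions), the penalty can be computed as:

```python
import numpy as np

def kl_sparsity_penalty(activations, rho=0.05, eps=1e-8):
    """KL-divergence sparsity penalty used when training sparse auto-encoders.

    activations: (n_samples, n_hidden) array of sigmoid hidden activations.
    rho: target average activation of each hidden unit.
    """
    # Average activation per hidden unit, clipped away from 0 and 1 for the logs.
    rho_hat = np.clip(activations.mean(axis=0), eps, 1 - eps)
    kl = rho * np.log(rho / rho_hat) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))
    return kl.sum()

# Hidden units whose mean activation matches the target incur no penalty.
acts = np.full((100, 4), 0.05)
print(kl_sparsity_penalty(acts))  # 0.0
```

Units whose mean activation drifts away from `rho` are penalized, pushing most units toward near-zero activity on any given input, which is what makes the learned code sparse.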

17 pages, 10441 KiB  
Article
An Improved Lightweight Network for Real-Time Detection of Apple Leaf Diseases in Natural Scenes
Agronomy 2022, 12(10), 2363; https://doi.org/10.3390/agronomy12102363 - 30 Sep 2022
Cited by 6 | Viewed by 1751
Abstract
Achieving rapid and accurate detection of apple leaf diseases in the natural environment is essential for the growth of apple plants and the development of the apple industry. In recent years, deep learning has been widely studied and applied to apple leaf disease detection. However, existing networks have too many parameters to be easily deployed, or lack research on leaf diseases in complex backgrounds, limiting their effectiveness in real agricultural environments. This study proposes a novel deep learning network, YOLOX-ASSANano, an improved lightweight real-time model for apple leaf disease detection based on YOLOX-Nano. We improved the YOLOX-Nano backbone using a designed asymmetric ShuffleBlock, a CSP-SA module, and blueprint-separable convolution (BSConv), which significantly enhance feature-extraction capability and boost detection performance. In addition, we constructed a multi-scene apple leaf disease dataset (MSALDD) for experiments. The experimental results show that the YOLOX-ASSANano model, with only 0.83 MB of parameters, achieves 91.08% mAP on MSALDD and 58.85% mAP on the public dataset PlantDoc at a speed of 122 FPS. This study indicates that YOLOX-ASSANano provides a feasible solution for the real-time diagnosis of apple leaf diseases in natural scenes and could be helpful for the detection of other plant diseases.
(This article belongs to the Special Issue Applications of Deep Learning in Smart Agriculture)
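Lightweight designs such as ShuffleBlock and BSConv rely on factorizing a standard convolution into cheaper pieces. A minimal sketch of why this shrinks the model, using the classic depthwise-separable factorization as a stand-in (the exact BSConv layout in the paper may differ):

```python
def conv_params(c_in, c_out, k):
    # Standard k x k convolution (bias terms omitted).
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # One k x k depthwise filter per input channel + a 1 x 1 pointwise conv.
    return c_in * k * k + c_in * c_out

std = conv_params(64, 128, 3)                  # 73728 parameters
sep = depthwise_separable_params(64, 128, 3)   # 576 + 8192 = 8768 parameters
print(std, sep, round(std / sep, 1))           # roughly an 8x reduction
```

The same arithmetic applied across every layer of a backbone is what brings a detector down to sub-megabyte size.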

18 pages, 7657 KiB  
Article
Classification Method of Significant Rice Pests Based on Deep Learning
Agronomy 2022, 12(9), 2096; https://doi.org/10.3390/agronomy12092096 - 01 Sep 2022
Cited by 8 | Viewed by 2464
Abstract
Rice pests are one of the main factors affecting rice yield. The accurate identification of pests facilitates timely preventive measures to avoid economic losses. Existing open-source datasets related to rice pest identification mostly include only a small number of samples, or suffer from inter-class and intra-class variance and data imbalance challenges, which limit the application of deep learning techniques in the field of rice pest identification. In this paper, based on the IP102 dataset, we first reorganized a large-scale dataset for rice pest identification using a web crawler and manual screening. This dataset was named IP_RicePests. Specifically, the dataset includes 8248 images belonging to 14 categories. The IP_RicePests dataset was then expanded to 14,000 images via the ARGAN data augmentation technique to address the difficulty of obtaining large samples of rice pests. Finally, the parameters trained on the public ImageNet dataset using the VGGNet, ResNet, and MobileNet networks were used as the initial values of the target training network to achieve image classification in the field of rice pests. The experimental results show that all three classification networks combined with transfer learning have good recognition accuracy, with the highest classification accuracy obtained on the IP_RicePests dataset by fine-tuning the parameters of the VGG16 network. In addition, after ARGAN data augmentation, all three models demonstrate notable accuracy improvements on the augmented IP_RicePests dataset, with fine-tuning of the VGG16 network parameters again achieving the highest accuracy. This demonstrates that CNNs combined with transfer learning can employ the ARGAN data augmentation technique to overcome the difficulty of obtaining large sample sizes and improve the efficiency of rice pest identification. This study provides foundational data and technical support for rice pest identification.
(This article belongs to the Special Issue Applications of Deep Learning in Smart Agriculture)
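Transfer learning of the kind described, reusing an ImageNet-pretrained backbone and training a new classification head on the target data, can be sketched with synthetic "backbone features" standing in for real CNN activations (the feature clusters, dimensions, and learning rate below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen backbone features: two well-separated clusters.
X = np.vstack([rng.normal(-1.0, 0.3, (50, 8)), rng.normal(1.0, 0.3, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

# New classification head trained from scratch (binary logistic regression).
w, b = np.zeros(8), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)         # gradient of the cross-entropy loss
    grad_b = (p - y).mean()
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

acc = ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(acc)
```

With good pretrained features, even a tiny head separates the classes; fine-tuning, as in the paper, additionally updates the backbone weights at a small learning rate.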

18 pages, 14257 KiB  
Article
Maize Small Leaf Spot Classification Based on Improved Deep Convolutional Neural Networks with a Multi-Scale Attention Mechanism
Agronomy 2022, 12(4), 906; https://doi.org/10.3390/agronomy12040906 - 09 Apr 2022
Cited by 16 | Viewed by 2809
Abstract
Maize small leaf spot (Bipolaris maydis) is one of the most important diseases of maize. When the severity of the disease cannot be accurately identified, the cost of pesticide application increases every year and the agricultural ecological environment is polluted. To address this problem, this study proposes a novel deep learning network, DISE-Net. We designed a dilated-inception module, instead of the traditional inception module, to strengthen multi-scale feature extraction, and then embedded an attention module to learn the importance of inter-channel relationships for input features. In addition, a dense connection strategy is used in model building to strengthen channel feature propagation. In this paper, we constructed a dataset of maize small leaf spot comprising 1268 images of four disease grades and healthy leaves. Comparative experiments show that DISE-Net, with a test accuracy of 97.12%, outperforms the classical VGG16 (91.11%), ResNet50 (89.77%), InceptionV3 (90.97%), MobileNetv1 (92.51%), MobileNetv2 (92.17%) and DenseNet121 (94.25%). In addition, Grad-CAM network visualization shows that DISE-Net pays more attention to the key areas when making its decisions. The results show that DISE-Net is suitable for the classification of maize small leaf spot in the field.
(This article belongs to the Special Issue Applications of Deep Learning in Smart Agriculture)
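A dilated-inception module enlarges the receptive field without adding parameters. The standard receptive-field recurrence for a stack of convolutions shows the effect (the layer configurations below are illustrative, not DISE-Net's exact branches):

```python
def receptive_field(layers):
    """Receptive field of a stack of convolution layers.

    layers: list of (kernel_size, stride, dilation) tuples, input to output.
    """
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += (k - 1) * d * jump  # each layer widens the field by its effective kernel
        jump *= s                 # stride compounds the spacing between samples
    return rf

# Three 3x3 convs at stride 1: plain vs dilated (rates 1, 2, 4).
plain = receptive_field([(3, 1, 1)] * 3)
dilated = receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)])
print(plain, dilated)  # 7 15
```

Both stacks have identical parameter counts, but the dilated one sees more than twice the spatial context, which is what "multi-scale" branches exploit.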

14 pages, 944 KiB  
Article
A Deep Learning-Based Sensor Modeling for Smart Irrigation System
Agronomy 2022, 12(1), 212; https://doi.org/10.3390/agronomy12010212 - 16 Jan 2022
Cited by 22 | Viewed by 4534
Abstract
The use of Internet of Things (IoT)-based physical sensors to perceive the environment is a prevalent and global approach. However, one major problem is the reliability of physical sensor nodes, which makes it difficult for a real-time system to identify whether a physical sensor is transmitting correct values or malfunctioning due to external disturbances affecting the system, such as noise. In this paper, the use of Long Short-Term Memory (LSTM)-based neural networks is proposed as an alternative approach to address this problem. The proposed solution is tested for a smart irrigation system, where a physical sensor is replaced by a neural sensor. The Smart Irrigation System (SIS) contains several physical sensors, which transmit temperature, humidity, and soil moisture data to calculate the transpiration in a particular field. The real-world values are taken from an agricultural field of lemons near Ghadap, in the Sindh province of Pakistan. The LM35 sensor is used for temperature, the DHT-22 for humidity, and we designed a customized sensor in our lab for the acquisition of moisture values. The results of the experiment show that the proposed deep learning-based neural sensor predicts the real-time values with high accuracy, especially the temperature values. The humidity and moisture values are also in an acceptable range. Our results highlight the possibility of using a neural network, referred to here as a neural sensor, to complement the functioning of a physical sensor deployed in an agricultural field in order to make smart irrigation systems more reliable.
(This article belongs to the Special Issue Applications of Deep Learning in Smart Agriculture)
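The neural sensor above is built on LSTM units, which carry a cell state across time so that a reading can be predicted from the recent history of the sequence. A minimal numpy sketch of a single LSTM step over a short sensor sequence (weight shapes and random initialization are illustrative assumptions, not the paper's trained model):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; the four gates are stacked as [input, forget, cell, output]."""
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = 1 / (1 + np.exp(-z[:n]))        # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))     # forget gate
    g = np.tanh(z[2*n:3*n])             # candidate cell state
    o = 1 / (1 + np.exp(-z[3*n:]))      # output gate
    c_new = f * c + i * g               # blend old memory with new candidate
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 3, 8                      # e.g., temperature, humidity, moisture inputs
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h = c = np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):   # a 10-step synthetic sensor sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (8,)
```

In the paper's setting, a regression layer on top of the final hidden state would emit the predicted sensor value.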

20 pages, 1439 KiB  
Article
Plant Disease Identification Using Shallow Convolutional Neural Network
Agronomy 2021, 11(12), 2388; https://doi.org/10.3390/agronomy11122388 - 24 Nov 2021
Cited by 31 | Viewed by 4042
Abstract
Various plant diseases are major threats to agriculture. For the timely and effective control of different plant diseases, automated identification of diseases is highly beneficial. So far, different techniques have been used to identify diseases in plants. Deep learning is among the most widely used techniques in recent times due to its impressive results. In this work, we propose two methods, namely shallow VGG with RF and shallow VGG with Xgboost, to identify the diseases. The proposed models are compared with other hand-crafted and deep learning-based approaches. The experiments are carried out on three different plants, namely corn, potato, and tomato. The considered corn diseases are Blight, Common rust, and Gray leaf spot; the potato diseases are early blight and late blight; and the tomato diseases are bacterial spot, early blight, and late blight. The results show that our shallow VGG with Xgboost model outperforms different deep learning models in terms of accuracy, precision, recall, F1-score, and specificity. Shallow Visual Geometric Group (VGG) with Xgboost gives the highest accuracy rate of 94.47% on the corn, 98.74% on the potato, and 93.91% on the tomato dataset. The models are also tested with field images of potato, corn, and tomato. Even on field images, the average accuracies obtained using shallow VGG with Xgboost are 94.22%, 97.36%, and 93.14%, respectively.
(This article belongs to the Special Issue Applications of Deep Learning in Smart Agriculture)
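Gradient boosting of the kind Xgboost implements fits each new weak learner to the residuals of the current ensemble. A toy regression version with decision stumps illustrates the mechanism (illustrative only; Xgboost adds regularization, second-order gradients, and full trees):

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-threshold split minimizing squared error on the residuals."""
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1:]  # (threshold, left value, right value)

def gradient_boost(x, y, n_rounds=20, lr=0.3):
    pred = np.full_like(y, y.mean(), dtype=float)  # start from the mean
    stumps = []
    for _ in range(n_rounds):
        t, lv, rv = fit_stump(x, y - pred)         # fit a stump to the residuals
        pred += lr * np.where(x <= t, lv, rv)      # add a shrunken correction
        stumps.append((t, lv, rv))
    return pred, stumps

x = np.linspace(0, 1, 40)
y = (x > 0.5).astype(float)        # a step function target
pred, _ = gradient_boost(x, y)
print(np.abs(pred - y).max())      # residual shrinks geometrically per round
```

In the paper's pipeline, the stumps would act on shallow-VGG feature vectors rather than a 1-D input, but the residual-fitting loop is the same idea.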

20 pages, 4245 KiB  
Article
Ginger Seeding Detection and Shoot Orientation Discrimination Using an Improved YOLOv4-LITE Network
Agronomy 2021, 11(11), 2328; https://doi.org/10.3390/agronomy11112328 - 17 Nov 2021
Cited by 12 | Viewed by 2077
Abstract
A consistent orientation of ginger shoots when sowing ginger is more conducive to high yields and later harvesting. However, current ginger sowing mainly relies on manual methods, seriously hindering the ginger industry’s development. Existing ginger seeders still require manual assistance in placing ginger seeds to achieve consistent ginger shoot orientation. To address the problem that existing ginger seeders have difficulty automating seeding while ensuring consistent ginger shoot orientation, this study applies deep learning-based object detection to the detection of ginger and proposes a ginger recognition network based on YOLOv4-LITE, which first uses MobileNetv2 as the backbone network of the model and then adds coordinate attention to MobileNetv2 and replaces part of the traditional convolutions with Do-Conv convolutions. After predicting the ginger and its shoots, this paper determines ginger shoot orientation by calculating the relative positions of the largest ginger shoot and the ginger. The mean average precision, parameter count (Params), and giga floating-point operations (GFLOPs) of the proposed YOLOv4-LITE on the test set reached 98.73%, 47.99 M, and 8.74, respectively. The experimental results show that YOLOv4-LITE achieves ginger seed detection and ginger shoot orientation calculation, providing a technical guarantee for automated ginger seeding.
(This article belongs to the Special Issue Applications of Deep Learning in Smart Agriculture)
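The shoot-orientation step, computing the direction from the detected ginger to its largest detected shoot, reduces to an angle between two bounding-box centres. A sketch assuming (x1, y1, x2, y2) boxes with y pointing up (the paper's exact coordinate convention may differ):

```python
import math

def shoot_orientation(ginger_box, shoot_box):
    """Angle in degrees (0 = east, counter-clockwise) from the ginger centre
    to the centre of its largest detected shoot."""
    gx = (ginger_box[0] + ginger_box[2]) / 2
    gy = (ginger_box[1] + ginger_box[3]) / 2
    sx = (shoot_box[0] + shoot_box[2]) / 2
    sy = (shoot_box[1] + shoot_box[3]) / 2
    return math.degrees(math.atan2(sy - gy, sx - gx))

# Shoot centred directly above the ginger.
print(shoot_orientation((0, 0, 10, 10), (4, 10, 6, 14)))  # 90.0
```

A seeder controller could then compare this angle against the desired sowing orientation and rotate the seed accordingly.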

18 pages, 3004 KiB  
Article
Deep Learning-Based Growth Prediction System: A Use Case of China Agriculture
Agronomy 2021, 11(8), 1551; https://doi.org/10.3390/agronomy11081551 - 03 Aug 2021
Cited by 18 | Viewed by 4150
Abstract
Agricultural advancements have significantly impacted people’s lives and their surroundings in recent years. Insufficient knowledge of the whole agricultural production system and conventional methods of irrigation have limited agricultural yields in the past. The remote sensing innovations recently implemented in agriculture have dramatically revolutionized production efficiency by offering unparalleled opportunities for convenient, versatile, and quick collection of land images to gather critical details on crop conditions. These innovations have enabled automated data collection, simulation, and interpretation based on crop analytics facilitated by deep learning techniques. This paper aims to reveal the transformative patterns of Chinese agrarian development by focusing on major fruit production from 1980 to 2050, taking into account various forms of fruit production data (e.g., apples, bananas, citrus fruits, pears, and grapes). In this study, we used production data for different fruits grown in China to predict the future production of these fruits. The study employs deep neural networks to project future fruit production based on the statistics issued by China’s National Bureau of Statistics on total fruit growth output for this period. The proposed method exhibits encouraging results, with an accuracy of 95.56% calculated using an accuracy formula based on fruit production variation. The authors further suggest that the AGR-DL (agricultural deep learning) method could be helpful for developing countries. The results suggest that agricultural development in China is acceptable but demands further improvement, and that the government needs to prioritize expanding fruit production by establishing new strategies that help cultivators boost their performance.
(This article belongs to the Special Issue Applications of Deep Learning in Smart Agriculture)

17 pages, 5042 KiB  
Article
Lychee Surface Defect Detection Based on Deep Convolutional Neural Networks with GAN-Based Data Augmentation
Agronomy 2021, 11(8), 1500; https://doi.org/10.3390/agronomy11081500 - 28 Jul 2021
Cited by 32 | Viewed by 3727
Abstract
The performance of fruit surface defect detection is easily affected by factors such as noisy backgrounds and foliage occlusion. In this study, we choose lychee as the fruit type for investigating surface quality. Lychees are hard to preserve and have to be stored at low temperatures to keep fresh. Additionally, the surface of lychees is subject to scratches and cracks during harvesting and processing. To explore the feasibility of automating defective-surface detection for lychees, we build a dataset with 3743 samples divided into three categories, namely mature, defects, and rot. The original dataset suffers from an imbalanced distribution. To address this, we adopt a transformer-based generative adversarial network (GAN) for data augmentation, which effectively enhances the original training set with more diverse samples to rebalance the three categories. In addition, we investigate three deep convolutional neural network (DCNN) models, including SSD-MobileNet V2, Faster RCNN-ResNet50, and Faster RCNN-Inception-ResNet V2, trained under different settings for an extensive comparison study. The results show that all three models demonstrate consistent performance gains in mean average precision (mAP) with the application of GAN-based augmentation. The rebalanced dataset also reduces the inter-category discrepancy, allowing a DCNN model to be trained equally across categories. In addition, the qualitative results show that models trained under the augmented setting can better identify the critical regions and the object boundary, leading to gains in mAP. Lastly, we conclude that the most cost-effective model, SSD-MobileNet V2, presents a comparable mAP (91.81%) and a superior inference speed (102 FPS), suitable for real-time detection in industrial-level applications.
(This article belongs to the Special Issue Applications of Deep Learning in Smart Agriculture)
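The mAP figures reported above are built on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal IoU for axis-aligned (x1, y1, x2, y2) boxes:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero when boxes are disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 ≈ 0.333
```

A detection counts as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), and mAP averages the resulting precision-recall curves over classes.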

22 pages, 3799 KiB  
Article
Deep Learning-Based Estimation of Crop Biophysical Parameters Using Multi-Source and Multi-Temporal Remote Sensing Observations
Agronomy 2021, 11(7), 1363; https://doi.org/10.3390/agronomy11071363 - 03 Jul 2021
Cited by 17 | Viewed by 3647
Abstract
Remote sensing data are considered one of the primary data sources for precision agriculture. Several studies have demonstrated the excellent capability of radar and optical imagery for crop mapping and biophysical parameter estimation. This paper aims at modeling crop biophysical parameters, e.g., Leaf Area Index (LAI) and biomass, using a combination of radar and optical Earth observations. We extracted several radar features from polarimetric Synthetic Aperture Radar (SAR) data and Vegetation Indices (VIs) from optical images to model crops’ LAI and dry biomass. Then, the mutual correlations between these features and the Random Forest feature importance were calculated. We considered two scenarios for estimating crop parameters. First, Machine Learning (ML) algorithms, e.g., Support Vector Regression (SVR), Random Forest (RF), Gradient Boosting (GB), and Extreme Gradient Boosting (XGB), were utilized to estimate the two crop biophysical parameters. To this end, crops’ dry biomass and LAI were estimated using three input datasets: (1) SAR polarimetric features; (2) spectral VIs; and (3) both SAR and optical features combined. Second, a deep artificial neural network was created. These input data were fed to the mentioned algorithms and evaluated against in-situ measurements. The observations of three cash crops, including soybean, corn, and canola, were collected over Manitoba, Canada, during the Soil Moisture Active Passive Validation Experiment 2012 (SMAPVEX12) campaign. The results showed that GB and XGB have great potential in parameter estimation and remarkably improved accuracy. Our results also demonstrated a significant improvement in dry biomass and LAI estimation compared to previous studies. For LAI, the validation Root Mean Square Error (RMSE) was 0.557 m2/m2 for canola using GB, 0.298 m2/m2 for corn using GB, and 0.233 m2/m2 for soybean using XGB. For dry biomass, the RMSE was 26.29 g/m2 for canola using SVR, 57.97 g/m2 for corn using RF, and 5.00 g/m2 for soybean using GB. The results revealed that the deep artificial neural network had better potential to estimate crop parameters than the ML algorithms.
(This article belongs to the Special Issue Applications of Deep Learning in Smart Agriculture)
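Among the spectral VIs commonly derived from optical imagery for LAI and biomass modeling is NDVI, computed per pixel from near-infrared and red reflectance (the reflectance values below are illustrative):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance.

    Works elementwise on arrays; eps guards against division by zero.
    """
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

# Dense, healthy canopy reflects strongly in NIR and absorbs red light.
print(ndvi(0.5, 0.08))  # ≈ 0.724
```

Indices like this, stacked with SAR polarimetric features, form the input vectors fed to the regression models in the study.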

16 pages, 2690 KiB  
Article
A Densely Connected GRU Neural Network Based on Coattention Mechanism for Chinese Rice-Related Question Similarity Matching
Agronomy 2021, 11(7), 1307; https://doi.org/10.3390/agronomy11071307 - 27 Jun 2021
Cited by 6 | Viewed by 1682
Abstract
In the question-and-answer (Q&A) communities of the “China Agricultural Technology Extension Information Platform”, thousands of rice-related Chinese questions are newly added every day. The rapid detection of questions with the same semantics is key to the success of a rice-related intelligent Q&A system. To allow the fast and automatic detection of semantically identical rice-related questions, we propose a new method based on Coattention-DenseGRU (Gated Recurrent Unit). According to the characteristics of rice-related questions, we applied Word2vec with the TF-IDF (Term Frequency–Inverse Document Frequency) method to process and analyze the text data, and compared it with the plain Word2vec, GloVe, and TF-IDF methods. Combined with an agricultural word segmentation dictionary, Word2vec with TF-IDF effectively solves the problem of high-dimensional and sparse data in rice-related text. Each network layer employs the connection information of features and the hidden features of all previous recursive layers. To alleviate the growth in feature vector size caused by dense splicing, an autoencoder is used after dense concatenation. The experimental results show that rice-related question similarity matching based on Coattention-DenseGRU can improve the utilization of text features, reduce the loss of features, and achieve fast and accurate similarity matching on the rice-related question dataset. The precision and F1 values of the proposed model were 96.3% and 96.9%, respectively. Compared with seven other question similarity matching models, our method achieves a new state of the art on our rice-related question dataset.
(This article belongs to the Special Issue Applications of Deep Learning in Smart Agriculture)
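The Word2vec-with-TF-IDF weighting described above scales word representations by how distinctive each term is in the corpus. The TF-IDF half, with cosine similarity for question matching, can be sketched in plain Python (tokenization and the IDF smoothing scheme are assumptions; the paper combines these weights with word embeddings rather than using them alone):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Sparse TF-IDF weight dicts for a tokenized corpus (smoothed IDF)."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))          # document frequency
    idf = {t: math.log(n / df[t]) + 1 for t in df}
    return [{t: c / len(d) * idf[t] for t, c in Counter(d).items()} for d in docs]

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

# Hypothetical pre-segmented questions (stand-ins for Chinese word segmentation).
docs = [["rice", "blast", "control"], ["rice", "blast", "treatment"], ["soil", "ph"]]
vecs = tfidf_vectors(docs)
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # True
```

Questions sharing distinctive terms score higher, which is the signal the dense GRU layers then refine.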

17 pages, 15044 KiB  
Article
Rapid Detection and Counting of Wheat Ears in the Field Using YOLOv4 with Attention Module
Agronomy 2021, 11(6), 1202; https://doi.org/10.3390/agronomy11061202 - 12 Jun 2021
Cited by 66 | Viewed by 5996
Abstract
The detection and counting of wheat ears are very important for crop field management, yield estimation, and phenotypic analysis. Previous studies showed that most wheat ear detection methods relied on shallow features such as color and texture extracted by machine learning methods, and these obtained good results. However, because such features lack robustness, those methods struggled to detect and count wheat ears in natural scenes. Other studies showed that convolutional neural network (CNN) methods could achieve wheat ear detection and counting, but the adhesion and occlusion of wheat ears limit detection accuracy. Therefore, to improve the accuracy of wheat ear detection and counting in the field, an improved YOLOv4 (you only look once v4) with a CBAM (convolutional block attention module) comprising spatial and channel attention was proposed, which enhances the feature extraction capability of the network by adding receptive field modules. In addition, to improve the generalization ability of the model, not only local wheat data (WD) but also two public datasets (WEDD and GWHDD) were used to construct the training, validation, and test sets. The results showed that the model could effectively overcome noise in the field environment and accurately detect and count wheat ears with different density distributions. The average detection accuracy was 94%, 96.04%, and 93.11% on the three datasets, respectively. Moreover, wheat ears were counted on 60 wheat images, yielding R2 = 0.8968 for WD, 0.955 for WEDD, and 0.9884 for GWHDD. In short, the CBAM-YOLOv4 model meets the practical requirements of wheat ear detection and counting and provides technical support for extracting other high-throughput crop parameters. Full article
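The CBAM structure named in the abstract (channel attention followed by spatial attention) can be sketched in NumPy. This is a simplified, untrained illustration: the paper's 7×7 convolution in the spatial branch is replaced here by a per-pixel weighted sum of the average and max maps, and all weights are random placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    # x: (C, H, W). A shared two-layer MLP is applied to the
    # average-pooled and max-pooled channel descriptors.
    avg = x.mean(axis=(1, 2))                 # (C,)
    mx = x.max(axis=(1, 2))                   # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # ReLU hidden layer
    scale = sigmoid(mlp(avg) + mlp(mx))       # per-channel weights in (0, 1)
    return x * scale[:, None, None]

def spatial_attention(x, kernel):
    # Channel-wise average and max maps, combined by a simplified
    # per-pixel weighting (stand-in for the 7x7 conv of the paper).
    avg = x.mean(axis=0)                      # (H, W)
    mx = x.max(axis=0)                        # (H, W)
    scale = sigmoid(kernel[0] * avg + kernel[1] * mx)
    return x * scale[None, :, :]

def cbam(x, w1, w2, kernel):
    # CBAM order: channel attention first, then spatial attention.
    return spatial_attention(channel_attention(x, w1, w2), kernel)

# Illustrative random feature map and weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 6, 6))               # (channels, H, W)
w1 = rng.normal(size=(2, 4))                 # MLP reduction layer
w2 = rng.normal(size=(4, 2))                 # MLP expansion layer
kernel = np.array([0.5, 0.5])                # weights for [avg, max] maps
y = cbam(x, w1, w2, kernel)
```

Because both attention scales lie in (0, 1), the module can only rescale (never amplify) activations, which is the gating behavior that lets the detector emphasize ear regions.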
16 pages, 4786 KiB  
Article
Assessing the Sentinel-2 Capabilities to Identify Abandoned Crops Using Deep Learning
Agronomy 2021, 11(4), 654; https://doi.org/10.3390/agronomy11040654 - 30 Mar 2021
Cited by 14 | Viewed by 2464
Abstract
The termination or interruption of agro-forestry practices for a long period gradually results in abandoned land. Abandoned land parcels do not meet the requirements to access the basic payment of the European Common Agricultural Policy (CAP). Therefore, identifying those parcels is key to allocating subsidies to farmers fairly. In this context, the present work proposes a methodology to detect abandoned crops in the Valencian Community (Spain) from remote sensing data. The approach is based on the assessment of multitemporal Sentinel-2 images and derived spectral indices, which are used as predictors for training machine learning and deep learning classifiers. Several classification scenarios, including both abandoned and active parcels, were evaluated. The best results (98.2% overall accuracy) were obtained when a bi-directional Long Short-Term Memory (BiLSTM) network was trained on a multitemporal dataset composed of twelve reflectance time series and a derived bare soil index (BSI). In this scenario we were able to effectively distinguish abandoned crops from active ones. The results revealed that Sentinel-2 features are well suited for land-use identification, including abandoned land, and open up the possibility of incorporating this type of remote-sensing-based methodology into the supervision of CAP payments. Full article
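The bare soil index (BSI) used as a predictor above can be computed per pixel from Sentinel-2 surface reflectances. One common formulation, assumed here (the paper's exact band choice may differ), uses blue (B2), red (B4), NIR (B8), and SWIR1 (B11):

```python
import numpy as np

def bare_soil_index(b2, b4, b8, b11):
    # BSI = ((SWIR1 + Red) - (NIR + Blue)) / ((SWIR1 + Red) + (NIR + Blue))
    # High NIR (vegetation) pushes BSI down; high SWIR/red (bare soil) pushes it up.
    num = (b11 + b4) - (b8 + b2)
    den = (b11 + b4) + (b8 + b2)
    return np.where(den == 0, 0.0, num / den)

# Illustrative reflectances: one bare-soil-like pixel, one vegetated pixel.
bare = bare_soil_index(np.array([0.10]), np.array([0.30]),
                       np.array([0.25]), np.array([0.40]))
veg = bare_soil_index(np.array([0.05]), np.array([0.06]),
                      np.array([0.45]), np.array([0.15]))
```

For the BiLSTM, one BSI value per acquisition date would be stacked with the band reflectances into the per-parcel time series.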

17 pages, 5694 KiB  
Article
Identifying Cotton Fields from Remote Sensing Images Using Multiple Deep Learning Networks
Agronomy 2021, 11(1), 174; https://doi.org/10.3390/agronomy11010174 - 18 Jan 2021
Cited by 17 | Viewed by 3047
Abstract
Remote sensing imagery processed through empirical and deterministic approaches helps predict multiple agronomic traits throughout the growing season. Accurate identification of cotton crops from remotely sensed imagery is a significant task in precision agriculture. This study uses a deep-learning-based framework to identify cotton fields from Gaofen-1 (GF-1) high-resolution (16 m) imagery of the Wei-Ku region, China. An optimized pixel-wise, multidimensional, densely connected convolutional neural network (DenseNet) was used. Four widely used classic convolutional neural networks (CNNs), ResNet, VGG, SegNet, and DeepLab v3+, served as baselines for accuracy assessment. The results indicate that DenseNet can identify cotton crop features within a relatively short time, about 5 h to training convergence. Model performance was examined with multiple indicators (P, F1, R, and mIoU) derived from the confusion matrix, and the identified cotton fields were then visualized. The DenseNet model shows considerable improvement over the preceding mainstream models: retrieval precision was 0.948, the F1 score was 0.953, and mIoU was 0.911. Furthermore, it better discriminates the fine structure of cotton fields in the presence of clouds, mountain shadows, and urban built-up areas. Full article
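The indicators named above (P, R, F1, and mIoU) are standard derivations from a confusion matrix. A generic sketch, not the authors' code, assuming rows are true classes and columns are predicted classes:

```python
import numpy as np

def segmentation_metrics(cm):
    # cm[i, j]: number of pixels of true class i predicted as class j.
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp                      # predicted as class, but wrong
    fn = cm.sum(axis=1) - tp                      # missed pixels of the class
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    iou = tp / np.maximum(tp + fp + fn, 1)        # per-class intersection over union
    return precision, recall, f1, float(iou.mean())

# Illustrative 2-class confusion matrix (e.g., cotton vs. non-cotton).
cm = np.array([[90, 10],
               [5, 95]])
p, r, f1, miou = segmentation_metrics(cm)
```

The per-class IoU values are averaged to give mIoU, which is why it is typically lower than precision or F1 on the same confusion matrix.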
