Deep Learning and Remote Sensing for Agriculture

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing in Agriculture and Vegetation".

Deadline for manuscript submissions: closed (30 November 2020) | Viewed by 69998

Special Issue Editor


Dr. Adel Hafiane
Guest Editor
INSA Centre Val de Loire, PRISME, EA 4229, F-18020 Bourges, France
Interests: machine learning; computer vision; image processing; pattern recognition; remote sensing; applications in agriculture

Special Issue Information

Dear Colleagues,

Today, agriculture faces major challenges: increasing productivity and quality while reducing environmental impact. Meeting these challenges through advances in science and technology has become the goal of many research teams worldwide. Sustainable crop production depends on innovation in many fields, such as agronomy, sensors, data science, robotics, and biotechnology. The last two decades have seen a growing trend towards the application of remote sensing technologies to agriculture. Indeed, modern remote sensing offers unprecedented possibilities for acquiring land images in an easy, flexible, and fast manner, making it possible to obtain valuable information on crop conditions. One major issue that has dominated the field of remote sensing for many years concerns automatic data processing, modeling, and analysis. At the same time, deep learning continues to show impressive performance in several areas, and existing research recognizes the essential role this approach plays in solving many hard problems related to data modeling, interpretation, and classification. The aim of this Special Issue is to disseminate the latest research findings on deep learning methods for crop monitoring using remote sensing. Topics include, but are not limited to, crop classification, weed detection, disease detection, yield estimation, and plant counting.

Dr. Adel Hafiane
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • precision agriculture
  • environment
  • aerial imaging
  • signal processing
  • image processing
  • machine learning
  • plant classification
  • crop monitoring
  • field mapping

Published Papers (12 papers)


Research

20 pages, 11327 KiB  
Article
Field Geometry and the Spatial and Temporal Generalization of Crop Classification Algorithms—A Randomized Approach to Compare Pixel Based and Convolution Based Methods
by Mario Gilcher and Thomas Udelhoven
Remote Sens. 2021, 13(4), 775; https://doi.org/10.3390/rs13040775 - 20 Feb 2021
Cited by 7 | Viewed by 2733
Abstract
With the ongoing trend towards deep learning in the remote sensing community, classical pixel-based algorithms are often outperformed by convolution-based image segmentation algorithms. This performance has mostly been validated spatially, by splitting training and validation pixels for a given year. Though generalizing models temporally is potentially more difficult, it has been a recent trend to transfer models from one year to another, and therefore to validate temporally. The study argues that it is important to check both, in order to generate models that are useful beyond the scope of the training data. It shows that convolutional neural networks have the potential to generalize better than pixel-based models, since they do not rely on phenological development alone, but can also consider object geometry and texture. The UNET classifier achieved the highest F1 scores, averaging 0.61 on temporal validation samples and 0.77 on spatial validation samples. The theoretical risk of overfitting geometry, i.e., simply memorizing the shapes of maize fields, was shown to be insignificant in practical applications. In conclusion, kernel-based convolutions can make a large contribution towards more transferable agricultural classification models, both to other regions and to other years.
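As a rough illustration of the dual validation scheme discussed above, the sketch below contrasts a spatial hold-out (whole fields) with a temporal hold-out (whole years) for a generic pixel-based classifier. All data, field IDs, and years are synthetic stand-ins, and a random forest replaces the paper's actual models.

```python
# Sketch: comparing spatial vs. temporal validation of a pixel-based
# crop classifier. Features, field IDs, and years are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 10))             # per-pixel band/time features
y = rng.integers(0, 2, size=n)           # 1 = maize, 0 = other
field_id = rng.integers(0, 100, size=n)  # pixels grouped into fields
year = rng.choice([2018, 2019], size=n)  # acquisition year per pixel

def fit_and_score(train_mask, test_mask):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_mask], y[train_mask])
    return f1_score(y[test_mask], clf.predict(X[test_mask]))

# Spatial validation: hold out whole fields within the same data pool.
spatial_test = field_id >= 80
print("spatial F1:", fit_and_score(~spatial_test, spatial_test))

# Temporal validation: train on one year, test on the other.
temporal_test = year == 2019
print("temporal F1:", fit_and_score(~temporal_test, temporal_test))
```

A model that only looks good under one of the two splits is unlikely to transfer beyond its training data, which is the paper's central point.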

20 pages, 7009 KiB  
Article
Estimating Pasture Biomass Using Sentinel-2 Imagery and Machine Learning
by Yun Chen, Juan Guerschman, Yuri Shendryk, Dave Henry and Matthew Tom Harrison
Remote Sens. 2021, 13(4), 603; https://doi.org/10.3390/rs13040603 - 8 Feb 2021
Cited by 53 | Viewed by 7344
Abstract
Effective dairy farm management requires the regular estimation and prediction of pasture biomass. This study explored the suitability of high spatio-temporal resolution Sentinel-2 imagery and the applicability of advanced machine learning techniques for estimating aboveground biomass at the paddock level on five dairy farms across northern Tasmania, Australia. A sequential neural network model was developed by integrating Sentinel-2 time-series data, weekly field biomass observations, and daily climate variables from 2017 to 2018. Linear least-squares regression was employed to evaluate the results for model calibration and validation. Optimal model performance was realised with an R2 of ≈0.6, a root-mean-square error (RMSE) of ≈356 kg dry matter (DM)/ha, and a mean absolute error (MAE) of 262 kg DM/ha. These performance markers indicated that the results were within the variability of the pasture biomass measured in the field, and therefore represent a relatively high prediction accuracy. Sensitivity analysis further revealed the impact that each farm's in situ measurements, pasture management, and grazing practices have on the model's predictions. The study demonstrated the potential and feasibility of estimating biomass more cheaply and rapidly than with traditional field measurement and commonly used remote-sensing methods. The proposed approach will help farmers and policymakers estimate the amount of pasture present, optimising grazing management and improving decision-making in dairy farming.
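A minimal sketch of the kind of sequential network described above, regressing paddock biomass from combined spectral and climate features. The feature count, layer sizes, and synthetic data are assumptions, not the authors' configuration.

```python
# Sketch: a small sequential network regressing pasture biomass (kg DM/ha)
# from Sentinel-2 band reflectances plus daily climate covariates.
import numpy as np
import tensorflow as tf

n_samples, n_features = 1000, 14   # e.g., 10 S2 bands + 4 climate variables
X = np.random.rand(n_samples, n_features).astype("float32")
y = (1000 + 2000 * np.random.rand(n_samples)).astype("float32")  # biomass

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted biomass in kg DM/ha
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError(),
                       tf.keras.metrics.MeanAbsoluteError()])
model.fit(X, y, validation_split=0.2, epochs=5, verbose=0)
```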

19 pages, 4657 KiB  
Article
Feasibility of Combining Deep Learning and RGB Images Obtained by Unmanned Aerial Vehicle for Leaf Area Index Estimation in Rice
by Tomoaki Yamaguchi, Yukie Tanaka, Yuto Imachi, Megumi Yamashita and Keisuke Katsura
Remote Sens. 2021, 13(1), 84; https://doi.org/10.3390/rs13010084 - 29 Dec 2020
Cited by 45 | Viewed by 6616
Abstract
Leaf area index (LAI) is a vital parameter for predicting rice yield. Unmanned aerial vehicle (UAV) surveillance with an RGB camera has been shown to have potential as a low-cost and efficient tool for monitoring crop growth. Simultaneously, deep learning (DL) algorithms have attracted attention as a promising tool for image recognition. The principal aim of this research was to evaluate the feasibility of combining DL and RGB images obtained by a UAV for rice LAI estimation. In the present study, an LAI estimation model developed by DL with RGB images was compared to three other practical methods: a plant canopy analyzer (PCA); regression models based on color indices (CIs) obtained from an RGB camera; and vegetation indices (VIs) obtained from a multispectral camera. The results showed that the estimation accuracy of the model developed by DL with RGB images (R2 = 0.963 and RMSE = 0.334) was higher than those of the PCA (R2 = 0.934 and RMSE = 0.555) and the regression models based on CIs (R2 = 0.802–0.947 and RMSE = 0.401–1.13), and comparable to that of the regression models based on VIs (R2 = 0.917–0.976 and RMSE = 0.332–0.644). Therefore, our results demonstrated that an estimation model using DL with an RGB camera on a UAV could be an alternative to the methods using a PCA or a multispectral camera for rice LAI estimation.
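The CI-based baseline lends itself to a short sketch: compute color indices from plot-mean RGB values and fit a linear regression against LAI. The index definitions below (excess green, green chromatic coordinate) are common choices and may differ from the paper's exact set; the data are synthetic.

```python
# Sketch: LAI estimation from RGB color indices via linear regression,
# the kind of baseline the DL model was compared against.
import numpy as np
from sklearn.linear_model import LinearRegression

def color_indices(rgb):
    """rgb: (n, 3) array of mean R, G, B per plot, scaled to [0, 1]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    total = r + g + b + 1e-9
    exg = 2 * g / total - r / total - b / total  # excess green
    gcc = g / total                              # green chromatic coordinate
    return np.column_stack([exg, gcc])

rgb = np.random.rand(120, 3)                      # hypothetical plot means
lai = 6 * rgb[:, 1] + 0.3 * np.random.randn(120)  # synthetic LAI truth

model = LinearRegression().fit(color_indices(rgb), lai)
pred = model.predict(color_indices(rgb))
rmse = np.sqrt(np.mean((pred - lai) ** 2))
print(f"R2 = {model.score(color_indices(rgb), lai):.3f}, RMSE = {rmse:.3f}")
```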

22 pages, 26754 KiB  
Article
ATSS Deep Learning-Based Approach to Detect Apple Fruits
by Leonardo Josoé Biffi, Edson Mitishita, Veraldo Liesenberg, Anderson Aparecido dos Santos, Diogo Nunes Gonçalves, Nayara Vasconcelos Estrabis, Jonathan de Andrade Silva, Lucas Prado Osco, Ana Paula Marques Ramos, Jorge Antonio Silva Centeno, Marcos Benedito Schimalski, Leo Rufato, Sílvio Luís Rafaeli Neto, José Marcato Junior and Wesley Nunes Gonçalves
Remote Sens. 2021, 13(1), 54; https://doi.org/10.3390/rs13010054 - 25 Dec 2020
Cited by 42 | Viewed by 6191
Abstract
In recent years, many agriculture-related problems have been addressed by integrating artificial intelligence techniques and remote sensing systems. Specifically, for fruit detection, several recent works have applied Deep Learning (DL) methods to images acquired at different acquisition levels. However, the increasing use of anti-hail plastic net covers in commercial orchards highlights the importance of terrestrial remote sensing systems. Apples are among the most challenging fruits to detect in images, mainly because of target occlusion. Additionally, the introduction of high-density apple tree orchards makes the identification of single fruits a real challenge. To help farmers detect apple fruits efficiently, this paper presents an approach based on the Adaptive Training Sample Selection (ATSS) deep learning method applied to close-range, low-cost terrestrial RGB images. Correct identification supports apple production forecasting and gives local producers a better idea of forthcoming management practices. The main advantage of the ATSS method is that only the center point of each object is labeled, which is much more practicable and realistic than bounding-box annotation in heavily dense fruit orchards. Additionally, we evaluated other object detection methods: RetinaNet, Libra Regions with Convolutional Neural Network (R-CNN), Cascade R-CNN, Faster R-CNN, Feature Selective Anchor-Free (FSAF), and High-Resolution Network (HRNet). The study area is a highly dense apple orchard of Fuji Suprema apple fruits (Malus domestica Borkh) on a smallholder farm in the state of Santa Catarina (southern Brazil). A total of 398 terrestrial images were taken nearly perpendicularly in front of the trees with a professional camera, ensuring both good vertical coverage of the apple trees and overlap between picture frames. Afterwards, the high-resolution RGB images were divided into several patches to help the detection of small and/or occluded apples. A total of 3119, 840, and 2010 patches were used for training, validation, and testing, respectively. Moreover, the proposed method's generalization capability was assessed by applying simulated image corruptions to the test set images with different severity levels, including noise, blurs, weather, and digital processing. Experiments were also conducted by varying the bounding box size (80, 100, 120, 140, 160, and 180 pixels) in the original image. Our results showed that the ATSS-based method slightly outperformed all other deep learning methods, by between 0.3% and 2.4%. We also verified that the best result was obtained with a bounding box size of 160 × 160 pixels. The proposed method was robust to most of the corruptions, except for the snow, frost, and fog weather conditions. Finally, a benchmark of the reported dataset was also generated and made publicly available.
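A sketch of the center-point-to-box step implied above: given only center annotations, square boxes of a chosen size are generated and clipped to the image, scanning the sizes tested in the paper. Image dimensions and coordinates are hypothetical.

```python
# Sketch: deriving fixed-size square boxes from center-point apple labels,
# since the ATSS pipeline here requires only center annotations.
import numpy as np

def centers_to_boxes(centers, box_size, img_w, img_h):
    """centers: (n, 2) array of (x, y) pixel coordinates."""
    half = box_size / 2
    return np.column_stack([
        np.clip(centers[:, 0] - half, 0, img_w),  # x_min
        np.clip(centers[:, 1] - half, 0, img_h),  # y_min
        np.clip(centers[:, 0] + half, 0, img_w),  # x_max
        np.clip(centers[:, 1] + half, 0, img_h),  # y_max
    ])

centers = np.array([[120.0, 340.0], [45.0, 60.0]])
for size in (80, 100, 120, 140, 160, 180):  # sizes scanned in the study
    print(size, centers_to_boxes(centers, size, img_w=1024, img_h=768)[0])
```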

20 pages, 2722 KiB  
Article
A Gated Recurrent Units (GRU)-Based Model for Early Detection of Soybean Sudden Death Syndrome through Time-Series Satellite Imagery
by Luning Bi, Guiping Hu, Muhammad Mohsin Raza, Yuba Kandel, Leonor Leandro and Daren Mueller
Remote Sens. 2020, 12(21), 3621; https://doi.org/10.3390/rs12213621 - 4 Nov 2020
Cited by 20 | Viewed by 3606
Abstract
In general, early detection and timely management of plant diseases are essential for reducing yield loss. Traditional manual inspection of fields is often time-consuming and laborious. Automated imaging techniques have recently been successfully applied to detect plant diseases, but these methods mostly focus on the current state of the crop. This paper proposes a gated recurrent unit (GRU)-based model to predict soybean sudden death syndrome (SDS) disease development. To detect SDS at the quadrat level, the proposed method uses satellite images collected from PlanetScope as the training set. The pixel image data include the spectral bands of red, green, blue, and near-infrared (NIR). Data collected during the 2016 and 2017 soybean-growing seasons were analyzed. Instead of using individual static imagery, the GRU-based model converts the original imagery into time-series data. SDS predictions were made for different data scenarios and the results were compared with fully connected deep neural network (FCDNN) and XGBoost methods. The overall test accuracy of classifying healthy and diseased quadrats was above 76% for all methods. The test accuracies of the FCDNN and XGBoost were 76.3–85.5% and 80.6–89.2%, respectively, while the test accuracy of the GRU-based model was 82.5–90.4%. The results show that the proposed method can improve detection accuracy by up to 7% with time-series imagery, and thus has the potential to predict SDS at a future time.
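A minimal sketch of a GRU classifier over quadrat-level band time series, in the spirit of the model above; the sequence length, hidden size, and framework choice are illustrative assumptions.

```python
# Sketch: GRU over per-quadrat time series of the four PlanetScope
# bands (R, G, B, NIR), classifying healthy vs. diseased.
import torch
import torch.nn as nn

class SDSClassifier(nn.Module):
    def __init__(self, n_bands=4, hidden=32, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(n_bands, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, timesteps, bands)
        _, h = self.gru(x)              # h: (1, batch, hidden)
        return self.head(h.squeeze(0))  # logits: healthy vs. diseased

model = SDSClassifier()
x = torch.randn(8, 12, 4)               # 8 quadrats, 12 acquisition dates
print(model(x).shape)                    # torch.Size([8, 2])
```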

18 pages, 5059 KiB  
Article
Crop Mapping from Sentinel-1 Polarimetric Time-Series with a Deep Neural Network
by Yang Qu, Wenzhi Zhao, Zhanliang Yuan and Jiage Chen
Remote Sens. 2020, 12(15), 2493; https://doi.org/10.3390/rs12152493 - 3 Aug 2020
Cited by 27 | Viewed by 5772
Abstract
Timely and accurate agricultural information is essential for food security assessment and agricultural management. Synthetic aperture radar (SAR) systems are increasingly used in crop mapping, as they provide all-weather imagery. In particular, the Sentinel-1 sensor provides dense time-series data, offering a unique opportunity for crop mapping. However, in most studies the Sentinel-1 complex backscatter coefficient has been used directly, which limits the potential of Sentinel-1 for crop mapping. Meanwhile, most existing methods are not tailored to the task of crop classification with time-series polarimetric SAR data. To solve these problems, we present a novel deep learning strategy in this research. Specifically, we collected Sentinel-1 time-series data over two study areas. The Sentinel-1 image covariance matrix is used as input to maintain the integrity of the polarimetric information. Then, a depthwise separable convolution recurrent neural network (DSCRNN) architecture is proposed to characterize crop types from multiple perspectives and achieve better classification results. The experimental results indicate that the proposed method achieves better accuracy in complex agricultural areas than other classical methods. Additionally, the variable importance provided by a random forest (RF) illustrated that the covariance vector has a far greater influence than the backscatter coefficient. Consequently, the strategy proposed in this research is effective and promising for crop mapping.
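The depthwise-separable-convolution-plus-recurrence idea can be sketched as below; channel counts, kernel sizes, and the use of a GRU are illustrative assumptions rather than the DSCRNN's exact design.

```python
# Sketch: depthwise separable 1D convolution over the time axis, one
# filter per covariance-matrix channel, followed by a recurrent layer.
import torch
import torch.nn as nn

class DSConvRNN(nn.Module):
    def __init__(self, channels=4, hidden=32, n_classes=5):
        super().__init__()
        # depthwise: one filter per input channel; pointwise: 1x1 mixing
        self.depthwise = nn.Conv1d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels)
        self.pointwise = nn.Conv1d(channels, 16, kernel_size=1)
        self.rnn = nn.GRU(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, channels, timesteps)
        z = torch.relu(self.pointwise(self.depthwise(x)))
        _, h = self.rnn(z.transpose(1, 2))
        return self.head(h.squeeze(0))

x = torch.randn(8, 4, 20)                # e.g., 4 covariance terms, 20 dates
print(DSConvRNN()(x).shape)               # torch.Size([8, 5])
```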

17 pages, 2574 KiB  
Article
Synergistic Use of Multi-Temporal RADARSAT-2 and VENµS Data for Crop Classification Based on 1D Convolutional Neural Network
by Chunhua Liao, Jinfei Wang, Qinghua Xie, Ayman Al Baz, Xiaodong Huang, Jiali Shang and Yongjun He
Remote Sens. 2020, 12(5), 832; https://doi.org/10.3390/rs12050832 - 4 Mar 2020
Cited by 42 | Viewed by 5002
Abstract
Annual crop inventory information is important for many agricultural applications and government statistics. The synergistic use of multi-temporal polarimetric synthetic aperture radar (SAR) and available multispectral remote sensing data can reduce temporal gaps and provide both the spectral and polarimetric information of the crops, which is effective for crop classification in areas with frequent cloud interference. The main objectives of this study were to develop a deep learning model to map agricultural areas using multi-temporal fully polarimetric SAR and multispectral remote sensing data, and to evaluate the influence of different input features on the performance of deep learning methods in crop classification. In this study, a one-dimensional convolutional neural network (Conv1D) was proposed and tested on multi-temporal RADARSAT-2 and VENµS data for crop classification. Compared with the Multi-Layer Perceptron (MLP), Recurrent Neural Network (RNN), and non-deep learning methods including XGBoost, Random Forest (RF), and Support Vector Machine (SVM), the Conv1D performed best when the multi-temporal RADARSAT-2 data (Pauli decomposition or coherency matrix) and VENµS multispectral data were fused by the Minimum Noise Fraction (MNF) transformation. The Pauli decomposition and coherency matrix gave similar overall accuracy (OA) for Conv1D when fused with the VENµS data by the MNF transformation (OA = 96.65 ± 1.03% and 96.72 ± 0.77%, respectively). The MNF transformation improved the OA and F-score for most classes when Conv1D was used. The results reveal that the coherency matrix has great potential for crop classification, and that the MNF transformation of multi-temporal RADARSAT-2 and VENµS data can enhance the performance of Conv1D.
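A sketch of the fuse-then-classify pipeline: here PCA stands in for the MNF transformation (both are linear component transforms, though MNF orders components by signal-to-noise rather than variance), followed by a small Conv1D classifier. All shapes, feature counts, and the class count are assumptions.

```python
# Sketch: fuse per-date SAR + optical features with a component
# transform, then classify the resulting time series with a 1D CNN.
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA

n, timesteps, feats = 500, 10, 12  # samples, dates, per-date features
X = np.random.rand(n, timesteps, feats)
y = np.random.randint(0, 6, size=n)  # six hypothetical crop classes

# PCA as a stand-in for MNF: fit across all (sample, date) feature vectors.
comps = PCA(n_components=4).fit_transform(X.reshape(-1, feats))
X_seq = comps.reshape(n, timesteps, 4).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, activation="relu",
                           input_shape=(timesteps, 4)),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_seq, y, epochs=3, verbose=0)
```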

25 pages, 7915 KiB  
Article
Improved Winter Wheat Spatial Distribution Extraction Using a Convolutional Neural Network and Partly Connected Conditional Random Field
by Shouyi Wang, Zhigang Xu, Chengming Zhang, Jinghan Zhang, Zhongshan Mu, Tianyu Zhao, Yuanyuan Wang, Shuai Gao, Hao Yin and Ziyun Zhang
Remote Sens. 2020, 12(5), 821; https://doi.org/10.3390/rs12050821 - 3 Mar 2020
Cited by 8 | Viewed by 3042
Abstract
Improving the accuracy of edge-pixel classification is crucial for extracting the winter wheat spatial distribution from remote sensing imagery using convolutional neural networks (CNNs). In this study, we proposed an approach using a partly connected conditional random field model (PCCRF) to refine the classification results of RefineNet, named RefineNet-PCCRF. First, we used an improved RefineNet model to initially segment remote sensing images, obtaining the category probability vector for each pixel and an initial pixel-by-pixel classification result. Second, using manual labels as references, we performed a statistical analysis on the results to select the pixels that required optimization. Third, based on prior knowledge, we redefined the pairwise potential energy, used a linear model to connect different levels of potential energies, and used only pixel pairs associated with the selected pixels to build the PCCRF. The trained PCCRF was then used to refine the initial pixel-by-pixel classification result. We used 37 Gaofen-2 images obtained from 2018 to 2019 of a representative Chinese winter wheat region (Tai'an City, China) to create the dataset, employed SegNet and RefineNet as the baseline CNNs and a fully connected conditional random field as the comparison refinement method, and conducted comparison experiments. The RefineNet-PCCRF's accuracy (94.51%), precision (92.39%), recall (90.98%), and F1-score (91.68%) were clearly superior to those of the comparison methods. The results also show that RefineNet-PCCRF improved the accuracy of large-scale winter wheat extraction from remote sensing imagery.
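One piece of the pipeline above that is easy to sketch is the selection of pixels requiring optimization: pick pixels where the CNN's top-two class probabilities are close, which typically happens along field edges. The margin criterion and threshold are illustrative assumptions, not the paper's statistical analysis.

```python
# Sketch: selecting low-confidence pixels for CRF refinement using the
# per-pixel class-probability margin (top-1 minus top-2 probability).
import numpy as np

def pixels_to_refine(prob_map, margin_threshold=0.2):
    """prob_map: (H, W, n_classes) softmax output of the CNN."""
    sorted_p = np.sort(prob_map, axis=-1)
    margin = sorted_p[..., -1] - sorted_p[..., -2]
    return margin < margin_threshold  # True where the CNN is uncertain

prob_map = np.random.dirichlet(np.ones(2), size=(64, 64))  # toy softmax map
mask = pixels_to_refine(prob_map)
print(f"{mask.mean():.1%} of pixels selected for CRF refinement")
```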

Other

17 pages, 4743 KiB  
Technical Note
Predicting Plant Growth from Time-Series Data Using Deep Learning
by Robail Yasrab, Jincheng Zhang, Polina Smyth and Michael P. Pound
Remote Sens. 2021, 13(3), 331; https://doi.org/10.3390/rs13030331 - 20 Jan 2021
Cited by 42 | Viewed by 11936
Abstract
Phenotyping involves the quantitative assessment of anatomical, biochemical, and physiological plant traits. Natural plant growth cycles can be extremely slow, hindering the experimental processes of phenotyping. Deep learning offers a great deal of support for automating and addressing key plant phenotyping research issues. Machine learning-based high-throughput phenotyping is a potential solution to the phenotyping bottleneck, promising to accelerate the experimental cycles within phenomic research. This research presents a study of deep networks' potential to predict plants' expected growth by generating segmentation masks of root and shoot systems into the future. We adapt an existing generative adversarial predictive network to this new domain. The result is an efficient plant leaf and root segmentation network that provides predictive segmentation of what a leaf and root system will look like at a future time, based on time-series data of plant growth. We present benchmark results on two public datasets of Arabidopsis (A. thaliana) and Brassica rapa (Komatsuna) plants. The experimental results show strong performance and the capability of the proposed methods to match expert annotation. The proposed method is highly adaptable and trainable (via transfer learning/domain adaptation) on different plant species and mutations.
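A sketch of future-mask prediction from a mask history, using a ConvLSTM as a simple non-adversarial stand-in for the paper's generative predictive network; all shapes and the synthetic masks are illustrative.

```python
# Sketch: predict the next segmentation mask from a short history of
# masks, a minimal analogue of predictive growth segmentation.
import numpy as np
import tensorflow as tf

frames, h, w = 5, 64, 64
X = np.random.rand(16, frames, h, w, 1).astype("float32")          # past masks
y = np.random.randint(0, 2, size=(16, h, w, 1)).astype("float32")  # next mask

model = tf.keras.Sequential([
    tf.keras.layers.ConvLSTM2D(16, 3, padding="same",
                               input_shape=(frames, h, w, 1)),
    tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),  # per-pixel mask
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=2, verbose=0)
```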

13 pages, 6902 KiB  
Letter
Application of Deep Learning Architectures for Accurate Detection of Olive Tree Flowering Phenophase
by Mario Milicevic, Krunoslav Zubrinic, Ivan Grbavac and Ines Obradovic
Remote Sens. 2020, 12(13), 2120; https://doi.org/10.3390/rs12132120 - 2 Jul 2020
Cited by 15 | Viewed by 3311
Abstract
The importance of monitoring and modelling the impact of climate change on crop phenology in a given ecosystem is ever-growing. For example, these procedures are useful when planning various processes that are important for plant protection. In order to proactively monitor the phenological response of the olive (Olea europaea) to changing environmental conditions, it is proposed to monitor olive orchards with moving or stationary cameras, and to apply deep learning algorithms to track the timing of particular phenophases. The experiment conducted for this research showed that barely perceptible transitions between phenophases can be accurately observed and detected, which is a precondition for the effective implementation of integrated pest management (IPM). A number of different architectures and feature extraction approaches were compared. Ultimately, using a custom deep network and a data augmentation technique during the deployment phase resulted in a fivefold cross-validation classification accuracy of 0.9720 ± 0.0057. This leads to the conclusion that a relatively simple custom network can prove to be the best solution for a specific problem, compared to more complex and very deep architectures.
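The "augmentation technique during the deployment phase" can be read as test-time augmentation, sketched below: class probabilities are averaged over flipped and rotated copies of each image. The augmentation set and the toy classifier are assumptions, not the authors' custom network.

```python
# Sketch: test-time augmentation for phenophase classification.
import numpy as np
import tensorflow as tf

def predict_with_tta(model, image):
    """image: (H, W, 3) array; returns averaged class probabilities."""
    variants = [
        image,
        np.fliplr(image),
        np.flipud(image),
        np.rot90(image, k=1),
    ]
    batch = np.stack(variants).astype("float32")
    return model.predict(batch, verbose=0).mean(axis=0)

# Usage with any Keras classifier taking (H, W, 3) inputs:
model = tf.keras.Sequential([
    tf.keras.layers.GlobalAveragePooling2D(input_shape=(128, 128, 3)),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g., 3 phenophases
])
print(predict_with_tta(model, np.random.rand(128, 128, 3)))
```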

1 page, 160 KiB  
Erratum
Erratum: Wang, S., et al. Improved Winter Wheat Spatial Distribution Extraction Using a Convolutional Neural Network and Partly Connected Conditional Random Field. Remote Sensing 2020, 12, 821
by Shouyi Wang, Zhigang Xu, Chengming Zhang, Yuanyuan Wang, Shuai Gao, Hao Yin and Ziyun Zhang
Remote Sens. 2020, 12(10), 1568; https://doi.org/10.3390/rs12101568 - 14 May 2020
Viewed by 1772
Abstract
After re-considering the contributions of Jinghan Zhang, Zhongshan Mu, and Tianyu Zhao, we wish to remove them from the authorship of our paper [...]

14 pages, 16155 KiB  
Technical Note
Automatic Mapping of Center Pivot Irrigation Systems from Satellite Images Using Deep Learning
by Marciano Saraiva, Églen Protas, Moisés Salgado and Carlos Souza
Remote Sens. 2020, 12(3), 558; https://doi.org/10.3390/rs12030558 - 7 Feb 2020
Cited by 47 | Viewed by 9933
Abstract
The availability of freshwater is becoming a global concern. Because agricultural consumption has been increasing steadily, the mapping of irrigated areas is key to supporting the monitoring of land use and better management of available water resources. In this paper, we propose a method to automatically detect and map center pivot irrigation systems using U-Net, an image segmentation convolutional neural network architecture, applied to PlanetScope imagery from the Cerrado biome of Brazil. Our objective is to provide a fast and accurate alternative for mapping center pivot irrigation systems with very high spatial and temporal resolution imagery. We implemented a modified U-Net architecture using the TensorFlow library and trained it on the Google cloud platform with a dataset built from more than 42,000 very high spatial resolution PlanetScope images acquired between August 2017 and November 2018. The U-Net implementation achieved a precision of 99% and a recall of 88% in detecting and mapping center pivot irrigation systems in our study area. The proposed method has the potential to be scaled to larger areas and to improve the monitoring of freshwater use by agricultural activities.
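A minimal U-Net-style encoder-decoder sketch for binary pivot masks, tracking precision and recall as in the evaluation above; it is far shallower than the paper's modified U-Net, and all sizes are illustrative.

```python
# Sketch: tiny U-Net-like model for binary segmentation of center pivots.
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(size=128):
    inp = layers.Input((size, size, 3))
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D()(c2)
    m = layers.Concatenate()([u1, c1])  # skip connection, U-Net style
    c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(m)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c3)  # pivot mask
    return tf.keras.Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.summary()
```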