Special Issue "Image Processing in Agriculture and Forestry"

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (31 December 2016)

Special Issue Editors

Guest Editor
Prof. Dr. Gonzalo Pajares Martinsanz

Department of Software Engineering and Artificial Intelligence, Faculty of Informatics, Complutense University of Madrid, 28040 Madrid, Spain
Phone: +34.1.3947546
Interests: computer vision; image processing; pattern recognition; 3D image reconstruction; spatio-temporal image change detection and movement tracking; fusion and registering from imaging sensors; superresolution from low-resolution image sensors
Guest Editor
Prof. Dr. Francisco Rovira-Más

Agricultural Robotics Laboratory, Polytechnic University of Valencia, 46022 Valencia, Spain
Interests: agricultural robotics and automation; intelligent vehicles; artificial intelligence; machine vision; mechatronics; control systems; autonomous navigation; stereoscopic vision; fluid power; automatic steering; off-road equipment; precision agriculture

Special Issue Information

Dear Colleagues,

Agriculture and forestry are areas where imaging-based systems play an important role. They allow more efficient use of resources while facilitating tasks that are at times difficult or dangerous.

Image acquisition, processing and interpretation are oriented toward the efficiency of agricultural activities.

The following is a list of the main topics covered by this Special Issue. The issue will, however, not be limited to these topics:

  • Image acquisition devices and systems in outdoor environments.
  • Image processing techniques: color, segmentation, texture analysis, image fusion.
  • Computer vision-based approaches: pattern recognition, 3D structures and movement.
  • Applications: autonomous agricultural vehicles, obstacle avoidance, crop row detection, yield estimation and quality, plant health, tree monitoring, crown height, bark thickness, communications.

Prof. Dr. Gonzalo Pajares Martinsanz
Prof. Dr. Francisco Rovira-Más
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) is waived for well-prepared manuscripts submitted to this issue. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (12 papers)


Research


Open Access Article Early Yield Prediction Using Image Analysis of Apple Fruit and Tree Canopy Features with Neural Networks
J. Imaging 2017, 3(1), 6; doi:10.3390/jimaging3010006
Received: 30 September 2016 / Revised: 23 December 2016 / Accepted: 11 January 2017 / Published: 19 January 2017
Cited by 2 | PDF Full-text (2562 KB) | HTML Full-text | XML Full-text
Abstract
(1) Background: Since early yield prediction is relevant for resource requirements of harvesting and marketing in the whole fruit industry, this paper presents a new approach of using image analysis and tree canopy features to predict early yield with artificial neural networks (ANN); (2) Methods: Two back propagation neural network (BPNN) models were developed, one for the early period after natural fruit drop in June and one for the ripening period. Within the same periods, images of apple cv. “Gala” trees were captured from an orchard near Bonn, Germany. Two sample sets were developed to train and test the models; each set included 150 samples from the 2009 and 2010 growing seasons. For each sample (each canopy image), pixels were segmented into fruit, foliage, and background using image segmentation. The four features extracted from the canopy data set (total cross-sectional area of fruits, fruit number, total cross-sectional area of small fruits, and cross-sectional area of foliage) were used as inputs. With the actual weighted yield per tree as the target, the BPNN was employed to learn their mutual relationship as a prerequisite to developing the prediction; (3) Results: For the BPNN model of the early period after June drop, the correlation coefficient (R2) between the estimated and the actual weighted yield, mean forecast error (MFE), mean absolute percentage error (MAPE), and root mean square error (RMSE) were 0.81, −0.05, 10.7%, and 2.34 kg/tree, respectively. For the model of the ripening period, these measures were 0.83, −0.03, 8.9%, and 2.3 kg/tree, respectively. In 2011, the two previously developed models were used to predict apple yield. The RMSE and R2 values between the estimated and harvested apple yield were 2.6 kg/tree and 0.62 for the early period (small, green fruit) and improved near harvest (red, large fruit) to 2.5 kg/tree and 0.75 for a tree with ca. 18 kg yield per tree. For further method verification, cv. “Pinova” apple trees were used as another variety in 2012 to develop the BPNN prediction model for the early period after June drop. The model was used in 2013 and gave results similar to those found with cv. “Gala”; (4) Conclusion: Overall, the results of this research showed that the proposed estimation models performed accurately using canopy and fruit features derived from image analysis algorithms. Full article
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)
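The modelling step this abstract describes, four canopy features in and per-tree yield out via a back-propagation network, can be sketched in a few lines of NumPy. Everything below (the synthetic data, network size, and learning rate) is an illustrative assumption, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the four canopy features per image:
# [total fruit area, fruit count, small-fruit area, foliage area].
X = rng.uniform(0.0, 1.0, size=(150, 4))
# Synthetic per-tree yield loosely tied to the features, then
# standardized so the small network trains reliably.
y = X @ np.array([12.0, 6.0, -2.0, 1.5]) + rng.normal(0.0, 0.5, 150)
y = ((y - y.mean()) / y.std()).reshape(-1, 1)

# One hidden layer of sigmoid units with a linear output (a common
# BPNN shape); sizes and learning rate are illustrative choices.
W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros((1, 1))
lr = 0.1

def forward(X):
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))  # hidden activations
    return h, h @ W2 + b2                     # linear output

for _ in range(3000):
    h, pred = forward(X)
    err = (pred - y) / len(X)            # gradient of 0.5 * MSE
    gW2, gb2 = h.T @ err, err.sum(0, keepdims=True)
    gh = err @ W2.T * h * (1.0 - h)      # back-propagate through sigmoid
    gW1, gb1 = X.T @ gh, gh.sum(0, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt(np.mean((forward(X)[1] - y) ** 2)))
```

With the target standardized, a training RMSE well below the predict-the-mean baseline of 1.0 indicates the network has learned the feature-yield relationship.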

Open Access Article Peach Flower Monitoring Using Aerial Multispectral Imaging
J. Imaging 2017, 3(1), 2; doi:10.3390/jimaging3010002
Received: 25 October 2016 / Revised: 27 December 2016 / Accepted: 29 December 2016 / Published: 6 January 2017
Cited by 1 | PDF Full-text (5608 KB) | HTML Full-text | XML Full-text
Abstract
One of the tools for optimal crop production is regular monitoring and assessment of crops. During the growing season of fruit trees, the bloom period has increased photosynthetic rates that correlate with the fruiting process. This paper presents the development of an image processing algorithm to detect peach blossoms on trees. Aerial images of peach (Prunus persica) trees were acquired from both experimental and commercial peach orchards in the southwestern part of Idaho using an off-the-shelf unmanned aerial system (UAS) equipped with a multispectral camera (near-infrared, green, blue). The image processing algorithm included contrast stretching of the three bands to enhance the image and a thresholding segmentation method to detect the peach blossoms. Initial results showed that the image processing algorithm could detect peach blossoms with an average detection rate of 84.3% and demonstrated good potential as a monitoring tool for orchard management. Full article
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)
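The two stages named in this abstract, contrast stretching followed by threshold segmentation, are simple to sketch. The toy band values and the 80% threshold below are assumptions for illustration, not the study's parameters:

```python
import numpy as np

def stretch(band):
    """Linear contrast stretch of one band to the 0-255 range."""
    lo, hi = float(band.min()), float(band.max())
    return (band - lo) / (hi - lo + 1e-9) * 255.0

# Toy 4x4 NIR band standing in for a UAS frame; the bright values in
# the top-right corner play the role of blossom pixels.
nir = np.array([[50, 60, 200, 210],
                [55, 58, 205, 215],
                [52, 54,  57,  59],
                [51, 53,  56,  58]], dtype=float)

nir_stretched = stretch(nir)
# Blossoms appear bright after stretching; keep pixels above 80% of
# the stretched range as the blossom mask.
blossom_mask = nir_stretched > 0.8 * 255.0
n_blossom_pixels = int(blossom_mask.sum())
```

In the real pipeline this thresholding would be applied after enhancing all three bands, and the mask cleaned with morphological operations before counting blossoms.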

Open Access Article Automated Soil Physical Parameter Assessment Using Smartphone and Digital Camera Imagery
J. Imaging 2016, 2(4), 35; doi:10.3390/jimaging2040035
Received: 29 September 2016 / Revised: 26 November 2016 / Accepted: 7 December 2016 / Published: 13 December 2016
PDF Full-text (2121 KB) | HTML Full-text | XML Full-text
Abstract
Here we present work on using different types of soil profile imagery (topsoil profiles captured with a smartphone camera and full-profile images captured with a conventional digital camera) to estimate the structure, texture and drainage of the soil. The method is adapted from earlier work on developing smartphone apps for estimating topsoil organic matter content in Scotland and uses an existing visual soil structure assessment approach. Colour and image texture information was extracted from the imagery. This information was linked, using geolocation information derived from the smartphone GPS system or from field notes, with existing collections of topography, land cover, soil and climate data for Scotland. A neural network model was developed that was capable of estimating soil structure (on a five-point scale), soil texture (sand, silt, clay), bulk density, pH and drainage category using this information. The model is sufficiently accurate to provide estimates of these parameters from soils in the field. We discuss potential improvements to the approach and plans to integrate the model into a set of smartphone apps for estimating health and fertility indicators for Scottish soils. Full article
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)

Open Access Article 3D Reconstruction of Plant/Tree Canopy Using Monocular and Binocular Vision
J. Imaging 2016, 2(4), 28; doi:10.3390/jimaging2040028
Received: 29 August 2016 / Revised: 16 September 2016 / Accepted: 19 September 2016 / Published: 29 September 2016
Cited by 1 | PDF Full-text (3748 KB) | HTML Full-text | XML Full-text
Abstract
Three-dimensional (3D) reconstruction of a tree canopy is an important step in measuring canopy geometry, such as height, width, volume, and leaf cover area. In this research, binocular stereo vision was used to recover the 3D information of the canopy. Multiple images were taken from different views around the target. The structure-from-motion (SfM) method was employed to recover the camera calibration matrix for each image, and the corresponding 3D coordinates of the feature points were then calculated. Through this method, a sparse projective reconstruction of the target was realized. Subsequently, a ball pivoting algorithm was used for surface modeling to realize a dense reconstruction. Finally, this dense reconstruction was transformed into a metric reconstruction through ground truth points obtained from the calibration of the binocular stereo cameras. Four experiments were completed: one on a box of known geometry, and three on plants: a croton plant with big leaves and salient features, a jalapeno pepper plant with medium leaves, and a lemon tree with small leaves. A whole-view reconstruction of each target was realized. The comparison of the reconstructed box’s size with the real box’s size shows that the 3D reconstruction is metric. Full article
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)
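The last step described, upgrading the up-to-scale reconstruction to a metric one using ground-truth points, reduces to estimating a single scale factor. A minimal sketch with hypothetical point sets (not data from the paper):

```python
import math

def scale_factor(recon_pts, truth_pts):
    """Least-squares scale s mapping reconstructed inter-point
    distances d_r onto measured metric distances d_t:
    s = sum(d_r * d_t) / sum(d_r ** 2)."""
    pairs = [(i, j) for i in range(len(recon_pts))
             for j in range(i + 1, len(recon_pts))]
    d_r = [math.dist(recon_pts[i], recon_pts[j]) for i, j in pairs]
    d_t = [math.dist(truth_pts[i], truth_pts[j]) for i, j in pairs]
    return sum(r * t for r, t in zip(d_r, d_t)) / sum(r * r for r in d_r)

# Toy data: a reconstruction that is exactly half the real size, so
# every reconstructed distance must be scaled by 2 to become metric.
recon = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
truth = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 4.0, 0.0)]
s = scale_factor(recon, truth)  # → 2.0
```

A full metric upgrade would also estimate a rotation and translation (a similarity transform), but the scale is the part that projective-to-metric verification against a known box exercises.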

Open Access Article Estimating Mangrove Biophysical Variables Using WorldView-2 Satellite Data: Rapid Creek, Northern Territory, Australia
J. Imaging 2016, 2(3), 24; doi:10.3390/jimaging2030024
Received: 2 June 2016 / Revised: 2 September 2016 / Accepted: 5 September 2016 / Published: 8 September 2016
Cited by 1 | PDF Full-text (4160 KB) | HTML Full-text | XML Full-text
Abstract
Mangroves are one of the most productive coastal communities in the world. Although the significance of these ecosystems is acknowledged, mangroves are under natural and anthropogenic pressures at various scales; understanding biophysical variations of mangrove forests is therefore important. Extensive field surveys are impractical within mangroves. WorldView-2 multi-spectral images with a 2-m spatial resolution were used to quantify above-ground biomass (AGB) and leaf area index (LAI) in the Rapid Creek mangroves, Darwin, Australia. Field measurements, vegetation indices derived from the WorldView-2 images, and a partial least squares regression algorithm were combined to produce LAI and AGB maps. LAI maps with 2-m and 5-m spatial resolutions showed root mean square errors (RMSEs) of 0.75 and 0.78, respectively, compared to validation samples. Correlation coefficients between field samples and predicted maps were 0.7 and 0.8, respectively. RMSEs obtained for AGB maps were 2.2 kg/m2 and 2.0 kg/m2 at 2-m and 5-m spatial resolutions, and the correlation coefficients were 0.4 and 0.8, respectively. We suggest implementing a transect method for field sampling and establishing the end points of these transects with a highly accurate positioning system. The study demonstrated the possibility of assessing biophysical variations of mangroves using remotely-sensed data. Full article
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)

Open Access Article Viewing Geometry Sensitivity of Commonly Used Vegetation Indices towards the Estimation of Biophysical Variables in Orchards
J. Imaging 2016, 2(2), 15; doi:10.3390/jimaging2020015
Received: 9 February 2016 / Revised: 25 April 2016 / Accepted: 26 April 2016 / Published: 9 May 2016
PDF Full-text (6052 KB) | HTML Full-text | XML Full-text
Abstract
Stress-related biophysical variables of capital intensive orchard crops can be estimated with proxies via spectral vegetation indices from off-nadir viewing satellite imagery. However, variable viewing compositions affect the relationship between spectral vegetation indices and stress-related variables (i.e., chlorophyll content, water content and Leaf Area Index (LAI)) and could obstruct change detection. A sensitivity analysis was performed on the estimation of biophysical variables via vegetation indices for a wide range of viewing geometries. Subsequently, off-nadir viewing satellite imagery of an experimental orchard was analyzed, while all influences of background admixture were minimized through vegetation index normalization. Results indicated significant differences between nadir and off-nadir viewing scenes (∆R2 > 0.4). The Photochemical Reflectance Index (PRI), Normalized Difference Infrared Index (NDII) and Simple Ratio Pigment Index (SRPI) showed increased R2 values for off-nadir scenes taken perpendicular compared to parallel to row orientation. Other indices, such as Normalized Difference Vegetation Index (NDVI), Gitelson and Merzlyak (GM) and Structure Insensitive Pigment Index (SIPI), showed a significant decrease in R2 values from nadir to off-nadir viewing scenes. These results show the necessity of vegetation index selection for variable viewing applications to obtain an optimal derivation of biophysical variables in all circumstances. Full article
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)
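The indices compared in this abstract are all normalized band ratios; for reference, three of them can be written out directly. The reflectance values used below are illustrative, not measurements from the study:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def ndii(nir, swir):
    """Normalized Difference Infrared Index: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir)

def pri(r531, r570):
    """Photochemical Reflectance Index from the 531 nm and 570 nm bands."""
    return (r531 - r570) / (r531 + r570)

# A healthy canopy is bright in the NIR and dark in the red, so NDVI
# approaches 1; bare soil reflects both bands similarly.
healthy = ndvi(nir=0.45, red=0.05)  # → 0.8
soil = ndvi(nir=0.25, red=0.20)
```

Because each index is a different combination of bands, off-nadir viewing changes the background and shadow fractions each one sees, which is exactly the sensitivity the paper quantifies.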

Open Access Article Using Deep Learning to Challenge Safety Standard for Highly Autonomous Machines in Agriculture
J. Imaging 2016, 2(1), 6; doi:10.3390/jimaging2010006
Received: 18 December 2015 / Revised: 29 January 2016 / Accepted: 2 February 2016 / Published: 15 February 2016
Cited by 4 | PDF Full-text (13912 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, an algorithm for obstacle detection in agricultural fields is presented. The algorithm is based on an existing deep convolutional neural net, which is fine-tuned for detection of a specific obstacle. In ISO/DIS 18497, an emerging standard for safety of highly automated machinery in agriculture, a barrel-shaped obstacle is defined as the obstacle which should be robustly detected to comply with the standard. We show that our fine-tuned deep convolutional net is capable of detecting this obstacle with a precision of 99.9% in row crops and 90.8% in grass mowing, while simultaneously not detecting people and other very distinct obstacles in the image frame. As such, this short note argues that the obstacle defined in the emerging standard is not capable of ensuring safe operations when imaging sensors are part of the safety system. Full article
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)

Open Access Article Imaging for High-Throughput Phenotyping in Energy Sorghum
J. Imaging 2016, 2(1), 4; doi:10.3390/jimaging2010004
Received: 3 November 2015 / Revised: 15 January 2016 / Accepted: 15 January 2016 / Published: 26 January 2016
PDF Full-text (2152 KB) | HTML Full-text | XML Full-text
Abstract
The increasing energy demand in recent years has resulted in a continuous growing interest in renewable energy sources, such as efficient and high-yielding energy crops. Energy sorghum is a crop that has shown great potential in this area, but needs further improvement. Plant phenotyping—measuring physiological characteristics of plants—is a laborious and time-consuming task, but it is essential for crop breeders as they attempt to improve a crop. The development of high-throughput phenotyping (HTP)—the use of autonomous sensing systems to rapidly measure plant characteristics—offers great potential for vastly expanding the number of types of a given crop plant surveyed. HTP can thus enable much more rapid progress in crop improvement through the inclusion of more genetic variability. For energy sorghum, stalk thickness is a critically important phenotype, as the stalk contains most of the biomass. Imaging is an excellent candidate for certain phenotypic measurements, as it can simulate visual observations. The aim of this study was to evaluate image analysis techniques involving K-means clustering and minimum-distance classification for use on red-green-blue (RGB) images of sorghum plants as a means to measure stalk thickness. Additionally, a depth camera integrated with the RGB camera was tested for the accuracy of distance measurements between camera and plant. Eight plants were imaged on six dates through the growing season, and image segmentation, classification and stalk thickness measurement were performed. While accuracy levels with both image analysis techniques needed improvement, both showed promise as tools for HTP in sorghum. The average error for K-means with supervised stalk measurement was 10.7% after removal of known outliers. Full article
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)

Open Access Article Non-Parametric Retrieval of Aboveground Biomass in Siberian Boreal Forests with ALOS PALSAR Interferometric Coherence and Backscatter Intensity
J. Imaging 2016, 2(1), 1; doi:10.3390/jimaging2010001
Received: 30 October 2015 / Revised: 2 December 2015 / Accepted: 15 December 2015 / Published: 25 December 2015
Cited by 2 | PDF Full-text (15965 KB) | HTML Full-text | XML Full-text
Abstract
The main objective of this paper is to investigate the effectiveness of two recently popular non-parametric models for aboveground biomass (AGB) retrieval from Synthetic Aperture Radar (SAR) L-band backscatter intensity and coherence images. An area in Siberian boreal forests was selected for this study. The results demonstrated that relatively high estimation accuracy can be obtained at a spatial resolution of 50 m using the MaxEnt and the Random Forests machine learning algorithms. Overall, the AGB estimation errors were similar for both tested models (approximately 35 t∙ha−1). The retrieval accuracy slightly increased, by approximately 1%, when the filtered backscatter intensity was used. Random Forests underestimated the AGB values, whereas MaxEnt overestimated the AGB values. Full article
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)

Open Access Article Precise Navigation of Small Agricultural Robots in Sensitive Areas with a Smart Plant Camera
J. Imaging 2015, 1(1), 115-133; doi:10.3390/jimaging1010115
Received: 4 September 2015 / Revised: 30 September 2015 / Accepted: 30 September 2015 / Published: 13 October 2015
Cited by 2 | PDF Full-text (5249 KB) | HTML Full-text | XML Full-text
Abstract
Most of the relevant technology related to precision agriculture is currently controlled by Global Positioning Systems (GPS) and uploaded map data; however, in sensitive areas with young or expensive plants, small robots are becoming more widely used in exclusive work. These robots must follow the plant lines with centimeter precision to protect plant growth. For cases in which GPS fails, a camera-based solution is often used for navigation because of the system cost and simplicity. The low-cost plant camera presented here generates images in which plants are contrasted against the soil, thus enabling the use of simple cross-correlation functions to establish high-resolution navigation control in the centimeter range. Based on the foresight provided by images from in front of the vehicle, robust vehicle control can be established without any dead time; as a result, off-loading the main robot control and overshooting can be avoided. Full article
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)
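The navigation idea described, correlating the current plant-line profile against a reference to obtain a lateral offset in pixels, can be sketched as a plain cross-correlation search. The profiles below are toy data (for example, column sums of a plant/soil mask), not output of the camera described:

```python
def lateral_offset(reference, current, max_shift):
    """Shift (in pixels) that best aligns `current` with `reference`,
    found by maximizing the cross-correlation over candidate shifts."""
    best_shift, best_score = 0, float("-inf")
    n = len(reference)
    for s in range(-max_shift, max_shift + 1):
        # Sum products only where both indices are in range.
        score = sum(reference[i] * current[i + s]
                    for i in range(max(0, -s), min(n, n - s)))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# A bright plant row centred at index 5 in the reference profile has
# drifted to index 7 in the current frame: offset = 2 pixels.
ref = [0, 0, 0, 0, 1, 3, 1, 0, 0, 0, 0, 0]
cur = [0, 0, 0, 0, 0, 0, 1, 3, 1, 0, 0, 0]
offset = lateral_offset(ref, cur, 4)  # → 2
```

The sign and magnitude of the offset would then feed the steering controller; looking ahead of the vehicle, as the paper notes, lets this correction be applied without dead time.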

Open Access Article Land Cover Change Image Analysis for Assateague Island National Seashore Following Hurricane Sandy
J. Imaging 2015, 1(1), 85-114; doi:10.3390/jimaging1010085
Received: 1 September 2015 / Revised: 29 September 2015 / Accepted: 30 September 2015 / Published: 5 October 2015
PDF Full-text (4137 KB) | HTML Full-text | XML Full-text
Abstract
The assessment of storm damages is critically important if resource managers are to understand the impacts of weather pattern changes and sea level rise on their lands and develop management strategies to mitigate their effects. This study was performed to detect land cover change on Assateague Island as a result of Hurricane Sandy. Several single-date classifications were performed on the pre- and post-hurricane imagery using both pixel-based and object-based approaches with the Random Forest classifier. Univariate image differencing and a post-classification comparison were used to conduct the change detection. This study found that the addition of the coastal blue band to the Landsat 8 sensor did not improve classification accuracy, and there was no statistically significant improvement in classification accuracy using Landsat 8 compared to Landsat 5. Furthermore, no significant difference was found between object-based and pixel-based classification. Change totals estimated on Assateague Island following Hurricane Sandy were found to be minimal, occurring predominantly in the most active sections of the island in terms of land cover change; the post-classification comparison, however, detected significantly more change, mainly due to classification errors in the single-date maps used. Full article
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)

Other


Open Access Technical Note Machine-Vision Systems Selection for Agricultural Vehicles: A Guide
J. Imaging 2016, 2(4), 34; doi:10.3390/jimaging2040034
Received: 12 September 2016 / Revised: 14 November 2016 / Accepted: 15 November 2016 / Published: 22 November 2016
Cited by 5 | PDF Full-text (12072 KB) | HTML Full-text | XML Full-text
Abstract
Machine vision systems are becoming increasingly common onboard agricultural vehicles (autonomous and non-autonomous) for different tasks. This paper provides guidelines for selecting machine-vision systems for optimum performance, considering the adverse conditions of these outdoor environments: high variability in illumination, irregular terrain conditions, and different plant growth states, among others. In this regard, three main topics are addressed for the best selection: (a) spectral bands (visible and infrared); (b) imaging sensors and optical systems (including intrinsic parameters); and (c) geometric visual system arrangement (considering extrinsic parameters and stereovision systems). A general overview, with detailed description and technical support, is provided for each topic, with illustrative examples focused on specific applications in agriculture, although they could also be applied in contexts other than agriculture. A case study is provided from research in the European Union-funded RHEA (Robot Fleets for Highly Effective Agriculture and Forestry Management) project on effective weed control in maize fields (wide-row crops), where the machine vision system onboard the autonomous vehicles was the most relevant part of the full perception system. Details and results about crop row detection, weed patch identification, autonomous vehicle guidance and obstacle detection are provided, together with a review of methods and approaches on these topics. Full article
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)
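Crop row detection pipelines of the kind reviewed here usually begin by separating vegetation from soil with a greenness index. A common choice, used below purely as an illustration and not necessarily the exact index of the RHEA system, is Excess Green on chromaticity-normalized RGB:

```python
def excess_green(r, g, b):
    """Excess Green index on chromaticity-normalized RGB:
    ExG = 2g' - r' - b', where r' + g' + b' = 1."""
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

# Vegetation pixels score high; bare soil stays near zero (toy values).
plant = excess_green(40, 120, 30)   # green-dominant pixel
soil = excess_green(120, 100, 80)   # brownish soil pixel
```

Thresholding the resulting greenness image yields a binary plant mask, from which row lines can be fitted (e.g., by Hough transform or column-profile analysis) for guidance and weed-patch identification.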

Planned Papers

The list below represents only planned manuscripts. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Title: Imaging for High-Throughput Phenotyping in Energy Sorghum
Author: Alex Thomasson et al.
Abstract: The paper will report on (1) observations related to challenges in identifying and measuring plant-stalk properties, (2) imaging techniques to meet these challenges, and (3) testing of the imaging solution.

Title: Comparison of Off-the-Shelf Small Unmanned Aerial Systems Using Image Processing
Authors: Bulanon, D.M., Cano, E., Horton, R.
Abstract: Precision agriculture is a farm management technology that involves sensing and then responding to the observed variability in the field. Remote sensing is one of the tools of precision agriculture. The emergence of small unmanned aerial systems (sUAS) has paved the way to accessible remote sensing tools for farmers. This paper describes the comparison of two popular off-the-shelf sUASs: the 3DR Iris and the DJI Phantom 2. Both units are equipped with a camera gimbal carrying a GoPro camera. The comparison of the two sUASs involves a hovering test and a rectilinear motion test. In the hovering test, the sUAS was allowed to hover over a known object and images were taken every second for two minutes. The position of the object in the images was measured and used to assess the stability of the sUAS while hovering. In the rectilinear test, the sUAS was allowed to follow a straight path and images of a lined track were acquired. The lines in the images were then measured to assess how accurately the sUAS followed the path. Results showed that both sUASs performed well in both the hovering test and the rectilinear motion test, demonstrating that both sUASs can be used for agricultural monitoring.

Title: Combining Satellite, Aerial and Ground Measurements to Assess Forest Carbon Stocks in Democratic Republic of Congo
Author: Jean-François Bastin et al.
Abstract: Monitoring changes in tropical forest carbon stocks has been a rising topic in recent years, as a result of REDD+ mechanism negotiations. Aerial and satellite remote sensing technologies offer cost advantages in implementing large-scale forest inventories. Despite recent progress, no widely operational and cost-effective method has yet been delivered for central African forest monitoring. In the present study, we assess the potential of airborne stereoscopic images for predicting above-ground biomass (AGB) variations of forests located in the Maï Ndombe region of the Democratic Republic of the Congo. In particular, we built a simple linear model to predict AGB from airborne stereoscopic image pair metrics. The accuracy of the results (R² = 0.7) allows us to consider applying the method on a much larger scale, though it will probably require some adjustments to account for the variation of forest structure and allometry among the main forest types of Central Africa.

Title: Analysis of the effect of direct sunlight on the calculation of NDVI for ground-truth multispectral images
Authors: Pilar Barreiro et al.
Abstract: Unmanned aerial vehicles (UAVs) may help to solve problems inherent in data obtained from satellite-mounted sensors. This new tool has advantages and disadvantages relative to the techniques used so far.
In this work, we studied NDVI computed on multispectral images under both direct and diffuse light conditions, since illumination is believed to be one of the parameters external to the crop that most affects the determination of vegetation indices. Crop data taken under heavily clouded conditions could not, until now, be obtained from satellite records; multispectral cameras mounted on UAVs now allow data to be taken under these conditions. Diffuse lighting has the advantage that the incidence of light on the crop is much more homogeneous than under clear-sky conditions, where shadows appear even in photos taken at solar noon. Moreover, the increased spatial resolution afforded by the lower flying height allows individual plants, and individual weeds, to be observed in the plot; this is unavailable from satellite images.
A particularly interesting question is whether data from the two lighting conditions are comparable, in order to assess the usability of both types of data in a historical study. Many authors agree that UAVs allow data collection under more diverse conditions than satellites, but nobody has shown whether data taken with this new technique are of better or worse quality than those taken so far.

Title: Estimating area of paddy fields in Korean Peninsula around 2001 using vegetation and water indices based on Landsat Thematic Mapper/Enhanced Thematic Mapper Plus data in 2 seasons
Author: Dr. Katsuo Okamoto
Affiliation: NARO Institute for Agro-Environmental Sciences, 1-3 Kannon-dai 3-chome, Tsukuba, 305-8604, Japan
Abstract: Using water and vegetation indices, a method to classify land cover was developed and applied to Landsat TM/ETM+ data to estimate the area of paddy fields on the Korean Peninsula in 2000–2002. NDVI and MNDWI were calculated from TM/ETM+ data in the rice-planting season and the ripening season. Using these indices, the data in each season were classified into 5 categories. Finally, using a decision table of land cover for the two seasons, pixels changing from water to vegetation were defined as paddy fields. The estimation accuracy was around 80%. The result shows that this classification method reduced working hours compared with the usual methods, such as unsupervised and supervised classification.
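The two-season rule in this abstract can be sketched directly from the index definitions. The band reflectances and the 0.3 threshold below are illustrative assumptions, not the study's calibrated values:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def mndwi(green, swir):
    """Modified Normalized Difference Water Index (Xu, 2006):
    (Green - SWIR) / (Green + SWIR)."""
    return (green - swir) / (green + swir)

def is_paddy(planting, ripening, thresh=0.3):
    """Label a pixel paddy if it looks like water in the rice-planting
    season and like vegetation in the ripening season, mirroring the
    water-to-vegetation row of the decision table."""
    water_first = mndwi(planting["green"], planting["swir"]) > thresh
    vegetation_later = ndvi(ripening["nir"], ripening["red"]) > thresh
    return water_first and vegetation_later

# A flooded field in the planting season that becomes dense canopy by
# the ripening season is classified as paddy; dry soil is not.
flooded = {"green": 0.12, "swir": 0.03}
dry_soil = {"green": 0.10, "swir": 0.20}
canopy = {"nir": 0.50, "red": 0.06}
```

The full method classifies each season into five categories and consults a complete decision table; the water-then-vegetation transition shown here is the one that defines paddy fields.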
