Search Results (11)

Search Parameters:
Keywords = colour-infrared imagery

24 pages, 12069 KiB  
Article
Exploring the Use of Orthophotos in Google Earth Engine for Very High-Resolution Mapping of Impervious Surfaces: A Data Fusion Approach in Wuppertal, Germany
by Jan-Philipp Langenkamp and Andreas Rienow
Remote Sens. 2023, 15(7), 1818; https://doi.org/10.3390/rs15071818 - 29 Mar 2023
Cited by 9 | Viewed by 3684
Abstract
Germany aims to reduce soil sealing to under 30 hectares per day by 2030 to address the negative environmental impacts of expanding impervious surfaces. As cities adapt to climate change, spatially explicit, very high-resolution information about the distribution of impervious surfaces is becoming increasingly important for urban planning and decision-making. This study proposes a method for mapping impervious surfaces in Google Earth Engine (GEE) using a data fusion approach of 0.9 m colour-infrared true orthophotos, digital elevation models, and vector data. We conducted a pixel-based random forest (RF) classification utilizing spectral indices, Grey-Level Co-occurrence Matrix texture features, and topographic features. Impervious surfaces were mapped at 0.9 m resolution, resulting in an Overall Accuracy of 92.31% and a Kappa coefficient of 84.62%. To address challenges posed by high-resolution imagery, we superimposed the RF classification results with land use data from Germany’s Authoritative Real Estate Cadastre Information System (ALKIS). The results show that 25.26% of the city of Wuppertal is covered by impervious surfaces, coinciding with a government-funded study from 2020, based on Sentinel-2 Copernicus data, that defined a proportion of 25.22% as built-up area. This demonstrates the effectiveness of our method for semi-automated mapping of impervious surfaces in GEE to support urban planning on a local to regional scale. Full article
(This article belongs to the Special Issue Urban Planning Supported by Remote Sensing Technology)
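The accuracy figures quoted above can be reproduced from a confusion matrix. A minimal sketch in Python, using an illustrative 2x2 matrix (not the paper's data) chosen so the metrics land near the reported values:

```python
# Overall Accuracy and Cohen's kappa from a confusion matrix.
# The matrix values below are illustrative, not taken from the paper.

def accuracy_metrics(matrix):
    """matrix[i][j] = pixels of true class i predicted as class j."""
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    oa = correct / total
    # Expected agreement by chance, from row and column marginals.
    pe = sum(
        sum(matrix[i]) * sum(row[i] for row in matrix)
        for i in range(len(matrix))
    ) / total**2
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa

cm = [[460, 40],   # true impervious: 460 correct, 40 missed
      [ 37, 463]]  # true pervious:   37 false alarms, 463 correct
oa, kappa = accuracy_metrics(cm)
print(round(oa, 3), round(kappa, 3))
```

Kappa discounts the agreement expected from the class proportions alone, which is why a 92% Overall Accuracy can correspond to a kappa near 0.85.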

17 pages, 26873 KiB  
Article
Geomorphometric and Geophysical Constraints on Outlining Drained Shallow Mountain Mires
by Stanisław Burliga, Marek Kasprzak, Artur Sobczyk and Wioletta Niemczyk
Geosciences 2023, 13(2), 43; https://doi.org/10.3390/geosciences13020043 - 30 Jan 2023
Cited by 1 | Viewed by 1861
Abstract
Long-term draining of peatlands results in transformation of vegetation and obliteration of their morphological features. In many areas, efforts are made to restore the original ecosystems and increase their water retention potential. Using combined analyses of a LiDAR-based digital terrain model (DTM), colour-infrared (CIR) imagery data, ground-penetrating radar (GPR) data and electrical resistivity tomography (ERT) data, we tested the applicability of these methods in outlining the extent and subsurface structure of drained mires located in the Stolowe Mountains National Park area, Poland. The LiDAR-DTMs enabled parameterisation of physiographic features of the mires and determination of their extent, runoff directions and potential waterlogging areas. CIR analysis enabled classification of vegetation types. GPR prospecting revealed the bedrock morphology, thickness and internal structure of the peat deposits, showing that this technique can also provide data on variability in the decomposition of phytogenic deposits. The obtained ERT sections indicate both the thickness of peat deposits and variability in the bedrock internal structure. The results show that integrated analyses of data obtained with different methods can be an effective tool in outlining the original extent of peatlands, with potential application in the planning of peatland ecosystem restitution. Full article

21 pages, 5816 KiB  
Article
Estimating the Reduction in Cover Crop Vitality Followed by Pelargonic Acid Application Using Drone Imagery
by Eliyeh Ganji, Görres Grenzdörffer and Sabine Andert
Agronomy 2023, 13(2), 354; https://doi.org/10.3390/agronomy13020354 - 26 Jan 2023
Cited by 3 | Viewed by 2172
Abstract
Cultivation of cover crops is a valuable practice in sustainable agriculture. In cover crop management, the method of desiccation is an important consideration, and one widely used method for this is the application of glyphosate. With use of glyphosate likely to be banned soon in Europe, the purpose of this study was to evaluate the herbicidal effect of pelargonic acid (PA) as a bio-based substitute for glyphosate. This study presents the results of a two-year field experiment (2019 and 2021) conducted in northeast Germany. The experimental setup included an untreated control, three different dosages (16, 8, and 5 L/ha) of PA, and the active ingredients glyphosate and pyraflufen. A completely randomised block design was established. The effect of the herbicide treatments was assessed by a visual estimate of the percentage of crop vitality and a comparison assessment provided by an Ebee+ drone. Four vegetation indices (VIs) calculated from the drone images were used to verify the credibility of colour (RGB)-based and near-infrared (NIR)-based vegetation indices. The results of both types of assessment indicated that pelargonic acid was reasonably effective in controlling cover crops within a week of application. In both experimental years, the PA (16 L/ha) and PA_2T (double application of 8 L/ha) treatments demonstrated their highest herbicidal effect for up to seven days after application. PA (16 L/ha) vitality loss decreased over time, while PA_2T (double application of 8 L/ha) continued to exhibit an almost constant effect for longer due to the second application one week later. The PA dosage of 5 L/ha, pyraflufen, and a mixture of the two exhibited a smaller vitality loss than the other treatments. However, except for glyphosate, the herbicidal effect of all the other treatments decreased over time. At the end of the experiment, the glyphosate treatment (3 L/ha) demonstrated the lowest estimated vitality. 
The results of the drone assessments indicated that vegetation indices (VIs) can provide detailed information regarding crop vitality following herbicide application and that RGB-based indices, such as EXG, have the potential to be applied efficiently and cost-effectively utilising drone imagery. The results of this study demonstrate that pelargonic acid has considerable potential for use as an additional tool in integrated crop management. Full article
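One of the RGB-based indices mentioned, Excess Green (ExG), is conventionally computed from normalised chromatic coordinates; a small sketch of that standard formulation (the authors' exact normalisation may differ):

```python
# Excess Green (ExG) vegetation index from RGB digital numbers,
# using normalised chromatic coordinates: ExG = 2g - r - b.

def exg(r, g, b):
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total  # chromatic coordinates
    return 2 * gn - rn - bn

print(exg(60, 180, 40))   # green canopy pixel -> strongly positive
print(exg(120, 110, 100)) # desiccated/soil pixel -> near zero
```

A drop in ExG over the trial plots is one way such imagery can register the loss of cover crop vitality after herbicide application.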

20 pages, 3640 KiB  
Article
Single Tree Classification Using Multi-Temporal ALS Data and CIR Imagery in Mixed Old-Growth Forest in Poland
by Agnieszka Kamińska, Maciej Lisiewicz and Krzysztof Stereńczak
Remote Sens. 2021, 13(24), 5101; https://doi.org/10.3390/rs13245101 - 15 Dec 2021
Cited by 11 | Viewed by 4596
Abstract
Tree species classification is important for a variety of environmental applications, including biodiversity monitoring, wildfire risk assessment, ecosystem services assessment, and sustainable forest management. In this study we used a fusion of three remote sensing (RS) datasets, including ALS (leaf-on and leaf-off) and colour-infrared (CIR) imagery (leaf-on), to classify different coniferous and deciduous tree species, including a dead class, in a mixed temperate forest in Poland. We used intensity and structural variables from the ALS data and spectral information derived from aerial imagery for the classification procedure. Additionally, we tested the differences in classification accuracy of all the variants included in the data integration. The random forest classifier was used in the study. The highest accuracies were obtained for classification based on both point clouds and including image spectral information. The mean values for overall accuracy and kappa were 84.3% and 0.82, respectively. Analysis of leaf-on or leaf-off data alone is not sufficient to identify individual tree species because of the datasets' different discriminatory power. Leaf-on and leaf-off ALS point cloud features alone gave the lowest accuracies of 72% ≤ OA ≤ 74% and 0.67 ≤ κ ≤ 0.70. Classification based on both point clouds was found to give satisfactory and comparable results to classification based on combined information from all three sources (83% ≤ OA ≤ 84% and 0.81 ≤ κ ≤ 0.82). The classification accuracy varied between species. The classification results for coniferous trees were always better than for deciduous trees, independent of the datasets. In the classification based on both point clouds (leaf-on and leaf-off), the intensity features seemed to be more important than the other groups of variables, especially the coefficient of variation, skewness, and percentiles. The NDVI was the most important CIR-based feature. Full article
(This article belongs to the Special Issue Forest Monitoring in a Multi-Sensor Approach)
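The NDVI highlighted above as the most important CIR-based feature is the standard normalised ratio of NIR and red reflectance; a minimal sketch with made-up reflectance values:

```python
# NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
# In CIR composites the NIR band is typically displayed as red; here
# we simply assume NIR and red reflectances are available separately.

def ndvi(nir, red):
    return [(n - r) / (n + r) for n, r in zip(nir, red)]

# Healthy foliage reflects strongly in the NIR while dead wood does
# not, which is why NDVI helps separate a dead class from live trees.
print(ndvi([0.45, 0.12], [0.05, 0.10]))  # live crown vs dead crown
```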

21 pages, 60847 KiB  
Article
Estimating Plant Pasture Biomass and Quality from UAV Imaging across Queensland’s Rangelands
by Jason Barnetson, Stuart Phinn and Peter Scarth
AgriEngineering 2020, 2(4), 523-543; https://doi.org/10.3390/agriengineering2040035 - 5 Nov 2020
Cited by 36 | Viewed by 7123
Abstract
The aim of this research was to test recent developments in the use of Remotely Piloted Aircraft Systems or Unmanned Aerial Vehicles (UAVs)/drones to map both pasture quantity, as biomass yield, and pasture quality, as the proportions of key pasture nutrients, across a selected range of field sites throughout the rangelands of Queensland. Improved pasture management begins with an understanding of the state of the resource base; UAV-based methods can potentially achieve this at improved spatial and temporal scales. This study developed machine learning based predictive models of both pasture measures. UAV-based structure-from-motion photogrammetry provided a measure of yield from overlapping high-resolution visible colour imagery. Pasture nutrient composition was estimated from the spectral signatures of visible near-infrared hyperspectral UAV sensing. An automated pasture height surface modelling technique was developed, tested and used along with field site measurements to predict further estimates across each field site. Both prior knowledge and automated predictive modelling techniques were employed to predict yield and nutrition. Pasture height surface modelling was assessed against field measurements using a rising plate meter, with correlation coefficients (R2) ranging from 0.2 to 0.4 for both woodland and grassland field sites. The accuracy of the predictive modelling was determined from further field measurements of yield and on average indicated an error of 0.8 t ha−1 in grasslands and 1.3 t ha−1 in mixed woodlands across both modelling approaches. Correlation analyses between measures of pasture quality, acid detergent fibre and crude protein (ADF, CP), and spectral reflectance data indicated that the visible red (651 nm) and red-edge (759 nm) regions were highly correlated (mean R2 = 0.9 for ADF and 0.5 for CP). These findings agreed with previous studies linking specific absorption features with grass chemical composition. We conclude that the practical application of such techniques, to efficiently and accurately map pasture yield and quality, is possible at the field site scale; however, further research is needed, in particular further field sampling of both yield and nutrient elements across such a diverse landscape, with the potential to scale up to a satellite platform for broader-scale monitoring. Full article
(This article belongs to the Special Issue Digital Agriculture: Latest Advances and Prospects)

23 pages, 322 KiB  
Article
State of Science Assessment of Remote Sensing of Great Lakes Coastal Wetlands: Responding to an Operational Requirement
by Lori White, Robert A. Ryerson, Jon Pasher and Jason Duffe
Remote Sens. 2020, 12(18), 3024; https://doi.org/10.3390/rs12183024 - 16 Sep 2020
Cited by 11 | Viewed by 3657
Abstract
The purpose of this research was to develop a state of science synthesis of remote sensing technologies that could be used to track changes in Great Lakes coastal vegetation for the Great Lakes-St. Lawrence River Adaptive Management (GLAM) Committee. The mapping requirements included a minimum mapping unit (MMU) of either 2 × 2 m or 4 × 4 m, a digital elevation model (DEM) accuracy in x and y of 2 m, a “z” value or vertical accuracy of 1–5 cm, and an accuracy of 90% for the classes of interest. To determine the appropriate remote sensing sensors, we conducted an extensive literature review. The required high degree of accuracy resulted in the elimination of many of the remote sensing sensors used in other wetland mapping applications including synthetic aperture radar (SAR) and optical imagery with a resolution >1 m. Our research showed that remote sensing sensors that could at least partially detect the different types of wetland vegetation in this study were the following types: (1) advanced airborne “coastal” Airborne Light Detection and Ranging (LiDAR) with either a multispectral or a hyperspectral sensor, (2) colour-infrared aerial photography (airplane) with (optimum) 8 cm resolution, (3) colour-infrared unmanned aerial vehicle (UAV) photography with vertical accuracy determination rated at 10 cm, (4) colour-infrared UAV photography with high vertical accuracy determination rated at 3–5 cm, (5) airborne hyperspectral imagery, and (6) very high-resolution optical satellite data with better than 1 m resolution. Full article
(This article belongs to the Special Issue Remote Sensing for Wetland Inventory, Mapping and Change Analysis)

13 pages, 2781 KiB  
Article
Evaluating the Efficacy and Optimal Deployment of Thermal Infrared and True-Colour Imaging When Using Drones for Monitoring Kangaroos
by Elizabeth A. Brunton, Javier X. Leon and Scott E. Burnett
Drones 2020, 4(2), 20; https://doi.org/10.3390/drones4020020 - 27 May 2020
Cited by 31 | Viewed by 7000
Abstract
Advances in drone technology have given rise to much interest in the use of drone-mounted thermal imagery in wildlife monitoring. This research tested the feasibility of monitoring large mammals in an urban environment and investigated the influence of drone flight parameters and environmental conditions on their successful detection using thermal infrared (TIR) and true-colour (RGB) imagery. We conducted 18 drone flights at different altitudes on the Sunshine Coast, Queensland, Australia. Eastern grey kangaroos (Macropus giganteus) were detected from TIR (n=39) and RGB orthomosaics (n=33) using manual image interpretation. Factors that predicted the detection of kangaroos from drone images were identified using unbiased recursive partitioning. Drone-mounted imagery achieved an overall 73.2% detection success rate using TIR imagery and 67.2% using RGB imagery when compared to on-ground counts of kangaroos. We showed that the successful detection of kangaroos using TIR images was influenced by vegetation type, whereas detection using RGB images was influenced by vegetation type, time of day that the drone was deployed, and weather conditions. Kangaroo detection was highest in grasslands, and kangaroos were not successfully detected in shrublands. Drone-mounted TIR and RGB imagery are effective at detecting large mammals in urban and peri-urban environments. Full article
(This article belongs to the Special Issue She Maps)

23 pages, 3936 KiB  
Article
Evaluation of Atmospheric Correction Algorithms over Spanish Inland Waters for Sentinel-2 Multi Spectral Imagery Data
by Marcela Pereira-Sandoval, Ana Ruescas, Patricia Urrego, Antonio Ruiz-Verdú, Jesús Delegido, Carolina Tenjo, Xavier Soria-Perpinyà, Eduardo Vicente, Juan Soria and José Moreno
Remote Sens. 2019, 11(12), 1469; https://doi.org/10.3390/rs11121469 - 21 Jun 2019
Cited by 113 | Viewed by 8500
Abstract
The atmospheric contribution constitutes about 90 percent of the signal measured by satellite sensors over oceanic and inland waters. Over open ocean waters, the atmospheric contribution is relatively easy to correct, as it can be assumed that water-leaving radiance in the near-infrared (NIR) is equal to zero, and the correction can be performed by applying a relatively simple dark-pixel-correction-based type of algorithm. Over inland and coastal waters, this assumption cannot be made, since the water-leaving radiance in the NIR is greater than zero due to the presence of water components like sediments and dissolved organic particles. The aim of this study is to determine the most appropriate atmospheric correction processor to be applied on Sentinel-2 MultiSpectral Imagery over several types of inland waters. Retrievals obtained from different atmospheric correction processors (i.e., Atmospheric correction for OLI ‘lite’ (ACOLITE), Case 2 Regional Coast Colour (here called C2RCC), Case 2 Regional Coast Colour for Complex waters (here called C2RCCCX), Image correction for atmospheric effects (iCOR), Polynomial-based algorithm applied to MERIS (Polymer) and Sen2Cor or Sentinel 2 Correction) are compared against in situ reflectance measured in lakes and reservoirs in the Valencia region (Spain). Polymer and C2RCC are the processors that yield the best statistics, with coefficients of determination higher than 0.83 and mean absolute errors less than 0.01. An evaluation of the performance based on water types and single bands (a classification based on ranges of in situ chlorophyll-a concentration and Secchi disk depth values) showed that the performance of this set of processors is better for relatively complex waters. ACOLITE, iCOR and Sen2Cor performed better when applied to meso- and hyper-eutrophic waters than to oligotrophic ones. However, other considerations should also be taken into account, such as the elevation of the lakes above sea level, their distance from the sea and their morphology. Full article
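The ranking statistics quoted for the processors (coefficient of determination and mean error against in situ reflectance) can be sketched as follows; the reflectance pairs below are invented for illustration:

```python
# R^2 and mean absolute error between processor-retrieved and
# in situ reflectance. Data points are made up, not the study's.

def r2_mae(retrieved, in_situ):
    n = len(in_situ)
    mean = sum(in_situ) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(in_situ, retrieved))
    ss_tot = sum((o - mean) ** 2 for o in in_situ)
    mae = sum(abs(o - p) for o, p in zip(in_situ, retrieved)) / n
    return 1 - ss_res / ss_tot, mae

in_situ   = [0.010, 0.025, 0.040, 0.060, 0.085]
retrieved = [0.012, 0.024, 0.043, 0.058, 0.088]
r2, mae = r2_mae(retrieved, in_situ)
print(r2 > 0.83, mae < 0.01)  # the thresholds reported for Polymer/C2RCC
```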

24 pages, 6965 KiB  
Article
Atmospheric Correction of OLCI Imagery over Extremely Turbid Waters Based on the Red, NIR and 1016 nm Bands and a New Baseline Residual Technique
by Juan Ignacio Gossn, Kevin George Ruddick and Ana Inés Dogliotti
Remote Sens. 2019, 11(3), 220; https://doi.org/10.3390/rs11030220 - 22 Jan 2019
Cited by 34 | Viewed by 4802
Abstract
A common approach to the pixel-by-pixel atmospheric correction of satellite water colour imagery is to calculate aerosol and water reflectance at two spectral bands, typically in the near infra-red (NIR, 700–1000 nm) or the short-wave-infra-red (SWIR, 1000–3000 nm), and then extrapolate aerosol reflectance to shorter wavelengths. For clear waters, this can be achieved simply for NIR bands, where the water reflectance can be assumed negligible i.e., the “black water” assumption. For moderately turbid waters, either the NIR water reflectance, which is non-negligible, must be modelled or longer wavelength SWIR bands, with negligible water reflectance, must be used. For extremely turbid waters, modelling of non-zero NIR water reflectance becomes uncertain because the spectral slopes of water and aerosol reflectance in the NIR become similar, making it difficult to distinguish between them. In such waters the use of SWIR bands is definitely preferred and the use of the MODIS bands at 1240 nm and 2130 nm is clearly established although, on many sensors such as the Ocean and Land Colour Instrument (OLCI), such SWIR bands are not included. Instead, a new, cheaper SWIR band at 1016 nm is available on OLCI with potential for much better atmospheric correction over extremely turbid waters. That potential is tested here. In this work, we demonstrate that for spectrally-close band triplets (such as OLCI bands at 779–865–1016 nm), the Rayleigh-corrected reflectance of the triplet’s “middle” band after baseline subtraction (or baseline residual, BLR) is essentially independent of the atmospheric conditions. We use the three BLRs defined by three consecutive band triplets of the group of bands 620–709–779–865–1016 nm to calculate water reflectance and hence aerosol reflectance at these wavelengths. Comparison with standard atmospheric correction algorithms shows similar performance in moderately turbid and clear waters and a considerable improvement in extremely turbid waters. 
Full article
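The baseline residual (BLR) described above can be read as the middle band's Rayleigh-corrected reflectance minus a linear baseline through the two outer bands of the triplet; a sketch under that assumption, with invented reflectances:

```python
# Baseline residual (BLR) for a spectrally-close band triplet,
# interpreted as: middle-band reflectance minus the linear baseline
# between the two outer bands. Values below are illustrative only.

def baseline_residual(wl, rho):
    """wl, rho: wavelengths (nm) and Rayleigh-corrected reflectances of a
    (lower, middle, upper) band triplet, e.g. OLCI 779-865-1016 nm."""
    w1, w2, w3 = wl
    r1, r2, r3 = rho
    baseline = r1 + (r3 - r1) * (w2 - w1) / (w3 - w1)
    return r2 - baseline

# A spectrally smooth (aerosol-like) signal is nearly linear across the
# triplet, so its BLR is close to zero; a water signal bends the triplet.
print(baseline_residual((779, 865, 1016), (0.040, 0.036, 0.029)))
```

This near-invariance under smooth atmospheric spectra is what lets the three BLRs of the 620-709-779-865-1016 nm bands isolate the water contribution.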

23 pages, 26252 KiB  
Article
DeepFruits: A Fruit Detection System Using Deep Neural Networks
by Inkyu Sa, Zongyuan Ge, Feras Dayoub, Ben Upcroft, Tristan Perez and Chris McCool
Sensors 2016, 16(8), 1222; https://doi.org/10.3390/s16081222 - 3 Aug 2016
Cited by 965 | Viewed by 66477
Abstract
This paper presents a novel approach to fruit detection using deep convolutional neural networks. The aim is to build an accurate, fast and reliable fruit detection system, which is a vital element of an autonomous agricultural robotic platform; it is a key element for fruit yield estimation and automated harvesting. Recent work in deep neural networks has led to the development of a state-of-the-art object detector termed Faster Region-based CNN (Faster R-CNN). We adapt this model, through transfer learning, for the task of fruit detection using imagery obtained from two modalities: colour (RGB) and Near-Infrared (NIR). Early and late fusion methods are explored for combining the multi-modal (RGB and NIR) information. This leads to a novel multi-modal Faster R-CNN model, which achieves state-of-the-art results compared to prior work, with the F1 score (which takes into account both precision and recall) improving from 0.807 to 0.838 for the detection of sweet pepper. In addition to improved accuracy, this approach is also much quicker to deploy for new fruits, as it requires bounding box annotation rather than pixel-level annotation (annotating bounding boxes is approximately an order of magnitude quicker to perform). The model is retrained to perform the detection of seven fruits, with the entire process taking four hours to annotate and train the new model per fruit. Full article
(This article belongs to the Special Issue Vision-Based Sensors in Field Robotics)
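The F1 score behind the reported 0.807 to 0.838 improvement is the harmonic mean of precision and recall; a small sketch with invented detection counts:

```python
# F1 = 2 * precision * recall / (precision + recall),
# equivalently 2*TP / (2*TP + FP + FN).

def f1_score(precision, recall):
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative detection counts (not from the paper):
tp, fp, fn = 134, 22, 30
p, r = tp / (tp + fp), tp / (tp + fn)
print(f1_score(p, r))  # about 0.8375 = 2*134 / (2*134 + 22 + 30)
```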

18 pages, 5992 KiB  
Article
Quantifying Efficacy and Limits of Unmanned Aerial Vehicle (UAV) Technology for Weed Seedling Detection as Affected by Sensor Resolution
by José M. Peña, Jorge Torres-Sánchez, Angélica Serrano-Pérez, Ana I. De Castro and Francisca López-Granados
Sensors 2015, 15(3), 5609-5626; https://doi.org/10.3390/s150305609 - 6 Mar 2015
Cited by 168 | Viewed by 17350
Abstract
In order to optimize the application of herbicides in weed-crop systems, accurate and timely weed maps of the crop-field are required. In this context, this investigation quantified the efficacy and limitations of remote images collected with an unmanned aerial vehicle (UAV) for early detection of weed seedlings. The ability to discriminate weeds was significantly affected by the imagery spectral (type of camera), spatial (flight altitude) and temporal (the date of the study) resolutions. The colour-infrared images captured at 40 m and 50 days after sowing (date 2), when plants had 5–6 true leaves, had the highest weed detection accuracy (up to 91%). At this flight altitude, the images captured before date 2 had slightly better results than the images captured later. However, this trend changed in the visible-light images captured at 60 m and higher, which had notably better results on date 3 (57 days after sowing) because of the larger size of the weed plants. Our results showed the requirements on spectral and spatial resolutions needed to generate a suitable weed map early in the growing season, as well as the best moment for the UAV image acquisition, with the ultimate objective of applying site-specific weed management operations. Full article
(This article belongs to the Collection Sensors in Agriculture and Forestry)
