Search Results (31)

Search Parameters:
Authors = Arko Lucieer

Article
Considerations for Assessing Functional Forest Diversity in High-Dimensional Trait Space Derived from Drone-Based Lidar
Remote Sens. 2022, 14(17), 4287; https://doi.org/10.3390/rs14174287 - 30 Aug 2022
Viewed by 172
Abstract
Remotely sensed morphological traits have been used to assess the functional diversity of forests. This approach is potentially spatial-scale-independent. Lidar data collected from the ground or by drone at a high point density provide an opportunity to consider multiple ecologically meaningful traits at fine-scale ecological units such as individual trees. However, the high-spatial-resolution, multi-trait datasets used to calculate functional diversity can produce large data volumes that are computationally demanding to process. Functional diversity can be derived through a trait probability density (TPD) approach, but computing TPD in a high-dimensional trait space is computationally intensive. Reducing the number of dimensions through trait selection and principal component analysis (PCA) may lower the computational load, and trait selection can also help identify ecologically meaningful traits and reduce inter-trait correlation. This study investigates whether a kernel density estimator (KDE) or a one-class support vector machine (SVM) is computationally more efficient in calculating TPD. Four traits were selected as input to the TPD: canopy height, effective number of layers, plant-to-ground ratio, and box dimensions. When simulating a high-dimensional trait space, we found that TPD derived from KDE was more efficient than from SVM when the number of input traits was high. For five or more traits, applying dimension reduction techniques (e.g., PCA) is recommended. Furthermore, the kernel size for TPD must suit both the ecological target unit and the number of traits, since it determines the required number of data points within the trait space; 3–5 traits require a kernel size of at least 7 × 7 pixels. This study contributes to improving the quality of TPD calculations based on traits derived from remote sensing data, and we provide a set of recommendations based on our findings. This has the potential to improve reliability in identifying biodiversity hotspots.
(This article belongs to the Section Ecological Remote Sensing)
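The KDE route to a trait probability density can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: a Gaussian kernel density estimator over a hypothetical two-trait space (canopy height, effective number of layers), where the fixed bandwidth loosely plays the role of the kernel size discussed in the abstract. The tree trait values are invented.

```python
import math

def gaussian_kde(points, bandwidth):
    """Gaussian kernel density estimator over d-dimensional trait points."""
    d = len(points[0])
    norm = len(points) * (math.sqrt(2 * math.pi) * bandwidth) ** d
    def density(x):
        total = 0.0
        for p in points:
            sq_dist = sum((xi - pi) ** 2 for xi, pi in zip(x, p))
            total += math.exp(-sq_dist / (2 * bandwidth ** 2))
        return total / norm
    return density

# Hypothetical per-tree traits: (canopy height [m], effective number of layers)
trees = [(18.2, 2.1), (20.5, 2.4), (12.0, 1.3), (25.1, 3.0), (19.8, 2.2)]
tpd = gaussian_kde(trees, bandwidth=2.0)

# Density is higher near the observed trait cloud than far from it
print(tpd((19.0, 2.2)) > tpd((40.0, 6.0)))  # True
```

The abstract's point about dimensionality is visible here: each evaluation sums over all trees, and a grid evaluation of `density` grows exponentially with the number of traits, which is why trait selection or PCA is recommended beyond four traits.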
Article
Thermal Sensor Calibration for Unmanned Aerial Systems Using an External Heated Shutter
Drones 2021, 5(4), 119; https://doi.org/10.3390/drones5040119 - 17 Oct 2021
Cited by 1 | Viewed by 1057
Abstract
Uncooled thermal infrared sensors are increasingly being deployed on unmanned aerial systems (UAS) for agriculture, forestry, wildlife surveys, and surveillance. The acquisition of thermal data requires accurate and uniform testing of equipment to ensure precise temperature measurements. We modified an uncooled thermal infrared sensor, specifically designed for UAS remote sensing, with a proprietary external heated shutter as a calibration source. The performance of the modified thermal sensor and a standard thermal sensor (i.e., without a heated shutter) was compared under both field and temperature-modulated laboratory conditions. During laboratory trials with a blackbody source at 35 °C over a 150 min testing period, the modified and unmodified thermal sensors produced temperature ranges of 34.3–35.6 °C and 33.5–36.4 °C, respectively. A laboratory experiment also simulated flight conditions by introducing airflow over the thermal sensor at a rate of 4 m/s. With the blackbody source held at a constant 25 °C, the introduction of 2 min of airflow resulted in a 'shock cooling' event in both the modified and unmodified sensors, which oscillated between 19 and 30 °C and between −15 and 65 °C, respectively. Following the initial 'shock cooling' event, the modified and unmodified thermal sensors oscillated between 22 and 27 °C and between 5 and 45 °C, respectively. During field trials conducted over a pine plantation, the modified thermal sensor also outperformed the unmodified sensor in a side-by-side comparison. We found that a mounted heated shutter improved thermal measurements, producing more consistent, accurate temperature data for thermal mapping projects.
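The sensor comparison above reduces to simple range and worst-case-error statistics against a known blackbody setpoint. A minimal sketch; the reading values below are invented for illustration, not the paper's data:

```python
def range_and_max_error(readings, setpoint):
    """Observed temperature range and worst deviation from a blackbody setpoint."""
    lo, hi = min(readings), max(readings)
    max_err = max(abs(t - setpoint) for t in readings)
    return (lo, hi), round(max_err, 2)

# Invented example readings around a 35 °C blackbody source
modified = [34.3, 34.8, 35.1, 35.6]
unmodified = [33.5, 34.9, 35.8, 36.4]

print(range_and_max_error(modified, 35.0))    # ((34.3, 35.6), 0.7)
print(range_and_max_error(unmodified, 35.0))  # ((33.5, 36.4), 1.5)
```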

Article
A Comparison of ALS and Dense Photogrammetric Point Clouds for Individual Tree Detection in Radiata Pine Plantations
Remote Sens. 2021, 13(17), 3536; https://doi.org/10.3390/rs13173536 - 06 Sep 2021
Cited by 1 | Viewed by 1391
Abstract
Digital aerial photogrammetry (DAP) has emerged as a potentially cost-effective alternative to airborne laser scanning (ALS) for forest inventory methods that employ point cloud data. Forest inventory derived from DAP using area-based methods has been shown to achieve accuracy similar to that of ALS data. At the tree level, individual tree detection (ITD) algorithms have been developed to detect and/or delineate individual trees either from ALS point cloud data or from ALS- or DAP-based canopy height models. The application of ITD algorithms to DAP-based point clouds has not yet been examined. In this research, we evaluate the suitability of DAP-based point clouds for individual tree detection in a Pinus radiata plantation. Two ITD algorithms designed to work with point cloud data are applied to dense point clouds generated from small- and medium-format photography and to an ALS point cloud. We investigate the performance of the two ITD algorithms, the influence of stand structure on tree detection rates, and the relationship between tree detection rates and canopy structural metrics. Overall, we show good agreement between ALS- and DAP-based ITD results (the proportion of false negatives for ALS, SFP, and MFP was always lower than 29.6%, 25.3%, and 28.6%, respectively, whereas the proportion of false positives was always lower than 39.4%, 30.7%, and 33.7%, respectively). Differences between small- and medium-format DAP results were minor (for SFP and MFP, differences in recall, precision, and F-score were always less than 0.08, 0.03, and 0.05, respectively), suggesting that DAP point cloud data are robust for ITD. Among all the canopy structural metrics, the number of trees per hectare had the greatest influence on tree detection rates.
(This article belongs to the Special Issue Advances in LiDAR Remote Sensing for Forestry and Ecology)
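The false-negative and false-positive proportions above map directly onto the recall, precision, and F-score used to compare the ITD algorithms. A minimal sketch with invented counts (not the study's numbers):

```python
def detection_scores(true_pos, false_pos, false_neg):
    """Recall, precision, and F-score for individual tree detection."""
    recall = true_pos / (true_pos + false_neg)      # share of reference trees found
    precision = true_pos / (true_pos + false_pos)   # share of detections that are real trees
    f_score = 2 * precision * recall / (precision + recall)
    return round(recall, 3), round(precision, 3), round(f_score, 3)

# Invented counts for a single plot: 170 matched trees, 40 spurious
# detections (false positives), 30 missed trees (false negatives)
print(detection_scores(170, 40, 30))
```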

Review
Underwater Hyperspectral Imaging (UHI): A Review of Systems and Applications for Proximal Seafloor Ecosystem Studies
Remote Sens. 2021, 13(17), 3451; https://doi.org/10.3390/rs13173451 - 31 Aug 2021
Cited by 3 | Viewed by 2122
Abstract
Marine ecosystem monitoring requires observations of ecosystem attributes at different spatial and temporal scales that traditional sampling methods (e.g., RGB imaging, sediment cores) struggle to provide efficiently. Proximal optical sensing methods can fill this observational gap by providing observations of, and tracking changes in, the functional features of marine ecosystems non-invasively. Underwater hyperspectral imaging (UHI) employed in proximity to the seafloor has shown a further potential to monitor pigmentation in benthic and sympagic phototrophic organisms at small spatial scales (mm–cm) and for the identification of minerals and taxa through their finely resolved spectral signatures. Despite the increasing number of studies applying UHI, a review of its applications, capabilities, and challenges for seafloor ecosystem research is overdue. In this review, we first detail how the limited band availability inherent to standard underwater cameras has led to a data analysis "bottleneck" in seafloor ecosystem research, in part due to the widespread implementation of underwater imaging platforms (e.g., remotely operated vehicles, time-lapse stations, towed cameras) that can acquire large image datasets. We discuss how hyperspectral technology brings unique opportunities to address the known limitations of RGB cameras for surveying marine environments. The review concludes by comparing how different studies harness the capacities of hyperspectral imaging, the types of methods required to validate observations, and the current challenges for accurate and replicable UHI research.
(This article belongs to the Section Ocean Remote Sensing)

Article
Handheld Laser Scanning Detects Spatiotemporal Differences in the Development of Structural Traits among Species in Restoration Plantings
Remote Sens. 2021, 13(9), 1706; https://doi.org/10.3390/rs13091706 - 28 Apr 2021
Cited by 4 | Viewed by 956
Abstract
A major challenge in ecological restoration is assessing the success of restoration plantings in producing habitats that provide the desired ecosystem functions and services. Forest structural complexity and biomass accumulation are key measures used to monitor restoration success and are important factors determining animal habitat availability and carbon sequestration. Monitoring their development through time using traditional field measurements can be costly and impractical, particularly at the landscape scale, which is a common requirement in ecological restoration. We explored the application of proximal sensing technology as an alternative to traditional field surveys to capture the development of key forest structural traits in a restoration planting in the Midlands of Tasmania, Australia. We report the use of a handheld laser scanner (ZEB1) to measure annual changes in structural traits at the tree level, in a mixed-species common-garden experiment from seven to nine years after planting. Using very dense point clouds, we derived estimates of multiple structural traits, including above-ground biomass, tree height, stem diameter, crown dimensions, and crown properties. We detected annual increases in most LiDAR-derived traits, with individual crowns becoming increasingly interconnected. Time-by-species interactions were detected and were associated with differences in productivity between species. We show the potential for remote sensing technology to monitor temporal changes in forest structural traits, as well as to provide baseline measures from which to assess the restoration trajectory towards a desired state.
(This article belongs to the Special Issue Use of Remote Sensing Techniques for Wildlife Habitat Assessment)

Article
High-Resolution Estimates of Fire Severity—An Evaluation of UAS Image and LiDAR Mapping Approaches on a Sedgeland Forest Boundary in Tasmania, Australia
Fire 2021, 4(1), 14; https://doi.org/10.3390/fire4010014 - 18 Mar 2021
Cited by 9 | Viewed by 2184
Abstract
With an increase in the frequency and severity of wildfires across the globe and resultant changes to long-established fire regimes, the mapping of fire severity is a vital part of monitoring ecosystem resilience and recovery. The emergence of unoccupied aircraft systems (UAS) and compact sensors (RGB and LiDAR) provides new opportunities to map fire severity. This paper compares metrics derived from UAS Light Detection and Ranging (LiDAR) point clouds and UAS image-based products to classify fire severity. A workflow which derives novel metrics describing vegetation structure and fire severity from UAS remote sensing data is developed that fully utilises the vegetation information available in both data sources. UAS imagery and LiDAR data were captured pre- and post-fire over a 300 m by 300 m study area in Tasmania, Australia. The study area featured a vegetation gradient from sedgeland vegetation (e.g., button grass, 0.2 m) to forest (e.g., Eucalyptus obliqua and Eucalyptus globulus, 50 m). To classify the vegetation and fire severity, a comprehensive set of variables describing structural, textural and spectral characteristics was gathered using UAS image and UAS LiDAR datasets. A recursive feature elimination process was used to identify the subsets of variables to be included in random forest classifiers. The classifiers were then used to map vegetation and severity across the study area. The results indicate that UAS LiDAR provided overall accuracy similar to the UAS image and combined (UAS LiDAR and UAS image predictor values) data streams when classifying vegetation (UAS image: 80.6%; UAS LiDAR: 78.9%; and Combined: 83.1%) and severity in areas of forest (UAS image: 76.6%; UAS LiDAR: 74.5%; and Combined: 78.5%) and areas of sedgeland (UAS image: 72.4%; UAS LiDAR: 75.2%; and Combined: 76.6%). These results indicate that UAS SfM and LiDAR point clouds can be used to assess fire severity at very high spatial resolution.
(This article belongs to the Special Issue Bushfire in Tasmania)
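The recursive feature elimination step can be sketched generically: repeatedly score features, drop the weakest, and keep the desired subset. The importance function below (absolute correlation between a feature and the class label) is a simple stand-in for the random forest importances used in the paper, and the toy data are invented:

```python
def abs_correlation(xs, ys):
    """|Pearson correlation| between a feature column and the labels."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return abs(cov) / ((vx * vy) ** 0.5) if vx and vy else 0.0

def recursive_elimination(features, labels, keep):
    """Drop the least label-correlated feature until `keep` features remain.

    `features` maps name -> column of values; returns surviving names, sorted."""
    remaining = dict(features)
    while len(remaining) > keep:
        weakest = min(remaining, key=lambda k: abs_correlation(remaining[k], labels))
        del remaining[weakest]
    return sorted(remaining)

# Invented toy data: one informative variable, one noise variable
labels = [0, 0, 1, 1]
features = {"canopy_height": [1.0, 1.2, 3.1, 2.9], "noise": [0.4, 0.9, 0.3, 0.8]}
print(recursive_elimination(features, labels, keep=1))  # ['canopy_height']
```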

Article
Retrieval of Hyperspectral Information from Multispectral Data for Perennial Ryegrass Biomass Estimation
Sensors 2020, 20(24), 7192; https://doi.org/10.3390/s20247192 - 15 Dec 2020
Cited by 2 | Viewed by 1037
Abstract
The use of spectral data is seen as a fast and non-destructive method capable of monitoring pasture biomass. Although there is great potential in this technique, both end users and sensor manufacturers are uncertain about the necessary sensor specifications and the accuracies achievable in an operational scenario. This study presents a straightforward parametric method able to accurately retrieve the hyperspectral signature of perennial ryegrass (Lolium perenne) canopies from multispectral data collected within a two-year period in Australia and the Netherlands. The retrieved hyperspectral data were employed to generate optimal indices and continuum-removed spectral features available in the scientific literature. For performance comparison, both these simulated features and a set of currently employed vegetation indices, derived from the original band values, were used as inputs in a random forest algorithm, and the accuracies of both methods were compared. Our results show that both sets of features achieve similar accuracies (root mean square error (RMSE) of approximately 490 and 620 kg DM/ha when assessed in cross-validation and spatial cross-validation, respectively). These results suggest that, for pasture biomass retrieval based solely on top-of-canopy reflectance (550–790 nm), better-performing methods need not rely on hyperspectral data or on a larger number of bands than those already available in current sensors.
(This article belongs to the Special Issue Hyperspectral Remote Sensing of the Earth)
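The abstract does not specify the parametric retrieval itself; as a deliberately loose stand-in, the sketch below linearly interpolates canopy reflectance at an arbitrary wavelength from a handful of multispectral band centres. The band centres and reflectance values are invented for illustration:

```python
def interp_reflectance(band_centres, reflectances, target_nm):
    """Linearly interpolate reflectance at target_nm between known band centres."""
    pairs = sorted(zip(band_centres, reflectances))
    for (w0, r0), (w1, r1) in zip(pairs, pairs[1:]):
        if w0 <= target_nm <= w1:
            t = (target_nm - w0) / (w1 - w0)
            return r0 + t * (r1 - r0)
    raise ValueError("target wavelength outside the sensed range")

# Invented multispectral band centres (nm) and canopy reflectances
centres = [550, 660, 735, 790]
refl = [0.08, 0.05, 0.30, 0.45]
print(interp_reflectance(centres, refl, 700))
```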

Article
From Drones to Phenotype: Using UAV-LiDAR to Detect Species and Provenance Variation in Tree Productivity and Structure
Remote Sens. 2020, 12(19), 3184; https://doi.org/10.3390/rs12193184 - 29 Sep 2020
Cited by 16 | Viewed by 2124
Abstract
The use of unmanned aerial vehicles (UAVs) for remote sensing of natural environments has increased over the last decade. However, applications of this technology for high-throughput individual tree phenotyping in a quantitative genetic framework are rare. Here we demonstrate a two-phased analytical pipeline that rapidly phenotypes and filters for genetic signals in traditional and novel tree productivity and architectural traits derived from ultra-dense light detection and ranging (LiDAR) point clouds. The goal of this study was to rapidly phenotype individual trees in order to understand the genetic basis of ecologically and economically significant traits that guide the management of natural resources. Individual tree point clouds were acquired using UAV-LiDAR captured over a multi-provenance common-garden restoration field trial located in Tasmania, Australia, established using two eucalypt species (Eucalyptus pauciflora and Eucalyptus tenuiramis). Twenty-five tree productivity and architectural traits were calculated for each individual tree point cloud. The first phase of the pipeline found significant species differences in 13 of the 25 derived traits, revealing key structural differences in productivity and crown architecture between species. The second phase investigated within-species variation in the same 25 structural traits. Significant provenance variation was detected for 20 structural traits in E. pauciflora and 10 in E. tenuiramis, with signals of divergent selection found for 11 and 7 traits, respectively, putatively driven by the home-site environment shaping the observed variation. Our results highlight the genetic-based diversity within and between species for traits important for forest structure, such as crown density and structural complexity. As species and provenances are increasingly translocated across the landscape to mitigate the effects of rapid climate change, our results, achieved through rapid phenotyping with UAV-LiDAR, highlight the need to understand the functional value of productivity and architectural traits that reflect species and provenance differences in crown structure, and the interplay they have with dependent biotic communities.
(This article belongs to the Special Issue Feature Paper Special Issue on Forest Remote Sensing)
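Many LiDAR-derived structural traits reduce to simple statistics over each tree's point cloud. A minimal sketch for two common height traits (the 25 traits in the study are not enumerated in the abstract, so the trait choices and the point cloud below are illustrative inventions):

```python
def percentile(values, p):
    """Nearest-rank percentile of a list (p in 0-100)."""
    s = sorted(values)
    rank = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[rank]

def tree_traits(points):
    """Height traits from one tree's (x, y, z) point cloud."""
    zs = [z for _, _, z in points]
    return {"max_height": max(zs), "p90_height": percentile(zs, 90)}

# Invented point cloud for a single tree crown (heights in metres)
cloud = [(0.0, 0.0, z) for z in [0.1, 2.0, 5.5, 8.3, 9.9, 10.2, 10.8]]
print(tree_traits(cloud))
```

In a phenotyping pipeline such traits would be computed per segmented tree and then passed to the quantitative genetic analysis.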

Article
Retrieval of Crude Protein in Perennial Ryegrass Using Spectral Data at the Canopy Level
Remote Sens. 2020, 12(18), 2958; https://doi.org/10.3390/rs12182958 - 11 Sep 2020
Cited by 2 | Viewed by 1749
Abstract
Crude protein content is an important parameter for perennial ryegrass (Lolium perenne) management. This study aims to establish an effective and affordable approach for non-destructive, near-real-time crude protein retrieval based solely on top-of-canopy reflectance. The study contrasts different spectral ranges while selecting a minimal number of bands and analyzing the achievable accuracies for crude protein expressed as a dry matter fraction or on a weight-per-area basis. In addition, the models' prediction performance in known and new locations is compared. The dataset comprises 266 full-range (350–2500 nm) proximal spectral measurements and corresponding ground-truth observations collected in Australia and the Netherlands from May to November 2018. An exhaustive search (based on a genetic algorithm) successfully selected band subsets within different regions and across the full spectral range, minimizing both the number of bands and an error metric. For field conditions, our results indicate that the best approach for crude protein estimation relies on the visible to near-infrared range (400–1100 nm). Within this range, eleven sparse broad bands (of 10 nm bandwidth) perform better than or on par with previous studies that used a larger number of narrower bands. Additionally, when using top-of-canopy reflectance, the highest accuracy is achievable when estimating crude protein on a weight-per-area basis (RMSEP of 80 kg ha⁻¹). These models can be employed in new, unseen locations with only a minor decrease in accuracy (RMSEP of 85.5 kg ha⁻¹). Crude protein as a dry matter fraction presents a bottom-line accuracy (RMSEP) of 2.5–3.0 percent dry matter in optimal models (requiring ten bands). However, these models display low explanatory power for the observed variability (R² < 0.5), rendering them suitable only for qualitative grading.
(This article belongs to the Special Issue Advances of Remote Sensing in Pasture Management)
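A genetic-algorithm search is beyond a short sketch, but for a small number of candidate bands the same idea can be shown with an exhaustive search over band subsets, scoring each subset by a simple leave-one-out nearest-neighbour error. Everything below (the data, the scoring rule) is an invented toy, not the paper's pipeline:

```python
from itertools import combinations

def loo_nn_error(X, y, bands):
    """Leave-one-out 1-NN prediction error using only the given band indices."""
    err = 0.0
    for i, (xi, yi) in enumerate(zip(X, y)):
        best_j = min((j for j in range(len(X)) if j != i),
                     key=lambda j: sum((X[j][b] - xi[b]) ** 2 for b in bands))
        err += abs(y[best_j] - yi)
    return err / len(X)

def best_band_subset(X, y, k):
    """Exhaustive search (a GA stand-in) for the k-band subset with lowest error."""
    n_bands = len(X[0])
    return min(combinations(range(n_bands), k),
               key=lambda bands: loo_nn_error(X, y, bands))

# Invented samples: 3 reflectance bands per sample; band 0 tracks crude protein
X = [[0.1, 0.5, 0.9], [0.2, 0.1, 0.4], [0.8, 0.6, 0.2], [0.9, 0.2, 0.7]]
y = [10, 12, 30, 31]
print(best_band_subset(X, y, 1))  # (0,)
```

Exhaustive search scales combinatorially with the number of bands, which is why the paper uses a genetic algorithm over the full 350–2500 nm range.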

Article
A Calibration Procedure for Field and UAV-Based Uncooled Thermal Infrared Instruments
Sensors 2020, 20(11), 3316; https://doi.org/10.3390/s20113316 - 10 Jun 2020
Cited by 26 | Viewed by 2930
Abstract
Thermal infrared cameras provide unique information on surface temperature that can benefit a range of environmental, industrial and agricultural applications. However, the use of uncooled thermal cameras for field and unmanned aerial vehicle (UAV) based data collection is often hampered by vignette effects, sensor drift, ambient temperature influences and measurement bias. Here, we develop and apply an ambient temperature-dependent radiometric calibration function that is evaluated against three thermal infrared sensors (Apogee SI-11 (Apogee Electronics, Santa Monica, CA, USA), FLIR A655sc (FLIR Systems, Wilsonville, OR, USA), TeAx 640 (TeAx Technology, Wilnsdorf, Germany)). Upon calibration, all systems demonstrated significant improvement in measured surface temperatures when compared against a temperature-modulated blackbody target. The laboratory calibration process used a series of calibrated resistance temperature detectors to measure the temperature of a blackbody at different ambient temperatures to derive calibration equations for the thermal data acquired by the three sensors. As a point-collecting device, the Apogee sensor was corrected for sensor bias and ambient temperature influences. For the 2D thermal cameras, each pixel was calibrated independently, with results showing that measurement bias and vignette effects were greatly reduced for the FLIR A655sc (from a root mean squared error (RMSE) of 6.219 to 0.815 °C) and TeAx 640 (from an RMSE of 3.438 to 1.013 °C) cameras. This relatively straightforward approach to the radiometric calibration of thermal infrared sensors can enable more accurate surface temperature retrievals to support field and UAV-based data collection efforts.
(This article belongs to the Section Physical Sensors)
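In its simplest form, per-pixel calibration means fitting a linear correction for each pixel against known blackbody temperatures; the ambient-temperature dependence of the paper's actual function is omitted here, and the readings are invented:

```python
def fit_linear(raw, true):
    """Ordinary least squares for true ~ gain * raw + offset."""
    n = len(raw)
    mx, my = sum(raw) / n, sum(true) / n
    gain = (sum((x - mx) * (y - my) for x, y in zip(raw, true))
            / sum((x - mx) ** 2 for x in raw))
    return gain, my - gain * mx

def rmse(pred, true):
    return (sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)) ** 0.5

# Invented single-pixel readings against blackbody setpoints (degrees C)
blackbody = [15.0, 25.0, 35.0, 45.0]
raw = [11.0, 22.5, 34.0, 45.5]   # biased reading with a wrong gain
gain, offset = fit_linear(raw, blackbody)
corrected = [gain * r + offset for r in raw]
print(round(rmse(raw, blackbody), 3), round(rmse(corrected, blackbody), 3))
```

Repeating this fit pixel by pixel is what removes the vignette effect, since each pixel gets its own gain and offset.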

Article
Automated Georectification and Mosaicking of UAV-Based Hyperspectral Imagery from Push-Broom Sensors
Remote Sens. 2020, 12(1), 34; https://doi.org/10.3390/rs12010034 - 20 Dec 2019
Cited by 20 | Viewed by 2640
Abstract
Hyperspectral systems integrated on unmanned aerial vehicles (UAV) provide unique opportunities to conduct high-resolution multitemporal spectral analysis for diverse applications. However, additional time-consuming rectification efforts in postprocessing are routinely required, since geometric distortions can be introduced by UAV movements during flight, even if navigation/motion sensors are used to track the position of each scan. Part of the challenge in obtaining high-quality imagery relates to the lack of a fast processing workflow that can retrieve geometrically accurate mosaics while optimizing the ground data collection efforts. To address this problem, we explored a computationally robust automated georectification and mosaicking methodology. It operates effectively in a parallel computing environment and is evaluated against a number of high-spatial-resolution datasets (mm to cm resolution) collected using a push-broom sensor and an associated RGB frame-based camera. The methodology estimates the luminance of the hyperspectral swaths and coregisters these against a luminance RGB-based orthophoto. The procedure includes an improved coregistration strategy that integrates the Speeded-Up Robust Features (SURF) algorithm with the Maximum Likelihood Estimator Sample Consensus (MLESAC) approach. SURF identifies common features between each swath and the RGB orthomosaic, while MLESAC fits the best geometric transformation model to the retrieved matches. Individual scanlines are then geometrically transformed and merged into a single spatially continuous mosaic, reaching high positional accuracy with only a small number of ground control points (GCPs). The capacity of the workflow to achieve high spatial accuracy was demonstrated using statistical metrics such as RMSE, MAE, and the relative positional accuracy at the 95% confidence level. Comparison against a user-generated georectification demonstrates that the automated approach speeds up the coregistration process by 85%.
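The SURF + MLESAC step pairs feature matching with robust model fitting. As a much-simplified stand-in, the sketch below robustly estimates a pure 2D translation between matched feature points in the presence of an outlier match, RANSAC-style (MLESAC differs mainly in how hypotheses are scored, and a real swath needs an affine or projective model). All coordinates are invented:

```python
def robust_translation(matches, tol=1.0):
    """Estimate (dx, dy) mapping swath points onto orthomosaic points.

    Each match proposes a translation; keep the proposal with most inliers,
    then refine it as the mean translation over its inlier set."""
    best_inliers = []
    for (sx, sy), (ox, oy) in matches:
        dx, dy = ox - sx, oy - sy
        inliers = [m for m in matches
                   if abs((m[1][0] - m[0][0]) - dx) < tol
                   and abs((m[1][1] - m[0][1]) - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    dxs = [ox - sx for (sx, _), (ox, _) in best_inliers]
    dys = [oy - sy for (_, sy), (_, oy) in best_inliers]
    return sum(dxs) / len(dxs), sum(dys) / len(dys)

# Invented matches: true shift (5, -3) plus one gross outlier match
matches = [((0, 0), (5, -3)), ((2, 1), (7, -2)), ((4, 4), (9, 1)),
           ((1, 3), (40, 12))]
print(robust_translation(matches))  # (5.0, -3.0)
```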

Article
An Under-Ice Hyperspectral and RGB Imaging System to Capture Fine-Scale Biophysical Properties of Sea Ice
Remote Sens. 2019, 11(23), 2860; https://doi.org/10.3390/rs11232860 - 02 Dec 2019
Cited by 9 | Viewed by 2255
Abstract
Sea-ice biophysical properties are characterized by high spatio-temporal variability ranging from the meso- to the millimeter scale. Ice coring is a common yet coarse point sampling technique that struggles to capture such variability in a non-invasive manner. This hinders quantification and understanding of ice algae biomass patchiness and its complex interaction with some of its physical sea-ice drivers. In response to these limitations, a novel under-ice sled system was designed to capture proxies of biomass together with 3D models of the bottom topography of land-fast sea ice. This system couples a pushbroom hyperspectral imaging (HI) sensor with a standard digital RGB camera and was trialed at Cape Evans, Antarctica. HI aims to quantify per-pixel chlorophyll-a content and other ice algae biological properties at the ice-water interface based on light transmitted through the ice. RGB imagery processed with digital photogrammetry aims to capture under-ice structure and topography. Results from a 20 m transect capturing a 0.61 m wide swath at sub-mm spatial resolution are presented. We outline the technical and logistical approach taken and provide recommendations for future deployments and developments of similar systems. A preliminary transect subsample was processed using both established and novel under-ice bio-optical indices (e.g., normalized difference indices and the area normalized by the maximal band depth) and explorative analyses (e.g., principal component analyses) to establish proxies of algal biomass. This first under-ice deployment of HI and digital photogrammetry provides a proof of concept for a novel methodology capable of delivering non-invasive and highly resolved estimates of ice algal biomass in situ, together with some of its environmental drivers. Nonetheless, various challenges and limitations remain before our method can be adopted across a range of sea-ice conditions. Our work concludes with suggested solutions to these challenges and proposes further method and system developments for future research.
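The normalized difference indices mentioned above follow the familiar two-band form; which wavelength pair best tracks chlorophyll-a is precisely what such studies tune. The band values below are invented:

```python
def ndi(r_a, r_b):
    """Normalized difference index between two band values (reflectance or
    transmittance); ranges from -1 to 1."""
    return (r_a - r_b) / (r_a + r_b)

# Invented under-ice transmittances at two hypothetical wavelengths
print(round(ndi(0.30, 0.10), 3))  # 0.5
```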

Article
A Robust Rule-Based Ensemble Framework Using Mean-Shift Segmentation for Hyperspectral Image Classification
Remote Sens. 2019, 11(17), 2057; https://doi.org/10.3390/rs11172057 - 01 Sep 2019
Cited by 7 | Viewed by 1855
Abstract
This paper assesses the performance of DoTRules—a dictionary of trusted rules—as a supervised rule-based ensemble framework based on mean-shift segmentation for hyperspectral image classification. The proposed ensemble framework consists of multiple rule sets with rules constructed based on different class frequencies and sequences of occurrences. Shannon entropy was derived to assess the uncertainty of every rule and to subsequently filter out unreliable rules. DoTRules is not only a transparent approach to image classification but also a tool to map rule uncertainty, where rule uncertainty assessment can serve as an estimate of classification accuracy prior to image classification. In this research, the proposed classification framework is implemented on three benchmark hyperspectral image datasets. We found that the overall classification accuracy of the proposed ensemble framework was superior to state-of-the-art ensemble algorithms, as well as two non-ensemble algorithms, at multiple training sample sizes. We believe DoTRules can be applied more generally to the classification of discrete data such as hyperspectral satellite imagery products.
(This article belongs to the Special Issue Image Segmentation for Environmental Monitoring)
Article
Leveraging Machine Learning to Extend Ontology-Driven Geographic Object-Based Image Analysis (O-GEOBIA): A Case Study in Forest-Type Mapping
Remote Sens. 2019, 11(5), 503; https://doi.org/10.3390/rs11050503 - 01 Mar 2019
Cited by 18 | Viewed by 2708
Abstract
Ontology-driven Geographic Object-Based Image Analysis (O-GEOBIA) contributes to the identification of meaningful objects. In fusing data from multiple sensors, the number of feature variables increases and object identification becomes a challenging task. We propose a methodological contribution that extends feature variable characterisation. This method is illustrated with a case study in forest-type mapping in Tasmania, Australia. Satellite images, airborne LiDAR (Light Detection and Ranging) and expert photo-interpretation data are fused for feature extraction and classification. Two machine learning algorithms, Random Forest and Boruta, are used to identify important and relevant feature variables. A variogram is used to describe textural and spatial features. Different variogram features are used as input for rule-based classifications. The rule-based classifications employ (i) spectral features, (ii) vegetation indices, (iii) LiDAR features, and (iv) variogram features, yielding overall classification accuracies of 77.06%, 78.90%, 73.39% and 77.06%, respectively. Following data fusion, the use of combined feature variables resulted in a higher classification accuracy (81.65%). Using relevant features selected by the Boruta algorithm, the classification accuracy improved further (82.57%). The results demonstrate that the use of relevant variogram features together with spectral and LiDAR features improved classification accuracy.
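An empirical variogram of the kind used above as a texture feature can be sketched as follows. This is a simple one-directional estimator over integer pixel lags; the study's actual variogram features (and any directional or per-segment variants) are not specified here, so the details below are assumptions:

```python
import numpy as np

def empirical_variogram(image, max_lag=5):
    """Semivariance gamma(h) along image rows for lags h = 1..max_lag.
    gamma(h) = 0.5 * mean((z(x) - z(x + h))^2) over all row-wise pairs."""
    img = image.astype(float)
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = img[:, h:] - img[:, :-h]
        gammas.append(0.5 * np.mean(diffs ** 2))
    return np.array(gammas)

rng = np.random.default_rng(0)
# Spatially correlated field: semivariance grows with lag.
smooth = np.cumsum(rng.normal(size=(20, 50)), axis=1)
# White noise: semivariance is roughly flat near the pixel variance.
noise = rng.normal(size=(20, 50))
g_smooth = empirical_variogram(smooth, 3)
g_noise = empirical_variogram(noise, 3)
print(g_smooth, g_noise)
```

Summaries of this curve (e.g. its range and sill) can then be fed to a rule-based classifier alongside spectral bands and vegetation indices, which is the role variogram features play in the workflow described above.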
(This article belongs to the Special Issue Advances in Remote Sensing of Forest Structure and Applications)
Article
Uncertainty Assessment of Hyperspectral Image Classification: Deep Learning vs. Random Forest
Entropy 2019, 21(1), 78; https://doi.org/10.3390/e21010078 - 16 Jan 2019
Cited by 22 | Viewed by 3591
Abstract
Uncertainty assessment techniques have been extensively applied as an estimate of accuracy to compensate for weaknesses of traditional approaches. Traditional approaches to mapping accuracy assessment are based on a confusion matrix, and hence are not only dependent on the availability of test data but also incapable of capturing the spatial variation in classification error. Here, we apply and compare two uncertainty assessment techniques that do not rely on test data availability and enable the spatial characterisation of classification accuracy before the validation phase, promoting the assessment of error propagation within the classified imagery products. We compared the performance of an emerging deep neural network (DNN) with the popular random forest (RF) technique. Uncertainty assessment was implemented by calculating the Shannon entropy of class probabilities predicted by DNN and RF for every pixel. The classification uncertainties of DNN and RF were quantified for two different hyperspectral image datasets—Salinas and Indian Pines. We then compared the uncertainty against the classification accuracy of the techniques, represented by a modified root mean square error (RMSE). The results indicate that, across modified RMSE values for various sample sizes of both datasets, the entropy derived from the DNN is a better estimate of classification accuracy and hence provides a superior uncertainty estimate at the pixel level.
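The per-pixel entropy computation described above can be sketched directly from predicted class probabilities, whether they come from a DNN softmax or an RF `predict_proba` reshaped to the image grid. The toy probabilities below are illustrative, not from either dataset:

```python
import numpy as np

def pixelwise_entropy(probs):
    """Shannon entropy (bits) of predicted class probabilities per pixel.
    probs: array (rows, cols, n_classes) with probabilities summing to 1
    along the last axis."""
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -np.sum(p * np.log2(p), axis=-1)

# 1x2 "image", 4 classes: one confident pixel, one maximally uncertain one.
probs = np.array([[[0.97, 0.01, 0.01, 0.01],
                   [0.25, 0.25, 0.25, 0.25]]])
ent = pixelwise_entropy(probs)
print(ent)  # low where the classifier is confident, 2 bits when uniform
```

Mapping this quantity over a whole scene gives the spatial characterisation of uncertainty that the confusion-matrix approach cannot provide, since it needs no held-out test pixels.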
(This article belongs to the Special Issue Entropy in Image Analysis)