Special Issue "Frontiers in Spectral Imaging and 3D Technologies for Geospatial Solutions"

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (31 December 2018).

Special Issue Editors

Dr. Eija Honkavaara
Guest Editor
Finnish Geospatial Research Institute FGI, National Land Survey of Finland, Geodeetinrinne 2, Masala, FI-02430, Finland
Interests: photogrammetry; hyperspectral imaging; UAV; calibration; SLAM; machine learning
Dr. Konstantinos Karantzalos
Guest Editor
Associate Professor, Remote Sensing Laboratory, National Technical University of Athens, 15780, Greece
Interests: hyperspectral imaging; UAVs; earth observation; data fusion; machine learning; computer vision; crop type classification; precision agriculture
Dr. Erica Nocerino
Guest Editor
Bruno Kessler Foundation, Via Santa Croce, 77, 38122 Trento, Italy
Interests: photogrammetry; laser scanning; calibration; 3D modelling; indoor and outdoor mapping; image processing
Dr. Ilkka Pölönen
Guest Editor
Faculty of Information Technology, University of Jyväskylä, P.O. Box 35, FIN-40014 Jyväskylä, Finland (Building: Agora, Room C411.2)
Interests: spectral imaging; data analysis; manifold learning; mathematical modelling
Dr. Petri Rönnholm
Guest Editor
Department of Built Environment, Aalto University, PO BOX 14100, 00076 AALTO, Finland
Interests: laser scanning; photogrammetry; mobile mapping; registration; data integration; digital image processing

Special Issue Information

Dear Colleagues,

Spectral imaging and 3D sensor technologies have developed explosively in recent years for a variety of geospatial applications, including, but not limited to, agriculture (precision farming, vegetation index and water supply mapping), forestry (forest inventory, forest health monitoring), mapping, environmental monitoring (surface geology surveys, pollutant and hazardous substance mapping), and industry (large industrial plant monitoring). New cutting-edge hardware and software solutions are emerging, and the integration of multi-modal information is enabling increasingly accurate, automated and fast remote sensing.

The main ambition of this Special Issue is to promote discussion on new developments in the combined use of spectral and 3D remote sensing technologies, comprising sensing technologies, thematic information extraction, and geospatial solutions. It aims to bring together research presented at the first ISPRS SPEC3D workshop "Frontiers in Spectral Imaging and 3D Technologies for Geospatial Solutions", organized in Finland on 25–27 October 2017, as well as high-quality scientific contributions from the global research community. Researchers and experts are kindly encouraged to submit innovative papers related to the integrated use of spectral and 3D technologies, focusing on, but not limited to, the following topics:

1. New aspects of sensors, systems and calibration: 3D spectral information capture using spectral imaging, LIDAR, micro-LIDAR and -RADAR, low-cost 3D and spectral sensors, emerging platforms (aerial, UAV, robotic, mobile, portable, etc.), and geometric and radiometric sensor and system integration and calibration.

2. Processing and interpretation requirements for novel spectroscopic and 3D data, including georeferencing, radiometric calibration, multisource data fusion, video data analysis, time-series and change detection, context awareness, big data, and crowd sourcing, as well as aspects of automation, fast response and real-time processing.

3. Geospatial solutions utilizing the new sensors and data in indoor and outdoor applications, such as mapping and monitoring in natural and built environments, forestry and agriculture, biodiversity, industrial and civil applications, robotics, and virtual and augmented reality.

Dr. Eija Honkavaara,
Dr. Konstantinos Karantzalos,
Dr. Xinlian Liang,
Dr. Erica Nocerino,
Dr. Ilkka Pölönen,
Dr. Petri Rönnholm
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and using the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Imaging technologies for 3D hyper- and multispectral data capture
  • Multi- and hyperspectral LiDAR
  • Low-cost 3D and spectral sensors
  • Sensor integration in spectral and 3D systems
  • Radiometric and geometric calibration, system calibration
  • Emerging platforms for 3D spectral data capture: UAV, backpack, trolley, handheld, industrial, etc.
  • Georeferencing, radiometric correction, registration, integration
  • Real-time processing, robotics
  • Point cloud processing integrating spectral and 3D data
  • Machine learning and classification with 3D and spectral features
  • Hyperspectral dimensionality reduction, unmixing, source separation, endmember extraction
  • Noise estimation and reduction
  • Hyper- and multispectral multitemporal and video data analysis
  • Utilizing integrated 3D, spectral and multitemporal features in geospatial solutions, such as agriculture, forestry, mining, biodiversity, indoor and outdoor built environments mapping, and engineering applications
  • Benchmarking
  • Thematic information extraction

Published Papers (8 papers)


Editorial


Open Access Editorial
Editorial for the Special Issue “Frontiers in Spectral Imaging and 3D Technologies for Geospatial Solutions”
Remote Sens. 2019, 11(14), 1714; https://doi.org/10.3390/rs11141714 - 19 Jul 2019
Abstract
This Special Issue hosts papers on the integrated use of spectral imaging and 3D technologies in remote sensing, including novel sensors, evolving machine learning technologies for data analysis, and the utilization of these technologies in a variety of geospatial applications. The presented studies showed improved results when multimodal data were used in object analysis.

Research


Open Access Article
Filtering Airborne LiDAR Data Through Complementary Cloth Simulation and Progressive TIN Densification Filters
Remote Sens. 2019, 11(9), 1037; https://doi.org/10.3390/rs11091037 - 01 May 2019
Cited by 7
Abstract
Separating point clouds into ground and non-ground points is a preliminary and essential step in various applications of airborne light detection and ranging (LiDAR) data, and many filtering algorithms have been proposed to automatically filter ground points. Among them, the progressive triangulated irregular network (TIN) densification filtering (PTDF) algorithm is widely employed due to its robustness and effectiveness. However, its performance usually depends on a detailed initial terrain and on careful parameter tuning to cope with various terrains. Consequently, many approaches have been proposed to provide as detailed an initial terrain as possible; most of them, however, require many user-defined parameters that are difficult for users to determine. Recently, the cloth simulation filtering (CSF) algorithm has drawn attention because its parameters are few and easy to set. CSF can obtain a fine initial terrain, which in turn provides a good foundation for estimating the parameter thresholds of progressive TIN densification (PTD). However, it easily causes misclassification when further refining the initial terrain. To exploit the complementary advantages of CSF and PTDF, a novel filtering algorithm that combines cloth simulation (CS) and PTD is proposed in this study. In the proposed algorithm, a high-quality initial provisional digital terrain model (DTM) is obtained by CS, and the parameter thresholds of PTD are estimated from this provisional DTM based on statistical analysis. Finally, PTD with adaptive parameter thresholds is used to refine the provisional DTM. These implementation details yield enhanced accuracy and resilience to parameter tuning. The experimental results indicate that the proposed algorithm outperforms its direct predecessors. Furthermore, compared with previously published improved PTDF algorithms, it is superior not only in accuracy but also in practicality: it is both highly accurate and easy to use.
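The core PTD step described above, building a TIN over accepted ground points and admitting candidates that lie close to it, can be illustrated with a minimal sketch. This is not the authors' implementation: the vertical-distance threshold `dz_max` and the use of `scipy` interpolation are illustrative assumptions, and the full algorithm also tests iteration angles to TIN vertices and repeats until convergence.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.spatial import Delaunay

def densify_tin(seed_points, candidates, dz_max=0.3):
    """One simplified PTD iteration over (n, 3) xyz arrays."""
    # Build a TIN over the current ground seeds (x, y -> z).
    tin = Delaunay(seed_points[:, :2])
    interp = LinearNDInterpolator(tin, seed_points[:, 2])
    # Vertical distance of each candidate to the TIN surface.
    dz = np.abs(candidates[:, 2] - interp(candidates[:, :2]))
    # Candidates outside the TIN's convex hull interpolate to NaN -> rejected.
    keep = np.nan_to_num(dz, nan=np.inf) < dz_max
    return np.vstack([seed_points, candidates[keep]])
```

With four flat seed points, a near-ground candidate is accepted while an elevated (e.g., canopy) point is rejected; in the paper's scheme, `dz_max` would be estimated statistically from the CS-derived provisional DTM rather than fixed.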

Open Access Article
A Novel Object-Based Deep Learning Framework for Semantic Segmentation of Very High-Resolution Remote Sensing Data: Comparison with Convolutional and Fully Convolutional Networks
Remote Sens. 2019, 11(6), 684; https://doi.org/10.3390/rs11060684 - 21 Mar 2019
Cited by 8
Abstract
Deep learning architectures have received much attention in recent years, demonstrating state-of-the-art performance in several segmentation, classification and other computer vision tasks. Most of these deep networks are based on either convolutional or fully convolutional architectures. In this paper, we propose a novel object-based deep-learning framework for semantic segmentation of very high-resolution satellite data. In particular, we exploit object-based priors integrated into a fully convolutional neural network by incorporating an anisotropic diffusion data preprocessing step and an additional loss term during training. Under this constrained framework, the goal is to enforce that pixels belonging to the same object are classified into the same semantic category. We compared the novel object-based framework thoroughly with the currently dominant convolutional and fully convolutional deep networks. In particular, numerous experiments were conducted on the publicly available ISPRS WGII/4 benchmark datasets, namely Vaihingen and Potsdam, for validation and inter-comparison based on a variety of metrics. Quantitatively, the experimental results indicate that, overall, the proposed object-based framework outperformed the current state-of-the-art fully convolutional networks by more than 1% in overall accuracy, while intersection-over-union results improved for all semantic categories. Qualitatively, man-made classes with stricter geometry, such as buildings, benefited most from our method, especially along object boundaries, highlighting the great potential of the developed approach.
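The anisotropic diffusion preprocessing mentioned in the abstract is, in its classic Perona-Malik form, an edge-preserving smoother: it diffuses within homogeneous objects while the conduction coefficient suppresses smoothing across strong gradients. A minimal single-band sketch follows; the parameter values and the wrap-around boundary handling via `np.roll` are simplifying assumptions, not the paper's exact setup.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=0.1, step=0.2):
    """Perona-Malik diffusion on a 2D array (edge-preserving smoothing)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences to the four neighbours (periodic boundaries via roll).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Conduction g(|grad u|) = exp(-(|grad u| / kappa)^2): near 1 in flat
        # regions (strong smoothing), near 0 across edges (preserved).
        u += step * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u
```

In a multi-band setting the same operator would be applied per band (or driven by a joint gradient magnitude) before feeding the imagery to the network.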

Open Access Article
Fusing Multimodal Video Data for Detecting Moving Objects/Targets in Challenging Indoor and Outdoor Scenes
Remote Sens. 2019, 11(4), 446; https://doi.org/10.3390/rs11040446 - 21 Feb 2019
Cited by 2
Abstract
Single-sensor systems and standard optical video cameras (usually RGB CCTV) fail to provide adequate observations, or the amount of spectral information required to build rich, expressive, discriminative features, for object detection and tracking in challenging outdoor and indoor scenes under various environmental and illumination conditions. To this end, we have designed a multisensor system based on thermal, shortwave infrared, and hyperspectral video sensors and propose a processing pipeline able to perform object detection in real time despite the huge volume of concurrently acquired video streams. In particular, to avoid the computationally intensive coregistration of the hyperspectral data with the other imaging modalities, the initially detected targets are projected through a local coordinate system onto the hypercube image plane. For object detection, a detector-agnostic procedure has been developed, integrating both unsupervised (background subtraction) and supervised (deep convolutional neural network) techniques for validation purposes. The detected and verified targets are extracted through fusion and data-association steps based on temporal spectral signatures of both target and background. The quite promising experimental results in challenging indoor and outdoor scenes indicated the robust and efficient performance of the developed methodology under different conditions such as fog, smoke, and illumination changes.
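Projecting detections onto the hypercube image plane instead of coregistering full frames can be sketched as mapping bounding-box corners through a planar homography. The 3x3 matrix `h` is assumed to come from a one-off calibration; this is an illustration of the general idea, not the authors' pipeline.

```python
import numpy as np

def project_box(h, box):
    """Map an axis-aligned box (x0, y0, x1, y1) from one image plane to
    another via a 3x3 homography, returning the projected bounding box."""
    x0, y0, x1, y1 = box
    corners = np.array([[x0, y0], [x1, y0], [x1, y1], [x0, y1]], float)
    # Apply h in homogeneous coordinates, then dehomogenize.
    pts = np.hstack([corners, np.ones((4, 1))]) @ h.T
    pts = pts[:, :2] / pts[:, 2:3]
    # Re-box the warped quadrilateral on the target plane.
    u0, v0 = pts.min(axis=0)
    u1, v1 = pts.max(axis=0)
    return u0, v0, u1, v1
```

Per box this costs four matrix-vector products, which is what makes the approach attractive against per-pixel coregistration of a hyperspectral stream.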

Open Access Article
A Novel Approach for the Detection of Standing Tree Stems from Plot-Level Terrestrial Laser Scanning Data
Remote Sens. 2019, 11(2), 211; https://doi.org/10.3390/rs11020211 - 21 Jan 2019
Cited by 19
Abstract
Tree stem detection is a key step toward retrieving detailed stem attributes from terrestrial laser scanning (TLS) data. Various point-based methods have been proposed for stem point extraction at both the individual tree and plot levels. The main limitation of point-based methods is their high computing demand when dealing with plot-level TLS data. Although segment-based methods can reduce the computational burden and the uncertainties of point cloud classification, their application has largely been limited to urban scenes due to the complexity of the algorithms and the conditions of natural forests. Here we propose a novel and simple segment-based method for efficient stem detection at the plot level, based on the curvature feature of the points and connected-component segmentation. We tested our method using a public TLS dataset with six forest plots that were collected for the international TLS benchmarking project in Evo, Finland. Results showed that the mean accuracies of stem point extraction were comparable to state-of-the-art methods (>95%). The accuracies of the stem mappings were also comparable to the methods tested in the international TLS benchmarking project. Additionally, our method was applicable to a wide range of stem forms. In short, the proposed method is accurate and simple; it is a sensible solution for the stem detection of standing trees using TLS data.
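A common way to obtain a per-point curvature feature for this kind of segmentation is the "surface variation" of a local neighbourhood: the smallest-eigenvalue share of the local covariance matrix, which is near zero on smooth surfaces such as stems and larger in scattered foliage. The sketch below is an illustrative assumption of how such a feature could be computed, not the paper's exact formulation; the neighbourhood size `k` is arbitrary.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=10):
    """Per-point curvature proxy lam_min / (lam_1 + lam_2 + lam_3) from the
    covariance of each point's k nearest neighbours in an (n, 3) cloud."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)          # (n, k) neighbour indices
    out = np.empty(len(points))
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)            # 3x3 local covariance
        w = np.linalg.eigvalsh(cov)           # eigenvalues, ascending
        out[i] = w[0] / w.sum()
    return out
```

Thresholding this feature and then running connected-component segmentation on the low-curvature points is one plausible reading of the pipeline described above.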

Open Access Article
Estimating Biomass and Nitrogen Amount of Barley and Grass Using UAV and Aircraft Based Spectral and Photogrammetric 3D Features
Remote Sens. 2018, 10(7), 1082; https://doi.org/10.3390/rs10071082 - 07 Jul 2018
Cited by 31
Abstract
The timely estimation of crop biomass and nitrogen content is a crucial step in various precision agriculture tasks, for example in fertilization optimization. Remote sensing using drones and aircraft offers a feasible tool to carry out this task. Our objective was to develop and assess a methodology for crop biomass and nitrogen estimation, integrating spectral and 3D features that can be extracted using airborne miniaturized multispectral, hyperspectral and colour (RGB) cameras. We used Random Forest (RF) as the estimator, and in addition Simple Linear Regression (SLR) was used to validate the consistency of the RF results. The method was assessed with empirical datasets captured over a barley field and a grass silage trial site using a hyperspectral camera based on the Fabry-Pérot interferometer (FPI) and a regular RGB camera onboard a drone and an aircraft. Agricultural reference measurements included fresh yield (FY), dry matter yield (DMY) and the amount of nitrogen. In DMY estimation of barley, the Pearson Correlation Coefficient (PCC) and the normalized Root Mean Square Error (RMSE%) were at best 0.95 and 33.2%, respectively; in grass DMY estimation, the best results were 0.79 and 1.9%, respectively. In the nitrogen amount estimations of barley, the PCC and RMSE% were at best 0.97 and 21.6%, respectively. In biomass estimation, the best results were obtained when integrating hyperspectral and 3D features, but the integration of RGB images and 3D features also provided results that were almost as good. In nitrogen content estimation, the hyperspectral camera gave the best results. We concluded that integrating spectral and high-spatial-resolution 3D features together with radiometric calibration was necessary to optimize the accuracy.
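For reference, the PCC and RMSE% figures quoted above are conventionally computed as below; note that PCC is a unitless coefficient (not a percentage), and normalizing the RMSE by the mean of the reference measurements is an assumption about the paper's convention.

```python
import numpy as np

def pcc_and_rmse_pct(y_true, y_pred):
    """Pearson correlation coefficient and RMSE as a percentage of the
    mean reference value, for 1D arrays of measurements and estimates."""
    pcc = np.corrcoef(y_true, y_pred)[0, 1]
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return pcc, 100.0 * rmse / np.mean(y_true)
```

A perfect estimator gives PCC = 1 and RMSE% = 0; the two metrics are complementary, since a biased but well-correlated estimator can score high PCC with large RMSE%.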

Open Access Article
Assessment of Classifiers and Remote Sensing Features of Hyperspectral Imagery and Stereo-Photogrammetric Point Clouds for Recognition of Tree Species in a Forest Area of High Species Diversity
Remote Sens. 2018, 10(5), 714; https://doi.org/10.3390/rs10050714 - 05 May 2018
Cited by 17
Abstract
Recognition of tree species and geospatial information on tree species composition are essential for forest management. In this study, tree species recognition was examined using hyperspectral imagery from visible to near-infrared (VNIR) and short-wave infrared (SWIR) camera sensors in combination with a 3D photogrammetric canopy surface model based on RGB camera stereo-imagery. An arboretum with a diverse selection of 26 tree species from 14 genera was used as a test area. Aerial hyperspectral imagery and high-spatial-resolution photogrammetric color imagery were acquired from the test area using unmanned aerial vehicle (UAV) borne sensors. The hyperspectral imagery was processed to calibrated reflectance mosaics and was tested along with mosaics based on original image digital number (DN) values. Two alternative classifiers, a k-nearest neighbor (k-nn) method combined with a genetic algorithm, and a random forest method, were tested for predicting the tree species and genus, as well as for selecting an optimal set of remote sensing features for this task. The combination of VNIR, SWIR, and 3D features performed better than any of the datasets individually. Furthermore, the calibrated reflectance values performed better than the uncorrected DN values. These trends were similar with both tested classifiers. Of the classifiers, the k-nn combined with the genetic algorithm provided consistently better results than the random forest algorithm. The best result was thus achieved using calibrated reflectance features from VNIR and SWIR imagery together with 3D point cloud features; the proportion of correctly classified trees was 0.823 for tree species and 0.869 for tree genus.

Open Access Article
Assessing Biodiversity in Boreal Forests with UAV-Based Photogrammetric Point Clouds and Hyperspectral Imaging
Remote Sens. 2018, 10(2), 338; https://doi.org/10.3390/rs10020338 - 23 Feb 2018
Cited by 21
Abstract
Forests are the most diverse terrestrial ecosystems, and their biological diversity includes trees, but also other plants, animals, and micro-organisms. One-third of the forested land is in the boreal zone; therefore, changes in biological diversity in boreal forests can shape biodiversity even at the global scale. Several forest attributes, including size variability, the amount of dead wood, and tree species richness, can be applied in assessing the biodiversity of a forest ecosystem. Remote sensing offers a complementary tool to traditional field measurements in mapping and monitoring forest biodiversity. The recent development of small unmanned aerial vehicles (UAVs) enables the detailed characterization of forest ecosystems by providing data with high spatial and temporal resolution at reasonable cost. The objective here is to deepen the knowledge about the assessment of plot-level biodiversity indicators in boreal forests with hyperspectral imagery and photogrammetric point clouds from a UAV. We applied the individual tree crown (ITC) and semi-individual tree crown (semi-ITC) approaches in estimating plot-level biodiversity indicators. Structural metrics from the photogrammetric point clouds were used together with either spectral features or vegetation indices derived from the hyperspectral imagery. Biodiversity indicators such as the amount of dead wood and species richness were mainly underestimated with UAV-based hyperspectral imagery and photogrammetric point clouds. Indicators of structural variability (i.e., standard deviation in diameter-at-breast-height and tree height) were the most accurately estimated biodiversity indicators, with relative RMSE between 24.4% and 29.3% with semi-ITC. The largest relative errors occurred for predicting deciduous trees (especially aspen and alder), partly due to their small numbers within the study area. Thus, structural diversity in particular was reliably predicted by integrating the three-dimensional and spectral datasets of UAV-based point clouds and hyperspectral imaging, and can therefore be further utilized in ecological studies, such as biodiversity monitoring.
