
Special Issue "3D Reconstruction & Semantic Information from Aerial and Satellite Images"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 July 2018)

Special Issue Editors

Guest Editor
Prof. Dr. Fabio Remondino

Bruno Kessler Foundation (FBK), 3D Optical Metrology (3DOM) unit, Trento, Italy; Vice-President of EuroSDR; President of ISPRS Technical Commission II “Photogrammetry”; Vice-President CIPA Heritage Documentation
Fax: +39 0461 314340
Interests: photogrammetry; laser scanning; 3D reconstruction; 3D modeling; sensor integration
Guest Editor
Prof. Dr. Franz Rottensteiner

Leibniz Universität Hannover, Institute of Photogrammetry and GeoInformation, Hannover, Germany; Chair ISPRS WG II/4: “3D Scene Reconstruction and Analysis”
Interests: classification; object recognition; 3D city modeling; updating of topographic databases

Special Issue Information

Dear Colleagues,

Awareness and usability of image-based methods for generating 3D models and for deriving semantic information about our intensively used environment are growing. The past years have witnessed many important changes in every stage of the photogrammetric pipeline, and multiple applications have demonstrated the versatility and capability of photogrammetry in retrieving 3D metric as well as semantic information from imagery. New UAV platforms, aerial cameras and satellite sensors are available for mapping and 3D modeling purposes. However, despite the numerous societal needs and reasons for employing photogrammetric (3D) products, and despite the recent advances in automated 3D reconstruction and semantic interpretation of images, there are still many challenges and open research issues that need to be tackled.

This Remote Sensing Special Issue is meant to support the abovementioned scope by collecting and publishing full papers on related topics. Extended and improved papers from related conferences are also welcome.

Authors are invited to submit papers related to 3D reconstruction based on aerial and satellite imagery and to the semantic interpretation of these data, in particular:

  • UAV, airborne and satellite data capturing sensors for the generation of 3D topographic data
  • oblique aerial cameras for mapping purposes
  • very high-resolution satellite optical sensors
  • sensor calibration and characterization
  • feature extraction and dense image matching
  • image classification, 3D object recognition, and reconstruction
  • LOD city modelling
  • processing of very large datasets and 3D modeling at national scale
  • benchmarking, validation procedures and consistency of the generated 3D data

Prof. Dr. Fabio Remondino
Prof. Dr. Franz Rottensteiner
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access bimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • airborne and UAV photogrammetry
  • satellite photogrammetry
  • 3D city modeling
  • 3D modeling and 3D object recognition
  • image classification
  • very large datasets
  • benchmarking

Published Papers (8 papers)


Research

Open Access Article GPVC: Graphics Pipeline-Based Visibility Classification for Texture Reconstruction
Remote Sens. 2018, 10(11), 1725; https://doi.org/10.3390/rs10111725
Received: 6 September 2018 / Revised: 15 October 2018 / Accepted: 30 October 2018 / Published: 1 November 2018
Abstract
The shadow-mapping and ray-tracing algorithms are the two popular approaches used in visibility handling for multi-view based texture reconstruction. Visibility testing based on the two algorithms needs a user-defined bias to reduce computation error. However, a constant bias does not work for every part of a geometry. Therefore, the accuracy of the two algorithms is limited. In this paper, we propose a high-precision graphics pipeline-based visibility classification (GPVC) method without introducing a bias. The method consists of two stages. In the first stage, a shader-based rendering is designed in the fixed graphics pipeline to generate initial visibility maps (IVMs). In the second stage, two algorithms, namely, lazy-projection coverage correction (LPCC) and hierarchical iterative vertex-edge-region sampling (HIVERS), are proposed to classify visible primitives into fully visible or partially visible primitives. The proposed method can be easily implemented in the graphics pipeline to achieve parallel acceleration. With respect to efficiency, the proposed method outperforms the bias-based methods. With respect to accuracy, the proposed method can theoretically reach a value of 100%. Compared with available libraries and software, the textured model based on our method is smoother with less distortion and dislocation. Full article
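The bias-based depth test that GPVC is designed to replace can be summarized in a few lines. Below is a minimal numpy sketch (all data and names hypothetical, not the paper's implementation) of shadow-map visibility testing with a constant user-defined bias — the tolerance whose limitations motivate the paper:

```python
import numpy as np

def shadow_map_visibility(depth_map, u, v, point_depth, bias=1e-3):
    """Classic bias-based visibility test: a 3D point projected to pixel
    (u, v) is visible from the camera if its depth does not exceed the
    first-hit depth stored in the depth map, up to a constant bias.
    Too small a bias causes false self-occlusion ("shadow acne"); too
    large a bias misses true occluders."""
    return bool(point_depth <= depth_map[v, u] + bias)

# Toy depth map: the camera's first hit is a plane at depth 5.0 everywhere.
depth = np.full((4, 4), 5.0)

print(shadow_map_visibility(depth, 1, 1, 5.0))  # a point on the surface itself
print(shadow_map_visibility(depth, 1, 1, 7.0))  # a point behind the surface
```

Because no single bias suits every part of a geometry, the test above misclassifies points near grazing angles — the error class the GPVC two-stage classification avoids.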

Open Access Article Multi-Resolution Feature Fusion for Image Classification of Building Damages with Convolutional Neural Networks
Remote Sens. 2018, 10(10), 1636; https://doi.org/10.3390/rs10101636
Received: 27 July 2018 / Revised: 28 September 2018 / Accepted: 9 October 2018 / Published: 14 October 2018
Abstract
Remote sensing images have long been preferred to perform building damage assessments. The recently proposed methods to extract damaged regions from remote sensing imagery rely on convolutional neural networks (CNN). The common approach is to train a CNN independently considering each of the different resolution levels (satellite, aerial, and terrestrial) in a binary classification approach. In this regard, an ever-growing amount of multi-resolution imagery is being collected, but the current approaches use one single resolution as their input. The use of up/down-sampled images for training has been reported as beneficial for the image classification accuracy both in the computer vision and remote sensing domains. However, it is still unclear if such multi-resolution information can also be captured from images with different spatial resolutions such as imagery of the satellite and airborne (from both manned and unmanned platforms) resolutions. In this paper, three multi-resolution CNN feature fusion approaches are proposed and tested against two baseline (mono-resolution) methods to perform the image classification of building damages. Overall, the results show better accuracy and localization capabilities when fusing multi-resolution feature maps, specifically when these feature maps are merged and consider feature information from the intermediate layers of each of the resolution level networks. Nonetheless, these multi-resolution feature fusion approaches behaved differently considering each level of resolution. In the satellite and aerial (unmanned) cases, the improvements in the accuracy reached 2%, while the accuracy improvements for the airborne (manned) case were marginal. The results were further confirmed by testing the approach for geographical transferability, in which the improvements between the baseline and multi-resolution experiments were overall maintained. Full article
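The core fusion idea — resampling the coarser branch's feature maps onto the finer grid and merging them channel-wise — can be sketched generically. This is a simplified numpy illustration, not the paper's network; the shapes, branch labels, and nearest-neighbour upsampling are assumptions:

```python
import numpy as np

def fuse_feature_maps(f_fine, f_coarse):
    """Merge feature maps from two resolution branches by nearest-neighbour
    upsampling the coarser map to the finer grid and concatenating along
    the channel axis. Both inputs have shape (H, W, C); the coarse grid is
    assumed to divide the fine grid evenly."""
    sh = f_fine.shape[0] // f_coarse.shape[0]
    sw = f_fine.shape[1] // f_coarse.shape[1]
    f_up = np.repeat(np.repeat(f_coarse, sh, axis=0), sw, axis=1)
    return np.concatenate([f_fine, f_up], axis=2)

a = np.ones((8, 8, 16))   # e.g. features from an aerial-resolution branch
b = np.zeros((4, 4, 16))  # e.g. features from a satellite-resolution branch
fused = fuse_feature_maps(a, b)
print(fused.shape)  # (8, 8, 32)
```

In the paper's best-performing variant the merged maps also draw on intermediate layers of each branch; the sketch shows only the spatial alignment and channel concatenation step.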

Open Access Article Large-Scale Accurate Reconstruction of Buildings Employing Point Clouds Generated from UAV Imagery
Remote Sens. 2018, 10(7), 1148; https://doi.org/10.3390/rs10071148
Received: 27 May 2018 / Revised: 9 July 2018 / Accepted: 16 July 2018 / Published: 20 July 2018
Abstract
High-density point clouds are valuable and detailed sources of data for different processes related to photogrammetry. We explore the knowledge-based generation of accurate large-scale three-dimensional (3D) models of buildings employing point clouds derived from UAV-based photogrammetry. A new two-level segmentation approach based on efficient RANdom SAmple Consensus (RANSAC) shape detection is developed to segment potential facades and roofs of the buildings and extract their footprints. In the first level, the cylinder primitive is implemented to trim point clouds and split buildings, and the second level of the segmentation produces planar segments. The efficient RANSAC algorithm is enhanced in sizing up the segments via point-based analyses for both levels of segmentation. Then, planar modelling is carried out employing contextual knowledge through a new constrained least squares method. New evaluation criteria are proposed based on conceptual knowledge. They can examine the abilities of the approach in reconstruction of footprints, 3D models, and planar segments in addition to detection of over/under segmentation. Evaluation of the 3D models proves that the geometrical accuracy of LoD3 is achieved, since the average horizontal and vertical accuracy of the reconstructed vertices of roofs and footprints are better than (0.24, 0.23) m, (0.19, 0.17) m for the first dataset, and (0.35, 0.37) m, (0.28, 0.24) m for the second dataset. Full article
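The plane-detection step at the heart of the segmentation can be illustrated with a bare-bones RANSAC plane fit. This is a generic numpy sketch — not the efficient RANSAC variant the paper enhances — with all data and thresholds made up for illustration:

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.05, seed=None):
    """Detect the dominant plane n.x + d = 0 in a point cloud: repeatedly
    sample three points, build the candidate plane through them, and keep
    the hypothesis with the most inliers within `threshold` (metres)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample, skip
            continue
        n = n / norm
        d = -n.dot(p0)
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Synthetic scene: a flat roof patch at z = 3 m plus scattered clutter.
rng = np.random.default_rng(0)
roof = np.c_[rng.uniform(0, 10, (500, 2)), np.full(500, 3.0)]
noise = rng.uniform(0, 10, (50, 3))
model, inliers = ransac_plane(np.vstack([roof, noise]), seed=0)
print(inliers[:500].mean())  # fraction of roof points recovered as inliers
```

A real pipeline would iterate this detection, remove inliers, and re-run to segment the remaining facades and roof faces, which is the role of the second segmentation level in the paper.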

Open Access Article Pushbroom Hyperspectral Data Orientation by Combining Feature-Based and Area-Based Co-Registration Techniques
Remote Sens. 2018, 10(4), 645; https://doi.org/10.3390/rs10040645
Received: 26 March 2018 / Revised: 9 April 2018 / Accepted: 19 April 2018 / Published: 22 April 2018
Abstract
Direct georeferencing of airborne pushbroom scanner data usually suffers from the limited precision of navigation sensors on board the aircraft. The bundle adjustment of images and orientation parameters, used to perform geocorrection of frame images during the post-processing phase, cannot be used for pushbroom cameras without difficulties: it relies on matching corresponding points between scan lines, which is not feasible in the absence of sufficient overlap and texture information. We address this georeferencing problem by equipping our aircraft with both a frame camera and a pushbroom scanner: the frame images and the navigation parameters measured by a coupled GPS/Inertial Measurement Unit (IMU) are input to a bundle adjustment algorithm; the output orientation parameters are used to project the scan lines on a Digital Elevation Model (DEM) and on an orthophoto generated during the bundle adjustment step; using the image feature matching algorithm Speeded Up Robust Features (SURF), corresponding points between the image formed by the projected scan lines and the orthophoto are matched, and through a least-squares method, the boresight between the two cameras is estimated and included in the calculation of the projection. Finally, using Particle Image Velocimetry (PIV) on the gradient image, the projection is deformed into a final image that fits the geometry of the orthophoto. We apply this algorithm to five test acquisitions over the Lake Geneva region (Switzerland) and the Lake Baikal region (Russia). The results are quantified in terms of the Root Mean Square Error (RMSE) between matching points of the RGB orthophoto and the pushbroom projection. From a first projection where the Interior Orientation Parameters (IOP) are known with limited precision and the RMSE goes up to 41 pixels, our geocorrection estimates IOP, boresight and Exterior Orientation Parameters (EOP) and produces a new projection with an RMSE, with respect to the reference orthophoto, of around two pixels. Full article
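The least-squares step — estimating a transform from matched correspondences — can be sketched generically. The snippet below assumes a simple 2D affine model between the projected scan-line image and the orthophoto (a deliberate simplification of the boresight estimation described above; all data are synthetic):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping matched points src -> dst
    (each of shape (N, 2)). Solves dst ~ src @ A.T + t via the linear
    system [src | 1] @ P = dst, the same normal-equation machinery used
    to absorb a small inter-camera misalignment from SURF matches."""
    X = np.hstack([src, np.ones((len(src), 1))])      # (N, 3) design matrix
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)       # (3, 2) parameters
    return P[:2].T, P[2]                              # A (2x2), t (2,)

rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (40, 2))                    # matched points in image A
A_true = np.array([[1.0, 0.02], [-0.02, 1.0]])        # small rotation-like offset
t_true = np.array([5.0, -3.0])
dst = src @ A_true.T + t_true                          # corresponding points in image B
A, t = fit_affine(src, dst)
print(np.allclose(A, A_true) and np.allclose(t, t_true))
```

With noisy real matches the same solver returns the best-fitting parameters in the least-squares sense; robust variants (e.g. RANSAC over the matches) would be used to reject SURF outliers first.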

Open Access Article Optimization of OpenStreetMap Building Footprints Based on Semantic Information of Oblique UAV Images
Remote Sens. 2018, 10(4), 624; https://doi.org/10.3390/rs10040624
Received: 12 March 2018 / Revised: 1 April 2018 / Accepted: 16 April 2018 / Published: 18 April 2018
Abstract
Building footprint information is vital for 3D building modeling. Traditionally, in remote sensing, building footprints are extracted and delineated from aerial imagery and/or LiDAR point cloud. Taking a different approach, this paper is dedicated to the optimization of OpenStreetMap (OSM) building footprints exploiting the contour information, which is derived from deep learning-based semantic segmentation of oblique images acquired by the Unmanned Aerial Vehicle (UAV). First, a simplified 3D building model of Level of Detail 1 (LoD 1) is initialized using the footprint information from OSM and the elevation information from Digital Surface Model (DSM). In parallel, a deep neural network for pixel-wise semantic image segmentation is trained in order to extract the building boundaries as contour evidence. Subsequently, an optimization integrating the contour evidence from multi-view images as a constraint results in a refined 3D building model with optimized footprints and height. Our method is leveraged to optimize OSM building footprints for four datasets with different building types, demonstrating robust performance for both individual buildings and multiple buildings regardless of image resolution. Finally, we compare our result with reference data from German Authority Topographic-Cartographic Information System (ATKIS). Quantitative and qualitative evaluations reveal that the original OSM building footprints have large offset, but can be significantly improved from meter level to decimeter level after optimization. Full article
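The LoD 1 initialization step — extruding a footprint polygon to a prism between terrain height and DSM-derived roof height — can be sketched in plain Python. Footprint coordinates and heights below are made up for illustration; this is not the paper's implementation:

```python
def lod1_block_model(footprint, ground_z, roof_z):
    """Extrude a 2D footprint polygon (list of (x, y) vertices, in order)
    into a LoD 1 prism: a base ring at terrain height, a roof ring at the
    DSM-derived building height, and one quadrilateral wall per edge."""
    base = [(x, y, ground_z) for x, y in footprint]
    roof = [(x, y, roof_z) for x, y in footprint]
    walls = [(base[i], base[(i + 1) % len(base)],
              roof[(i + 1) % len(roof)], roof[i])
             for i in range(len(base))]
    return base, roof, walls

# Hypothetical OSM-style rectangular footprint (metres), DSM roof at 112.5 m.
footprint = [(0, 0), (10, 0), (10, 6), (0, 6)]
base, roof, walls = lod1_block_model(footprint, ground_z=100.0, roof_z=112.5)
print(len(walls), roof[0])  # 4 (0, 0, 112.5)
```

The paper then treats the footprint vertices as free parameters and refines them so that the projected model edges agree with the CNN-derived building contours in all oblique views.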

Open Access Article Quality Assessment of DSMs Produced from UAV Flights Georeferenced with On-Board RTK Positioning
Remote Sens. 2018, 10(2), 311; https://doi.org/10.3390/rs10020311
Received: 14 January 2018 / Revised: 13 February 2018 / Accepted: 15 February 2018 / Published: 17 February 2018
Abstract
High-resolution Digital Surface Models (DSMs) from unmanned aerial vehicles (UAVs) imagery with accuracy better than 10 cm open new possibilities in geosciences and engineering. The accuracy of such DSMs depends on the number and distribution of ground control points (GCPs). Placing and measuring GCPs are often the most time-consuming on-site tasks in a UAV project. Safety or accessibility concerns may impede their proper placement, so either costlier techniques must be used, or a less accurate DSM is obtained. Photogrammetric blocks flown by drones with on-board receivers capable of RTK (real-time kinematic) positioning do not need GCPs, as camera stations at exposure time can be determined with cm-level accuracy, and used to georeference the block and control its deformations. This paper presents an experimental investigation on the repeatability of DSM generation from several blocks acquired with a RTK-enabled drone, where differential corrections were sent from a local master station or a network of Continuously Operating Reference Stations (CORS). Four different flights for each RTK mode were executed over a test field, according to the same flight plan. DSM generation was performed with three block control configurations: GCP only, camera stations only, and with camera stations and one GCP. The results show that irrespective of the RTK mode, the first and third configurations provide the best DSM inner consistency. The average range of the elevation discrepancies among the DSMs in such cases is about 6 cm (2.5 GSD, ground sample distance) for a 10-cm resolution DSM. Using camera stations only, the average range is almost twice as large (4.7 GSD). The average DSM accuracy, which was verified on checkpoints, turned out to be about 2.1 GSD with the first and third configurations, and 3.7 GSD with camera stations only. Full article
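The inner-consistency measure used above — the per-cell elevation range across repeated flights of the same plan, averaged over the grid and expressed in GSD multiples — can be reproduced on synthetic data. A hedged numpy sketch; the grid size, noise level, and GSD value are hypothetical:

```python
import numpy as np

def dsm_repeatability(dsms, gsd):
    """Inner consistency of repeated DSMs of the same scene: per-cell
    range (max - min) of elevations across flights, averaged over the
    grid, reported both in metres and as a multiple of the ground
    sample distance (GSD)."""
    stack = np.stack(dsms)                    # (n_flights, H, W)
    mean_range = (stack.max(axis=0) - stack.min(axis=0)).mean()
    return mean_range, mean_range / gsd

# Four simulated flights over the same terrain, each with ~2 cm elevation noise.
rng = np.random.default_rng(2)
terrain = rng.uniform(400, 410, (50, 50))
flights = [terrain + rng.normal(0, 0.02, terrain.shape) for _ in range(4)]
mean_m, mean_gsd = dsm_repeatability(flights, gsd=0.024)
print(0.0 < mean_m < 0.2)  # a few centimetres of spread between flights
```

In the paper the same statistic, computed over real DSMs, distinguishes the well-controlled configurations (about 2.5 GSD) from the camera-stations-only one (4.7 GSD).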

Open Access Article Geospatial Computer Vision Based on Multi-Modal Data—How Valuable Is Shape Information for the Extraction of Semantic Information?
Remote Sens. 2018, 10(1), 2; https://doi.org/10.3390/rs10010002
Received: 12 October 2017 / Revised: 11 December 2017 / Accepted: 17 December 2017 / Published: 21 December 2017
Abstract
In this paper, we investigate the value of different modalities and their combination for the analysis of geospatial data of low spatial resolution. For this purpose, we present a framework that allows for the enrichment of geospatial data with additional semantics based on given color information, hyperspectral information, and shape information. While the different types of information are used to define a variety of features, classification based on these features is performed using a random forest classifier. To draw conclusions about the relevance of different modalities and their combination for scene analysis, we present and discuss results which have been achieved with our framework on the MUUFL Gulfport Hyperspectral and LiDAR Airborne Data Set. Full article
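The feature-level combination of modalities can be sketched as simple per-pixel stacking. This is a numpy illustration with hypothetical band and feature counts; the random forest classification applied on top of these vectors is omitted here:

```python
import numpy as np

def stack_modalities(color, hyperspec, shape_feats):
    """Build one feature vector per pixel by concatenating the three
    modalities used in the study: color (H, W, 3), hyperspectral bands
    (H, W, B), and LiDAR-derived shape features (H, W, S). The result,
    shape (H*W, 3+B+S), is what a per-pixel classifier such as a random
    forest would consume row by row."""
    feats = np.concatenate([color, hyperspec, shape_feats], axis=2)
    return feats.reshape(-1, feats.shape[2])

H, W = 4, 5
X = stack_modalities(np.zeros((H, W, 3)),
                     np.zeros((H, W, 64)),   # 64 hypothetical bands
                     np.zeros((H, W, 8)))    # 8 hypothetical shape features
print(X.shape)  # (20, 75)
```

Dropping one modality is just omitting its block of columns, which is how ablation studies such as this one isolate the contribution of shape information.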

Open Access Article Estimating the Rut Depth by UAV Photogrammetry
Remote Sens. 2017, 9(12), 1279; https://doi.org/10.3390/rs9121279
Received: 24 September 2017 / Revised: 22 November 2017 / Accepted: 6 December 2017 / Published: 9 December 2017
Abstract
The rut formation during forest operations is an undesirable phenomenon. A methodology is proposed to measure the rut depth distribution of a logging site by photogrammetric point clouds produced by unmanned aerial vehicles (UAV). The methodology includes five processing steps that aim at reducing the noise from the surrounding trees and undergrowth for identifying the trails. A canopy height model is produced to focus the point cloud on the open pathway around the forest machine trail. A triangularized ground model is formed by a point cloud filtering method. The ground model is vectorized using the histogram of directed curvatures (HOC) method to produce an overall ground visualization. Finally, a manual selection of the trails leads to an automated rut depth profile analysis. The bivariate correlation (Pearson's r) between rut depths measured manually and by UAV photogrammetry is r = 0.67. The two-class accuracy a of detecting the rut depth exceeding 20 cm is a = 0.65. There is potential for enabling automated large-scale evaluation of the forestry areas by using autonomous drones and the process described. Full article
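The two quality measures reported here — Pearson's r between manual and photogrammetric rut depths, and the two-class accuracy at the 20 cm threshold — are straightforward to compute. A numpy sketch on made-up depth values (metres):

```python
import numpy as np

def pearson_r(x, y):
    """Bivariate (Pearson) correlation between two measurement series,
    e.g. manual vs. photogrammetric rut depths."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def two_class_accuracy(depth_a, depth_b, threshold=0.20):
    """Fraction of sample points where the two methods agree on the
    binary decision 'rut deeper than threshold'."""
    a = np.asarray(depth_a) > threshold
    b = np.asarray(depth_b) > threshold
    return (a == b).mean()

# Hypothetical paired depth samples along a machine trail.
manual = [0.05, 0.12, 0.25, 0.30, 0.08, 0.22]
uav    = [0.07, 0.10, 0.21, 0.33, 0.15, 0.18]
print(0.8 < pearson_r(manual, uav) < 1.0)  # strong positive agreement here
print(two_class_accuracy(manual, uav))
```

Note that the two measures answer different questions: r captures agreement of the continuous depths, while the two-class accuracy reflects only the exceedance decision, so the paper reports both.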

Remote Sens. EISSN 2072-4292, published by MDPI AG, Basel, Switzerland.