
Special Issue "Innovations in Photogrammetry and Remote Sensing: Modern Sensors, New Processing Strategies and Frontiers in Applications"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (30 November 2020).

Special Issue Editors

Prof. Francesco Pirotti
Guest Editor
University of Padova, Italy
Interests: laser scanning; remote sensing; machine learning; geomatics engineering; photogrammetry
Prof. Francesco Mancini
Guest Editor
University of Modena and Reggio Emilia, Italy
Interests: geomatics engineering; photogrammetry; remote sensing; surveying; spatial analysis

Special Issue Information

Dear Colleagues,

This Special Issue invites papers that demonstrate progress in key areas of photogrammetry and remote sensing. Papers focused on modern and/or forthcoming sensors, on improvements in data processing strategies, and on the assessment of their reliability are welcome. In addition, the Special Issue aims to collect papers that apply such innovations, demonstrating their contribution to the observation of the natural and built environment and to the understanding of phenomena at the required spatial scale. In particular, proposed submissions may address the following topics:

- Forthcoming sensors in photogrammetry and remote sensing

- Quality Assurance / Quality Control (QA/QC)

- Potential offered by multi-sensor data fusion

- Methodologies for near real-time mapping and monitoring from aerial/satellite platforms

- Big dataset handling

- Artificial Intelligence for data processing

- 3D modelling

- Error budget

- Novel approaches for processing of multi-temporal data

- Design, testing and applications of new sensors

Prof. Francesco Mancini
Prof. Francesco Pirotti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Photogrammetry
  • Remote sensing
  • Innovative sensors
  • Multi-sensor data fusion
  • Artificial Intelligence for data processing
  • 3D modelling
  • Error budget
  • Earth observation

Published Papers (5 papers)


Research

Open Access Communication
Damage Proxy Map of the Beirut Explosion on 4th of August 2020 as Observed from the Copernicus Sensors
Sensors 2020, 20(21), 6382; https://doi.org/10.3390/s20216382 - 09 Nov 2020
Abstract
On the 4th of August 2020, a massive explosion occurred in the harbor area of Beirut, Lebanon, killing more than 100 people and damaging numerous buildings in its proximity. The current article aims to showcase how open access and freely distributed satellite data, such as those of the Copernicus radar and optical sensors, can deliver a damage proxy map of this devastating event. Sentinel-1 radar images acquired just prior to the event (the 24th of July 2020) and after it (5th of August 2020) were processed and analyzed, indicating areas with significant changes of the VV (vertical transmit, vertical receive) and VH (vertical transmit, horizontal receive) backscattering signal. In addition, an Interferometric Synthetic Aperture Radar (InSAR) analysis was performed for both descending (31st of July 2020 and 6th of August 2020) and ascending (29th of July 2020 and 10th of August 2020) orbits of Sentinel-1 images, indicating relatively small ground displacements in the area near the harbor. Moreover, low coherence for these images is mapped around the blast zone. The current study uses the Hybrid Pluggable Processing Pipeline (HyP3) cloud-based system provided by the Alaska Satellite Facility (ASF) for the processing of the radar datasets. In addition, medium-resolution Sentinel-2 optical data were used to support the assessment of damage in the area through thorough visual inspection and Principal Component Analysis (PCA). While the overall findings are well aligned with other official reports found on the World Wide Web, which were mainly delivered by international space agencies, those reports were generated after the processing of either optical or radar datasets. In contrast, the current communication showcases how both optical and radar satellite data can be used in parallel to map such devastating events. The use of open access and freely distributed Sentinel mission data was found to be very promising for delivering damage proxy maps after devastating events worldwide.
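The backscatter comparison described in this abstract can be illustrated with a minimal, generic sketch. This is not the HyP3/ASF pipeline the authors used; the function name, toy arrays, and the 3 dB threshold are illustrative assumptions. The idea is simply to flag pixels whose SAR intensity changed strongly, in decibels, between the pre- and post-event acquisitions.

```python
import numpy as np

def change_map(pre, post, db_threshold=3.0):
    """Flag pixels whose backscatter intensity changed by more than
    db_threshold (in dB) between two co-registered SAR acquisitions.

    pre, post: 2-D arrays of backscatter intensity (linear scale, > 0).
    """
    eps = 1e-12  # guard against division by zero
    log_ratio_db = 10.0 * np.log10((post + eps) / (pre + eps))
    return np.abs(log_ratio_db) > db_threshold

# Toy scene: one 2x2 block loses most of its backscatter after the
# event (e.g. a collapsed structure), the rest is unchanged.
pre = np.full((4, 4), 0.5)
post = pre.copy()
post[1:3, 1:3] = 0.05   # tenfold drop, i.e. a -10 dB change
mask = change_map(pre, post)
print(mask.sum())  # 4 changed pixels
```

In practice, the pre/post inputs would be calibrated, co-registered Sentinel-1 VV or VH intensity bands, and the threshold would be tuned per scene.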

Open Access Article
Surface Reconstruction Assessment in Photogrammetric Applications
Sensors 2020, 20(20), 5863; https://doi.org/10.3390/s20205863 - 16 Oct 2020
Cited by 1
Abstract
The image-based 3D reconstruction pipeline aims to generate complete digital representations of the recorded scene, often in the form of 3D surfaces. These surfaces, or mesh models, are required to be highly detailed as well as sufficiently accurate, especially for metric applications. Surface generation can be considered a problem integrated into the complete 3D reconstruction workflow, in which case visibility information (pixel similarity and image orientation) is leveraged in the meshing procedure, contributing to an optimal photo-consistent mesh. Other methods tackle the problem as an independent and subsequent step, generating a mesh model starting from a dense 3D point cloud or even from depth maps, discarding input image information. Out of the vast number of approaches for 3D surface generation, in this study we considered three state-of-the-art methods. Experiments were performed on benchmark and proprietary datasets of varying nature, scale, shape, image resolution and network design. Several evaluation metrics were introduced and considered to present a qualitative and quantitative assessment of the results.
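The paper's own evaluation metrics are not reproduced here, but a common pair of surface-quality measures, accuracy and completeness at a distance tolerance, gives a feel for how such quantitative assessments work. The function name, point sets, and tolerance below are illustrative assumptions, not the study's protocol.

```python
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(reconstructed, reference, tau):
    """Symmetric nearest-neighbour evaluation of a reconstructed point
    set against a reference (ground-truth) point set.

    accuracy:     fraction of reconstructed points within tau of the reference
    completeness: fraction of reference points within tau of the reconstruction
    """
    d_rec_to_ref, _ = cKDTree(reference).query(reconstructed)
    d_ref_to_rec, _ = cKDTree(reconstructed).query(reference)
    accuracy = float(np.mean(d_rec_to_ref <= tau))
    completeness = float(np.mean(d_ref_to_rec <= tau))
    return accuracy, completeness

# Toy example: the reconstruction matches the reference except for one
# spurious point far from the true surface.
reference = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
reconstructed = np.vstack([reference, [[5.0, 5, 5]]])
acc, comp = accuracy_completeness(reconstructed, reference, tau=0.1)
print(acc, comp)  # 0.8 1.0
```

Mesh-based variants replace the nearest-neighbour query with point-to-surface distances, but the accuracy/completeness logic is the same.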

Open Access Article
Rough or Noisy? Metrics for Noise Estimation in SfM Reconstructions
Sensors 2020, 20(19), 5725; https://doi.org/10.3390/s20195725 - 08 Oct 2020
Abstract
Structure from Motion (SfM) can produce highly detailed 3D reconstructions, but distinguishing real surface roughness from reconstruction noise and geometric inaccuracies has always been a difficult problem to solve. Existing commercial SfM solutions achieve noise removal through a combination of aggressive global smoothing and reliance on the reconstructed texture for smaller details, which is a subpar solution when the results are used for surface inspection. Other noise estimation and removal algorithms do not take advantage of all the additional data connected with SfM. We propose a number of geometrical and statistical metrics for noise assessment, based on both the reconstructed object and the capturing camera setup. We test the correlation of each of the metrics to the presence of noise on reconstructed surfaces and demonstrate that classical supervised learning methods, trained with these metrics, can be used to distinguish between noise and roughness with an accuracy above 85%, with an additional 5–6% of performance coming from the capturing-setup metrics. Our proposed solution can easily be integrated into existing SfM workflows, as it does not require more image data or additional sensors. Finally, as part of the testing, we created an image dataset for SfM from a number of objects with varying shapes and sizes, which is available online together with ground-truth annotations.
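As a sketch of what one of the "geometrical metrics" for noise assessment might look like (this is not the authors' actual metric set; the function name and data are illustrative), the residual of a local plane fit distinguishes a truly flat neighbourhood from a flat-but-jittered one:

```python
import numpy as np

def local_planarity_residual(points):
    """RMS distance of a local point neighbourhood to its best-fit plane.

    points: (N, 3) array of a neighbourhood from the reconstruction.
    A large residual on a nominally smooth region suggests noise.
    """
    centered = points - points.mean(axis=0)
    # The smallest singular value measures spread along the plane normal.
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    return s[-1] / np.sqrt(len(points))

# A perfectly planar patch has (numerically) zero residual; adding
# out-of-plane jitter of std 0.05 yields a residual near 0.05.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
flat = np.column_stack([xy, np.zeros(200)])
noisy = np.column_stack([xy, rng.normal(0.0, 0.05, size=200)])
print(local_planarity_residual(flat))   # ~0.0
print(local_planarity_residual(noisy))  # ~0.05
```

In the supervised setup the abstract describes, several such per-region metrics (plus capturing-setup quantities such as viewing geometry) would form the feature vector fed to a classical classifier.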

Open Access Article
A Novel Change Detection Method for Natural Disaster Detection and Segmentation from Video Sequence
Sensors 2020, 20(18), 5076; https://doi.org/10.3390/s20185076 - 07 Sep 2020
Abstract
Change detection (CD) is critical for natural disaster detection, monitoring and evaluation. Video satellites, a new type of satellite launched in recent years, are able to record motion during natural disasters. This raises a new problem for traditional CD methods, as they can only detect areas with highly changed radiometric and geometric information. Optical flow-based methods can perform pixel-based motion tracking at high speed; however, it is difficult to determine an optimal threshold for separating the changed from the unchanged part in CD problems. To overcome the above problems, this paper proposes a novel automatic change detection framework, OFATS (optical flow-based adaptive thresholding segmentation). Drawing on the characteristics of optical flow data, a new objective function based on the ratio of maximum between-class variance to minimum within-class variance is constructed. The two key steps are motion detection based on optical flow estimation using a deep learning (DL) method and changed-area segmentation based on adaptive threshold selection. Experiments were carried out using two groups of video sequences, demonstrating that the proposed method achieves high accuracy, with F1 values of 0.98 and 0.94, respectively.
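The variance-ratio objective described in this abstract resembles an Otsu-style criterion. The following is a minimal sketch of such an adaptive threshold applied to optical-flow magnitudes, not the OFATS implementation itself; the function name, candidate grid, and synthetic data are illustrative assumptions.

```python
import numpy as np

def adaptive_threshold(values, n_candidates=256):
    """Pick the threshold maximising the ratio of between-class to
    within-class variance, scanning candidates over the data range."""
    values = np.asarray(values, dtype=float).ravel()
    candidates = np.linspace(values.min(), values.max(), n_candidates)[1:-1]
    best_t, best_score = candidates[0], -np.inf
    for t in candidates:
        lo, hi = values[values <= t], values[values > t]
        if len(lo) == 0 or len(hi) == 0:
            continue
        w_lo, w_hi = len(lo) / len(values), len(hi) / len(values)
        between = w_lo * w_hi * (lo.mean() - hi.mean()) ** 2
        within = w_lo * lo.var() + w_hi * hi.var()
        score = between / (within + 1e-12)
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Synthetic flow magnitudes: static background near 0.2, a cluster of
# moving (changed) pixels near 5.0.
rng = np.random.default_rng(1)
mags = np.concatenate([rng.normal(0.2, 0.05, 900), rng.normal(5.0, 0.3, 100)])
t = adaptive_threshold(mags)
changed = mags > t
print(changed.sum())  # the 100 moving pixels
```

In the full framework, the magnitudes would come from a deep-learning optical-flow estimate between consecutive video frames rather than from synthetic data.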

Open Access Article
Potential of Pléiades and WorldView-3 Tri-Stereo DSMs to Represent Heights of Small Isolated Objects
Sensors 2020, 20(9), 2695; https://doi.org/10.3390/s20092695 - 09 May 2020
Cited by 3
Abstract
High-resolution stereo and multi-view imagery are used for digital surface model (DSM) derivation over large areas for numerous applications in topography, cartography, geomorphology, and 3D surface modelling. Dense image matching is a key component in 3D reconstruction and mapping, although the 3D reconstruction process encounters difficulties for water surfaces, areas with no texture or with a repetitive pattern appearance in the images, and for very small objects. This study investigates the capabilities and limitations of space-borne very high resolution imagery, specifically Pléiades (0.70 m) and WorldView-3 (0.31 m) imagery, with respect to the automatic point cloud reconstruction of small isolated objects. For this purpose, single buildings, vehicles, and trees were analyzed. The main focus is to quantify their detectability in the photogrammetrically derived DSMs by estimating their heights as a function of object type and size. The estimated height was investigated with respect to the following parameters: building length and width, vehicle length and width, and tree crown diameter. Manually measured object heights from the oriented images were used as a reference. We demonstrate that the DSM-based estimated height of a single object strongly depends on its size, and we quantify this effect. Starting from very small objects, which are not elevated against their surroundings, and ending with large objects, we obtained a gradual increase of the relative heights. For small vehicles, buildings, and trees (lengths <7 pixels, crown diameters <4 pixels), the Pléiades-derived DSM showed less than 20% or none of the actual object’s height. For large vehicles, buildings, and trees (lengths >14 pixels, crown diameters >7 pixels), the estimated heights were higher than 60% of the real values. In the case of the WorldView-3-derived DSM, the estimated height of small vehicles, buildings, and trees (lengths <16 pixels, crown diameters <8 pixels) was less than 50% of their actual height, whereas larger objects (lengths >33 pixels, crown diameters >16 pixels) were reconstructed at more than 90% of their height.
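A quick back-of-the-envelope check relates the pixel-based thresholds reported in this abstract to real-world object sizes: divide an object's ground extent by the sensor's ground sample distance (GSD). The function name and example object sizes are illustrative, not taken from the study.

```python
def size_in_pixels(size_m, gsd_m):
    """Object extent expressed in image pixels for a given GSD (both in metres)."""
    return size_m / gsd_m

# Pléiades (0.70 m GSD): per the abstract, objects shorter than ~7 px
# recover <20% of their height in the DSM, longer than ~14 px recover >60%.
car_px = size_in_pixels(4.5, 0.70)        # a 4.5 m car
building_px = size_in_pixels(12.0, 0.70)  # a 12 m building
print(round(car_px, 1), round(building_px, 1))  # 6.4 17.1
```

So, under these assumed sizes, an ordinary car falls below the Pléiades detectability threshold while a 12 m building lies comfortably above it, which matches the qualitative picture the abstract paints.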
