Editorial

2nd Edition of Instrumenting Smart City Applications with Big Sensing and Earth Observatory Data: Tools, Methods and Techniques

by Gabriele Bitelli * and Emanuele Mandanici
Department of Civil, Chemical, Environmental and Materials Engineering (DICAM), University of Bologna, Viale del Risorgimento 2, 40136 Bologna, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(7), 1310; https://doi.org/10.3390/rs13071310
Submission received: 8 February 2021 / Revised: 15 March 2021 / Accepted: 25 March 2021 / Published: 30 March 2021
The exponential growth in the volume of Earth observation data and the increasing quality and availability of high-resolution imagery are making a growing number of applications possible in urban environments. In particular, remote sensing information, especially when combined with location-specific data collected locally or through connected devices, presents exciting opportunities for smart city applications, such as risk analysis and mitigation, climate prediction, and remote surveillance. On the other hand, the exploitation of this great amount of data poses new challenges for big data analysis models and requires new spatial information frameworks capable of integrating imagery, sensor observations, and social media in geographic information systems (GIS).
In this Special Issue, which follows the first edited by Ranjan et al. [1], we invited original research articles contributing to the development of new algorithms, applications, and interpretative models for the urban environment, in order to bridge the gap between the impressive mass of available remote sensing (RS) data and their effective usability by stakeholders. Five papers have been published.
Ivan et al. [2] proposed a model to estimate per capita income from National Polar-orbiting Partnership–Visible Infrared Imaging Radiometer Suite (NPP-VIIRS) night-time satellite images using a machine learning approach. The experiments were performed on more than 40 cities with more than 50,000 inhabitants, and the results demonstrated a strong and stable relationship between the “sum of light” (the sum of all pixel values of the night-time light image) and the income in each territorial unit.
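As a purely illustrative sketch (not the authors' implementation), the “sum of light” can be computed per territorial unit from the night-time light raster and a rasterized zoning layer; the function name and the toy arrays below are assumptions introduced for the example.

```python
import numpy as np

def sum_of_light(ntl_image: np.ndarray, unit_mask: np.ndarray) -> dict:
    """Compute the 'sum of light' (sum of all night-time light pixel values)
    for each territorial unit.

    ntl_image : 2D array of VIIRS night-time light radiances.
    unit_mask : 2D integer array of the same shape, where each pixel holds
                the ID of the territorial unit it belongs to (0 = outside).
    """
    sums = {}
    for unit_id in np.unique(unit_mask):
        if unit_id == 0:  # skip background pixels
            continue
        sums[int(unit_id)] = float(ntl_image[unit_mask == unit_id].sum())
    return sums

# Toy example with two territorial units on a 4 x 4 image
ntl = np.array([[0.1, 0.3, 2.0, 1.5],
                [0.2, 0.4, 1.8, 1.1],
                [0.0, 0.0, 0.5, 0.6],
                [0.0, 0.0, 0.4, 0.7]])
units = np.array([[1, 1, 2, 2],
                  [1, 1, 2, 2],
                  [0, 0, 2, 2],
                  [0, 0, 2, 2]])
print(sum_of_light(ntl, units))  # unit 1 ≈ 1.0, unit 2 ≈ 8.6
```

The per-unit sums obtained in this way would then serve as the predictor variable in a regression or machine learning model against income data.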
Pilant et al. [3] defined a classification system for the United States Environmental Protection Agency (U.S. EPA) EnviroAtlas Meter-Scale Urban Land Cover (MULC) product. With a 1 × 1 m pixel size, these land cover data support community mapping, planning, modelling, and decision making at high spatial resolution, down to individual trees, buildings, and roads. An overall accuracy of about 88% was obtained by exploiting large datasets of four-band aerial photographs and LiDAR products. The classification methodology combined pixel- and object-oriented approaches.
Cheng et al. [4] developed two strategies to automatically extract lane markings (including dashed lines, edge lines, arrows, and crosswalk markings) from LiDAR intensity data acquired by a mobile mapping system. The first approach was based on normalized intensity thresholding, while the second was based on deep learning and provided more accurate results. The reliable identification of lane markings is essential for autonomous driving and driver assistance systems.
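To illustrate the first strategy only (normalized intensity thresholding), a minimal sketch is given below; the threshold value and the function name are assumptions for the example and do not reproduce the authors' pipeline.

```python
import numpy as np

def extract_marking_candidates(points: np.ndarray, intensity: np.ndarray,
                               threshold: float = 0.8) -> np.ndarray:
    """Return the points whose normalized intensity exceeds the threshold,
    i.e. candidate lane-marking returns (lane paint is highly retro-reflective).

    points    : (N, 3) array of x, y, z coordinates from a mobile LiDAR scan.
    intensity : (N,) array of raw intensity values.
    threshold : cut-off applied to intensities normalized to [0, 1];
                the value 0.8 is an illustrative assumption, not taken
                from the paper.
    """
    i_min, i_max = intensity.min(), intensity.max()
    norm = (intensity - i_min) / (i_max - i_min + 1e-12)  # avoid division by zero
    return points[norm > threshold]
```

In practice, the candidate points would still need geometric filtering and grouping to recover individual markings, which is where the deep learning approach of the paper proved more robust.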
Wróżyński et al. [5] used LiDAR data together with GIS and 3D graphics software to create a classified digital surface model for quantitative landscape assessment. The model makes it possible to generate 360° panoramic images from the observer's point of view and to quantify the percentage of each landscape class (ground; low, medium, and high vegetation; buildings; water; and sky) visible in the scene.
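The class-share computation can be illustrated with a short sketch on a classified panorama; the numeric class codes and the function name are assumptions introduced for the example, not the authors' implementation.

```python
import numpy as np

# Hypothetical class codes for a classified 360° panorama; the labels follow
# the classes listed above, but the numeric coding is an assumption.
CLASSES = {0: "sky", 1: "ground", 2: "low vegetation", 3: "medium vegetation",
           4: "high vegetation", 5: "buildings", 6: "water"}

def landscape_shares(panorama: np.ndarray) -> dict:
    """Percentage of the panoramic image covered by each landscape class.

    panorama : 2D integer array of class codes rendered from the observer's
               point of view.
    """
    total = panorama.size
    return {name: 100.0 * np.count_nonzero(panorama == code) / total
            for code, name in CLASSES.items()}
```

Comparing these percentages across observer locations is what enables the quantitative, rather than purely visual, landscape assessment described in the paper.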
Yao et al. [6] dealt with the problem of preserving semantic information across 3D model editing operations. Many current data models for 3D applications can encode rich semantic information in addition to the traditional geometry and material representations. However, although a variety of techniques are available for 3D editing, only a few specific applications support the maintenance of semantic and hierarchical information throughout the editing process. The authors therefore proposed an automatic matching method that is independent of the specific operation, can be easily integrated into existing applications, and preserves all the information of the original model.
We hope that the research published in the five papers of this Special Issue may contribute to the challenging development of smart city applications, which require sophisticated strategies to manage, process, and integrate a huge amount of data coming from a variety of instruments and techniques.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ranjan, R.; Jayaraman, P.P.; Georgakopoulos, D. Special Issue “Instrumenting Smart City Applications with Big Sensing and Earth Observatory Data: Tools, Methods and Techniques”. Remote Sens. Available online: https://www.mdpi.com/journal/remotesensing/special_issues/smartcity_bigsensing_EO (accessed on 8 February 2021).
  2. Ivan, K.; Holobâcă, I.-H.; Benedek, J.; Török, I. VIIRS Nighttime Light Data for Income Estimation at Local Level. Remote Sens. 2020, 12, 2950.
  3. Pilant, A.; Endres, K.; Rosenbaum, D.; Gundersen, G. US EPA EnviroAtlas Meter-Scale Urban Land Cover (MULC): 1-m Pixel Land Cover Class Definitions and Guidance. Remote Sens. 2020, 12, 1909.
  4. Cheng, Y.-T.; Patel, A.; Wen, C.; Bullock, D.; Habib, A. Intensity Thresholding and Deep Learning Based Lane Marking Extraction and Lane Width Estimation from Mobile Light Detection and Ranging (LiDAR) Point Clouds. Remote Sens. 2020, 12, 1379.
  5. Wróżyński, R.; Pyszny, K.; Sojka, M. Quantitative Landscape Assessment Using LiDAR and Rendered 360° Panoramic Images. Remote Sens. 2020, 12, 386.
  6. Yao, S.; Ling, X.; Nueesch, F.; Schrotter, G.; Schubiger, S.; Fang, Z.; Ma, L.; Tian, Z. Maintaining Semantic Information across Generic 3D Model Editing Operations. Remote Sens. 2020, 12, 335.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
