Instrumenting Smart City Applications with Big Sensing and Earth Observatory Data: Tools, Methods and Techniques

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (30 April 2018) | Viewed by 63152

Special Issue Editors


Prof. Rajiv Ranjan
Guest Editor
School of Computing, Urban Sciences Building, Newcastle University, 1 Science Square, Newcastle Helix, Newcastle upon Tyne NE4 5TG, UK
Interests: cloud computing; internet of things; big data; distributed systems; peer-to-peer networks

Dr. Prem Prakash Jayaraman
Guest Editor
Faculty of Science, Engineering & Technology, Swinburne University of Technology, 1 Alfred Street, Hawthorn, VIC 3122, Australia
Interests: internet of things; distributed computing; mobile and cloud computing

Prof. Dimitrios Georgakopoulos
Guest Editor

Special Issue Information

Dear Colleagues,

With the remarkable advances in high-resolution Earth Observation (EO), we are witnessing explosive growth in the volume, and also the velocity, of Remote Sensing (RS) data. The volume of archived RS data is currently at the petabyte scale, and it grows by terabytes every day. If predictions hold true, this could soon move from the petabyte to the exabyte scale, given that the Square Kilometre Array (SKA) radio telescopes will transmit 400,000 petabytes (about 400 exabytes) per month, or a massive 155.7 terabytes per second. Furthermore, the European Space Agency (ESA) will launch several satellites in the next few years that will collect data about the environment, such as air temperatures and soil conditions, and stream those data back in real time for analysis. In addition, the instrumentation of sophisticated sensing devices (e.g., high-resolution cameras, radar altimeters, radiometers, photometers) on satellites has further led to an exponential increase in the velocity, variety, and volume of remotely sensed data. Therefore, RS data are referred to as "Big Remote Sensing Data" or "Big Earth Observation Data".

RS data play an important role in many application domains, in particular the smart city domain, e.g., disaster monitoring, climate prediction, and remote surveillance. Effective integration of human, physical, and digital systems holds the promise of improving quality of life and making our cities smart and sustainable. For example, an Open Geospatial Consortium (OGC) white paper provides the foundations for a spatial information framework that establishes the basis for integrating Geographic Information System (GIS) features, imagery, sensor observations, and social media. Remotely sensed information, combined with location-specific data collected locally or via connected Internet of Things (IoT) devices, presents tremendous opportunities for smart city applications. High-resolution RS data are already used by insurance and financial companies to track consumer spending and assist with consumer claims. Such data, coupled with IoT data generated locally or via connected devices, vastly expand the ways in which data can be spatially processed, analyzed, and mined for insights.

The I/O-intensive "Big Remote Sensing Data", compounded by the velocity and variety of data from connected devices (e.g., Internet of Things devices such as smart watches, cars, etc.), poses several new technical challenges for traditional High Performance Computing (HPC) platforms, such as clusters and supercomputers, which have been widely used in the past for data processing, analysis, and knowledge discovery. At the outset, our current systems lack the capacity to store and manage this massive amount of RS data effectively. Cloud computing provides scientists with a revolutionary paradigm for utilizing elastic computing infrastructure and applications. By virtue of virtualization, computing resources and various algorithms can be accommodated and delivered as ubiquitous on-demand services according to application requirements. The cloud paradigm has also been widely adopted in large-scale RS applications, such as the Matsu project for cloud-based flood assessment, and it has been used to analyze data captured via IoT devices in smart city applications using big data processing frameworks such as Apache Spark and Hadoop. However, current datacenter clouds and big data processing frameworks are not optimized for deploying data-intensive RS applications, owing to the lack of techniques that can support: (i) data and computation parallelism at finer granularity; (ii) efficient indexing for multi-dimensional RS data; (iii) holistic resource allocation that adapts to the uncertainties of cloud datacenter resources (failure, over-utilization, unavailability) and of the RS data flow (volume and velocity); (iv) RS data analytics across multiple datacenters; (v) fusion and integration of RS data with IoT data generated within smart city environments; and (vi) scalable and real-time processing of big RS and Internet-scale IoT data. Moreover, current techniques do not provide the means to fuse RS data with data generated locally or via connected IoT devices or social media. Effectively combining RS data with data from IoT devices in smart cities will underpin the development of future sustainable smart cities.
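As an illustration of the data-and-computation parallelism called for above, the minimal PySpark sketch below distributes a simple per-tile computation over a cluster. It is a generic, hypothetical example (the tile paths, band layout, and compute_ndvi helper are assumptions), not a reference implementation of any system mentioned in this Special Issue.

# Minimal sketch: tile-level parallel processing of RS imagery with PySpark.
# Assumes GeoTIFF tiles with red in band 3 and near-infrared in band 4;
# file paths and the helper below are hypothetical.
from pyspark.sql import SparkSession
import numpy as np
import rasterio  # assumed to be installed on every worker

def compute_ndvi(tile_path):
    """Read one tile and return its mean NDVI (an illustrative per-tile metric)."""
    with rasterio.open(tile_path) as src:
        red = src.read(3).astype(np.float32)
        nir = src.read(4).astype(np.float32)
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    return tile_path, float(np.nanmean(ndvi))

spark = SparkSession.builder.appName("rs-tile-ndvi").getOrCreate()
tile_paths = ["/data/tiles/t{:04d}.tif".format(i) for i in range(1000)]  # hypothetical
results = (spark.sparkContext
           .parallelize(tile_paths, numSlices=64)  # finer-grained partitions
           .map(compute_ndvi)
           .collect())
spark.stop()

The numSlices argument is exactly the kind of coarse knob that challenge (i) above asks to replace with much finer, data-aware parallelism.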

To address these issues in instrumenting smart city applications, this Special Issue solicits high-quality articles in the following areas (but not limited to them):

  • Scalable storage algorithms for highly distributed RS data

  • Programming abstractions for porting RS data analysis workflows to big data computing programming models (e.g., MapReduce, stream processing, NoSQL)

  • Smart city application-specific ontology models for fusing RS data with other Internet of Things data sources

  • Indexing techniques for efficient NoSQL query-based processing of petabyte-scale RS data

  • Quality-of-service-optimized RS data analytics provisioning techniques exploiting cloud datacenter resources

  • Benchmarking kernels for optimizing RS data analytic tasks over cloud resources

  • Innovative smart city application use cases augmented by RS data and/or IoT data

Submitted articles must not have been previously published or be currently under consideration for journal publication elsewhere. Research articles, review articles, and technical notes are invited (see https://www.mdpi.com/journal/remotesensing/instructions). Published contributions will appear in the MDPI Remote Sensing open access journal, https://www.mdpi.com/journal/remotesensing. Please submit your paper at: https://susy.mdpi.com/

Prof. Rajiv Ranjan
Dr. Prem Prakash Jayaraman
Prof. Dimitrios Georgakopoulos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (10 papers)


Research

23 pages, 12162 KiB  
Article
A CNN-SIFT Hybrid Pedestrian Navigation Method Based on First-Person Vision
by Qi Zhao, Boxue Zhang, Shuchang Lyu, Hong Zhang, Daniel Sun, Guoqiang Li and Wenquan Feng
Remote Sens. 2018, 10(8), 1229; https://doi.org/10.3390/rs10081229 - 05 Aug 2018
Cited by 18 | Viewed by 6558
Abstract
The emergence of new wearable technologies, such as action cameras and smart glasses, has driven the use of the first-person perspective in computer applications. This field is now attracting the attention and investment of researchers aiming to develop methods to process first-person vision (FPV) video. The current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives, such as object detection, activity recognition, user–machine interaction, etc. FPV-based navigation is necessary in some special areas, where the Global Positioning System (GPS) or other radio-wave strength methods are blocked, and is especially helpful for visually impaired people. In this paper, we propose a hybrid structure with a convolutional neural network (CNN) and local image features to achieve FPV pedestrian navigation. A novel end-to-end trainable global pooling operator, called AlphaMEX, has been designed to improve the scene classification accuracy of CNNs. A scale-invariant feature transform (SIFT)-based tracking algorithm is employed for movement estimation and trajectory tracking of the person through each frame of FPV images. Experimental results demonstrate the effectiveness of the proposed method. The top-1 error rate of the proposed AlphaMEX-ResNet outperforms the original ResNet (k = 12) by 1.7% on the ImageNet dataset. The CNN-SIFT hybrid pedestrian navigation system reaches 0.57 m average absolute error, which is an adequate accuracy for pedestrian navigation. Both positions and movements can be well estimated by the proposed pedestrian navigation algorithm with a single wearable camera. Full article
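For readers unfamiliar with the local-feature half of such a hybrid, the hedged OpenCV sketch below shows one generic way SIFT matches between consecutive frames can yield a per-frame motion estimate. It is not the authors' implementation; the 0.75 ratio threshold and the affine model are illustrative assumptions.

# Sketch of SIFT-based frame-to-frame motion estimation with OpenCV.
# This is not the paper's AlphaMEX/CNN pipeline; it only illustrates the
# local-feature tracking component of such a hybrid system.
import cv2
import numpy as np

def estimate_motion(prev_gray, curr_gray):
    """Estimate the 2D camera translation between two consecutive grayscale frames."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.zeros(2)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m[0] for m in matches if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if len(good) < 4:
        return np.zeros(2)
    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    # The translation column of the 2x3 affine matrix approximates the shift.
    return M[:, 2] if M is not None else np.zeros(2)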

18 pages, 1805 KiB  
Article
Systematic Comparison of Power Line Classification Methods from ALS and MLS Point Cloud Data
by Yanjun Wang, Qi Chen, Lin Liu, Xiong Li, Arun Kumar Sangaiah and Kai Li
Remote Sens. 2018, 10(8), 1222; https://doi.org/10.3390/rs10081222 - 03 Aug 2018
Cited by 32 | Viewed by 5057
Abstract
Power line classification is important for electric power management and the extraction of geographical objects using LiDAR (light detection and ranging) point cloud data. Many supervised classification approaches have been introduced for the extraction of features such as ground, trees, and buildings, and several studies have been conducted to evaluate the framework and performance of such supervised classification methods in power line applications. However, these studies did not systematically investigate all of the relevant factors affecting the classification results, including the segmentation scale, feature selection, classifier variety, and scene complexity. In this study, we examined these factors systematically using airborne laser scanning and mobile laser scanning point cloud data. Our results indicated that random forest and neural network were highly suitable for power line classification in forest, suburban, and urban areas in terms of the precision, recall, and quality rates of the classification results. In contrast to some previous studies, random forest yielded the best results, while Naïve Bayes was the worst classifier in most cases. Random forest was the most robust classifier with or without feature selection across various LiDAR point cloud data. Furthermore, the classification accuracies were directly related to the selection of the local neighborhood, classifier, and feature set. Finally, it was suggested that random forest should be considered in most cases for power line classification. Full article
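As a point of reference for the supervised setup described above, the hedged scikit-learn sketch below trains a random forest on precomputed per-point features; the feature files, label encoding, and hyper-parameters are illustrative assumptions, not the study's configuration.

# Minimal sketch: supervised power line classification from per-point features
# with a random forest (scikit-learn). Feature extraction from the raw ALS/MLS
# point cloud (local neighborhood geometry, height above ground, etc.) is
# assumed to have been done already; file names and labels are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

# X: one row per point, e.g. [height, planarity, linearity, verticality, ...]
# y: 1 for power line points, 0 for everything else (hypothetical labels).
X = np.load("point_features.npy")   # assumed precomputed
y = np.load("point_labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X_tr, y_tr)

prec, rec, f1, _ = precision_recall_fscore_support(
    y_te, clf.predict(X_te), average="binary")
print(f"precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")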

21 pages, 17193 KiB  
Article
Towards Real-Time Service from Remote Sensing: Compression of Earth Observatory Video Data via Long-Term Background Referencing
by Jing Xiao, Rong Zhu, Ruimin Hu, Mi Wang, Ying Zhu, Dan Chen and Deren Li
Remote Sens. 2018, 10(6), 876; https://doi.org/10.3390/rs10060876 - 05 Jun 2018
Cited by 11 | Viewed by 4194
Abstract
City surveillance enables many innovative applications of smart cities. However, the real-time utilization of remotely sensed surveillance data via unmanned aerial vehicles (UAVs) or video satellites is hindered by the considerable gap between the high data collection rate and the limited transmission bandwidth. High efficiency compression of the data is in high demand. Long-term background redundancy (LBR) (in contrast to local spatial/temporal redundancies in a single video clip) is a new form of redundancy common in Earth observatory video data (EOVD). LBR is induced by the repetition of static landscapes across multiple video clips and becomes significant as the number of video clips shot of the same area increases. Eliminating LBR improves EOVD coding efficiency considerably. First, this study proposes eliminating LBR by creating a long-term background referencing library (LBRL) containing high-definition geographically registered images of an entire area. Then, it analyzes the factors affecting the variations in the image representations of the background. Next, it proposes a method of generating references for encoding current video and develops the encoding and decoding framework for EOVD compression. Experimental results show that encoding UAV video clips with the proposed method saved an average of more than 54% bits using references generated under the same conditions. Bitrate savings reached 25–35% when applied to satellite video data with arbitrarily collected reference images. Applying the proposed coding method to EOVD will facilitate remote surveillance, which can foster the development of online smart city applications. Full article
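The idea of long-term background redundancy can be made tangible with a toy residual-energy comparison: once a clip is geo-registered against a long-term reference image, most of its energy is already predicted. The numpy sketch below is a conceptual illustration only (synthetic data, no registration step, and none of the paper's encoding framework).

# Toy illustration of long-term background redundancy (LBR): after a frame is
# geo-registered to a long-term background reference, the residual to encode
# carries far less energy than the frame itself. Not the paper's codec.
import numpy as np

def frame_energy(frame):
    return float(np.mean(frame.astype(np.float64) ** 2))

def residual_energy(frame, reference):
    """Mean squared residual after subtracting a co-registered reference."""
    diff = frame.astype(np.float64) - reference.astype(np.float64)
    return float(np.mean(diff ** 2))

# Hypothetical data: a static background plus a small moving object and noise.
rng = np.random.default_rng(0)
background = rng.integers(0, 256, size=(512, 512)).astype(np.uint8)
frame = background.copy()
frame[100:110, 200:220] = 255          # a "vehicle" not in the reference
frame = np.clip(frame + rng.normal(0, 2, frame.shape), 0, 255).astype(np.uint8)

print("frame energy   :", frame_energy(frame))
print("residual energy:", residual_energy(frame, background))  # much smaller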

16 pages, 37702 KiB  
Article
Optimal Seamline Detection for Orthoimage Mosaicking Based on DSM and Improved JPS Algorithm
by Gang Chen, Song Chen, Xianju Li, Ping Zhou and Zhou Zhou
Remote Sens. 2018, 10(6), 821; https://doi.org/10.3390/rs10060821 - 25 May 2018
Cited by 9 | Viewed by 4184
Abstract
Based on the digital surface model (DSM) and the jump point search (JPS) algorithm, this study proposed a novel approach to detect the optimal seamline for orthoimage mosaicking. By threshold segmentation, the DSM was first divided into ground regions and obstacle regions (e.g., buildings, trees, and cars). Then, the mathematical morphology method was used to make the edges of obstacles more prominent. Subsequently, the processed DSM was treated as a uniform-cost grid map, and the JPS algorithm was improved and employed to search for key jump points in the map. Meanwhile, the jump points were evaluated according to an optimized function, finally generating a minimum-cost path as the optimal seamline. Furthermore, the search strategy was modified to avoid search failure when the search map was completely blocked by obstacles in the search direction. A comparison of the proposed method and Dijkstra's algorithm was carried out based on two groups of image data with different characteristics. Results showed the following: (1) the proposed method could detect better seamlines near the centerlines of the overlap regions, crossing far fewer ground objects; (2) the efficiency and resource consumption were greatly improved, since the improved JPS algorithm skips many image pixels without their being explicitly evaluated. In general, based on the DSM, the proposed method combining threshold segmentation, mathematical morphology, and the improved JPS algorithm was helpful for detecting the optimal seamline for orthoimage mosaicking. Full article
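To make the pre-processing stage concrete, the hedged numpy/scipy sketch below turns a DSM into a cost grid by threshold segmentation and morphological dilation; the height threshold, margin, and dilation size are illustrative assumptions, and the improved JPS search itself is not reproduced.

# Sketch: turn a DSM into a uniform-cost grid map for seamline search.
# Obstacle regions (buildings, trees, cars) get a high traversal cost so the
# path search is pushed onto ground regions. Parameters are illustrative.
import numpy as np
from scipy import ndimage

def dsm_to_cost_map(dsm, ground_height, obstacle_margin=1.5, dilate_px=3):
    """Return a cost grid: 1 on ground, a large cost on (dilated) obstacles."""
    obstacles = dsm > (ground_height + obstacle_margin)      # threshold segmentation
    # Morphological dilation makes obstacle edges more prominent, so the
    # seamline keeps a safety margin around buildings and trees.
    obstacles = ndimage.binary_dilation(obstacles, iterations=dilate_px)
    cost = np.ones_like(dsm, dtype=np.float32)
    cost[obstacles] = 1e4
    return cost

# Example with synthetic data:
dsm = np.full((200, 200), 50.0, dtype=np.float32)   # flat ground at 50 m
dsm[80:120, 80:120] = 62.0                           # a building block
cost_map = dsm_to_cost_map(dsm, ground_height=50.0)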

34 pages, 20737 KiB  
Article
Infrared Image Enhancement Using Adaptive Histogram Partition and Brightness Correction
by Minjie Wan, Guohua Gu, Weixian Qian, Kan Ren, Qian Chen and Xavier Maldague
Remote Sens. 2018, 10(5), 682; https://doi.org/10.3390/rs10050682 - 27 Apr 2018
Cited by 54 | Viewed by 8791
Abstract
Infrared image enhancement is a crucial pre-processing technique in intelligent urban surveillance systems for Smart City applications. Existing grayscale mapping-based algorithms always suffer from over-enhancement of the background, noise amplification, and brightness distortion. To cope with these problems, an infrared image enhancement method based on adaptive histogram partition and brightness correction is proposed. First, the grayscale histogram is adaptively segmented into several sub-histograms by a locally weighted scatter plot smoothing algorithm and local minima examination. Then, the foreground and background sub-histograms are distinguished according to a proposed metric called grayscale density. The foreground sub-histograms are equalized using a local contrast weighted distribution for the purpose of enhancing the local details, while the background sub-histograms maintain the corresponding proportions of the whole dynamic range in order to avoid over-enhancement. Meanwhile, a visual correction factor considering the properties of human vision is designed to reduce the effect of noise during the procedure of grayscale re-mapping. Lastly, particle swarm optimization is used to correct the mean brightness of the output by virtue of a reference image. Both qualitative and quantitative evaluations implemented on real infrared images demonstrate the superiority of our method when compared with other conventional methods. Full article
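The histogram-partition step can be approximated in a few lines of numpy/scipy, as in the hedged sketch below: smooth the histogram and split it at local minima. This is a generic stand-in, not the authors' LOWESS-based procedure, and the smoothing window and minima spacing are assumed parameters.

# Sketch: partition a grayscale histogram into sub-histograms at local minima
# of a smoothed version of the histogram. A stand-in for the adaptive
# partition step; the smoothing parameters are illustrative.
import numpy as np
from scipy.signal import savgol_filter, argrelmin

def partition_histogram(image_u8, window=31, poly=3):
    hist, _ = np.histogram(image_u8, bins=256, range=(0, 256))
    smooth = savgol_filter(hist.astype(np.float64), window, poly)
    minima = argrelmin(smooth, order=5)[0]          # candidate split points
    bounds = [0] + minima.tolist() + [255]          # sub-histogram boundaries
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

# Each (lo, hi) range can then be equalized separately (foreground ranges)
# or kept proportional to its share of the dynamic range (background ranges).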

22 pages, 64596 KiB  
Article
Total Variation Regularization Term-Based Low-Rank and Sparse Matrix Representation Model for Infrared Moving Target Tracking
by Minjie Wan, Guohua Gu, Weixian Qian, Kan Ren, Qian Chen, Hai Zhang and Xavier Maldague
Remote Sens. 2018, 10(4), 510; https://doi.org/10.3390/rs10040510 - 24 Mar 2018
Cited by 34 | Viewed by 4810
Abstract
Infrared moving target tracking plays a fundamental role in many burgeoning research areas of Smart City. Challenges in developing a suitable tracker for infrared images are particularly caused by pose variation, occlusion, and noise. In order to overcome these adverse interferences, a total variation regularization term-based low-rank and sparse matrix representation (TV-LRSMR) model is designed in this paper to build a robust infrared moving target tracker. First of all, the observation matrix that is derived from the infrared sequence is decomposed into a low-rank target matrix and a sparse occlusion matrix. For the purpose of preventing noise pixels from being separated into the occlusion term, a total variation regularization term is proposed to further constrain the occlusion matrix. Then an alternating algorithm combining principal component analysis and accelerated proximal gradient methods is employed to separately optimize the two matrices. For long-term tracking, the presented algorithm is implemented using Bayesian state inference under the particle filtering framework along with a dynamic model update mechanism. Both qualitative and quantitative experiments examined on real infrared video sequences verify that our algorithm outperforms other state-of-the-art methods in terms of precision rate and success rate. Full article
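The low-rank plus sparse decomposition at the core of such models can be illustrated with plain robust PCA solved by an inexact augmented Lagrangian scheme. The hedged sketch below omits the paper's total variation term and particle-filter machinery; the μ update rule and stopping tolerance are standard textbook choices, not the authors' settings.

# Sketch: decompose an observation matrix D (columns = vectorized image
# patches) into a low-rank part L and a sparse part S via inexact-ALM RPCA.
# This is vanilla RPCA without the paper's total-variation regularizer.
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca(D, lam=None, mu=None, tol=1e-6, max_iter=300):
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 1.25 / np.linalg.norm(D, 2)
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # Low-rank update: singular value thresholding.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(soft_threshold(sig, 1.0 / mu)) @ Vt
        # Sparse update: elementwise soft thresholding.
        S = soft_threshold(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y = Y + mu * residual
        mu *= 1.5
        if np.linalg.norm(residual, "fro") < tol * np.linalg.norm(D, "fro"):
            break
    return L, S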

21 pages, 8116 KiB  
Article
A CNN-Based Method of Vehicle Detection from Aerial Images Using Hard Example Mining
by Yohei Koga, Hiroyuki Miyazaki and Ryosuke Shibasaki
Remote Sens. 2018, 10(1), 124; https://doi.org/10.3390/rs10010124 - 18 Jan 2018
Cited by 64 | Viewed by 7488
Abstract
Recently, deep learning techniques have had a practical role in vehicle detection. While much effort has been spent on applying deep learning to vehicle detection, the effective use of training data has not been thoroughly studied, although it has great potential for improving training results, especially in cases where the training data are sparse. In this paper, we proposed using hard example mining (HEM) in the training process of a convolutional neural network (CNN) for vehicle detection in aerial images. We applied HEM to stochastic gradient descent (SGD) to choose the most informative training data by calculating the loss values in each batch and employing the examples with the largest losses. We picked 100 out of both 500 and 1000 examples for training in one iteration, and we tested different ratios of positive to negative examples in the training data to evaluate how the balance of positive and negative examples would affect the performance. In every case, our method outperformed plain SGD. The experimental results for images from New York showed improved performance over a CNN trained with plain SGD, where the F1 score of our method was 0.02 higher. Full article
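The hedged PyTorch sketch below shows the generic form of this idea inside an SGD loop: compute per-example losses, keep the k largest, and back-propagate only on those. The model, batch handling, and the value of k are illustrative placeholders rather than the authors' training code.

# Sketch of hard example mining (HEM) inside a plain SGD training loop:
# compute per-example losses, keep only the k largest, and update on those.
import torch
import torch.nn.functional as F

def train_step_hem(model, optimizer, images, labels, k=100):
    """One SGD step using only the k hardest examples in the batch."""
    model.train()
    logits = model(images)
    per_example_loss = F.cross_entropy(logits, labels, reduction="none")
    k = min(k, per_example_loss.numel())
    hard_loss, hard_idx = torch.topk(per_example_loss, k)   # largest losses
    loss = hard_loss.mean()
    optimizer.zero_grad()
    loss.backward()          # gradients flow only through the hard examples
    optimizer.step()
    return loss.item(), hard_idx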

13 pages, 1003 KiB  
Article
SparkCloud: A Cloud-Based Elastic Bushfire Simulation Service
by Saurabh Garg, Nicholas Forbes-Smith, James Hilton and Mahesh Prakash
Remote Sens. 2018, 10(1), 74; https://doi.org/10.3390/rs10010074 - 07 Jan 2018
Cited by 8 | Viewed by 6246
Abstract
The accurate modeling of bushfires is not only complex and contextual but also a computationally intensive task. Ensemble predictions, involving several thousands to millions of simulations, can be required to capture and quantify the uncertain nature of bushfires. Moreover, users' requirements and configurations may change in different situations, requiring either more computational resources or that modeling be completed under a stricter time constraint. For example, during emergency situations, the user may need to make time-critical decisions that require the execution of bushfire-spread models within a deadline. Currently, most operational tools are not flexible and scalable enough to consider different users' time requirements. In this paper, we propose the SparkCloud service, which integrates user-defined customizable configuration of bushfire simulations with the scalability/elasticity features of the cloud to handle computation requirements. The proposed cloud service utilizes Data61's Spark, which is a significantly flexible and scalable software system for bushfire-spread prediction and has been used in practical scenarios. The effectiveness of the SparkCloud service is demonstrated using real cases of bushfires and real cloud computing infrastructure. Full article
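To illustrate the deadline-driven scaling idea in plain Python (this is not the SparkCloud service or Data61's Spark framework), the sketch below sizes a worker pool from a per-run time estimate and a user deadline, then runs the ensemble members in parallel; run_one_simulation and the timing figures are hypothetical stand-ins.

# Toy illustration of deadline-driven ensemble execution: choose enough
# parallel workers so that an ensemble of bushfire-spread runs finishes
# within a user deadline. The simulation and timings are hypothetical.
import math
from concurrent.futures import ProcessPoolExecutor

def run_one_simulation(params):
    """Placeholder for a single bushfire-spread run (a real model goes here)."""
    wind_speed, fuel_load = params
    return 0.1 * wind_speed * fuel_load   # fake burnt-area figure

def run_ensemble(param_list, deadline_s, est_seconds_per_run):
    total_work = len(param_list) * est_seconds_per_run
    workers = max(1, math.ceil(total_work / deadline_s))   # elastic scale-out
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_one_simulation, param_list))

if __name__ == "__main__":
    ensemble = [(w, f) for w in range(10, 60, 5) for f in (1.0, 1.5, 2.0)]
    print(run_ensemble(ensemble, deadline_s=600, est_seconds_per_run=30))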

5719 KiB  
Article
Research on the Parallelization of the DBSCAN Clustering Algorithm for Spatial Data Mining Based on the Spark Platform
by Fang Huang, Qiang Zhu, Ji Zhou, Jian Tao, Xiaocheng Zhou, Du Jin, Xicheng Tan and Lizhe Wang
Remote Sens. 2017, 9(12), 1301; https://doi.org/10.3390/rs9121301 - 12 Dec 2017
Cited by 37 | Viewed by 9211
Abstract
Density-based spatial clustering of applications with noise (DBSCAN) is a density-based clustering algorithm that has the characteristics of being able to discover clusters of any shape, effectively distinguishing noise points, and naturally supporting spatial databases. DBSCAN has been widely used in the field of spatial data mining. This paper studies the parallelization design and realization of the DBSCAN algorithm based on the Spark platform, and addresses the following problems that arise when processing massive spatial data: the great deal of calculation required by the single-node algorithm; the low resource utilization of the multi-node algorithm; the large time consumption; and the lack of instantaneity. The experimental results indicate that the proposed parallel algorithm design is able to achieve a more stable speedup as the scale of the spatial data involved increases. Full article
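A naive way to parallelize DBSCAN on Spark is to cluster each spatial partition locally and merge afterwards; the hedged PySpark/scikit-learn sketch below shows only the partition-local step. The grid cell size, DBSCAN parameters, and input path are assumptions, and the cross-partition merge handled in the paper is omitted.

# Sketch: partition-local DBSCAN on Spark. Points are bucketed into coarse
# grid cells, and DBSCAN runs independently inside each cell. Merging of
# clusters that straddle cell borders is not shown.
import numpy as np
from pyspark.sql import SparkSession
from sklearn.cluster import DBSCAN

CELL = 0.01   # grid cell size in coordinate units (assumed)

def cell_key(point):
    x, y = point
    return (int(x // CELL), int(y // CELL))

def cluster_cell(kv):
    key, pts = kv
    pts = np.array(list(pts))
    labels = DBSCAN(eps=0.0005, min_samples=10).fit_predict(pts)
    return [(key, tuple(p), int(label)) for p, label in zip(pts, labels)]

spark = SparkSession.builder.appName("partition-dbscan").getOrCreate()
points = spark.sparkContext.textFile("hdfs:///data/points.csv") \
    .map(lambda line: tuple(map(float, line.split(",")[:2])))
clustered = (points.map(lambda p: (cell_key(p), p))
                   .groupByKey()
                   .flatMap(cluster_cell)
                   .collect())
spark.stop()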

2837 KiB  
Article
A Flexible Algorithm for Detecting Challenging Moving Objects in Real-Time within IR Video Sequences
by Andrea Zingoni, Marco Diani and Giovanni Corsini
Remote Sens. 2017, 9(11), 1128; https://doi.org/10.3390/rs9111128 - 06 Nov 2017
Cited by 15 | Viewed by 4654
Abstract
Detecting moving objects in real time in infrared video sequences may be particularly challenging because of the characteristics of the objects, such as their size, contrast, velocity, and trajectory. Many proposed algorithms achieve good performance, but only in the presence of some specific kinds of objects, or by neglecting the computational time, becoming unsuitable for real-time applications. To obtain more flexibility in different situations, we developed an algorithm capable of successfully dealing with small and large objects, slow and fast objects (even if subjected to unusual movements), and poorly contrasted objects. The algorithm is also capable of handling the simultaneous presence of multiple objects within the scene and of working in real time even on cheap hardware. The implemented strategy is based on a fast but accurate background estimation and rejection, performed pixel by pixel and updated frame by frame, which is robust to possible background intensity changes and to noise. A control routine prevents the estimation from being biased by the transit of moving objects, while two noise-adaptive thresholding stages, respectively, drive the estimation control and extract moving objects after the background removal, leading to the desired detection map. For each step, attention has been paid to developing computationally light solutions to achieve the real-time requirement. The algorithm has been tested on a database of infrared video sequences, obtaining promising results against different kinds of challenging moving objects and outperforming other commonly adopted solutions. Its effectiveness in terms of detection performance, flexibility, and computational time makes the algorithm particularly suitable for real-time applications such as intrusion monitoring, activity control, and detection of approaching objects, which are fundamental tasks in the emerging research area of Smart City. Full article
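The generic shape of such a pipeline, a per-pixel running background estimate updated each frame plus a noise-adaptive threshold, can be sketched in a few lines of numpy; the learning rate and the k·σ threshold below are illustrative choices, not the authors' tuned values, and the paper's full estimation-control routine is not reproduced.

# Sketch: per-pixel running-average background estimation with a
# noise-adaptive threshold for moving-object extraction in IR frames.
import numpy as np

class BackgroundSubtractor:
    def __init__(self, first_frame, alpha=0.02, k=4.0):
        self.bg = first_frame.astype(np.float64)
        self.alpha = alpha      # background update rate (illustrative)
        self.k = k              # threshold in units of estimated noise sigma

    def apply(self, frame):
        frame = frame.astype(np.float64)
        residual = np.abs(frame - self.bg)
        sigma = 1.4826 * np.median(residual)        # robust noise estimate
        detection_map = residual > self.k * max(sigma, 1e-6)
        # Update the background only where no object was detected, so the
        # estimate is not biased by transiting objects.
        update = ~detection_map
        self.bg[update] = ((1 - self.alpha) * self.bg[update]
                           + self.alpha * frame[update])
        return detection_map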
