Special Issue "Big Data Computing for Geospatial Applications"

A special issue of ISPRS International Journal of Geo-Information (ISSN 2220-9964).

Deadline for manuscript submissions: 30 September 2019.

Special Issue Editors

Guest Editor
Dr. Zhenlong Li

Department of Geography, University of South Carolina, USA
Interests: geospatial big data processing and analytics; high-performance computing; cloud computing/Hadoop; geospatial cyberinfrastructure/CyberGIS; interoperability, spatial web services, SOA; application areas: climate change, disaster management, etc.
Guest Editor
Assoc. Prof. Wenwu Tang

Department of Geography and Earth Sciences, University of North Carolina at Charlotte, USA
Interests: GIS & Spatial Analysis and Modeling; Agent-based Models and Spatiotemporal Simulation; Cyberinfrastructure and High-performance Computing; Complex Adaptive Spatial Systems; Land Use and Land Cover Change
Guest Editor
Dr. Qunying Huang

Department of Geography, University of Wisconsin-Madison, USA
Interests: cloud computing; social media; data mining; distributed computing; natural hazards; human mobility; spatial web and mobile portals; GIScience
Guest Editor
Dr. Eric Shook

Department of Geography, Environment, and Society, University of Minnesota, USA
Interests: geographic information science; cyberGIS; geocomputing; big data analytics and modeling; social media data analytics
Guest Editor
Prof. Qingfeng Guan

School of Information Engineering, China University of Geosciences, China
Interests: Remote Sensing; Geographic Information System; Parallel Computing; Geocomputation; Cellular Automata; Spatial intelligence

Special Issue Information

Dear Colleagues,

Earth observation systems and model simulations are generating massive volumes of disparate, dynamic, and geographically distributed geospatial data at increasingly fine spatiotemporal resolutions. Meanwhile, the proliferation of smart devices and social media provides extensive geo-information about daily life activities. Efficiently analyzing these geospatial big data streams enables us to investigate unknown and complex patterns and to develop new decision-support systems, thus providing unprecedented value for business, science, and engineering.

However, handling the "Vs" (volume, variety, velocity, veracity, and value) of big data is a challenging task. This is especially true for geospatial big data since the massive datasets often need to be analyzed in the context of dynamic space and time. Following a series of successful sessions organized at AAG, this special issue on “Big Data Computing for Geospatial Applications” by the ISPRS International Journal of Geo-Information aims to capture the latest efforts on utilizing, adapting, and developing new computing approaches, spatial methods, and data management strategies to tackle geospatial big data challenges for supporting geospatial applications in different domains such as climate change, disaster management, human dynamics, public health, and environment and engineering.

Potential topics include (but are not limited to) the following:

  • Geo-cyberinfrastructure integrating spatiotemporal principles and advanced computational technologies (e.g., high-performance computing, cloud computing, and deep learning).
  • New computing and programming frameworks, architectures, and parallel computing algorithms for geospatial applications.
  • New geospatial data management strategies and data storage models coupled with high-performance computing for efficient data query, retrieval, and processing (e.g., new spatiotemporal indexing mechanisms).
  • New computing methods considering spatiotemporal collocation (locations and relationships) of users, data, and computing resources.
  • Geospatial big data processing, mining and visualization methods using high-performance computing and artificial intelligence.
  • Integrating scientific workflows in cloud computing and/or high performance computing environment.
  • Any other research, development, education, and visions related to geospatial big data computing.

Interested authors are encouraged to notify the guest editors of their intention by sending an abstract to Dr. Zhenlong Li ([email protected]). The deadline for submission of final papers is 30 September 2019. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website.

Dr. Zhenlong Li
Assoc. Prof. Wenwu Tang
Dr. Qunying Huang
Dr. Eric Shook
Prof. Qingfeng Guan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then completing the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. ISPRS International Journal of Geo-Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (5 papers)


Research

Open Access Article
High-Performance Overlay Analysis of Massive Geographic Polygons That Considers Shape Complexity in a Cloud Environment
ISPRS Int. J. Geo-Inf. 2019, 8(7), 290; https://doi.org/10.3390/ijgi8070290
Received: 24 March 2019 / Revised: 19 June 2019 / Accepted: 24 June 2019 / Published: 26 June 2019
Abstract
Overlay analysis is a common task in geographic computing that is widely used in geographic information systems, computer graphics, and computer science. With breakthroughs in Earth observation technologies, particularly the emergence of high-resolution satellite remote sensing, geographic data have grown explosively, and the overlay analysis of massive, complex geographic data has become a computationally intensive task. Distributed parallel processing in a cloud environment provides an efficient solution to this problem. The cloud computing paradigm represented by Spark has become the standard for massive data processing in industry and academia owing to its large-scale and low-latency characteristics, and it has attracted further attention as a means of solving the overlay analysis of massive data. Existing studies mainly focus on how to implement parallel overlay analysis in a cloud computing paradigm but pay less attention to the impact of the graphical complexity of spatial data on parallel computing efficiency, especially the data skew caused by differences in graphic complexity. Geographic polygons often have complex graphical structures, such as many vertices and composite structures including holes and islands. When the Spark paradigm is used for the overlay analysis of massive geographic polygons, its efficiency is closely related to factors such as data organization and algorithm design. Considering the influence of polygon shape complexity on the performance of overlay analysis, we design and implement a parallel processing algorithm based on the Spark paradigm in this paper. Based on an analysis of polygon shape complexity, the speed of overlay analysis is improved via reasonable data partitioning, a distributed spatial index, a minimum bounding rectangle filter, and other optimizations, while high speed and parallel efficiency are maintained.
(This article belongs to the Special Issue Big Data Computing for Geospatial Applications)
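The minimum bounding rectangle (MBR) filter mentioned in the abstract is a standard cheap pre-test: only polygon pairs whose bounding boxes intersect are passed to the expensive exact overlay. The sketch below is purely illustrative of that filter step in plain Python; the function names and data layout are hypothetical and are not the authors' Spark implementation.

```python
def mbr(polygon):
    """Compute the MBR (min_x, min_y, max_x, max_y) of a polygon given as (x, y) vertices."""
    xs = [x for x, _ in polygon]
    ys = [y for _, y in polygon]
    return (min(xs), min(ys), max(xs), max(ys))

def mbrs_overlap(a, b):
    """True if two MBRs intersect; only such pairs need the exact polygon overlay."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def candidate_pairs(layer_a, layer_b):
    """Yield polygon pairs that pass the MBR filter (naive O(n*m) loop for clarity;
    a distributed spatial index over data partitions would replace it in a Spark setting)."""
    boxes_b = [(poly, mbr(poly)) for poly in layer_b]
    for poly_a in layer_a:
        box_a = mbr(poly_a)
        for poly_b, box_b in boxes_b:
            if mbrs_overlap(box_a, box_b):
                yield poly_a, poly_b
```

In a cluster setting, the point of the paper's complexity-aware partitioning is that polygons with many vertices make the exact-overlay step for their pairs far more expensive, so partitions balanced only by polygon count can skew badly.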

Open Access Article
Geographic Knowledge Graph (GeoKG): A Formalized Geographic Knowledge Representation
ISPRS Int. J. Geo-Inf. 2019, 8(4), 184; https://doi.org/10.3390/ijgi8040184
Received: 25 February 2019 / Revised: 15 March 2019 / Accepted: 4 April 2019 / Published: 8 April 2019
Abstract
Formalized knowledge representation is the foundation of big data computing, mining, and visualization. Current knowledge representations treat information as items linked to relevant objects or concepts by tree or graph structures. However, geographic knowledge differs from general knowledge in that it focuses more on temporal, spatial, and changing knowledge. Thus, discrete knowledge items struggle to represent geographic states, evolutions, and mechanisms, e.g., the progression of a storm “{9:30-60 mm-precipitation}-{12:00-80 mm-precipitation}-…”. The underlying problem lies in the constructors of the logical foundation (the ALC description language) of current geographic knowledge representations, which cannot provide such descriptions. To address this issue, this study designed a formalized geographic knowledge representation called GeoKG and supplemented the constructors of the ALC description language. An evolution case of the administrative divisions of Nanjing was then represented with GeoKG. To evaluate the capabilities of the formalized model, two knowledge graphs were constructed for the administrative-division case, one using GeoKG and one using YAGO. A set of geographic questions was then defined and translated into queries. The query results show that GeoKG's answers are more accurate and complete than YAGO's, owing to the enhanced state information. A user evaluation further verified these improvements, indicating that GeoKG is a promising model for geographic knowledge representation.
(This article belongs to the Special Issue Big Data Computing for Geospatial Applications)

Open Access Article
A Novel Method of Missing Road Generation in City Blocks Based on Big Mobile Navigation Trajectory Data
ISPRS Int. J. Geo-Inf. 2019, 8(3), 142; https://doi.org/10.3390/ijgi8030142
Received: 29 December 2018 / Revised: 28 February 2019 / Accepted: 11 March 2019 / Published: 14 March 2019
Abstract
With the rapid development of cities, the geographic information of urban blocks is also changing rapidly. However, traditional methods of updating road data cannot keep up with this development because they require a high level of professional expertise to operate and are very time-consuming. In this paper, we develop a novel method for extracting missing roads by reconstructing road topology from big mobile navigation trajectory data. The three main steps are filtering the original navigation trajectory data, extracting road centerlines from navigation points, and establishing the topology of existing roads. First, data from pedestrians and drivers on existing roads were deleted from the raw data. Second, the centerlines of city block roads were extracted using the RSC (ring-stepping clustering) method proposed herein. Finally, the topologies of missing roads and the connections between missing and existing roads were built. A complex urban block with an area of 5.76 square kilometers was selected as the case study area. The validity of the proposed method was verified using a dataset consisting of five days of mobile navigation trajectory data. The experimental results showed that the average absolute error of the length of the generated centerlines was 1.84 m. Comparative analysis showed that the F-score performance of the proposed method was much better than that of existing road extraction methods.
(This article belongs to the Special Issue Big Data Computing for Geospatial Applications)

Open Access Article
Social Media Big Data Mining and Spatio-Temporal Analysis on Public Emotions for Disaster Mitigation
ISPRS Int. J. Geo-Inf. 2019, 8(1), 29; https://doi.org/10.3390/ijgi8010029
Received: 30 October 2018 / Revised: 21 December 2018 / Accepted: 10 January 2019 / Published: 15 January 2019
Cited by 1
Abstract
Social media contains a wealth of geographic information and has become one of the more important data sources for hazard mitigation. Compared with traditional means of collecting disaster-related geographic information, social media provides information in real time and at low cost. Thanks to the development of big data mining technologies, it is now easier to extract useful disaster-related geographic information from social media big data, and many researchers have applied related techniques to social media for disaster mitigation. However, few have considered extracting public emotions (especially fine-grained emotions) as an attribute of disaster-related geographic information to aid disaster mitigation. Combined with the powerful spatio-temporal analysis capabilities of geographical information systems (GISs), the public emotional information contained in social media could help us understand disasters in more detail than traditional methods allow. However, social media data are quite complex and fragmented, both in format and in semantics, especially for Chinese social media, so a more efficient algorithm is needed. In this paper, we take the earthquake that happened in Ya’an, China in 2013 as a case study and introduce a deep learning method to extract fine-grained public emotional information from Chinese social media big data to assist in disaster analysis. By combining this with other geographic information data (such as population density distribution data, POI (point of interest) data, etc.), we can further assist in assessing affected populations, exploring patterns of emotional movement, and optimizing disaster mitigation strategies.
(This article belongs to the Special Issue Big Data Computing for Geospatial Applications)

Open Access Article
A Task-Oriented Knowledge Base for Geospatial Problem-Solving
ISPRS Int. J. Geo-Inf. 2018, 7(11), 423; https://doi.org/10.3390/ijgi7110423
Received: 5 September 2018 / Revised: 4 October 2018 / Accepted: 27 October 2018 / Published: 31 October 2018
Abstract
In recent years, the rapid development of cloud computing and web technologies has led to significant advances in chaining geospatial information services (GI services) to solve complex geospatial problems. However, constructing a problem-solving workflow requires considerable expertise of end-users, and few studies to date have designed a knowledge base to capture and share geospatial problem-solving knowledge. This paper abstracts a geospatial problem as a task that can be further decomposed into multiple subtasks, distinguished at three granularities: Geooperator, Atomic Task, and Composite Task. A task model is presented to define the outline of a problem solution at a conceptual level that closely reflects the problem-solving process. A task-oriented knowledge base that leverages an ontology-based approach is built to capture and share task knowledge, offering the potential to reuse task knowledge when a similar problem arises. Finally, the details of the implementation are described using a meteorological early-warning analysis as an example.
(This article belongs to the Special Issue Big Data Computing for Geospatial Applications)

ISPRS Int. J. Geo-Inf. EISSN 2220-9964, published by MDPI AG, Basel, Switzerland