
Table of Contents

ISPRS Int. J. Geo-Inf., Volume 7, Issue 7 (July 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official format. To view a paper in PDF form, click its "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: What are the stories behind the data we see in maps? To reveal the narrative content implicit in [...]
Displaying articles 1-54
Open Access Article Utilizing MapReduce to Improve Probe-Car Track Data Mining
ISPRS Int. J. Geo-Inf. 2018, 7(7), 287; https://doi.org/10.3390/ijgi7070287
Received: 29 May 2018 / Revised: 1 July 2018 / Accepted: 19 July 2018 / Published: 23 July 2018
Cited by 1 | PDF Full-text (2080 KB) | HTML Full-text | XML Full-text
Abstract
With the rapidly increasing popularization of the automobile, challenges and greater demands have come to the fore, including traffic congestion, energy crises, traffic safety, and environmental pollution. To address these challenges and demands, enhanced data support and advanced data collection methods are crucial and urgently needed. Probe cars are an important and effective means of obtaining real-time urban road traffic status in Intelligent Transportation Systems (ITS), and probe-car technology provides a corresponding solution through advanced navigation data, offering more possibilities for addressing the above problems. In addition, massive spatial data-mining technologies associated with probe-car tracking data have emerged. This paper discusses the major problems of spatial data-mining technologies for probe-car tracking data, such as true path restoration and the close correlation of spatial data. To address the road-matching issue in massive probe-car tracking data, caused by the strong correlation between road topology and map matching, this paper presents a MapReduce-based technology in the second spatial data model. The experimental results demonstrate that by implementing the proposed spatial data-mining system on distributed parallel computing, computational performance improved roughly fivefold and hardware requirements were significantly reduced. Full article
(This article belongs to the Special Issue Geospatial Big Data and Urban Studies)
Figures

Figure 1
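The MapReduce pattern the abstract builds on can be illustrated with a minimal, self-contained sketch — plain Python standing in for Hadoop, with hypothetical vehicle records and road-segment IDs, not the paper's actual system: a map phase emits one key-value pair per map-matched track point, and a reduce phase sums the counts per road segment after a shuffle/sort stage.

```python
from itertools import groupby
from operator import itemgetter

# Toy probe-car records: (vehicle_id, matched_road_segment).
# Segment IDs are hypothetical; a real pipeline would emit them
# from a map-matching step against the road network.
records = [
    ("car1", "seg_A"), ("car2", "seg_B"), ("car1", "seg_A"),
    ("car3", "seg_B"), ("car2", "seg_A"),
]

def map_phase(recs):
    # Emit (key, 1) pairs, one per matched track point.
    return [(seg, 1) for _, seg in recs]

def reduce_phase(pairs):
    # Sort by key (the shuffle stage), then sum counts per key.
    pairs = sorted(pairs, key=itemgetter(0))
    return {k: sum(v for _, v in grp)
            for k, grp in groupby(pairs, key=itemgetter(0))}

counts = reduce_phase(map_phase(records))
print(counts)  # {'seg_A': 3, 'seg_B': 2}
```

In a real MapReduce system the map and reduce functions would run on partitioned data across worker nodes, which is where the fivefold speed-up reported in the abstract would come from.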

Open Access Article Water Level Reconstruction Based on Satellite Gravimetry in the Yangtze River Basin
ISPRS Int. J. Geo-Inf. 2018, 7(7), 286; https://doi.org/10.3390/ijgi7070286
Received: 22 June 2018 / Revised: 17 July 2018 / Accepted: 19 July 2018 / Published: 23 July 2018
PDF Full-text (5311 KB) | HTML Full-text | XML Full-text
Abstract
The monitoring of hydrological extremes requires water level measurement. Owing to the decreasing number of continuously operating hydrological stations globally, remote sensing indices have recently been advocated for water level reconstruction. Nevertheless, the feasibility of gravimetrically derived terrestrial water storage (TWS) and its corresponding index for water level reconstruction has not been investigated. This paper aims to construct a correlative relationship between observed water level and basin-averaged Gravity Recovery and Climate Experiment (GRACE) TWS and its Drought Severity Index (GRACE-DSI) for the Yangtze River basin on a monthly temporal scale. The results are subsequently compared against traditional remote sensing, Palmer’s Drought Severity Index (PDSI), and El Niño Southern Oscillation (ENSO) indices. Comparison of the water level reconstructed from GRACE TWS and its index, and that of remote sensing, against observed water level reveals a Pearson Correlation Coefficient (PCC) above 0.90 and below 0.84, with a Root-Mean-Square Error (RMSE) of 0.88–1.46 m and 1.41–1.88 m, and a Nash-Sutcliffe model efficiency coefficient (NSE) above 0.81 and below 0.70, respectively. The ENSO-reconstructed water levels are comparable to those based on remote sensing, whereas the PDSI-reconstructed water level shows a similar performance to that of GRACE TWS. The water level predicted at the location of another station also exhibits a similar performance. It is anticipated that basin-averaged, remotely sensed hydrological variables and their standardized forms (e.g., GRACE TWS and GRACE-DSI) are viable alternatives for reconstructing water levels for large river basins affected by hydrological extremes under ENSO influence. Full article
Figures

Figure 1
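The reconstruction described above amounts to calibrating a relationship between a basin-averaged index and observed water level, then scoring it with PCC, RMSE and NSE. A hedged sketch with synthetic data (not actual Yangtze or GRACE values) shows how the three metrics reported in the abstract are computed:

```python
import numpy as np

# Synthetic basin-averaged TWS series and a noisy, linearly related
# water-level series; the numbers are illustrative only.
tws = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
level = 0.5 * tws + 10.0 + np.array([0.05, -0.02, 0.03, -0.04, 0.01, -0.03])

slope, intercept = np.polyfit(tws, level, 1)  # calibrate a linear model
recon = slope * tws + intercept               # reconstructed water level

pcc = np.corrcoef(recon, level)[0, 1]                              # correlation
rmse = np.sqrt(np.mean((recon - level) ** 2))                      # error magnitude
nse = 1.0 - np.sum((level - recon) ** 2) / np.sum((level - level.mean()) ** 2)
print(round(pcc, 3), round(rmse, 3), round(nse, 3))
```

NSE equals 1 for a perfect model and drops below 0 when the reconstruction is worse than simply predicting the mean observed level, which is why the paper treats NSE above 0.81 as a strong result.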

Open Access Article Reduction Method for Mobile Laser Scanning Data
ISPRS Int. J. Geo-Inf. 2018, 7(7), 285; https://doi.org/10.3390/ijgi7070285
Received: 30 May 2018 / Revised: 1 July 2018 / Accepted: 19 July 2018 / Published: 23 July 2018
PDF Full-text (9227 KB) | HTML Full-text | XML Full-text
Abstract
Mobile Laser Scanning (MLS) technology acquires a huge volume of data in a very short time. In many cases, it is reasonable to reduce the size of the dataset by eliminating points in such a way that the dataset, after reduction, meets specific optimization criteria. Various methods exist to decrease the size of a point cloud, such as raw data reduction, Digital Terrain Model (DTM) generalization, or generation of a regular grid. These methods have been successfully applied to data captured by Airborne Laser Scanning (ALS) and Terrestrial Laser Scanning (TLS); however, they have not been fully analyzed on data captured by an MLS system. The paper presents our new approach, called the Optimum Single MLS Dataset method (OptD-single-MLS), which is an algorithm for MLS data reduction. The tests were carried out in two variants: (1) for raw sensory measurements and (2) for a georeferenced 3D point cloud. We found that the OptD-single-MLS method provides a good solution in both variants; therefore, the choice of the reduction variant depends only on the user. Full article
Figures

Figure 1
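As a point of reference for the reduction methods the abstract lists, a regular-grid (voxel) reduction — one of the classical alternatives mentioned, not the OptD-single-MLS algorithm itself — can be sketched as follows; the cell size and toy points are illustrative:

```python
import numpy as np

def grid_reduce(points, cell=1.0):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / cell).astype(np.int64)   # voxel index per point
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv)
    out = np.zeros((inv.max() + 1, points.shape[1]))
    for dim in range(points.shape[1]):
        # Average each coordinate over the points falling in the same voxel.
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out

pts = np.array([[0.1, 0.1, 0.0], [0.2, 0.3, 0.1], [5.0, 5.0, 5.0]])
reduced = grid_reduce(pts, cell=1.0)
print(reduced.shape)  # (2, 3) -- two occupied voxels remain
```

The trade-off the paper optimizes — how many points to drop while still meeting accuracy criteria — corresponds here to the choice of `cell`.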

Open Access Article Semi-Supervised Classification for Hyperspectral Images Based on Multiple Classifiers and Relaxation Strategy
ISPRS Int. J. Geo-Inf. 2018, 7(7), 284; https://doi.org/10.3390/ijgi7070284
Received: 11 May 2018 / Revised: 11 July 2018 / Accepted: 19 July 2018 / Published: 23 July 2018
PDF Full-text (1393 KB) | HTML Full-text | XML Full-text
Abstract
Hyperspectral image (HSI) classification is a fundamental and challenging problem in remote sensing and its various applications. However, it is difficult to perfectly classify remotely sensed hyperspectral data by directly using classification techniques developed in pattern recognition. This is partially owing to a multitude of noise points and the limited number of training samples. Based on multinomial logistic regression (MLR), the local mean-based pseudo nearest neighbor (LMPNN) rule, and the discontinuity preserving relaxation (DPR) method, a semi-supervised method for HSI classification is proposed in this paper. In pre-processing and post-processing, the DPR strategy is adopted to denoise the original hyperspectral data and improve the classification accuracy, respectively. The application of the two classifiers, MLR and LMPNN, can automatically acquire more labeled samples from a few labeled instances per class; this is termed the pre-classification procedure. The final classification result of the HSI is obtained by employing the MLRsub approach. The effectiveness of the proposal is experimentally evaluated on two real hyperspectral datasets, which are widely used to test the performance of HSI classification algorithms. The comparison results using several competing methods confirm that the proposed method is effective, even for limited training samples. Full article
Figures

Figure 1

Open Access Article Shape Similarity Assessment Method for Coastline Generalization
ISPRS Int. J. Geo-Inf. 2018, 7(7), 283; https://doi.org/10.3390/ijgi7070283
Received: 12 May 2018 / Revised: 16 July 2018 / Accepted: 19 July 2018 / Published: 23 July 2018
PDF Full-text (3928 KB) | HTML Full-text | XML Full-text
Abstract
Although shape similarity is one fundamental element of coastline generalization quality, related research is still inadequate. Consistent with the hierarchical pattern of shape recognition, the Dual-side Bend Forest Shape Representation Model is presented by reorganizing the coastline into bilateral bend forests, which are made of continuous root-bends based on Constrained Delaunay Triangulation and Convex Hull. Subsequently, the shape contribution ratio of each level in the model is expressed by its area distribution in the model. Then, the shape similarity assessment is conducted on the model in a top-down, layer-by-layer pattern. Comparative experiments between the presented method and the Length Ratio, Hausdorff Distance, and Turning Function methods show the improvements of the presented method over the others: (1) the hierarchical shape representation model can effectively distinguish shape features of different layers on both sides, which is consistent with shape recognition; (2) it is usable and stable across coastlines and scales; and (3) it is sensitive to changes in main shape features caused by coastline generalization. Full article
Figures

Figure 1

Open Access Article Assessing the Impacts of Streamside Ordinance Protection on the Spatial and Temporal Variability in Urban Riparian Vegetation
ISPRS Int. J. Geo-Inf. 2018, 7(7), 282; https://doi.org/10.3390/ijgi7070282
Received: 31 May 2018 / Revised: 6 July 2018 / Accepted: 19 July 2018 / Published: 23 July 2018
PDF Full-text (2746 KB) | HTML Full-text | XML Full-text
Abstract
Preserving riparian vegetation is important for maintaining water quality and riparian functions. Streamside protection ordinances have been widely established in many rapidly urbanizing areas; however, there has been a lack of assessment of the effectiveness of such ordinances. A study was conducted to determine the effectiveness of riparian vegetation preservation with and without ordinance protection. SPOT imagery was used to classify landscape changes over time (1992 through 2012) across multiple jurisdictions and pre- and post-ordinance implementation periods. Results indicated that the spatial and temporal patterns of vegetation change differed by administrative area and ordinance boundary. The rate of tree loss and gains in developed lands in ordinance-protected areas generally increased following implementation of ordinances, but at a lower rate than in non-ordinance areas. These findings suggest that spatial and temporal monitoring of riparian ordinance implementation across adjacent jurisdictions is important to ensure the full effect of ordinance protection on stream systems. Such monitoring and assessments can be used by local decision makers to adapt existing ordinances or to develop new ones. Full article
(This article belongs to the Special Issue Urban Environment Mapping Using GIS)
Figures

Figure 1

Open Access Article Using Eye Tracking to Evaluate the Usability of Flow Maps
ISPRS Int. J. Geo-Inf. 2018, 7(7), 281; https://doi.org/10.3390/ijgi7070281
Received: 26 May 2018 / Revised: 11 July 2018 / Accepted: 19 July 2018 / Published: 21 July 2018
PDF Full-text (3178 KB) | HTML Full-text | XML Full-text
Abstract
Flow maps allow users to perceive not only the locations where interactions take place, but also the direction and volume of events. Previous studies have proposed numerous methods to produce flow maps; however, how to evaluate the usability of flow maps has not been well documented. In this study, we combined eye-tracking and questionnaire methods to evaluate the usability of flow maps through comparisons between (a) straight lines and curves and (b) line thicknesses and color gradients. The results show that curved flows are more effective than straight flows: maps with curved flows yielded more correct answers, more fixations, and higher percentages of fixations in areas of interest. Furthermore, we find that curved flows require longer finish times but exhibit shorter times to first fixation than straight flows. In addition, we find that using color gradients to indicate flow volume is significantly more effective than using different line thicknesses, which is mainly reflected in more correct answers in the color-gradient group. These empirical findings could help improve the usability of flow maps employed to visualize geo-data. Full article
Figures

Figure 1a

Open Access Article Using Geospatial Analysis and Hydrologic Modeling to Estimate Climate Change Impacts on Nitrogen Export: Case Study for a Forest and Pasture Dominated Watershed in North Carolina
ISPRS Int. J. Geo-Inf. 2018, 7(7), 280; https://doi.org/10.3390/ijgi7070280
Received: 31 May 2018 / Revised: 10 July 2018 / Accepted: 20 July 2018 / Published: 21 July 2018
PDF Full-text (2990 KB) | HTML Full-text | XML Full-text
Abstract
Many watersheds are currently experiencing streamflow and water quality problems caused by excess nitrogen. Given that weather is a major driver of nitrogen transport through watersheds, the objective of this study was to predict climate change impacts on streamflow and nitrogen export. A forest- and pasture-dominated watershed in the North Carolina Piedmont region was used as the study area. A physically based Soil and Water Assessment Tool (SWAT) model, parameterized using geospatial data layers, and spatially downscaled temperature and precipitation estimates from eight different General Circulation Models (GCMs) were used for this study. While temperature change predictions are fairly consistent across the GCMs for the study watershed, there is significant variability in precipitation change predictions across the GCMs, and this leads to uncertainty about future conditions within the watershed. However, when the downscaled GCM projections were taken as a model ensemble, the results suggest that the high and low emission scenarios would result in an average increase in streamflow of 14.1% and 12.5%, respectively, and a decrease in inorganic nitrogen export of 12.1% and 8.5%, respectively, by the end of the century. The results also show clear seasonal patterns, with streamflow and nitrogen loading both increasing in fall and winter months by 97.8% and 50.8%, respectively, and decreasing by 20.2% and 35.5%, respectively, in spring and summer months by the end of the century. Full article
Figures

Graphical abstract

Open Access Article 3D WebGIS: From Visualization to Analysis. An Efficient Browser-Based 3D Line-of-Sight Analysis
ISPRS Int. J. Geo-Inf. 2018, 7(7), 279; https://doi.org/10.3390/ijgi7070279
Received: 30 May 2018 / Revised: 6 July 2018 / Accepted: 19 July 2018 / Published: 21 July 2018
PDF Full-text (1385 KB) | HTML Full-text | XML Full-text
Abstract
3D WebGIS systems have been mentioned in the literature almost since the beginning of the graphical web era in the late 1990s. The potential use of 3D WebGIS is linked to a wide range of scientific and application domains, such as planning, controlling, tracking or simulation in crisis management, military mission planning, urban information systems, energy facilities or cultural heritage management, to name a few. Nevertheless, many applications or research prototypes titled 3D WebGIS or similar are mainly about 3D visualization of GIS data or the visualization of analysis results, rather than about performing the 3D analysis itself online. This research paper aims to take a step in the direction of web-based 3D geospatial analysis. It describes how to overcome speed and memory restrictions in web-based data management by adapting optimization strategies developed earlier for web-based 3D visualization. These are applied in a holistic way in the context of a fully 3D line-of-sight computation over several layers with split (tiled) and unsplit (static) data sources. Different optimization approaches are combined and evaluated to enable efficient client-side analysis and real 3D WebGIS functionality using new web technologies such as HTML5 and WebGL. Full article
(This article belongs to the Special Issue Web and Mobile GIS)
Figures

Figure 1

Open Access Article The Impact of the Parameterisation of Physiographic Features of Urbanised Catchment Areas on the Spatial Distribution of Components of the Water Balance Using the WetSpass Model
ISPRS Int. J. Geo-Inf. 2018, 7(7), 278; https://doi.org/10.3390/ijgi7070278
Received: 22 May 2018 / Revised: 18 July 2018 / Accepted: 19 July 2018 / Published: 21 July 2018
PDF Full-text (4906 KB) | HTML Full-text | XML Full-text
Abstract
An analysis was conducted of the activity of individual homogeneous Hydrological Response Units (HRUs) and their impact on the components of the spatially distributed water balance, based on the example of two urbanised catchments of the city of Poznań (Western Poland). The water balance was developed using the WetSpass model and GIS spatial data, based on hydrometeorological data from the reference period of 1961–2000, including the land usage changes and precipitation changes expected by 2025 in the city. The catchments were parameterised with reference to land usage, soil permeability, terrain declivities and groundwater levels in summer and winter. The dependence between HRUs and their impact on components of the water balance was determined. Water balance forecasts showed two-way changes in the components for approximately 12% of the catchments. A significant increase in surface runoff (by 20–30 mm/HRU) at the expense of reduced effective infiltration (by 15–20 mm/HRU) was determined for arable land intended for development. An increase in infiltration and evapotranspiration at the expense of reduced surface runoff is forecast for areas designated for urban afforestation. Changes in the water balance components reflected the tendency towards increased atmospheric precipitation within the city until 2025. Changes in the landscape resulting from urban expansion may lead to detrimental hydrological effects, such as accumulation of surface runoff and occurrence of local flash flooding, as confirmed by the simulations carried out using the WetSpass model. The results may contribute to a more accurate understanding of the impact of urban landscape modification patterns on the water balance at the regional and local scale. Full article
Figures

Figure 1

Open Access Article Employing Incremental Outlines for OpenStreetMap Data Updating
ISPRS Int. J. Geo-Inf. 2018, 7(7), 277; https://doi.org/10.3390/ijgi7070277
Received: 23 April 2018 / Revised: 6 July 2018 / Accepted: 19 July 2018 / Published: 20 July 2018
PDF Full-text (8533 KB) | HTML Full-text | XML Full-text
Abstract
The updating of changed information plays a significant role in ensuring the quality of OpenStreetMap; this is usually completed by re-mapping the whole changed objects, with a high degree of uncertainty. The incremental object-based approach provides opportunities to reduce the unreliability of the data, while challenges of data inaccuracy and redundancy remain. This paper provides an incremental outline-based approach for OpenStreetMap data updating to solve this issue. First, incremental outlines are delineated from the changed objects and distinguished through a spatial classification. Then, attribute information corresponding to the incremental outlines is proposed to assist in describing the physical changes. Finally, through a geometric calculation based on both the spatial and attribute information, updating operations are constructed with a variety of rules to activate the data updating process. The proposed approach was verified by updating an area in the OpenStreetMap datasets. The result shows that the incremental outline-based updating approach can reduce both the time and storage costs compared to incremental objects, and further improve data quality in the updating process. Full article
Figures

Figure 1

Open Access Article Design and Implementation of a 4D Web Application for Analytical Visualization of Smart City Applications
ISPRS Int. J. Geo-Inf. 2018, 7(7), 276; https://doi.org/10.3390/ijgi7070276
Received: 8 June 2018 / Revised: 28 June 2018 / Accepted: 6 July 2018 / Published: 12 July 2018
Cited by 1 | PDF Full-text (7463 KB) | HTML Full-text | XML Full-text
Abstract
Contemporary developments in computer hardware and software, WebGIS and geo-web services, as well as the availability of semantic 3D city models, facilitate flexible and dynamic implementation of web applications. The aim of this paper is to introduce 4D CANVAS, a web-based application for dynamic visualization of 3D geospatial data for improved decision making in smart city applications. It is based on the Cesium Virtual Globe, an open-source JavaScript library developed with HTML5 and WebGL. First, different data formats such as JSON, GeoJSON, Cesium Markup Language (CZML), and 3D Tiles are evaluated for their suitability in 4D visualization applications. Then, an interactive Graphical User Interface (GUI) is built, observing cartographic standards, to view, manage, understand and explore different simulation outputs at multiple spatial (3D surface of buildings) and temporal (hourly, daily, monthly) resolutions. In this regard, multiple tools, such as aggregation and data classification, are developed utilizing JavaScript libraries. As a proof of concept, two energy simulations and their outputs of different spatial and temporal resolutions are demonstrated for five Asian and European cities. Finally, 4D CANVAS is deployed on both desktop and multi-touch screens. The proposed application allows easy integration of other geospatial simulation results, thereby helping users from different sectors to explore them interactively in 4D. Full article
(This article belongs to the Special Issue Web and Mobile GIS)
Figures

Figure 1

Open Access Article Temporal Variations and Associated Remotely Sensed Environmental Variables of Dengue Fever in Chitwan District, Nepal
ISPRS Int. J. Geo-Inf. 2018, 7(7), 275; https://doi.org/10.3390/ijgi7070275
Received: 29 May 2018 / Revised: 3 July 2018 / Accepted: 7 July 2018 / Published: 12 July 2018
PDF Full-text (1584 KB) | HTML Full-text | XML Full-text
Abstract
Dengue fever is one of the leading public health problems of tropical and subtropical countries across the world. The transmission dynamics of dengue fever are largely affected by meteorological and environmental factors, and its temporal pattern generally peaks in hot-wet periods of the year. Despite this continuously growing problem, the temporal dynamics of dengue fever and the associated potential environmental risk factors are not documented in Nepal. The aim of this study was to fill this research gap by utilizing epidemiological and earth observation data in Chitwan district, one of the frequent dengue outbreak areas of Nepal. We used laboratory-confirmed monthly dengue cases as a dependent variable and a set of remotely sensed meteorological and environmental variables as explanatory factors to describe their temporal relationship. Descriptive statistics, cross-correlation analysis, and the Poisson generalized additive model were used for this purpose. Results revealed that dengue fever is significantly associated with satellite-estimated precipitation, the normalized difference vegetation index (NDVI), and the enhanced vegetation index (EVI), both synchronously and with different lag periods. However, the associations were weak and insignificant with immediate daytime land surface temperature (dLST) and nighttime land surface temperature (nLST), but were significant after 4–5 months. In conclusion, the selected Poisson generalized additive model based on precipitation, dLST, and NDVI explained the largest variation in the monthly distribution of dengue fever with minimum Akaike’s Information Criterion (AIC) and maximum R-squared. The best-fit model further improved significantly after including delayed effects in the model. The predicted cases were reasonably accurate based on the comparison of 10-fold cross-validation and observed cases. The lagged associations found in this study could be useful for the development of remote sensing-based early warning forecasts of dengue fever. Full article
(This article belongs to the Special Issue Geoprocessing in Public and Environmental Health)
Figures

Figure 1
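The lag analysis mentioned in the abstract rests on cross-correlating a monthly environmental driver with case counts at successive lags and picking the lag with the strongest association. A small synthetic sketch (not the study's data — here cases follow the driver with a built-in two-month delay) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
driver = rng.random(48)                       # 48 months of a synthetic driver
cases = np.roll(driver, 2) + 0.01 * rng.random(48)  # response lagged by 2 months

def lag_corr(x, y, lag):
    # Correlate y with x shifted back by `lag` months.
    return np.corrcoef(x[: len(x) - lag], y[lag:])[0, 1]

# Scan lags 0..5 and pick the one with the highest correlation.
best = max(range(6), key=lambda l: lag_corr(driver, cases, l))
print(best)  # 2 -- the built-in lag is recovered
```

In the study, significance at each lag would additionally be tested before feeding the lagged variable into the Poisson generalized additive model.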

Open Access Article The Classification of Noise-Afflicted Remotely Sensed Data Using Three Machine-Learning Techniques: Effect of Different Levels and Types of Noise on Accuracy
ISPRS Int. J. Geo-Inf. 2018, 7(7), 274; https://doi.org/10.3390/ijgi7070274
Received: 8 May 2018 / Revised: 12 June 2018 / Accepted: 26 June 2018 / Published: 12 July 2018
PDF Full-text (2308 KB) | HTML Full-text | XML Full-text
Abstract
Remotely sensed data are often adversely affected by many types of noise, which influence the classification result. Supervised machine-learning (ML) classifiers such as random forest (RF), support vector machine (SVM), and back-propagation neural network (BPNN) are broadly reported to improve robustness against noise. However, only a few comparative studies that may help investigate this robustness have been reported. An important contribution, going beyond previous studies, is that we perform the analyses by employing the most well-known and broadly implemented packages of the three classifiers and control their settings to represent users’ actual applications. This facilitates an understanding of the extent to which the noise types and levels in remotely sensed data impact classification accuracy using ML classifiers. Using those implementations, we classified the land cover data from a satellite image that was separately afflicted by seven levels of zero-mean Gaussian, salt-and-pepper, and speckle noise. The modeling data and features were strictly controlled. Finally, we discuss how each noise type affects the accuracy obtained from each classifier and the robustness of the classifiers to noise in the data. This may enhance our understanding of the relationship between noise, supervised ML classifiers, and remotely sensed data. Full article
Figures

Graphical abstract
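Two of the noise models named in the abstract — zero-mean Gaussian and salt-and-pepper — are simple to reproduce; this sketch applies them to a synthetic image band (the noise levels and image size are arbitrary illustrative choices, not those of the study):

```python
import numpy as np

rng = np.random.default_rng(42)

def add_gaussian(band, sigma):
    # Zero-mean additive Gaussian noise with standard deviation sigma.
    return band + rng.normal(0.0, sigma, band.shape)

def add_salt_pepper(band, frac):
    # Flip a random fraction of pixels to the band's min ("pepper")
    # or max ("salt") value.
    noisy = band.copy()
    mask = rng.random(band.shape) < frac
    noisy[mask] = rng.choice([band.min(), band.max()], size=mask.sum())
    return noisy

band = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)  # synthetic band
g = add_gaussian(band, sigma=0.1)
sp = add_salt_pepper(band, frac=0.05)
print(g.shape, float((sp != band).mean()))  # ~5% of pixels afflicted
```

Stepping `sigma` or `frac` through seven values would reproduce the seven noise levels per type that the classifiers are then trained and evaluated against.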

Open Access Article Using High-Performance Computing to Address the Challenge of Land Use/Land Cover Change Analysis on Spatial Big Data
ISPRS Int. J. Geo-Inf. 2018, 7(7), 273; https://doi.org/10.3390/ijgi7070273
Received: 30 April 2018 / Revised: 28 June 2018 / Accepted: 6 July 2018 / Published: 11 July 2018
PDF Full-text (6679 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Land use/land cover change (LUCC) analysis is a fundamental issue in regional and global geography that can accurately reflect the diversity of landscapes and detect differences or changes on the earth’s surface. However, a very heavy computational load is often unavoidable, especially when processing multi-temporal land cover data with fine spatial resolution using complicated procedures, which can take a long time when performing LUCC analysis over large areas. This paper employs a graph-based spatial decomposition that represents the computational loads as graph vertices and edges, and then uses balanced graph partitioning to decompose the LUCC analysis on spatial big data. For the decomposed tasks, a stream scheduling method is developed to exploit the parallelism in data moving, clipping, overlay analysis, area calculation and transition matrix building. Finally, a change analysis is performed on the land cover data of China from 2015 to 2016, with each piece of temporal data containing approximately 260 million complex polygons. The analysis took less than 6 h on a cluster of 15 workstations, a task that would take more than two weeks without any optimization. Full article
(This article belongs to the Special Issue Geospatial Big Data and Urban Studies)
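The balanced decomposition idea can be illustrated with a small load-balancing sketch. The paper partitions a graph whose vertices and edges carry computational loads; the greedy longest-processing-time heuristic below is a hypothetical stand-in that balances vertex loads only and ignores edge cuts.

```python
def balanced_partition(loads, k):
    """Greedy LPT: assign each vertex (heaviest first) to the lightest part.

    `loads` maps a vertex id to its estimated computational load; returns a
    dict part-index -> list of vertex ids.  A real graph partitioner would
    additionally minimise the weight of edges cut between parts.
    """
    parts = {i: [] for i in range(k)}
    totals = [0] * k
    for v, w in sorted(loads.items(), key=lambda kv: -kv[1]):
        i = totals.index(min(totals))  # currently lightest part
        parts[i].append(v)
        totals[i] += w
    return parts
```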
Open AccessArticle Data Extraction Algorithm for Energy Performance Certificates (EPC) to Estimate the Maximum Economic Damage of Buildings for Economic Impact Assessment of Floods in Flanders, Belgium
ISPRS Int. J. Geo-Inf. 2018, 7(7), 272; https://doi.org/10.3390/ijgi7070272
Received: 17 April 2018 / Revised: 9 May 2018 / Accepted: 18 June 2018 / Published: 10 July 2018
PDF Full-text (2029 KB) | HTML Full-text | XML Full-text
Abstract
Floods cause major disruptions to energy supply and transportation facilities and lead to significant impacts on society, the economy, and the environment. As a result, there is a compelling need for resilience and adaptation against extreme flood events under a changing climate. An accurate focal priority analysis of how societies can adapt to these changing events can provide insight into practical solutions. Besides the social, ecological, and cultural impact assessments of floods, an accurate economic impact analysis is required to define priority zones and priority measures. Unfortunately, studies show that economic impact assessments can be highly inaccurate because of the margin of error in the economic value estimation of residential and industrial buildings, which account for a large part of the total economic damage value. Therefore, tools that can accurately estimate the maximum economic damage value (or replacement value) of residential and industrial buildings are imperative. This paper outlines a methodology to estimate the maximum economic value of buildings by using a data extraction algorithm for Energy Performance Certificates (EPC), through which the replacement value can be calculated for all buildings in Flanders and, potentially, across Europe. Full article
Open AccessFeature PaperArticle LandQv2: A MapReduce-Based System for Processing Arable Land Quality Big Data
ISPRS Int. J. Geo-Inf. 2018, 7(7), 271; https://doi.org/10.3390/ijgi7070271
Received: 25 May 2018 / Revised: 24 June 2018 / Accepted: 6 July 2018 / Published: 10 July 2018
Cited by 1 | PDF Full-text (8401 KB) | HTML Full-text | XML Full-text
Abstract
Arable land quality (ALQ) data are a foundational resource for national food security. With the rapid development of spatial information technologies, the annual nationwide acquisition and update of ALQ data have become more accurate and faster. ALQ data are mainly vector-based spatial big data in the ESRI (Environmental Systems Research Institute) shapefile format. Although the shapefile is the most common GIS vector data format, the usage of ALQ data is unfortunately very constrained by their massive size and the limited capabilities of traditional applications. To tackle these issues, this paper introduces LandQv2, a MapReduce-based parallel processing system for ALQ big data. The core of LandQv2 comprises four key technologies: data preprocessing, the distributed R-tree index, the spatial range query, and map tile pyramid model-based visualization. Accordingly, ALQ big data are first transformed by a MapReduce-based parallel algorithm from the ESRI shapefile format to the GeoCSV file format in HDFS (Hadoop Distributed File System); then, spatial coding-based partitioning and R-tree indexing support the spatial range query operation. In addition, the visualization of ALQ big data with a GIS (Geographic Information System) web API (Application Programming Interface) uses a MapReduce program to generate a single image or pyramid tiles for big-data display. Finally, a set of experiments on a live system deployed on a cluster of machines shows the efficiency and scalability of the proposed system. All of these functions are integrated into SpatialHadoop, and the approach can also efficiently support other distributed spatial big data systems. Full article
(This article belongs to the Special Issue Distributed and Parallel Architectures for Spatial Data)
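The map tile pyramid used for visualization follows the standard Web-Mercator tiling scheme; a minimal sketch of the coordinate-to-tile mapping (the general formula, not LandQv2's own code):

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Map a WGS84 coordinate to its Web-Mercator tile (x, y) at `zoom`.

    At zoom z the world is a 2^z x 2^z grid of tiles; a MapReduce job can
    group features by this key to render each tile independently.
    """
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y
```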
Open AccessArticle Shared Execution Approach to ε-Distance Join Queries in Dynamic Road Networks
ISPRS Int. J. Geo-Inf. 2018, 7(7), 270; https://doi.org/10.3390/ijgi7070270
Received: 30 April 2018 / Revised: 1 July 2018 / Accepted: 6 July 2018 / Published: 10 July 2018
PDF Full-text (4233 KB) | HTML Full-text | XML Full-text
Abstract
Given a threshold distance ε and two object sets R and S in a road network, an ε-distance join query finds object pairs from R × S that are within the threshold distance ε (e.g., find passenger and taxicab pairs within a five-minute driving distance). Although this is a well-studied problem in the Euclidean space, little attention has been paid to dynamic road networks where the weights of road segments (e.g., travel times) are frequently updated and the distance between two objects is the length of the shortest path connecting them. In this work, we address the problem of ε-distance join queries in dynamic road networks by proposing an optimized ε-distance join algorithm called EDISON, the key concept of which is to cluster adjacent objects of the same type into a group, and then to optimize shared execution for the group to avoid redundant network traversal. The proposed method is intuitive and easy to implement, thereby allowing its simple integration with existing range query algorithms in road networks. We conduct an extensive experimental study using real-world roadmaps to show the efficiency and scalability of our shared execution approach. Full article
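The core operation can be sketched as a truncated Dijkstra search per query object. EDISON's contribution is to share one such traversal across a whole group of adjacent objects; the naive per-object version below deliberately omits that sharing, and it assumes objects sit on network nodes for simplicity.

```python
import heapq

def within_eps(graph, source, eps):
    """Nodes reachable from `source` within network distance `eps`
    (Dijkstra, pruned as soon as a tentative distance exceeds eps)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd <= eps and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def eps_distance_join(graph, R, S, eps):
    """All (r, s) pairs with network distance <= eps; one truncated search
    per r, i.e. the redundant traversal that shared execution avoids."""
    s_nodes = set(S)
    pairs = []
    for r in R:
        reach = within_eps(graph, r, eps)
        pairs.extend((r, s) for s in s_nodes if s in reach)
    return pairs
```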
Open AccessArticle Geospatial Analysis and the Internet of Things
ISPRS Int. J. Geo-Inf. 2018, 7(7), 269; https://doi.org/10.3390/ijgi7070269
Received: 1 June 2018 / Revised: 25 June 2018 / Accepted: 3 July 2018 / Published: 10 July 2018
Cited by 1 | PDF Full-text (2935 KB) | HTML Full-text | XML Full-text
Abstract
As the Internet of Things (IoT) penetrates our everyday lives and is used to address a wide variety of real-life challenges, the location of things becomes an important parameter: knowing exactly where the physical world is measured is highly relevant for understanding local environmental conditions and for developing powerful, personalized, and context-aware location-based services and applications. This survey paper maps and analyzes the IoT along its location dimension, categorizing IoT applications and projects according to the geospatial analytical methods they perform. The survey investigates the opportunities of location-aware IoT and examines the potential of geospatial analysis in this research area. Full article
(This article belongs to the Special Issue Geospatial Applications of the Internet of Things (IoT))
Open AccessArticle Landslide Susceptibility Assessment at Mila Basin (Algeria): A Comparative Assessment of Prediction Capability of Advanced Machine Learning Methods
ISPRS Int. J. Geo-Inf. 2018, 7(7), 268; https://doi.org/10.3390/ijgi7070268
Received: 7 May 2018 / Revised: 3 July 2018 / Accepted: 7 July 2018 / Published: 10 July 2018
Cited by 2 | PDF Full-text (17877 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Landslide risk prevention requires the delineation of landslide-prone areas as accurately as possible. Therefore, selecting a method or a technique that is capable of providing the highest landslide prediction capability is highly important. The main objective of this study is to assess and compare the prediction capability of advanced machine learning methods for landslide susceptibility mapping in the Mila Basin (Algeria). First, a geospatial database was constructed from various sources. The database contains 1156 landslide polygons and 16 conditioning factors (altitude, slope, aspect, topographic wetness index (TWI), landforms, rainfall, lithology, stratigraphy, soil type, soil texture, landuse, depth to bedrock, bulk density, distance to faults, distance to hydrographic network, and distance to road networks). Subsequently, the database was randomly split into training and validation sets using five repetitions of 10-fold cross-validation. Using the training and validation sets, five landslide susceptibility models were constructed, assessed, and compared using Random Forest (RF), Gradient Boosting Machine (GBM), Logistic Regression (LR), Artificial Neural Network (NNET), and Support Vector Machine (SVM). The prediction capability of the five landslide models was assessed and compared using the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), overall accuracy (Acc), and the kappa index. Additionally, Wilcoxon signed-rank tests were performed to confirm the statistical significance of the differences among the five machine learning models employed in this study. The results showed that the GBM model has the highest prediction capability (AUC = 0.8967), followed by the RF model (AUC = 0.8957), the NNET model (AUC = 0.8882), the SVM model (AUC = 0.8818), and the LR model (AUC = 0.8575). Therefore, we concluded that GBM and RF are the most suitable for this study area and should be used to produce landslide susceptibility maps. These maps can serve as a technical framework for developing countermeasures and regulatory policies to minimize landslide damage in the Mila Basin. This research demonstrates the benefit of selecting the best advanced machine learning method for landslide susceptibility assessment. Full article
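The evaluation protocol rests on two small building blocks, AUC and k-fold splitting. Hedged sketches of both are below (a rank-sum AUC and a plain index splitter); these illustrate the metrics, not the authors' actual pipeline.

```python
import random

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic:
    the probability that a positive outscores a negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def kfold_indices(n, k, seed):
    """Shuffle 0..n-1 and deal the indices into k folds; repeating with
    several seeds gives repeated k-fold cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]
```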
Open AccessArticle GIS Application to Regional Geological Structure Relationship Modelling Considering Semantics
ISPRS Int. J. Geo-Inf. 2018, 7(7), 267; https://doi.org/10.3390/ijgi7070267
Received: 11 April 2018 / Revised: 26 June 2018 / Accepted: 6 July 2018 / Published: 9 July 2018
PDF Full-text (12575 KB) | HTML Full-text | XML Full-text
Abstract
GIS modelling, which is often employed to establish the abstract structural forms of geological phenomena and their structural relationships, is of great importance for describing, expressing, and analysing geological structures accurately and intuitively. However, current GIS modelling schemes value structural forms over structural relationships, and existing geological semantic expressions in the modelling of geological relationships are incomplete. Therefore, this paper organizes geological relationships on three levels: geological phenomena, geological objects, and geological spatial objects. (1) Based on their definitions, geological relationships are categorized into internal composition relationships and external combined relationships, for a total of two categories, eight classes, and 27 small groups; (2) the classification system is improved to a total of 33 geological objects by transforming the relationships between geological phenomena into relationships between geological objects; and (3) based on the 27 small groups of geological relationships, through the corresponding geometric and semantic expressions between topological rules and geological rules and between relationship rules and geological rules, internal composition relationships are expressed as topological relationships between geological spatial objects, and external combined relationships as association relationships between geological spatial objects. A GIS model of geological relationships that integrates their geometries and semantics is then built. Finally, taking the Dagang-Danyang section of the Ningzhen mountains as an example, the results show that the proposed GIS modelling method can better store and express geological phenomena, geological objects, and geological spatial objects in a way that integrates geometry and semantics. Full article
Open AccessArticle Association Rules-Based Multivariate Analysis and Visualization of Spatiotemporal Climate Data
ISPRS Int. J. Geo-Inf. 2018, 7(7), 266; https://doi.org/10.3390/ijgi7070266
Received: 24 May 2018 / Revised: 29 June 2018 / Accepted: 3 July 2018 / Published: 9 July 2018
PDF Full-text (20792 KB) | HTML Full-text | XML Full-text
Abstract
Understanding atmospheric phenomena involves analysis of large-scale spatiotemporal multivariate data. The complexity and heterogeneity of such data pose a significant challenge in discovering and understanding the association between multiple climate variables. To tackle this challenge, we present an interactive heuristic visualization system that supports climate scientists and the public in their exploration and analysis of atmospheric phenomena of interest. Three techniques are introduced: (1) web-based spatiotemporal climate data visualization; (2) multiview and multivariate scientific data analysis; and (3) data mining-enabled visual analytics. The Arctic System Reanalysis (ASR) data are used to demonstrate and validate the effectiveness and usefulness of our method through a case study of “The Great Arctic Cyclone of 2012”. The results show that different variables have strong associations near the polar cyclone area. This work also provides techniques for identifying multivariate correlation and for better understanding the driving factors of climate phenomena. Full article
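The data-mining component rests on association-rule metrics; a minimal sketch of support and confidence over discretized climate states (the state names below are invented for illustration, not taken from the ASR data):

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent over a
    list of event sets, e.g. discretized climate states per grid cell and
    time step."""
    a, c = set(antecedent), set(consequent)
    n_a = sum(1 for t in transactions if a <= t)
    n_ac = sum(1 for t in transactions if (a | c) <= t)
    support = n_ac / len(transactions)
    confidence = n_ac / n_a if n_a else 0.0
    return support, confidence
```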
Open AccessArticle Model of Point Cloud Data Management System in Big Data Paradigm
ISPRS Int. J. Geo-Inf. 2018, 7(7), 265; https://doi.org/10.3390/ijgi7070265
Received: 30 April 2018 / Revised: 26 June 2018 / Accepted: 3 July 2018 / Published: 9 July 2018
PDF Full-text (9321 KB) | HTML Full-text | XML Full-text
Abstract
Modern geoinformation technologies for collecting and processing data, such as laser scanning or photogrammetry, can generate point clouds with billions of points. They provide abundant information that can be used for different types of analysis. Due to its characteristics, the point cloud is often viewed as a special type of geospatial data. In order to efficiently manage such volumes of data, techniques based on a computer cluster have to be used. The Apache Spark framework has proven to be a solution for efficient processing of large volumes of data. This paper thoroughly examines the representation of the point cloud data type using Apache Spark constructs. The common operations over point clouds, range queries and k-nearest-neighbour (kNN) queries, are implemented using the Apache Spark DataFrame Application Programming Interface (API), which enabled the design of point-cloud-related user-defined types (UDTs) and user-defined functions (UDFs). The structure of the point cloud for efficient storage in Big Data key-value stores is analyzed and described. The presented methods were compared to the PostgreSQL RDBMS, and the results are discussed. Full article
(This article belongs to the Special Issue Geospatial Big Data and Urban Studies)
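The kNN query logic that such a UDF would encapsulate can be sketched independently of Spark (plain Python below; the actual system implements this against the DataFrame API, which is not reproduced here):

```python
import heapq

def knn(points, query, k):
    """The k nearest neighbours of `query` among (id, x, y, z) tuples by
    squared Euclidean distance; heapq.nsmallest keeps memory at O(k),
    which matters when each partition holds millions of points."""
    qx, qy, qz = query
    def d2(p):
        _, x, y, z = p
        return (x - qx) ** 2 + (y - qy) ** 2 + (z - qz) ** 2
    return [pid for pid, *_ in heapq.nsmallest(k, points, key=d2)]
```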
Open AccessArticle Structured Knowledge Base as Prior Knowledge to Improve Urban Data Analysis
ISPRS Int. J. Geo-Inf. 2018, 7(7), 264; https://doi.org/10.3390/ijgi7070264
Received: 14 May 2018 / Revised: 27 June 2018 / Accepted: 3 July 2018 / Published: 7 July 2018
PDF Full-text (2643 KB) | HTML Full-text | XML Full-text
Abstract
Urban computing at present often relies on a large number of manually extracted features. This may require a considerable amount of feature engineering, and the procedure may miss certain hidden features and relationships among data items. In this paper, we propose a method to use structured prior knowledge in the form of knowledge graphs to improve the precision and interpretability in applications such as optimal store placement and traffic accident inference. Specifically, we integrate sub-graph feature extraction, sub-knowledge graph gated neural networks, and kernel-based knowledge graph convolutional neural networks as ways of incorporating large urban knowledge graphs into a fully end-to-end learning system. Experiments using data from several large cities showed that our method outperforms the baseline methods. Full article
(This article belongs to the Special Issue Geospatial Big Data and Urban Studies)
Open AccessArticle Mapping Creative Spaces in Omaha, NE: Resident Perceptions versus Creative Firm Locations
ISPRS Int. J. Geo-Inf. 2018, 7(7), 263; https://doi.org/10.3390/ijgi7070263
Received: 30 May 2018 / Revised: 26 June 2018 / Accepted: 3 July 2018 / Published: 4 July 2018
PDF Full-text (15541 KB) | HTML Full-text | XML Full-text
Abstract
In an era increasingly shaped by automation and globalization, industries that rely on creativity, innovation, and knowledge-generation are considered key drivers of economic growth in the U.S. and other advanced capitalist economies. This study examines the spatial distribution of creative firms and how they might align with perceptions of creativity in Omaha, Nebraska, a mid-sized U.S. urban area. Utilizing a survey, a participant mapping exercise, and geospatial analyses, the primary goal was to identify formal and informal spaces of creative production and consumption, and to determine to what extent the location of creative firms (both arts/media- and science/technology-focused) may shape perceptions of creativity across the urban landscape. The results suggest that local area residents primarily view dense, vibrant, mixed-use, and often historic urban neighborhoods as particularly creative, whether or not they contain a dense concentration of creative firms. Similarly, creative firms were more spatially diffuse than the clusters of “creative locations” identified by residents, and were more frequently found in suburban locations. Furthermore, while there was no discernible difference between “creative” and “non-creative” workers, science/technology firms were more likely than arts/media firms to be found in suburban locations, and less likely to be associated with perceptions of creativity in Omaha. Full article
(This article belongs to the Special Issue Urban Environment Mapping Using GIS)
Open AccessArticle Historical Collaborative Geocoding
ISPRS Int. J. Geo-Inf. 2018, 7(7), 262; https://doi.org/10.3390/ijgi7070262
Received: 6 April 2018 / Revised: 31 May 2018 / Accepted: 26 June 2018 / Published: 4 July 2018
PDF Full-text (3873 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The latest developments in the field of digital humanities have increasingly enabled the construction of large data sets which can be easily accessed and used. These data sets often contain indirect spatial information, such as historical addresses. Historical geocoding is the process of transforming indirect spatial information into direct locations which can be placed on a map, thus allowing for spatial analysis and cross-referencing. There are many geocoders that work efficiently for current addresses. However, these do not tackle temporal information, and usually follow a strict hierarchy (country, city, street, house number, etc.) which is difficult, if not impossible, to use with historical data. Historical data is filled with uncertainty (pertaining to temporal, textual, and positional accuracy, as well as to the reliability of historical sources) which can neither be ignored nor entirely resolved. Our open source, open data, and extensible solution for geocoding is based on extracting, from historical maps, a large number of simple gazetteers composed of geohistorical objects. Geocoding a historical address becomes the process of finding one or several geohistorical objects in the gazetteers which best match the historical address searched by the user. The matching criteria are customisable, weighted, and include several dimensions (fuzzy string, fuzzy temporal, level of detail, positional accuracy). Since our goal is to facilitate historical work, we also put forward web-based user interfaces which help geocode (one address or batch mode) and display results over current or historical maps. Geocoded results can then be checked and edited collaboratively (no source is modified). The system was tested on the city of Paris, France, for the 19th and 20th centuries. It showed high response rates and worked quickly enough to be used interactively. Full article
(This article belongs to the Special Issue Historic Settlement and Landscape Analysis)
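The weighted, multi-dimensional matching can be sketched with two of the listed dimensions, fuzzy string and fuzzy temporal. The weights and the 0.05-per-year temporal decay below are invented for illustration, not taken from the paper.

```python
from difflib import SequenceMatcher

def match_score(query, candidate, weights=(0.7, 0.3)):
    """Weighted score of a gazetteer candidate against a historical address.

    `query` is (address_string, year); `candidate` is
    (name, (start_year, end_year)).  String similarity uses difflib;
    temporal fit is 1.0 inside the candidate's validity interval and
    decays by 0.05 per year outside it.
    """
    w_str, w_time = weights
    text, year = query
    name, (y0, y1) = candidate
    s = SequenceMatcher(None, text.lower(), name.lower()).ratio()
    if y0 <= year <= y1:
        t = 1.0
    else:
        t = max(0.0, 1.0 - 0.05 * min(abs(year - y0), abs(year - y1)))
    return w_str * s + w_time * t
```

Ranking all gazetteer entries by this score and returning the top few is then the whole geocoding step for one address.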
Open AccessArticle The Influence of the Shape and Size of the Cell on Developing Military Passability Maps
ISPRS Int. J. Geo-Inf. 2018, 7(7), 261; https://doi.org/10.3390/ijgi7070261
Received: 28 May 2018 / Revised: 25 June 2018 / Accepted: 28 June 2018 / Published: 3 July 2018
PDF Full-text (7437 KB) | HTML Full-text | XML Full-text
Abstract
The necessity to divide the analysed area into basic elements independent of the administrative division (cells or pixels, also called primary fields) and use them to prepare thematic maps emerged as early as the end of the 19th century. The automation of map development processes brought a new approach to the function of cells, making them a carrier that facilitates information processing and the presentation of analysis results in studies that often exist only in spatial information systems or on the Internet. Cells are currently used to conduct advanced spatial analyses in practically all areas of application. The aim of the presented research was to analyse the influence of the shape and size of cells on terrain classification results for the purposes of developing military passability maps. The research used an automatic terrain classification method based on an index of passability, calculated for cells of square, triangular, and hexagonal shapes and of different sizes ranging from 100 m to 10,000 m. Indices of passability were determined based on parameters derived from the land cover elements within each of the adopted cells. Because passability maps are mainly developed for military purposes, the study used a standardised vector spatial database, VMap Level 2. The studies demonstrated that, if the surface areas of cells are identical, their shapes do not have a significant influence on the resulting passability map. The authors also determined the sizes of cells that should be adopted for developing passability maps at various levels of accuracy and, consequently, for use at various levels of military command. Full article
(This article belongs to the Special Issue GIS for Safety & Security Management)
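A hedged sketch of how a per-cell index of passability might be computed: the weighted-sum form and the per-class weights below are our assumptions, not the paper's exact formula, and the hexagon-area helper illustrates how cell size can be kept comparable across shapes.

```python
import math

def index_of_passability(cover_fractions, weights):
    """Index of passability of one cell as a weighted sum of land-cover
    fractions (1.0 = fully passable terrain, 0.0 = impassable).  The real
    method derives its parameters from VMap Level 2 land cover elements."""
    return sum(weights[c] * f for c, f in cover_fractions.items())

def hex_area(edge):
    """Area of a regular hexagonal cell with the given edge length; equate
    this with the square cell's area to compare shapes fairly."""
    return 3 * math.sqrt(3) / 2 * edge ** 2
```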
Open AccessArticle Measuring Selection Diversity of Emergency Medical Service for Metro Stations: A Case Study in Beijing
ISPRS Int. J. Geo-Inf. 2018, 7(7), 260; https://doi.org/10.3390/ijgi7070260
Received: 21 May 2018 / Revised: 25 June 2018 / Accepted: 26 June 2018 / Published: 2 July 2018
PDF Full-text (2239 KB) | HTML Full-text | XML Full-text
Abstract
It is important for station managers and emergency medical service (EMS) managers to know which ambulance stations can serve each metro station. EMS managers can use this information to reallocate EMS resources and improve the performance of the EMS system, while station managers can pay particular attention to the safety management of metro stations with less medical rescue support. This paper develops a selection diversity index to address two questions: “How many ambulance stations are available to serve a given metro station in an emergency?” and “Which metro stations are most vulnerable?”. To implement this measure in practice, the set of ambulance stations available to a metro station is defined based on a response-time threshold, and the selection diversity index is proposed to measure the preparedness of medical rescue service for metro stations. As proof of concept, a real-world case is presented to demonstrate the feasibility of the selection diversity index. Full article
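The index can be sketched directly from its definition: count, per metro station, the ambulance stations whose response time is within the threshold. The data layout (nested dicts of travel times in minutes) is an assumption for illustration.

```python
def selection_diversity(travel_times, threshold):
    """For each metro station, the number of ambulance stations whose
    response time is within `threshold` minutes (its available set)."""
    return {m: sum(1 for t in times.values() if t <= threshold)
            for m, times in travel_times.items()}

def most_vulnerable(travel_times, threshold):
    """Metro stations with the fewest available ambulance stations."""
    div = selection_diversity(travel_times, threshold)
    lo = min(div.values())
    return [m for m, d in div.items() if d == lo]
```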
Open AccessArticle Inferencing Human Spatiotemporal Mobility in Greater Maputo via Mobile Phone Big Data Mining
ISPRS Int. J. Geo-Inf. 2018, 7(7), 259; https://doi.org/10.3390/ijgi7070259
Received: 7 June 2018 / Revised: 24 June 2018 / Accepted: 26 June 2018 / Published: 30 June 2018
PDF Full-text (13300 KB) | HTML Full-text | XML Full-text
Abstract
The mobility patterns and trip behavior of people are usually extracted from data collected by traditional survey methods. However, these methods are generally costly and difficult to implement, especially in developing cities with limited resources. The massive amounts of call detail record (CDR) data passively generated by ubiquitous mobile phone usage provide researchers with the opportunity to develop alternative methods that are less expensive and easier and faster to implement than traditional methods. This paper proposes a method based on proven techniques to extract origin–destination (OD) trips from the raw CDR data of mobile phone users and to process the data to capture the mobility of those users. The proposed method was applied to 3.4 million mobile phone users over a 12-day period in Mozambique, and the data were processed to capture the mobility of people living in the Greater Maputo metropolitan area in different time frames (weekdays and weekends). Subsequently, trip generation maps, attraction maps, and the OD matrix of the study area, all of which are practically usable for urban and transportation planning, were generated. Furthermore, spatiotemporal interpolation was applied to all OD trips to reconstruct the population distribution in the study area on an average weekday and weekend. Comparison of the results with actual survey results from the Japan International Cooperation Agency (JICA) indicates that the proposed method achieves acceptable accuracy. The proposed method and study demonstrate the efficacy of mining big data sources, particularly mobile phone CDR data, to infer the spatiotemporal mobility of people in a city and understand their flow patterns, which is valuable information for city planning. Full article
(This article belongs to the Special Issue Geospatial Big Data and Urban Studies)
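A heavily simplified sketch of the OD-extraction step: collapse consecutive pings into stays, keep only stays long enough to count as activity locations, and emit one trip between each pair of consecutive stays. The record layout and the stay-duration threshold are illustrative, not the paper's.

```python
from collections import defaultdict

def extract_od_trips(records, min_stay_minutes=60):
    """Naive OD extraction from CDRs given as
    (user, minutes_since_midnight, cell_id) tuples."""
    by_user = defaultdict(list)
    for user, t, cell in records:
        by_user[user].append((t, cell))
    trips = []
    for user, events in by_user.items():
        events.sort()
        # collapse consecutive pings in the same cell into [arrival, departure, cell]
        stays = []
        for t, cell in events:
            if stays and stays[-1][2] == cell:
                stays[-1][1] = t
            else:
                stays.append([t, t, cell])
        # keep only stays long enough to count as an activity location
        stays = [s for s in stays if s[1] - s[0] >= min_stay_minutes]
        trips += [(user, a[2], b[2]) for a, b in zip(stays, stays[1:])]
    return trips
```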
Open AccessArticle Multi-Objective Optimisation Based Planning of Power-Line Grid Expansions
ISPRS Int. J. Geo-Inf. 2018, 7(7), 258; https://doi.org/10.3390/ijgi7070258
Received: 16 May 2018 / Revised: 15 June 2018 / Accepted: 24 June 2018 / Published: 29 June 2018
PDF Full-text (44249 KB) | HTML Full-text | XML Full-text
Abstract
Germany’s nuclear power phase-out in 2022 will lead to a significant reconstruction of the energy transmission system. Thus, efficiently identifying practical transmission routes with minimum impact on ecological and economic interests is of growing importance. Due to the German public’s sensitivity to grid expansion (especially in the case of overhead lines), the participation and planning process needs to provide a high degree of openness and accountability. Therefore, a new methodological approach for the computer-assisted identification of optimal power-line routes considering planning, ecological, and economic decision criteria is presented. The approach is implemented in a tool-chain that determines transmission line routes (and sets of route alternatives) based on multi-criteria optimisation. Additionally, a decision support system based on common Geographic Information Systems (GIS), consisting of interactive visualisation and exploration of the solution space, is proposed. Full article
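Producing sets of route alternatives from multi-criteria optimisation amounts to keeping the non-dominated (Pareto-optimal) candidates for the decision-maker; a minimal two-criterion sketch (the criteria and values are invented for illustration):

```python
def pareto_front(routes):
    """Non-dominated route alternatives, where each route is
    (name, cost, ecological_impact) and lower is better on both criteria."""
    front = []
    for name, c, e in routes:
        dominated = any(c2 <= c and e2 <= e and (c2 < c or e2 < e)
                        for _, c2, e2 in routes)
        if not dominated:
            front.append(name)
    return front
```

The resulting front is exactly the "set of transmission line route alternatives" a GIS-based decision support system would then let planners explore interactively.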