Open Access Article
Generating Up-to-Date and Detailed Land Use and Land Cover Maps Using OpenStreetMap and GlobeLand30
ISPRS Int. J. Geo-Inf. 2017, 6(4), 125; doi:10.3390/ijgi6040125
Abstract
With the opening up of the Landsat archive, global high resolution land cover maps have begun to appear. However, they often have only a small number of high-level land cover classes and they are static products corresponding to a particular period of time, e.g., the GlobeLand30 (GL30) map for 2010. OpenStreetMap (OSM), in contrast, is a very detailed, dynamically updated spatial database of mapped features from around the world, but it suffers from incomplete coverage and from layers of overlapping features tagged in a variety of ways. Nevertheless, it clearly has potential for land use and land cover (LULC) mapping. The aim of this paper is therefore to demonstrate how OSM can be converted into a LULC map and how this OSM-derived LULC map can then be used, first, to update the GL30 with more recent information and, second, to enhance the information content of its classes. The technique is demonstrated on two study areas where OSM data are available but authoritative data are lacking: Kathmandu, Nepal and Dar es Salaam, Tanzania. The GL30 and its updated and enhanced versions are independently validated using a stratified random sample so that the three maps can be compared. The results show that the updated version of GL30 improves in overall accuracy, since certain classes were not captured well in the original GL30 (e.g., water in Kathmandu and water/wetlands in Dar es Salaam). The enhanced GL30, which contains more detailed urban classes, shows a drop in overall accuracy, possibly due to the increased number of classes, but its advantages include more detailed features, such as the road network, becoming clearly visible.
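At its core, the OSM-to-LULC conversion described here reduces to mapping tagged OSM features onto a small set of land cover classes and resolving stacks of overlapping features by a priority order. A minimal sketch follows; the OSM tag keys are real, but the class names and the priority order are illustrative assumptions, not the paper's actual scheme.

```python
# Illustrative sketch: convert OSM feature tags to coarse LULC classes.
# The OSM keys/values are real tags; the class names and the priority
# order are assumptions, not the mapping actually used in the paper.

# When overlapping features disagree, the class listed first wins.
TAG_TO_LULC = [
    (("natural", "water"), "water"),
    (("waterway", "riverbank"), "water"),
    (("landuse", "residential"), "artificial"),
    (("landuse", "industrial"), "artificial"),
    (("landuse", "forest"), "forest"),
    (("natural", "wood"), "forest"),
    (("landuse", "farmland"), "cultivated"),
    (("landuse", "meadow"), "grassland"),
]

def classify(tags):
    """Return the LULC class for one OSM feature's tag dictionary."""
    for (key, value), lulc in TAG_TO_LULC:
        if tags.get(key) == value:
            return lulc
    return "unclassified"

def classify_overlapping(features):
    """Resolve a stack of overlapping features to a single cell label."""
    classes = [classify(t) for t in features]
    order = [lulc for _, lulc in TAG_TO_LULC]
    ranked = [c for c in classes if c in order]
    return min(ranked, key=order.index) if ranked else "unclassified"

print(classify({"landuse": "forest"}))                      # forest
print(classify_overlapping([{"landuse": "meadow"},
                            {"natural": "water"}]))         # water
```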

Open Access Article
Optimizing Multi-Way Spatial Joins of Web Feature Services
ISPRS Int. J. Geo-Inf. 2017, 6(4), 123; doi:10.3390/ijgi6040123
Abstract
Web Feature Service (WFS) is a widely used spatial web service standard issued by the Open Geospatial Consortium (OGC). In a heterogeneous GIS application, a user can issue a query that relates two or more spatial datasets at different WFS servers. Multi-way spatial joins of WFSs are very expensive in terms of computation and transmission because of the time-consuming interactions between the servers and the client. In this paper, we examine the problems of multi-way spatial joins of WFSs, and we present a client-side optimization approach to generate good execution plans for such queries. The spatial semi-join and area partitioning-based methods are combined to prune away non-candidate objects in processing binary spatial joins, and the filtering rate is used as an index to determine the execution strategy for each sub-area. Two partitioning methods were tested, and the experimental results showed that both are effective if a proper threshold to stop the partitioning is chosen. In processing multi-way spatial joins of WFSs, the filtering rate is used as an indicator to determine the ordering of the binary joins. The optimization method is obviously superior to the other two methods when there are adequate spatial objects involved in the join query, or when more datasets are involved in the join query. Full article
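The filtering rate used above to order binary joins can be sketched as the fraction of one dataset's objects pruned by a bounding-box semi-join against the other dataset, with the most selective join executed first. The datasets and the greedy ordering rule below are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of filtering-rate-based ordering of binary spatial joins.
# Datasets are lists of bounding boxes (minx, miny, maxx, maxy).

def mbr(boxes):
    """Minimum bounding rectangle of a set of boxes."""
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))

def intersects(a, b):
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def filtering_rate(left, right):
    """Fraction of left-side objects pruned by a semi-join against the
    right side's MBR; higher means the join is more selective."""
    r = mbr(right)
    kept = sum(1 for b in left if intersects(b, r))
    return 1.0 - kept / len(left)

def order_joins(datasets, joins):
    """Greedily schedule the most selective join first."""
    return sorted(joins,
                  key=lambda ab: filtering_rate(datasets[ab[0]], datasets[ab[1]]),
                  reverse=True)

datasets = {
    "A": [(0, 0, 1, 1), (10, 10, 11, 11), (20, 20, 21, 21)],
    "B": [(0.5, 0.5, 1.5, 1.5)],
    "C": [(0, 0, 30, 30)],
}
print(order_joins(datasets, [("A", "C"), ("A", "B")]))  # A⋈B first: it prunes more
```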

Open Access Article
Forecasting Urban Vacancy Dynamics in a Shrinking City: A Land Transformation Model
ISPRS Int. J. Geo-Inf. 2017, 6(4), 124; doi:10.3390/ijgi6040124
Abstract
In the past two centuries, many American urban areas have experienced significant expansion, in both populating and depopulating cities. The pursuit of bigger, faster, and more growth-oriented planning parallels a situation in which municipal decline has also been recognized as a global epidemic. In recent decades, many older industrial cities have experienced significant depopulation, job loss, economic decline, and massive increases in vacant and abandoned properties, due primarily to losses in industry and relocating populations. Despite continuous economic decline and depopulation, many of these so-called 'shrinking cities' still pursue growth-oriented planning policies, due partially to an inability to accurately predict future urban growth/decline patterns. This capability is critical to understanding land use alteration patterns and predicting possible future scenarios, in order to develop more proactive land use policies dealing with urban decline and regeneration. In this research, the city of Chicago, Illinois, USA is used as a case site to test an urban land use change model that predicts urban decline in a shrinking city, using vacant land as a proxy. Our approach employs the Land Transformation Model (LTM), which combines Geographic Information Systems and artificial neural networks to forecast land use change. Results indicate that the LTM is a good resource for simulating changes in urban vacant land. Mobility and housing market conditions appear to be the primary variables contributing to decline.

Open Access Article
Spatial and Temporal Analysis of the Mitigating Effects of Industrial Relocation on the Surface Urban Heat Island over China
ISPRS Int. J. Geo-Inf. 2017, 6(4), 121; doi:10.3390/ijgi6040121
Abstract
Urbanization is typically accompanied by the relocation and reconstruction of industrial areas due to limited space and environmental requirements, particularly in the case of a capital city. Shougang Group, one of the largest steel mill operators in China, was relocated from Beijing to Hebei Province. To study the thermal environmental changes at the Shougang industrial site before and after relocation, four Landsat images (from 2000, 2005, 2010 and 2016) were used to calculate the land surface temperature (LST). Using the urban heat island ratio index (URI), we compared the LST values for the four images of the investigated area. Following the relocation of Shougang Group, the URI values decreased from 0.55 in 2005 to 0.21 in 2016, indicating that the surface urban heat island effect in the area was greatly mitigated; we infer that this effect was related to steel production. This study shows that the use of Landsat images to assess industrial thermal pollution is feasible. Accurate and rapid extraction of thermal pollution data by remote sensing offers great potential for the management of industrial pollution sources and distribution, and for technical support in urban planning departments. Full article

Open Access Article
Building an Urban Spatial Structure from Urban Land Use Data: An Example Using Automated Recognition of the City Centre
ISPRS Int. J. Geo-Inf. 2017, 6(4), 122; doi:10.3390/ijgi6040122
Abstract
It has been suggested that the construction of an urban spatial structure typically follows a forward process, from planning and design through to expression, as reflected in both the graphic and text descriptions of urban planning. Although unorthodox, the original status structures can instead be extracted and constructed from an existing urban land-use map. This approach not only provides a methodological foundation for studying the evolution of urban spatial structure and allows a comparative, quantitative analysis between existing and planned conditions, but also offers a theoretical basis for diagnosing failures in scientific decision making during the planning phase. This study attempts to achieve this by identifying the city centre (a typical element of the urban spatial structure) from urban land use data. The city centre is a special region consisting of several units with particular spatial information, including geometric, topological, and thematic attributes. In this paper, we develop a methodology to support the delineation of the city centre that considers these factors. First, using commercial land data, we characterise the city-centre units with a series of indicators covering geometric and thematic attributes, and integrate them into a composite index of "urban centrality". Second, a graph-based spatial clustering method that considers both topological proximity and attribute similarity is designed and used to identify the city centre. The precise boundary of the city centre is subsequently delimited using a shape reconstruction method based on the clustering results. Finally, we present a case study to demonstrate the effectiveness and practicability of the methodology.

Open Access Article
Identifying Witness Accounts from Social Media Using Imagery
ISPRS Int. J. Geo-Inf. 2017, 6(4), 120; doi:10.3390/ijgi6040120
Abstract
This research investigates the use of image category classification to distinguish images posted to social media that are Witness Accounts of an event. Only images depicting observations of the event, captured by micro-bloggers at the event, are considered Witness Accounts. Identifying Witness Accounts from social media is important for services such as news, marketing and emergency response. Automated image category classification is essential due to the large number of images on social media and the interest in identifying witnesses in near real time. This paper begins research on this emerging problem with an established procedure, using a bag-of-words method to create a vocabulary of visual words and a classifier trained to categorize the encoded images. To test the procedure, a set of images was collected from Twitter for case study events: Australian Football League matches. Evaluation shows an overall accuracy of 90%, with precision and recall for both classes exceeding 83%.
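The bag-of-words encoding step in this pipeline can be sketched as follows: each local image descriptor is quantised to its nearest "visual word" in a learned codebook, and the image becomes a normalised word histogram that is fed to the classifier. The tiny 2-D descriptors and the three-word codebook below are illustrative assumptions, standing in for real SIFT-like features and a clustered vocabulary.

```python
# Minimal bag-of-visual-words encoder: quantise descriptors to their
# nearest codebook centre and build a normalised word histogram.

def nearest_word(desc, codebook):
    """Index of the codebook centre closest to one descriptor."""
    dists = [sum((d - c) ** 2 for d, c in zip(desc, centre))
             for centre in codebook]
    return dists.index(min(dists))

def encode(descriptors, codebook):
    """Encode an image's descriptor set as a normalised word histogram."""
    hist = [0] * len(codebook)
    for desc in descriptors:
        hist[nearest_word(desc, codebook)] += 1
    total = sum(hist)
    return [h / total for h in hist]

codebook = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]   # 3 visual words
image_descriptors = [(0.1, 0.2), (0.9, 1.1), (1.2, 0.8), (4.8, 5.1)]
print(encode(image_descriptors, codebook))  # [0.25, 0.5, 0.25]
```

In the full procedure the histogram vectors for labelled training images would then train a binary Witness/non-Witness classifier.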

Open Access Article
A GIS-Based Fuzzy Decision Making Model for Seismic Vulnerability Assessment in Areas with Incomplete Data
ISPRS Int. J. Geo-Inf. 2017, 6(4), 119; doi:10.3390/ijgi6040119
Abstract
Earthquakes are natural disasters that threaten many lives every year, and it is important to estimate seismic damage in advance in order to reduce future losses. However, seismic vulnerability assessment is a complicated problem, especially in areas with incomplete data, due to the uncertainties involved, so it is important to use methods that take these uncertainties into account and handle them. Although different seismic vulnerability assessment methods at the urban scale have been proposed, the purpose of this research is to introduce a new Geographic Information System (GIS)-based model using a modified integration of the Analytical Hierarchy Process (AHP), fuzzy sets theory, and the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) in a vector-based environment. The proposed method emphasizes handling one of the important uncertainties in areas with incomplete data, namely the 'vagueness' of the existing knowledge about the influence of the criteria on seismic vulnerability, which is handled here using fuzzy sets theory. The applicability of the proposed method is tested in a municipality district of Tabriz, which lies in close vicinity to the fault system. It can be concluded that the proposed method contributes to a pragmatic and efficient assessment of physical seismic vulnerability under uncertainty, providing useful information to assist planners in the mitigation and preparation stages in less-studied areas.
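The TOPSIS ranking step at the end of this integration can be sketched compactly: normalise and weight the decision matrix, find the ideal and anti-ideal solutions, and score each alternative by its relative closeness to the ideal. The AHP and fuzzy stages are omitted here, and the decision matrix, weights, and benefit/cost labels are illustrative assumptions.

```python
# A compact classical TOPSIS ranking, one ingredient of the AHP/fuzzy/TOPSIS
# integration described above (the fuzzy and AHP stages are not shown).
import math

def topsis(matrix, weights, benefit):
    """Closeness of each alternative to the ideal solution (higher = better).
    matrix: rows = alternatives, cols = criteria; benefit[j] is True when
    larger values of criterion j are better."""
    ncols = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three hypothetical districts scored on building age (cost criterion)
# and distance to the fault (benefit criterion).
scores = topsis([[30, 2.0], [10, 5.0], [50, 1.0]],
                weights=[0.5, 0.5], benefit=[False, True])
print(scores)  # the second district (newest, farthest from fault) ranks best
```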

Open Access Article
Marine Spatial Data Infrastructure Development Framework: Croatia Case Study
ISPRS Int. J. Geo-Inf. 2017, 6(4), 117; doi:10.3390/ijgi6040117
Abstract
Spatial data infrastructure (SDI) related to marine spatial data is known as marine SDI (MSDI). In this paper, we determine data themes under the MSDI in the order of usefulness and efficiency. The purpose is to streamline and support the prioritisation of data to be further implemented in the MSDI. This is conceptualised using the logic of decision support systems and a multi-criteria analysis approach that integrates components such as data, stakeholders, and users through multi-criteria methods for priority ranking. This research proposes an MSDI development concept and is validated using the Croatian MSDI case. Full article

Open Access Article
An Improved Density-Based Time Series Clustering Method Based on Image Resampling: A Case Study of Surface Deformation Pattern Analysis
ISPRS Int. J. Geo-Inf. 2017, 6(4), 118; doi:10.3390/ijgi6040118
Abstract
Time series clustering algorithms have been widely used to mine the clustering distribution characteristics of real phenomena. However, these algorithms have several limitations. First, they depend heavily on prior knowledge. Second, the algorithms do not simultaneously consider the similarity of spatial locations, spatial-temporal attribute values, and spatial-temporal attribute trends (trends in terms of the change direction and ranges in addition and deletion over time), which are all important similarity measurements. Finally, the calculation cost based on these methods for clustering analysis is becoming increasingly computationally demanding, because the data volume of the image time series data is increasing. In view of these shortcomings, an improved density-based time series clustering method based on image resampling (DBTSC-IR) has been proposed in this paper. The proposed DBTSC-IR has two major parts. In the first part, an optimal resampling scale of the image time series data is first determined to reduce the data volume by using a new scale optimization function. In the second part, the traditional density-based time series clustering algorithm is improved by introducing a density indicator to control the clustering sequences by considering the spatial locations, spatial-temporal attribute values, and spatial-temporal attribute trends. The final clustering analysis is then performed directly on the resampled image time series data by using the improved algorithm. Finally, the effectiveness of the proposed DBTSC-IR is illustrated by experiments on both the simulated datasets and in real applications. The proposed method can effectively and adaptively recognize the spatial patterns with arbitrary shapes of image time series data with consideration of the effects of noise. Full article

Open Access Article
A Standard Indoor Spatial Data Model—OGC IndoorGML and Implementation Approaches
ISPRS Int. J. Geo-Inf. 2017, 6(4), 116; doi:10.3390/ijgi6040116
Abstract
With the recent progress in indoor spatial data modeling, indoor mapping and indoor positioning technologies, several spatial information services for indoor spaces have been provided like for outdoor spaces. In order to support interoperability between indoor spatial information services, IndoorGML was published by OGC (Open Geospatial Consortium) as a standard data model and XML-based exchange format. While the previous standards, such as IFC (Industrial Foundation Classes) and CityGML covering also indoor space, aim at feature modeling, the goal of IndoorGML is to establish a standard basis for the indoor space model. As IndoorGML defines a minimum data model for indoor space, more efforts are required to discover its potential aspects, which are not explicitly explained in the standard document. In this paper, we investigate the implications and potential aspects of IndoorGML and its basic concept of the cellular space model and discuss the implementation issues of IndoorGML for several purposes. In particular, we discuss the issues on cell determination, subspacing and the hierarchical structure of indoor space from the IndoorGML viewpoint. Additionally, we also focus on two important issues: computation of indoor distance and the implementation of indoor context-awareness services based on IndoorGML. We expect that this paper will serve as a technical document for better understanding of IndoorGML throughout these discussions. Full article
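The indoor-distance computation discussed above operates on IndoorGML's node-relation graph: cells of the cellular space (rooms, corridors, doors) become nodes and connectivity relations become weighted edges, so distance reduces to a shortest-path query. A minimal sketch; the floor plan and transition costs below are illustrative assumptions, not an example taken from the standard.

```python
# Shortest indoor distance over an IndoorGML-style node-relation graph.
import heapq

# node-relation graph: cell -> {neighbouring cell: transition cost in metres}
GRAPH = {
    "room101": {"door1": 2.0},
    "door1": {"room101": 2.0, "corridor": 1.0},
    "corridor": {"door1": 1.0, "door2": 8.0},
    "door2": {"corridor": 8.0, "room102": 2.0},
    "room102": {"door2": 2.0},
}

def indoor_distance(graph, start, goal):
    """Shortest path cost between two cells (Dijkstra's algorithm)."""
    queue, seen = [(0.0, start)], set()
    while queue:
        cost, cell = heapq.heappop(queue)
        if cell == goal:
            return cost
        if cell in seen:
            continue
        seen.add(cell)
        for nxt, weight in graph[cell].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt))
    return float("inf")

print(indoor_distance(GRAPH, "room101", "room102"))  # 13.0
```

Subspacing, as discussed in the paper, would simply refine this graph by splitting a cell node into several connected sub-cell nodes.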

Open Access Article
Spatial Dynamic Modelling of Future Scenarios of Land Use Change in Vaud and Valais, Western Switzerland
ISPRS Int. J. Geo-Inf. 2017, 6(4), 115; doi:10.3390/ijgi6040115
Abstract
We use Bayesian methods with a weights of evidence approach to model the probability of land use change over the western part of Switzerland. This first model is followed by a cellular automata model for the spatial allocation of land use classes. Our results extend and enhance current land use scenario studies by applying Dinamica Environment for Geoprocessing Objects (Dinamica EGO) to a study area comprising the upper Rhone river basin in the Cantons of Vaud and Valais. To take the topography into account, we divide the study area into four regions based on altitude and administrative region, and show that the regions are affected in differing ways by the same driving forces. We analyse possible land use changes by 2050 for three scenarios: "business as usual", "liberalisation" and "lowered agricultural production". The "business-as-usual" scenario indicates a decrease in agriculture, mostly in extensive agriculture, whose share of the total area falls by 3.3% from 12.3% in 2009; these losses are mostly due to conversion to shrubland and forest. Under the "liberalisation" scenario, extensive agriculture decreases further, by 10.3% by 2050, along with a marked increase in closed and open forest area, from 27.1% in 2009 to 42.3% by 2050. Under the "lowered agricultural production" scenario, the share of extensive agriculture area is expected to increase by 3.2% by 2050, with gains in open land habitat, while the share of intensive agriculture is expected to decrease by 5.6%.

Open Access Article
A Spatio-Temporal Building Exposure Database and Information Life-Cycle Management Solution
ISPRS Int. J. Geo-Inf. 2017, 6(4), 114; doi:10.3390/ijgi6040114
Abstract
With an ever-increasing volume and complexity of data collected from a variety of sources, the efficient management of geospatial information becomes a key topic in disaster risk management. For example, the representation of assets exposed to natural disasters is subjected to changes throughout the different phases of risk management reaching from pre-disaster mitigation to the response after an event and the long-term recovery of affected assets. Spatio-temporal changes need to be integrated into a sound conceptual and technological framework able to deal with data coming from different sources, at varying scales, and changing in space and time. Especially managing the information life-cycle, the integration of heterogeneous information and the distributed versioning and release of geospatial information are important topics that need to become essential parts of modern exposure modelling solutions. The main purpose of this study is to provide a conceptual and technological framework to tackle the requirements implied by disaster risk management for describing exposed assets in space and time. An information life-cycle management solution is proposed, based on a relational spatio-temporal database model coupled with Git and GeoGig repositories for distributed versioning. Two application scenarios focusing on the modelling of residential building stocks are presented to show the capabilities of the implemented solution. A prototype database model is shared on GitHub along with the necessary scenario data. Full article

Open Access Article
Uncertainties in Classification System Conversion and an Analysis of Inconsistencies in Global Land Cover Products
ISPRS Int. J. Geo-Inf. 2017, 6(4), 112; doi:10.3390/ijgi6040112
Abstract
In this study, using the common classification systems of IGBP-17, IGBP-9, IPCC-5 and TC (vegetation, wetlands and others only), we studied spatial and areal inconsistencies in the three most recent multi-resource land cover products in a complex mountain-oasis-desert system and quantitatively discussed the uncertainties in classification system conversion. This is the first study to compare these products based on terrain and to quantitatively study the uncertainties in classification system conversion. The inconsistencies and uncertainties decreased from high to low levels of aggregation (IGBP-17 to TC) and from mountain to desert areas, indicating that the inconsistencies are not only influenced by the level of thematic detail and landscape complexity but also related to the conversion uncertainties. The overall areal inconsistency in the comparison of the FROM-GLC and GlobCover 2009 datasets is the smallest among the three pairs, but the smallest overall spatial inconsistency was observed between the FROM-GLC and MODISLC. The GlobCover 2009 had the largest conversion uncertainties due to mosaic land cover definition, with values up to 23.9%, 9.68% and 0.11% in mountainous, oasis and desert areas, respectively. The FROM-GLC had the smallest inconsistency, with values less than 4.58%, 1.89% and 1.2% in corresponding areas. Because the FROM-GLC dataset uses a hierarchical classification scheme with explicit attribution from the second level to the first, this system is suggested for producers of map land cover products in the future. Full article

Open Access Article
Interpolation and Prediction of Spatiotemporal Data Based on XML Integrated with Grey Dynamic Model
ISPRS Int. J. Geo-Inf. 2017, 6(4), 113; doi:10.3390/ijgi6040113
Abstract
Interpolation and prediction of spatiotemporal data are integral components of many real-world applications. Thus, approaches of interpolating and predicting spatiotemporal data have been extensively investigated. Currently, the grey dynamic model has been used to enhance the performance of interpolating and predicting spatiotemporal data. Meanwhile, the Extensible Markup Language (XML) has unique characteristics of information representation and exchange. In this paper, we first couple the grey dynamic model with the spatiotemporal XML model. Based on a definition of the position part of the spatiotemporal XML model, we extract the corresponding position information of each time interval and propose an algorithm for constructing an AVL tree to store them. Then, we present the architecture of an interpolating and predicting process and investigate change operations in positions. On this basis, we present an algorithm for interpolation and prediction of spatiotemporal data based on XML integrated with the grey dynamic model. Experimental results demonstrate the performance advantages of the proposed approach. Full article
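The grey dynamic model most commonly coupled with such prediction tasks is GM(1,1): accumulate the raw series, fit the parameters of the whitened equation dx⁽¹⁾/dt + a·x⁽¹⁾ = b by least squares, and difference the fitted accumulated series to forecast. A minimal stdlib sketch; the sample series is an illustrative assumption, and the AVL-tree storage and XML extraction steps are not shown.

```python
# Minimal GM(1,1) grey model: fit a first-order grey differential equation
# to a short series and forecast the next value(s).
import math

def gm11(x0, steps=1):
    """Fit GM(1,1) to series x0 and forecast `steps` further values."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]            # accumulated series
    z = [-0.5 * (x1[i - 1] + x1[i]) for i in range(1, n)]  # background values
    y = x0[1:]
    # least squares for [a, b] via the 2x2 normal equations of y = a*z + b
    szz = sum(v * v for v in z); sz = sum(z); sy = sum(y)
    szy = sum(v * w for v, w in zip(z, y)); m = n - 1
    det = szz * m - sz * sz
    a = (m * szy - sz * sy) / det
    b = (szz * sy - sz * szy) / det
    def x1_hat(k):                                      # fitted accumulated value
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    # restore forecasts by differencing the accumulated fit
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]

series = [100.0, 104.0, 108.2, 112.5, 117.0]   # roughly 4% growth per step
print(round(gm11(series)[0], 1))               # next value, continuing the trend
```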

Open Access Article
Towards Understanding Location Privacy Awareness on Geo-Social Networks
ISPRS Int. J. Geo-Inf. 2017, 6(4), 109; doi:10.3390/ijgi6040109
Abstract
Users' awareness of the extent of information implicit in their geo-profiles on social networks is limited. This questions the validity of their consent to the collection, storage and use of their data. Tools for location privacy awareness are needed that provide users with accessible means for understanding the implicit content in their location information as well as a view of the level of risk to their privacy as a consequence of disclosing this information. Towards this goal, an abstract model of location privacy threat levels is first derived from a user study involving 186 users. This is then used to inform the design of a prototype privacy feedback tool for a location-based social network. Another user study involving 338 users of this network is carried out to test the effectiveness of the proposed design. Findings confirm the strong need of users for more transparent access to and control over their location profiles and guide the proposal of recommendations to the design of more privacy-sensitive geo-social networks. Full article

Open Access Article
Attribute Learning for SAR Image Classification
ISPRS Int. J. Geo-Inf. 2017, 6(4), 111; doi:10.3390/ijgi6040111
Abstract
This paper presents a classification approach based on attribute learning for high spatial resolution Synthetic Aperture Radar (SAR) images. To explore the representative and discriminative attributes of SAR images, first, an iterative unsupervised algorithm is designed to cluster in the low-level feature space, where the maximum edge response and the ratio of mean-to-variance are included; a cross-validation step is applied to prevent overfitting. Second, the most discriminative clustering centers are sorted out to construct an attribute dictionary. By resorting to the attribute dictionary, a representation vector describing certain categories in the SAR image can be generated, which in turn is used to perform the classifying task. The experiments conducted on TerraSAR-X images indicate that those learned attributes have strong visual semantics, which are characterized by bright and dark spots, stripes, or their combinations. The classification method based on these learned attributes achieves better results. Full article

Open Access Article
Camera Coverage Estimation Based on Multistage Grid Subdivision
ISPRS Int. J. Geo-Inf. 2017, 6(4), 110; doi:10.3390/ijgi6040110
Abstract
Visual coverage is one of the most important quality indexes for depicting the usability of an individual camera or camera network. It is the basis for camera network deployment, placement, coverage enhancement, planning, etc. Precision and efficiency critically influence applications, especially those involving several cameras. This paper proposes a new method to efficiently estimate camera coverage. First, the geographic area covered by the camera and its minimum bounding rectangle (MBR), ignoring obstacles, are computed from the camera parameters. Second, the MBR is divided into grids at an initial grid size, and the status of the four corners of each grid is determined by a line of sight (LOS) algorithm: a corner covered by the camera, considering obstacles, is coded 1, otherwise 0. The status of a grid is thus a code combining these 0s and 1s. If the code is not homogeneous (not four 0s or four 1s), the grid is divided into four sub-grids, recursively, until the sub-grids reach a specified maximum level or their codes become homogeneous. Finally, total camera coverage is estimated from the size and status of all grids. Experimental results illustrate that the proposed method's accuracy is close to that of dividing the coverage area into the smallest grids at the maximum level, while its efficiency is close to that of dividing the coverage area only into the initial grids; it thus balances efficiency and accuracy. The initial grid size and maximum level are the two critical parameters of the proposed method, and can be chosen by weighing efficiency against accuracy.
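The multistage subdivision described above can be sketched as a quadtree refinement: a grid cell whose four corner codes disagree is split into four sub-cells until a maximum level is reached, and mixed cells at the finest level contribute a fractional area. The LOS test below is a stub (an unobstructed circular footprint), an illustrative assumption standing in for a real visibility computation with obstacles.

```python
# Sketch of multistage grid subdivision for camera coverage estimation.

def visible(x, y, cam=(0.0, 0.0), radius=10.0):
    """Stand-in LOS test: a point is covered if within the camera's radius."""
    return (x - cam[0]) ** 2 + (y - cam[1]) ** 2 <= radius ** 2

def refine(x0, y0, size, level, max_level):
    """Covered area inside the square cell (x0, y0)-(x0+size, y0+size)."""
    corners = [visible(x0, y0), visible(x0 + size, y0),
               visible(x0, y0 + size), visible(x0 + size, y0 + size)]
    if all(corners):                    # homogeneous code 1111: fully covered
        return size * size
    if not any(corners):                # homogeneous code 0000: empty
        return 0.0
    if level == max_level:              # mixed at finest level: fractional area
        return size * size * sum(corners) / 4.0
    half = size / 2.0
    return sum(refine(x, y, half, level + 1, max_level)
               for x in (x0, x0 + half) for y in (y0, y0 + half))

def coverage_mbr(xmin, ymin, width, init_n, max_level):
    """Split the MBR into an init_n x init_n initial grid, then refine."""
    cell = width / init_n
    return sum(refine(xmin + i * cell, ymin + j * cell, cell, 0, max_level)
               for i in range(init_n) for j in range(init_n))

# Quarter-circle footprint of radius 10 in a 10x10 MBR; true area = 25*pi ≈ 78.54
est = coverage_mbr(0.0, 0.0, 10.0, init_n=4, max_level=4)
print(round(est, 2))
```

Raising `max_level` trades runtime for accuracy, mirroring the paper's observation about the two critical parameters.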
Open Access Article
Adaptive and Optimized RDF Query Interface for Distributed WFS Data
ISPRS Int. J. Geo-Inf. 2017, 6(4), 108; doi:10.3390/ijgi6040108 -
Abstract
Web Feature Service (WFS) is a protocol for accessing geospatial data stores such as databases and Shapefiles over the Web. However, WFS does not provide direct access to data distributed across multiple servers. In addition, WFS features extracted from their original sources are not convenient for user access due to the lack of connection to high-level concepts. Users face the choice of either querying each WFS server first and then integrating the results, or converting the data from all WFS servers to a more expressive format such as RDF (Resource Description Framework) and then querying the integrated data. The first choice requires additional programming, while the second is not practical for large or frequently updated datasets. The contribution of this paper is a novel adaptive and optimized RDF query interface that overcomes these limitations. Specifically, we propose an algorithm to query and synthesize distributed WFS data through an RDF query interface, where users can specify data requests to multiple WFS servers using a single RDF query. Users can also define a simple configuration to associate WFS feature types, attributes, and values with RDF classes, properties, and values, so that user queries can be written using a more uniform and informative vocabulary. The algorithm translates each RDF query, written in a SPARQL-like syntax, into multiple WFS GetFeature requests, and then converts and integrates the multiple WFS results to answer the original query. The generated GetFeature requests are sent asynchronously and simultaneously to the WFS servers to take advantage of server parallelism. The results of each GetFeature request are cached to improve response time for subsequent queries that involve one or more of the cached requests. A JavaScript-based prototype has been implemented, and experimental results show that query response time can be greatly reduced through fine-grained caching. Full article
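The dispatch-and-cache idea described in this abstract can be pictured with a small sketch. The Python code below is a hypothetical simplification, not the paper's JavaScript prototype: `fetch` stands in for an actual WFS GetFeature call, requests are keyed by (server, feature type) pairs, uncached requests are dispatched in parallel, and results are cached for reuse by later queries.

```python
from concurrent.futures import ThreadPoolExecutor

class WFSQueryCache:
    """Hypothetical sketch: fan a query out to several WFS servers in
    parallel and cache each GetFeature result by its request key."""

    def __init__(self, fetch):
        # fetch(server, type_name) -> list of features; a stand-in for a
        # real WFS GetFeature request over HTTP
        self.fetch = fetch
        self.cache = {}

    def get_features(self, requests):
        """requests: list of (server, type_name) pairs from one RDF query."""
        misses = [r for r in dict.fromkeys(requests) if r not in self.cache]
        if misses:
            # simultaneous GetFeature calls to exploit server parallelism
            with ThreadPoolExecutor() as pool:
                results = pool.map(lambda r: self.fetch(*r), misses)
                for req, result in zip(misses, results):
                    self.cache[req] = result
        # integrate the per-server results into a single answer
        return [f for r in requests for f in self.cache[r]]
```

A repeated query that touches only cached (server, type) pairs triggers no further fetches, which is the fine-grained caching effect the abstract reports.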
Open Access Article
Spatial-Temporal Patterns of Bean Crop in Brazil over the Period 1990–2013
ISPRS Int. J. Geo-Inf. 2017, 6(4), 107; doi:10.3390/ijgi6040107 -
Abstract
Understanding the spatial dependence and distribution of agricultural production factors is a key issue for territorial planning and regional development. This study evaluates the spatial-temporal dynamics of bean crops in Brazil over the period 1990–2013. Common bean (Phaseolus vulgaris L.) is one of the staple foods of the Brazilian population, produced nationwide and cultivated mostly by family farmers. The analyzed variables of this crop included harvested area, produced quantity, and average crop yield. We investigated spatial autocorrelation using the Global and Local Moran's Index. The global spatial autocorrelation statistics demonstrated a general spatial dependence of bean production across Brazil, while the local spatial autocorrelation statistics detected statistically significant zones of high and low bean-production attributes. Maps of the growth and acceleration rates of the variables were constructed, showing the areas that increased, decreased, or stagnated during the time series. The results showed a considerable reduction of the bean harvested area, but significant increases in produced quantity and average crop yield. Distinct and significant patterns of bean-production variables emerged across Brazilian territory over the years studied. Regional differences and peculiarities are evident, emphasizing the need to direct investments toward agricultural research and public policy. Full article
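For reference, the global Moran's I statistic underlying this kind of spatial autocorrelation analysis has the standard form I = (n/S₀) · Σᵢ Σⱼ wᵢⱼ zᵢ zⱼ / Σᵢ zᵢ², where z are deviations from the mean and W is a spatial weight matrix. The Python sketch below computes this textbook formula from plain lists; the paper's actual weight specification and significance testing are not reproduced here.

```python
def morans_i(x, w):
    """Global Moran's I for values x under an n x n spatial weight matrix w."""
    n = len(x)
    mean = sum(x) / n
    z = [v - mean for v in x]                      # deviations from the mean
    s0 = sum(sum(row) for row in w)                # sum of all weights
    num = sum(w[i][j] * z[i] * z[j]
              for i in range(n) for j in range(n))
    den = sum(v * v for v in z)
    return (n / s0) * num / den
```

Spatially clustered values yield a positive I and alternating values a negative I, which is how zones of high and low bean-production attributes are flagged.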
Open Access Article
Ontology-Guided Image Interpretation for GEOBIA of High Spatial Resolution Remote Sense Imagery: A Coastal Area Case Study
ISPRS Int. J. Geo-Inf. 2017, 6(4), 105; doi:10.3390/ijgi6040105 -
Abstract
Image interpretation is a major topic in the remote sensing community. With the increasing acquisition of high spatial resolution (HSR) remotely sensed images, geographic object-based image analysis (GEOBIA) is becoming an important sub-discipline for improving remote sensing applications. The idea of emulating the human ability to understand images has inspired research on introducing expert knowledge into object-based interpretation. The relevant work involves three parts: (1) identification and formalization of domain knowledge; (2) image segmentation and feature extraction; and (3) matching image objects with geographic concepts. This paper presents a novel way of combining multi-scale segmented image objects with geographic concepts to express context in ontology-guided image interpretation. Spectral and geometric features of each object are extracted after segmentation, and topological relationships are also used in the interpretation. The Web Ontology Language–Query Language (OWL-QL) is used to formalize domain knowledge, and the interpretation-matching procedure is then implemented through OWL-QL query answering. The proposed method is validated on two HSR images of coastal areas in China and compared with a supervised classification that does not consider context. Both the number of interpreted classes increased (19 classes over 10 in Case 1 and 12 over seven in Case 2) and the overall accuracy improved (0.77 over 0.55 in Case 1 and 0.86 over 0.65 in Case 2). The additional context of the image objects improved accuracy during image classification. The proposed approach shows the pivotal role of ontology in knowledge-guided interpretation. Full article
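The concept-matching step can be pictured with a toy example. The sketch below is a deliberately simplified, hypothetical stand-in for OWL-QL query answering: each concept is a plain Python predicate over an image object's spectral features and its topological context (here, a list of adjacent concepts), rather than a real ontology; the feature names and thresholds are invented for illustration.

```python
# Hypothetical concept rules: constraints on object features, optionally
# referring to the topological context (concepts of adjacent objects).
CONCEPTS = {
    "water":    lambda o, ctx: o["ndwi"] > 0.3,
    "beach":    lambda o, ctx: o["brightness"] > 0.6 and "water" in ctx["adjacent"],
    "built-up": lambda o, ctx: o["brightness"] > 0.6 and "water" not in ctx["adjacent"],
}

def interpret(obj, context):
    """Return the first concept whose constraints the image object satisfies."""
    for name, rule in CONCEPTS.items():
        if rule(obj, context):
            return name
    return "unclassified"
```

The "beach" versus "built-up" rules show how identical spectral evidence can be disambiguated by context, which is the effect the abstract credits for the accuracy gain over a context-free supervised classification.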