Special Issue "Multi-Source Geoinformation Fusion"

A special issue of ISPRS International Journal of Geo-Information (ISSN 2220-9964).

Deadline for manuscript submissions: 1 May 2019

Special Issue Editor

Guest Editor
Prof. Robert Jeansoulin

LIGM UMR8049, Univ. Paris-Est, CNRS, 77454 Marne-la-Vallée, France
Interests: quality of geographic information; uncertainty; space-time knowledge representation and reasoning; GIS and remote sensing; data extraction and analysis

Special Issue Information

Dear Colleagues,

Twenty years ago, the HAPEX campaigns (Hydrologic and Atmospheric Pilot Experiments) and the Alpilles-ReSeDA project (Remote Sensing Data Assimilation) were the first large-scale international experiments to specifically explore the fusion of multi-source geoinformation. That geoinformation was almost exclusively made up of satellite and aerial imagery, but what was then called data assimilation was already a complex operation between data collected at different scales and from different sensors, involving physical "transfer models" in order to "fuse" these data whenever they were evaluated as similar enough.

What has changed in this research field, since then?

Nowadays, the range of sources is considerably larger. Several dozen satellites are observing our planet, from the continental scale down to streets and neighborhoods. The vision is not just flat: LiDAR and UAV imagery deliver a 3D view. Time of delivery is no longer an issue: IoT sensors deliver real-time environmental data, vehicle traffic data, etc.

The number of data sources is not the only factor that has changed; variety has also increased a great deal. Automated cartography and remote sensing are no longer two realms ignoring each other, as was the case 20 years ago. Handling pixel and vector data together is no longer a handicap in designing geospatial information. Software development, knowledge representation, and reasoning tools have greatly evolved, allowing for the smooth integration of (ontologically) different sources.

Volume and variety have increased, and velocity has changed radically. Large data files are no longer mailed as digital tapes. You can download data, or you can process it using web services, and download only the results. You can process raw data thoroughly, using your own code, or rely on web applications that apply your chosen corrections and models. Will these models soon be determined by artificial intelligence? Will web services be choosing the models that are most relevant for your applications?

These questions are on the table today, and they point to where geoinformation fusion research and development is heading.

We invite you to contribute to this Special Issue, which could be a big step forward in research on multi-source geoinformation fusion, summing up its different facets and application domains.

Several ISPRS Working Groups (WGs) are actively working on related topics: WG.III.6 (fusion), ICWG.III/IVb (remote sensing quality), and WG.IV.3 (quality), as well as WG.III.7 in the application domain of land cover/use, to cite a few. National space agencies are designing infrastructure for the large-scale delivery of spatial data, and the Global Earth Observation System of Systems (GEOSS) is now a mature international organization devoted to provisioning multi-source data. In addition, there is an active research community working on the more theoretical aspects of geospatial information fusion and revision.

We therefore encourage contributions on, but not limited to, the following themes:

  • Theories, frameworks, and paradigms of geospatial information fusion
  • Fusion background improvement: Geoinformation metamodelling, model integration, and uniform knowledge representation
  • Big data's specific impact on geoinformation fusion (not just volume and access)
  • Artificial intelligence (AI)/machine learning in relation with geoinformation fusion
  • Advances in the integration of new sensor sources with classical ones (LiDAR, UAV imagery, IoT environmental or mobility data, volunteered information, etc.)
  • Applications making intensive use of fusion: Agricultural systems, land use, urban development, etc.
  • Geospatial education and capacity-building efforts with geoinformation fusion
  • Ethical and societal considerations (privately owned data, citizen participation, data integrity variability)

Manuscripts for this Special Issue should be submitted by 30 October 2018 for timely selection, peer review, and publication in this open access Special Issue of IJGI.

Prof. Robert Jeansoulin
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. ISPRS International Journal of Geo-Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research

Open Access Article: A New, Score-Based Multi-Stage Matching Approach for Road Network Conflation in Different Road Patterns
ISPRS Int. J. Geo-Inf. 2019, 8(2), 81; https://doi.org/10.3390/ijgi8020081
Received: 31 December 2018 / Revised: 29 January 2019 / Accepted: 11 February 2019 / Published: 13 February 2019
PDF Full-text (33327 KB) | HTML Full-text | XML Full-text
Abstract
Road-matching processes establish links between multi-sourced road lines representing the same entities in the real world. Several road-matching methods have been developed over the last three decades. The main issue in this process is selecting the most appropriate method; this selection depends on the data and requires a pre-process (i.e., accuracy assessment). This paper presents a new matching method for roads composed of different patterns. The proposed method matches road lines incrementally (i.e., from the most similar matching to the least similar). In the experimental testing, three road networks in Istanbul, Turkey, composed of tree, cellular, and hybrid patterns and provided by the municipality (authority), OpenStreetMap (volunteered), TomTom (private), and Basarsoft (private), were used. The similarity scores were determined using Hausdorff distance, orientation, sinuosity, mean perpendicular distance, mean length of triangle edges, and a modified degree of connectivity. While the first four stages determined certain matches with regard to the scores, the last stage determined them with a criterion based on overlapping areas among the buffers of the candidates. The results were evaluated against manual matching. According to the precision, recall, and F-values, the proposed method gives satisfactory results on different types of road patterns.
(This article belongs to the Special Issue Multi-Source Geoinformation Fusion)
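
The abstract lists the Hausdorff distance among the similarity measures used to score candidate road pairs. As a hedged illustration only (not the authors' implementation), the sketch below computes the symmetric Hausdorff distance between two road polylines with NumPy; the vertex coordinates are hypothetical.

```python
import numpy as np

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two point sets of shape (n, 2)."""
    # Pairwise Euclidean distances between every vertex of a and every vertex of b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Directed distances: worst-case nearest-neighbour distance in each direction.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Hypothetical road centrelines as vertex arrays (projected coordinates, metres).
road_a = np.array([[0.0, 0.0], [50.0, 5.0], [100.0, 12.0]])
road_b = np.array([[2.0, 1.0], [48.0, 8.0], [101.0, 10.0]])

print(hausdorff_distance(road_a, road_b))  # a small value suggests a candidate match
```

In practice the polylines would be densified before the comparison so that the measure is not dominated by vertex placement.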

Open Access Article: Dynamic Monitoring of Forest Land in Fuling District Based on Multi-Source Time Series Remote Sensing Images
ISPRS Int. J. Geo-Inf. 2019, 8(1), 36; https://doi.org/10.3390/ijgi8010036
Received: 28 October 2018 / Revised: 24 December 2018 / Accepted: 10 January 2019 / Published: 16 January 2019
PDF Full-text (4072 KB) | HTML Full-text | XML Full-text
Abstract
Time series remote sensing images can be used to monitor the dynamic changes of forest lands. Due to persistent cloud cover and fog, a single sensor typically provides limited data for dynamic monitoring. This problem is solved by combining observations from multiple sensors into a satellite image time series. In this paper, the pixel-based multi-source remote sensing image fusion (MulTiFuse) method is applied to combine the Landsat time series and Huanjing-1 A/B (HJ-1 A/B) data in the Fuling district of Chongqing, China. The fusion results are further corrected and improved with spatial features. Dynamic monitoring and analysis of the study area are subsequently performed on the improved time series data using a combination of the Mann-Kendall trend detection method and Theil-Sen slope analysis. The monitoring results show that a majority of the forest land (60.08%) experienced strong growth during the 1999–2013 period. Accuracy assessment indicates that dynamic monitoring using the fused image time series produces results with relatively high accuracy.
(This article belongs to the Special Issue Multi-Source Geoinformation Fusion)
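
The monitoring step combines the Mann-Kendall trend test with Theil-Sen slope estimation. The sketch below shows a conventional formulation of both statistics for a single pixel's time series in Python; the NDVI values are hypothetical, and this is not the MulTiFuse code.

```python
import numpy as np
from scipy import stats

def mann_kendall_s(y: np.ndarray) -> int:
    """Mann-Kendall S statistic: sum of signs of all forward pairwise differences."""
    s = 0
    for i in range(len(y) - 1):
        s += np.sign(y[i + 1:] - y[i]).sum()
    return int(s)

# Hypothetical yearly NDVI values for one forest pixel, 1999-2013.
years = np.arange(1999, 2014)
ndvi = np.array([0.52, 0.55, 0.53, 0.58, 0.57, 0.60, 0.61, 0.59,
                 0.63, 0.64, 0.66, 0.65, 0.68, 0.70, 0.71])

s = mann_kendall_s(ndvi)                                    # positive S suggests an increasing trend
slope, intercept, lo, hi = stats.theilslopes(ndvi, years)   # robust estimate of the trend magnitude
print(s, slope)
```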

Open Access Article: Multisource Hyperspectral and LiDAR Data Fusion for Urban Land-Use Mapping Based on a Modified Two-Branch Convolutional Neural Network
ISPRS Int. J. Geo-Inf. 2019, 8(1), 28; https://doi.org/10.3390/ijgi8010028
Received: 4 November 2018 / Revised: 4 January 2019 / Accepted: 9 January 2019 / Published: 14 January 2019
PDF Full-text (8691 KB) | HTML Full-text | XML Full-text
Abstract
Accurate urban land-use mapping is a challenging task in the remote-sensing field. With the availability of diverse remote sensors, the synthetic use and integration of multisource data provide an opportunity for improving urban land-use classification accuracy. Deep neural networks have achieved very promising results in computer-vision tasks, such as image classification and object detection. However, the problem of designing an effective deep-learning model for the fusion of multisource remote-sensing data remains. To tackle this issue, this paper proposes a modified two-branch convolutional neural network for the adaptive fusion of hyperspectral imagery (HSI) and Light Detection and Ranging (LiDAR) data. Specifically, the proposed model consists of an HSI branch and a LiDAR branch sharing the same network structure, to reduce the time cost of network design. A residual block is utilized in each branch to extract hierarchical, parallel, and multiscale features. An adaptive feature-fusion module, based on "Squeeze-and-Excitation Networks", is proposed to integrate HSI and LiDAR features in a more reasonable and natural way. Experiments indicate that the proposed two-branch network shows good performance, with an overall accuracy of almost 92%. Compared with single-source data, the introduction of multisource data improves accuracy by at least 8%. The adaptive fusion model can also increase classification accuracy by more than 3% compared with the feature-stacking method (simple concatenation). The results demonstrate that the proposed network can effectively extract and fuse features for better urban land-use mapping accuracy.
(This article belongs to the Special Issue Multi-Source Geoinformation Fusion)
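
The following PyTorch sketch illustrates the general idea of a two-branch network whose concatenated features are reweighted by a squeeze-and-excitation style gate. The channel counts, layer depths, patch size, and class count are placeholder assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Small convolutional feature extractor; both branches share this structure."""
    def __init__(self, in_channels: int, features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class SEFusion(nn.Module):
    """Squeeze-and-excitation style gate reweighting the concatenated HSI/LiDAR features."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(self.pool(x).flatten(1)).unsqueeze(-1).unsqueeze(-1)
        return x * w  # channel-wise reweighting

class TwoBranchFusionNet(nn.Module):
    def __init__(self, hsi_bands: int = 144, lidar_bands: int = 1, n_classes: int = 15):
        super().__init__()
        self.hsi_branch = Branch(hsi_bands)
        self.lidar_branch = Branch(lidar_bands)
        self.fusion = SEFusion(128)  # 64 + 64 concatenated channels
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(128, n_classes))

    def forward(self, hsi, lidar):
        fused = torch.cat([self.hsi_branch(hsi), self.lidar_branch(lidar)], dim=1)
        return self.classifier(self.fusion(fused))

# Hypothetical input: 11x11 patches with 144 HSI bands and 1 LiDAR elevation channel.
logits = TwoBranchFusionNet()(torch.randn(4, 144, 11, 11), torch.randn(4, 1, 11, 11))
print(logits.shape)  # torch.Size([4, 15])
```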

Open Access Article: Fusion of SAR and Multispectral Images Using Random Forest Regression for Change Detection
ISPRS Int. J. Geo-Inf. 2018, 7(10), 401; https://doi.org/10.3390/ijgi7100401
Received: 6 September 2018 / Revised: 27 September 2018 / Accepted: 9 October 2018 / Published: 10 October 2018
PDF Full-text (7483 KB) | HTML Full-text | XML Full-text
Abstract
In order to overcome the insufficiency of single remote sensing data in change detection, synthetic aperture radar (SAR) and optical image data can be used together to complement each other. However, conventional image fusion methods fail to address the differences in imaging mechanisms and cannot overcome some practical limitations, such as usability for change detection or the temporal requirements of the optical image. This study proposes a new method to fuse SAR and optical images, which is expected to be visually helpful and to minimize the differences between the two imaging mechanisms. The algorithm performs the fusion by establishing relationships between SAR and multispectral (MS) images using random forest (RF) regression, creating a fused SAR image that contains the surface roughness characteristics of the SAR image and the spectral characteristics of the MS image. The fused SAR image is evaluated against images obtained with conventional image fusion methods, and the proposed method improves both spectral and spatial quality significantly. For verification, other ensemble approaches such as stochastic gradient boosting regression and adaptive boosting regression are also compared, confirming that the performance of RF regression is superior overall. Change detection between the fused SAR and MS images is then performed and compared with change detection between MS images and between SAR images; the result using fused SAR images is similar to the result between MS images and improved compared with the result between SAR images. The proposed method is therefore confirmed to be applicable to change detection.
(This article belongs to the Special Issue Multi-Source Geoinformation Fusion)
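
One hedged, per-pixel reading of this regression-based fusion idea (not necessarily the authors' exact pipeline) is sketched below: a random forest learns the relationship between co-registered SAR features and MS bands, then predicts MS-like values for SAR pixels from another date. All arrays and feature choices are hypothetical stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical co-registered images flattened to (n_pixels, n_features):
sar_features = rng.random((10_000, 4))  # stand-in for backscatter plus local texture measures
ms_bands = rng.random((10_000, 4))      # stand-in for blue/green/red/NIR reflectance

# Learn a per-pixel relationship between the two sensors on the training date.
model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
model.fit(sar_features, ms_bands)

# Apply the learned relationship to SAR pixels from another date to obtain a
# "fused" image with MS-like spectral values, usable for change detection.
new_sar_features = rng.random((10_000, 4))
fused = model.predict(new_sar_features)  # shape: (10000, 4)
print(fused.shape)
```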

Planned Papers

The list below represents only planned manuscripts. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

FusionImage: An R Package for Pan-Sharpening Images in Open-Source Software

Fulgencio Cánovas
Pre-departamental Unit of Civil Engineering, Universidad Politécnica de Cartagena; [email protected]

Image pan-sharpening is the process by which a set of multispectral layers is fused with a panchromatic layer that has a higher spatial resolution and whose spectral width encompasses those of the multispectral layers. The objective is to obtain a product with the spatial resolution of the panchromatic layer and the spectral resolution of the multispectral layers. Several algorithms have been proposed to perform such fusion, while other algorithms have been used to evaluate the resulting layers. The objective of this paper is to apply three pan-sharpening algorithms, High Pass Filter, Principal Component Analysis, and Gram-Schmidt, and to evaluate their results with three indices: the universal image quality index (Q index), the ERGAS index, and the spatial ERGAS index. A secondary objective is to produce an R package called fusionImage implementing the six aforementioned techniques.

From a qualitative point of view, the images with a higher spatial resolution ratio between the multispectral and the panchromatic bands (QuickBird, Ikonos, and Natmur-08, an image obtained with an airborne sensor, with ratios ranging from 4 to 4.4) give better results than those obtained with Landsat 7 and 8, whose spatial resolution ratio is two; these last two sensors show greater colour distortion. However, the best quantitative results were obtained with the Landsat-7 and Landsat-8 images, whichever fusion method was used. This contrast reflects the importance of taking both evaluation approaches into account. Thus, no method is a priori better than the others; the results depend on the characteristics of the sensors, but also on the atmospheric conditions and the peculiarities of the study sites.

Another result of this research is an R package called fusionImage, which implements all the fusion and validation algorithms used in this research. When comparing the results obtained with this package against those of proprietary software, our software generally obtained better results.
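
For reference, the ERGAS index cited in this abstract is commonly defined as ERGAS = 100 (h/l) sqrt((1/N) Σ_k RMSE_k² / μ_k²), where h/l is the ratio of the panchromatic to the multispectral pixel size, N the number of bands, and μ_k the mean of reference band k. A minimal NumPy sketch of that common definition follows; the fusionImage package itself is in R, so this is only an illustration, not its implementation.

```python
import numpy as np

def ergas(fused: np.ndarray, reference: np.ndarray, ratio: float) -> float:
    """ERGAS quality index (lower is better).

    fused, reference: arrays of shape (bands, rows, cols);
    ratio: panchromatic-to-multispectral pixel-size ratio h/l
           (e.g. 0.25 for 1 m panchromatic versus 4 m multispectral).
    """
    bands = fused.shape[0]
    acc = 0.0
    for k in range(bands):
        rmse = np.sqrt(np.mean((fused[k] - reference[k]) ** 2))
        acc += (rmse / reference[k].mean()) ** 2
    return 100.0 * ratio * np.sqrt(acc / bands)
```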
