Special Issue "Multi-Source Geoinformation Fusion"

A special issue of ISPRS International Journal of Geo-Information (ISSN 2220-9964).

Deadline for manuscript submissions: closed (1 May 2019).

Special Issue Editor

Prof. Robert Jeansoulin
Guest Editor
LIGM UMR8049, Univ. Paris-Est, CNRS, 77454 Marne-la-Vallée, France
Interests: quality of geographic information; uncertainty; space-time knowledge representation and reasoning; GIS and remote sensing; data extraction and analysis

Special Issue Information

Dear Colleagues,

Twenty years ago, several HAPEX campaigns (Hydrologic and Atmospheric Pilot Experiments) and the Alpilles-ReSeDA project (Remote Sensing Data Assimilation) were the first large-scale international experiments to specifically explore the fusion of multi-source geoinformation. That geoinformation was almost exclusively made up of satellite and aerial imagery, but what was then called data assimilation was already a complex operation between data collected at different scales, from different sensors, involving physics "transfer models" in order to "fuse" these data when they were evaluated as similar enough.

What has changed in this research field, since then?

Nowadays, the range of sources is considerably larger. Several dozen satellites are observing our planet, from the continental scale down to streets and neighborhoods. The vision is not just flat: LiDAR and UAV imagery deliver a 3D vision. Time of delivery is no longer an issue: IoT sensors deliver real-time environmental data, vehicle traffic data, etc.

The number of data sources is not the only factor that has changed; variety has also increased a great deal. Automated cartography and remote sensing are no longer two realms ignoring each other, as was the case 20 years ago. Handling pixel and vector data together is no longer a handicap in designing geospatial information. Software development, knowledge representation, and reasoning tools have greatly evolved, allowing for the smooth integration of (ontologically) different sources.

Volume and variety have increased, and velocity has changed radically. Large data files are no longer mailed as digital tapes. You can download data, or you can process it using web services, and download only the results. You can process raw data thoroughly, using your own code, or rely on web applications that apply your chosen corrections and models. Will these models soon be determined by artificial intelligence? Will web services be choosing the models that are most relevant for your applications?

These questions about where geoinformation fusion research and development is heading are on the table today.

We invite you to contribute to this Special Issue, which could be a big step forward in research on multi-source geoinformation fusion, summing up its different facets and application domains.

Several ISPRS Working Groups (WGs) are actively working on related topics: WG.III.6 (fusion), ICWG.III/IVb (remote sensing quality), and WG.IV.3 (quality), as well as WG.III.7 in the application domain of land cover/use, to cite a few. National space agencies are designing infrastructures for the large-scale delivery of spatial data, and the Global Earth Observation System of Systems (GEOSS) is now a mature international organization devoted to provisioning multi-source data. In addition, there is an active research community on the more theoretical aspects of geospatial information fusion and revision.

Therefore, we encourage contributions on (but not limited to) the following themes:

  • Theories, frameworks, and paradigms of geospatial information fusion
  • Fusion background improvement: Geoinformation metamodelling, model integration, and uniform knowledge representation
  • Big data's specific impact on geoinformation fusion (not just volume and access)
  • Artificial intelligence (AI)/machine learning in relation with geoinformation fusion
  • Advances in the integration of new sensor sources with classical ones (LiDAR, UAV imagery, IoT environmental or mobility data, volunteered information, etc.)
  • Applications making intensive use of fusion: Agricultural systems, land use, urban development, etc.
  • Geospatial education and capacity-building efforts with geoinformation fusion
  • Ethical and societal considerations (privately owned data, citizen participation, data integrity variability)

Manuscripts for this Special Issue should be submitted by 1 May 2019, for timely selection, peer-review, and publication in this open access Special Issue of IJGI.

Prof. Robert Jeansoulin
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. ISPRS International Journal of Geo-Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Editorial


Open Access Editorial
Multi-Source Geo-Information Fusion in Transition: A Summer 2019 Snapshot
ISPRS Int. J. Geo-Inf. 2019, 8(8), 330; https://doi.org/10.3390/ijgi8080330 - 27 Jul 2019
Cited by 1
Abstract
Since the launch of Landsat-1 in 1972, the scientific domain of geo-information has been incrementally shaped through different periods, due to technology evolutions: in devices (satellites, UAV, IoT), in sensors (optical, radar, LiDAR), in software (GIS, WebGIS, 3D), and in communication (Big Data). Land cover and disaster management remain the big issues where these technologies are most needed. Data fusion methods and tools have been adapted progressively to new data sources, which are growing in volume and variety and becoming ever more quickly accessible. This Special Issue gives a snapshot of the current status of that adaptation, as well as a look at the challenges coming soon.
(This article belongs to the Special Issue Multi-Source Geoinformation Fusion)

Research


Open Access Article
Fusion of Multi-Sensor-Derived Heights and OSM-Derived Building Footprints for Urban 3D Reconstruction
ISPRS Int. J. Geo-Inf. 2019, 8(4), 193; https://doi.org/10.3390/ijgi8040193 - 18 Apr 2019
Cited by 2
Abstract
So-called prismatic 3D building models, following level of detail (LOD) 1 of the OGC City Geography Markup Language (CityGML) standard, are usually generated automatically by combining building footprints with height values. Typically, high-resolution digital elevation models (DEMs) or dense LiDAR point clouds are used to generate these building models. However, high-resolution LiDAR data are usually not available with extensive coverage, whereas globally available DEM data are often not detailed and accurate enough to provide sufficient input for the modeling of individual buildings. Therefore, this paper investigates the possibility of generating LOD1 building models from both volunteered geographic information (VGI) in the form of OpenStreetMap data and remote sensing-derived geodata improved by multi-sensor and multi-modal DEM fusion techniques or produced by synthetic aperture radar (SAR)-optical stereogrammetry. The results of this study show, first, that the height information resulting from data fusion is of higher quality than the original data sources and, second, that simple, prismatic building models can be reconstructed by combining OpenStreetMap building footprints with easily accessible, remote sensing-derived geodata, indicating the potential for application over extensive areas. The building models were created under the assumption of flat terrain at a constant height, which is valid in the selected study area.
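The footprint-plus-height reconstruction described in this abstract can be sketched in a few lines: a footprint polygon is extruded to a roof height sampled from a (fused) DEM. The function names, the median roof height, and the flat-terrain value below are illustrative assumptions, not the paper's actual code.

```python
# Minimal sketch of LOD1 ("prismatic") building reconstruction:
# a footprint polygon is extruded to a roof height estimated from DEM samples.

def polygon_area(footprint):
    """Shoelace formula for the area of a simple 2D polygon."""
    n = len(footprint)
    s = 0.0
    for i in range(n):
        x1, y1 = footprint[i]
        x2, y2 = footprint[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def extrude_lod1(footprint, height_samples, terrain_height=0.0):
    """Extrude a footprint into a prism; roof height = median of DEM samples."""
    samples = sorted(height_samples)
    roof = samples[len(samples) // 2]          # median is robust to DEM noise
    walls = roof - terrain_height
    return {
        "base": [(x, y, terrain_height) for x, y in footprint],
        "top": [(x, y, roof) for x, y in footprint],
        "volume": polygon_area(footprint) * walls,
    }

# A 10 m x 20 m footprint with noisy fused-DEM heights around 9 m,
# including one outlier (14.5) that the median ignores:
model = extrude_lod1([(0, 0), (10, 0), (10, 20), (0, 20)], [8.7, 9.0, 9.2, 9.1, 14.5])
```

The choice of the median rather than the maximum of the DEM samples reflects the abstract's point that globally available DEMs are noisy; a production pipeline would also handle non-flat terrain per building.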

Open Access Article
Registration of Multi-Sensor Bathymetric Point Clouds in Rural Areas Using Point-to-Grid Distances
ISPRS Int. J. Geo-Inf. 2019, 8(4), 178; https://doi.org/10.3390/ijgi8040178 - 5 Apr 2019
Cited by 2
Abstract
This article proposes a method for the registration of two point clouds with different point densities and noise levels, recorded by airborne sensors in rural areas. In particular, multi-sensor point clouds with different point densities are considered. The proposed method is marker-less and uses segmented ground areas for registration; in general, such registration is solved with extensive use of control points. The source point cloud is used to calculate a DEM of the ground, which is further used to calculate point-to-raster distances for all points of the target point cloud. Furthermore, each cell of the raster DEM is assigned a height variance, further addressed as reconstruction accuracy, calculated during gridding. Outlier removal based on a dynamic distance threshold provides robustness against noise and small geometry variations. The transformation parameters are calculated with an iterative least-squares optimization of the distances, weighted with respect to the reconstruction accuracies of the grid. The proposed approach thus offers the possibility of fusing point clouds from different sensors in rural areas with the accuracy of fine registration. Evaluations consider two flight campaigns over the Mangfall area in Bavaria, Germany, taken with different airborne LiDAR sensors with different point densities. The accuracy of the proposed approach is evaluated both on a whole flight strip of approximately eight square kilometers and in a closer look at selected scenes. For all scenes, it obtained an accuracy of the rotation parameters below one tenth of a degree and an accuracy of the translation parameters below the point spacing and the chosen cell size of the raster. Furthermore, the possibility of registering airborne LiDAR and photogrammetric point clouds from UAV-taken images is shown, with a similar result. The evaluation also shows the robustness of the approach in scenes where a classical iterative closest point (ICP) algorithm fails.
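The core point-to-grid idea can be illustrated compactly: the source cloud is rasterized into a DEM with a per-cell height variance, and target points are compared against it with variance-based weights. The single vertical translation parameter below is an illustrative simplification of the paper's full least-squares over rotation and translation; cell lookup and function names are assumptions.

```python
# Minimal sketch of point-to-grid registration: target points are compared to a
# raster DEM built from the source cloud, and a vertical shift is estimated
# from variance-weighted point-to-cell distances.

def rasterize(points, cell):
    """Build a DEM grid: per cell, the mean height and the height variance."""
    bins = {}
    for x, y, z in points:
        bins.setdefault((int(x // cell), int(y // cell)), []).append(z)
    grid = {}
    for key, zs in bins.items():
        mean = sum(zs) / len(zs)
        var = sum((z - mean) ** 2 for z in zs) / len(zs)
        grid[key] = (mean, var)
    return grid

def estimate_dz(grid, target, cell, floor=1e-4):
    """Weighted least-squares estimate of the vertical offset dz."""
    num = den = 0.0
    for x, y, z in target:
        key = (int(x // cell), int(y // cell))
        if key not in grid:
            continue                      # point falls outside source coverage
        mean, var = grid[key]
        w = 1.0 / (var + floor)           # trust well-reconstructed cells more
        num += w * (mean - z)
        den += w
    return num / den

source = [(0.2, 0.3, 10.0), (0.8, 0.4, 10.2), (1.5, 0.5, 11.0), (1.9, 0.1, 11.2)]
target = [(0.5, 0.5, 9.6), (1.6, 0.3, 10.6)]  # same terrain, shifted down 0.5 m
dz = estimate_dz(rasterize(source, cell=1.0), target, cell=1.0)
```

Weighting by the inverse of the per-cell variance is what makes the grid's "reconstruction accuracy" enter the optimization, as the abstract describes; the paper additionally iterates and removes outliers with a dynamic threshold.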

Open Access Article
A New, Score-Based Multi-Stage Matching Approach for Road Network Conflation in Different Road Patterns
ISPRS Int. J. Geo-Inf. 2019, 8(2), 81; https://doi.org/10.3390/ijgi8020081 - 13 Feb 2019
Cited by 2
Abstract
Road-matching processes establish links between multi-sourced road lines representing the same entities in the real world. Several road-matching methods have been developed over the last three decades. The main issue in this process is selecting the most appropriate method; this selection depends on the data and requires a pre-process (i.e., accuracy assessment). This paper presents a new matching method for roads composed of different patterns. The proposed method matches road lines incrementally (i.e., from the most similar matching to the least similar). In the experimental testing, three road networks in Istanbul, Turkey, composed of tree, cellular, and hybrid patterns and provided by the municipality (authority), OpenStreetMap (volunteered), TomTom (private), and Basarsoft (private), were used. The similarity scores were determined using Hausdorff distance, orientation, sinuosity, mean perpendicular distance, mean length of triangle edges, and a modified degree of connectivity. While the first four stages determined certain matches with regard to the scores, the last stage determined them with a criterion based on overlapping areas among the buffers of the candidates. The results were evaluated against manual matching. According to the precision, recall, and F-values, the proposed method gives satisfactory results on different types of road patterns.
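Two of the similarity measures named in this abstract, sinuosity and orientation, are simple to compute for a polyline and can be combined into a score. The normalization, weights, and function names below are illustrative, not the paper's actual scoring scheme.

```python
# Minimal sketch of polyline similarity scoring for road matching,
# using sinuosity and chord orientation as example measures.
import math

def length(line):
    """Total length of a polyline given as a list of (x, y) vertices."""
    return sum(math.dist(a, b) for a, b in zip(line, line[1:]))

def sinuosity(line):
    """Curve length divided by the straight-line distance between endpoints."""
    return length(line) / math.dist(line[0], line[-1])

def orientation(line):
    """Angle of the endpoint-to-endpoint chord, in degrees in [0, 180)."""
    dx = line[-1][0] - line[0][0]
    dy = line[-1][1] - line[0][1]
    return math.degrees(math.atan2(dy, dx)) % 180.0

def similarity_score(a, b):
    """Higher is more similar; each term is normalized to [0, 1]."""
    sin_term = 1.0 - min(1.0, abs(sinuosity(a) - sinuosity(b)))
    ang = abs(orientation(a) - orientation(b))
    ang = min(ang, 180.0 - ang)           # undirected lines: angles mod 180°
    ori_term = 1.0 - ang / 90.0
    return (sin_term + ori_term) / 2.0

straight = [(0, 0), (10, 0)]
wiggly = [(0, 0), (3, 1), (6, -1), (10, 0)]
```

In a multi-stage scheme like the one described, such scores would rank candidate pairs so that the most confident matches are committed first and later stages only consider what remains.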

Open Access Article
Dynamic Monitoring of Forest Land in Fuling District Based on Multi-Source Time Series Remote Sensing Images
ISPRS Int. J. Geo-Inf. 2019, 8(1), 36; https://doi.org/10.3390/ijgi8010036 - 16 Jan 2019
Cited by 2
Abstract
Time series remote sensing images can be used to monitor the dynamic changes of forest lands. Due to persistent cloud cover and fog, a single sensor typically provides limited data for dynamic monitoring. This problem is solved by combining observations from multiple sensors to form a time series (a satellite image time series). In this paper, the pixel-based multi-source remote sensing image fusion (MulTiFuse) method is applied to combine the Landsat time series and Huanjing-1 A/B (HJ-1 A/B) data in the Fuling district of Chongqing, China. The fusion results are further corrected and improved with spatial features. Dynamic monitoring and analysis of the study area are subsequently performed on the improved time series data using a combination of the Mann-Kendall trend detection method and Theil-Sen slope analysis. The monitoring results show that a majority of the forest land (60.08%) experienced strong growth during the 1999–2013 period. Accuracy assessment indicates that dynamic monitoring using the fused image time series produces results with relatively high accuracies.
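The two trend tools named in this abstract are standard and compact enough to sketch: the Mann-Kendall S statistic counts the signs of all pairwise forward differences, and the Theil-Sen estimator takes the median of all pairwise slopes. The toy pixel series below is illustrative; a full analysis would also compute the variance of S and a significance level.

```python
# Minimal sketch of Mann-Kendall trend detection and Theil-Sen slope
# estimation, applied to one pixel's vegetation-index-like time series.

def mann_kendall_s(series):
    """Sum of signs of all pairwise forward differences; S > 0 = upward trend."""
    s = 0
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            d = series[j] - series[i]
            s += (d > 0) - (d < 0)
    return s

def theil_sen_slope(series):
    """Median of all pairwise slopes; robust to outliers in the series."""
    slopes = sorted(
        (series[j] - series[i]) / (j - i)
        for i in range(len(series))
        for j in range(i + 1, len(series))
    )
    n = len(slopes)
    mid = n // 2
    return slopes[mid] if n % 2 else (slopes[mid - 1] + slopes[mid]) / 2.0

pixel = [0.31, 0.33, 0.30, 0.36, 0.38, 0.41, 0.40, 0.45]   # toy fused series
s = mann_kendall_s(pixel)
slope = theil_sen_slope(pixel)
```

Run per pixel over the fused image time series, a positive S with a positive Theil-Sen slope marks growth, which is how a "majority of the forest land experienced strong growth" type of map is derived.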

Open Access Article
Multisource Hyperspectral and LiDAR Data Fusion for Urban Land-Use Mapping based on a Modified Two-Branch Convolutional Neural Network
ISPRS Int. J. Geo-Inf. 2019, 8(1), 28; https://doi.org/10.3390/ijgi8010028 - 14 Jan 2019
Cited by 10
Abstract
Accurate urban land-use mapping is a challenging task in the remote-sensing field. With the availability of diverse remote sensors, the synthetic use and integration of multisource data provide an opportunity to improve urban land-use classification accuracy. Deep neural networks have achieved very promising results in computer-vision tasks, such as image classification and object detection. However, the problem of designing an effective deep-learning model for the fusion of multisource remote-sensing data still remains. To tackle this issue, this paper proposes a modified two-branch convolutional neural network for the adaptive fusion of hyperspectral imagery (HSI) and Light Detection and Ranging (LiDAR) data. Specifically, the proposed model consists of an HSI branch and a LiDAR branch, sharing the same network structure to reduce the time cost of network design. A residual block is utilized in each branch to extract hierarchical, parallel, and multiscale features. An adaptive feature-fusion module, based on "Squeeze-and-Excitation Networks", is proposed to integrate HSI and LiDAR features in a more reasonable and natural way. Experiments indicate that the proposed two-branch network performs well, with an overall accuracy of almost 92%. Compared with single-source data, the introduction of multisource data improves accuracy by at least 8%. The adaptive fusion model also increases classification accuracy by more than 3% compared with the feature-stacking method (simple concatenation). The results demonstrate that the proposed network can effectively extract and fuse features for better urban land-use mapping accuracy.
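The adaptive-fusion idea, as opposed to the feature-stacking baseline mentioned in the abstract, can be illustrated without a deep-learning framework: each branch's features are re-scaled by a gate derived from the features themselves before concatenation, in the spirit of "Squeeze-and-Excitation". Real implementations learn the gate with a small trained network; the fixed softmax over mean activations below is an illustrative stand-in, not the paper's model.

```python
# Minimal sketch of adaptive feature fusion for two branches (HSI and LiDAR):
# gate each branch by a softmax over its pooled activation, then concatenate.
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def adaptive_fuse(hsi_feat, lidar_feat):
    """Re-scale each branch by a data-dependent gate, then concatenate."""
    gates = softmax([
        sum(hsi_feat) / len(hsi_feat),      # "squeeze": global average per branch
        sum(lidar_feat) / len(lidar_feat),
    ])
    scaled_hsi = [gates[0] * v for v in hsi_feat]     # "excitation": re-scale
    scaled_lidar = [gates[1] * v for v in lidar_feat]
    return scaled_hsi + scaled_lidar                  # fused feature vector

fused = adaptive_fuse([0.9, 0.1, 0.4], [0.2, 0.3, 0.1])
```

Plain feature stacking corresponds to gates fixed at 1.0 for both branches; letting the gates depend on the input is what makes the fusion "adaptive" and, per the abstract, worth more than 3% in accuracy.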

Open Access Article
Fusion of SAR and Multispectral Images Using Random Forest Regression for Change Detection
ISPRS Int. J. Geo-Inf. 2018, 7(10), 401; https://doi.org/10.3390/ijgi7100401 - 10 Oct 2018
Cited by 6
Abstract
To overcome the insufficiency of a single source of remote sensing data in change detection, synthetic aperture radar (SAR) and optical image data can be used together to supplement each other. However, conventional image fusion methods fail to address the differences in imaging mechanisms and cannot overcome practical limitations such as usability in change detection or the temporal requirements on the optical image. This study proposes a new method to fuse SAR and optical images that is expected to be visually helpful and to minimize the differences between the two imaging mechanisms. The algorithm performs the fusion by establishing relationships between SAR and multispectral (MS) images using random forest (RF) regression, which creates a fused SAR image containing the surface roughness characteristics of the SAR image and the spectral characteristics of the MS image. The fused SAR image is evaluated by comparing it to images obtained using conventional fusion methods; the comparison shows that both spectral and spatial quality are improved significantly. Furthermore, for verification, other ensemble approaches, such as stochastic gradient boosting regression and adaptive boosting regression, are compared, and overall the performance of RF regression is confirmed to be superior. Change detection between the fused SAR and MS images is then performed and compared with change detection between MS images and between SAR images; the result using fused SAR images is similar to the result with MS images and improved relative to the result between SAR images. Lastly, the proposed method is confirmed to be applicable to change detection.
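The regression-fusion idea described here reduces to: learn a per-band mapping from co-registered SAR/MS pixel pairs, then apply it to the SAR image to produce an MS-like band. For a dependency-free illustration, an ordinary least-squares line stands in for the paper's random forest; with scikit-learn, one would fit `RandomForestRegressor` on the same pairs instead. All names and the toy pixel values are assumptions.

```python
# Minimal sketch of regression-based SAR->MS fusion, with a simple OLS line
# standing in for the random forest regressor used in the paper.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a * x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def fuse_band(sar_train, ms_train, sar_image):
    """Predict an MS-like band from SAR backscatter via the fitted mapping."""
    a, b = fit_line(sar_train, ms_train)
    return [a * x + b for x in sar_image]

# Toy co-registered training pixels (SAR backscatter -> red-band reflectance):
sar_train = [0.10, 0.20, 0.30, 0.40]
ms_train = [0.15, 0.25, 0.35, 0.45]    # here exactly SAR + 0.05
fused_band = fuse_band(sar_train, ms_train, [0.25, 0.35])
```

A non-parametric regressor such as a random forest captures the strongly non-linear SAR-to-reflectance relationship that a single line cannot, which is the point of the paper's comparison against boosting variants.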
