Multi-Source Geo-Information Fusion in Transition: A Summer 2019 Snapshot

Since the launch of Landsat-1 in 1972, the scientific domain of geo-information has been incrementally shaped through different periods, driven by technology evolutions: in devices (satellites, UAVs, IoT), in sensors (optical, radar, LiDAR), in software (GIS, WebGIS, 3D), and in communication (Big Data). Land cover and disaster management remain the main big issues where these technologies are highly required. Data fusion methods and tools have been adapted progressively to new data sources, which are growing in volume, variety, and accessibility. This Special Issue gives a snapshot of the current status of that adaptation, and looks at what challenges are coming soon.


Background. Where Do We Come from?
Defining a special issue is similar to defining a landmark: what is its name, and where is it located? For a special issue, the equivalent questions are: what is its topic, and at what date is it published? Therefore, the question can be formulated this way: "why examine multi-source geo-information fusion in 2019?" As frequently happens with what seems natural or obvious, the question was not asked before the event, but only after it, simply because there must be a reason why the event occurred; hence, a retrospective question is needed.
And, retrospectively, we have an unquestionable landmark in geo-information: the launch of Landsat ERTS-1 in 1972. The release of MSS imagery was rather scarce for several years, and the first special event was the seminal publication "ERTS-1, a new window on our planet" in 1976 [1]. For a decade afterwards, remote sensing (RS) imagery, mostly Landsat, plus Meteosat for continent-wide studies, was the only source of global geo-information. Besides satellite imagery, the beginning of automated cartography was limited to a few large actors, because it required expensive data and software, and therefore highly skilled personnel. The two research communities, RS and auto-carto, were rather distinct, working with different data and different tools, and holding different conferences: remember AutoCarto 6 in Ottawa, 1983 [2], which was influential in the design of future geographic information systems (GIS); the XVth ISPRS congress in Rio, 1984 [3], mainly devoted to instrument design and calibration, where upcoming SAR and SPOT imagery were presented; then the XVIth ISPRS in Kyoto, 1988, with the delivery of the first spatiocartes (spatial maps) as a first attempt to join the two communities [4].
An instructive way of measuring the popularity of a particular topic is the Ngram tool, built on top of the Google database of millions of digitized books. Please note that the Google Ngram corpus unfortunately ends in 2008. Analyzing occurrences of terms in the general literature is a proxy for the visible part of the related research activity, as shown in reference [5]: we can interpret the appearance of a new Ngram as the beginning of a new research field, and interpret important variations (e.g., inflection points) as scientific paradigm shifts. We should note that the delay for dissemination of a scientific term into public awareness is short enough to be neglected.
Let's check the terms remote sensing, geographic information system and data fusion. Figure 1 plots their evolution over 1960-2008. We can distinguish the period 1972-1987, between the Landsat-1 launch and the inflection point in the use of the term GIS: we name it the "decade-and-a-half of remote sensing" (the RS15's), a period when a totally new research activity was founded, and when no competing activity was mature enough to broaden the topic into geo-information, or geomatics (which gives very similar results). The term geographic information is almost always combined within geographic information system, appearing only as traces during the RS15's, slightly increasing, then dramatically inflecting its curve between 1985-88 (representing more than a 230% increase in the general literature).
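This inflection-point reading can be sketched numerically: the year where the second difference of a term's yearly frequency peaks marks the candidate paradigm shift. The series below is synthetic and purely illustrative, not actual Google Ngram counts.

```python
import numpy as np

# Synthetic yearly frequencies mimicking a term that inflects sharply in 1985
# (slow linear growth before, steep growth after); illustrative only.
years = np.arange(1970, 1996)
freq = np.where(years < 1985,
                0.10 + 0.01 * (years - 1970),
                0.25 + 0.08 * (years - 1985))

# The inflection candidate is where the second difference is largest,
# i.e., where the growth rate changes most abruptly.
accel = np.diff(freq, n=2)
inflection_year = int(years[np.argmax(accel) + 1])
print(inflection_year)
```

Applied to real Ngram exports, the same two lines of NumPy would recover the 1985-88 inflection discussed above.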
If one refines this evolution, one can see in Figure 2 (Left) that "automated cartography", a term that was in vogue between 1968 and 1988, was then replaced rather abruptly in 1985 by geo-information, or geomatics. Similar behavior can be observed (Right) with data assimilation, alone during the RS15's, then progressively surpassed by data fusion. The multiplicity of data fusion terms has been studied in reference [6]. Therefore, the scientific activity related to data fusion is clearly a consequence of a convergence:
• the larger availability of GIS and easier software made digital cartography more affordable, which led to the creation of academic geomatics laboratories in several countries, and the creation of the Association of Geographic Information Laboratories for Europe (AGILE) in 1998;
• more and more RS data: SPOT-1 launched in 1986, the synthetic aperture radar ERS-1 launched in 1991, as well as new sensors, new wavelengths and new radar polarizations. These new devices required a whole new range of processing methods, algorithms, format conversion procedures, forms of registration, structured recording, etc. The international research community initiated large-scale experiments to specifically explore the fusion of multi-source geo-information. This was the time of the several Hydrologic and Atmospheric Pilot Experiments (HAPEX) [7], and of the Alpilles-ReSeDA (Remote Sensing Data Assimilation) European project [8].
We will show how the profile of the mainstream "geo-info scientist", though not named as such at the beginning, gradually switched from that of a signal processing scientist to the more versatile profile of what we today call a data scientist. We will show how this Special Issue contributes to that evolution.

Main Objectives: What's at Stake?
Geo-information has been, and still is, used in most application domains that have a daily impact on human life: from public health, transportation, and climate/weather, to biodiversity, conservation, or archaeology. This is sometimes called the Digital Earth paradigm [9]. At the same time, and probably as a result of these stakes, current technology development, and new opportunities, the Big Data ecosystem is enabling that paradigm: increasing the variety of sources, the volumes of data, and their accessibility and velocity. This doesn't mean that it is merely easier to use the already developed tools and algorithms. On the contrary, it opens new challenges: how to integrate data which weren't purposely designed to work together? How to merge so many data while maintaining an acceptable quality in the result of the fusion?
Among these application domains, some have evolved since the RS15's and are still on the edge of research, remaining as crucial as they were forty to fifty years ago. Here are two examples.

Land Cover Assessment
Assessing what is located at any place on the Earth has been a major human task throughout history. It is an inescapable task, renewed on a regular basis, in order to know precisely what is there and how it has evolved. At every stage of technological development, there has been a new way to perform land cover assessment: from Columbus to the first flying machines, and of course, with the first Landsat MSS images and the first digital maps.
The CORINE program of the European Commission (Coordination of Information on the Environment) was launched in 1985 [10] to coordinate the compilation of data on the state of the environment, provide a land cover inventory of the entire EU, and ensure that information is compatible between the Member States, for use at a scale of 1:100,000.
The methodology is based on the fusion and semi-automated interpretation of SPOT, Landsat MSS and Landsat TM imagery. In 2017, an assessment report was issued [11], indicating the main trends observed and their environmental impacts:

•
Urban and infrastructure expansion between 1990 and 2012, which has affected land with productive soil and fragmented the existing landscape structure.

•
The EU's agricultural land, often in favorable locations, has been decreasing at an average rate of 1000 km² per year: associated biodiversity and traditional landscapes are affected by land take, agricultural intensification and farmland abandonment, cf. Figure 3.

•
The forest area remains stable, while indicating an intensification of forest-land use. This may lead to a declining quality of forest ecosystems. Looking at Figure 3, we can detect high changes in several former Eastern European countries (Hungary, the Baltic states, the ex-DDR, the Czech Republic), in Ireland and the Po valley, and also in western Germany. We can also notice low or very low change in some countries (Austria, the UK, and Greece). We can suspect that the "compatibility of information between all member states" may still be improved: research in data fusion is still in progress [12], especially following the decision to add new High Resolution Layers (HRL) as a complement to the CORINE Land Cover product [13].

Disaster Response Management
The European, French and Canadian space agencies (ESA, CNES, CSA) signed the International Charter "Space and Major Disasters" on 20 October 2000, and were followed by 14 major space agencies (USA, India, Japan, Russia, China, among others). The Charter had been activated 611 times, as of July 2019, for tsunamis, earthquakes, floods, cyclones, major fires, etc. (https://disasterscharter.org).
One example is the earthquake in Haiti on 13 January 2010: the activation allowed the acquisition of 34 high-resolution images from the satellites SPOT-5, GeoEye-1, QuickBird-2, KOMPSAT-2 and ALOS AVNIR-2, over and around the city of Port-au-Prince. Figure 4 plots the coverage of these 34 images over an OSM (OpenStreetMap) background.
Capturing this tragedy required the commitment of about 600 volunteers, students and academics, working on the control and update of local information, who sent the information mostly via email, and on its day-by-day edition as OSM data. The geo-localization of on-site contributors was possible thanks to GPS, which, like email, did not depend on relay antennas that were out of order. Therefore, what is referred to as crowd-sourced geo-information [14] was massively produced to locate gathering points, clean water points, road obstacles, etc., as a complement to the imagery [15]. The project "Haiti Post-earthquake Response and Recovery 2010-11" was the first of the Humanitarian OSM Team, HOT (hotosm.org), a volunteer-based group that applies the principles of open source and open data to humanitarian responses and economic development. Volunteered geographic information (VGI) largely benefited from that international recognition, and then became an important new actor in geo-information.
The second deadliest disaster of the 21st century, the 26 December 2004 Indian Ocean earthquake and tsunami, also triggered another new source of geo-information: a United Nations conference decided to establish an International Early Warning Programme (IEWP), and started spreading a network of seismic sensors and data buoys all over the Indian Ocean, with automatic stations which fuse those data to issue alerts whenever needed (e.g., tsunamis in 2012 and 2018).


Progress Status: What's Up and What's New?
In recent years, many publications have been explicitly mentioning geo-information fusion.Others have also been doing so, but using a different terminology (map revision, evolution assessment, etc.).

Reviews and Literature Surveys on Specific Data Fusion Subtopics
Besides the present Special Issue, which we detail in the next subsection, other research journals have published reviews focused on several aspects of geo-information fusion.
Here are a few examples:
• An IJGI special issue on "GEOBIA in a Changing World" (14 papers in 2018) [16], investigating how GEographic Object-Based Image Analysis can complement pixel-based analysis. Basically, GEOBIA works with both the original pixels and an overlaid areal or choropleth map, i.e., a vector representation automatically computed by a segmentation algorithm. This approach is particularly praised in land-cover studies, which are, by nature, object-based (crop fields, built-up areas, etc.), for instance: mapping rooftop vegetation [17].
• Beyond the strict scientific domain of geo-information, the International Society of Information Fusion (isif.org) has held annual conferences on general aspects of fusion since 1998, and has included geo-information in the most recent ones, for instance, in 2018: fusion of SAR interferometry and Street View pictures [25], multi-temporal UAVs [26], and a variety of approaches such as subjective logics [27], a modified Dempster-Shafer approach, or Kohonen-based credal sets [28]. In that "Fusion 2018" conference, many papers discussed an intense research effort in autonomous vehicle driving, obstacle detection, and multi-target tracking.
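The GEOBIA idea of fusing raw pixels with an object overlay can be sketched minimally: given a label map (here a hypothetical two-segment partition standing in for a real segmentation output), attributes are computed per object rather than per pixel.

```python
import numpy as np

# Hypothetical data: one raster band (e.g., NDVI) and a label map that a real
# GEOBIA pipeline would produce with a segmentation algorithm.
rng = np.random.default_rng(4)
pixels = rng.random((8, 8))            # per-pixel values
labels = np.zeros((8, 8), dtype=int)   # two toy "objects": left and right halves
labels[:, 4:] = 1

# Object-based analysis: summarize the underlying pixels for each segment.
object_means = {int(k): float(pixels[labels == k].mean())
                for k in np.unique(labels)}
print(object_means)
```

A classifier then works on these per-object attributes (mean, texture, shape), which is what makes the approach natural for inherently object-based land-cover targets such as crop fields or built-up areas.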

What Does This Special Issue Bring to the Reader?
This Special Issue is representative of the present variety of objectives and solutions studied by the global community of researchers within geo-information.
It addresses mostly the classical application domains of geo-information, namely: land cover and land cover change detection, which includes rural areas and forests, as well as more recent facets of urban land cover, and also road network detection and correction.
And it brings some new insights in the following methods and tools:
• Random Forest Regression [29]: applied individually to an RGB multispectral image and to a SAR image pre-processed by gray-level co-occurrence matrix descriptors (e.g., energy, homogeneity, angular second moment). Independent classes are first obtained by K-means; then training samples are selected and used in the learning process, determining the RF for each class, before applying the resulting RF to the entire image. This results in the fusion of the surface roughness characteristics of the SAR image and the spectral characteristics of the MS image;
• Two-Branch Convolutional Neural Network [30]: CNN layers are applied to the respective input features: a principal component analysis (PCA) is performed on a hyperspectral 144-band image (branch 1; image provided at the IEEE-GRSS 2013 data fusion contest), followed in the next layers by a series of filters whose parameters are tuned by supervised learning. The LiDAR image is processed in a similar manner (branch 2). A residual block is utilized in each branch to extract multi-scale features. Then, a fusion module is used to integrate HSI and LiDAR features, based on "squeeze-and-excitation networks" (adaptive adjustment of the weights).

•
Multi-Source Time Series [31]: processing Landsat-7 ETM+ and Huanjing-1 imagery over 13 years with K-means classification, then analyzing the resulting time series with a combination of the Mann-Kendall trend detection method and Theil-Sen slope analysis, to monitor the evolution of NDVI on forest land.
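The trend-analysis step above can be sketched with pure-NumPy versions of the two estimators. The NDVI series below is synthetic (a weak upward trend plus noise), standing in for the paper's per-class annual means; it is not the actual data of reference [31].

```python
import numpy as np

def mann_kendall_s(series):
    """Mann-Kendall S statistic: sum of signs of all pairwise differences.
    S > 0 suggests an increasing trend, S < 0 a decreasing one."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        s += np.sign(series[i + 1:] - series[i]).sum()
    return int(s)

def theil_sen_slope(t, series):
    """Theil-Sen estimator: median of the slopes over all pairs of points,
    robust against outlier years (e.g., cloudy composites)."""
    slopes = [(series[j] - series[i]) / (t[j] - t[i])
              for i in range(len(t)) for j in range(i + 1, len(t))]
    return float(np.median(slopes))

# Hypothetical 13-year annual NDVI means for one K-means forest class.
years = np.arange(2000, 2013, dtype=float)
ndvi = 0.55 + 0.01 * (years - 2000) \
       + np.random.default_rng(1).normal(0, 0.005, 13)

print(mann_kendall_s(ndvi), round(theil_sen_slope(years, ndvi), 4))
```

In the paper's setting, a significantly positive S together with the Theil-Sen slope quantifies greening (or browning) per land cover class.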
The other three papers propose methods and algorithms for assessing distance, either for linear feature matching or for height matching between different sources:

•
Similarity scores for road matching [32]: Hausdorff distance (between vertices), orientation (octant), sinuosity, mean perpendicular distance, mean length of the edges of a triangulated irregular network (TIN), and degree of connectivity (valence of intersections) are the six indices used to compute a similarity score. The sources are the Istanbul Metropolitan authority, OSM, TomTom and Basarsoft (the latter two being private navigation data).

•
Point-to-Grid Distances for Point Clouds [33]: an improvement of the Iterative Closest Point (ICP) algorithm, including point-to-DEM pixel estimation, thus allowing the fusion of a point cloud (LiDAR flight campaign, Apr. 2017) with the digital elevation model computed from the LiDAR flight campaign of Dec. 2012, which has a coarser sampling interval (0.5 versus 0.1 m).

•
3D Reconstruction [34]: prismatic 3D building models are built by adding height values to a 2D footprint (source: OSM). The heights are computed with two different approaches, depending on the availability of sources providing sufficiently high-resolution spatial coverage at an urban scale. The sources are: (1) a pair of independent elevation models (CartoSat-1 and TanDEM-X), whose fusion results in a more accurate reference DEM; (2) a pair of a SAR and an optical image (TerraSAR-X and WorldView-2), processed by stereogrammetry fusion yielding a DEM. High-resolution airborne LiDAR point cloud data are available only in some sub-areas, and serve for accuracy assessment.
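The prismatic modelling in the last item can be sketched as a plain footprint extrusion. The footprint and the height value below are hypothetical stand-ins; in reference [34] the height would come from the fused DEM.

```python
import numpy as np

def shoelace_area(footprint):
    """Area of a simple polygon from its (x, y) vertices (shoelace formula)."""
    x, y = footprint[:, 0], footprint[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Hypothetical OSM footprint (a 10 m x 6 m rectangle) and an assumed height.
footprint = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 6.0], [0.0, 6.0]])
height = 12.0  # metres, assumed (would be read from the fused DEM)

# A prismatic model is simply the footprint extruded vertically: the roof ring
# repeats the footprint vertices at the given height.
roof = np.column_stack([footprint, np.full(len(footprint), height)])
volume = shoelace_area(footprint) * height
print(volume)  # 720.0
```

The appeal of prismatic (LoD1) models is exactly this simplicity: one height attribute per footprint suffices, which is why the accuracy of the fused DEM dominates the quality of the result.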

New Needs, New Challenges, New Issues
We all live on the same planet, we all share the same basic geographic features: land cover (what's there), distance (how to get there), neighborhoods (what's located around there), etc., and we share (we should all share) the same rapidly evolving technology that allows us to gather data and process it to transform it into valuable information. After the RS15's, followed by a decade of GIS expansion and then the spread of Internet mapping [5], the era of Big Data is now transforming the landscape of geo-information: a detailed literature survey is provided in reference [35].

Geo-Information Confronted with the Big Data Paradigm
Limiting the Big Data concept to the simple, now familiar, "Volume-Variety-Velocity" paradigm, it is obvious that Volume and Variety are dramatically expanding in geo-information:
• new RS satellites: the European Sentinel-2 [36], and Earth Observation satellites from many new countries in the last decade, e.g., Dubai, Indonesia, Nigeria, and Venezuela, to cite a few;
• new sensors, passive or active, that bring more repetitive coverage, are more spatially accurate, and have a wider spectral range;
• new, non-azimuthal observation viewpoints, e.g., UAV imagery [37], or "street-view" image series [38], important for 3D urban observation;
• new ground sources of data: active crowdsourcing (VGI), or passive crowdsourcing, e.g., mobile phone locations [39,40];
• and the forthcoming spread of IoT sensors, whose fusion with other geo-information data sources has recently begun to be studied [41]. In the marine domain, Fish Aggregating Devices (FADs), possibly remotely monitored, are a rare source of "ground" information for oceans [42], whose development is growing very fast (though under almost no control).
Therefore, "Velocity" is now within reach too for geo-information fusion, using a combination of multiple satellites for highly repetitive remote observation (see below: constellations and data cubes), and possibly using near real-time updates from ground sensors, street cameras, etc.
International organizations are working on ways to provide directly usable, very large and compatible datasets (e.g., "Landsat-like") from that huge stack of RS images:
• the CEOS "virtual constellations": an inter-agency collaboration coordinating space-based and ground-based data delivery systems to meet a common set of requirements within specific domains: atmospheric composition, land surface imaging, precipitation, and ocean (sea surface topography and temperature, color, wind). Regarding land surface imaging, the "Analysis Ready Data for Land" are satellite data processed to a minimum set of requirements and organized into an interoperable form that allows immediate analysis with a minimum of additional user effort [43];
• the "data cube" solution: an online analytical processing (OLAP) approach, piling up data layers over a 2D geographic grid, making cubes with many dimensions [44].
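The data cube idea can be sketched in a few lines: analysis-ready layers piled over one shared 2D grid, indexed by (time, y, x). The array below is a hypothetical stand-in; real implementations (e.g., the Open Data Cube software) add georeferencing, lazy loading and OLAP-style queries on top.

```python
import numpy as np

# A minimal "data cube": three monthly composites stacked over one 64 x 64 grid.
h, w = 64, 64
dates = ["2019-05", "2019-06", "2019-07"]
rng = np.random.default_rng(3)
cube = rng.random((len(dates), h, w))  # e.g., monthly NDVI layers

# Per-pixel temporal mean across the cube: an "immediate analysis" that is
# only this easy because every layer shares the same grid and projection,
# which is precisely what Analysis Ready Data guarantee.
mean_map = cube.mean(axis=0)
print(mean_map.shape)  # (64, 64)
```

Any per-pixel statistic (trend, anomaly, change date) becomes a single reduction along the time axis, which is the practical payoff of the cube organization.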
The dizzying profusion of data, and the ability to process them in near real-time, is requiring more and more data fusion studies, in order to allow geo-information to fully enter the Big Data era.

Geo-Information to the Industry, and Back
As an example, let's consider how rapid the evolution in data fusion technology can be when money is there: the case of autonomous vehicles, as illustrated by Figure 5.

The author of reference [45] writes: "The challenge of detecting and tracking multiple objects of interest, using multiple disparate sensors, can generally be broken down into the following steps: aligning the data from different sensors from both a spatial and temporal perspective; associating the data recorded by disparate sources against the same object of interest; mathematically fusing the different pieces of data/information."
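The three quoted steps can be walked through on toy data: two hypothetical sensors observe the same moving target at different rates; the series, thresholds and noise variances below are all assumed for illustration.

```python
import numpy as np

# Two hypothetical sensors reporting 1D positions at different timestamps.
t_a = np.array([0.0, 1.0, 2.0]); pos_a = np.array([0.0, 1.0, 2.1])   # sensor A
t_b = np.array([0.5, 1.5, 2.5]); pos_b = np.array([0.4, 1.4, 2.6])   # sensor B

# Step 1, alignment: resample sensor B onto sensor A's timestamps.
pos_b_aligned = np.interp(t_a, t_b, pos_b)

# Step 2, association: a simple gating test decides both tracks are the same
# object (in multi-target settings this becomes an assignment problem).
assert np.all(np.abs(pos_a - pos_b_aligned) < 0.5)

# Step 3, fusion: inverse-variance weighting of the two measurements
# (the variances stand in for real sensor noise models).
var_a, var_b = 0.04, 0.09
fused = (pos_a / var_a + pos_b_aligned / var_b) / (1 / var_a + 1 / var_b)
print(fused.round(2))
```

The fused track always lies between the two inputs, closer to the less noisy sensor, which is the elementary mechanism behind the multi-sensor pipelines of Figure 5.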

Sounds familiar, doesn't it?
A lot of research is presently conducted in the field of autonomous vehicle software; therefore, the field of multi-source geo-information fusion should benefit from the expected improvements brought about by the automobile industry.
Stay tuned! The future of multi-source geo-information fusion is coming fast.

Conclusions
Lewis Carroll, in chapter 11 of Sylvie and Bruno Concluded [46], describes an imaginary country improving its best map (six inches to the mile), first to six yards to the mile, then up to a mile to the mile: but the map "has never been spread out", because farmers complained that it would block out the Sun and crops would fail.
Is this a joke? Consider how far we have come since the Landsat launch in 1972. A digital map at a mile-to-the-mile scale today? Is it really "Mission: Impossible"?
With digital information, we wouldn't block the Sun from shining on the crops... but the energy consumption of data centers may reach unsustainable volumes in a not-too-far future, and being scrutinized everywhere and all the time (and recorded from cradle to tomb) may someday become as unbearable for us as it was for the farmers of Lewis Carroll.
The future is uncertain, that's what makes it the future.


Figure 1. The "remote-sensing decade-and-a-half" and the beginning of GIS and data-fusion.

Figure 2. (Left) geo-information and geomatics replace automated cartography. (Right) data fusion joins data assimilation, when Landsat stops being prominent, then takes over.

Figure 5. Multi-sensor data fusion and machine learning are hugely important for detecting and identifying both static and dynamic targets. Excerpted from reference [45].
