Automation of Pan-Sharpening Methods for Pléiades Images Using GIS Basic Functions

Abstract: Pan-sharpening methods allow the transfer of the higher resolution of panchromatic images to multispectral ones of the same scene. Different approaches are available in the literature, and only a part of them is included in remote sensing software for automatic application. In addition, the quality of the results supplied by a specific method varies according to the characteristics of the scene; as a consequence, different algorithms must be compared to find the best performing one. Nevertheless, pan-sharpening methods can be applied using GIS basic functions in the absence of specific pan-sharpening tools, but this operation is expensive and time-consuming. This paper aims to explain the approach implemented in Quantum GIS (QGIS) for the automatic pan-sharpening of Pléiades images. The experiments are carried out on data concerning the Greek island of Lesbos. In total, 14 different pan-sharpening methods are applied to reduce the pixel dimensions of the four multispectral bands from 2 m to 0.5 m. The automatic procedure involves basic functions already included in GIS software; it also permits the evaluation of the quality of the resulting images by supplying the values of appropriate indices. The results demonstrate that the approach provides the user with the best performing method every time, so the best possible fused products are obtained with minimal effort in a reduced timeframe.


Introduction
If close-range geomatics techniques are useful for the survey and investigation of civil engineering constructions, such as buildings, bridges and water towers [1], satellite remote sensing is traditionally suitable to support studies on geographic areas, e.g., urban growth effects [2,3], glacier inventory [4,5], desertification [6,7], grassland monitoring [8,9], burned area detection [10,11], seismic damage assessment [12,13], land deformation monitoring, e.g., landslides [14], land subsidence [15], coastal changes [16], etc. However, a detailed investigation of the Earth's surface and land cover can be performed using Very High Resolution (VHR) satellite images, characterised by a pixel dimension of panchromatic (PAN) data equal to or less than 1 m. Generally, VHR sensors carried on a satellite can also capture multispectral (MS) images that have a lower resolution than PAN [17,18]. In fact, typical spectral imaging systems supply multiband images of high spatial resolution at a small number of spectral bands or multiband images of high spectral resolution with a lower spatial resolution [19]. Since MS bands are required for many applications, it is desirable to increase the geometric resolution of MS images. This operation is possible by pan-sharpening, which allows the pixel size of a PAN image to be combined with the radiometric information of MS images at a lower spatial resolution [18,20-23]. Pan-sharpening is usually applied to images from the same sensor but can also be used for data supplied by different sensors [24,25].
In the framework of the multi-representation of geographical data [26], pan-sharpened images are the most detailed layer of information acquired from space. The field of use of pan-sharpened images is very broad. We can distinguish at least three different macro-areas: visualisation, classification and feature extraction.
The first includes the production of orthophotos: substituting the panchromatic image in grey level with a Red-Green-Blue (RGB) true-colour composition based on the respective multispectral pan-sharpened bands allows the user to have a better vision of the scene. In fact, most of the high-resolution imagery in Google Earth Maps is DigitalGlobe QuickBird imagery, which is roughly 65 cm pan-sharpened (65 cm panchromatic at nadir, 2.62 m multispectral at nadir) [27].
Supervised classification algorithms applied on pan-sharpened images produce a more detailed thematic map than in the case of the initial images. However, pan-sharpened products cannot be considered at the same level as real sensor products. In fact, the procedure introduces distortions of the radiometric values, and this influences the classification accuracy. Nevertheless, the benefit of the enhanced geometric resolution is higher than the loss of the radiometric match. Pan-sharpened images are very advantageous to support land cover classification [28], and often they are integrated with other data to perform a better investigation of the considered area, e.g., SAR images [29] to detect environmental hazards [30].
The injection of PAN image details into multispectral images enables the user to perform the geospatial feature extraction process, which has been the subject of extensive research in the last decades. In 2006, Mohammadzadeh et al. [31] proposed an approach based on fuzzy logic and mathematical morphology to extract main road centrelines from pan-sharpened IKONOS images: the results were encouraging, considering that the extracted road centrelines had an average error of 0.504 pixels and a root-mean-square error of 0.036 pixels. More recently (2020), Phinzi et al. [32] applied Machine Learning (ML) algorithms to a Système Pour l'Observation de la Terre (SPOT-7) image to extract gullies. They compared three commonly used ML algorithms, namely Linear Discriminant Analysis (LDA), Support Vector Machine (SVM) and Random Forest (RF); the pan-sharpened product from the SPOT-7 multispectral image successfully discriminated gullies, with an overall accuracy > 95%.
Several methods for pan-sharpening are described in the literature, and the most frequently used of them are implemented in remote sensing software. Most of them are based on steps that can be easily executed using typical Map Algebra algorithms and raster processes that are present in GIS software. Coined by Dana Tomlin [33], Map Algebra is a framework that includes operations and functions allowing the production of a new raster layer starting from one or more raster layers ("maps") of similar dimensions. Depending on the spatial neighbourhood, Map Algebra operations and functions are distinguished into four groups: local, focal, global and zonal. Local ones work on individual pixels; focal ones work on pixels and their neighbours; global ones work on the entire layer; zonal ones work on areas of pixels presenting the same value [34]. Map Algebra allows basic mathematical functions like addition, subtraction, multiplication and division, as well as statistical operations such as minimum, maximum, average and median. GIS systems implement Map Algebra concepts, e.g., ArcGIS in Python (ESRI), MapInfo in MapBasic [35] and GRASS GIS in the C programming language. Finally, Map Algebra operators and functions are available as specific algorithms in GIS software but can be combined into a procedure or script to perform complex tasks [34].
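To make the four groups of Map Algebra operations concrete, the following minimal numpy sketch illustrates a local, a focal and a global operation on small invented raster layers (the values and layer names are hypothetical, chosen only for illustration):

```python
import numpy as np

# Two hypothetical 4x4 raster layers ("maps") with the same extent;
# the values are invented for illustration.
dem = np.array([[1., 2., 3., 4.],
                [2., 3., 4., 5.],
                [3., 4., 5., 6.],
                [4., 5., 6., 7.]])
offset = np.ones_like(dem)

# Local operation: works pixel by pixel (Map Algebra addition).
local_sum = dem + offset

# Focal operation: each pixel becomes the mean of its 3x3 neighbourhood
# (edges handled by replicate padding).
def focal_mean(layer):
    padded = np.pad(layer, 1, mode="edge")
    rows, cols = layer.shape
    windows = [padded[i:i + rows, j:j + cols]
               for i in range(3) for j in range(3)]
    return sum(windows) / 9.0

smoothed = focal_mean(dem)

# Global operation: a single statistic computed over the whole layer.
layer_mean = dem.mean()
```

Zonal operations would additionally require a zone layer grouping pixels by value; the same array style extends to them naturally.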
In this paper, the attention is focused on the possibility of automating the pan-sharpening process of VHR satellite images, e.g., Pléiades images, based on raster utilities present in Quantum GIS (QGIS) [36], a free and open-source GIS software. Particularly, the graphical modeller, a simple and easy-to-use interface, is employed to include different phases and algorithms in a single process to facilitate the pan-sharpening application.
Experiments to test the performance of the automatic procedure are developed on Pléiades imagery concerning Lesbos, a Greek island located in the north-eastern Aegean Sea. The remainder of this paper is organised as follows. Section 2 describes the 14 pan-sharpening methods and the 7 quality indices chosen for this work (Correlation Coefficient (CC), Universal Image Quality Index (UIQI), Root-Mean-Square Error (RMSE), Relative Average Spectral Error (RASE), Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), Spatial Correlation Coefficient (SCC), Zhou Index (ZI)). Section 3 explains the experimental procedure: first, a very brief description of the main characteristics of the Pléiades images used for this study is supplied; then, the implementation of the fusion techniques in the QGIS graphical modeller is illustrated; finally, the procedure steps are explained. Section 4 presents and discusses the results of the automation of the pan-sharpening method application, highlighting the relevance of quality index calculation and comparison to support the choice of the best-fused products in relation to the user's purposes; particularly, a multi-criteria analysis based on weight attribution to each quality index is proposed as a methodological tool. Section 5 summarises the proposed approach and remarks on its efficiency in light of the good results.

Pan-Sharpening Methods
Many pan-sharpening methods are available in the literature to fuse the high spectral resolution of an MS image with the high spatial resolution of a PAN image [37,38]. The pan-sharpening methods can be generalised as the injection of spatial details derived from the PAN image into the up-sampled MS images to obtain high spatial resolution MS images. Currently, the focus is on reducing the spectral distortions of fused images, optimising the spatial details derived from the PAN image, as well as optimising the weights by which the spatial details are multiplied during the injection [39].
Because the methods are so numerous, it is very difficult to classify them into a few categories that help the reader understand the different types of approaches for transferring the spatial detail of PAN images into MS images. Attempts in this regard and for this purpose [22,40] identify three different groups, namely component substitution (CS), modulation-based (MB), and multi-resolution analysis (MRA).
The CS is a coherent group whose methods are characterised by three steps: transformation of the MS images after their registration to the PAN image; replacement of the component of the new data space most similar to the PAN with the higher resolution image; inverse transformation to the original space to produce the fused images [22]. The CS methods include, among others, the intensity-hue-saturation (IHS) [22], principal component analysis (PCA) [41] and Gram-Schmidt transformation (GS) [42] methods.
The MB category includes methods that are centred on the principle of modulating the spatial detail into MS images by multiplying MS images with the ratio of the PAN image to a synthesised image [43]. The MB group includes, among others, Brovey transformation [44], colour normalisation [45], smoothing filter-based intensity modulation (SFIM) [46] and high-pass filtering (HPF) [41].
The MRA group includes methods that are based on the decomposition of an image into a sequence of signals (or pyramid) with decreasingly informative content by applying a given operator in an iterative way [47]. MRA methods are characterised by three main steps: multi-resolution decomposition (MRA application), i.e., using wavelet transformation [48]; replacement of PAN's approximation coefficients by those of the MS band; inverse multi-resolution transformation [22]. The MRA category includes, among other methods, the additive wavelet fusion algorithm (AWL) [49], the generalised Laplacian pyramid and context-based decision (GLP-CBD) fusion algorithm [50] and the bi-dimensional empirical mode decomposition (BEMD) fusion method [51].
In general, the pan-sharpened images derived by CS methods have high spatial quality but suffer from spectral distortions; on the other hand, images obtained using MRA techniques are not as sharp as those produced by CS methods but present better spectral consistency [47]. However, the above-described classification is somewhat forced, in the sense that, for some methods, it is not easy to establish an exclusive association with one group. For example, the Brovey transformation, included in the MB group, and wavelet transformation, included in the MRA group, are both considered as IHS-like image fusion methods [52].
Even if many pan-sharpening methods seem complex, they can be easily implemented by means of GIS basic functions and used in an appropriate way. In addition, the timeframes can be vastly reduced by resorting to the automation of the operations, properly programmed in sequence according to a logic flowchart. To demonstrate this, 14 pan-sharpening methods are considered in this work. The main characteristics of those methods are reported below, including formulas, in view of implementing them by means of GIS tools. Those formulas are first implemented in our proposed GIS-based procedure for Pléiades images and then applied manually to the same dataset to verify their correct execution in the automatic workflow.

IHS and IHS Fast
Included in the Component Substitution techniques group [22,53], the IHS method is based on the projection of multispectral images from the Red-Green-Blue (RGB) to the Intensity-Hue-Saturation (IHS) colour space [54]. The Intensity component (I) is used to fuse the PAN data, characterised by higher spatial resolution, with the MS data, presenting lower spatial resolution, because of its similarity with the panchromatic image. According to the fusion framework called Generalized IHS (GIHS) [52], the Intensity component is supplied by:

$$I = \frac{1}{n}\sum_{k=1}^{n} MS_k \qquad (1)$$

where n represents the number of multispectral bands. The fused multispectral images $MS_{fk}$ are produced using the following formula:

$$MS_{fk} = MS_k + \delta \qquad (2)$$

where δ is the difference between PAN and I, i.e., δ = PAN − I. An interesting variation of IHS is the so-called IHS Fast (IHSF), which introduces weights for each multispectral image. In this case, Intensity is supplied by [55]:

$$I = \frac{\sum_{k=1}^{n} \varphi_k \, MS_k}{\sum_{k=1}^{n} \varphi_k} \qquad (3)$$

where $\varphi_k$ is the weight of the k-th multispectral band. Different solutions are present in the literature for weight determination, such as an empirical approach or spectral response analysis [56].
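The GIHS scheme above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's QGIS implementation; it assumes the MS stack has already been resampled to the PAN pixel size, and it normalises the IHSF weights by their sum (an assumption of this sketch):

```python
import numpy as np

def gihs_fusion(ms, pan, weights=None):
    """GIHS pan-sharpening sketch (formulas (1) and (2)).

    ms      : (n_bands, rows, cols) multispectral stack, assumed already
              resampled to the PAN grid;
    pan     : (rows, cols) panchromatic image;
    weights : optional phi_k weights for the IHS Fast variant.
    """
    if weights is None:
        intensity = ms.mean(axis=0)                    # formula (1)
    else:
        w = np.asarray(weights, dtype=float)
        intensity = np.tensordot(w, ms, axes=1) / w.sum()
    delta = pan - intensity                            # detail to inject
    return ms + delta                                  # formula (2)
```

With all weights equal, the IHSF variant reduces to the plain GIHS result.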

Brovey Transformation and Brovey Transformation Fast
Developed by an American scientist to visually increase the contrast in the low and high ends of an image's histogram and thus change the original scene's radiometry [44], the Brovey transformation (BT) normalizes the multispectral bands by dividing each of them by the synthetic panchromatic obtained from the same multispectral data. Then, the results are multiplied by the original panchromatic. The fused images are defined by the following equations [57,58]:

$$MS_{fk} = \frac{MS_k}{MS_{tot}} \cdot PAN \qquad (4)$$

where $MS_{tot}$ is the combination of the multispectral images according to the formula:

$$MS_{tot} = \frac{1}{n}\sum_{k=1}^{n} MS_k \qquad (5)$$

Additionally, for this method, weights can be introduced, so (5) is substituted by the following formula:

$$MS_{tot} = \frac{\sum_{k=1}^{n} \varphi_k \, MS_k}{\sum_{k=1}^{n} \varphi_k} \qquad (6)$$

This approach is called Brovey Transformation Fast (BTF).
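A minimal numpy sketch of the Brovey ratio, under the same assumptions as before (MS resampled to the PAN grid; the synthetic panchromatic built as the (weighted) band mean, an assumption of this sketch; a small epsilon guards against division by zero on dark pixels):

```python
import numpy as np

def brovey(ms, pan, weights=None, eps=1e-12):
    """Brovey transformation sketch.

    ms      : (n_bands, rows, cols) multispectral stack on the PAN grid;
    pan     : (rows, cols) panchromatic image;
    weights : optional phi_k weights for the BTF variant.
    """
    if weights is None:
        ms_tot = ms.mean(axis=0)                        # synthetic PAN, formula (5)
    else:
        w = np.asarray(weights, dtype=float)
        ms_tot = np.tensordot(w, ms, axes=1) / w.sum()  # BTF variant
    return ms / (ms_tot + eps) * pan                    # ratio modulation
```

When the PAN equals the synthetic panchromatic, the ratio is 1 and the bands pass through unchanged.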

Multiplicative Method
With the Multiplicative method (MLT) [59], the pan-sharpened image is attained by the formula:

$$MS_{fk} = \frac{MS_k \cdot PAN}{\mu_{PAN}} \qquad (7)$$

where $\mu_{PAN}$ is the mean of the panchromatic image.

Simple Mean Method
The Simple Mean method (SM) uses a simple mean-averaging equation for each combination of the PAN with one multispectral image [60]. Consequently, the pan-sharpened image is supplied by the formula:

$$MS_{fk} = \frac{MS_k + PAN}{2} \qquad (8)$$
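Both of these pixel-wise methods are one-liners in numpy; the sketch below again assumes the MS stack is already resampled to the PAN grid, and the multiplicative formula follows the mean-normalised form described above (an assumption of this sketch):

```python
import numpy as np

def multiplicative(ms, pan):
    """MLT sketch: dividing by the PAN mean keeps the fused values
    near the original radiometric scale."""
    return ms * pan / pan.mean()

def simple_mean(ms, pan):
    """SM sketch: pixel-wise average of each band with the PAN."""
    return (ms + pan) / 2.0
```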

Gram-Schmidt and Fast Gram-Schmidt
The Gram-Schmidt pan-sharpening method is based on the Gram-Schmidt transformation (GST), a mathematical approach that orthonormalizes a set of non-orthogonal vectors, usually in the Euclidean space $R^n$, rotating them until they are orthogonal; particularly, in the case of images, each band (panchromatic or multispectral) corresponds to one high-dimensional vector [61].
The method is well described in the Laben and Brower patent [42], based on the sequence of steps summarized below.

1. A lower spatial resolution panchromatic image is simulated from the multispectral band images.
2. GST is performed on the simulated lower spatial resolution panchromatic image and the lower spatial resolution spectral band images. Particularly, the simulated lower spatial resolution panchromatic image is used as the first band in the Gram-Schmidt process.
3. The statistics (mean and standard deviation) of the first transform band resulting from the GST are used as a reference to adjust the statistics of the higher spatial resolution panchromatic image; in this way, a modified higher spatial resolution panchromatic image is produced.
4. The modified higher spatial resolution panchromatic image (with adjusted statistics) takes the place of the first transform band resulting from the GST to produce a new set of transform bands.
5. The inverse GST is performed on the new set of transform bands to produce the enhanced spatial resolution multispectral digital image.
The simulated lower spatial resolution panchromatic image can be obtained as a linear combination of the n MS bands:

$$PAN_l = \sum_{k=1}^{n} g_k \, MS_k$$

In this formula, $g_k$ is the gain, given by:

$$g_k = \frac{cov(MS_k, PAN_l)}{var(PAN_l)}$$

where $cov(MS_k, PAN_l)$ is the covariance between the initial k-th multispectral image and the low-resolution panchromatic image; $var(PAN_l)$ is its variance. Different versions of GST are available depending on how the low-resolution panchromatic is generated. The simplest way to produce it is as the plain average of the MS bands, formula (9):

$$PAN_l = \frac{1}{n}\sum_{k=1}^{n} MS_k \qquad (9)$$

In this case, the method is named GS mode 1 (GS1). If weights are introduced to generate the low-resolution panchromatic as a weighted average of the MS bands, formula (9) is substituted by the following formula:

$$PAN_l = \frac{\sum_{k=1}^{n} \varphi_k \, MS_k}{\sum_{k=1}^{n} \varphi_k}$$

This method is referred to as Gram-Schmidt Fast (GSF).
Another possibility is to degrade the panchromatic image by applying a smoothing filter. The degraded image (D) then takes the place of the simulated lower spatial resolution panchromatic image:

$$PAN_l = D$$

This method is known as Gram-Schmidt mode 2 (GS2).
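The patent steps above can be sketched in numpy using the commonly cited detail-injection form of GS mode 1 (an equivalent reformulation, not the literal forward/inverse GST of the patent): simulate the low-resolution PAN as the band mean, match the real PAN's mean and standard deviation to it (step 3), then inject the difference into each band with the covariance-based gain. The MS stack is assumed already resampled to the PAN grid:

```python
import numpy as np

def gs_mode1(ms, pan):
    """GS1 pan-sharpening in its equivalent injection form, a sketch.

    ms  : (n_bands, rows, cols) multispectral stack on the PAN grid;
    pan : (rows, cols) panchromatic image.
    """
    intensity = ms.mean(axis=0)            # simulated low-res PAN, formula (9)
    # Step 3: adjust the PAN mean and standard deviation to the simulated image.
    pan_adj = (pan - pan.mean()) / pan.std() * intensity.std() + intensity.mean()
    detail = pan_adj - intensity
    centred_i = intensity - intensity.mean()
    var_i = intensity.var()
    fused = np.empty_like(ms)
    for k, band in enumerate(ms):
        # gain g_k = cov(MS_k, PAN_l) / var(PAN_l)
        gain = ((band - band.mean()) * centred_i).mean() / var_i
        fused[k] = band + gain * detail
    return fused
```

If the PAN already matches the simulated low-resolution image, no detail is injected and the bands are returned unchanged.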

High Pass Filter
The High Pass Filter method (HPF) was introduced by Chavez and Bowell to merge multispectral Landsat Thematic Mapper data with SPOT PAN data [62]. In the HPF method, a small high-pass spatial filter is applied to the PAN image: the result contains the high-frequency component, related mostly to spatial information, while most of the spectral information is removed; the HPF result is added, pixel by pixel, to the lower spatial resolution and higher spectral resolution data set [41].
According to the authors of [63], the high-frequency component of the PAN image can be extracted in an alternative way by applying a smoothing filter to the PAN image and subtracting the result from the PAN itself. Finally, the pan-sharpened image is obtained by the formula:

$$MS_{fk} = MS_k + g_k \,(PAN - D) \qquad (13)$$

where D is the low-pass (smoothed) version of the PAN image and the gains $g_k$ are equal to 1 in the basic HPF application.
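A numpy sketch of this smoothing-and-subtracting variant of HPF follows; the box filter is a simple stand-in for whichever low-pass kernel is chosen (an assumption of this sketch), and the MS stack is assumed resampled to the PAN grid:

```python
import numpy as np

def box_smooth(img, size=3):
    """Box smoothing filter with replicate padding, a simple stand-in
    for the low-pass kernel used to degrade the PAN."""
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    rows, cols = img.shape
    return sum(p[i:i + rows, j:j + cols]
               for i in range(size) for j in range(size)) / size ** 2

def hpf_fusion(ms, pan):
    """HPF sketch: the high-frequency PAN component (PAN minus its
    smoothed version) is added pixel by pixel to each band."""
    high = pan - box_smooth(pan)
    return ms + high
```

On a perfectly flat PAN there is no high-frequency content, so the bands pass through unchanged.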

Smoothing Filter-Based Intensity Modulation
The smoothing filter-based intensity modulation (SFIM) technique was developed by Liu to fuse a Landsat TM multispectral image with a resolution of 30 m with a SPOT panchromatic image with a resolution of 10 m covering south-east Spain [46]. This approach can be considered as a refinement of the methods of Pradines [64] and of Guo and Moore [65]; it extracts by a filtering technique the high frequencies of the SPOT image and injects them into the Landsat imagery [66].
This technique is based on the concept that, by using a ratio between a higher resolution image and its low pass-filtered (with a smoothing filter) image, spatial details can be modulated to a co-registered lower resolution multispectral image without altering its spectral properties and contrast [46]. Therefore, the gains in Equation (13) can be considered as a ratio between the k-th multispectral image and the degraded image:

$$g_k = \frac{MS_k}{D}$$

In such a way, the final formula becomes:

$$MS_{fk} = MS_k + \frac{MS_k}{D}\,(PAN - D) = \frac{MS_k \cdot PAN}{D}$$

Liu remarks that visual evaluation and statistical analysis compared with the IHS and Brovey transform techniques confirmed SFIM as the superior fusion technique for pan-sharpening. However, the authors of [66] propose a more careful analysis of this aspect using the appropriate protocol presented in [67].
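The SFIM ratio is equally compact in numpy. As in the previous sketches, the box filter stands in for the smoothing filter (an assumption), the MS stack is assumed resampled to the PAN grid, and a small epsilon guards the division:

```python
import numpy as np

def box_smooth(img, size=3):
    # simple low-pass filter standing in for the smoothing filter
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    rows, cols = img.shape
    return sum(p[i:i + rows, j:j + cols]
               for i in range(size) for j in range(size)) / size ** 2

def sfim(ms, pan, eps=1e-12):
    """SFIM sketch: MS_k * PAN / D, with D the smoothed PAN; the ratio
    PAN/D modulates spatial detail while preserving spectral properties."""
    d = box_smooth(pan)
    return ms * pan / (d + eps)
```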

Modulation Transfer Function-Generalized Laplacian Pyramid
The Laplacian Pyramid (LP) was first proposed by Burt and Adelson [68] for compact image representation. It allows the decomposition of an image using the Gaussian Pyramid (GP), which is a multiresolution image representation obtained through a recursive reduction (low-pass filtering and decimation) of the image data set [69].
To degrade the panchromatic image for pan-sharpening, a generalized LP based on Gaussian modulation function can be used [70,71].
The resulting degraded panchromatic image (D′) can be used to generate the multispectral pan-sharpened images according to the following formula:

$$MS_{fk} = MS_k + g_k \,(PAN - D')$$

Assuming $g_k = 1$, this method is known as Modulation Transfer Function-Generalized Laplacian Pyramid (MTF-GLP).
Two variants of this method are applied in this work, as gains are taken into account. If gains are calculated as:

$$g_k = \frac{MS_k}{D'}$$

the method is named Modulation Transfer Function-Generalized Laplacian Pyramid-High Pass Modulation (MTF-GLP-HPM) [63,72], and the final equation is:

$$MS_{fk} = MS_k + \frac{MS_k}{D'}\,(PAN - D') = \frac{MS_k \cdot PAN}{D'}$$

If gains are calculated as:

$$g_k = \frac{cov(MS_k, D')}{var(D')}$$

the method is named Modulation Transfer Function-Generalized Laplacian Pyramid-Context Based Decision (MTF-GLP-CBD) [37].
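The three gain choices share the same detail-injection skeleton, which the following numpy sketch makes explicit. The box filter here is only a crude stand-in for the MTF-matched Gaussian low-pass of the GLP (an assumption of this sketch), and the CBD gain is computed globally rather than per local context:

```python
import numpy as np

def box_smooth(img, size=3):
    # crude stand-in for the MTF-matched Gaussian low-pass of the GLP
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    rows, cols = img.shape
    return sum(p[i:i + rows, j:j + cols]
               for i in range(size) for j in range(size)) / size ** 2

def mtf_glp(ms, pan, mode="plain", eps=1e-12):
    """Detail injection with the three gain choices described above."""
    d = box_smooth(pan)              # degraded panchromatic D'
    detail = pan - d
    if mode == "plain":              # g_k = 1 (MTF-GLP)
        return ms + detail
    if mode == "hpm":                # g_k = MS_k / D' (MTF-GLP-HPM)
        return ms + ms / (d + eps) * detail
    if mode == "cbd":                # g_k = cov(MS_k, D') / var(D') (MTF-GLP-CBD)
        centred_d = d - d.mean()
        var_d = d.var()
        fused = np.empty_like(ms)
        for k, band in enumerate(ms):
            gain = ((band - band.mean()) * centred_d).mean() / var_d
            fused[k] = band + gain * detail
        return fused
    raise ValueError("mode must be 'plain', 'hpm' or 'cbd'")
```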

Quality Indices
The accuracy of the pan-sharpened images is not easy to determine because a reference image at the same resolution as the fused one does not exist [18]. Several indices are available to evaluate the quality of the pan-sharpened data, and they can be distinguished into two groups based on their ability to assess spectral or spatial fidelity. The dissimilarity between the fused image and the expanded MS image represents the spectral distortion introduced by the pan-sharpening [73]. The similarity between the shape of the objects included in the fused image and the corresponding ones in the panchromatic image represents the preservation of the spatial details guaranteed by the pan-sharpening. In this study, spectral indices, such as the Correlation Coefficient (CC), Universal Image Quality Index (UIQI), Root-Mean-Square Error (RMSE), Relative Average Spectral Error (RASE) and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), are calculated. In addition, spatial indices, namely the Spatial Correlation Coefficient (SCC) and Zhou Index (ZI), are used.
Although indices support the evaluation of pan-sharpening algorithm performance and facilitate the comparison between multiple options, a visual inspection of the resulting images is useful to assess the colour preservation quality and the spatial improvements in object representation [74]. Consequently, in this study, a visual analysis of the pan-sharpened multispectral images derived from the 14-method implementation in GIS is conducted, and 7 indices are calculated to support the evaluation process. A brief overview of the adopted indices, including formulas, is reported below.
(1) Correlation Coefficient (CC) measures the correlation between the original multispectral image ($MS_k$) and the fused image ($MS_{fk}$):

$$CC = \frac{cov(MS_k, MS_{fk})}{\sigma_{MS_k}\,\sigma_{MS_{fk}}}$$

Values close to one indicate that $MS_k$ and $MS_{fk}$ are highly correlated [75,76].
(2) Universal Image Quality Index (UIQI), proposed by [77], is a product of three components:

$$UIQI = \frac{\sigma_{xy}}{\sigma_x \sigma_y}\cdot\frac{2\,\mu_x\,\mu_y}{\mu_x^2 + \mu_y^2}\cdot\frac{2\,\sigma_x\,\sigma_y}{\sigma_x^2 + \sigma_y^2} \qquad (21)$$

where $x = MS_k$ and $y = MS_{fk}$; $\mu_x$ and $\mu_y$ are the mean values of $MS_k$ and $MS_{fk}$; $\sigma_x$ and $\sigma_y$ are the corresponding standard deviations; $\sigma_{xy}$ is the covariance. The first factor in Equation (21) is the CC between the two considered images; the second measures the mean shift between the original and fused images; the third evaluates changes in the contrast between the images [18]. The dynamic range of UIQI is [−1, 1]: values close to 1 indicate a good performance of the pan-sharpening application [78,79]; the best value, one, is achieved if and only if the tested image is equal to the reference image for all pixels [80].
(3) Root-Mean-Square Error (RMSE) is a frequently used measure of the similarity between each original image and the corresponding fused image [81], defined as Equation (23):

$$RMSE_k = \sqrt{\frac{1}{M \cdot N}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(MS_k(i,j) - MS_{fk}(i,j)\big)^2} \qquad (23)$$

where $MS_k(i,j)$ represents the pixel value in the original (reference) image; $MS_{fk}(i,j)$ represents the pixel value in the corresponding fused image; i and j identify the pixel position in each image; M and N are the number of rows and the number of columns of each image, respectively. The smaller the RMSE value, the better the correspondence between the images.
(4) Relative Average Spectral Error (RASE) characterizes the average performance of a method in the considered spectral bands [82]. This index is calculated including all multispectral images by the following formula [83,84]:

$$RASE = \frac{100}{m}\sqrt{\frac{1}{n}\sum_{k=1}^{n} RMSE_k^2}$$

where m is the mean value of the Brightness Values (BVs) of the n input images ($MS_k$).

(5) Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) quantifies the spectral quality of the different fused images by means of the following formula [67]:

$$ERGAS = 100\,\frac{h}{l}\sqrt{\frac{1}{n}\sum_{k=1}^{n}\left(\frac{RMSE_k}{\mu_k}\right)^2}$$

where h is the spatial resolution of the reference image (PAN); l is the spatial resolution of the original multispectral images ($MS_k$); n is the number of spectral bands; $RMSE_k$ is the Root-Mean-Square Error for the k-th band between the fused ($MS_{fk}$) and original ($MS_k$) bands; $\mu_k$ is the mean of the k-th band of the original image.
Low values of ERGAS suggest a likeness between the original and fused bands.
The spatial indices (SCC and ZI) are based on filtering operations: a high-pass filter is applied to the PAN image and to each fused band, and the correlation between the resulting high-frequency components is evaluated. The results of the filtering operations are the High Pass PAN (HPP) and the corresponding High Pass fused bands.
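The spectral indices above reduce to a few numpy expressions. This is a minimal sketch for illustration, assuming the expanded original bands serve as the reference and that, for Pléiades, h = 0.5 m and l = 2 m:

```python
import numpy as np

def cc(x, y):
    """Correlation Coefficient between two bands."""
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

def uiqi(x, y):
    """Universal Image Quality Index: correlation x luminance x contrast."""
    sxy = ((x - x.mean()) * (y - y.mean())).mean()
    return (4 * sxy * x.mean() * y.mean()) / (
        (x.var() + y.var()) * (x.mean() ** 2 + y.mean() ** 2))

def rmse(x, y):
    """Root-Mean-Square Error between two bands."""
    return np.sqrt(((x - y) ** 2).mean())

def rase(ms, fused):
    """Relative Average Spectral Error over all bands."""
    m = ms.mean()
    return 100.0 / m * np.sqrt(
        np.mean([rmse(a, b) ** 2 for a, b in zip(ms, fused)]))

def ergas(ms, fused, h, l):
    """ERGAS with PAN pixel size h and MS pixel size l."""
    return 100.0 * h / l * np.sqrt(
        np.mean([(rmse(a, b) / a.mean()) ** 2 for a, b in zip(ms, fused)]))
```

As a sanity check, comparing any image with itself yields CC = UIQI = 1 and RMSE = RASE = ERGAS = 0.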

Dataset: Pléiades Images
The Pléiades constellation is composed of two VHR optical Earth-imaging satellites named Pléiades-HR 1A (launched in December 2011) and Pléiades-HR 1B (launched in December 2012). These satellites of CNES (Centre National d'Études Spatiales), the Space Agency of France, provide coverage of the Earth's surface with a repeat cycle of 26 days. Designed for civil and military users, the Pléiades system is suitable for emergency response and change detection [87]. Pléiades imagery consists of panchromatic and multispectral data. The former present a spatial resolution of 0.50 m and a spectral range of 0.480-0.830 µm. The latter have a resolution of 2.00 m and include four multispectral bands: Blue (0.430-0.550 µm), Green (0.490-0.610 µm), Red (0.600-0.720 µm) and Near Infrared (0.750-0.950 µm) [88]. The spectral response associated with the Pléiades MS and PAN sensors is shown in Figure 1.

QGIS is a free and open-source GIS software package licensed under the GNU General Public License. It is a product of the Open Source Geospatial Foundation (OSGeo), and version 1.0 was released in 2009. Built in C++, it uses Python for scripting and plugins. It is commonly described as user-friendly, fully functional and relatively lightweight. It runs on Windows, Linux, Unix, Mac OSX and Android and integrates with other OSGeo tools (GRASS, Saga, GDAL, etc.) [89].
QGIS has a built-in graphical modeller that helps to define a workflow of two or more operations and run it with a single invocation. GIS workflows typically involve many steps, each generating intermediate output that is used by the next step. Using the graphical modeller, it is not necessary to run the entire process again manually when the input data or a parameter changes. In other terms, the model is executed as a single algorithm [90].
The following steps are required to create a model.

1. Definition of the necessary inputs. The inputs are added to the parameters window, so the user can set their values when executing the model. Because the model itself is an algorithm, the parameters window is generated automatically, as occurs with all the algorithms available in the processing framework.
2. Definition of the workflow. The workflow is defined by adding algorithms and selecting how they use the input data of the model or the outputs created by other algorithms already in the model.
For example, to implement the IHS pan-sharpening method for Pléiades images, the workflow reported in Figure 3 has been created. All formulas reported in the previous subparagraph have been implemented using the algorithm named r.mapcalc [91], included in GRASS. Particularly, the workflow highlights that the user is asked to choose the images to which the IHS method is applied, and then the automatic procedure begins: using the algorithm r.mapcalc, the routine first implements formula (1) and then formula (2), finally generating the MS fused images.

Procedure Steps
Once the pan-sharpening algorithms and the calculation of the indices have been implemented in QGIS, the procedure is fully automated by following the steps below:

1. Selection of the images to use for pan-sharpening from the Pléiades image set;
2. Application of the selected pan-sharpening methods among the 14 available ones;
3. Visualization of the fusion products;
4. Calculation of 7 different indices to evaluate the quality of the products;
5. Comparison of the quality indices, through an evaluation process based on the attribution of different weights, to detect the best-fused products.
The above-described steps are reported in the flow-chart shown in Figure 4. The process is created for the Pléiades dataset, but it is repeatable for any set of satellite images.
The weights assigned to each index can be freely chosen by the user based on the required needs. Finally, the best performing method is detected by multi-criteria analysis [92].
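The weighted multi-criteria step can be illustrated with a short Python sketch. The method names are from this study, but the rank scores and the 0.5/0.5 weights below are invented placeholders for illustration only, not the paper's measured results:

```python
# Hypothetical mean rank scores (1 = best, 14 = worst) for three of the
# methods; the numbers are invented for illustration only.
scores = {
    "MTF-GLP-CBD": {"spectral": 1.0, "spatial": 9.0},
    "GSF":         {"spectral": 3.0, "spatial": 2.0},
    "BT":          {"spectral": 12.0, "spatial": 6.0},
}

# User-chosen weights for the mean spectral and mean spatial indicators.
w_spectral, w_spatial = 0.5, 0.5

overall = {name: w_spectral * s["spectral"] + w_spatial * s["spatial"]
           for name, s in scores.items()}

# Lower weighted mean rank = better overall method.
ranking = sorted(overall, key=overall.get)
```

Changing the two weights lets the user bias the final ranking towards spectral preservation or spatial enhancement, which is exactly the trade-off discussed above.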
As the purpose of this manuscript is not to compare different pan-sharpening methods but to automate the image fusion process and support the choice of the best pan-sharpening method, we decided to repeat all operations manually in QGIS to have images produced in the usual way to compare with the automatic outputs. In this way, the accuracy of the proposed approach is evaluated.

Results and Discussion
Considering that there are 4 starting multispectral images and 14 implemented methods, our proposal generates 56 pan-sharpened images. The products obtained from the application of each method are quantitatively evaluated using the quality indices mentioned in Section 2.2. The results of these metrics are shown in the following tables (Tables 1-14), one for each method. Table 15 is a comparative table of all the methods (note: for each index, only the mean value is reported for an easier synoptic view). The results highlight some significant aspects, as reported below. Even if the spectral indices do not all supply the same classification, underlying trends are evident. For example, the level of correlation between each pan-sharpened image and the corresponding original one is high in many cases, testifying to the effectiveness of the analysed methods. The lowest values are usually obtained for the blue band as a consequence of the low level of overlap between the PAN sensor's spectral response and the Blue sensor's spectral response (Figure 1). In addition, all spectral indices identify the same best performing method (MTF-GLP-CBD); however, they do not identify the same method as the weakest (e.g., BT for CC and MLT for ERGAS).
On the other hand, spatial indices provide a ranking that is in some respects different from that given by spectral indices. In fact, the best performing method in terms of spectral fidelity, MTF-GLP-CBD, supplies poor results in terms of spatial fidelity. As testified by other studies [44,93], the improvement of the spatial quality of one image means the deterioration of the spectral quality and vice versa.
In light of the above considerations, a compromise must be sought in terms of spectral preservation and spatial enhancement to ensure good pan-sharpened products.
To better compare the results, we decided to assign a score to each method. In particular, as a first step, the methods are ranked with respect to each indicator, assigning a score from 1 to 14. The spectral indicator scores are then averaged, and likewise the spatial ones. Figure 5 shows the results obtained by each method taking into account the mean spectral indicator and the mean spatial indicator. As the last step, a general ranking can be obtained by introducing weights for the Mean Spectral Indicator and the Mean Spatial Indicator. For example, using the same weight (0.5) for both indicators, the resulting classification is shown in Table 16.

Table 16. Ranking of the pan-sharpening methods.

To give an idea of the quality of the derived images, Figure 6 shows a zoom of the RGB composition of the initial multispectral images as well as a zoom of the RGB composition of the multispectral pan-sharpened images obtained from each method. Comparing these results by means of visual inspection, the increase in geometric resolution as well as the reliability of the obtained colours are evident. In some cases, e.g., for the images resulting from the GSF, the colours are correctly preserved, and the details injected by the panchromatic data are impressive. This result confirms the high performance certified for this method by our classification based on the multi-criteria approach. In other cases, however, e.g., for the images deriving from the BTF or HPF, there is a lower level of fidelity to the original gradations of the colours. Finally, as in the case of the SFIM-derived images, there can be a loss of spatial detail that is not opportunely injected into the multispectral products.
All the operations conducted in an automated way by means of GIS tools were also executed manually by the authors to test the correctness of the algorithms implemented in the graphical modeller. In particular, for each method, the differences between each multispectral pan-sharpened image obtained in an automated way and the corresponding one obtained manually were calculated. The residuals were zero in every case: the two products coincide perfectly, so the proposed approach is validated and can be used as a valuable tool to deal with a critical aspect of pan-sharpening. In fact, according to [44], there is no single method or processing chain for image fusion: to obtain the best results, a good understanding of the principles of the fusion operations and, especially, good knowledge of the data characteristics are required. Since each method can perform differently in different situations, applying several algorithms and comparing the results is preferable. This approach is very demanding and time-consuming, so the automation of pan-sharpening methods using GIS basic functions proposed in this work enables the user to achieve the best results in a rapid, easy and effective way.
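The consistency check between the automated and manual products reduces to a pixel-wise difference of the two rasters. A minimal sketch, with small arrays standing in for bands loaded from GeoTIFFs (e.g. via GDAL or rasterio):

```python
import numpy as np

# Toy stand-ins for one pan-sharpened band produced two ways:
# by the automated graphical-modeller workflow and by manual processing.
automated = np.array([[120.0, 131.5],
                      [98.2, 140.0]])
manual = np.array([[120.0, 131.5],
                   [98.2, 140.0]])

# Pixel-wise residuals; the paper reports these were zero for every method.
residuals = automated - manual
print("max |residual|:", np.abs(residuals).max())
```

A maximum absolute residual of zero over every band and method confirms that the modeller implements exactly the same chain of operations as the manual procedure.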

Conclusions
The work presented here analyses the possibility of applying pan-sharpening methods to Pléiades images by means of GIS basic functions, without using specific pan-sharpening tools. The free and open-source GIS software QGIS was chosen for all the applications carried out. In total, 14 methods are automatically applied and compared using quality indices and multi-criteria analysis.
The results demonstrate that transferring the higher geometric resolution of panchromatic data to multispectral data does not require specific tools, because it can be implemented using filters and Map Algebra functions. QGIS supplies the Raster calculator, a simple but powerful tool that supports the operations fundamental to pan-sharpening, i.e., direct and inverse transformations between RGB and IHS space, synthetic image production, and covariance estimation. This tool is also useful for calculating indices for a quantitative evaluation of the quality of the resulting pan-sharpened images. The whole process can be automated using a graphical modeller, simplifying the user's task to the selection of input data.
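As an illustration of the kind of band arithmetic the Raster calculator performs, the sketch below implements a simple IHS-style substitution with additive injection, one of the transformation families the paper discusses. The arrays are toy stand-ins for co-registered rasters resampled to the PAN resolution; this is a generic sketch, not the exact expressions used in the modeller:

```python
import numpy as np

# Toy co-registered bands at the PAN pixel size (e.g. 0.5 m for Pleiades).
rng = np.random.default_rng(0)
R = rng.uniform(0, 255, (4, 4))
G = rng.uniform(0, 255, (4, 4))
B = rng.uniform(0, 255, (4, 4))
PAN = rng.uniform(0, 255, (4, 4))

# Intensity component of the RGB triplet (simple-mean IHS model).
I = (R + G + B) / 3.0

# Detail injection: replace the intensity with the panchromatic band by
# adding the difference PAN - I to each band (a Map Algebra expression).
delta = PAN - I
R_ps, G_ps, B_ps = R + delta, G + delta, B + delta

# The intensity of the fused bands now equals PAN exactly.
print(np.allclose((R_ps + G_ps + B_ps) / 3.0, PAN))
```

Each line maps directly onto a Raster calculator expression, which is why the whole chain can be reproduced with GIS basic functions alone.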
Since the best performing method cannot be fixed in absolute terms but depends on the characteristics of the scene, several algorithms must be compared each time to select the one providing suitable results for the defined purpose. When evaluating the quality indices by means of multi-criteria analysis, weights can be introduced in accordance with the user's needs, i.e., prioritising spatial requirements over spectral ones or vice versa. In other words, our proposal aids the user both by executing the pan-sharpening methods automatically and by supporting the choice of the best fused products. For this reason, the calculation of the quality indices and the comparison of their values are both necessary.
Concerning future developments of this work, further applications will focus on integrating other pan-sharpening methods in order to increase the number of options available to the user. In addition, we will concentrate on facilitating the choice of the best performing method by supplying further results automatically, e.g., feature extraction in sample areas to compare the extracted features with objects of known shape for accuracy estimation.
Author Contributions: C.P. has conceived the article and designed the experiments; E.A. and A.V. conducted the bibliographic research; C.P. organized data collection and supervised the GIS applications; E.A. and A.V. conducted the experiments on pan-sharpening applications; E.A. conducted the quality tests; C.P. designed the flow-charts of the graphical modeller; A.V. supervised the algorithm implementation; all authors took part in the result analysis and writing the paper. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the University of Naples "Parthenope".