Article

Multi-Sensor Fusion: A Simulation Approach to Pansharpening Aerial and Satellite Images

Faculty of Civil Engineering and Geodesy, Military University of Technology, 00-908 Warsaw, Poland
* Author to whom correspondence should be addressed.
Sensors 2020, 20(24), 7100; https://doi.org/10.3390/s20247100
Submission received: 26 October 2020 / Revised: 8 December 2020 / Accepted: 9 December 2020 / Published: 11 December 2020
(This article belongs to the Section Sensing and Imaging)

Abstract

The growing demand for high-quality imaging data and the current technological limitations of imaging sensors require the development of techniques that combine data from different platforms in order to obtain comprehensive products for detailed studies of the environment. To meet the needs of modern remote sensing, the authors present an innovative methodology of combining multispectral aerial and satellite imagery. The methodology is based on the simulation of a new spectral band with a high spatial resolution which, when used in the pansharpening process, yields an enhanced image with a higher spectral quality compared to the original panchromatic band. This is important because spectral quality determines the further processing of the image, including segmentation and classification. The article presents a methodology of simulating new high-spatial-resolution images taking into account the spectral characteristics of the photographed types of land cover. The article focuses on natural objects such as forests, meadows, or bare soils. Aerial panchromatic and multispectral images acquired with a digital mapping camera (DMC) II 230 and satellite multispectral images acquired with the S2A sensor of the Sentinel-2 satellite were used in the study. Cloudless data with a minimal time shift were obtained. Spectral quality analysis of the generated enhanced images was performed using a method known as “consistency” or “Wald’s protocol first property”. The resulting spectral quality values clearly indicate less spectral distortion of the images enhanced by the new methodology compared to using a traditional approach to the pansharpening process.

1. Introduction

Image data are a rich source of information about the surface of the Earth. They are widely used in many disciplines, from agriculture to state defense policy [1]. However, a single image is usually not enough to make a comprehensive analysis of the land cover. Due to design limitations, sensors operating at both satellite and aerial photography altitudes do not yield data with a high spatial and spectral resolution at the same time, or the cost of obtaining such data (particularly aerial) is too high. In order to carry out detailed studies of the Earth’s surface, it is necessary to have images of both high spectral and spatial quality [2]. Higher-quality data enable more accurate spatial analysis, thus improving the decision-making process. For long-term analyses, archival aerial photos play a crucial role. These are usually single-band or RGB images (red, green, and blue bands) that often are the only source of information on the surface of the Earth. Their high level of detail makes them extremely valuable for land-cover analysis. Unfortunately, just like aerial image data from current photogrammetric missions, they are characterized by a much lower spectral resolution compared to satellite data.
The solution to the problem of obtaining high-quality image data from one platform is the process of fusing images from different sensors [3]. Multi-sensor data fusion is the process of integrating data from different sensors to obtain a composite image which, due to its greater information capacity, is conducive to analyses of the photographed land cover [4]. The resulting enhanced images are used in the main remote sensing processes: object identification, land-cover classification, and change detection. Pansharpening methods are widely used for the fusion of remote sensing images. The goal of pansharpening is to integrate high-spatial-resolution data with high-spectral-resolution data. The output image of this process would ideally have the same spatial characteristics as the high-spatial-resolution input image and the same spectral characteristics as the high-spectral-resolution input image. However, it is known that the fusion process results in a partial loss of this information.
When data from different sensors are combined, more factors adversely affect the quality of the sharpened image than when integrating data from the same platform. There are four factors with a fundamental impact on the quality of the enhanced images, and all of them are critical in the process of fusing satellite and aerial images. Firstly, when working with data from different sensors, it is highly probable that the images will be acquired at different times. Due to this fact, changes may occur in the studied area related to the vegetation period of plants, illumination conditions, natural disasters, or anthropogenic activity, which in turn increases the spectral distortion of the enhanced image [2]. It has been demonstrated that, when combining high-resolution UAV (Unmanned Aerial Vehicle) and multispectral satellite data, the acquisition time difference for natural areas should not exceed two weeks [5]. Furthermore, the resolution ratio (RR), i.e., the ratio between the GSD (Ground Sampling Distance) values of the input data, should be taken into account. For data from the same platform, this ratio should not exceed 1:5 [6].
In contrast, for data from different sensors, the ratio may be much higher; e.g., for aerial panchromatic data and multispectral satellite data (Sentinel-2), it may amount to 1:70. However, as the authors’ previous research has shown [7,8,9], with a ratio higher than 1:5, it is still possible to obtain a product that improves terrain analysis compared to the interpretation of two separate products in the form of an aerial photo with a high spatial resolution and satellite imagery with a high spectral resolution. Another aspect is the co-registration of the data. The results of research conducted by Blanc et al. [10] indicated that a geometric distortion with a standard deviation of only 0.1 pixels already has a noticeable influence on the quality of the enhanced image. The last factor is the mutual coverage of the input spectral ranges in the pansharpening process. A larger spectral overlap between the panchromatic and multispectral bands results in lower spectral distortion of the sharpened image.
No known fusion method can eliminate the impact of the factors mentioned above. The authors did not find any known universal method that would be appropriate regardless of the type of input data and the use of enhanced images [1,2,11]. Compared to research conducted on data from the same altitude, very few studies attempted to find a method of integrating images from different altitudes in order to produce enhanced images of the highest possible spectral quality while maintaining high spatial resolution. The high quality of the enhanced image is extremely important as it favors advanced qualitative and quantitative analyses of the environment, including land-cover classification [2,12]. For this purpose, methods of combining UAV and satellite optical data [5,8,13,14], methods of integrating data obtained from aircraft with satellite data [15,16], or methods of combining optical and radar satellite data [17,18,19,20] have been proposed. In their studies [7,21,22], the authors proved the increase in the interpretation potential of archival single-band aerial photographs as a result of combining these data with archival satellite images.
All of the mentioned studies focused on the issue of increasing the interpretation or classification potential of the enhanced image through an improvement in its spectral quality. However, the studies were carried out with reference to the entire obtained image showing different types of land cover or fragments of the image on which a group of objects with similar features—e.g., natural and artificial objects—was photographed, without taking into account the individual features of the objects. In [9], the authors proved that a panchromatic image, which is appropriate for maintaining high spectral quality in the context of the entire analyzed image, is not always appropriate for best maintaining the spectral reflectance characteristics of the individual types of the photographed land cover. This article describes the research carried out to solve this problem.

2. Purpose of the Study

The purpose of the study was to develop a methodology for increasing the spectral quality of images enhanced by the fusion of data from different sensors. The authors aimed to obtain, in the process of pansharpening satellite images and aerial photographs, a resulting product of higher spectral quality than that obtained using the original aerial panchromatic band as the high-spatial-resolution data. Satellite images constituted the data of high spectral resolution, and aerial photographs provided the data of high spatial resolution. The purpose of the study was achieved by developing an innovative method which involved simulating a new image with a spatial resolution equal to that of the original aerial panchromatic image.
When the high-resolution panchromatic band is available, the choice of pansharpening method is one of the most important issues. However, when a high-resolution RGB image is integrated with multispectral data and no panchromatic band is available, the simulation of the panchromatic band is a crucial step before a data fusion algorithm can be applied. Panchromatic channel simulation is therefore a common problem. One of the simplest methods of panchromatic band simulation is averaging the bands of the high-resolution multispectral image, i.e., the R, G, B, and NIR (near-infrared) bands [23]. When combining different data types, calculating the arithmetic mean of all multispectral or hyperspectral bands covering the spectrum of the available multispectral bands is common practice [24]. Another approach is based on the separate application of simulated high-resolution bands for the visible and near-infrared ranges [25]. Other studies exploited the fact that the spectral reflectance coefficients of similar terrain objects are practically invariable in two spectral ranges, and the simulation of the missing high-resolution bands was based on that relationship [26].
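To make the simplest of these approaches concrete, the sketch below simulates a panchromatic band as the per-pixel mean of the high-resolution multispectral bands; the function name, the equal weighting, and the example band list are illustrative assumptions rather than the exact procedure of any particular study.

```python
import numpy as np

def simulate_pan_by_averaging(bands):
    """Simulate a panchromatic band as the per-pixel mean of the available
    high-resolution multispectral bands (e.g., R, G, B, and NIR)."""
    stack = np.stack([np.asarray(b, dtype=np.float64) for b in bands], axis=0)
    return stack.mean(axis=0)

# Hypothetical usage with co-registered arrays of identical shape:
# pan_sim = simulate_pan_by_averaging([red, green, blue, nir])
```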
The novelty presented herein is that the functional dependencies necessary to generate a new high-spatial-resolution band were developed individually for each object, i.e., for each type of land cover identified in the image data (aerial and satellite). These functional dependencies express the relationship between the aerial panchromatic image and the aerial multispectral images: red, green, and blue. The resulting new high-spatial-resolution image is combined with the multispectral satellite imagery using a pansharpening method. It was expected that the image generated via this fusion process would have less spectral distortion than the image sharpened by integrating the original aerial panchromatic image with the same satellite data. The proposed new method of integrating data from different sensors not only yields images of higher spectral quality, but also solves the problem of some platforms, e.g., Sentinel-2, not recording a panchromatic band. It should also be noted that the presented method applies not only to aerial multispectral data acquired during current photogrammetric missions, but also to archival aerial multispectral images obtained in the red, green, and blue bands, as long as appropriate archival satellite multispectral imagery is available.

3. Data and Preprocessing

This article used imagery data showing the eastern part of the Podlaskie province of Poland, covering the town of Michałowo and the areas located to the northwest of the town. The study area was selected due to the availability of aerial images and free-of-charge satellite images with a low time shift, as well as the high variety of the forms of land cover (Figure 1).
This study used panchromatic and multispectral aerial images (in the blue, green, and red ranges) recorded with a digital mapping camera (DMC) II 230 provided by MGGP Aero and multispectral satellite images recorded by the Sentinel-2 satellite, retrieved from https://scihub.copernicus.eu. The digital mapping camera (DMC) is a digital large-format camera equipped with five nadir-looking camera heads. It includes four multispectral cameras in the red, green, blue, and near-infrared ranges and one high-resolution panchromatic camera head (Table 1). Each multispectral camera includes a CCD (Charge Coupled Device) with a 7.2 µm pixel size and a 45 mm focal length. The panchromatic camera has a CCD with a 5.6 µm pixel size and a 92 mm focal length [27,28]. The multispectral medium-resolution source of imagery data was the European Union Copernicus satellite Sentinel-2 [29,30]. The multispectral sensor mounted on the Sentinel-2 platform provides 10 m, 20 m, and 60 m multispectral data in a wide range of the electromagnetic spectrum, from the visible range to short-wavelength infrared (SWIR). The Sentinel-2 constellation provides optical imagery of the worldwide land surface with a 5-day revisit period [31]. The aerial data were acquired on 20 August 2016, while the satellite imagery was recorded on 28 August 2016. The spatial resolution of the aerial images was 0.3 m. Four satellite bands with a spatial resolution of 10 m (bands 2–4 and 8 of Sentinel-2) and six bands with a resolution of 20 m (bands 5–7, 8a, and 11–12 of Sentinel-2) were used in the study.
Table 1 presents the spectral sensitivity characteristics of the aerial sensor (DMC II 230) and the satellite sensor (S2A) on the Sentinel-2 platform, highlighting the differences in the sensitivity of these sensors.
Both satellite and aerial data were geometrically corrected. The geometric correction process included orthorectification, spatial registration in the UTM/WGS84 (Universal Transverse Mercator/World Geodetic System ’84) projection, and image co-registration. Satellite images carrying information on DN (digital number) values were calibrated using the coefficients provided by the manufacturer to obtain top-of-atmosphere (TOA) reflectance values. These images were then subjected to atmospheric correction with the Quick Atmospheric Correction algorithm [32].
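As an illustration of the radiometric step only, a minimal sketch of the DN-to-TOA-reflectance conversion is given below; it assumes the Sentinel-2 Level-1C convention of a single quantification value (10000) taken from the product metadata, and it leaves the atmospheric correction to dedicated software such as the QUAC implementation mentioned above.

```python
import numpy as np

def dn_to_toa_reflectance(dn, quantification_value=10000.0):
    """Convert Sentinel-2 L1C digital numbers to top-of-atmosphere reflectance
    by dividing by the quantification value reported in the product metadata."""
    return np.asarray(dn, dtype=np.float64) / quantification_value

# Example: a digital number of 1250 corresponds to a TOA reflectance of 0.125.
```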

4. Methodology

The study consisted of two main stages (Figure 2). The first was a test stage, which enabled the development of functional relationships between the aerial panchromatic band (PAN_K) and the aerial multispectral bands (R_K, G_K, and B_K) for each studied class of land cover. These relationships were determined for several manually selected samples of the land-cover types photographed by the aerial sensor. Natural objects (forests, meadows, fields, and bare soils) were selected for the study (Appendix A). The surface area of each sample was not less than 200 m² (not less than 2800 pixels) [33,34,35]. The mathematical functions describing the relationships between the pixel values of the two images, PAN_K and the individual MS_K bands, were determined using regression models (Equations (1)–(3)).
$\mathrm{PAN}_K = f_R(R_K)$, (1)
$\mathrm{PAN}_K = f_G(G_K)$, (2)
$\mathrm{PAN}_K = f_B(B_K)$, (3)
where K is the type of land cover (K = 1, …, N).
The parameters of the regression models were estimated using the least-squares method. The regression model describing the relationships between the empirical data was selected by evaluating the coefficient of determination (R²).
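A minimal sketch of this per-class regression step (Equations (1)–(3)) is shown below. The candidate polynomial degrees, the value-based 2-sigma filtering (one possible reading of the standard-deviation filtering described in Section 5), and the function names are assumptions made for illustration; they are not the authors' exact settings.

```python
import numpy as np

def fit_class_relationship(ms_band, pan_band, degrees=(1, 2, 3), sigma=2.0):
    """Fit PAN_K = f(band_K) for one land-cover class (cf. Equations (1)-(3)).

    ms_band, pan_band : 1-D arrays of co-registered pixel values sampled from
    one class. Returns the best model (highest R^2) and its R^2 value."""
    x = np.asarray(ms_band, dtype=np.float64)
    y = np.asarray(pan_band, dtype=np.float64)

    # Reject pixels far from the sample mean in either band before fitting.
    keep = (np.abs(x - x.mean()) <= sigma * x.std()) & \
           (np.abs(y - y.mean()) <= sigma * y.std())
    x, y = x[keep], y[keep]

    best_model, best_r2 = None, -np.inf
    for deg in degrees:
        coeffs = np.polyfit(x, y, deg)          # least-squares estimation
        pred = np.polyval(coeffs, x)
        ss_res = np.sum((y - pred) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot              # coefficient of determination
        if r2 > best_r2:
            best_model, best_r2 = np.poly1d(coeffs), r2
    return best_model, best_r2

# Hypothetical usage for one class, e.g., forest:
# f_R, r2 = fit_class_relationship(red_forest_pixels, pan_forest_pixels)
```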
In the second stage, new high-spatial-resolution bands were simulated based on the previously developed functional relationships. The process of combining high-spatial-resolution data with high-spectral-resolution data was then carried out, and the spectral quality of the resulting enhanced images was evaluated. Work started with the detection of objects belonging to the same land-cover classes that were used for the tests but located at different points in the images. Isolation of individual classes of objects was based on the analysis of texture and of the digital numbers of pixels. Objects for which no changes were observed in the time between the acquisition of the aerial and satellite images were selected for the study. In this way, sections of aerial and satellite images with surface areas of at least 7000 m² were manually obtained (Figure 3).
The size of these sections of images was significantly limited by the dimensions of the fields of land. Using an original approach, new bands with high spatial resolution were simulated separately for each class of land cover. For this purpose, the aerial multispectral image (red, green, and blue bands) and the formulas established within the first stage were applied. The new image was generated through three procedures. The first consisted of summing the three components, each of which expressed the pixel values of the aerial panchromatic band as a function of the pixel values of one of the aerial multispectral bands, i.e., red, green, or blue (Equation (4)). The second procedure averaged these three components (Equation (5)), and the third weighted the components using the coefficients applied by the National Television System Committee (NTSC) in the YUV color space (Equation (6)). The values of the weights reflect the perception capacity of the human eye [36].
$\mathrm{SPAN}_K = f_R(R_K) + f_G(G_K) + f_B(B_K)$. (4)
$\mathrm{SPAN}_{Km} = \left[ f_R(R_K) + f_G(G_K) + f_B(B_K) \right] / 3$. (5)
$\mathrm{SPAN}_{Kn} = 0.299 \cdot f_R(R_K) + 0.587 \cdot f_G(G_K) + 0.114 \cdot f_B(B_K)$. (6)
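Assuming f_R, f_G, and f_B are the fitted per-class functions (e.g., np.poly1d objects from the previous sketch) applied element-wise to the aerial red, green, and blue arrays, the three simulation variants of Equations (4)–(6) can be sketched as follows; the function and variable names are illustrative.

```python
import numpy as np

def simulate_span(f_r, f_g, f_b, red, green, blue, mode="sum"):
    """Simulate a high-spatial-resolution band from the aerial R, G, B bands
    using the per-class functions f_R, f_G, f_B.

    mode = "sum"      -> Equation (4): sum of the three components
    mode = "mean"     -> Equation (5): average of the three components
    mode = "weighted" -> Equation (6): NTSC weights 0.299 / 0.587 / 0.114
    """
    c_r = f_r(np.asarray(red, dtype=np.float64))
    c_g = f_g(np.asarray(green, dtype=np.float64))
    c_b = f_b(np.asarray(blue, dtype=np.float64))

    if mode == "sum":
        return c_r + c_g + c_b
    if mode == "mean":
        return (c_r + c_g + c_b) / 3.0
    if mode == "weighted":
        return 0.299 * c_r + 0.587 * c_g + 0.114 * c_b
    raise ValueError("mode must be 'sum', 'mean', or 'weighted'")

# Hypothetical usage:
# span_kn = simulate_span(f_R, f_G, f_B, red_img, green_img, blue_img, "weighted")
```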
Then, the generated synthetic images of high spatial resolution (equal to the resolution of the aerial data) and the multispectral satellite images were integrated for each of the tested objects. The data fusion was carried out using the Gram–Schmidt pansharpening method. This method was selected because of its speed and ease of implementation and its relatively good preservation of spectral information [37]. For comparison purposes, the original panchromatic and multispectral aerial images (red, green, and blue bands) were also integrated with the multispectral satellite images for each of the tested land-cover classes.

The method known as “consistency” or “Wald’s protocol first property” [2,3] was used to assess the quality of the enhanced images. The spatial resolution of each composite image was degraded to the spatial resolution of the multispectral image. Statistical analysis was performed using five spectral quality assessment indicators: the universal image quality index (UIQI), the peak signal-to-noise ratio (PSNR), the correlation coefficient (CC), the structural similarity (SSIM) index, and the spectral angle mapper (SAM) [23,38,39,40,41]. The purpose of this evaluation was to determine the degree of spectral compatibility between the enhanced image degraded to the spatial resolution of the original satellite multispectral image and the original satellite multispectral image. Additionally, the spectral reflectance characteristics of the studied objects collected from the enhanced images and the satellite imagery were compared.

A comparison of the high-spatial-resolution bands used in the fusion process in terms of their information content was also made. For this purpose, the values of information entropy (Shannon entropy) were used. Information entropy is a measure of the randomness or uncertainty in an image or a signal, i.e., of its information richness. The Shannon entropy H(S) is defined as follows (Equation (7)) [42,43,44,45]:
$H(S) = - \sum_{i=1}^{n} p(S_i) \log_2 p(S_i)$. (7)
Concerning image analysis, S_i denotes the pixel value and p(S_i) denotes the probability of this value occurring in the image. A higher entropy value denotes higher image information content and, hence, higher image quality [43,46].
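A minimal sketch of this computation (Equation (7)), estimating p(S_i) from the normalized histogram of pixel values, might look as follows; the bin count is an assumption.

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Shannon entropy H(S) of an image, with p(S_i) estimated from the
    normalized histogram of pixel values."""
    hist, _ = np.histogram(np.asarray(img).ravel(), bins=bins)
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]                      # ignore empty bins: 0 * log(0) -> 0
    return -np.sum(p * np.log2(p))
```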
Nevertheless, it is known that the Shannon parameter cannot be used to quantify the spatial information (in terms of the configurational-spatial distribution of pixels and some of the compositional information, e.g., contrast) [47,48].
In order to measure the spatial information content of an image, Boltzmann entropy (BE) has been used relatively recently [48,49]. This parameter was first proposed in the 1870s to determine the configuration and composition of a thermodynamic system in physics. It is defined as follows (Equation (8)):
$S = k_B \log(W)$, (8)
where S is the Boltzmann entropy value for the given system, W is the number of distinguishable microstates of that system, and k_B is a constant [47]. An obstacle to the use of Boltzmann entropy, not only in thermodynamics, is the lack of a universal definition of the macrostate of a system and the need to develop a method for determining the number of microstates in a macrostate. In [48], a method for the application of Boltzmann entropy to images was proposed. The macrostate is defined there as an upscaled version of an original image generated by resampling this image with a 2 × 2 pixel window. W is then the number of all achievable results of rescaling the macrostate back to the resolution of the original image, under the assumption that all rescaling results have the same mean, minimum, and maximum values as the input data. Once W is determined, it becomes possible to find the relative entropy (or relative configurational entropy) using the Boltzmann equation (Equation (9)).
$S = k_B \log(W) = \log_{10}(W)$, (9)
where $k_B = 1$ [50]; the base-ten logarithm is used here, although bases two and e are also allowed.
By summing the relative entropies (calculated between two adjacent levels of the upscaling hierarchy), the absolute entropy is obtained. Using absolute entropy, it becomes possible to compare two images [48]. Boltzmann entropy makes it possible to quantify not only the statistical information but also the spatial information of a dataset. Moreover, it provides a link between spatial patterns and their thermodynamic interpretation.
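To illustrate the macrostate/microstate idea only, the toy sketch below brute-forces W for a single 2 × 2 window of integer pixel values, taking the window's minimum, maximum, and sum as its macrostate. This is a conceptual simplification made for this illustration; the hierarchy-based method of [48] defines the macrostate via resampling of the whole image and computes the entropies without brute-force enumeration.

```python
import numpy as np
from itertools import product

def window_relative_entropy(window):
    """Relative Boltzmann entropy (base-10 log of W) for one 2 x 2 window of
    integer pixel values. W is brute-forced as the number of ordered 2 x 2
    arrangements sharing the window's minimum, maximum, and sum; this is a
    conceptual toy, not the hierarchy-based algorithm of [48]."""
    vals = np.asarray(window, dtype=int).ravel()
    lo, hi, total = vals.min(), vals.max(), vals.sum()
    count = 0
    for a, b, c in product(range(lo, hi + 1), repeat=3):
        d = total - a - b - c
        quad = (a, b, c, d)
        if lo <= d <= hi and min(quad) == lo and max(quad) == hi:
            count += 1
    return np.log10(count)

# A flat window admits a single consistent arrangement (entropy 0),
# whereas a heterogeneous window admits many:
# print(window_relative_entropy([[5, 5], [5, 5]]))   # 0.0
# print(window_relative_entropy([[1, 9], [4, 6]]))   # > 0
```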
The absolute Boltzmann entropy of an image is a characterization of the uncertainty of downscaling from a single pixel to the original image. Given that uncertainty is the cornerstone of information theory, the relative and absolute Boltzmann entropies can serve as a basis for spatial information theory [51].
In [52], Boltzmann entropy was employed to quantify the spatial information of raster data, i.e., images, maps, digital elevation models, landscape mosaics, etc.
In spatial information theory, one of the fundamental issues is the measurement of information content. When fusing two images, the fundamental question is whether the information content of the fused image is greater than that of the original ones or not. Entropy is, by far, the most popular and widely accepted measure for information content [51].
Therefore, in our future research, we will deal with the spatial information aspect and the assessment of the degree of preservation of this information in the pansharpening process.

5. Results

This study began with the development of mathematical functions describing the relationships between the pixel values of the aerial panchromatic image (PAN_K) and the aerial images obtained in the blue (B_K), green (G_K), and red (R_K) ranges for each type of land cover studied. Below are diagrams (Figure 4, Figure 5 and Figure 6) presenting the functional relationships for three natural objects selected from the sample collection. The pixel number was 4977 for the forest, 3588 for the bare soil, and 27,590 for the mown meadow. Before determining the functions, the data were filtered on the basis of the value of the standard deviation. About 20% of the samples were rejected for each dataset.
The values of the coefficient of determination (R²) ranged from 0.6019 to 0.7247. The use of data filtering increased the accuracy of fitting the determined functions; however, the coefficient values varied due to the number of samples used and the type of land cover, for which the pixel values were not homogeneous. The functional relationships described above were then used to simulate new bands with high spatial resolution. For this purpose, new, corresponding fragments representing the same land-cover classes for which the mathematical functions had been developed were selected from the aerial and satellite images. Using the aerial multispectral image and the mathematical functions, new synthetic images with a spatial resolution equal to that of the aerial image were generated for each of the objects under study. The simulated images were then integrated with the satellite imagery from the Sentinel-2 satellite. The original aerial images acquired in the blue, green, and red ranges were also combined with the satellite multispectral image. Several dozen enhanced images were generated following this procedure. Fragments of three of the enhanced images are presented in Appendix B. The spectral quality of each of the samples (classes of natural land cover) was assessed. The results are presented in Table 2, Table 3 and Table 4.
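A minimal sketch of the consistency check behind these tables is given below: the fused image is degraded back to the resolution of the multispectral input (here by simple block averaging over an integer factor, an assumption; in practice the 0.3 m to 10 m resampling is handled by image-processing software) and compared band by band with the original satellite bands. The helper names and the selection of CC, PSNR, and SAM (out of the five indicators used) are illustrative.

```python
import numpy as np

def block_average(img, factor):
    """Degrade a 2-D image by averaging non-overlapping factor x factor blocks."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    img = np.asarray(img, dtype=np.float64)[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def correlation_coefficient(a, b):
    """Pearson correlation coefficient (CC) between two bands."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def psnr(reference, test, peak=None):
    """Peak signal-to-noise ratio (PSNR) of a test band against a reference."""
    peak = reference.max() if peak is None else peak
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def sam(reference_cube, test_cube):
    """Mean spectral angle (SAM, radians) between per-pixel spectra of two
    (bands, rows, cols) image cubes."""
    ref = reference_cube.reshape(reference_cube.shape[0], -1)
    tst = test_cube.reshape(test_cube.shape[0], -1)
    cos = np.sum(ref * tst, axis=0) / (
        np.linalg.norm(ref, axis=0) * np.linalg.norm(tst, axis=0) + 1e-12)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical usage for one sample (cubes shaped (bands, rows, cols)):
# degraded = np.stack([block_average(band, factor) for band in fused_cube])
# cc_band2 = correlation_coefficient(degraded[0], ms_cube[0])
# sam_value = sam(ms_cube, degraded)
```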
In the case of the forest sample, the highest spectral quality was obtained when using the original aerial red band as the high-spatial-resolution data in the fusion process. Four out of five coefficients (excluding SAM) indicated an increase in the spectral quality of the image enhanced with the use of the aerial red band. The increase in quality, however, was insignificant in relation to the use of the original aerial panchromatic band.
The highest values of the spectral quality coefficients for the mown meadow were obtained using the simulated SPAN_K band (Equation (4)). The highest increase in the value of the coefficients was recorded for bands 6–9 (vegetation red edge bands 6, 7, and 8a, as well as NIR band 8). The increase in spectral quality for the mown meadow sample was significant compared to the results obtained for the forest sample. Relative to the values obtained using the original panchromatic image in the fusion process, the values of the coefficients for the mown meadow sample in the red edge and NIR bands increased by 8% to 14%.
The enhanced image of the highest spectral quality showing bare soil was obtained using SPAN_Kn (Equation (6)) as the high-spatial-resolution data in the pansharpening process. All the coefficients used clearly indicated an increase in the spectral quality of the enhanced image compared to that obtained with the original aerial panchromatic image. For the bare soil sample, the highest increase in the values of the coefficients was observed for bands 5 (vegetation red edge), 11 (SWIR), and 12 (SWIR). For these bands, the values of the coefficients increased by 12% to 69%.
The spectral reflectance characteristics of the studied samples were also compared (Figure 7). Three diagrams were generated for each sample: the first one based on the sharpened image obtained from the original aerial PAN image, the second one based on the enhanced image with the highest spectral quality, and the third containing values obtained from the original Sentinel-2 multispectral image.
For the forest and bare soil samples, no difference was observed between the characteristics obtained from the enhanced images (both diagrams for each sample overlap). On the other hand, a noticeable improvement was observed for the mown meadow sample in the bands corresponding to wavelengths of 0.705, 0.740, 0.783, 0.842, 0.865, 1.61, and 2.19 µm (bands 5–8a and 11–12 of Sentinel-2).
The entropy measure introduced by Shannon can be used to quantify the amount of information within an image [43,53]. In this paper, the entropy measure was used for the evaluation of the information content [42] of the simulated panchromatic bands used in the data fusion process. According to entropy theory, higher values indicate an image with richer detail (Table 5).
The obtained results showed that the simulated panchromatic bands reached a higher information content than the original aerial multispectral bands considered as alternative panchromatic channels in the pansharpening process; in the case of the mown meadow, the simulated SPAN_K band even exceeded the information content of the original panchromatic band.

6. Discussion and Conclusions

The process of combining images from different sensors is intended to provide more complete environmental information than a single image can offer. This article presented a methodology for combining aerial and satellite data to improve the spectral quality of the sharpened image, thus meeting the expectations for the development of techniques for integrating data recorded by different sensors [4].
The research proved that, in order to retain the highest possible spectral quality in the pansharpening process, it is necessary to simulate a new synthetic image with high spatial resolution, depending on the type of objects studied. The tests led to the development of a methodology for combining aerial data obtained with a DMC II 230 with Sentinel-2 satellite data for selected natural objects. The use of simulated high-spatial-resolution bands with high information content in the fusion process allowed the authors to obtain, for each sample (type of land cover), enhanced images with less spectral distortion than those produced with the original aerial panchromatic image. This was confirmed by the values of the spectral quality assessment coefficients and by the spectral reflectance characteristics of the objects. However, the degree of spectral compatibility between the generated enhanced image of the highest spectral quality and the satellite multispectral imagery included in the integration process differed between the objects studied. In the case of the forest sample, the increase in spectral quality was negligible, whereas, for the remaining samples (mown meadow and bare soil), a significant increase in retained spectral information was found. The low values of the coefficients for the forest were probably due to the occurrence of tree shadows in the image, which could disturb the analysis. No method was found to simulate a new band with high spatial resolution, or to select a suitable multispectral aerial band, that would be universal for all objects studied. The essential difficulties of multi-sensor data fusion (i.e., time shift, resolution ratio, co-registration, and differences in spectral resolution) were described in Section 1. At the current stage of research, the main difficulty in applying the presented methodology is its limited degree of automation. The samples for the presented tests were selected manually, and the simulation of new channels and their fusion with satellite imagery were carried out separately for each of the tested land-cover types. This paper focused primarily on the essence of the simulation process of new high-spatial-resolution bands for the process of combining aerial and satellite images and on the method of carrying out this process. In the next stage of the research, the authors will focus on the automation of the presented methodology by implementing segmentation and classification algorithms, so that the simulation process will be performed simultaneously for the entire image and not fragmentarily (for each sample separately). Additionally, the research will be extended to artificial objects.
Few studies are available concerning the fusion of aerial and satellite data. In the analyzed studies concerning the spectral quality of images enhanced by the integration of optical data from the same platform (such as [54,55,56]) or from different platforms operating at the same altitudes (such as [57,58,59]), the entire image recorded by the sensor is processed in the same way. The fusion process does not take into account the diversity of the land-cover types present in the image or the different reflectance characteristics recorded for them. Usually, such research involves checking which of the existing pansharpening methods yields the highest quality of the enhanced image. Due to the differences in the degree of preservation of spectral information in the pansharpening process depending on the type of land cover, the spectral reflectance characteristics of the photographed objects need to be taken into account in this process.
This article presented an innovative methodology of combining optical data acquired from satellite and aerial altitudes which takes into account the individual spectral characteristics of the photographed objects by using new, simulated high-spatial-resolution bands or appropriate multispectral high-spatial-resolution bands. The methodology applies both to multispectral aerial images acquired today and to archival aerial data registered in three spectral ranges (blue, green, and red), as long as appropriate archival satellite multispectral imagery is available. The authors did not explore the usefulness of pansharpening methods in general, but rather focused on the role of the high-spatial-resolution band in the fusion process. Thanks to the developed methodology, the spectral information processed through pansharpening was retained to a higher degree than with the original panchromatic image. When combining data from different satellite platforms, the authors of [57,58] achieved high spectral quality of the enhanced images. For the best pansharpening methods applied, the values of the spectral quality coefficients CC and SSIM were close to 1, while, for the worst methods, the CC values ranged from 0.1 to 0.9 and the SSIM values ranged from 0.1 to 0.4. In this study, mean CC values in the range of 0.3–0.9 and mean SSIM values in the range of 0.5–0.9 were obtained, depending on the object studied. However, when combining image data from such different platforms, it is expected that the resulting enhanced images will be encumbered with more spectral distortion than when combining data from the same platform. In previous studies [7,9,21], the authors proved that, despite the spectral distortions that occur, the enhanced images obtained by combining aerial and satellite data are useful for environmental analyses (interpretation and classification of land cover). In this article, the authors went one step further in developing the integration of aerial and satellite image data and presented a methodology for increasing the spectral quality of sharpened images, thus giving rise to the development of new techniques for combining images recorded at different altitudes.

Author Contributions

K.S. designed and developed the research idea and wrote the manuscript. I.E. and A.J. contributed to result interpretation and revision of the manuscript. All authors read and agreed to the published version of the manuscript.

Funding

The research was conducted within the scope of the project financed by the Military University of Technology.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Locations of samples used for the study (1, 4, and 7—bare soils; 2—forest; 3, 5, 8, and 10—meadows; 6—mown meadow; 9 and 11—farmlands).

Appendix B

Figure A2. 1, 3, and 5—samples of the best high-spatial-resolution images; 2, 4, and 6—samples of images in natural color composition enhanced with the best high-spatial-resolution image.

References

  1. Zhang, J. Multi-source remote sensing data fusion: Status and trends. Int. J. Image Data Fusion 2010, 1, 5–24. [Google Scholar] [CrossRef] [Green Version]
  2. Thomas, C.; Ranchin, T.; Wald, L.; Chanussot, J. Synthesis of Multispectral Images to High Spatial Resolution: A Critical Review of Fusion Methods Based on Remote Sensing Physics. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1301–1312. [Google Scholar] [CrossRef] [Green Version]
  3. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699. [Google Scholar]
  4. Dong, J.; Zhuang, D.; Huang, Y.; Fu, J. Advances in multi-sensor data fusion: Algorithms and applications. Sensors 2009, 9, 7771–7784. [Google Scholar] [CrossRef] [Green Version]
  5. Jenerowicz, A.; Siok, K.; Woroszkiewicz, M.; Dabrowski, R. The fusion of satellite and UAV data. The accuracy analysis of data fusion results. In Fifth Recent Advances in Quantitative Remote Sensing; Sobrino, J.A., Ed.; Universitat de València: Valencia, Spain, 2018; pp. 425–429. [Google Scholar]
  6. Ehlers, M.; Jacobsen, K.; Schiewe, J. High resolution image data and GIS. In ASPRS Manual GIS; Madden, M., Ed.; American Society for Photogrammetry and Remote Sensing: Bethesda, MD, USA, 2009; pp. 721–777. [Google Scholar]
  7. Siok, K.; Ewiak, I. The simulation approach to the interpretation of archival aerial photographs. Open Geosci. 2020, 12, 1–10. [Google Scholar] [CrossRef] [Green Version]
  8. Jenerowicz, A.; Siok, K.; Woroszkiewicz, M.; Orych, A. The fusion of satellite and UAV data: Simulation of high spatial resolution band. In Proceedings of the Remote Sensing for Agriculture, Ecosystems, and Hydrology XIX, Warsaw, Poland, 11–14 September 2017; International Society for Optics and Photonics: Bellingham, WA, USA, 2017; Volume 10421, p. 104211Z. [Google Scholar]
  9. Siok, K.; Jenerowicz, A.; Ewiak, I. A simulation approach to the spectral quality of multispectral images enhancement. Comput. Electron. Agric. 2020, 174, 105432. [Google Scholar] [CrossRef]
  10. Blanc, P.; Wald, L.; Ranchin, T. Importance and Effect of Co-Registration Quality in an Example of “Pixel to pIxel” Fusion Process. In Proceedings of the 2nd International Conference “Fusion of Earth Data: Merging Point Measurements, Raster Maps and Remotely Sensed Images”, Sophia Antipolis, France, 28–30 January 1998; SEE/URISCA: Nice, France, 1998; pp. 67–74. [Google Scholar]
  11. Švab, A.; Oštir, K. High-resolution image fusion: Methods to preserve spectral and spatial resolution. Photogramm. Eng. Remote Sens. 2006, 72, 565–572. [Google Scholar] [CrossRef]
  12. Yuhendra, J.; Kuze, H. Performance analyzing of high resolution pan-sharpening techniques: Increasing image Quality for Classification using supervised kernel support vector machine. Res. J. Inf. Technol. 2011, 8, 12–28. [Google Scholar]
  13. Jenerowicz, A.; Woroszkiewicz, M. The pan-sharpening of satellite and UAV imagery for agricultural applications. In Proceedings of the Remote Sensing for Agriculture, Ecosystems, and Hydrology XVIII, Edinburgh, UK, 26–29 September 2016; Volume 9998, p. 99981S. [Google Scholar]
  14. Gevaert, C.M.; Tang, J.; García-Haro, F.J.; Suomalainen, J.; Kooistra, L. Combining hyperspectral UAV and multispectral Formosat-2 imagery for precision agriculture applications. In Proceedings of the 2014 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Lausanne, Switzerland, 24–27 June 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1–4. [Google Scholar]
  15. Siok, K.; Ewiak, I.; Jenerowicz, A. Enhancement of spectral quality of natural land cover in the pan-sharpening process. In Proceedings of the Image and Signal Processing for Remote Sensing XXIV, Berlin, Germany, 10–12 September 2018; International Society for Optics and Photonics: Bellingham, WA, USA, 2018; Volume 10789, p. 107891P. [Google Scholar]
  16. Siok, K.; Jenerowicz, A.; Ewiak, I. The simulation of new spectral bands for the purpose of data pan-sharpening. In Fifth Recent Advances in Quantitative Remote Sensing; Sobrino, J.A., Ed.; Servicio Publicacions Universitat de Valencia: Valencia, Spain, 2018; pp. 430–435. [Google Scholar]
  17. Jenerowicz, A.; Siok, K. Fusion of radar and optical data for mapping and monitoring of water bodies. In Proceedings of the Remote Sensing for Agriculture, Ecosystems, and Hydrology XIX, Warsaw, Poland, 11–14 September 2017; International Society for Optics and Photonics: Bellingham, WA, USA, 2017; Volume 10421, p. 1042126. [Google Scholar]
  18. Jenerowicz, A.; Kaczynski, R.; Siok, K.; Schismak, A. Data fusion for high accuracy classification of urban areas. In Proceedings of the Remote Sensing Technologies and Applications in Urban Environments III, Berlin, Germany, 10–11 September 2018; International Society for Optics and Photonics: Bellingham, WA, USA, 2018; Volume 10793, p. 1079315. [Google Scholar]
  19. Lu, D.; Li, G.; Moran, E.; Dutra, L.; Batistella, M. A comparison of multisensor integration methods for land cover classification in the Brazilian Amazon. GISci. Remote Sens. 2011, 48, 345–370. [Google Scholar] [CrossRef] [Green Version]
  20. Zhu, L.; Tateishi, R. Fusion of multisensor multitemporal satellite data for land cover mapping. Int. J. Remote Sens. 2006, 27, 903–918. [Google Scholar] [CrossRef]
  21. Siok, K.; Jenerowicz, A.; Woroszkiewicz, M. Enhancement of spectral quality of archival aerial photographs using satellite imagery for detection of land cover. J. Appl. Remote Sens. 2017, 11, 36001. [Google Scholar] [CrossRef]
  22. Kaimaris, D.; Patias, P.; Mallinis, G.; Georgiadis, C. Data Fusion of Scanned Black and White Aerial Photographs with Multispectral Satellite Images. Sci 2020, 2, 36. [Google Scholar] [CrossRef] [Green Version]
  23. Hill, J.; Diemer, C.; Stöver, O.; Udelhoven, T. A local correlation approach for the fusion of remote sensing data with different spatial resolutions in forestry applications. Int. Arch. Photogramm. Remote Sens. 1999, 32, 3–4. [Google Scholar]
  24. Chen, Z.; Pu, H.; Wang, B.; Jiang, G.-M. Fusion of hyperspectral and multispectral images: A novel framework based on generalization of pan-sharpening methods. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1418–1422. [Google Scholar] [CrossRef]
  25. Price, J.C. Combining panchromatic and multispectral imagery from dual resolution satellite instruments. Remote Sens. Environ. 1987, 21, 119–128. [Google Scholar] [CrossRef]
  26. Zhang, Y.; He, M. Multi-spectral and hyperspectral image fusion using 3-D wavelet transform. J. Electron. 2007, 24, 218–224. [Google Scholar] [CrossRef]
  27. Z/I DMC® II230 Camera System. Available online: https://www.aerial-survey-base.com (accessed on 11 November 2020).
  28. Petrie, G. The Intergraph DMC II Camera Range. GeoInformatics 2010, 13, 8. [Google Scholar]
  29. Aschbacher, J.; Milagro-Pérez, M.P. The European Earth monitoring (GMES) programme: Status and perspectives. Remote Sens. Environ. 2012, 120, 3–8. [Google Scholar] [CrossRef]
  30. Malenovský, Z.; Rott, H.; Cihlar, J.; Schaepman, M.E.; García-Santos, G.; Fernandes, R.; Berger, M. Sentinels for science: Potential of Sentinel-1,-2, and-3 missions for scientific observations of ocean, cryosphere, and land. Remote Sens. Environ. 2012, 120, 91–101. [Google Scholar] [CrossRef]
  31. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P. Sentinel-2: ESA’s optical high-resolution mission for GMES operational services. Remote Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
  32. Bernstein, L.S.; Adler-Golden, S.M.; Sundberg, R.L.; Levine, R.Y.; Perkins, T.C.; Berk, A.; Ratkowski, A.J.; Felde, G.; Hoke, M.L. Validation of the QUick atmospheric correction (QUAC) algorithm for VNIR-SWIR multi- and hyperspectral imagery. In Proceedings of the Proc. SPIE 5806, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, Orlando, FL, USA, 28 March–1 April 2005; p. 668. [Google Scholar]
  33. Lillesand, T.; Kiefer, R.W.; Chipman, J. Remote Sensing and Image Interpretation; John Wiley & Sons: Hoboken, NJ, USA, 2014; ISBN 111834328X. [Google Scholar]
  34. Tempfli, K.; Huurneman, G.; Bakker, W.; Janssen, L.L.F.; Feringa, W.F.; Gieske, A.S.M.; Grabmaier, K.A.; Hecker, C.A.; Horn, J.A.; Kerle, N. Principles of Remote Sensing: An Introductory Textbook; ITC: Enschede, The Netherlands, 2009.
  35. Adamczyk, J.; Będkowski, K. Metody cyfrowe w teledetekcji; Warsaw University of Life Sciences: Warsaw, Poland, 2007. [Google Scholar]
  36. Pratt, W.K. Image enhancement. In Digital Image Processing: PIKS Inside, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2001; pp. 247–305. [Google Scholar]
  37. Yusuf, Y.; Sri Sumantyo, J.T.; Kuze, H. Spectral information analysis of image fusion data for remote sensing applications. Geocarto Int. 2013, 28, 291–310. [Google Scholar] [CrossRef]
  38. Alparone, L.; Baronti, S.; Garzelli, A.; Nencini, F. A Global Quality Measurement of Pan-Sharpened Multispectral Imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 313–317. [Google Scholar] [CrossRef]
  39. Jagalingam, P.; Hegde, A.V. A review of quality metrics for fused image. Aquat. Procedia 2015, 4, 133–142. [Google Scholar] [CrossRef]
  40. Du, Y.; Zhang, Y.; Ling, F.; Wang, Q.; Li, W.; Li, X. Water bodies’ mapping from Sentinel-2 imagery with modified normalized difference water index at 10-m spatial resolution produced by sharpening the SWIR band. Remote Sens. 2016, 8, 354. [Google Scholar] [CrossRef] [Green Version]
  41. Yokoya, N.; Grohnfeldt, C.; Chanussot, J. Hyperspectral and Multispectral Data Fusion: A comparative review of the recent literature. IEEE Geosci. Remote Sens. Mag. 2017, 5, 29–56. [Google Scholar] [CrossRef]
  42. Wang, C.; Shen, H.-W. Information theory in scientific visualization. Entropy 2011, 13, 254–273. [Google Scholar] [CrossRef] [Green Version]
  43. Tsai, D.-Y.; Lee, Y.; Matsuyama, E. Information entropy measure for evaluation of image quality. J. Digit. Imaging 2008, 21, 338–347. [Google Scholar] [CrossRef] [Green Version]
  44. Haghighat, M.B.A.; Aghagolzadeh, A.; Seyedarabi, H. A non-reference image fusion metric based on mutual information of image features. Comput. Electr. Eng. 2011, 37, 744–756. [Google Scholar] [CrossRef]
  45. Liu, L.; Liu, B.; Huang, H.; Bovik, A.C. No-reference image quality assessment based on spatial and spectral entropies. Signal Process. Image Commun. 2014, 29, 856–863. [Google Scholar] [CrossRef]
  46. Zeng, Y.; Huang, W.; Liu, M.; Zhang, H.; Zou, B. Fusion of satellite images in urban area: Assessing the quality of resulting images. In Proceedings of the 2010 18th International Conference on Geoinformatics, Beijing, China, 18–20 June 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 1–4. [Google Scholar]
  47. Gao, P.; Wang, J.; Zhang, H.; Li, Z. Boltzmann entropy-based unsupervised band selection for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2018, 16, 462–466. [Google Scholar] [CrossRef]
  48. Gao, P.; Zhang, H.; Li, Z. A hierarchy-based solution to calculate the configurational entropy of landscape gradients. Landsc. Ecol. 2017, 32, 1133–1146. [Google Scholar] [CrossRef]
  49. Gao, P.; Zhang, H.; Li, Z. An efficient analytical method for computing the Boltzmann entropy of a landscape gradient. Trans. GIS 2018, 22, 1046–1063. [Google Scholar] [CrossRef]
  50. Cushman, S.A. Calculating the configurational entropy of a landscape mosaic. Landsc. Ecol. 2016, 31, 481–489. [Google Scholar] [CrossRef]
  51. Gao, P. Boltzmann Entropy for Spatial Information of Images; Hong Kong Polytechnic University-Dissertations: Hong Kong, China, 2018. [Google Scholar]
  52. Gao, P.; Zhang, H.; Li, Z. Boltzmann Entropy for the Spatial Information of Raster Data. Abstr. Int. Cartogr. Assoc. 2019, 1, 86. [Google Scholar] [CrossRef]
  53. Sparavigna, A.C. Entropy in Image Analysis. Entropy 2019, 21, 502. [Google Scholar] [CrossRef] [Green Version]
  54. Sekrecka, A.; Kedzierski, M. Integration of Satellite Data with High Resolution Ratio: Improvement of Spectral Quality with Preserving Spatial Details. Sensors 2018, 18, 4418. [Google Scholar] [CrossRef] [Green Version]
  55. Palubinskas, G. Joint quality measure for evaluation of pansharpening accuracy. Remote Sens. 2015, 7, 9292–9310. [Google Scholar] [CrossRef] [Green Version]
  56. Li, H.; Jing, L.; Tang, Y. Assessment of pansharpening methods applied to worldview-2 imagery fusion. Sensors 2017, 17, 89. [Google Scholar] [CrossRef]
  57. Ehlers, M.; Klonus, S.; Johan Åstrand, P.; Rosso, P. Multi-sensor image fusion for pansharpening in remote sensing. Int. J. Image Data Fusion 2010, 1, 25–45. [Google Scholar] [CrossRef]
  58. Al-Wassai, F.A.; Kalyankar, N.V.; Al-Zaky, A.A. Multisensor images fusion based on feature-level. arXiv 2011, arXiv:1108.4098. [Google Scholar]
  59. Ghimire, P.; Lei, D.; Juan, N. Effect of Image Fusion on Vegetation Index Quality—A Comparative Study from Gaofen-1, Gaofen-2, Gaofen-4, Landsat-8 OLI and MODIS Imagery. Remote Sens. 2020, 12, 1550. [Google Scholar] [CrossRef]
Figure 1. The data used for the research: (a) aerial MS (multispectral) image, (b) aerial PAN (panchromatic) image, and (c) satellite MS imagery.
Figure 2. Flowchart for the data fusion.
Figure 3. Locations of samples (red for forest, blue for mown meadow, and green for bare soil).
Figure 4. Diagrams presenting the functional relationships for the forest sample: (a) between PAN and red bands; (b) between PAN and green bands; (c) between PAN and blue bands.
Figure 5. Diagrams presenting the functional relationships for the mown meadow sample: (a) between PAN and red bands; (b) between PAN and green bands; (c) between PAN and blue bands.
Figure 6. Diagrams presenting the functional relationships for the bare soil sample: (a) between PAN and red bands; (b) between PAN and green bands; (c) between PAN and blue bands.
Figure 7. Spectral reflectance characteristics of the studied samples.
Table 1. The sensitivity of the aerial (DMC II 230) and satellite (S2A) sensors; the values for the aerial sensor were read from the camera calibration report and are approximate values.

DMC II 230:
| Spectral Band | Lower (nm) | Upper (nm) | FWHM ¹ (nm) | Centre Wavelength (nm) |
| Blue  | 430 | 485 | 55  | 457.5 |
| Green | 505 | 560 | 55  | 532.5 |
| Red   | 600 | 665 | 65  | 632.5 |
| PAN   | 450 | 690 | 240 | 570.0 |

S2A (Sentinel-2):
| Spectral Band | Lower (nm) | Upper (nm) | FWHM (nm) | Centre Wavelength (nm) |
| Blue                | 459.4  | 525.4  | 66  | 492.4  |
| Green               | 541.8  | 577.8  | 36  | 559.8  |
| Red                 | 649.1  | 680.1  | 31  | 664.6  |
| Vegetation Red Edge | 696.6  | 711.6  | 15  | 704.1  |
| Vegetation Red Edge | 733    | 748    | 15  | 740.5  |
| Vegetation Red Edge | 772.8  | 792.8  | 20  | 782.8  |
| NIR                 | 779.8  | 885.8  | 106 | 832.8  |
| Vegetation Red Edge | 854.2  | 875.2  | 21  | 864.7  |
| SWIR ²              | 1568.2 | 1659.2 | 91  | 1613.7 |
| SWIR                | 2114.9 | 2289.9 | 175 | 2202.4 |

¹ Full width at half maximum. ² Short-wave infrared.

The spectral range of the aerial panchromatic band mostly includes the spectral ranges of the blue, green, and red bands of Sentinel-2. For the other Sentinel-2 bands, there is no coverage. The aerial blue band spectrally covers the satellite blue band to some extent; mutual coverage between the red bands is present to a lesser extent. The aerial green band, on the other hand, partially covers the range of the blue band and the green band of the Sentinel-2 satellite.
Table 2. Spectral quality values for the forest sample (the best results are underlined). In each cell, the first value was obtained using the original aerial PAN band and the second using the original aerial red band as the high-spatial-resolution input.

| Sentinel-2 Band (Spatial Resolution (m)) | UIQI (PAN / Red) | SSIM (PAN / Red) | PSNR (PAN / Red) | CC (PAN / Red) |
| 2 (10)  | 0.224 / 0.226 | 0.591 / 0.593 | 33.704 / 33.791 | 0.456 / 0.461 |
| 3 (10)  | 0.227 / 0.232 | 0.895 / 0.896 | 42.035 / 42.105 | 0.413 / 0.422 |
| 4 (10)  | 0.243 / 0.248 | 0.948 / 0.949 | 45.409 / 45.481 | 0.430 / 0.440 |
| 5 (20)  | 0.129 / 0.136 | 0.598 / 0.608 | 32.049 / 32.121 | 0.146 / 0.155 |
| 6 (20)  | 0.113 / 0.121 | 0.218 / 0.226 | 22.470 / 22.542 | 0.129 / 0.138 |
| 7 (20)  | 0.111 / 0.119 | 0.188 / 0.197 | 20.963 / 21.035 | 0.126 / 0.136 |
| 8 (10)  | 0.227 / 0.231 | 0.316 / 0.319 | 25.172 / 25.259 | 0.427 / 0.434 |
| 8a (20) | 0.113 / 0.122 | 0.171 / 0.179 | 19.599 / 19.671 | 0.129 / 0.139 |
| 11 (20) | 0.123 / 0.131 | 0.319 / 0.328 | 25.859 / 25.930 | 0.140 / 0.149 |
| 12 (20) | 0.142 / 0.150 | 0.628 / 0.639 | 32.623 / 32.623 | 0.161 / 0.170 |
| Arithmetic mean | 0.165 / 0.172 | 0.487 / 0.493 | 29.988 / 30.056 | 0.256 / 0.264 |

SAM (computed for the whole sample): 0.027 (PAN) / 0.028 (Red).
Table 3. Spectral quality values for the mown meadow sample (the best results are underlined). In each cell, the first value was obtained using the original aerial PAN band and the second using the simulated SPAN_K band as the high-spatial-resolution input.

| Sentinel-2 Band (Spatial Resolution (m)) | UIQI (PAN / SPAN_K) | SSIM (PAN / SPAN_K) | PSNR (PAN / SPAN_K) | CC (PAN / SPAN_K) |
| 2 (10)  | 0.87 / 0.93 | 0.90 / 0.94 | 33.1 / 35.1 | 0.88 / 0.92 |
| 3 (10)  | 0.89 / 0.93 | 0.91 / 0.95 | 35.4 / 37.4 | 0.89 / 0.93 |
| 4 (10)  | 0.91 / 0.94 | 0.94 / 0.96 | 37.5 / 39.1 | 0.91 / 0.94 |
| 5 (20)  | 0.78 / 0.85 | 0.80 / 0.86 | 25.7 / 27.3 | 0.79 / 0.85 |
| 6 (20)  | 0.74 / 0.84 | 0.75 / 0.84 | 18.8 / 20.9 | 0.75 / 0.84 |
| 7 (20)  | 0.73 / 0.84 | 0.74 / 0.84 | 17.4 / 19.6 | 0.74 / 0.84 |
| 8 (10)  | 0.82 / 0.90 | 0.82 / 0.90 | 19.3 / 21.6 | 0.83 / 0.90 |
| 8a (20) | 0.74 / 0.84 | 0.74 / 0.84 | 15.9 / 18.1 | 0.74 / 0.84 |
| 11 (20) | 0.78 / 0.85 | 0.78 / 0.85 | 19.5 / 21.1 | 0.79 / 0.85 |
| 12 (20) | 0.79 / 0.85 | 0.80 / 0.86 | 25.4 / 27.0 | 0.79 / 0.85 |
| Arithmetic mean | 0.80 / 0.88 | 0.82 / 0.88 | 24.8 / 26.7 | 0.81 / 0.87 |

SAM (computed for the whole sample): 0.072 (PAN) / 0.040 (SPAN_K).
Table 4. Spectral quality values for the bare soil sample (the best results are underlined). In each cell, the first value was obtained using the original aerial PAN band and the second using the simulated SPAN_Kn band as the high-spatial-resolution input.

| Sentinel-2 Band (Spatial Resolution (m)) | UIQI (PAN / SPAN_Kn) | SSIM (PAN / SPAN_Kn) | PSNR (PAN / SPAN_Kn) | CC (PAN / SPAN_Kn) |
| 2 (10)  | 0.61 / 0.64 | 0.99 / 0.99 | 54.0 / 54.4 | 0.79 / 0.84 |
| 3 (10)  | 0.60 / 0.65 | 0.96 / 0.99 | 48.0 / 48.7 | 0.68 / 0.78 |
| 4 (10)  | 0.62 / 0.66 | 0.87 / 0.93 | 41.3 / 42.4 | 0.75 / 0.87 |
| 5 (20)  | 0.46 / 0.55 | 0.79 / 0.89 | 32.1 / 38.8 | 0.57 / 0.66 |
| 6 (20)  | 0.55 / 0.57 | 0.85 / 0.87 | 32.7 / 39.3 | 0.63 / 0.67 |
| 7 (20)  | 0.56 / 0.56 | 0.80 / 0.82 | 30.3 / 36.5 | 0.62 / 0.64 |
| 8 (10)  | 0.62 / 0.67 | 0.90 / 0.93 | 44.7 / 46.5 | 0.82 / 0.89 |
| 8a (20) | 0.56 / 0.58 | 0.79 / 0.81 | 29.6 / 36.1 | 0.63 / 0.65 |
| 11 (20) | 0.30 / 0.47 | 0.52 / 0.66 | 23.4 / 31.0 | 0.36 / 0.60 |
| 12 (20) | 0.44 / 0.57 | 0.55 / 0.67 | 21.8 / 29.0 | 0.53 / 0.68 |
| Arithmetic mean | 0.53 / 0.59 | 0.80 / 0.86 | 35.8 / 40.3 | 0.64 / 0.73 |

SAM (computed for the whole sample): 0.023 (PAN) / 0.022 (SPAN_Kn).
Table 5. Entropy values for the original aerial panchromatic band, the original aerial multispectral bands, and the new simulated bands (the best results are underlined).

| Panchromatic Band | Forest | Mown Meadow | Bare Soil |
| Orig. aerial PAN   | 3.1551 | 2.8360 | 2.4999 |
| Orig. aerial blue  | 2.0270 | 1.7048 | 1.3274 |
| Orig. aerial green | 2.1597 | 2.0589 | 1.4207 |
| Orig. aerial red   | 2.5023 | 2.3871 | 1.6698 |
| SPAN_K             | 2.2088 | 3.1394 | 1.8792 |
| SPAN_Km            | 2.2780 | 2.8125 | 1.8269 |
| SPAN_Kn            | 2.3294 | 2.9678 | 1.9965 |