A Distributed Fusion Framework of Multispectral and Panchromatic Images Based on Residual Network
Abstract
1. Introduction
- A new RDFNet pan-sharpening model with strong robustness and improved generalization performance is proposed, motivated by the distributed fusion framework and residual learning.
- A new three-branch pan-sharpening structure is proposed, in which two branches extract the features of the MS and PAN images, respectively. The key component is the third branch, which realizes the fusion of the three channels: layer by layer, it concatenates the outputs of the two feature branches with the previous layer's fusion result, finally yielding the pan-sharpened images.
- Extensive experiments are carried out to verify the robustness and generalization of the proposed RDFNet, employing data from four different sensors and representative comparison methods, including both traditional and DL methods.
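The concatenate-and-fuse pattern of the third branch can be illustrated with a minimal NumPy sketch. This is a toy stand-in, not the paper's network: it uses 1 × 1 convolutions (per-pixel matrix multiplies) with random weights instead of trained 3 × 3 convolutions, and the function and variable names (`rdfnet_sketch`, `conv1x1`, `depth`, `width`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """Per-pixel linear map (a 1x1 convolution) with ReLU: x is (H, W, C_in), w is (C_in, C_out)."""
    return np.maximum(x @ w, 0.0)

def rdfnet_sketch(ms, pan, depth=3, width=8):
    """Toy three-branch fusion: an MS branch, a PAN branch, and a fusion branch
    that, at every layer, concatenates both feature branches with its own
    previous fusion result before fusing again."""
    f_ms, f_pan = ms, pan
    fused = np.concatenate([ms, pan], axis=-1)  # initial fusion input
    for _ in range(depth):
        w_ms = rng.standard_normal((f_ms.shape[-1], width)) * 0.1
        w_pan = rng.standard_normal((f_pan.shape[-1], width)) * 0.1
        f_ms = conv1x1(f_ms, w_ms)      # MS feature branch
        f_pan = conv1x1(f_pan, w_pan)   # PAN feature branch
        cat = np.concatenate([f_ms, f_pan, fused], axis=-1)  # three-channel fusion input
        w_fuse = rng.standard_normal((cat.shape[-1], width)) * 0.1
        fused = conv1x1(cat, w_fuse)    # fusion branch output for this layer
    return fused

ms = rng.random((16, 16, 4))    # 4-band MS patch
pan = rng.random((16, 16, 1))   # single-band PAN patch
out = rdfnet_sketch(ms, pan)
print(out.shape)  # (16, 16, 8)
```

The point of the sketch is the data flow: the fusion branch never discards its history, because its previous output re-enters each layer alongside the fresh MS and PAN features.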
2. Background
2.1. Distributed Fusion Structure
2.2. Residual Network
3. Methods
- MS and PAN images fed into RDFNet must first be preprocessed. Because researchers obtain remote sensing data at different processing levels, different preprocessing operations are required, for example radiometric correction (comprising radiometric calibration and atmospheric correction) and registration. The Landsat-8 and Landsat-7 data are at the L1T level, which has already undergone precise geometric correction and radiometric correction, so we only register the data following [54,55]. The GF-2 data are at the 2A level, which includes primary geometric correction and radiometric correction, so we perform precise geometric correction using ENVI. The QuickBird data are a standard product, for which we likewise perform precise geometric correction using ENVI. The GF-2 and QuickBird data are registered with the same method as Landsat-8.
- The LRMS and PAN images are fused to generate the HRMS images. In practice, there are no reference MS images with the same spatial resolution as the fused HRMS images. Consequently, following Wald's protocol [56], the original-scale MS and PAN images are downsampled, yielding the DLMS and LPAN images, respectively; the specific process is shown in Figure 3. The scaling factor is determined by the resolution ratio of the MS and PAN images. Since the MS and PAN inputs fed into RDFNet must have the same size, the DLMS images are interpolated up to the size of the LPAN images, producing the DULMS images. The original MS images can therefore be used as ground truth.
- The DULMS and LPAN images are fed into RDFNet, and the original MS images serve as its target output, as shown in Figure 4. The DULMS, LPAN, and MS images are randomly cropped into 64 × 64 subimages to form training samples. By adjusting the hyperparameters and structure of the network, and after sufficient training, the optimal network is obtained. As shown in Figures 5 and 6, the parameters of the well-trained network are then frozen, and its performance is tested on reduced-resolution and full-resolution MS and PAN images, respectively.
- Finally, the reduced-resolution pan-sharpened images are evaluated subjectively and quantitatively against the original MS images using the full-reference metrics described in Section 4.2, as shown in Figure 5. The full-resolution pan-sharpened images are evaluated subjectively and quantitatively using the no-reference metrics described in Section 4.2, as shown in Figure 6. The pan-sharpening performance of the proposed network is then verified by analyzing these indicators and by subjective visual evaluation.
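The reduced-resolution simulation under Wald's protocol can be sketched as follows. The paper does not specify the exact filters here, so this sketch assumes simple block averaging for downsampling and nearest-neighbour interpolation for upsampling; in practice MTF-matched filters and bicubic interpolation are common choices.

```python
import numpy as np

def block_downsample(img, r):
    """Reduce resolution by an integer factor r via block averaging
    (a simple stand-in for a sensor-matched low-pass filter)."""
    h, w = img.shape[:2]
    img = img[: h - h % r, : w - w % r]  # trim so dimensions divide evenly
    return img.reshape(h // r, r, w // r, r, -1).mean(axis=(1, 3))

def upsample_nearest(img, r):
    """Interpolate back to the LPAN grid (nearest neighbour for simplicity)."""
    return img.repeat(r, axis=0).repeat(r, axis=1)

scale = 2                          # e.g. Landsat: 30 m MS vs. 15 m PAN
ms = np.random.rand(64, 64, 4)     # original MS, reused as ground truth
pan = np.random.rand(128, 128, 1)  # original PAN

dlms = block_downsample(ms, scale)     # DLMS: 32 x 32 x 4
lpan = block_downsample(pan, scale)    # LPAN: 64 x 64 x 1
dulms = upsample_nearest(dlms, scale)  # DULMS: 64 x 64 x 4, same grid as LPAN
```

After this step the (DULMS, LPAN) pair plays the role of the network input, and the untouched original MS image is the reference output.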
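Extracting the 64 × 64 training samples mentioned above amounts to cropping the three aligned images at the same random positions. A minimal sketch (the helper name `random_patches` and its signature are assumptions, not the authors' code):

```python
import numpy as np

def random_patches(dulms, lpan, ms, n, size=64, seed=0):
    """Crop n aligned (DULMS, LPAN, MS) training triples at random positions.
    All three inputs must share the same spatial grid."""
    rng = np.random.default_rng(seed)
    h, w = ms.shape[:2]
    triples = []
    for _ in range(n):
        y = rng.integers(0, h - size + 1)  # same offset for all three images
        x = rng.integers(0, w - size + 1)
        triples.append((dulms[y:y + size, x:x + size],
                        lpan[y:y + size, x:x + size],
                        ms[y:y + size, x:x + size]))
    return triples

dulms = np.zeros((256, 256, 4))
lpan = np.zeros((256, 256, 1))
ms = np.zeros((256, 256, 4))
batch = random_patches(dulms, lpan, ms, n=8)
```

Using one offset for all three images keeps each input patch spatially registered with its ground-truth patch.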
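Two of the full-reference metrics used in the reduced-resolution evaluation, SAM and ERGAS, can be sketched directly from their standard definitions (a minimal NumPy version, not the authors' implementation):

```python
import numpy as np

def sam(ref, fused, eps=1e-12):
    """Mean spectral angle (radians) between per-pixel spectra; 0 is ideal."""
    dot = (ref * fused).sum(-1)
    denom = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1) + eps
    return np.mean(np.arccos(np.clip(dot / denom, -1.0, 1.0)))

def ergas(ref, fused, ratio):
    """Erreur relative globale adimensionnelle de synthese; 0 is ideal.
    ratio is the PAN/MS pixel-size ratio, e.g. 1/2 for Landsat."""
    rmse = np.sqrt(((ref - fused) ** 2).mean(axis=(0, 1)))  # per-band RMSE
    means = ref.mean(axis=(0, 1))                           # per-band mean
    return 100.0 * ratio * np.sqrt(np.mean((rmse / means) ** 2))

x = np.random.rand(32, 32, 4) + 0.5  # positive-valued toy reference
```

SAM isolates spectral distortion (it ignores per-pixel scaling), while ERGAS aggregates per-band radiometric error normalized by band means; both are zero when the fused image equals the reference.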
3.1. Overall Structure
3.2. Network Structure
3.3. Loss Function
4. Experimental Results and Analysis
4.1. Study Area and Datasets
4.1.1. Training Datasets
4.1.2. Testing Datasets
4.2. Fusion Quality Metrics
4.3. Implementation Details
4.4. Simulation Datasets Experimental Results
4.5. Full-Resolution Datasets’ Experimental Results
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
MS | Multispectral |
HRMS | High-resolution multispectral |
LRMS | Low-resolution multispectral |
Pan-sharpening | Panchromatic sharpening |
PAN | Panchromatic |
CNN | Convolutional neural network |
RCNN | Residual CNN |
HRHM | High spatial resolution and high spectral resolution |
HRHS | High-resolution hyperspectral |
CS | Component substitution |
MRA | Multiresolution analysis |
IHS | Intensity–hue–saturation |
PCA | Principal component analysis |
BT | Brovey transform |
WT | Wavelet transform |
GS | Gram–Schmidt |
NSCT | Non-subsampled contourlet transform |
SFIM | Smoothing filter-based intensity modulation |
SR | Sparse representation |
OCDL | Online coupled dictionary learning |
DL | Deep learning |
SRCNN | Super-resolution convolutional neural network |
PNN | Pan-sharpening by convolutional neural network |
DRPNN | Residual-network-based panchromatic sharpening |
HS | Hyperspectral |
GAN | Generative adversarial network |
RED-cGAN | Residual encoder–decoder conditional generative adversarial network |
BN | Batch normalization |
OLI | Operational Land Imager |
TIRS | Thermal Infrared Sensor |
CC | Correlation coefficient |
RMSE | Root mean square error |
SSIM | Structural similarity |
SAM | Spectral angle mapper |
ERGAS | Erreur relative globale adimensionnelle de synthèse |
UIQI | Universal image quality index |
QNR | Quality with no reference |
ULMS | Up-sampled low-resolution multispectral |
DLMS | Down-sampled low-resolution multispectral |
DULMS | Down-sampled and up-sampled low-resolution multispectral |
LPAN | Low-resolution panchromatic |
GF-2 | Gaofen-2 |
References
- Ghassemian, H. A review of remote sensing image fusion methods. Inf. Fusion 2016, 32, 75–89. [Google Scholar] [CrossRef]
- Qian, X.; Lin, S.; Cheng, G.; Yao, X.; Ren, H.; Wang, W. Object Detection in Remote Sensing Images Based on Improved Bounding Box Regression and Multi-Level Features Fusion. Remote Sens. 2020, 12, 143. [Google Scholar] [CrossRef] [Green Version]
- Li, S.; Kang, X.; Fang, L.; Hu, J.; Yin, H. Pixel-level image fusion: A survey of the state of the art. Inf. Fusion 2016, 33, 100–112. [Google Scholar] [CrossRef]
- Meng, X.; Shen, H.; Li, H.; Zhang, L.; Fu, R. Review of the Pansharpening Methods for Remote Sensing Images Based on the Idea of Meta-analysis: Practical Discussion and Challenges. Inf. Fusion 2018, 46, 102–113. [Google Scholar] [CrossRef]
- Lu, Z.; Xu, B.; Sun, L.; Zhan, T.; Tang, S. 3D Channel and Spatial Attention Based Multi-Scale Spatial Spectral Residual Network for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4311–4324. [Google Scholar] [CrossRef]
- Carper, W.J.; Lillesand, T.M.; Kiefer, P.W. The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data. Photogramm. Eng. Remote Sens. 1990, 56, 459–467. [Google Scholar]
- Tu, T.M.; Su, S.C.; Shyu, H.C.; Huang, P.S. A new look at IHS-like image fusion methods. Inf. Fusion 2001, 2, 177–186. [Google Scholar] [CrossRef]
- Tu, T.M.; Huang, P.S.; Hung, C.L.; Chang, C.P. A Fast Intensity–Hue–Saturation Fusion Technique With Spectral Adjustment for IKONOS Imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 309–312. [Google Scholar] [CrossRef]
- Rahmani, S.; Strait, M.; Merkurjev, D.; Moeller, M.; Wittman, T. An Adaptive IHS Pan-Sharpening Method. IEEE Geosci. Remote Sens. Lett. 2010, 7, 746–750. [Google Scholar] [CrossRef] [Green Version]
- Gonzalez-Audicana, M.; Saleta, J.L.; Catalan, R.G.; Garcia, R. Fusion of Multispectral and Panchromatic Images Using Improved IHS and PCA Mergers Based on Wavelet Decomposition. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1291–1299. [Google Scholar] [CrossRef]
- Shahdoosti, H.R. Combining the spectral PCA and spatial PCA fusion methods by an optimal filter. Inf. Fusion 2016, 27, 150–160. [Google Scholar] [CrossRef]
- Shah, V.P.; Younan, N.H.; King, R.L. An Efficient Pan-Sharpening Method via a Combined Adaptive PCA Approach and Contourlets. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1323–1335. [Google Scholar] [CrossRef]
- Li, S.; Yang, B.; Hu, J. Performance comparison of different multi-resolution transforms for image fusion. Inf. Fusion 2011, 12, 74–84. [Google Scholar] [CrossRef]
- Zhou, J.; Civco, D.L.; Silander, J.A. A wavelet transform method to merge Landsat TM and SPOT panchromatic data. Int. J. Remote Sens. 1998, 19, 743–757. [Google Scholar] [CrossRef]
- Nunez, J.; Otazu, X.; Fors, O.; Prades, A.; Pala, V.; Arbiol, R. Multiresolution-based image fusion with additive wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1204–1211. [Google Scholar] [CrossRef] [Green Version]
- Upla, K.P.; Joshi, M.V.; Gajjar, P.P. An Edge Preserving Multiresolution Fusion: Use of Contourlet Transform and MRF Prior. IEEE Trans. Geosci. Remote Sens. 2014, 53, 3210–3220. [Google Scholar] [CrossRef]
- Chang, X.; Jiao, L.; Liu, F.; Xin, F. Multicontourlet-Based Adaptive Fusion of Infrared and Visible Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2010, 7, 549–553. [Google Scholar] [CrossRef]
- da Cunha, A.L.; Zhou, J.; Do, M.N. The Nonsubsampled Contourlet Transform: Theory, Design, and Applications. IEEE Trans. Image Process. 2006, 15, 3089–3101. [Google Scholar] [CrossRef] [Green Version]
- Liu, J.G. Smoothing Filter-based Intensity Modulation: A spectral preserve image fusion technique for improving spatial details. Int. J. Remote Sens. 2000, 21, 3461–3472. [Google Scholar] [CrossRef]
- Zhong, S.; Zhang, Y.; Chen, Y.; Wu, D. Combining Component Substitution and Multiresolution Analysis: A Novel Generalized BDSD Pansharpening Algorithm. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 2867–2875. [Google Scholar] [CrossRef]
- Cheng, M.; Wang, C.; Li, J. Sparse Representation Based Pansharpening Using Trained Dictionary. IEEE Geosci. Remote Sens. Lett. 2014, 11, 293–297. [Google Scholar] [CrossRef]
- Cheng, J.; Liu, H.; Liu, T.; Wang, F.; Li, H. Remote sensing image fusion via wavelet transform and sparse representation. ISPRS J. Photogramm. Remote Sens. 2015, 104, 158–173. [Google Scholar] [CrossRef]
- Vicinanza, M.; Restaino, R.; Vivone, G.; Dalla Mura, M.; Chanussot, J. A Pansharpening Method Based on the Sparse Representation of Injected Details. IEEE Geosci. Remote Sens. Lett. 2015, 12, 180–184. [Google Scholar] [CrossRef]
- Burt, P.J.; Adelson, E.H. The Laplacian Pyramid as a Compact Image Code. Read Comput. Vis. 1987, 31, 671–679. [Google Scholar]
- Vivone, G.; Marano, S.; Chanussot, J. Pansharpening: Context-Based Generalized Laplacian Pyramids by Robust Regression. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6152–6167. [Google Scholar] [CrossRef]
- Ghahremani, M.; Ghassemian, H. Remote-sensing image fusion based on curvelets and ICA. Int. J. Remote Sens. 2015, 36, 4131–4143. [Google Scholar] [CrossRef]
- Ji, X.; Zhang, G. Image fusion method of SAR and infrared image based on Curvelet transform with adaptive weighting. Multimed. Tools Appl. 2015, 76, 17633–17649. [Google Scholar] [CrossRef]
- Wei, Q.; Dobigeon, N.; Tourneret, J.Y. Bayesian Fusion of Multi-Band Images. IEEE J. Sel. Top. Signal Process. 2015, 9, 1117–1127. [Google Scholar] [CrossRef] [Green Version]
- Guo, M.; Zhang, H.; Li, J.; Zhang, L.; Shen, H. An Online Coupled Dictionary Learning Approach for Remote Sensing Image Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1284–1294. [Google Scholar] [CrossRef]
- Huang, B.; Song, H.; Cui, H.; Peng, J. Spatial and Spectral Image Fusion Using Sparse Matrix Factorization. IEEE Trans. Geosci. Remote Sens. 2013, 52, 1693–1704. [Google Scholar] [CrossRef]
- Sun, L.; He, C.; Zheng, Y.; Tang, S. SLRL4D: Joint Restoration of Subspace Low-Rank Learning and Non-Local 4-D Transform Filtering for Hyperspectral Image. Remote Sens. 2020, 12, 2979. [Google Scholar] [CrossRef]
- Joshi, M.V.; Chaudhuri, S.; Panuganti, R. A learning-based method for image super-resolution from zoomed observations. IEEE Trans. Syst. Man Cybern. Part B 2005, 35, 527–537. [Google Scholar] [CrossRef]
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–307. [Google Scholar] [CrossRef] [Green Version]
- Dong, W.; Fu, F.; Shi, G.; Cao, X.; Wu, J.; Li, G.; Li, X. Hyperspectral Image Super-Resolution via Non-Negative Structured Sparse Representation. IEEE Trans. Image Process. 2016, 25, 2337–2352. [Google Scholar] [CrossRef]
- Huang, W.; Xiao, L.; Liu, H.; Wei, Z.; Tang, S. A New Pan-Sharpening Method With Deep Neural Networks. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1037–1041. [Google Scholar] [CrossRef]
- Giuseppe, M.; Davide, C.; Luisa, V.; Giuseppe, S. Pansharpening by Convolutional Neural Networks. Remote Sens. 2016, 8, 594. [Google Scholar]
- Azarang, A.; Ghassemian, H. A new pansharpening method using multi resolution analysis framework and deep neural networks. In Proceedings of the International Conference on Pattern Recognition and Image Analysis, Shahrekord, Iran, 19–20 April 2017; pp. 1–5. [Google Scholar]
- Rao, Y.; He, L.; Zhu, J. A residual convolutional neural network for pan-sharpening. In Proceedings of the International Workshop on Remote Sensing with Intelligent Processing (RSIP), Shanghai, China, 19–21 May 2017; pp. 1–4. [Google Scholar]
- Wei, Y.; Yuan, Q.; Shen, H.; Zhang, L. Boosting the Accuracy of Multispectral Image Pansharpening by Learning a Deep Residual Network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1795–1799. [Google Scholar] [CrossRef] [Green Version]
- Yang, J.; Fu, X.; Hu, Y.; Huang, Y.; Ding, X.; Paisley, J. PanNet: A Deep Network Architecture for Pan-Sharpening. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 1753–1761. [Google Scholar]
- Yuan, Q.; Wei, Y.; Meng, X.; Shen, H.; Zhang, L. A Multi-Scale and Multi-Depth Convolutional Neural Network for Remote Sensing Imagery Pan-Sharpening. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 978–989. [Google Scholar] [CrossRef] [Green Version]
- Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O. Multispectral and Hyperspectral Image Fusion Using a 3-D-Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 639–643. [Google Scholar] [CrossRef]
- Shao, Z.; Cai, J. Remote Sensing Image Fusion With Deep Convolutional Neural Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1656–1669. [Google Scholar] [CrossRef]
- Scarpa, G.; Vitale, S.; Cozzolino, D. Target-Adaptive CNN-Based Pansharpening. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5443–5457. [Google Scholar] [CrossRef] [Green Version]
- Liu, X.; Liu, Q.; Wang, Y. Remote sensing image fusion based on two-stream fusion network. Inf. Fusion 2020, 55, 1–15. [Google Scholar] [CrossRef] [Green Version]
- Liu, X.; Wang, Y.; Liu, Q. Psgan: A Generative Adversarial Network for Remote Sensing Image Pan-Sharpening. In Proceedings of the IEEE International Conference on Image Processing, Athens, Greece, 7–10 October 2018; pp. 873–877. [Google Scholar]
- Shao, Z.; Lu, Z.; Ran, M.; Fang, L.; Zhou, J.; Zhang, Y. Residual Encoder-Decoder Conditional Generative Adversarial Network for Pansharpening. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1573–1577. [Google Scholar] [CrossRef]
- Ma, J.; Yu, W.; Chen, C.; Liang, P.; Guo, X.; Jiang, J. Pan-GAN: An unsupervised pan-sharpening method for remote sensing image fusion. Inf. Fusion 2020, 62, 110–120. [Google Scholar] [CrossRef]
- Mönks, U.; Trsek, H.; Dürkop, L.; Geneiß, V.; Lohweg, V. Towards distributed intelligent sensor and information fusion. Mechatronics 2016, 34, 63–71. [Google Scholar] [CrossRef]
- Liggins, M.E.; Hall, D.L.; Llinas, J. Handbook of Multisensor Data Fusion: Theory and Practice; CRC Press: Boca Raton, FL, USA, 2008; pp. 165–176. [Google Scholar]
- Liggins, M.E.; Chong, C.Y.; Kadar, I.; Alford, M.; Vannicola, V.; Thomopoulos, S. Distributed fusion architectures and algorithms for target tracking. Proc. IEEE 1997, 85, 95–107. [Google Scholar] [CrossRef] [Green Version]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Identity Mappings in Deep Residual Networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 630–645. [Google Scholar]
- Vivone, G.; Alparone, L.; Chanussot, J.; Mura, M.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A Critical Comparison Among Pansharpening Algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2586. [Google Scholar] [CrossRef]
- Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-tailored Multiscale Fusion of High-resolution MS and Pan Imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596. [Google Scholar] [CrossRef]
- Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699. [Google Scholar]
- Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Aiazzi, B.; Baronti, S.; Selva, M.; Alparone, L. Bi-cubic interpolation for shift-free pan-sharpening. ISPRS J. Photogramm. Remote Sens. 2013, 86, 65–76. [Google Scholar] [CrossRef]
- Zhu, X.X.; Bamler, R. A Sparse Image Fusion Algorithm with Application to Pan-Sharpening. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2827–2836. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Yuhas, R.H.; Goetz, A.F.H.; Boardman, J.W. Discrimination among semi-arid landscape endmembers using the Spectral Angle Mapper (SAM) algorithm. In Proceedings of the Summaries of the Third Annual JPL Airborne Geoscience Workshop, Pasadena, CA, USA, 1–5 June 1992; pp. 147–149. [Google Scholar]
- Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84. [Google Scholar] [CrossRef]
- Alparone, L.; Aiazzi, B.; Baronti, S.; Garzelli, A.; Nencini, F.; Selva, M. Multispectral and Panchromatic Data Fusion Assessment Without Reference. Photogramm. Eng. Remote Sens. 2008, 74, 193–200. [Google Scholar] [CrossRef] [Green Version]
- Zhang, Y.; Liu, Y.; Sun, P.; Yan, H.; Zhao, X.; Zhang, L. IFCNN: A general image fusion framework based on convolutional neural network. Inf. Fusion 2020, 54, 99–118. [Google Scholar] [CrossRef]
- Qu, Y.; Baghbaderani, R.K.; Qi, H.; Kwan, C. Unsupervised Pansharpening Based on Self-Attention Mechanism. IEEE Trans. Geosci. Remote Sens. 2020, 59, 3192–3208. [Google Scholar] [CrossRef]
Satellite | Band | Wavelength (nm) | Resolution (m)
---|---|---|---
Landsat-8 | PAN | 500–680 (8—Pan) | 15
 | MS | 450–515 (2—Blue), 525–600 (3—Green), 630–680 (4—Red) | 30
Landsat-7 | PAN | 520–900 (8—Pan) | 15
 | MS | 450–515 (1—Blue), 525–605 (2—Green), 630–690 (3—Red) | 30
GF-2 | PAN | 450–900 (Pan) | 1
 | MS | 450–520 (2—Blue), 520–590 (3—Green), 630–690 (4—Red) | 4
QuickBird | PAN | 450–900 (Pan) | 0.7
 | MS | 450–520 (2—Blue), 520–600 (3—Green), 630–690 (4—Red) | 2.8
Method | Landsat-8 Dλ | Landsat-8 Ds | Landsat-8 QNR | QuickBird Dλ | QuickBird Ds | QuickBird QNR
---|---|---|---|---|---|---
EXP | 0 | 0.0907 | 0.9093 | 0 | 0.0476 | 0.9524
Brovey | 0.0570 | 0.2063 | 0.7485 | 0.0592 | 0.0583 | 0.8860
GS | 0.0531 | 0.1840 | 0.7727 | 0.0437 | 0.0621 | 0.8969
SFIM | 0.0497 | 0.0912 | 0.8636 | 0.0526 | 0.0524 | 0.8978
IFCNN | 0.2631 | 0.3316 | 0.4925 | 0.3170 | 0.4295 | 0.3897
PNN | 0.0384 | 0.0415 | 0.9217 | 0.0352 | 0.0503 | 0.9163
DRPNN | 0.0254 | 0.0377 | 0.9379 | 0.0225 | 0.0927 | 0.8869
PanNet | 0.0586 | 0.1398 | 0.8098 | 0.2579 | 0.0482 | 0.7063
ResTFNet | 0.0239 | 0.0349 | 0.9420 | 0.0342 | 0.0497 | 0.9178
RDFNet | 0.0216 | 0.0314 | 0.9477 | 0.0349 | 0.0461 | 0.9206
Method | Landsat-7 Dλ | Landsat-7 Ds | Landsat-7 QNR | GF-2 Dλ | GF-2 Ds | GF-2 QNR
---|---|---|---|---|---|---
EXP | 0 | 0.0729 | 0.9271 | 0 | 0.0580 | 0.9420
Brovey | 0.0792 | 0.2691 | 0.6730 | 0.0629 | 0.2014 | 0.7484
GS | 0.0553 | 0.1527 | 0.8004 | 0.0493 | 0.1387 | 0.8188
SFIM | 0.1887 | 0.3092 | 0.5604 | 0.1367 | 0.2892 | 0.6136
IFCNN | 0.2350 | 0.3159 | 0.5233 | 0.2107 | 0.3676 | 0.4992
PNN | 0.0361 | 0.0791 | 0.8877 | 0.0695 | 0.0915 | 0.8454
DRPNN | 0.0199 | 0.0683 | 0.9132 | 0.0319 | 0.0672 | 0.9030
PanNet | 0.2297 | 0.0721 | 0.7148 | 0.0617 | 0.0881 | 0.8556
ResTFNet | 0.0189 | 0.0491 | 0.9329 | 0.0297 | 0.0524 | 0.9195
RDFNet | 0.0207 | 0.0367 | 0.9434 | 0.0112 | 0.0303 | 0.9588
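The QNR column in the tables above is the standard combination of the spectral distortion Dλ and the spatial distortion Ds. With the usual exponents α = β = 1, QNR = (1 − Dλ)(1 − Ds), which reproduces the tabulated values:

```python
def qnr(d_lambda, d_s, alpha=1.0, beta=1.0):
    """Quality with no reference: combines spectral distortion (d_lambda)
    and spatial distortion (d_s); 1 is ideal, 0 is worst."""
    return (1.0 - d_lambda) ** alpha * (1.0 - d_s) ** beta

# Reproduce a few table entries:
print(round(qnr(0.0, 0.0907), 4))     # EXP, Landsat-8    -> 0.9093
print(round(qnr(0.0216, 0.0314), 4))  # RDFNet, Landsat-8 -> 0.9477
print(round(qnr(0.0112, 0.0303), 4))  # RDFNet, GF-2      -> 0.9588
```

EXP (plain interpolation) always has Dλ = 0 because it introduces no spectral change, yet its QNR is bounded by its spatial distortion; RDFNet scores highest by keeping both distortions small simultaneously.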
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wu, Y.; Huang, M.; Li, Y.; Feng, S.; Wu, D. A Distributed Fusion Framework of Multispectral and Panchromatic Images Based on Residual Network. Remote Sens. 2021, 13, 2556. https://doi.org/10.3390/rs13132556