Pan-Sharpening Network of Multi-Spectral Remote Sensing Images Using Two-Stream Attention Feature Extractor and Multi-Detail Injection (TAMINet)
Abstract
1. Introduction
- Multi-spectral images are 3D data cubes from which ordinary CNNs struggle to extract high-fidelity detail. An attention mechanism that captures orientation- and position-aware information helps the model locate and identify targets of interest more accurately.
- Traditional pan-sharpening methods preserve spatial-spectral feature information with high fidelity. DL-based approaches instead rely on large-scale training to extract spectral information from LRMS images and spatial details from PAN images; once trained, a pan-sharpened image can be predicted directly through the learned nonlinear mapping. Combining the traditional and DL methods is therefore a promising idea.
- This study integrates a coordinate attention block into the feature extraction module. By encoding channel relationships and long-range dependencies with precise positional information, the two-stream feature extractor obtains modality-specific features from the PAN and LRMS images (a minimal sketch of such a block follows this list).
- Our approach draws on the CS and MRA frameworks. Inspired by these traditional methods, a high-pass filter extracts spatial details, and the spectral features of the LRMS image are merged (injected) with the high-resolution spatial details of the PAN image several times, mitigating the loss of detail during fusion (see the injection sketch after this list).
- We combine three simple optimization terms to constrain the spectral fidelity and spatial accuracy of the pan-sharpening result. The first two terms constrain the differences between the predicted HRMS image and the LRMS and PAN images, respectively, encouraging a similar structural distribution; the third enforces spatial and spectral consistency between the HRMS and ground-truth images (a sketch of this objective also follows).
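The coordinate attention block referenced above can be summarized in code. The following is a minimal PyTorch sketch following Hou et al. (CVPR 2021), not the authors' released implementation; the channel counts and reduction ratio are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Direction-aware channel attention: pools along H and W separately,
    then builds per-row and per-column attention maps."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        x_h = self.pool_h(x)                         # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)     # (B, C, W, 1)
        y = torch.cat([x_h, x_w], dim=2)             # (B, mid, H+W, 1) after conv
        y = self.act(self.bn1(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * a_h * a_w  # position-aware reweighting of the input
```

In a two-stream extractor, one such block would sit in each branch (PAN and LRMS) so that each modality's features are reweighted with its own positional statistics.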
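The CS/MRA-inspired detail injection reduces to a high-pass residual added to the upsampled LRMS bands. A hedged sketch: the box low-pass filter, its kernel size, and the injection gain are illustrative choices, not the paper's exact operators:

```python
import torch
import torch.nn.functional as F

def high_pass(pan: torch.Tensor, kernel_size: int = 5) -> torch.Tensor:
    """PAN minus its box-filtered (low-pass) version = spatial detail map."""
    pad = kernel_size // 2
    low = F.avg_pool2d(pan, kernel_size, stride=1, padding=pad,
                       count_include_pad=False)
    return pan - low

def inject_details(fused: torch.Tensor, pan: torch.Tensor,
                   gain: float = 1.0) -> torch.Tensor:
    """Add the PAN detail map to every band (broadcast over channels)."""
    return fused + gain * high_pass(pan)   # (B, C, H, W) + (B, 1, H, W)

# Multi-detail injection: re-inject after each fusion stage so details lost
# inside the network are restored (stage1/stage2 are hypothetical modules):
#   x = stage1(lrms_up, pan); x = inject_details(x, pan)
#   x = stage2(x);            x = inject_details(x, pan)
```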
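The three-term objective can be sketched in the same style. The L1 distances, the area-downsampling degradation operator, the band-mean intensity proxy for PAN, and the weights w1–w3 are assumptions for illustration; the paper's exact operators and weights may differ:

```python
import torch
import torch.nn.functional as F

def spectral_term(hrms: torch.Tensor, lrms: torch.Tensor, scale: int = 4):
    """Degrade the predicted HRMS and compare with the observed LRMS."""
    hrms_down = F.interpolate(hrms, scale_factor=1 / scale, mode="area")
    return F.l1_loss(hrms_down, lrms)

def spatial_term(hrms: torch.Tensor, pan: torch.Tensor):
    """Compare the band-mean intensity of HRMS with PAN to align structure."""
    intensity = hrms.mean(dim=1, keepdim=True)
    return F.l1_loss(intensity, pan)

def total_loss(hrms, lrms, pan, gt, w1: float = 1.0, w2: float = 1.0,
               w3: float = 1.0):
    """Spectral consistency + spatial consistency + reference supervision."""
    return (w1 * spectral_term(hrms, lrms)
            + w2 * spatial_term(hrms, pan)
            + w3 * F.l1_loss(hrms, gt))
```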
2. Related Work
2.1. Pan-Sharpening
2.2. Coordinate Attention
3. Methods
3.1. Overall Network Architecture
3.2. Loss Function
4. Results
4.1. Experiment Settings
4.1.1. Datasets
4.1.2. Comparison Method and Evaluation Index
4.1.3. Optimize the Environment and Details
4.2. Comparative Experiment
4.2.1. IKONOS Experiment Results
4.2.2. QuickBird Experiment Results
4.2.3. WorldView-2 Experimental Results
4.3. Ablation Experiments
4.3.1. Selection of the Attention Mechanism
4.3.2. Coordinate Attention and Detail Injection Modules
4.3.3. Detail Injection Module
4.3.4. Weight of the Loss Function
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhang, B.; Wu, D.; Zhang, L.; Li, J.Q. Application of Hyperspectral Remote Sensing for Environment Monitoring in Mining Areas. Environ. Earth Sci. 2012, 65, 3.
- Zhang, H.; Xu, H.; Tian, X.; Jiang, J.; Ma, J. Image Fusion Meets Deep Learning: A Survey and Perspective. Inf. Fusion 2021, 76, 323–336.
- Jones, E.G.; Wong, S.; Milton, A.; Sclauzero, J.; Whittenbury, H.; McDonnell, M.D. The Impact of Pan-Sharpening and Spectral Resolution on Vineyard Segmentation through Machine Learning. Remote Sens. 2020, 12, 934.
- Gao, J.; Li, J.; Su, X.; Jiang, M.; Yuan, Q. Deep Image Interpolation: A Unified Unsupervised Framework for Pansharpening. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), New Orleans, LA, USA, 19–20 June 2022; pp. 609–618.
- Chavez, P.S., Jr.; Kwarteng, A.Y. Extracting Spectral Contrast in Landsat Thematic Mapper Image Data Using Selective Principal Component Analysis. Photogramm. Eng. Remote Sens. 1989, 55, 339–348.
- Carper, W.J.; Lillesand, T.M.; Kiefer, R.W. The Use of Intensity-Hue-Saturation Transformations for Merging SPOT Panchromatic and Multispectral Image Data. Photogramm. Eng. Remote Sens. 1990, 56, 459–467.
- Laben, C.A.; Brower, B.V. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent No. 6,011,875, 4 January 2000.
- Aiazzi, B.; Baronti, S.; Selva, M. Improving Component Substitution Pansharpening Through Multivariate Regression of MS + Pan Data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239.
- Garzelli, A.; Nencini, F.; Capobianco, L. Optimal MMSE Pan Sharpening of Very High Resolution Multispectral Images. IEEE Trans. Geosci. Remote Sens. 2008, 46, 228–236.
- Haydn, R.; Dalke, G.W.; Henkel, J.; Bare, J.E. Application of the IHS Color Transform to the Processing of Multisensor Data and Image Enhancement. In Proceedings of the International Symposium on Remote Sensing of Environment, First Thematic Conference: Remote Sensing of Arid and Semi-Arid Lands, Cairo, Egypt, 19–25 January 1982.
- Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A. Context-Driven Fusion of High Spatial and Spectral Resolution Images Based on Oversampled Multiresolution Analysis. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2300–2312.
- Restaino, R.; Vivone, G.; Addesso, P.; Chanussot, J. A Pansharpening Approach Based on Multiple Linear Regression Estimation of Injection Coefficients. IEEE Geosci. Remote Sens. Lett. 2020, 17, 102–106.
- Liu, J.G. Smoothing Filter-Based Intensity Modulation: A Spectral Preserve Image Fusion Technique for Improving Spatial Details. Int. J. Remote Sens. 2000, 21, 3461–3472.
- Otazu, X.; Gonzalez-Audicana, M.; Fors, O.; Nunez, J. Introduction of Sensor Spectral Response into Image Fusion Methods. Application to Wavelet-Based Methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2376–2385.
- Shensa, M.J. The Discrete Wavelet Transform: Wedding the À Trous and Mallat Algorithms. IEEE Trans. Signal Process. 1992, 40, 2464–2482.
- Vivone, G.; Marano, S.; Chanussot, J. Pansharpening: Context-Based Generalized Laplacian Pyramids by Robust Regression. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6152–6167.
- Vivone, G.; Restaino, R.; Chanussot, J. Full Scale Regression-Based Injection Coefficients for Panchromatic Sharpening. IEEE Trans. Image Process. 2018, 27, 3418–3431.
- Li, S.; Yin, H.; Fang, L. Remote Sensing Image Fusion via Sparse Representations Over Learned Dictionaries. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4779–4789.
- Zhang, Y.; Duijster, A.; Scheunders, P. A Bayesian Restoration Approach for Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3453–3462.
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–307.
- Masi, G.; Cozzolino, D.; Verdoliva, L.; Scarpa, G. Pansharpening by Convolutional Neural Networks. Remote Sens. 2016, 8, 594.
- Yang, J.; Fu, X.; Hu, Y.; Huang, Y.; Ding, X.; Paisley, J. PanNet: A Deep Network Architecture for Pan-Sharpening. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 5449–5457.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Wei, Y.; Yuan, Q.; Shen, H.; Zhang, L. Boosting the Accuracy of Multispectral Image Pansharpening by Learning a Deep Residual Network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1795–1799.
- Yuan, Q.; Wei, Y.; Meng, X.; Shen, H.; Zhang, L. A Multiscale and Multidepth Convolutional Neural Network for Remote Sensing Imagery Pan-Sharpening. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 978–989.
- Jin, C.; Deng, L.-J.; Huang, T.-Z.; Vivone, G. Laplacian Pyramid Networks: A New Approach for Multispectral Pansharpening. Inf. Fusion 2022, 78, 158–170.
- Cai, J.; Huang, B. Super-Resolution-Guided Progressive Pansharpening Based on a Deep Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5206–5220.
- Shao, Z.; Lu, Z.; Ran, M.; Fang, L.; Zhou, J.; Zhang, Y. Residual Encoder–Decoder Conditional Generative Adversarial Network for Pansharpening. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1573–1577.
- Liu, X.; Wang, Y.; Liu, Q. PSGAN: A Generative Adversarial Network for Remote Sensing Image Pan-Sharpening. IEEE Trans. Geosci. Remote Sens. 2020, 59, 10227–10242.
- Ma, J.; Yu, W.; Chen, C.; Liang, P.; Guo, X.; Jiang, J. Pan-GAN: An Unsupervised Pan-Sharpening Method for Remote Sensing Image Fusion. Inf. Fusion 2020, 62, 110–120.
- Liu, X.; Liu, Q.; Wang, Y. Remote Sensing Image Fusion Based on Two-Stream Fusion Network. Inf. Fusion 2020, 55, 1–15.
- Wu, Z.C.; Huang, T.Z.; Deng, L.J.; Hu, J.F.; Vivone, G. VO+Net: An Adaptive Approach Using Variational Optimization and Deep Learning for Panchromatic Sharpening. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16.
- Liu, L.; Wang, J.; Zhang, E.; Li, B.; Zhu, X.; Zhang, Y.; Peng, J. Shallow–Deep Convolutional Network and Spectral-Discrimination-Based Detail Injection for Multispectral Imagery Pan-Sharpening. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1772–1783.
- He, L.; Rao, Y.; Li, J.; Chanussot, J.; Plaza, A.; Zhu, J.; Li, B. Pansharpening via Detail Injection Based Convolutional Neural Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1188–1204.
- Benzenati, T.; Kessentini, Y.; Kallel, A.; Hallabia, H. Generalized Laplacian Pyramid Pan-Sharpening Gain Injection Prediction Based on CNN. IEEE Geosci. Remote Sens. Lett. 2020, 17, 651–655.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
- Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. arXiv 2018, arXiv:1807.06521.
- Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13713–13722.
- Su, X.; Li, J.; Hua, Z. Transformer-Based Regression Network for Pansharpening Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5407423.
- Nie, Z.; Chen, L.; Jeon, S.; Yang, X. Spectral-Spatial Interaction Network for Multispectral Image and Panchromatic Image Fusion. Remote Sens. 2022, 14, 4100.
- Ni, J.; Shao, Z.; Zhang, Z.; Hou, M.; Zhou, J.; Fang, L.; Zhang, Y. LDP-Net: An Unsupervised Pansharpening Network Based on Learnable Degradation Processes. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 5468–5479.
- Meng, X.; Xiong, Y.; Shao, F.; Shen, H.; Sun, W.; Yang, G.; Yuan, Q.; Fu, R.; Zhang, H. A Large-Scale Benchmark Data Set for Evaluating Pansharpening Performance: Overview and Implementation. IEEE Geosci. Remote Sens. Mag. 2021, 9, 18–52.
- He, X.; Condat, L.; Bioucas-Dias, J.M.; Chanussot, J.; Xia, J. A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors. IEEE Trans. Image Process. 2014, 23, 4160–4174.
- Jiang, Y.; Ding, X.; Zeng, D.; Huang, Y.; Paisley, J. Pan-Sharpening With a Hyper-Laplacian Penalty. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 540–548.
- Choi, J.; Yu, K.; Kim, Y. A New Adaptive Component-Substitution-Based Satellite Image Fusion by Using Partial Replacement. IEEE Trans. Geosci. Remote Sens. 2011, 49, 295–309.
- Ciotola, M.; Poggi, G.; Scarpa, G. Unsupervised Deep Learning-Based Pansharpening With Jointly Enhanced Spectral and Spatial Fidelity. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5405417.
- Yuhas, R.H.; Goetz, A.F.; Boardman, J.W. Discrimination among Semi-Arid Landscape Endmembers Using the Spectral Angle Mapper (SAM) Algorithm. In Summaries of the Third Annual JPL Airborne Geoscience Workshop, Pasadena, CA, USA, 1–5 June 1992; Volume 1.
- Wald, L. Data Fusion: Definitions and Architectures: Fusion of Images of Different Spatial Resolutions; Presses des MINES: Paris, France, 2002.
- Alparone, L.; Aiazzi, B.; Baronti, S.; Garzelli, A.; Nencini, F.; Selva, M. Multispectral and Panchromatic Data Fusion Assessment Without Reference. Photogramm. Eng. Remote Sens. 2008, 74, 193–200.
- Wang, Z.; Bovik, A.C. A Universal Image Quality Index. IEEE Signal Process. Lett. 2002, 9, 81–84.
- Zhou, J.; Civco, D.L.; Silander, J.A. A Wavelet Transform Method to Merge Landsat TM and SPOT Panchromatic Data. Int. J. Remote Sens. 1998, 19, 743–757.
Satellite | Image Type | Spatial Resolution | Bands | Size | Total | Training | Validation | Testing |
---|---|---|---|---|---|---|---|---|
IKONOS | PAN | 1 m | 1 | 1024 × 1024 | 200 | 112 | 28 | 60
IKONOS | LRMS | 4 m | 4 | 256 × 256 × 4 | | | |
QuickBird | PAN | 0.7 m | 1 | 1024 × 1024 | 721 | 403 | 403 | 217
QuickBird | LRMS | 2.8 m | 4 | 256 × 256 × 4 | | | |
WorldView-2 | PAN | 0.46 m | 1 | 1024 × 1024 | 1173 | 657 | 657 | 352
WorldView-2 | LRMS | 1.84 m | 4 | 256 × 256 × 4 | | | |
Method | SAM ↓ | ERGAS ↓ | Q4 ↑ | UIQI ↑ | SCC ↑ | Dλ ↓ | Ds ↓ | QNR ↑ |
---|---|---|---|---|---|---|---|---|
GS | 2.6098 | 2.0241 | 0.7586 | 0.7753 | 0.9078 | 0.1028 | 0.1911 | 0.7319 |
IHS | 2.8214 | 2.1569 | 0.7202 | 0.7435 | 0.8838 | 0.1721 | 0.2441 | 0.6352 |
Brovey | 2.7520 | 2.1136 | 0.7231 | 0.7469 | 0.8905 | 0.1516 | 0.2276 | 0.6629 |
PRACS | 2.7562 | 2.1330 | 0.8029 | 0.8009 | 0.8901 | 0.1257 | 0.1619 | 0.7332 |
PNN | 2.1375 | 1.5205 | 0.8349 | 0.8456 | 0.9300 | 0.0856 | 0.1057 | 0.8251 |
PanNet | 2.4550 | 1.8111 | 0.7973 | 0.8075 | 0.9061 | 0.1343 | 0.1249 | 0.7605 |
TFNet | 2.3028 | 1.6740 | 0.8279 | 0.8397 | 0.9278 | 0.0926 | 0.0593 | 0.8571 |
MSDCNN | 2.0119 | 1.4374 | 0.8502 | 0.8571 | 0.9387 | 0.0950 | 0.1071 | 0.8177 |
SRPPNN | 1.7580 | 1.2817 | 0.8695 | 0.8757 | 0.9489 | 0.0816 | 0.0983 | 0.8358 |
λ-PNN | 2.0174 | 1.4455 | 0.8551 | 0.8613 | 0.9388 | 0.0819 | 0.0889 | 0.8382
TAMINet | 1.6407 | 1.3159 | 0.8445 | 0.8889 | 0.9568 | 0.0795 | 0.1007 | 0.8364 |
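For context, the two reference-based error metrics in these tables, SAM and ERGAS, follow the standard definitions cited in the references (Yuhas et al.; Wald). A minimal NumPy sketch; the function names are illustrative:

```python
import numpy as np

def sam_degrees(pred: np.ndarray, ref: np.ndarray) -> float:
    """Spectral Angle Mapper: mean angle (degrees) between the spectral
    vectors of pred and ref, both shaped (H, W, B)."""
    dot = (pred * ref).sum(axis=-1)
    norm = np.linalg.norm(pred, axis=-1) * np.linalg.norm(ref, axis=-1)
    cos = np.clip(dot / (norm + 1e-12), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())

def ergas(pred: np.ndarray, ref: np.ndarray, ratio: float = 4.0) -> float:
    """Relative dimensionless global error in synthesis; ratio is the
    PAN/MS resolution ratio (4 for the sensors used here)."""
    band_rmse = np.sqrt(((pred - ref) ** 2).mean(axis=(0, 1)))
    band_mean = ref.mean(axis=(0, 1))
    return float(100.0 / ratio *
                 np.sqrt(((band_rmse / (band_mean + 1e-12)) ** 2).mean()))
```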
Method | SAM ↓ | ERGAS ↓ | Q4 ↑ | UIQI ↑ | SCC ↑ | Dλ ↓ | Ds ↓ | QNR ↑ |
---|---|---|---|---|---|---|---|---|
GS | 2.9600 | 2.2395 | 0.7152 | 0.7311 | 0.8836 | 0.0337 | 0.0701 | 0.8995 |
IHS | 3.3501 | 2.4878 | 0.6402 | 0.6747 | 0.8122 | 0.1019 | 0.1222 | 0.7894 |
Brovey | 3.2152 | 2.3940 | 0.6554 | 0.6846 | 0.8330 | 0.0843 | 0.1137 | 0.8125 |
PRACS | 3.7735 | 2.9583 | 0.7728 | 0.7668 | 0.8571 | 0.0478 | 0.0678 | 0.8885 |
PNN | 1.7489 | 1.3446 | 0.8674 | 0.8723 | 0.9393 | 0.0484 | 0.0440 | 0.9108 |
PanNet | 1.7779 | 1.3766 | 0.8684 | 0.8706 | 0.9401 | 0.0360 | 0.0453 | 0.9210 |
TFNet | 1.5284 | 1.1881 | 0.8882 | 0.8917 | 0.9555 | 0.0701 | 0.0463 | 0.8884 |
MSDCNN | 1.7528 | 1.3024 | 0.8786 | 0.8821 | 0.9451 | 0.0588 | 0.0504 | 0.8948 |
SRPPNN | 1.3692 | 1.0353 | 0.9009 | 0.9025 | 0.9703 | 0.0378 | 0.0390 | 0.9252 |
λ-PNN | 1.6612 | 1.3022 | 0.8766 | 0.8834 | 0.9436 | 0.0217 | 0.0466 | 0.8840
TAMINet | 1.2935 | 0.9819 | 0.9094 | 0.9122 | 0.9737 | 0.0468 | 0.0306 | 0.9248 |
Method | SAM ↓ | ERGAS ↓ | Q4 ↑ | UIQI ↑ | SCC ↑ | Dλ ↓ | Ds ↓ | QNR ↑ |
---|---|---|---|---|---|---|---|---|
GS | 2.6649 | 2.2789 | 0.8010 | 0.8152 | 0.8747 | 0.0214 | 0.0767 | 0.8746 |
IHS | 2.7153 | 2.3693 | 0.7442 | 0.7856 | 0.8744 | 0.1019 | 0.0974 | 0.8153 |
Brovey | 2.5360 | 2.3218 | 0.7696 | 0.8007 | 0.8782 | 0.0832 | 0.0938 | 0.8333 |
PRACS | 2.7545 | 2.2170 | 0.8392 | 0.8265 | 0.8530 | 0.0569 | 0.1035 | 0.8466 |
PNN | 1.7284 | 1.3545 | 0.9014 | 0.9037 | 0.9367 | 0.0159 | 0.0590 | 0.9267 |
PanNet | 1.7027 | 1.3581 | 0.9045 | 0.9075 | 0.9412 | 0.0233 | 0.0652 | 0.9137 |
TFNet | 1.3475 | 1.0733 | 0.9253 | 0.9260 | 0.9590 | 0.0337 | 0.0615 | 0.9088 |
MSDCNN | 1.6110 | 1.2770 | 0.9099 | 0.9105 | 0.9436 | 0.0205 | 0.0702 | 0.9117 |
SRPPNN | 1.3991 | 1.1338 | 0.9185 | 0.9226 | 0.9560 | 0.0172 | 0.0635 | 0.9215 |
λ-PNN | 1.6808 | 1.3525 | 0.8976 | 0.9089 | 0.9388 | 0.0129 | 0.0617 | 0.9271
TAMINet | 1.3110 | 1.0459 | 0.9280 | 0.9286 | 0.9608 | 0.0281 | 0.0568 | 0.9191 |
Method | FLOPs | Time (s) | #Params |
---|---|---|---|
PNN | 5263.85 | 0.0078 | 0.08 M |
PanNet | 5135.93 | 0.0117 | 0.08 M |
TFNet | 30,749.49 | 0.0268 | 2.36 M |
MSDCNN | 12,423.53 | 0.0238 | 0.19 M |
SRPPNN | 8823.77 | 0.0189 | 1.83 M |
λ-PNN | 15,022.17 | 0.0320 | 0.23 M
TAMINet | 30,749.49 | 0.0268 | 2.36 M
Dataset | Method | SAM ↓ | ERGAS ↓ | Q4 ↑ | UIQI ↑ | SCC ↑ | Dλ ↓ | Ds ↓ | QNR ↑ |
---|---|---|---|---|---|---|---|---|---|
IKONOS | SE | 1.9694 | 1.9694 | 0.8597 | 0.8630 | 0.9404 | 0.1038 | 0.1305 | 0.7899
IKONOS | CBAM | 2.0676 | 1.4612 | 0.8497 | 0.8566 | 0.9379 | 0.1150 | 0.1477 | 0.7636
IKONOS | CA | 1.6407 | 1.3159 | 0.8445 | 0.8889 | 0.9568 | 0.0795 | 0.1007 | 0.8364
QuickBird | SE | 1.2999 | 0.9950 | 0.9058 | 0.9098 | 0.9735 | 0.0457 | 0.0339 | 0.9226
QuickBird | CBAM | 1.3434 | 1.0448 | 0.9018 | 0.9081 | 0.9703 | 0.0521 | 0.0328 | 0.9225
QuickBird | CA | 1.2690 | 0.9682 | 0.9098 | 0.9130 | 0.9745 | 0.0438 | 0.0302 | 0.9279
WorldView-2 | SE | 1.3145 | 1.0454 | 0.9281 | 0.9285 | 0.9604 | 0.0273 | 0.0626 | 0.9146
WorldView-2 | CBAM | 1.3145 | 1.0453 | 0.9272 | 0.9287 | 0.9605 | 0.0272 | 0.0619 | 0.9144
WorldView-2 | CA | 1.3110 | 1.0459 | 0.9284 | 0.9288 | 0.9609 | 0.0281 | 0.0568 | 0.9191
Dataset | Method | SAM ↓ | ERGAS ↓ | Q4 ↑ | UIQI ↑ | SCC ↑ | Dλ ↓ | Ds ↓ | QNR ↑ |
---|---|---|---|---|---|---|---|---|---|
IKONOS | baseline | 2.3028 | 1.6740 | 0.8279 | 0.8397 | 0.9278 | 0.0926 | 0.0593 | 0.8571
IKONOS | CA | 2.2129 | 1.6537 | 0.8323 | 0.8380 | 0.9273 | 0.1009 | 0.0650 | 0.8449
IKONOS | DI | 1.9497 | 1.3847 | 0.8601 | 0.8650 | 0.9417 | 0.0952 | 0.1238 | 0.8020
IKONOS | ALL | 1.6407 | 1.3159 | 0.8445 | 0.8889 | 0.9568 | 0.0795 | 0.1007 | 0.8364
QuickBird | baseline | 1.5284 | 1.1881 | 0.8882 | 0.8917 | 0.9555 | 0.0701 | 0.0463 | 0.8884
QuickBird | CA | 1.5313 | 1.1972 | 0.8887 | 0.8919 | 0.9523 | 0.0649 | 0.0457 | 0.8934
QuickBird | DI | 1.2935 | 0.9819 | 0.9094 | 0.9122 | 0.9737 | 0.0468 | 0.0306 | 0.9248
QuickBird | ALL | 1.2690 | 0.9682 | 0.9098 | 0.9130 | 0.9745 | 0.0438 | 0.0302 | 0.9279
WorldView-2 | baseline | 1.3475 | 1.0733 | 0.9253 | 0.9260 | 0.9590 | 0.0337 | 0.0615 | 0.9088
WorldView-2 | CA | 1.3358 | 1.0650 | 0.9262 | 0.9271 | 0.9591 | 0.0279 | 0.0562 | 0.9185
WorldView-2 | DI | 1.3123 | 1.0443 | 0.9280 | 0.9286 | 0.9608 | 0.0262 | 0.0634 | 0.9142
WorldView-2 | ALL | 1.3110 | 1.0459 | 0.9284 | 0.9288 | 0.9609 | 0.0281 | 0.0568 | 0.9191
Dataset | Method | SAM ↓ | ERGAS ↓ | Q4 ↑ | UIQI ↑ | SCC ↑ | Dλ ↓ | Ds ↓ | QNR ↑ |
---|---|---|---|---|---|---|---|---|---|
IKONOS | DI-high-pass | 2.0993 | 1.4712 | 0.8446 | 0.8534 | 0.9359 | 0.1260 | 0.1471 | 0.7583
IKONOS | DI-up-LRMS | 2.0320 | 1.4589 | 0.8498 | 0.8567 | 0.9376 | 0.0998 | 0.1098 | 0.8092
IKONOS | DI3-all | 1.9641 | 1.3946 | 0.8541 | 0.8653 | 0.9423 | 0.0910 | 0.1211 | 0.8085
QuickBird | DI-high-pass | 1.3797 | 1.0442 | 0.9013 | 0.9042 | 0.9705 | 0.0604 | 0.0335 | 0.9091
QuickBird | DI-up-LRMS | 1.2733 | 0.9820 | 0.9071 | 0.9110 | 0.9739 | 0.0475 | 0.0283 | 0.9260
QuickBird | DI3-all | 1.2935 | 0.9819 | 0.9094 | 0.9122 | 0.9737 | 0.0468 | 0.0306 | 0.9248
WorldView-2 | DI-high-pass | 1.3430 | 1.0765 | 0.9264 | 0.9270 | 0.9590 | 0.0301 | 0.0602 | 0.9127
WorldView-2 | DI-up-LRMS | 1.3053 | 1.3053 | 0.9281 | 0.9292 | 0.9602 | 0.0223 | 0.0568 | 0.9250
WorldView-2 | DI3-all | 1.3110 | 1.0459 | 0.9280 | 0.9286 | 0.9608 | 0.0281 | 0.0545 | 0.9191