A Dual-Resolution Network Based on Orthogonal Components for Building Extraction from VHR PolSAR Images
Highlights
- A systematic analysis reveals that resolution enhancement alters building scattering mechanisms, and a high-resolution SLC benchmark dataset is constructed to fill this research gap, verifying that end-to-end learning on SLC data yields superior segmentation performance.
- An Orthogonal Dual-Resolution Network (ODRNet) is developed that decomposes complex-valued data into real and imaginary components and fuses them through a bilateral fusion mechanism, achieving benchmark-level accuracy.
- The study validates that the orthogonal representation of SLC data preserves signal integrity, establishing a more generalizable data representation strategy for deep learning applications.
- The proposed ODRNet effectively balances semantic context and fine spatial detail, offering a high-precision solution for building footprint extraction in complex urban environments.
Abstract
1. Introduction
1. By utilizing the expanded data from the MSAW dataset [32], we generate building footprints for 202 PolSAR SLC images and format them as a semantic segmentation dataset. The resulting dataset can serve as a standard benchmark for related research and comparison, filling the gap in available research data. Based on this dataset, we analyze the scattering behavior of buildings at sub-meter resolution and evaluate the impact of spatial averaging on network performance.
2. We propose ODRNet, which adopts a Dual-Resolution Branch (DRB) with Bilateral Information Fusion (BIF) to effectively integrate the real and imaginary parts of the data with features at different scales. In addition, auxiliary heads for Polarization Orientation Angle (POA) prediction provide explicit physical supervision, ensuring that the network learns meaningful polarimetric representations from the separated inputs while facilitating optimization and convergence.
3. Because buildings differ greatly in scale and their scattering distributions are scale-dependent, we propose the Multi-Scale Aggregation Pyramid Pooling Module (MAPPM), which fuses and extracts multi-scale scattering information from feature maps at different stages of the low-resolution branch. To integrate information from the two branches and fully exploit the complex-valued data, we further design a Pixel-Attention Fusion (PAF) module.
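The orthogonal input construction underlying these contributions can be sketched as follows. This is a minimal numpy illustration, not the paper's exact preprocessing: the channel layout and dtype choices are assumptions. The key point it demonstrates is that the real/imaginary split is lossless, so the two branch inputs together carry the full SLC signal.

```python
import numpy as np

def slc_to_orthogonal_inputs(slc):
    """Split a complex-valued PolSAR SLC stack into orthogonal
    real/imaginary component tensors for the two network branches.

    slc : complex array of shape (P, H, W), one slice per
          polarimetric channel (e.g. HH, HV, VH, VV).
    Returns (real, imag), each float32 of shape (P, H, W).
    Since real + 1j*imag reconstructs the SLC exactly, no signal
    information is lost by this representation.
    """
    real = np.real(slc).astype(np.float32)
    imag = np.imag(slc).astype(np.float32)
    return real, imag

# Example: a toy 4-channel, 8x8 SLC patch
rng = np.random.default_rng(0)
slc = (rng.standard_normal((4, 8, 8))
       + 1j * rng.standard_normal((4, 8, 8))).astype(np.complex64)
re, im = slc_to_orthogonal_inputs(slc)
assert np.allclose(re + 1j * im, slc)  # lossless decomposition
```

In a real pipeline each component tensor would additionally be normalized before being fed to its branch; the scheme above only shows the decomposition itself.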
2. Analysis of High-Resolution PolSAR Image
2.1. Representation of PolSAR Images
2.2. The Relationship Between Resolution and Scattering
2.3. The Feature Selection
3. Method
3.1. The Overall Structure and the Format of Input Data
3.2. Dual-Resolution Branch
3.2.1. Backbones
3.2.2. Bilateral Information Fusion
3.3. Multi-Scale Aggregation Pyramid Pooling Module
3.4. Pixel-Attention Fusion Module
3.5. Loss Function
4. Experiments
4.1. Experimental Setting
4.1.1. Dataset Description
4.1.2. Implementation Details
4.1.3. Evaluation Metrics
4.2. Results and Comparison
4.3. Ablation Experiments
4.3.1. Analysis of the Input Format
4.3.2. Analysis of the Key Components
4.3.3. Analysis of the MAPPM
4.3.4. Analysis of the PAF
5. Discussion
5.1. Influence of the Auxiliary Head
5.2. Influence of the Polarimetric Feature
5.3. Large-Scale VHR PolSAR Image Validation
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Belgiu, M.; Drǎguţ, L. Comparing Supervised and Unsupervised Multiresolution Segmentation Approaches for Extracting Buildings from Very High Resolution Imagery. ISPRS J. Photogramm. Remote Sens. 2014, 96, 67–75. [Google Scholar] [CrossRef] [PubMed]
- Adriano, B.; Yokoya, N.; Xia, J.; Miura, H.; Liu, W.; Matsuoka, M.; Koshimura, S. Learning from Multimodal and Multitemporal Earth Observation Data for Building Damage Mapping. ISPRS J. Photogramm. Remote Sens. 2021, 175, 132–143. [Google Scholar] [CrossRef]
- Hu, Y.; Fan, J.; Wang, J. Classification of PolSAR Images Based on Adaptive Nonlocal Stacked Sparse Autoencoder. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1050–1054. [Google Scholar] [CrossRef]
- Deng, L.; Wang, C. Improved Building Extraction with Integrated Decomposition of Time-Frequency and Entropy-Alpha Using Polarimetric SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4058–4068. [Google Scholar] [CrossRef]
- Quan, S.; Xiong, B.; Xiang, D.; Zhao, L.; Zhang, S.; Kuang, G. Eigenvalue-Based Urban Area Extraction Using Polarimetric SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 458–471. [Google Scholar] [CrossRef]
- Wang, Y.; Yu, W.; Wang, R.; Wang, L.; Ge, D.; Liu, X.; Wang, C.; Liu, B. An Improved Urban Area Extraction Method for PolSAR Data Using Eigenvalues and Optimal Roll-Invariant Features. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 6455–6467. [Google Scholar] [CrossRef]
- Hu, C.; Wang, Y.; Sun, X.; Quan, S.; Xiang, D. Model-Based Polarimetric Target Decomposition with Power Redistribution for Urban Areas. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 8795–8808. [Google Scholar] [CrossRef]
- Wang, Y.; Yu, W.; Hou, W. Five-Component Decomposition Methods of Polarimetric SAR and Polarimetric SAR Interferometry Using Coupling Scattering Mechanisms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6662–6676. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image Is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. arXiv 2021, arXiv:2010.11929. [Google Scholar] [CrossRef]
- Wu, G.; Shao, X.; Guo, Z.; Chen, Q.; Yuan, W.; Shi, X.; Xu, Y.; Shibasaki, R. Automatic Building Segmentation of Aerial Imagery Using Multi-Constraint Fully Convolutional Networks. Remote Sens. 2018, 10, 407. [Google Scholar] [CrossRef]
- Shao, Z.; Tang, P.; Wang, Z.; Saleem, N.; Yam, S.; Sommai, C. BRRNet: A Fully Convolutional Neural Network for Automatic Building Extraction from High-Resolution Remote Sensing Images. Remote Sens. 2020, 12, 1050. [Google Scholar] [CrossRef]
- Wang, L.; Fang, S.; Meng, X.; Li, R. Building Extraction with Vision Transformer. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–11. [Google Scholar] [CrossRef]
- Wu, F.; Wang, C.; Zhang, H.; Li, J.; Li, L.; Chen, W.; Zhang, B. Built-up Area Mapping in China from GF-3 SAR Imagery Based on the Framework of Deep Learning. Remote Sens. Environ. 2021, 262, 112515. [Google Scholar] [CrossRef]
- Xia, J.; Yokoya, N.; Adriano, B.; Zhang, L.; Li, G.; Wang, Z. A Benchmark High-Resolution GaoFen-3 SAR Dataset for Building Semantic Segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 5950–5963. [Google Scholar] [CrossRef]
- Zhang, B.; Wu, Q.; Wu, F.; Huang, J.; Wang, C. A Lightweight Pyramid Transformer for High-Resolution SAR Image-Based Building Classification in Port Regions. Remote Sens. 2024, 16, 3218. [Google Scholar] [CrossRef]
- Geng, J.; Zhang, Y.; Jiang, W. Polarimetric SAR Image Classification Based on Hierarchical Scattering-Spatial Interaction Transformer. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–14. [Google Scholar] [CrossRef]
- Zhang, Z.; Wang, H.; Xu, F.; Jin, Y.Q. Complex-Valued Convolutional Neural Network and Its Application in Polarimetric SAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7177–7188. [Google Scholar] [CrossRef]
- Kuang, Z.; Liu, K.; Bi, H.; Li, F. PolSAR Image Classification With Complex-Valued Diffusion Model as Representation Learners. IEEE Trans. Aerosp. Electron. Syst. 2025, 61, 12184–12201. [Google Scholar] [CrossRef]
- Yu, L.; Zeng, Z.; Liu, A.; Xie, X.; Wang, H.; Xu, F.; Hong, W. A Lightweight Complex-Valued DeepLabv3+ for Semantic Segmentation of PolSAR Image. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 930–943. [Google Scholar] [CrossRef]
- Xu, R.; Zhang, S.; Dong, C.; Mei, S.; Zhang, J.; Zhao, Q. Lightweight Attention Refined and Complex-Valued BiSeNetV2 for Semantic Segmentation of Polarimetric SAR Image. Remote Sens. 2025, 17, 3527. [Google Scholar] [CrossRef]
- Wu, J.H.; Zhang, S.Q.; Jiang, Y.; Zhou, Z.H. Complex-valued neurons can learn more but slower than real-valued neurons via gradient descent. In Proceedings of the 37th International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 10–16 December 2023. NIPS ’23. [Google Scholar]
- Trabelsi, C.; Bilaniuk, O.; Zhang, Y.; Serdyuk, D.; Subramanian, S.; Santos, J.F.; Mehri, S.; Rostamzadeh, N.; Bengio, Y.; Pal, C.J. Deep Complex Networks. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Almansoori, M.K.; Telek, M. Performance evaluation of Complex-Valued Neural Networks on real and complex-valued classification and reconstruction tasks. Mach. Learn. Appl. 2025, 22, 100742. [Google Scholar] [CrossRef]
- Zeng, X.; Wang, Z.; Feng, K.; Gao, X.; Sun, X. TS-SHES: Terrain Segmentation in Complex-Valued PolSAR Images via Scattering Harmonization and Explicit Supervision. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–20. [Google Scholar] [CrossRef]
- Ai, J.; Mao, Y.; Luo, Q.; Jia, L.; Xing, M. SAR Target Classification Using the Multikernel-Size Feature Fusion-Based Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13. [Google Scholar] [CrossRef]
- Ai, J.; Tian, R.; Luo, Q.; Jin, J.; Tang, B. Multi-Scale Rotation-Invariant Haar-Like Feature Integrated CNN-Based Ship Detection Algorithm of Multiple-Target Environment in SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 10070–10087. [Google Scholar] [CrossRef]
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
- Shermeyer, J.; Hogan, D.; Brown, J.; Van Etten, A.; Weir, N.; Pacifici, F.; Hansch, R.; Bastidas, A.; Soenen, S.; Bacastow, T.; et al. SpaceNet 6: Multi-Sensor All Weather Mapping Dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 196–197. [Google Scholar]
- Sato, A.; Yamaguchi, Y.; Singh, G.; Park, S.-E. Four-Component Scattering Power Decomposition with Extended Volume Scattering Model. IEEE Geosci. Remote Sens. Lett. 2012, 9, 166–170. [Google Scholar] [CrossRef]
- Zhao, F.; Mallorqui, J.J.; Lopez-Sanchez, J.M. Impact of SAR Image Resolution on Polarimetric Persistent Scatterer Interferometry with Amplitude Dispersion Optimization. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–10. [Google Scholar] [CrossRef]
- Kang, J.; Wang, Z.; Zhu, R.; Xia, J.; Sun, X.; Fernandez-Beltran, R.; Plaza, A. DisOptNet: Distilling Semantic Knowledge from Optical Images for Weather-Independent Building Segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
- Kim, H.; Song, J.; Natsuaki, R.; Hirose, A. Dependence of Polarimetric Characteristics on SAR Resolutions: Experimental Analysis. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Yokohama, Japan, 28 July–2 August 2019; p. 3289. [Google Scholar] [CrossRef]
- Lee, J.S.; Ainsworth, T.L.; Kelly, J.P.; Lopez-Martinez, C. Evaluation and Bias Removal of Multilook Effect on Entropy/Alpha/Anisotropy in Polarimetric SAR Decomposition. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3039–3052. [Google Scholar] [CrossRef]
- Lee, J.; Hoppel, K. Noise Modeling and Estimation of Remotely-Sensed Images. In Proceedings of the 12th Canadian Symposium on Remote Sensing Geoscience and Remote Sensing Symposium (IGARSS), Vancouver, BC, Canada, 10–14 July 1989; Volume 2, pp. 1005–1008. [Google Scholar] [CrossRef]
- Xu, J.; Luo, C.; Parr, G.; Luo, Y. A Spatiotemporal Multi-Channel Learning Framework for Automatic Modulation Recognition. IEEE Wirel. Commun. Lett. 2020, 9, 1629–1632. [Google Scholar] [CrossRef]
- Shao, M.; Li, D.; Hong, S.; Qi, J.; Sun, H. IQFormer: A Novel Transformer-Based Model with Multi-Modality Fusion for Automatic Modulation Recognition. IEEE Trans. Cogn. Commun. Netw. 2025, 11, 1623–1634. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the International Conference on Machine Learning (ICML), Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
- Nair, V.; Hinton, G.E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the International Conference on Machine Learning (ICML), Haifa, Israel, 21–24 June 2010; pp. 807–814. [Google Scholar]
- Sun, G.; Huang, H.; Zhang, A.; Li, F.; Zhao, H.; Fu, H. Fusion of Multiscale Convolutional Neural Networks for Building Extraction in Very High-Resolution Images. Remote Sens. 2019, 11, 227. [Google Scholar] [CrossRef]
- Trockman, A.; Kolter, J.Z. Patches Are All You Need? arXiv 2022, arXiv:2201.09792. [Google Scholar]
- Ruder, S. An Overview of Gradient Descent Optimization Algorithms. arXiv 2017, arXiv:1609.04747. [Google Scholar] [CrossRef]
- Yuan, Y.; Chen, X.; Wang, J. Object-Contextual Representations for Semantic Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 173–190. [Google Scholar]
- Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J.M.; Luo, P. SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. In Proceedings of the Neural Information Processing Systems (NeurIPS), Virtually, 6–14 December 2021. [Google Scholar]
- Li, Y.; Li, X.; Dai, Y.; Hou, Q.; Liu, L.; Liu, Y.; Cheng, M.M.; Yang, J. LSKNet: A Foundation Lightweight Backbone for Remote Sensing. Int. J. Comput. Vis. 2024, 133, 1410–1431. [Google Scholar] [CrossRef]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar] [CrossRef]
| Stage | Low-Resolution Branch | High-Resolution Branch |
|---|---|---|
| Stage 1 | Basic block × 2; Basic block, stride = 2; Basic block | Basic block × 2; Basic block, stride = 2; Basic block |
| Stage 2 | Basic block, stride = 2; Basic block | Basic block × 2 |
| Stage 3 | Basic block, stride = 2; Basic block | Basic block × 2 |
| Stage 4 | Bottleneck, stride = 2 | Bottleneck, stride = 1 |
| Dataset | MSAW | Expanded MSAW |
|---|---|---|
| Sensors | Capella Space’s sensor | Capella Space’s sensor |
| Data format | Intensity | Single Look Complex |
| Band | X | X |
| Polarization | Full | Full |
| Resolution | 0.5 m × 0.5 m | 0.25 m × 0.25 m |
| Size | 900 × 900 pixels | 512 × 512 pixels |
| Training set | 3401 images | 32,716 images |
| Validation set | ∼ | 22,298 images |
| Method | Input | IoU (%) | Precision (%) | Recall (%) | F1 (%) | Params (M) | GFLOPs |
|---|---|---|---|---|---|---|---|
| FCN [11] | Re(); Im() | 59.65 | 76.15 | 73.36 | 74.73 | 47.1 | 198.05 |
| Unet [12] | Re(); Im() | 58.55 | 71.86 | 75.98 | 73.86 | 29.0 | 203.22 |
| Ocrnet [47] | Re(); Im() | 59.75 | 80.23 | 70.06 | 74.80 | 70.5 | 167.83 |
| Deeplabv3+ [31] | Re(); Im() | 59.72 | 79.14 | 70.88 | 74.78 | 42.0 | 176.81 |
| Segformer [48] | Amplitude | 56.55 | 73.65 | 70.52 | 72.05 | 64.1 | 95.7 |
| | Re(); Im() | 57.45 | 74.36 | 71.62 | 72.98 | 64.1 | 95.8 |
| LSKNet [49] | Amplitude | 60.14 | 76.21 | 73.72 | 74.94 | 26.1 | 101.3 |
| | Re(); Im() | 61.25 | 77.21 | 74.73 | 75.97 | 26.1 | 101.4 |
| LAM-CV-BiSeNetV2 [24] | | 58.47 | 77.10 | 70.69 | 73.76 | 61.7 | 403 |
| TS-SHES [28] | Amplitude; Phase | 56.2 | 69.19 | 76.17 | 71.96 | 86.9 | 191.39 |
| L-CV-Deeplabv3+ [23] | | 44.82 | 61.43 | 62.38 | 61.9 | 38.1 | 112.93 |
| Ours | Re(); Im() | 64.3 | 79.88 | 76.73 | 78.27 | 76.57 | 66.036 |
| Input Data | IoU (%) | Precision (%) | Recall (%) | F1 (%) |
|---|---|---|---|---|
| Low: Real; High: Imaginary | 64.3 | 79.88 | 76.73 | 78.27 |
| Low: Imaginary; High: Real | 63.95 | 79.49 | 76.59 | 78.01 |
| Low: Amplitude; High: Phase | 62.53 | 78.4 | 75.54 | 76.94 |
| Low: Phase; High: Amplitude | 61.8 | 79.49 | 73.52 | 76.39 |
| Model | BIF | MAPPM | PAF | Auxiliary | IoU (%) | Precision (%) | Recall (%) | F1 (%) |
|---|---|---|---|---|---|---|---|---|
| Baseline | | | | | 59.93 | 74.92 | 74.97 | 74.94 |
| | ✓ | | | | 61.51 | 77.1 | 75.26 | 76.17 |
| | ✓ | ✓ | | | 62.2 | 79.56 | 74.03 | 76.7 |
| | ✓ | ✓ | ✓ | | 62.6 | 78.61 | 75.46 | 77.0 |
| | ✓ | ✓ | | ✓ | 63.38 | 78.73 | 76.73 | 78.27 |
| One-Branch | | | | | 58.93 | 76.65 | 71.83 | 74.16 |
| ODRNet | ✓ | ✓ | ✓ | ✓ | 64.3 | 79.88 | 76.73 | 78.27 |
| MAPPM | IoU (%) | Precision (%) | Recall (%) | F1 (%) |
|---|---|---|---|---|
| w/o | 62.78 | 78.9 | 75.45 | 77.14 |
| w/ | 64.3 | 79.88 | 76.73 | 78.27 |
| Method | IoU (%) | Precision (%) | Recall (%) | F1 (%) |
|---|---|---|---|---|
| Concatenate | 63.81 | 79.46 | 76.41 | 77.9 |
| Sum | 63.84 | 79.87 | 76.09 | 77.93 |
| PAF | 64.3 | 79.88 | 76.73 | 78.27 |
| Data | Multi-Looking | IoU (%) | Precision (%) | Recall (%) | F1 (%) |
|---|---|---|---|---|---|
| | 3 × 3 | 48.19 | 61.96 | 68.45 | 65.04 |
| | 5 × 5 | 46.4 | 64.5 | 62.32 | 63.39 |
| | 7 × 7 | 43.12 | 77.78 | 49.18 | 60.26 |
| | 9 × 9 | 42.12 | 72.98 | 49.9 | 59.27 |
| H/A/α | 5 × 5 | 48.95 | 63.85 | 67.72 | 65.73 |
| Hu’s decomposition [7] | 5 × 5 | 47.79 | 62.50 | 67.00 | 64.67 |
| | 1 × 1 | 64.3 | 79.88 | 76.73 | 78.27 |
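The multi-looking compared above amounts to spatial averaging of the data's second-order statistics over an n × n window. The following is a minimal numpy sketch of non-overlapping boxcar multi-looking applied to one real-valued covariance element; the window size, boundary cropping, and toy scene are illustrative assumptions, not the paper's exact processing.

```python
import numpy as np

def multilook(power, win):
    """Average a real-valued covariance element (e.g. |S_hh|^2) over
    non-overlapping win x win windows, reducing spatial resolution
    by a factor of win along each dimension."""
    H, W = power.shape
    Hc, Wc = (H // win) * win, (W // win) * win   # crop to a multiple of win
    blocks = power[:Hc, :Wc].reshape(Hc // win, win, Wc // win, win)
    return blocks.mean(axis=(1, 3))               # average within each window

# A 5 x 5 multilook of |S_hh|^2 for a toy 512 x 512 complex scene
rng = np.random.default_rng(1)
s_hh = rng.standard_normal((512, 512)) + 1j * rng.standard_normal((512, 512))
looked = multilook(np.abs(s_hh) ** 2, 5)
print(looked.shape)  # (102, 102)
```

The 1 × 1 row in the table corresponds to skipping this averaging entirely and feeding the full-resolution data to the network, which is what preserves the fine scattering detail.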
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Ni, S.; Zhao, F.; Zheng, M.; Chen, Z.; Liu, X. A Dual-Resolution Network Based on Orthogonal Components for Building Extraction from VHR PolSAR Images. Remote Sens. 2026, 18, 305. https://doi.org/10.3390/rs18020305