Data-Model Complexity Trade-Off in UAV-Acquired Ultra-High-Resolution Remote Sensing: Empirical Study on Photovoltaic Panel Segmentation
Abstract
Highlights
- Ultra-high resolution offsets the benefit of adding spectral bands.
- State-of-the-art architectures do not guarantee better segmentation performance.
- A public ultra-high-resolution PV segmentation benchmark dataset is available.
- ResNet50+UNet with spatially diverse data is recommended as a strong baseline.
1. Introduction
1.1. Background
1.2. Related Works
1.3. Motivations and Contributions
2. Materials and Methods
2.1. Data Collection
2.2. Data Pre-Processing
2.3. Experimental Design
2.4. Training and Evaluation Framework
3. Results
3.1. Effects of Data Variations on Training
3.2. Effects of Data Variations on Testing
3.3. Effects of Model Variations on Training
3.4. Effects of Model Variations on Testing
4. Discussion
5. Conclusions
1. Limited impact of spectral band augmentation: incorporating NIR and Red Edge bands into standard RGB inputs did not significantly improve segmentation performance, but did reduce inference speed.
2. Sample diversity outweighs dataset volume: although both training data volume and diversity contribute to model generalization, models trained on geographically diverse datasets consistently outperformed those trained on single-site data of comparable size.
3. Architecture matters more than size: ResUNet consistently outperformed DeepLabV3 and SegFormer across scenarios, with an average accuracy of 0.9873 versus 0.9742 (SegFormer) and 0.9322 (DeepLabV3), and a mean IoU of 0.9579 versus 0.9110 (SegFormer) and 0.8145 (DeepLabV3).
4. Moderate model sizes offer the best trade-off: although increasing model size improved training stability and accuracy, medium-sized models often matched their larger counterparts. Across all experimental settings, the average IoU of medium-sized models (0.8966) was nearly identical to that of larger models (0.8970), suggesting they represent a practical balance between efficiency and effectiveness.
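The accuracy and IoU figures cited in points (3) and (4) can be reproduced from per-pixel predictions; the sketch below (hypothetical helper names, not the authors' evaluation code) shows both metrics on flat label lists, with 1 = PV panel and 0 = background.

```python
def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the ground truth."""
    correct = sum(p == t for p, t in zip(pred, target))
    return correct / len(target)

def iou(pred, target, cls=1):
    """Intersection over Union for one class (here the PV-panel class)."""
    inter = sum(p == cls and t == cls for p, t in zip(pred, target))
    union = sum(p == cls or t == cls for p, t in zip(pred, target))
    return inter / union if union else 1.0  # empty union: treat as perfect

# Toy 6-pixel example:
pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 1]
print(round(pixel_accuracy(pred, target), 3))  # 0.667
print(round(iou(pred, target), 3))             # 0.5
```

In practice these sums would run over every pixel of every test tile; the per-model means reported above average the per-image scores.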
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Full Term |
|---|---|
| UAV | Unmanned aerial vehicle |
| UHR | Ultra-high resolution |
| GSD | Ground sampling distance |
| IoU | Intersection over Union |
| GPU | Graphics processing unit |
| FPS | Frames per second |
| Model | Input Bands | Batch Size | Fwd/Bwd Pass Size (MB) | Trainable Parameters (M) | Train GPU Memory (MiB) | Validation GPU Memory (MiB) |
|---|---|---|---|---|---|---|
| DeepLabV3_18 | 3 | 110 | 5133.98 | 15.9 | 19,857.27 | 19,864.44 |
| | 4 | | | 15.9 | 19,864.79 | 19,869.6 |
| | 5 | | | 15.91 | 19,951.02 | 19,953.6 |
| DeepLabV3_50 | 3 | 35 | 26,072.09 | 39.63 | 21,313.47 | 21,316.5 |
| | 4 | | | 39.64 | 21,396.41 | 21,398.72 |
| | 5 | | | 39.64 | 21,470.6 | 21,474.17 |
| DeepLabV3_101 | 3 | 24 | 36,339.74 | 58.63 | 19,950.15 | 19,951.67 |
| | 4 | | | 58.63 | 20,007.91 | 20,028.86 |
| | 5 | | | 58.63 | 20,220.95 | 20,223.44 |
| ResUNet18 | 3 | 43 | 18,689.29 | 12.91 | 21,450.95 | 21,454.24 |
| | 4 | | | 12.92 | 21,628.44 | 21,638.32 |
| | 5 | | | 12.92 | 21,671.14 | 21,673.89 |
| ResUNet50 | 3 | 26 | 27,549.24 | 44.31 | 20,954.92 | 20,957.88 |
| | 4 | | | 44.31 | 20,960.28 | 20,961.27 |
| | 5 | | | 44.32 | 21,765.62 | 21,767.05 |
| ResUNet101 | 3 | 17 | 25,285.89 | 63.3 | 18,439.77 | 18,433.43 |
| | 4 | | | 63.31 | 20,062.14 | 20,059.79 |
| | 5 | | | 63.31 | 20,915.45 | 20,916.1 |
| SegFormerB1 | 3 | 23 | 10,201.6 | 11.45 | 21,346.68 | 21,348.21 |
| | 4 | | | 11.45 | 21,579.26 | 21,581.35 |
| | 5 | | | 11.46 | 21,591.96 | 21,594.03 |
| SegFormerB4 | 3 | 14 | 18,827.18 | 46.63 | 22,242.63 | 22,245.07 |
| | 4 | | | 46.63 | 22,342.3 | 22,344.28 |
| | 5 | | | 46.63 | 20,362.44 | 20,350.79 |
| SegFormerB5 | 3 | 11 | 17,336.11 | 62.24 | 20,249.75 | 20,250.7 |
| | 4 | | | 62.25 | 20,753.57 | 20,748.75 |
| | 5 | | | 62.25 | 21,566.58 | 21,555.58 |
| Model | Input Bands | FPS | Avg Time per Image (ms) | Avg GPU Memory (MiB) | Max GPU Memory (MiB) |
|---|---|---|---|---|---|
| DeepLabV3_18 | 3 | 61.46 | 16.65 | 121.12 | 506.34 |
| | 4 | 45.74 | 22.07 | 132.91 | 518.36 |
| | 5 | 37.1 | 28.95 | 144.71 | 530.37 |
| DeepLabV3_50 | 3 | 38.71 | 26.03 | 223.93 | 921.28 |
| | 4 | 31.97 | 31.75 | 224.59 | 922.03 |
| | 5 | 27.9 | 36.24 | 236.38 | 934.04 |
| DeepLabV3_101 | 3 | 33.47 | 30 | 285.09 | 982.3 |
| | 4 | 27.91 | 35.91 | 296.86 | 994.31 |
| | 5 | 25.4 | 39.43 | 405.83 | 1103.49 |
| ResUNet18 | 3 | 58.37 | 17.47 | 139.74 | 1047.06 |
| | 4 | 41.21 | 24.66 | 120.16 | 1027.7 |
| | 5 | 35.51 | 28.58 | 170.18 | 1078.06 |
| ResUNet50 | 3 | 41.09 | 24.42 | 229.34 | 1434.56 |
| | 4 | 32.64 | 30.76 | 241.15 | 1446.57 |
| | 5 | 28.54 | 35.11 | 253.01 | 1458.62 |
| ResUNet101 | 3 | 35.68 | 28.11 | 302.42 | 1507.63 |
| | 4 | 29.44 | 34.09 | 314.2 | 1519.64 |
| | 5 | 26.01 | 38.58 | 325.98 | 1531.65 |
| SegFormerB1 | 3 | 43.36 | 23.21 | 111.59 | 7118.41 |
| | 4 | 34.55 | 29.1 | 123.39 | 7130.42 |
| | 5 | 30.01 | 33.56 | 135.17 | 7142.43 |
| SegFormerB4 | 3 | 26.94 | 37.17 | 247.62 | 7254.74 |
| | 4 | 23.39 | 42.87 | 259.54 | 7266.76 |
| | 5 | 21.25 | 47.19 | 271.33 | 7278.77 |
| SegFormerB5 | 3 | 23.87 | 41.93 | 307.66 | 7314.54 |
| | 4 | 21.18 | 47.3 | 319.51 | 7326.6 |
| | 5 | 19.25 | 52.05 | 331.28 | 7338.61 |
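The FPS and average per-image latency columns above are two views of the same throughput measurement. A minimal timing harness (a sketch only; the paper's real harness would also need warm-up iterations and GPU synchronization) could look like:

```python
import time

def benchmark(infer, images):
    """Run `infer` once per image; return (FPS, mean latency in ms).
    Warm-up and GPU sync, which a real measurement needs, are omitted."""
    start = time.perf_counter()
    for img in images:
        infer(img)
    elapsed = time.perf_counter() - start
    n = len(images)
    return n / elapsed, 1000.0 * elapsed / n

# Toy stand-in for a segmentation forward pass:
fps, ms_per_image = benchmark(lambda img: [v * 2 for v in img], [[1, 2, 3]] * 100)
print(f"{fps:.1f} FPS, {ms_per_image:.3f} ms/image")
```

By construction FPS × (ms/image) = 1000, which is a useful sanity check when reading tables like the one above.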
| Model | Recall-BG | Recall-PV | Error-FP (%) | Error-TN (%) | Error-Both (%) |
|---|---|---|---|---|---|
| DeepLabV3_18 | 0.87 ± 0.01 | 0.99 ± 0.01 | 0.01 ± 0.01 | 0.01 ± 0.00 | 0.02 ± 0.01 |
| DeepLabV3_50 | 0.95 ± 0.04 | 0.98 ± 0.01 | 0.01 ± 0.01 | 0.01 ± 0.00 | 0.01 ± 0.01 |
| DeepLabV3_101 | 0.95 ± 0.04 | 0.98 ± 0.01 | 0.01 ± 0.01 | 0.00 ± 0.00 | 0.01 ± 0.01 |
| ResUNet18 | 0.99 ± 0.01 | 0.98 ± 0.01 | 0.02 ± 0.01 | 0.00 ± 0.00 | 0.02 ± 0.01 |
| ResUNet50 | 0.99 ± 0.01 | 0.99 ± 0.00 | 0.01 ± 0.01 | 0.00 ± 0.00 | 0.02 ± 0.01 |
| ResUNet101 | 0.99 ± 0.01 | 0.99 ± 0.00 | 0.01 ± 0.01 | 0.00 ± 0.00 | 0.01 ± 0.01 |
| SegFormerB1 | 0.97 ± 0.00 | 0.98 ± 0.01 | 0.10 ± 0.04 | 0.00 ± 0.00 | 0.10 ± 0.04 |
| SegFormerB4 | 0.97 ± 0.01 | 0.98 ± 0.00 | 0.05 ± 0.03 | 0.00 ± 0.00 | 0.06 ± 0.03 |
| SegFormerB5 | 0.97 ± 0.01 | 0.98 ± 0.00 | 0.05 ± 0.03 | 0.00 ± 0.00 | 0.06 ± 0.03 |
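The class-wise recall values in the table above derive from a 2×2 pixel confusion matrix. A hedged sketch (hypothetical helper, not the authors' code) of that computation:

```python
def class_recall(conf, cls):
    """Recall for one class from a 2x2 pixel confusion matrix, where
    conf[t][p] counts pixels of true class t predicted as class p
    (0 = background, 1 = PV panel)."""
    row = conf[cls]
    total = sum(row)
    return row[cls] / total if total else 1.0

# Toy counts: 90 background pixels correct, 10 misread as PV (false
# positives); 98 PV pixels correct, 2 missed.
conf = [[90, 10], [2, 98]]
print(class_recall(conf, 0))  # Recall-BG
print(class_recall(conf, 1))  # Recall-PV
```

Aggregating such per-image recalls over the test set, then taking mean ± standard deviation, yields entries in the form shown in the table.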
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zou, Z.; Zhou, X.; Yang, P.; Liu, J.; Yang, W. Data-Model Complexity Trade-Off in UAV-Acquired Ultra-High-Resolution Remote Sensing: Empirical Study on Photovoltaic Panel Segmentation. Drones 2025, 9, 619. https://doi.org/10.3390/drones9090619

