Land Cover Type Classification Using High-Resolution Orthophotomaps and Convolutional Neural Networks: Case Study of Tatra National Park
Highlights
- CNNs can be successfully applied to mapping diverse and numerous classes (18 classes) over a spatially large mountainous protected region using high-resolution (0.12-m) orthophotomaps.
- The mapping accuracies are very high for typical land cover classes but are not satisfactory for more sophisticated classes, such as plant species.
- Ordinary orthophotomaps are a suitable basis for land cover type mapping but might not be sufficient for delimiting complex classes, such as plant species or plant habitats.
- Orthophotomaps combined with CNNs could be a sufficient data source to perform mapping of numerous land cover types, resulting in maps with a superior spatial resolution.
- The use of high-resolution orthophotomaps and CNNs for mapping plant species, habitats, or otherwise complex classes should be investigated further.
Abstract
1. Introduction
- To show the feasibility of using high-resolution orthophotomaps to create high-fidelity multiclass land cover maps using CNNs:
  - To create a high-accuracy land cover map of TNP;
  - To map multiple distinct classes on an RGB orthophotomap;
  - To develop methods for creating land cover maps using convolutional neural networks.
2. Study Area and Data
2.1. Study Area
2.2. Data
2.2.1. Reference Data
2.2.2. Orthophotomap
3. Methods
- Each training or validation polygon was transformed into a mesh of points in such a way that each point was always at least half of the window size (in pixels) away from any of its neighbors. Additionally, for particularly large polygons, this distance was increased to balance the overall number of image signatures across all classes.
- For each point, an image signature was created by clipping a window (n by n pixels) centered around that point. In this particular case, image signatures were 9-by-9-pixel image cutouts. All spectral bands were included in the image signature. Additional information about the image signature, such as the class it represented and the unique ID of the reference polygon from which it came, was added.
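The sampling and clipping steps above can be sketched as follows. This is an illustrative sketch, not the authors' code: `thin_points` and `extract_signatures` are hypothetical names, and the orthophotomap is assumed to be a NumPy array of shape (rows, cols, bands).

```python
import numpy as np

def thin_points(points, min_dist):
    """Greedily keep points at least min_dist pixels apart (Chebyshev
    distance), mimicking the half-window minimum-spacing rule."""
    kept = []
    for p in points:
        if all(max(abs(p[0] - q[0]), abs(p[1] - q[1])) >= min_dist for q in kept):
            kept.append(p)
    return kept

def extract_signatures(image, points, window=9):
    """Clip a (window x window x bands) cutout centered on each point.

    All spectral bands are kept. Points too close to the image edge
    for a full window are skipped.
    """
    half = window // 2
    h, w = image.shape[:2]
    signatures = []
    for r, c in points:
        if half <= r < h - half and half <= c < w - half:
            signatures.append(image[r - half:r + half + 1,
                                    c - half:c + half + 1, :])
    return signatures
```

In practice each signature would also carry its class label and the ID of the source polygon, e.g. as a `(signature, label, polygon_id)` tuple.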
- A batch size of 1024;
- An epoch number of 256;
- An initial learning rate of 0.05;
- The Adam training optimizer;
- A categorical cross-entropy loss function;
- A reduction in the learning rate to 90% of its previous value, every 10 epochs;
- A dynamic reduction in the learning rate in the event of a training plateau (i.e., when the loss stops improving).
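The fixed part of this schedule can be expressed as a simple step-decay function (a sketch of the rule listed above, not the authors' code; in Keras it would typically be supplied via a `LearningRateScheduler` callback, with `ReduceLROnPlateau` handling the plateau case):

```python
def step_decay(epoch, initial_lr=0.05, drop=0.9, every=10):
    """Learning rate for a given epoch: reduced to 90% of its
    previous value every 10 epochs, starting from 0.05."""
    return initial_lr * drop ** (epoch // every)
```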
- For each pixel on the image:
  - Create an image signature (a 9-by-9-pixel square window) centered on the given pixel;
  - Infer the class label of the image signature via model inference;
  - Assign the inferred class label to the pixel.
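A minimal sketch of this per-pixel sweep (hypothetical code, not the authors' implementation; `predict` stands in for the trained model's batched inference, e.g. a wrapper around `model.predict`):

```python
import numpy as np

def classify_image(image, predict, window=9):
    """Assign a class label to every pixel by classifying the
    (window x window) signature centered on it.

    predict : callable mapping a batch of signatures, shape
              (n, window, window, bands), to class labels, shape (n,)
    Edge pixels without a full window are labeled -1.
    """
    half = window // 2
    h, w = image.shape[:2]
    labels = np.full((h, w), -1, dtype=int)
    for r in range(half, h - half):
        # Gather one row of signatures and classify them as a batch.
        batch = np.stack([image[r - half:r + half + 1,
                                c - half:c + half + 1, :]
                          for c in range(half, w - half)])
        labels[r, half:w - half] = predict(batch)
    return labels
```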
4. Results
5. Discussion
6. Conclusions
- Lack of finesse: The presented method is a straightforward application of CNNs, without custom layers or other architectural refinements. Nevertheless, this simplicity can also be considered an advantage.
- Computationally expensive: Classifying each pixel with its own image signature creates many redundant calculations; neighboring pixels share about 8/9 of their window pixels. Diffusion or transposed convolution approaches remove this inefficiency but might introduce other challenges. This should be investigated in the future.
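The 8/9 figure follows from the window geometry: two 9-by-9 windows centered on horizontally adjacent pixels overlap in a 9-by-8 region, i.e., 72 of 81 pixels. A quick illustrative check:

```python
def shared_fraction(n, shift=1):
    """Fraction of pixels shared by two n-by-n windows whose
    centers are `shift` pixels apart along one axis."""
    overlap = n * max(n - shift, 0)  # n rows by (n - shift) columns
    return overlap / (n * n)
```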
- In general, the results agree with reality but lack detail and refinement (blur-like features on the post-classification map). This effect is exaggerated by the large window size, which tends to add a “halo” of dissimilar classes around objects (visible around large rocks, patches of dwarf pine, and forest edges). Smaller windows (3 by 3 pixels) produce almost no “halo” but yield lower accuracies and a heavy “salt-and-pepper” effect, especially prevalent on heterogeneous mountain tops and in high-mountain meadows, where bare soil, rocks, and vegetation form a diverse spatiospectral mixture. The overall picture looks promising, but analyzing the fine details leaves users wondering whether it is just an illusion; high accuracy is not a substitute for a good map. Given the size of the mapped area and the spatial resolution (12 cm), it is hard to assess the true accuracy of our map, even with expert knowledge of the lay of the land in TNP.
- Owing to the black-box nature of CNNs, a trained model offers little insight into its decision process and is of limited usability for more thorough analyses: it delivers results without answering key questions about how they were obtained.
- Competitive accuracies: Our work achieved classification accuracies within the range reported in the literature. Broadly defined classes (such as forest types) were classified with an F1-score at or above 0.9, which is excellent. Only the more sophisticated classes representing different plant species yielded unsatisfactory results. This suggests that either the algorithm used or the data are unsuitable for delimiting plant species at this scale. When mapping plant species, it is crucial to consider their phenological cycle, especially the flowering phase, which is often cited as a factor enabling high accuracy.
- CNNs can exploit both spatial and spectral domains, which might be an important addition to image classification techniques. The model incorporates structural and textural information into the solution.
- The model works on very-high-resolution imagery, delivering sensible results.
- The model works with three-band imagery, and possibly with single-band imagery too, although with reduced efficiency. This alone opens many previously unused datasets (e.g., panchromatic aerial photos) for investigation in future studies. Three-band RGB imagery is among the cheapest and most widely available imagery types, and its acquisition is competitively priced.
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Murphy, J.H. An Overview of Convolutional Neural Network Architectures for Deep Learning; Microway Inc.: Plymouth, MA, USA, 2016; pp. 1–22. Available online: https://api.semanticscholar.org/CorpusID:35625222 (accessed on 3 March 2024).
- Diez, Y.; Kentsch, S.; Fukuda, M.; Caceres, M.L.L.; Moritake, K.; Cabezas, M. Deep Learning in Forestry Using UAV-Acquired RGB Data: A Practical Review. Remote Sens. 2021, 13, 2837. [Google Scholar] [CrossRef]
- Zagajewski, B.; Kluczek, M.; Raczko, E.; Njegovec, A.; Dabija, A.; Kycko, M. Comparison of Random Forest, Support Vector Machines, and Neural Networks for Post-Disaster Forest Species Mapping of the Krkonoše/Karkonosze Transboundary Biosphere Reserve. Remote Sens. 2021, 13, 2581. [Google Scholar] [CrossRef]
- Zagajewski, B.; Kluczek, M.; Zdunek, K.B.; Holland, D. Sentinel-2 versus PlanetScope Images for Goldenrod Invasive Plant Species Mapping. Remote Sens. 2024, 16, 636. [Google Scholar] [CrossRef]
- Marcinkowska-Ochtyra, A.; Zagajewski, B.; Raczko, E.; Ochtyra, A.; Jarocińska, A. Classification of High-Mountain Vegetation Communities within a Diverse Giant Mountains Ecosystem Using Airborne APEX Hyperspectral Imagery. Remote Sens. 2018, 10, 570. [Google Scholar] [CrossRef]
- Sabat-Tomala, A.; Raczko, E.; Zagajewski, B. Comparison of Support Vector Machine and Random Forest Algorithms for Invasive and Expansive Species Classification Using Airborne Hyperspectral Data. Remote Sens. 2020, 12, 516. [Google Scholar] [CrossRef]
- Jarocińska, A.; Kopeć, D.; Niedzielko, J.; Wylazłowska, J.; Halladin-Dąbrowska, A.; Charyton, J.; Piernik, A.; Kamiński, D. The utility of airborne hyperspectral and satellite multispectral images in identifying Natura 2000 non-forest habitats for conservation purposes. Sci. Rep. 2023, 13, 4549. [Google Scholar] [CrossRef] [PubMed]
- Jia, J.; Wang, Y.; Chen, J.; Guo, R.; Shu, R.; Wang, J. Status and application of advanced airborne hyperspectral imaging technology: A review. Infrared Phys. Technol. 2020, 104, 103115. [Google Scholar] [CrossRef]
- Egli, S.; Höpke, M. CNN-Based Tree Species Classification Using High Resolution RGB Image Data from Automated UAV Observations. Remote Sens. 2020, 12, 3892. [Google Scholar] [CrossRef]
- Pham, T.T.; Dang, K.B.; Giang, T.L.; Hoang, T.H.N.; Le, V.H.; Ha, H.N. Deep learning models for monitoring landscape changes in a UNESCO Global Geopark. J. Environ. Manag. 2024, 354, 120497. [Google Scholar] [CrossRef]
- Demeter, L.; Molnár, A.P.; Bede-Fazekas, A.; Öllerer, K.; Varga, A.; Szabados, K.; Tucakov, M.; Kiš, A.; Biró, M.; Marinkov, J.; et al. Controlling invasive alien shrub species, enhancing biodiversity and mitigating flood risk: A win–win–win situation in grazed floodplain plantations. J. Environ. Manag. 2021, 295, 113053. [Google Scholar] [CrossRef]
- Schuchardt, M.A.; Berauer, B.J.; Duc, A.L.; Ingrisch, J.; Niu, Y.; Bahn, M.; Jentsch, A. Increases in functional diversity of mountain plant communities is mainly driven by species turnover under climate change. Oikos 2023, 2023, e09922. [Google Scholar] [CrossRef]
- Hasan, S.S.; Zhen, L.; Miah, G.; Ahamed, T.; Samie, A. Impact of land use change on ecosystem services: A review. Environ. Dev. 2020, 34, 100527. [Google Scholar] [CrossRef]
- Decuyper, M.; Chávez, R.O.; Lohbeck, M.; Lastra, J.A.; Tsendbazar, N.; Hackländer, J.; Herold, M.; Vågen, T. Continuous monitoring of forest change dynamics with satellite time series. Remote Sens. Environ. 2022, 269, 112829. [Google Scholar] [CrossRef]
- Aryal, K.; Apan, A.; Maraseni, T. Comparing global and local land cover maps for ecosystem management in the Himalayas. Remote Sens. Appl. Soc. Environ. 2023, 30, 100952. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef]
- Sawant, S.; Garg, R.D.; Meshram, V.; Mistry, S. Sen-2 LULC: Land use land cover dataset for deep learning approaches. Data Brief 2023, 51, 109724. [Google Scholar] [CrossRef] [PubMed]
- Wang, J.; Bretz, M.; Dewan, A.A.; Delavar, M.A. Machine learning in modeling land-use and land cover-change (LULCC): Current status, challenges and prospects. Sci. Total Environ. 2022, 822, 153559. [Google Scholar] [CrossRef]
- Lin, F.C.; Chuang, Y.C. Interoperability Study of Data Preprocessing for Deep Learning and High-Resolution Aerial Photographs for Forest and Vegetation Type Identification. Remote Sens. 2021, 13, 4036. [Google Scholar] [CrossRef]
- Gao, L.; Luo, J.; Xia, L.; Wu, T.; Sun, Y.; Liu, H. Topographic constrained land cover classification in mountain areas using fully convolutional network. Int. J. Remote Sens. 2019, 40, 7127–7152. [Google Scholar] [CrossRef]
- Deigele, W.; Brandmeier, M.; Straub, C.A. Hierarchical Deep-Learning Approach for Rapid Windthrow Detection on PlanetScope and High-Resolution Aerial Image Data. Remote Sens. 2020, 12, 2121. [Google Scholar] [CrossRef]
- Truong, V.T.; Hirayama, S.; Phan, D.C.; Hoang, T.T.; Tadono, T.; Nasahara, K.N. JAXA’s new high-resolution land use land cover map for Vietnam using a time-feature convolutional neural network. Sci. Rep. 2024, 14, 3926. [Google Scholar] [CrossRef]
- Kycko, M.; Zagajewski, B.; Lavender, S.; Dabija, A. In Situ Hyperspectral Remote Sensing for Monitoring of Alpine Trampled and Recultivated Species. Remote Sens. 2019, 11, 1296. [Google Scholar] [CrossRef]
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-scale machine learning on heterogeneous systems. arXiv 2015, arXiv:1603.04467. [Google Scholar] [CrossRef]
- Chollet, F. Keras: Deep Learning for Humans. 2015. Available online: https://keras.io (accessed on 12 February 2024).
- McKinney, W. Data Structures for Statistical Computing in Python. In Proceedings of the 9th Python in Science Conference, Austin, TX, USA, 28 June–3 July 2010; pp. 56–61. [Google Scholar] [CrossRef]
- Jordahl, K.; Van Den Bossche, J.; Fleischmann, M.; Wasserman, J.; McBride, J.; Gerard, J.; Tratner, J.; Perry, M.; Garcia Badaracco, A.; Farmer, C.; et al. GeoPandas, Python Tools for Geographic Data; Zenodo: Geneva, Switzerland, 2014. [Google Scholar] [CrossRef]
- Gillies, S.; Ward, B.; Petersen, A. Rasterio: Geospatial Raster i/o for Python Programmers. 2019. Available online: https://github.com/rasterio/rasterio (accessed on 12 February 2025).
- Harris, C.R.; Millman, K.J.; van der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J.; et al. Array programming with NumPy. Nature 2020, 585, 357–362. [Google Scholar] [CrossRef]
- Jiang, J.; Feng, X.; Liu, F.; Xu, Y.; Huang, H. Multi-Spectral RGB-NIR Image Classification Using Double-Channel CNN. IEEE Access 2019, 7, 20607–20613. [Google Scholar] [CrossRef]
- Ayhan, B.; Kwan, C.; Budavari, B.; Kwan, L.; Lu, Y.; Perez, D.; Li, J.; Skarlatos, D.; Vlachos, M. Vegetation Detection Using Deep Learning and Conventional Methods. Remote Sens. 2020, 12, 2502. [Google Scholar] [CrossRef]
- Kwan, C.; Ayhan, B.; Budavari, B.; Lu, Y.; Perez, D.; Li, J.; Bernabe, S.; Plaza, A. Deep Learning for Land Cover Classification Using Only a Few Bands. Remote Sens. 2020, 12, 2000. [Google Scholar] [CrossRef]
- Jutras-Perreault, M.-C.; Gobakken, T.; Næsset, E.; Ørka, H.O. Comparison of Different Remotely Sensed Data Sources for Detection of Presence of Standing Dead Trees Using a Tree-Based Approach. Remote Sens. 2023, 15, 2223. [Google Scholar] [CrossRef]
- Yrttimaa, T.; Saarinen, N.; Luoma, V.; Tanhuanpää, T.; Kankare, V.; Liang, X.; Hyyppä, J.; Holopainen, M.; Vastaranta, M. Detecting and characterizing downed dead wood using terrestrial laser scanning. ISPRS J. Photogramm. Remote Sens. 2019, 151, 76–90. [Google Scholar] [CrossRef]
- Marchi, N.; Pirotti, F.; Lingua, E. Airborne and Terrestrial Laser Scanning Data for the Assessment of Standing and Lying Deadwood: Current Situation and New Perspectives. Remote Sens. 2018, 10, 1356. [Google Scholar] [CrossRef]
- Kluczek, M.; Zagajewski, B. Mapping spatiotemporal mortality patterns in spruce mountain forests using Sentinel-2 data and environmental factors. Ecol. Inform. 2025, 86, 103074. [Google Scholar] [CrossRef]
- Majidi, S.; Babapour, G.; Shah-Hosseini, R. An encoder–decoder network for land cover classification using a fusion of aerial images and photogrammetric point clouds. Surv. Rev. 2024, 57, 55–64. [Google Scholar] [CrossRef]
- Alshari, E.A.; Abdulkareem, M.B.; Gawali, B.W. Classification of land use/land cover using artificial intelligence (ANN-RF). Front. Artif. Intell. 2023, 5, 964279. [Google Scholar] [CrossRef] [PubMed]
- Tejasree, G.; Agilandeeswari, L. Land use/land cover (LULC) classification using deep-LSTM for hyperspectral images. Egypt. J. Remote Sens. Space Sci. 2024, 27, 52–68. [Google Scholar] [CrossRef]
- Islam, T.; Islam, R.; Uddin, P.; Ulhaq, A. Spectrally Segmented-Enhanced Neural Network for Precise Land Cover Object Classification in Hyperspectral Imagery. Remote Sens. 2024, 16, 807. [Google Scholar] [CrossRef]
- Cao, C.; Dragićević, S.; Li, S. Land-Use Change Detection with Convolutional Neural Network Methods. Environments 2019, 6, 25. [Google Scholar] [CrossRef]
- Yao, X.; Yang, H.; Wu, Y.; Wu, P.; Wang, B.; Zhou, X.; Wang, S. Land Use Classification of the Deep Convolutional Neural Network Method Reducing the Loss of Spatial Features. Sensors 2019, 19, 2792. [Google Scholar] [CrossRef]
- Al-Najjar, H.A.H.; Kalantar, B.; Pradhan, B.; Saeidi, V.; Halin, A.A.; Ueda, N.; Mansor, S. Land Cover Classification from fused DSM and UAV Images Using Convolutional Neural Networks. Remote Sens. 2019, 11, 1461. [Google Scholar] [CrossRef]

Number of reference polygons (area in ha) per class and variant:

| Class Name | Variant A | Variant B |
|---|---|---|
| Alpine natural grasslands | 382 (7.93) | - |
| Betulo-Adenostyletea | - | 192 (0.99) |
| Juncetea trifidi | - | 389 (3.34) |
| Nardo-Callunetea | - | 151 (0.63) |
| Salicetea herbaceae | - | 102 (0.48) |
| Seslerietea variae | - | 171 (2.50) |
| Asphalt | 260 (0.81) | 260 (0.81) |
| Bare rocks and rock debris | 143 (11.31) | 143 (11.31) |
| Broad-leaved forest | 259 (3.14) | 259 (3.14) |
| Built-up areas | 504 (3.02) | 504 (3.02) |
| Coniferous forest | 141 (13.15) | 141 (13.15) |
| Lying deadwood | 97 (0.08) | 97 (0.08) |
| Dwarf pine | 122 (7.07) | 122 (7.07) |
| No-vegetation area | 100 (0.24) | 100 (0.24) |
| Seminatural grassland–meadows | 258 (19.78) | 258 (19.78) |
| Shadow | 400 (219.69) | 400 (219.69) |
| Standing deadwood | 1005 (12.08) | 1005 (12.08) |
| Subalpine dwarf scrub communities | 297 (5.84) | 297 (5.84) |
| Waterbodies | 388 (47.77) | 388 (47.77) |
Number of reference polygons and pixels per class (x = class absent from that variant):

| Class Name | Polygons, Var. A, Train | Polygons, Var. A, Val. | Polygons, Var. B, Train | Polygons, Var. B, Val. | Pixels, Var. A, Train | Pixels, Var. A, Val. | Pixels, Var. B, Train | Pixels, Var. B, Val. |
|---|---|---|---|---|---|---|---|---|
| Alpine natural grasslands | 635 | 370 | x | x | 23,371 | 13,506 | x | x |
| Betulo-Adenostyletea | x | x | 121 | 71 | x | x | 3570 | 2145 |
| Juncetea trifidi | x | x | 246 | 143 | x | x | 9812 | 5069 |
| Nardo-Callunetea | x | x | 95 | 56 | x | x | 2623 | 1306 |
| Salicetea herbaceae | x | x | 65 | 37 | x | x | 2087 | 1196 |
| Seslerietea variae | x | x | 108 | 63 | x | x | 6176 | 2893 |
| Asphalt | 164 | 95 | 164 | 96 | 4842 | 2806 | 5224 | 2424 |
| Bare rocks and rock debris | 164 | 96 | 164 | 95 | 7825 | 5343 | 8815 | 4353 |
| Broad-leaved forest | 256 | 147 | 256 | 147 | 12,451 | 7644 | 12,512 | 7583 |
| Built-up areas | 188 | 109 | 188 | 109 | 6929 | 3572 | 6708 | 3793 |
| Coniferous forest | 90 | 53 | 90 | 53 | 10,306 | 5617 | 9802 | 6121 |
| Lying deadwood | 163 | 95 | 163 | 95 | 5920 | 2745 | 548 | 284 |
| Dwarf pine | 63 | 37 | 63 | 37 | 5920 | 2745 | 5756 | 2909 |
| No-vegetation area | 245 | 143 | 245 | 143 | 1655 | 1005 | 1628 | 1196 |
| Seminatural grassland–meadows | 89 | 52 | 89 | 52 | 9210 | 5434 | 9370 | 5274 |
| Shadow | 319 | 185 | 319 | 185 | 38,437 | 25,747 | 39,289 | 24,895 |
| Standing deadwood | 77 | 45 | 77 | 45 | 8201 | 4874 | 8372 | 4703 |
| Subalpine dwarf scrub communities | 241 | 141 | 251 | 141 | 13,152 | 7645 | 13,505 | 7292 |
| Waterbodies | 61 | 36 | 61 | 36 | 9010 | 4787 | 8535 | 5262 |
| Classified \ Reference | Dwarf Pine | Waterbodies | Seminatural Grassland–Meadows | Alpine Natural Grasslands | Standing Deadwood | Asphalt | Broad-Leaved Forest | Shadow | Subalpine Dwarf Scrub Communities | No-Vegetation Area | Lying Deadwood | Built-Up Areas | Coniferous Forest | Bare Rocks and Rock Debris | User Acc. (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Dwarf pine | 2672 | 0 | 1 | 1 | 1 | 0 | 37 | 0 | 23 | 0 | 0 | 0 | 10 | 0 | 97.34 |
| Waterbodies | 0 | 4751 | 0 | 0 | 0 | 0 | 0 | 26 | 0 | 0 | 0 | 1 | 0 | 9 | 99.25 |
| Seminatural grassland–meadows | 2 | 0 | 5240 | 58 | 4 | 0 | 2 | 0 | 0 | 16 | 0 | 0 | 111 | 1 | 96.43 |
| Alpine natural grasslands | 3 | 0 | 85 | 12,027 | 0 | 0 | 77 | 13 | 1261 | 5 | 2 | 0 | 0 | 33 | 89.05 |
| Standing deadwood | 4 | 0 | 4 | 1 | 4533 | 0 | 5 | 119 | 0 | 18 | 2 | 6 | 174 | 8 | 93.00 |
| Asphalt | 0 | 0 | 0 | 0 | 31 | 2636 | 0 | 0 | 0 | 22 | 1 | 92 | 0 | 24 | 93.94 |
| Broad-leaved forest | 77 | 0 | 10 | 221 | 38 | 0 | 6927 | 6 | 278 | 3 | 0 | 0 | 84 | 0 | 90.62 |
| Shadow | 8 | 476 | 0 | 0 | 32 | 0 | 2 | 25,054 | 0 | 0 | 0 | 9 | 166 | 0 | 97.31 |
| Subalpine dwarf scrub communities | 19 | 0 | 1 | 2317 | 0 | 0 | 139 | 1 | 5168 | 0 | 0 | 0 | 0 | 0 | 67.60 |
| No-vegetation area | 0 | 0 | 5 | 6 | 26 | 12 | 16 | 0 | 0 | 932 | 2 | 4 | 0 | 2 | 92.74 |
| Lying deadwood | 0 | 0 | 0 | 2 | 55 | 1 | 0 | 0 | 0 | 6 | 187 | 5 | 0 | 49 | 61.31 |
| Built-up areas | 0 | 19 | 0 | 0 | 117 | 95 | 6 | 46 | 0 | 36 | 8 | 3149 | 2 | 94 | 88.16 |
| Coniferous forest | 154 | 1 | 55 | 0 | 60 | 0 | 37 | 190 | 0 | 0 | 0 | 2 | 5118 | 0 | 91.12 |
| Bare rocks and rock debris | 12 | 1 | 1 | 41 | 32 | 2 | 8 | 12 | 1 | 15 | 20 | 49 | 28 | 5121 | 95.85 |
| Producer Acc. (%) | 90.55 | 90.53 | 97.00 | 81.96 | 91.97 | 95.99 | 95.47 | 98.38 | 76.78 | 88.51 | 84.23 | 94.94 | 89.90 | 95.88 | |
| F1-score | 0.94 | 0.95 | 0.97 | 0.85 | 0.92 | 0.95 | 0.93 | 0.98 | 0.72 | 0.91 | 0.71 | 0.91 | 0.91 | 0.96 |
| Classified \ Reference | Dwarf Pine | Waterbodies | Seminatural Grassland–Meadows | Betulo-Adenostyletea | Standing Deadwood | Asphalt | Broad-Leaved Forest | Shadow | Subalpine Dwarf Scrub Communities | No-Vegetation Area | Lying Deadwood | Built-Up Areas | Coniferous Forest | Bare Rocks and Rock Debris | Juncetea trifidi | Nardo-Callunetea | Salicetea herbaceae | Seslerietea variae | User Acc. (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Dwarf pine | 2744 | 0 | 4 | 0 | 6 | 0 | 30 | 17 | 14 | 0 | 0 | 0 | 94 | 0 | 0 | 0 | 0 | 0 | 94.33 |
| Waterbodies | 0 | 5232 | 0 | 0 | 0 | 0 | 0 | 27 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 99.43 |
| Seminatural grassland–meadows | 0 | 0 | 4969 | 4 | 4 | 0 | 2 | 0 | 0 | 19 | 0 | 0 | 71 | 0 | 34 | 48 | 1 | 122 | 94.22 |
| Betulo-Adenostyletea | 1 | 0 | 0 | 894 | 0 | 0 | 51 | 0 | 111 | 0 | 0 | 0 | 0 | 1 | 707 | 118 | 42 | 220 | 41.68 |
| Standing deadwood | 17 | 0 | 15 | 0 | 4293 | 0 | 7 | 149 | 0 | 25 | 6 | 1 | 184 | 6 | 0 | 0 | 0 | 0 | 91.28 |
| Asphalt | 0 | 0 | 0 | 0 | 20 | 2299 | 0 | 0 | 0 | 2 | 0 | 89 | 0 | 14 | 0 | 0 | 0 | 0 | 94.84 |
| Broad-leaved Forest | 106 | 0 | 18 | 82 | 32 | 0 | 6778 | 10 | 336 | 2 | 0 | 0 | 67 | 3 | 35 | 7 | 72 | 35 | 89.38 |
| Shadow | 3 | 662 | 0 | 0 | 28 | 0 | 2 | 24,025 | 0 | 0 | 0 | 5 | 168 | 2 | 0 | 0 | 0 | 0 | 96.51 |
| Subalpine dwarf scrub communities | 23 | 0 | 0 | 42 | 3 | 0 | 140 | 3 | 5249 | 0 | 0 | 0 | 0 | 0 | 1287 | 103 | 99 | 343 | 71.98 |
| No-vegetation area | 0 | 0 | 3 | 0 | 54 | 20 | 6 | 0 | 0 | 898 | 9 | 4 | 0 | 14 | 1 | 2 | 12 | 9 | 87.02 |
| Lying deadwood | 0 | 0 | 0 | 0 | 45 | 0 | 0 | 0 | 0 | 7 | 196 | 1 | 0 | 33 | 0 | 0 | 2 | 0 | 69.01 |
| Built-up areas | 0 | 4 | 0 | 0 | 208 | 260 | 3 | 135 | 0 | 76 | 24 | 2949 | 3 | 131 | 0 | 0 | 0 | 0 | 77.75 |
| Coniferous forest | 131 | 5 | 83 | 0 | 109 | 0 | 23 | 301 | 0 | 0 | 0 | 0 | 5468 | 1 | 0 | 0 | 0 | 0 | 89.33 |
| Bare rocks and rock debris | 9 | 2 | 1 | 0 | 31 | 2 | 7 | 14 | 0 | 0 | 13 | 32 | 20 | 4216 | 0 | 0 | 4 | 2 | 96.85 |
| Juncetea trifidi | 0 | 0 | 2 | 262 | 0 | 0 | 24 | 0 | 938 | 0 | 0 | 0 | 0 | 7 | 3148 | 386 | 134 | 168 | 62.10 |
| Nardo-Callunetea | 0 | 0 | 0 | 6 | 0 | 0 | 1 | 0 | 79 | 0 | 0 | 0 | 0 | 0 | 270 | 693 | 28 | 229 | 53.06 |
| Salicetea herbaceae | 1 | 0 | 0 | 65 | 0 | 0 | 5 | 0 | 356 | 0 | 1 | 0 | 0 | 25 | 266 | 47 | 316 | 114 | 26.42 |
| Seslerietea variae | 1 | 0 | 35 | 77 | 0 | 0 | 3 | 13 | 258 | 0 | 0 | 0 | 0 | 2 | 531 | 138 | 65 | 1770 | 61.18 |
| Producer acc. (%) | 90.38 | 88.60 | 96.86 | 62.43 | 88.83 | 89.07 | 95.71 | 97.29 | 71.50 | 87.27 | 78.71 | 95.72 | 90.01 | 94.57 | 50.14 | 44.94 | 40.77 | 58.76 | |
| F1-score | 0.92 | 0.94 | 0.96 | 0.50 | 0.90 | 0.92 | 0.92 | 0.97 | 0.72 | 0.87 | 0.74 | 0.86 | 0.90 | 0.96 | 0.55 | 0.49 | 0.32 | 0.60 |
| Study | Algorithm | Datasets | Resolution (m) | Number of Land Cover Classes | Study Area (km²) | OA (%) |
|---|---|---|---|---|---|---|
| Cao et al. (2019) [42] | CNN | UC-Merced Land Use Dataset | 10; 30 | 6 | 7.2 | 95–98% |
| Gao et al. (2019) [21] | U-Net | RGB, NIR, NDVI, and DEM | 20 | 5 | 133 | 90.60% |
| Yao et al. (2019) [43] | DCNN, U-Net, SegNet, and Deeplab-V3 | IRRG | 0.05; 0.09 | 5 | Not provided/regional | 81.36–89.48% |
| Al-Najjar et al. (2019) [44] | CNN | RGB, DSM | 0.1 | 7 | ~1.68 | 98% |
| Ayhan et al. (2020) [32] | DeepLabV3+ and CNN | RGB, NIR, NDVI | 10; 20 | 4 | ~1 | 80.15–82.98% |
| Kwan et al. (2020) [33] | CNN | RGB, NIR, LiDAR | 2.5 | 15 | Not provided/local | 61.79–87.96% |
| Alshari et al. (2023) [39] | ANN and ANN-RF | RGB | 10; 30 | 7 | Regional (thousands of square km) | 61.69–82.52% |
| Sawant et al. (2023) [18] | U-Net | RGB | 10 | 7 | Not provided/regional | 88–99% |
| Tejasree et al. (2024) [40] | 2D-CNN, 3D-CNN, LSTM, and Deep-LSTM | RGB | 1.3; 10 | 9, 16 | Not provided/local | 82.94–98.46% |
| Islam et al. (2024) [41] | SENN and CNN | RGB | 1.3; 3.7 | 9, 16 | Not provided/local | 96.32–99.58% |
| This work | CNN | RGB | 0.12 | 14, 18 | ~211 | 86–92% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Raczko, E.; Kycko, M.; Kluczek, M. Land Cover Type Classification Using High-Resolution Orthophotomaps and Convolutional Neural Networks: Case Study of Tatra National Park. Remote Sens. 2026, 18, 114. https://doi.org/10.3390/rs18010114