The Potential of U-Net in Detecting Mining Activity: Accuracy Assessment Against GEE Classifiers
Abstract
1. Introduction
ID | Sensor/Data | Signal/Index | Primary Task |
---|---|---|---|
1 | Landsat 8/9, Sentinel-2 | NDVI, NDMI, NDWI/MNDWI, time series | Detect deforestation fronts, open pits, river sediment plumes |
2 | PlanetScope (3–5 m) VHR | True color, texture, plume extent | Near-operational hotspot mapping |
3 | Sentinel-1 SAR | Amplitude/texture change, coherence loss, DInSAR | All-weather detection of pits, slope instability |
4 | Optical + SAR fusion (S2 + S1, ALOS PALSAR) | Multispectral + backscatter | Large-area mapping of disturbed surfaces |
5 | Airborne LiDAR + VHR | Elevation change, canopy removal | Quantify carbon loss; detect micro-features |
6 | Deep learning segmentation (U-Net, FCN) | Spectral + spatial patterns | Semantic segmentation of mining/tailings |
7 | Nighttime lights (VIIRS/DNB) | Radiance anomalies | Proxy detection of mining camp activity |
ID | Pros | Cons/Limitations | Reported Accuracy | Country |
---|---|---|---|---|
1 | Long-term archive; free/open; large AOIs | Cloud cover; 10–30 m may miss small sites | OA 85–92%, F1 ∼0.80–0.88 | Peru, Venezuela, Indonesia [3,4,6,11,12,13,16,17] |
2 | High spatial detail; daily revisit | Commercial; limited history | Visual confirmation, >90% correct ID | Peru, Venezuela [3,4,11,12,16,17] |
3 | Cloud-independent; sensitive to roughness/moisture | Lower thematic specificity without optical | OA ∼82%, F1 ∼0.78 | Indonesia, Myanmar [6,13,18] |
4 | Complements sensor limits; better under mixed conditions | Co-registration; complex processing | OA up to 94%, F1 ∼0.90 | Indonesia [6,13,18,19] |
5 | High accuracy; 3D capability | Expensive; limited coverage | RMSE ∼0.15 m; >99% detection pits >0.02 ha | Peru [3,4,11,12] |
6 | High delineation accuracy; transferable models | Needs large training sets | OA 92–96%, F1 ∼0.91–0.94 | Peru, Indonesia [3,4,6,11,12,13,18] |
7 | Cloud-independent; unaffected by vegetation cover | Low resolution; light sources from settlements or fires may confound interpretation | not available | Cross-regional (tropics) |
- Standard LULC datasets often lack dedicated mining classes [36];
- Proposing a mining-aware LULC classification scheme that incorporates seasonally distinct crop phases and explicitly introduces a dedicated quarry class;
- Developing a workflow that integrates U-Net deep learning with Google Earth Engine classifiers, and emphasizes the importance of proper accuracy metric interpretation in imbalanced datasets;
- Demonstrating the applicability of the proposed workflow for detecting and mapping potentially illegal mining sites by combining remote sensing classification with open geospatial datasets.
2. Materials and Methods
- U-Net dataset repository (https://github.com/bh-del/lulc_mining).
2.1. Test Site and Benchmarks
- 1. Level-1 detail, characterized by general categories covering basic types of land use and cover:
  - (a) EuroSAT 2018 [32]. Ten classes: industrial, pasture, river, forest, annual crop, permanent crop, highway, herbaceous vegetation, residential, sea lake. Ten countries: Austria, Belgium, Finland, Ireland, Kosovo, Lithuania, Luxembourg, Portugal, Serbia, and Switzerland.
  - (b) LandCoverNet 2020. Seven classes: snow/ice, water, bare ground artificial, bare ground natural, woody, vegetation cultivated, vegetation semi-natural. Scope: global.
- 2. Level-3 detail, covering more detailed categories of land use and cover:
  - (a) BigEarthNet. Nineteen classes. Ten countries: the same as those included in EuroSAT.
  - (b) MultiSenNA. Fourteen classes. Area: eastern France.
  - (c) SeasoNet. Thirty-three classes. Area: Germany.
2.1.1. Limitations of Existing LULC Datasets for Mapping Mineral Extraction Sites in Poland
- BigEarthNet lacks any class directly corresponding to mineral extraction.
- MultiSenNA and SeasoNet do include such classes (Class 12, Open Space, Mineral, and Class 7, Mineral Extraction Sites, respectively), but they are geographically restricted to France and Germany, respectively, and are not calibrated for Polish landscapes.
2.1.2. CORINE
- Level 1 encompasses five main land cover types: artificial surfaces, agricultural areas, forests and semi-natural ecosystems, wetlands, and water bodies.
- Level 2 differentiates 15 land cover forms.
- Level 3 specifies 44 classes.
2.2. Proposed Mining-Aware LULC Scheme
- Conifer forest;
- Mixed forest;
- Urban;
- Crops: fields before harvest, or in spring before agrotechnical treatments are carried out;
- Bare soils;
- Permanent grassland;
- Roads;
- Waters;
- Crops in vegetation stage;
- Quarries, open pits.
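For reference, the scheme can be encoded as a simple class-ID mapping. The numbering follows the order of the list above; the assignments for Classes 4, 5, 9, and 10 are confirmed in the Conclusions (e.g., "Class 5: bare soil vs. Class 10: quarries and pits"), while the exact label strings are ours:

```python
# Mining-aware LULC scheme; IDs follow the order of the class list above.
LULC_CLASSES = {
    1: "Conifer forest",
    2: "Mixed forest",
    3: "Urban",
    4: "Crops (before harvest / early spring)",
    5: "Bare soils",
    6: "Permanent grassland",
    7: "Roads",
    8: "Waters",
    9: "Crops in vegetation stage",
    10: "Quarries, open pits",
}

QUARRY_CLASS = 10  # the dedicated mining class introduced by this scheme
```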
2.3. Comparison of Deep Learning and GEE-Based Classification Methods
2.3.1. U-Net Model Training and Evaluation
- U-Net++: Expected to improve classification accuracy but with substantially higher computational requirements;
- FCN: A lighter architecture with lower computational load, but potentially weaker performance in delineating small or irregular features;
- U-Net: Selected as the most suitable compromise between accuracy, efficiency, and ease of implementation.
- Increased the number of convolutions per block from two to three.
- Added BatchNorm2d (batch normalization) after each convolution.
- Used padding = same to preserve spatial dimensions.
- Added ReplicationPad2d in the decoder before concatenation.
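The listed modifications can be sketched as a minimal PyTorch building block. Channel widths, module names, and the way spatial mismatches are handled before concatenation are our assumptions for illustration, not details taken from the paper:

```python
import torch
import torch.nn as nn

class TripleConv(nn.Module):
    """Three Conv3x3 + BatchNorm + ReLU layers (the original U-Net uses two
    convolutions without batch normalization); padding='same' keeps the
    spatial size unchanged."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        layers = []
        for i in range(3):  # number of convolutions increased from 2 to 3
            layers += [
                nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                          kernel_size=3, padding="same"),
                nn.BatchNorm2d(out_ch),  # added batch normalization
                nn.ReLU(inplace=True),
            ]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

class DecoderBlock(nn.Module):
    """Upsample, replication-pad to match the skip tensor, concatenate,
    then apply the triple convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = TripleConv(in_ch, out_ch)  # in_ch = up + skip channels

    def forward(self, x, skip):
        x = self.up(x)
        # ReplicationPad2d before concatenation, covering odd-size mismatches
        dh = skip.shape[-2] - x.shape[-2]
        dw = skip.shape[-1] - x.shape[-1]
        x = nn.ReplicationPad2d((0, dw, 0, dh))(x)
        return self.conv(torch.cat([skip, x], dim=1))
```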
- Training and validation:
- Strzegom: 20210619, 20220619, 20220719, 20221012, 20230209, 20230301;
- Kolbuszowa: 20210327, 20210411, 20210509, 20210728, 20210906;
- Krakow: 20220603, 20220603, 20220603.
- Testing:
- Strzegom: 20230709;
- Kolbuszowa: 20210725.
- The use or omission of class weights (class balancing).
- Different values of two hyperparameters, the learning rate and the weight decay, with the CrossEntropyLoss loss function.
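The examined variants can be organized as a simple grid. The weight-decay values below match the sweep reported in the hyperparameter table ({0.01, 0.001, 0.0001}); the learning rates, the inverse-frequency weighting choice, the class counts, and the stand-in model are illustrative assumptions:

```python
import itertools
import torch
import torch.nn as nn

learning_rates = [1e-3, 1e-4]        # illustrative values
weight_decays = [1e-2, 1e-3, 1e-4]   # the reported sweep

# One common balancing choice: inverse-frequency class weights.
# The counts below are placeholders, not the paper's pixel counts.
class_counts = torch.tensor([1200., 7500., 600., 7300., 1150.,
                             600., 90., 650., 11000., 7000.])
class_weights = class_counts.sum() / (len(class_counts) * class_counts)

configs = []
for lr, wd, balanced in itertools.product(learning_rates, weight_decays,
                                          [True, False]):
    criterion = nn.CrossEntropyLoss(
        weight=class_weights if balanced else None)
    model = nn.Conv2d(10, len(class_counts), kernel_size=1)  # U-Net stand-in
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd)
    configs.append((lr, wd, balanced, model, criterion, optimizer))
# ...each configuration would then be trained and validated on the patches.
```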
2.3.2. Supervised Classification in Google Earth Engine
- 10 m: B2 (blue), B3 (green), B4 (red), B8 (NIR).
- 20 m: B5, B6, B7, B8A, B11, B12.
- 60 m: B1 and B9 (B10 is omitted in L2A).
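Grouping the L2A bands by native resolution makes the band selection explicit. That the ten model input bands are the 10 m and 20 m groups (matching the ten-band Sentinel-2 input mentioned elsewhere in the paper) is our assumption:

```python
# Sentinel-2 L2A bands grouped by native ground resolution (metres).
S2_BANDS = {
    10: ["B2", "B3", "B4", "B8"],               # blue, green, red, NIR
    20: ["B5", "B6", "B7", "B8A", "B11", "B12"],
    60: ["B1", "B9"],                           # B10 is omitted in L2A
}

# The 10 m and 20 m groups together give ten bands for classification
# (an assumption here); the 60 m bands are mainly atmospheric.
MODEL_BANDS = S2_BANDS[10] + S2_BANDS[20]
```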
2.3.3. Google Earth Engine: Data Selection and Preprocessing
- Classification and Regression Trees (CART)—A rule-based, non-parametric method that recursively splits the data into binary decision trees to maximize class purity [52].
- Random Forest (RF)—An ensemble learning method that aggregates predictions from multiple decision trees trained on bootstrap samples. RF is robust to overfitting and handles high-dimensional data well [53].
- Support Vector Machine (SVM)—A margin-based classifier that constructs an optimal hyperplane to separate classes. We used a linear kernel and C-SVM formulation [54].
- Selecting the region of interest (Strzegom) and Sentinel-2 scene;
- Extracting and clipping spectral bands to ROI;
- Sampling training pixels from labeled vector data;
- Training three classifiers: CART, RF (10 trees) and SVM (linear kernel);
- Applying the models to classify the image;
- Visualization of classification maps using predefined palettes;
- Export of classified outputs;
- Performance evaluation using accuracy metrics derived from confusion matrices.
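The workflow above runs in the Earth Engine cloud; as a hedged local sketch, the same three classifier configurations (a CART-style decision tree, RF with 10 trees, and a linear-kernel C-SVM) can be reproduced with scikit-learn, standing in for GEE's implementations. The "sampled pixels" below are synthetic and invented purely for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(0)

def make_pixels(n_per_class: int, n_classes: int = 3, n_bands: int = 10):
    """Synthetic stand-in for sampled training pixels: 10 band values per
    pixel, with class-dependent mean shifts for separability."""
    X = rng.normal(size=(n_per_class * n_classes, n_bands))
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X + 2.0 * y[:, None], y

X_train, y_train = make_pixels(100)
X_test, y_test = make_pixels(30)

models = {
    "CART": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=10, random_state=0),  # 10 trees
    "SVM": SVC(kernel="linear", C=1.0),       # linear-kernel C-SVM
}

results = {}
for name, model in models.items():
    y_pred = model.fit(X_train, y_train).predict(X_test)
    results[name] = (accuracy_score(y_test, y_pred),
                     confusion_matrix(y_test, y_pred))
```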
2.4. Code and Data Availability
- GEE script for classification: GEE code editor script (https://code.earthengine.google.com/81bdb7baa344d3599c6b4088599b16bb, accessed on 24 August 2025).
- Training data (vec_train): GEE training points asset (https://code.earthengine.google.com/?asset=projects/urban-atlas-benchmark/assets/train_points_labels, accessed on 24 August 2025).
- Test data (vec_test): GEE test points asset (https://code.earthengine.google.com/?asset=projects/urban-atlas-benchmark/assets/test_points_labels, accessed on 24 August 2025).
2.5. Accuracy Metrics
- Accuracy (ACC) quantifies the proportion of correctly predicted instances out of the total number of samples.
- Specificity (TNR) measures the ability to correctly identify negative cases.
- Overall Accuracy (OA) represents the proportion of all correctly classified pixels (i.e., the sum of all true positives) in the total sample size.
- Producer Accuracy (PA), also known as recall or sensitivity, reflects the ability to correctly classify a given class.
- User Accuracy (UA), or precision, indicates how many of the predicted instances of a class are correct.
- F1-score is the harmonic mean of precision and recall, providing a balanced measure for uneven class distributions.
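These definitions can be checked against the binary case study in Section 2.5.1 (TP = 40, FN = 10, FP = 5, TN = 45). A short sketch:

```python
def binary_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Accuracy metrics derived from a binary confusion matrix."""
    n = tp + fn + fp + tn
    pa = tp / (tp + fn)            # Producer Accuracy (recall/sensitivity)
    ua = tp / (tp + fp)            # User Accuracy (precision)
    return {
        "ACC": (tp + tn) / n,      # proportion of correct predictions
        "TNR": tn / (tn + fp),     # specificity
        "OA": (tp + tn) / n,       # coincides with ACC in the binary case
        "PA": pa,
        "UA": ua,
        "F1": 2 * pa * ua / (pa + ua),
    }

m = binary_metrics(tp=40, fn=10, fp=5, tn=45)
# ACC = OA = 0.85, TNR = 0.90, PA = 0.80, UA ≈ 0.89, F1 ≈ 0.84
```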
2.5.1. Binary Classification Case Study
2.5.2. Multiclass Classification Case Study
2.5.3. Macro-Averaged Metrics
3. Results
3.1. Classification Using the Trained U-Net Network
3.2. Classification Using GEE: CART, RF and SVM
- CART showed balanced performance across classes, with average PA, UA, and F1-score values close to 0.77. Its robustness in identifying less represented classes (e.g., Class 5 and Class 10) was notable.
- RF, while delivering the best OA, exhibited lower recall (PA = 0.7098), indicating a tendency to miss some true class instances, despite strong precision (UA = 0.7295).
- SVM demonstrated better recall (PA = 0.7421) than RF, but with slightly lower overall precision (UA = 0.7385). F1-scores for SVM were overall higher than CART, showing its strength in classifying well-defined land cover types.
3.3. Comparison of U-Net and GEE-Based Classification Approaches
- Overall Accuracy (OA): 94.60%.
- Mean PA: 92.07%.
- Mean UA: 84.04%.
- Mean F1-score: 87.07%.
4. Discussion
4.1. Effectiveness of LULC Classification for the Detection of Open Pits
4.2. Analysis of Detected Areas for Potential Illegal Exploitation
4.3. Training Data Diversity and Methodological Considerations
5. Conclusions
- Remote sensing combined with machine learning enables the effective and scalable detection of open-pit mining activities in complex landscapes. It offers an efficient alternative to manual or field-based monitoring, especially in data-scarce or inaccessible regions.
- Custom LULC classes, such as separating crops in different vegetation stages (e.g., Class 4: crops before harvest or in spring before agrotechnical treatments vs. Class 9: crops in vegetation stage) and distinguishing bare soil from active quarry zones (e.g., Class 5: bare soil vs. Class 10: quarries and pits), significantly improved classification performance by reducing spectral confusion between visually similar categories.
- Deep learning models, especially U-Net, proved to be highly effective for pixel-wise segmentation. They outperformed classical classifiers not only in Overall Accuracy, but also in capturing complex spatial patterns and small or irregular features.
- Accuracy assessment should incorporate class-wise metrics such as Producer’s Accuracy (PA) and User’s Accuracy (UA) in addition to Overall Accuracy (OA), particularly when evaluating rare or ambiguous classes. The mean class accuracy (ACC) gives a biased impression of performance, especially in unbalanced datasets.
- Public benchmark datasets typically lack dedicated mining-related classes, limiting their use for monitoring anthropogenic extraction activities. There is a pressing need for custom or augmented reference datasets to bridge this gap and better reflect local land cover dynamics.
Author Contributions
Funding
Conflicts of Interest
References
- Higher Mining Authority (Wyższy Urząd Górniczy). Activities of Mining Offices in 2015–2023 Related to the Determination of the Increased Fee in Connection with the Conduct of Illegal Mining Operations. 2024. Available online: https://www.wug.gov.pl/o_nas/Dzialalnosc__okregowych_urzedow_gorniczych (accessed on 11 November 2024).
- Michałowska, K.; Pirowski, T.; Głowienka, E.; Szypuła, B.; Malinverni, E. Sustainable Monitoring of Mining Activities: Decision-Making Model Using Spectral Indexes. Remote Sens. 2024, 16, 388.
- Csillik, O.; Asner, G.P. Aboveground carbon emissions from gold mining in the Peruvian Amazon. Environ. Res. Lett. 2020, 15, 014006.
- Monitoring of the Andean Amazon Project (MAAP). MAAP #208: Gold Mining in the Southern Peruvian Amazon, Summary 2021–2024; MAAP Program Brief; Amazon Conservation Association: Washington, DC, USA, 2024.
- Camalan, S.; Cui, K.; Pauca, V.P.; Alqahtani, S.; Silman, M.; Chan, R.; Plemmons, R.J.; Dethier, E.N.; Fernandez, L.E.; Lutz, D.A. Change Detection of Amazonian Alluvial Gold Mining Using Deep Learning and Sentinel-2 Imagery. Remote Sens. 2022, 14, 1746.
- Nursamsi, I.; Sonter, L.J.; Luskin, M.S.; Phinn, S. Feasibility of multi-spectral and radar data fusion for mapping Artisanal Small-Scale Mining: A case study from Indonesia. Int. J. Appl. Earth Obs. Geoinf. 2024, 132, 104015.
- Ngom, N.M.; Baratoux, D.; Bolay, M.; Dessertine, A.; Saley, A.A.; Baratoux, L.; Mbaye, M.; Faye, G.; Yao, A.K.; Kouamé, K.J. Artisanal Exploitation of Mineral Resources: Remote Sensing Observations of Environmental Consequences, Social and Ethical Aspects. Surv. Geophys. 2023, 44, 225–247.
- Nursamsi, I.; Phinn, S.R.; Levin, N.; Luskin, M.S.; Sonter, L.J. Remote sensing of artisanal and small-scale mining: A review of scalable mapping approaches. Sci. Total Environ. 2024, 951, 175761.
- Chen, G.; Jia, Y.; Yin, Y.; Fu, S.; Liu, D.; Wang, T. Remote sensing image dehazing using a wavelet-based generative adversarial networks. Sci. Rep. 2025, 15, 3634.
- Kozinska, P.; Górniak-Zimroz, J. A review of methods in the field of detecting illegal open-pit mining activities. IOP Conf. Ser. Earth Environ. Sci. 2021, 942, 012027.
- Curtis, P.G.; Slay, C.M.; Harris, N.L.; Tyukavina, A.; Hansen, M.C. Heightened levels and seasonal inversion of riverine suspended sediment in response to artisanal gold mining in Madre de Dios, Peru. Proc. Natl. Acad. Sci. USA 2019, 116, 24966–24973.
- Monitoring of the Andean Amazon Project (MAAP). MAAP #96: Gold Mining Deforestation at Record High Levels in the Southern Peruvian Amazon; PlanetScope and Sentinel-2 evidence for La Pampa and Malinowski; Amazon Conservation Association: Washington, DC, USA, 2018.
- Kimijima, S.; Sakakibara, M.; Nagai, M. Characterizing Time-Series Roving Artisanal and Small-Scale Gold Mining Activities in Indonesia Using Sentinel-1 Data. Int. J. Environ. Res. Public Health 2022, 19, 6266.
- Wang, M.; Fang, Z.; Li, X.; Kang, J.; Wei, Y.; Wang, S.; Liu, T. Research on the Prediction Method of 3D Surface Deformation in Filling Mining Based on InSAR-IPIM. Energy Sci. Eng. 2025, 13, 2401–2414.
- Tu, B.; Ren, Q.; Li, J.; Cao, Z.; Chen, Y.; Plaza, A. NCGLF2: Network combining global and local features for fusion of multisource remote sensing data. Inf. Fusion 2024, 104, 102192.
- SOS Orinoco. Presence, Activity and Influence of Organized Armed Groups in Mining Operations South of the Orinoco River; SOS Orinoco: Caracas, Venezuela, 2022.
- SOS Orinoco. SOS Orinoco: Reports on Illegal Mining in the Amazonia and Orinoquia of Venezuela; SOS Orinoco: Caracas, Venezuela, 2021.
- Lin, Y.N.; Park, E.; Wang, Y.; Quek, Y.P.; Lim, J.; Alcantara, E.; Loc, H.H. The 2020 Hpakant Jade Mine Disaster, Myanmar: A multi-sensor investigation for slope failure. ISPRS J. Photogramm. Remote Sens. 2021, 177, 291–305.
- Zhang, J.; Yan, F.; Lyne, V.; Wang, X.; Su, F.; Cao, Q.; He, B. Monitoring of ecological security patterns based on long-term land use changes in Langsa Bay, Indonesia. Int. J. Digit. Earth 2025, 18, 2495740.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- Hu, F.; Xia, G.S.; Hu, J.; Zhang, L. Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery. Remote Sens. 2015, 7, 14680–14707.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
- Zhang, Z.; Liu, Q.; Wang, Y. Road extraction by deep residual U-Net. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753.
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the Computer Vision—ECCV 2018, Munich, Germany, 8–14 September 2018; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 11211, pp. 294–310.
- Garnot, V.S.F.; Landrieu, L.; Giordano, S.; Chehata, N. Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 12325–12334.
- Seydi, S.S.; Ghorbanian, A.; Hasanlou, M.; Ghamisi, P. Crop Classification from Sentinel-2 Time-Series Imagery Using a Dual-Attention Convolutional Neural Network. Remote Sens. 2022, 14, 498.
- Feng, F.; Gao, M.; Liu, R.; Yao, S.; Yang, G. A deep learning framework for crop mapping with reconstructed Sentinel-2 time series images. Comput. Electron. Agric. 2023, 213, 108227.
- Huo, G.; Guan, C. FCIHMRT: Feature Cross-Layer Interaction Hybrid Method Based on Res2Net and Transformer for Remote Sensing Scene Classification. Electronics 2023, 12, 4362.
- Helber, P.; Bischke, B.; Dengel, A.; Borth, D. EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2217–2226.
- Campos-Taberner, M.; Romero-Soriano, A.; Gómez-Chova, L.; Muñoz-Marí, J.; López-Puigdollers, D.; Alonso, L.; Llovería, R.; Garcia-Haro, F.J. Exploring the potential of Sentinel-2 time series for agricultural land use classification under the EU Common Agricultural Policy. Sci. Rep. 2020, 10, 17789.
- Zhao, H.; Duan, S.; Liu, J.; Sun, L.; Reymondin, L. Evaluation of Five Deep Learning Models for Crop Type Mapping Using Sentinel-2 Time Series Images with Missing Information. Remote Sens. 2021, 13, 2790.
- Li, G.; Cui, J.; Han, W.; Zhang, H.; Huang, S.; Chen, P.; Ao, J. Crop type mapping using time-series Sentinel-2 imagery and U-Net in early growth periods in the Hetao irrigation district in China. Comput. Electron. Agric. 2022, 203, 107478.
- Wenger, R. Land-Use-Land-Cover-Datasets. 2024. Available online: https://github.com/r-wenger/land-use-land-cover-datasets (accessed on 19 November 2024).
- Aryal, J.; Sitaula, C.; Frery, A. Land use and land cover (LULC) performance modeling using machine learning algorithms: A case study of the city of Melbourne, Australia. Sci. Rep. 2023, 13, 13510.
- Ren, Z.; Wang, L.; He, Z. Open-Pit Mining Area Extraction from High-Resolution Remote Sensing Images Based on EMANet and FC-CRF. Remote Sens. 2023, 15, 3829.
- Wang, C.; Chang, L.; Zhao, L.; Niu, R. Automatic Identification and Dynamic Monitoring of Open-Pit Mines Based on Improved Mask R-CNN and Transfer Learning. Remote Sens. 2020, 12, 3474.
- Wang, S.; Lu, X.; Chen, Z.; Zhang, G.; Ma, T.; Jia, P.; Li, B. Evaluating the Feasibility of Illegal Open-Pit Mining Identification Using InSAR Coherence. Remote Sens. 2020, 12, 367.
- Kramarczyk, P.; Hejmanowska, B. UNET Neural Network in Agricultural Land Cover Classification Using Sentinel-2. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, XLVIII-1/W3-2023, 85–90.
- Bartold, M.; Kluczek, M.; Dąbrowska-Zielińska, K. An Automated Approach for Updating Land Cover Change Maps Using Satellite Imagery. Econ. Environ. 2025, 92, 1–14.
- Kwoczyńska, B. Analysis of land use changes in the Tri-City metropolitan area based on the multi-temporal classification of Landsat and RapidEye imagery. Geomat. Landmanag. Landsc. 2021, 101–119.
- Alemohammad, H.; Booth, K. LandCoverNet: A global benchmark land cover classification training dataset. arXiv 2020, arXiv:2012.03111.
- Alemohammad, S.H.; Ballantyne, A.; Bromberg Gaber, Y.; Booth, K.; Nakanuku-Diggs, L.; Miglarese, A.H. LandCoverNet: A Global Land Cover Classification Training Dataset, Version 1.0. Radiant MLHub, 2020. Available online: https://staging.source.coop/radiantearth/landcovernet (accessed on 24 August 2025).
- Clasen, K.N.; Hackel, L.; Burgert, T.; Sumbul, G.; Demir, B.; Markl, V. reBEN (BigEarthNet v2.0): A Refined Benchmark Dataset of Sentinel-1 & Sentinel-2 Patches. 549,488 Paired Patches, Pixel-Level Reference Maps, 19-Class Labels Derived from CLC2018, Improved Atmospheric Correction and Spatial Splitting. 2024. Available online: https://bigearth.net (accessed on 24 August 2025).
- Clasen, K.N.; Hackel, L.; Burgert, T.; Sumbul, G.; Demir, B.; Markl, V. reBEN: Refined BigEarthNet Dataset for Remote Sensing Image Analysis. arXiv 2024, arXiv:2407.03653.
- Wenger, R.; Puissant, A.; Weber, J.; Idoumghar, L.; Forestier, G. MultiSenNA: A Multimodal and Multitemporal Benchmark Dataset over Eastern France. 8157 Patches of Sentinel-1 & Sentinel-2, Eastern France Region. Public Dataset Listing on GitHub. 2024. Available online: https://github.com/r-wenger/land-use-land-cover-datasets (accessed on 24 August 2025).
- Wenger, R.; Puissant, A.; Weber, J.; Idoumghar, L.; Forestier, G. MultiSenGE: A Multimodal and Multitemporal Benchmark Dataset for Land Use/Land Cover Remote Sensing Applications. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, V-3-2022, 635–640. Available online: https://isprs-annals.copernicus.org/articles/V-3-2022/635/2022/ (accessed on 24 August 2025).
- Koßmann, D.; Brack, V.; Wilhelm, T. SeasoNet: A Seasonal Scene Classification, Segmentation and Retrieval Dataset for Satellite Imagery over Germany. arXiv 2022, arXiv:2207.09507.
- Koßmann, D.; Brack, V.; Wilhelm, T. SeasoNet: A Seasonal Scene Classification, Segmentation and Retrieval Dataset for Satellite Imagery over Germany. 1,759,830 Sentinel-2 Patches Covering Germany, with Pixel-Level Labels from LBM-DE2018 (CLC 2018); Includes Four Seasons and Snowy Set. Zenodo, 2022. Available online: https://zenodo.org/records/5850307 (accessed on 24 August 2025).
- Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees, 1st ed.; Chapman and Hall/CRC: New York, NY, USA, 1984; p. 368.
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
- Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
- Haifeng, L. Smile: Statistical Machine Intelligence and Learning Engine. Version 2.6.0, 2017. Available online: https://haifengl.github.io/ (accessed on 25 May 2025).
- Foody, G.M. Status of land cover classification accuracy assessment. Remote Sens. Environ. 2002, 80, 185–201.
- Chaves, M.; Picoli, C.; Sanches, I. Recent Applications of Landsat 8/OLI and Sentinel-2/MSI for Land Use and Land Cover Mapping: A Systematic Review. Remote Sens. 2020, 12, 3062.
- Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31.
- Li, Z.; Weng, Q.; Zhou, Y.; Dou, P.; Ding, X. Learning spectral-indices-fused deep models for time-series land use and land cover mapping in cloud-prone areas. Remote Sens. Environ. 2024, 308, 114190.
Year/Ref | Task/Data | Method (NN) | Short Summary |
---|---|---|---|
2019 [32] | LULC; Sentinel-2, 13 bands | CNN (ResNet-50, GoogLeNet) | EuroSAT dataset (27k patches, 64 × 64 px, 10 classes). CNN benchmark; ResNet-50 (RGB) OA ≈ 98.6%. Standard LULC reference. |
2020 [33] | LULC/CAP; Sentinel-2 time series | RNN (interpretability) | CAP use case; explainability of RNNs; key bands and phenological stages identified. |
2020 [28] | Crops; Sentinel-2 TSI (10 bands) | Pixel-Set Encoder + temporal attention | Pixel-set + temporal self-attention; robust to clouds; reduced computation; SOTA accuracy. |
2021 [34] | Crops; dense Sentinel-2 series (with gaps) | CNN/LSTM/GRU | Sequence models outperform classical ML without gap filling; robust to missing observations. |
2022 [29] | Crops; Sentinel-2 TSI | Dual-attention CNN | Spectral + spatial attention; OA ≈ 98.5%, 0.98; exceeds RF/XGB/2D–3D CNNs. |
2022 [35] | Crops; Sentinel-2 TSI (Hetao, China) | U-Net | Early crop ID via key phenological stages; earlier detection (e.g., sunflower ≈ 20 days sooner). |
2023 [30] | Crops; Sentinel-2 TSI (10 bands, 22 dates) | A-BiGRU (attention) | Gap reconstruction + attention; OA ≈ 98%, Macro-F1 ≈ 97.9%, 0.97; beats LSTM/SRNN. |
Component | Original U-Net | Proposed Model |
---|---|---|
Encoder blocks | 2 × (Conv3 × 3, ReLU); MaxPool2 × 2 | 3 × (Conv3 × 3, BN, ReLU); MaxPool2 × 2; padding = same |
Bottleneck | 2 × (Conv3 × 3, ReLU) | 3 × (Conv3 × 3, BN, ReLU); padding = same |
Decoder blocks | ConvTrans2 × 2; skip-concat; 2 × (Conv3 × 3, ReLU) | ConvTrans2 × 2; ReplicationPad2d; skip-concat; 3 × (Conv3 × 3, BN, ReLU) |
Item | Value |
---|---|
Classes | 1–9 |
Epochs | 25 |
Patch size | 100 |
Batch size | 4 |
Balance | 1 |
Loss | FocalLoss |
Optimizer | Adam |
Learning rate | 0.001 |
Weight decay (sweep) | {0.01, 0.001, 0.0001} |
Gamma | 2 |
CUDA | enabled |
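PyTorch has no built-in multiclass FocalLoss, so the configuration above implies a custom implementation. A common formulation consistent with these settings (γ = 2 applied on top of cross-entropy) is sketched below; this is our assumption about the implementation, not code from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Multiclass focal loss: cross-entropy scaled by (1 - p_t)^gamma,
    which down-weights easy, well-classified pixels; gamma = 2 matches
    the training configuration above."""
    def __init__(self, gamma: float = 2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits, target):
        ce = F.cross_entropy(logits, target, reduction="none")
        p_t = torch.exp(-ce)  # predicted probability of the true class
        return ((1 - p_t) ** self.gamma * ce).mean()
```

With gamma = 0 the focal term vanishes and the loss reduces to plain cross-entropy, which is a convenient sanity check.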
Formula | Metric Name |
---|---|
(TP + TN)/(TP + TN + FP + FN) | ACC (Accuracy) |
TN/(TN + FP) | TNR (Specificity/True Negative Rate) |
(Σ TP_i)/N (sum of correctly classified pixels over total sample size N) | OA (Overall Accuracy) |
TP/(TP + FN) | PA (Producer Accuracy/Recall) |
TP/(TP + FP) | UA (User Accuracy/Precision) |
2 · UA · PA/(UA + PA) | F1-score |
Actual/Predicted | Positive | Negative |
---|---|---|
Positive | TP = 40 | FN = 10 |
Negative | FP = 5 | TN = 45 |
Total samples | 100 |
Metric | Calculation | Result |
---|---|---|
ACC = Accuracy | (40 + 45)/100 | 0.85 |
TNR = Specificity | 45/(45 + 5) | 0.90 |
OA = Overall Accuracy | (40 + 45)/100 | 0.85 |
PA = Recall | 40/(40 + 10) | 0.80 |
UA = Precision | 40/(40 + 5) | 0.89 |
F1-score | 2 · 0.89 · 0.80/(0.89 + 0.80) | 0.84 |
Actual/Predicted | A | B | C |
---|---|---|---|
A | 30 | 5 | 5 |
B | 2 | 28 | 10 |
C | 3 | 5 | 22 |
Total samples | 110 |
Class | TP | FP | FN | TN |
---|---|---|---|---|
A | 30 | 5 | 10 | 65 |
B | 28 | 10 | 12 | 60 |
C | 22 | 15 | 8 | 65 |
Class | ACC | TNR | OA | PA (Recall) | UA (Precision) | F1-Score |
---|---|---|---|---|---|---|
A | 0.864 | 0.929 | 0.7270 | 0.750 | 0.857 | 0.799 |
B | 0.800 | 0.857 | 0.7270 | 0.700 | 0.737 | 0.718 |
C | 0.791 | 0.813 | 0.7270 | 0.733 | 0.595 | 0.655 |
Metric | ACC | TNR | OA | PA (Recall) | UA (Precision) | F1-Score |
---|---|---|---|---|---|---|
Mean | 0.818 | 0.866 | 0.7270 | 0.728 | 0.730 | 0.724 |
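The macro-averaged values above can be reproduced directly from the 3 × 3 confusion matrix of the multiclass case study:

```python
import numpy as np

# Multiclass case study (rows = actual, columns = predicted; 110 samples).
cm = np.array([[30,  5,  5],
               [ 2, 28, 10],
               [ 3,  5, 22]], dtype=float)

tp = np.diag(cm)
fn = cm.sum(axis=1) - tp   # missed instances per class
fp = cm.sum(axis=0) - tp   # false alarms per class

oa = tp.sum() / cm.sum()   # one value for the whole matrix (≈ 0.727)
pa = tp / (tp + fn)        # per-class recall
ua = tp / (tp + fp)        # per-class precision
f1 = 2 * pa * ua / (pa + ua)

macro = {"PA": pa.mean(), "UA": ua.mean(), "F1": f1.mean()}
# PA ≈ 0.728, UA ≈ 0.730; exact macro-F1 ≈ 0.725 (the table reports 0.724
# because it averages the already-rounded per-class values).
```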
Class\Ref | 1 | 2 | 3 | 4 | 5 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|---|---|---|---|
1 | 1202 | 3 | 6 | 36 | 0 | 1 | 0 | 11 | 0 |
2 | 549 | 7568 | 83 | 103 | 0 | 2 | 2 | 135 | 3 |
3 | 0 | 0 | 625 | 22 | 35 | 14 | 0 | 2 | 0 |
4 | 2 | 10 | 14 | 7338 | 133 | 17 | 0 | 18 | 7 |
5 | 0 | 0 | 0 | 6 | 1150 | 0 | 0 | 0 | 0 |
7 | 0 | 0 | 8 | 6 | 0 | 59 | 0 | 11 | 0 |
8 | 0 | 1 | 0 | 0 | 0 | 0 | 635 | 30 | 6 |
9 | 0 | 43 | 25 | 534 | 0 | 67 | 0 | 10,927 | 1 |
10 | 0 | 28 | 0 | 2 | 89 | 0 | 4 | 0 | 6998 |
Class\Ref | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
---|---|---|---|---|---|---|---|---|---|
1 | 1105 | 7 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
2 | 42 | 1591 | 1 | 9 | 6 | 0 | 0 | 0 | 0 |
3 | 2 | 0 | 324 | 22 | 4 | 0 | 23 | 0 | 0 |
4 | 0 | 0 | 9 | 2813 | 45 | 11 | 65 | 0 | 3 |
5 | 0 | 0 | 6 | 72 | 1221 | 2 | 2 | 0 | 0 |
6 | 10 | 102 | 18 | 4 | 5 | 530 | 0 | 0 | 1 |
7 | 0 | 0 | 12 | 3 | 6 | 0 | 105 | 0 | 0 |
8 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 21 | 0 |
9 | 15 | 5 | 11 | 21 | 0 | 105 | 0 | 0 | 1105 |
Class | ACC | TNR | PA | UA | F1 |
---|---|---|---|---|---|
1 | 0.9842 | 0.9852 | 0.9547 | 0.6857 | 0.7981 |
2 | 0.9751 | 0.9972 | 0.8962 | 0.9889 | 0.9402 |
3 | 0.9946 | 0.9964 | 0.8954 | 0.8213 | 0.8568 |
4 | 0.9764 | 0.9772 | 0.9733 | 0.9119 | 0.9416 |
5 | 0.9932 | 0.9931 | 0.9948 | 0.8173 | 0.8974 |
7 | 0.9967 | 0.9974 | 0.7024 | 0.3688 | 0.4836 |
8 | 0.9989 | 0.9998 | 0.9449 | 0.9906 | 0.9673 |
9 | 0.9773 | 0.9923 | 0.9422 | 0.9814 | 0.9614 |
10 | 0.9964 | 0.9995 | 0.9827 | 0.9976 | 0.9901 |
Mean | 0.9881 | 0.9931 | 0.9207 | 0.8404 | 0.8707 |
Overall | 0.9464 | – | – | – | – |
Class | ACC | TNR | PA | UA | F1 |
---|---|---|---|---|---|
1 | 0.9919 | 0.9917 | 0.9928 | 0.9412 | 0.9663 |
2 | 0.9818 | 0.9854 | 0.9648 | 0.9331 | 0.9487 |
3 | 0.9885 | 0.9936 | 0.8640 | 0.8482 | 0.8560 |
4 | 0.9720 | 0.9798 | 0.9549 | 0.9552 | 0.9550 |
5 | 0.9844 | 0.9919 | 0.9371 | 0.9487 | 0.9429 |
6 | 0.9727 | 0.9866 | 0.7910 | 0.8179 | 0.8042 |
7 | 0.9882 | 0.9903 | 0.8333 | 0.5357 | 0.6522 |
8 | 0.9998 | 1.0000 | 0.9130 | 1.0000 | 0.9545 |
9 | 0.9830 | 0.9995 | 0.8756 | 0.9964 | 0.9321 |
Mean | 0.9847 | 0.9910 | 0.9030 | 0.8863 | 0.8902 |
Overall | 0.9311 | – | – | – | – |
Model | OA | PA (Recall) | UA (Precision) | F1-Score |
---|---|---|---|---|
CART (GEE) | 0.7679 | 0.7701 | 0.7672 | 0.7657 |
RF (GEE) | 0.8000 | 0.7098 | 0.7295 | 0.7100 |
SVM (GEE) | 0.7857 | 0.7421 | 0.7385 | 0.7241 |
Class | ACC | TNR | PA | UA | F1 |
---|---|---|---|---|---|
1 | 0.9643 | 0.9741 | 0.7000 | 0.5000 | 0.5833 |
2 | 0.9357 | 0.9673 | 0.7143 | 0.7576 | 0.7353 |
3 | 0.9036 | 0.9238 | 0.8429 | 0.7867 | 0.8138 |
4 | 0.8964 | 0.9429 | 0.7571 | 0.8154 | 0.7852 |
5 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
7 | 0.9214 | 0.9538 | 0.5000 | 0.4545 | 0.4762 |
8 | 0.9679 | 0.9885 | 0.7000 | 0.8235 | 0.7568 |
9 | 0.9500 | 0.9720 | 0.7667 | 0.7667 | 0.7667 |
10 | 0.9964 | 1.0000 | 0.9500 | 1.0000 | 0.9744 |
Mean | 0.9484 | 0.9692 | 0.7701 | 0.7672 | 0.7657 |
Overall | 0.7679 | – | – | – | – |
Class | ACC | TNR | PA | UA | F1 |
---|---|---|---|---|---|
1 | 0.9571 | 0.9667 | 0.7000 | 0.4375 | 0.5385 |
2 | 0.9500 | 0.9837 | 0.7143 | 0.8621 | 0.7813 |
3 | 0.9071 | 0.9095 | 0.9000 | 0.7683 | 0.8289 |
4 | 0.9214 | 0.9619 | 0.8000 | 0.8750 | 0.8358 |
5 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
7 | 0.9357 | 0.9615 | 0.6000 | 0.5455 | 0.5714 |
8 | 0.9714 | 1.0000 | 0.6000 | 1.0000 | 0.7500 |
9 | 0.9607 | 0.9760 | 0.8333 | 0.8065 | 0.8197 |
10 | 0.9964 | 1.0000 | 0.9500 | 1.0000 | 0.9744 |
Mean | 0.9600 | 0.9759 | 0.7098 | 0.7295 | 0.7100 |
Overall | 0.8000 | – | – | – | – |
Class | ACC | TNR | PA | UA | F1 |
---|---|---|---|---|---|
1 | 0.9750 | 0.9741 | 1.0000 | 0.5882 | 0.7407 |
2 | 0.9536 | 0.9714 | 0.8286 | 0.8056 | 0.8169 |
3 | 0.8893 | 0.9238 | 0.7857 | 0.7746 | 0.7801 |
4 | 0.9179 | 0.9714 | 0.7571 | 0.8983 | 0.8217 |
5 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
7 | 0.9036 | 0.9154 | 0.7500 | 0.4054 | 0.5263 |
8 | 0.9750 | 1.0000 | 0.6500 | 1.0000 | 0.7879 |
9 | 0.9607 | 0.9920 | 0.7000 | 0.9130 | 0.7925 |
10 | 0.9964 | 1.0000 | 0.9500 | 1.0000 | 0.9744 |
Mean | 0.9571 | 0.9748 | 0.7421 | 0.7385 | 0.7241 |
Overall | 0.7857 | – | – | – | – |
Aspect | U-Net (Deep Learning) | GEE Classifiers (CART/RF/SVM) |
---|---|---|
Data Scope | Multi-region (Strzegom, Kolbuszowa, Kraków) | Single-region (Strzegom only) |
Model Type | Convolutional Neural Network (Semantic Segmentation) | Classical ML (decision trees, SVM) |
Implementation | PyTorch (local, offline) | Google Earth Engine (cloud-based) |
Input Data | Sentinel-2 patches (10 bands) | Sentinel-2 scene (10 bands) |
Training Regions | Three regions | One region |
Output Type | Pixel-wise segmentation map | Pixel-wise classification labels |
Strengths | High accuracy, generalization, robust to variability | Fast, simple to implement, no local resources needed |
Limitations | Requires GPU, longer training time, more data | Limited generalization, lower precision in complex areas |
Hejmanowska, B.; Michałowska, K.; Kramarczyk, P.; Głowienka, E. The Potential of U-Net in Detecting Mining Activity: Accuracy Assessment Against GEE Classifiers. Appl. Sci. 2025, 15, 9785. https://doi.org/10.3390/app15179785