Article

CNN-Based Automatic Detection of Beachlines Using UAVs for Enhanced Waste Management in Tailings Storage Facilities

1 KGHM CUPRUM Ltd.—Research and Development Centre, Gen. W. Sikorskiego Street 2-8, 53-659 Wrocław, Poland
2 KGHM Polska Miedź S.A., ul. M. Skłodowskiej-Curie 48, 59-301 Lubin, Poland
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(10), 5786; https://doi.org/10.3390/app15105786
Submission received: 24 April 2025 / Revised: 14 May 2025 / Accepted: 20 May 2025 / Published: 21 May 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Continuous monitoring is key to the safety of critical infrastructure such as tailings storage facilities. Due to the high risk of liquefaction of the dams, it is crucial to keep the water as far as possible from the dam crest. To control the distance between the water and the dam, regular manual inspections need to be carried out. In this article, we propose a method for the automatic detection of the water–beach line based on photographs from an unmanned aerial vehicle (UAV). An algorithm based on the MobileNetV2 convolutional neural network architecture was developed for the classification of images collected by the UAV. Based on the results of this classification, the border between the water and the beach is defined. Several approaches to model training were tested. Accuracy for the validation set reaches up to 97% for individual image fragments.

1. Introduction

Tailings storage facilities (TSFs) are critical in the mining industry, as they safely contain the byproducts of mineral processing, which often include toxic or environmentally hazardous materials. Proper monitoring of TSFs is essential to prevent structural failures, environmental contamination, and potential risks to surrounding communities, ensuring the long-term sustainability and safety of mining operations [1]. In recent years, several research papers introducing the internet of things (IoT), machine learning, and artificial intelligence approaches to the problem of TSF monitoring have been published [2,3,4,5,6,7]. One of the key issues in the management and supervision system of the tailings storage facility is the process of beach formation. The technology for discharging flotation waste involves the transport of coarse-grained material (primarily from specific mining regions) mixed with water through pipelines located on the crest of the dam. As a result of this process, so-called beaches are formed, with a slight slope towards the reservoir and varying widths. The material discharged locally (at specific points) near the dam crest undergoes segregation. Coarser material settles closer to the dams, while finer material is found nearer the decant pond. The coarse material is used in the process of raising the dams. In practice, an operational manual dictates the beach formation procedures to ensure the safety of the tailings facility. The most important procedures include:
  • Maintaining a minimum beach width of 200 m adjacent to the dams with a slope towards the reservoir;
  • Ensuring a 0.5 m elevation difference between the dam crest and the beach surface, as well as its uniform growth;
  • Limiting the formation of one dam section to no longer than three weeks;
  • Controlling the sequence of beach formation for every other section of the TSF.
Currently, information about the beach and dam crest is obtained through site inspections conducted by personnel during periodic inspection drives. This procedure is time-consuming and requires personnel to drive along the crest and walk around certain parts of the beach. Some markers are also used to monitor whether the beach line exceeds the allowable distance from the dam crest. Consequently, there is a growing demand for quick and automated access to up-to-date, accurate, and continuous digital information on the beach line, accompanied by analytical tools that support the beach formation process and offer predictive functions.
An analysis of the market and the literature did not reveal any ready-made or similar solutions that could meet the current challenges faced by operators responsible for such a large-scale facility, although examples of shoreline detection based on various types of data and methods can be found. In [8,9], a broad review of methods for defining and determining the shoreline was presented. The authors indicate various types of shoreline definitions and indicators, as well as types of data that can be used for detection, including historical photographs and maps, aerial photogrammetry, ground-based GPS measurements, LIDAR terrain models, radar imaging (SAR), and video systems. For shoreline detection, it is possible to use multi-spectral data, as proposed by the authors of [10], who used the Direct Difference Water Index (DDWI) and data from a LIDAR system. In turn, in [11], a method for shoreline detection using histogram equalization and adaptive thresholding techniques based on data from the Indian Remote Sensing Satellite was proposed. In [12], SAR data and a neural network method were used to distinguish land from sea. Another example is [13], where the authors used video data to detect the coastline and compared four methods: Shore Line Intensity Maximum, Pixel Intensity Clustering, Artificial Neural Network, and Color Channel Divergence. Images from drones have the potential to provide much more accurate results than, for instance, satellite data. In turn, the use of neural network algorithms seems to be the right direction due to their confirmed high effectiveness in image analysis and classification. Automation of the beach line detection procedure allows results to be obtained faster, reduces labor intensity by eliminating the need for manual image processing, and may produce more accurate results that are resistant to human error.
A review of the current state of knowledge demonstrated the broad range of potential applications of unmanned aerial vehicles (UAVs) and data processing in the mining industry. This technology is becoming increasingly common and mature in both developed and developing countries. UAVs are used in open-pit, underground, and closed mines [14,15,16,17,18]. They are particularly useful in small mines, where expensive surveying equipment or professional expertise is often lacking [19]. It is worth noting that with the advent of UAVs, various sensors have become more miniaturized and intelligent. UAVs offer numerous advantages, including low cost, flexibility, and high precision. However, their main limitations are battery life, harsh working environments (especially in underground mining), weather and lighting conditions, and local regulations and laws [20,21,22,23].
One of the applications of UAVs in the mining industry is topographic control. In [24], the authors compared conventional technologies with photogrammetry in a small open-pit mine in southern Brazil. The results showed that the level of detail achieved by UAV photogrammetry methods is more accurate, denser in information, and faster to obtain compared to traditional techniques, such as GPS point measurements or laser scanning. Given that the active mining area is characterized by constant material movement, numerous surface irregularities, and scattered material, the authors deemed the recorded height differences acceptable within the range of variability. The assessment of the stability and deformation behavior of high rock slopes was the subject of research in [25]. The study used a remote sensing approach to characterize rock slopes at different scales and distances, critical for detecting failure mechanisms. The research demonstrated that UAV datasets are effective in performing structural and geomorphological characterization of large-scale areas. The methodology for creating a 3D model of an open-pit mine based on UAV imagery was described in [26]. The article explains how digital images and photogrammetry techniques can be used to develop accurate 3D geometric models. It highlights the potential use of information contained in the 3D model within a GIS environment, including process monitoring, obtaining geometric information about embankments, slopes, and road infrastructure, and planning logistical and operational processes. A similar application of low-cost digital photogrammetry for monitoring and documenting terrain surfaces affected by mining activities was described in [19]. The study assessed the accuracy of the digital elevation model (DEM) obtained from UAV photogrammetry in a Slovak open-pit mine case study. The DEM accuracy was deemed acceptable by national regulatory standards. In [27], a rapid and low-cost method for monitoring the geomorphological changes occurring over years in open-pit mines was presented. The authors proposed using the structure-from-motion photogrammetric technique to build high-resolution digital elevation models (DEMs). The developed procedure allows for the quantitative estimation of area changes (volumetric changes and extracted tonnage) and the assessment of terrace alterations and the surface extent of the open-pit mine. The authors highlighted the method’s potential for evaluating slope stability and identifying hazards such as landslides. Other examples of using drones for DEM construction can be found in the works of Nguyen et al. [28,29,30].
In [31], the use of Unmanned Aerial Systems (UAS) to bridge the gap between spectral data from satellites or aircraft and ground data is described. UAS can rapidly obtain high-resolution hyperspectral images, but complex geometric and radiometric corrections are required, which are particularly important in geological applications such as raw material detection. The authors presented a new toolbox for processing drone-borne hyperspectral data, including automatic co-registration, mosaicking, georeferencing, and topographic and illumination correction. For the first time, the usefulness of such data for geological studies was demonstrated. The authors of [32] discuss the application of UAVs and machine learning algorithms for topographic modeling and the classification of a detailed geological model, using a case study of a phosphate mine in Brazil. An automated classification model between lithological groups was developed. The authors of [33] present the first application of UAVs in geological research on carbonates in inaccessible and hazardous outcrops. The authors proposed a method used to document the spatial distribution and dimensions of diagenetic dolomite geobodies in a Carboniferous limestone host rock. They emphasized the need for further work on drone remote control, increasing payload capacity, and improving software flexibility. An example of using drones for lithological and structural geological mapping in mining areas was described in [34]. The authors clearly highlight the significant improvements in geological operations through UAV applications, which enhance reliability, safety, and efficiency. The acquired imagery was processed using spectroscopic algorithms and machine learning to generate meaningful 2.5D surface maps. Ground-based techniques and UAVs were used to obtain photogrammetry and hyperspectral VNIR, SWIR, and LWIR images. The developed method allowed for much more efficient ground surveys and the creation of an optimal sampling strategy for further structural, geochemical, and petrological studies. Although drones are commonly used in photogrammetry, their use in geological studies faces challenges related to geometric and radiometric correction. The authors in [35] demonstrated that precise corrections are essential for geological mapping and presented a tool for processing drone-borne hyperspectral data, including geometric and topographic corrections. This was the first time the effectiveness of such data in lithological mapping and mineral exploration was proven. As mentioned in [36,37], point clouds collected by drones can be used to gather, process, analyze, and interpret geological and geotechnical data. These can be applied to rock outcrop analysis and discontinuum modeling. Point clouds obtained from rock outcrops provide insights into fracture orientation, roughness, persistence, and spacing. Furthermore, rock mass quality indicators (e.g., Rock Quality Designation—RQD, or Geological Strength Index—GSI) can be determined. The authors of [38] presented the use of drones for the geotechnical characterization of rock masses in underground mines, where access to unsupported workings is hazardous for personnel. Photogrammetric and thermal imaging (FLIR) techniques were used for structural rock analysis and the identification of loose fragments that may pose a hazard. The research was conducted at the Barrick Golden Sunlight Mine, where UAV flights generated 3D models and collected geological data. 
The results confirm that readily available UAV technologies can serve as an effective geotechnical tool in underground environments.
UAVs have wide applications in search, planning, and rescue operations, which can also be successfully utilized in mining [39,40]. In [41], the authors present a solution for detecting coal fires, monitoring their progress, and identifying potential environmental hazards and risks to local communities. The case study focuses on the Jharia coalfield in India, where fires have been present since mining began in the mid-19th century, both on the surface and underground. In their research, the authors used NOAA/AVHRR and MODIS satellite data (Moderate Resolution Imaging Spectroradiometer) and three models for fire detection in satellite image pixels, namely the thresholding model, the contextual model, and the fuel mask model. A similar use of UAVs for detecting underground coal fires is discussed in [42]. In this case, UAVs were equipped with gas sensors and a method was developed to evaluate the rank of a burning coal seam based on gas coefficients. The obtained characteristics allow for calculating the combustion temperature, burn rate, and theoretical oxygen demand. This, in turn, helps estimate the rate of fire progression, the energy content of the burning coal, and better planning of firefighting and evacuation operations.
Another popular area of application for image processing and drones is the assessment of rock fragmentation after blasting. Ensuring an appropriate degree of rock fragmentation post-blasting directly influences the efficiency of subsequent mining and processing operations. The authors of [43] conducted laboratory-scale studies aimed at evaluating the benefits of using UAVs in this area. The results were compared to conventional methods involving manual data recording by personnel. The use of UAVs allows for significantly improved temporal and spatial resolution of data due to enhanced camera quality, better camera perspective relative to the muck pile, and the automation of data acquisition and processing. In their subsequent work [44], the authors explained several factors affecting the quality and informativeness of the collected data. In another article [45], the authors presented a method for identifying factors contributing to stability problems and extraction challenges in headings with complex geometry in a gold underground mine utilizing sublevel stoping. Since access to the heading was impossible, the use of UAVs provided an opportunity for scanning. The studies demonstrated that UAVs can be employed to make critical decisions regarding operational continuity in any excavation and to calculate pillars and support elements.
UAVs have also found practical applications during procedures related to mine closure and subsequent reclamation efforts. For instance, in [46], a UAV solution was described for monitoring water quality in pit lakes during closure, a requirement frequently mandated by regulatory authorities for mining companies. Traditionally, water samples are manually collected by personnel from boats; however, many sites are inaccessible due to prior damage or pose significant health and safety risks. The authors proposed a solution wherein optimal locations and depths for sampling are determined based on conductivity-temperature-depth (CTD) measurements, along with the minimum number of samples and the physical condition of the pit lake. The proposed UAV solution is equipped with a 2 L water sampling device capable of reaching depths of up to 120 m. Another example of UAV imagery application for supporting reclamation documentation involves the creation of detailed land cover maps, as described in [47]. A limestone quarry was chosen as a case study, where a visual inspection was conducted to assess the development of vegetation, soil cover, and geomorphological features of the area. According to the authors, the inspection of a 10-hectare area took 2 h, while the total time required for data processing was 2 working days. This timeframe is significantly shorter than that of standard methods. Moreover, UAV images provide a much greater level of detail. In [48], drones were utilized to monitor areas affected by acid mine drainage (AMD), exemplified by the reclamation of the Sokolov waste dump in the Czech Republic, where significant amounts of AMD minerals are observed. Mining waste was analyzed for pH, X-ray fluorescence (XRF), and reflectance spectroscopy. The studies demonstrated that UAVs offer a fast, non-invasive, and cost-effective method for monitoring post-mining landscapes.
In this article, we focus on the application of unmanned aerial vehicles for improved TSF monitoring. The article describes the data acquisition process and introduces a neural network-based method for the analysis of the obtained images.

2. Materials and Methods

Data Collection and Preparation

The data for this paper were collected at an operating TSF in Poland using a DJI Matrix 300 RTK unmanned aerial vehicle. A DJI Zenmuse P1 full-frame 45 MP digital camera combined with a 35 mm lens was used for data acquisition (see Figure 1).
The process of converting drone images into a metrically accurate orthophoto map involves several key steps. First, aerotriangulation is performed, which involves detecting thousands of common points between the images and calculating the geometric relationships between them. This is followed by generating a dense point cloud, which forms the basis for creating a digital terrain model (DTM). The orthomosaic is then produced by projecting the original images onto the terrain model, ensuring that the final map is geometrically correct. Common software used for this workflow includes Agisoft Metashape 2.1.4 and Bentley ContextCapture 2023.
During a single drone flight, data were collected to automatically delineate the beach line. The DJI P1 camera captured images covering a terrain width of 170 m. The flight lasted 20 min at an altitude of 120 m, with each image having a ground sample distance (GSD) of 1.5 cm. A total of 528 images were taken, amounting to 4.5 GB of data. The resulting orthophoto map had a pixel resolution of 10 cm. Weather conditions were challenging, with prolonged rainfall prior to the flight and heavy cloud cover at low altitude, reducing visibility.
Examples of the final images composed of the raw ones are presented in Figure 2. For the purposes of this article, seven such fragments were used. The process of data preparation and beach line detection included several steps, presented in Figure 3. First, the line was detected manually by an expert. Then, the image was cropped into square fragments with a resolution of 100 × 100 pixels. The fragments were manually separated into two classes: beach and water. Figure 4 shows example image fragments with assigned labels. The final learning sample included 2769 beach fragments and 2772 water fragments. The data were split into training and validation samples in a 70/30 ratio. Based on these data, a binary image classifier was trained, which allows us to assign either a “beach” or a “water” label to each fragment of the image, based on which the beach line can be defined.
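The tiling and labeling code used by the authors is not published with the article; the following Python sketch illustrates one possible implementation of the steps described above, under the assumption that the expert-drawn line is available as a binary water mask. The file names and purity thresholds are hypothetical.

```python
# A minimal data-preparation sketch (assumptions: an orthophoto fragment and an
# expert-drawn binary water mask exist as files; thresholds are illustrative).
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split

TILE = 100  # fragment size in pixels, as used in this study


def tile_and_label(image: np.ndarray, water_mask: np.ndarray, tile: int = TILE):
    """Cut the image into tile x tile fragments and label each one as beach (0)
    or water (1) based on the share of water pixels in the expert mask."""
    fragments, labels = [], []
    h, w = image.shape[:2]
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            frag = image[y:y + tile, x:x + tile]
            water_share = water_mask[y:y + tile, x:x + tile].mean()
            # Skip mixed fragments lying directly on the beach line.
            if water_share < 0.05:
                fragments.append(frag)
                labels.append(0)          # beach
            elif water_share > 0.95:
                fragments.append(frag)
                labels.append(1)          # water
    return np.array(fragments), np.array(labels)


image = np.asarray(Image.open("orthophoto_fragment.png").convert("RGB"))
water_mask = np.load("expert_water_mask.npy")     # hypothetical expert annotation
X, y = tile_and_label(image, water_mask)

# 70/30 training/validation split, as in the paper.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)
```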
Convolutional Neural Network (CNN) architecture was selected to perform this task. CNNs are a class of deep learning models primarily designed for processing grid-like data, such as images. A typical CNN consists of multiple layers, including convolutional layers, pooling layers, and fully connected layers, which work together to extract and classify features from input data. The core of a CNN is the convolutional layer, where small filters or kernels slide over the input image, performing element-wise multiplication to detect local patterns, such as edges or textures. This is followed by pooling layers, often max pooling, which reduce the spatial dimensions of the data while retaining important information, thus improving computational efficiency and reducing the risk of overfitting. Finally, fully connected layers are used for classification tasks, where the learned features are mapped to output labels. CNNs have been highly successful in various tasks, including image classification, object detection, and segmentation, due to their ability to automatically learn hierarchical features from raw data [49,50].
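For illustration only, the toy model below shows how these layer types (convolution, max pooling, fully connected) are combined in Keras; it is not the network used in this study.

```python
# A toy CNN illustrating the layer types described above; not the study's model.
import tensorflow as tf

toy_cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(100, 100, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # detects local patterns (edges, textures)
    tf.keras.layers.MaxPooling2D(),                    # reduces spatial dimensions
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),      # fully connected feature mapping
    tf.keras.layers.Dense(1, activation="sigmoid"),    # binary output (e.g., beach vs. water)
])
toy_cnn.summary()
```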
For the purpose of this article, a pre-trained MobileNetV2 convolutional model was selected. Compared to heavier models such as ResNet or Inception, MobileNetV2 offers a much smaller model size and faster inference times without a substantial trade-off in performance. For simple image classification tasks, where the dataset may not require the complexity of deeper networks, MobileNetV2 provides a good compromise by being both accurate and computationally efficient. This makes it an excellent choice for applications requiring real-time processing or deployment on devices with limited memory and power constraints [51]. The model training algorithm was implemented in Python 3 using the TensorFlow library. The model architecture is presented in Table 1. The Adam optimizer was used with a learning rate of 0.0001, and cross-entropy was used as the loss function.
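A sketch of a transfer-learning model close to the architecture listed in Table 1 is given below, assuming the standard Keras MobileNetV2 application and its [-1, 1] input preprocessing (corresponding to the True Divide and Subtract rows); the dropout rate and other details not stated in the article are assumptions.

```python
# Sketch of the Table 1 architecture using the Keras MobileNetV2 application.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(100, 100, 3), include_top=False, weights="imagenet")
base.trainable = False  # frozen at first in the second training approach (Section 3)

inputs = tf.keras.Input(shape=(100, 100, 3))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)  # scale to [-1, 1]
x = base(x, training=False)                                      # (4, 4, 1280) feature maps
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)      # rate is an assumption; Table 1 gives no value
outputs = tf.keras.layers.Dense(1)(x)    # single-logit decision layer (1281 parameters)
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=["accuracy"])
```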

3. Results

In order to perform the binary classification of the collected images, an additional dense decision layer consisting of two neurons was added. For the purposes of this article, two approaches were tested. In the first approach, the last 10 layers of the base model were unfrozen from the start for fine-tuning. In the second approach, the whole base model was frozen first, with only the decision layer available for training; after the initial training for 100 epochs, the last 10 layers of the base model were unfrozen for retraining. For both approaches, an early stopping callback with a patience of 15 epochs was implemented, monitoring the validation loss for improvement.
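A minimal sketch of the second (two-phase) approach is shown below, reusing the model, the base network, and the data split from the earlier sketches; the callback settings follow the description above, while everything else is an assumption.

```python
# Phase 1: train only the decision layer with the base model frozen.
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=15, restore_best_weights=True)

history_frozen = model.fit(
    X_train, y_train, validation_data=(X_val, y_val),
    epochs=100, callbacks=[early_stop])

# Phase 2: unfreeze the last 10 layers of the base model and fine-tune.
base.trainable = True
for layer in base.layers[:-10]:
    layer.trainable = False

model.compile(  # recompile so the new trainable flags take effect
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=["accuracy"])

history_finetuned = model.fit(
    X_train, y_train, validation_data=(X_val, y_val),
    epochs=100, callbacks=[early_stop])
```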
The results for the first approach are shown in Figure 5. In this case, evidence of significant overfitting is observed. The training accuracy reaches values very close to 100% in the early epochs of training. At the same time, the validation accuracy is much poorer, reaching only 80%. The early stopping callback was activated after 26 epochs, as no improvement in the validation loss was observed after epoch 10.
The results of model training for the second approach are shown in Figure 6. In the initial phase of training, no signs of overfitting are observed. In this case, both training and validation accuracy increase gradually throughout the whole training, and the early stopping callback was not activated. The training and validation accuracies are much closer, with both exceeding a promising 95%. After unfreezing the last ten layers of the base model, an additional 1–2% accuracy is gained for both the training and the validation samples. Due to its better accuracy, the model trained with the second approach was selected for further validation on a test image.
The results of applying the model to a test image are shown in Figure 7 and Figure 8. Figure 7 shows nine sample fragments classified as either water or beach. Figure 8 shows the entire test image, where the fragments identified as beach are marked in yellow, while the fragments corresponding to water are marked in blue. The accuracy value for the test image is 96%. The model identifies water almost perfectly, while for the beach, some false water detections are observed. These individual errors may be caused by, for example, a change in environmental illumination or texture alignment. The majority of them occur as single fragments, so a morphological transformation may be useful to eliminate such artifacts, as illustrated in the sketch below.
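One simple way to realize such a cleanup, sketched below under the assumption that the per-fragment predictions are arranged in a 2D label grid, is a 3 × 3 majority (median) filter; a morphological opening or closing on the same grid would serve a similar purpose.

```python
# Removing isolated single-fragment misclassifications from the label grid.
# The grid below is a small illustrative example, not real classification output.
import numpy as np
from scipy.ndimage import median_filter

label_grid = np.array([      # 1 = water, 0 = beach; one value per image fragment
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 1, 0],         # the lone "1" is a likely false water detection
    [1, 0, 0, 0, 0],
])

cleaned = median_filter(label_grid, size=3)  # 3 x 3 majority vote per cell
print(cleaned)                               # the isolated water fragment is removed
```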
Table 2 shows the confusion matrix of the classification result for fragments from the test sample. The result omits fragments located at the edge of the image.

4. Discussion

The results obtained from the two approaches to training the MobileNetV2 model highlight important insights into the effectiveness of transfer learning and model fine-tuning in simple image classification tasks, such as beach line detection. The first approach, which involved unfreezing the last 10 layers of the base model from the start, demonstrated significant overfitting. Although the training accuracy rapidly approached 100%, the validation accuracy plateaued at 80%, suggesting that the model was unable to generalize well to unseen data. This phenomenon is common when the model becomes too tailored to the training dataset, leading to poor performance on new examples. These findings are consistent with previous research, which has shown that excessive fine-tuning without proper regularization can result in overfitting, particularly when the dataset is relatively small.
The second approach, where the base model was initially frozen and only the final decision layer was trained, provided better generalization. Both training and validation accuracies were much more aligned, exceeding 95%, and no overfitting was observed during the initial training phase. After fine-tuning the last 10 layers, the model achieved a slight improvement of 1–2% in accuracy. These results suggest that freezing the base model initially and gradually introducing more complex layers for fine-tuning allow the model to learn the domain-specific features more effectively while retaining the general features learned from the pre-training on larger datasets. This strategy is particularly useful when working with limited or unbalanced data, which is typical in environmental monitoring tasks (e.g., beach line detection) where obtaining labeled data can be challenging.
Despite the challenging data collection conditions, the model performed well, indicating the robustness of MobileNetV2 in handling real-world environmental variability. However, it is worth noting that further improvements could be made by incorporating techniques such as data augmentation, which could help mitigate the effects of image noise and improve model performance in less ideal conditions [52].
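As an illustration of the augmentation mentioned above, the sketch below uses standard Keras preprocessing layers; the chosen transforms and their ranges are illustrative, not those of the original study.

```python
# Example on-the-fly data augmentation for the beach/water fragments.
import tensorflow as tf

augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),  # fragment orientation is arbitrary
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomBrightness(0.2),                  # mimics changing illumination
    tf.keras.layers.RandomContrast(0.2),
])

# Applied during training, e.g. inside a tf.data pipeline:
# ds = ds.map(lambda x, y: (augmentation(x, training=True), y))
```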
Future research could explore the use of more advanced models, such as EfficientNet, which combines both lightweight architecture and state-of-the-art accuracy, or hybrid approaches that integrate domain-specific knowledge into the learning process. Additionally, further validation of the model on larger and more diverse datasets, especially in poor weather conditions, would be beneficial to assess its scalability and robustness in different environmental contexts.

5. Conclusions

In this article, an artificial neural network was applied to the images of a tailings storage facility, collected by an unmanned aerial vehicle. The study demonstrates that MobileNetV2 is an effective and computationally efficient model for image classification tasks, such as automatic beach line detection. The findings suggest that a two-phase training approach, where the base model is initially frozen, provides better generalization, and reduces the risk of overfitting compared to models that are fine-tuned from the start. Despite the challenging data collection conditions, the model achieved high accuracy, underscoring its robustness. Future work should focus on extending the model’s applicability to larger datasets collected in various weather conditions and exploring more advanced architectures to further enhance performance.
The use of UAVs for monitoring the tailings storage facility eliminates time-consuming field inspections. In turn, the developed model, which classifies beach and water areas and thus enables the determination of the beach line, automates this procedure and removes the need for manual image processing. This means that the result can be obtained faster and UAV flights can be performed at any frequency. Consequently, it is possible to monitor and provide alerts about undesirable situations more frequently and accurately. Additionally, repeating the measurement allows changes in the beach line to be tracked over time by comparing the detection results.

Author Contributions

Conceptualization, S.A. and P.S. (Paweł Stefaniak); methodology, S.A.; software, S.A.; validation, A.S., M.S. and W.K.; formal analysis, P.S. (Paweł Stefaniak) and S.A.; investigation, P.S. (Paweł Stefaniak), S.A. and M.S.; resources, A.S.; data curation, P.S. (Paweł Stefanek); writing—original draft preparation, S.A.; writing—review and editing, W.K.; visualization, S.A. and W.K.; supervision, P.S. (Paweł Stefaniak); project administration, P.S. (Paweł Stefanek) and P.S. (Paweł Stefaniak); funding acquisition, P.S. (Paweł Stefaniak). All authors have read and agreed to the published version of the manuscript.

Funding

This work has received funding from EIT RawMaterials GmbH under Framework Partnership Agreement No 21123 (project Sec4TD).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Original data used in this article may be shared upon request after signing the corresponding non-disclosure agreement (NDA).

Conflicts of Interest

Authors S.A., P.S. (Paweł Stefaniak), W.K., A.S. and M.S. were employed by the company KGHM CUPRUM Ltd. Research and Development Centre. Author P. Stefanek was employed by the company KGHM Polska Miedź S.A. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Clarkson, L.; Williams, D. Critical review of tailings dam monitoring best practice. Int. J. Min. Reclam. Environ. 2020, 34, 119–148. [Google Scholar] [CrossRef]
  2. Sun, E.; Zhang, X.; Li, Z. The internet of things (IOT) and cloud computing (CC) based tailings dam monitoring and pre-alarm system in mines. Saf. Sci. 2021, 50, 811–815. [Google Scholar] [CrossRef]
  3. Dong, L.; Shu, W.; Sun, D.; Li, X.; Zhang, L. Pre-alarm system based on real-time monitoring and numerical simulation using internet of things and cloud computing for tailings dam in mines. IEEE Access 2017, 5, 21080–21089. [Google Scholar] [CrossRef]
  4. Koperska, W.; Stachowiak, M.; Duda-Mróz, N.; Stefaniak, P.; Jachnik, B.; Bursa, B.; Stefanek, P. The Tailings Storage Facility (TSF) stability monitoring system using advanced big data analytics on the example of the Żelazny Most Facility. Arch. Civ. Eng. 2022, 68, 297–311. [Google Scholar]
  5. Duda, N.; Jachnik, B.; Stefaniak, P.; Bursa, B.; Stefanek, P. Tailings storage facility stability monitoring using CPT data analytics on the Zelazny Most facility. In Proceedings of the Application of Computers and Operations Research in the Mineral Industries (APCOM 2021): Minerals Industry 4.0: The Next Digital Transformation in Mining, Johannesburg, South Africa, 30 August–1 September 2021; ISBN 978-1-928410-26-3. [Google Scholar]
  6. Koperska, W.; Stachowiak, M.; Jachnik, B.; Stefaniak, P.; Bursa, B.; Stefanek, P. Machine Learning Methods in the Inclinometers Readings Anomaly Detection Issue on the Example of Tailings Storage Facility. In IFIP International Workshop on Artificial Intelligence for Knowledge Management; Springer: Cham, Switzerland, 2021; pp. 235–249. [Google Scholar]
  7. Duda-Mróz, N.; Koperska, W.; Stefaniak, P.; Anufriiev, S.; Stachowiak, M.; Stefanek, P. Application of Dynamic Time Warping to Determine the Shear Wave Velocity from the Down-Hole Test. Appl. Sci. 2023, 13, 9736. [Google Scholar] [CrossRef]
  8. Boak, E.H.; Turner, I.L. Shoreline definition and detection: A review. J. Coast. Res. 2005, 21, 688–703. [Google Scholar] [CrossRef]
  9. Toure, S.; Diop, O.; Kpalma, K.; Maiga, A.S. Shoreline detection using optical remote sensing: A review. ISPRS Int. J. Geo-Inf. 2019, 8, 75. [Google Scholar] [CrossRef]
  10. Abdelhady, H.U.; Troy, C.D.; Habib, A.; Manish, R. A simple, fully automated shoreline detection algorithm for high-resolution multi-spectral imagery. Remote Sens. 2022, 14, 557. [Google Scholar] [CrossRef]
  11. Aedla, R.; Dwarakish, G.S.; Reddy, D.V. Automatic shoreline detection and change detection analysis of netravati-gurpurrivermouth using histogram equalization and adaptive thresholding techniques. Aquat. Procedia 2015, 4, 563–570. [Google Scholar] [CrossRef]
  12. Tajima, Y.; Wu, L.; Watanabe, K. Development of a shoreline detection method using an artificial neural network based on satellite SAR imagery. Remote Sens. 2021, 13, 2254. [Google Scholar] [CrossRef]
  13. Plant, N.G.; Aarninkhof, S.G.; Turner, I.L.; Kingston, K.S. The performance of shoreline detection models applied to video imagery. J. Coast. Res. 2007, 23, 658–670. [Google Scholar] [CrossRef]
  14. Dang, T.M.; Nguyen, B.D. Applications of UAVs in mine industry: A scoping review. J. Sustain. Min. 2023, 22, 128–146. [Google Scholar]
  15. Ren, H.; Zhao, Y.; Xiao, W.; Hu, Z. A review of UAV monitoring in mining areas: Current status and future perspectives. Int. J. Coal Sci. Technol. 2019, 6, 320–333. [Google Scholar] [CrossRef]
  16. Kim, J.; Kim, S.; Ju, C.; Son, H.I. Unmanned aerial vehicles in agriculture: A review of perspective of platform, control, and applications. IEEE Access 2019, 7, 105100–105115. [Google Scholar] [CrossRef]
  17. Shahmoradi, J.; Talebi, E.; Roghanchi, P.; Hassanalian, M. A comprehensive review of applications of drone technology in the mining industry. Drones 2020, 4, 34. [Google Scholar] [CrossRef]
  18. Park, S.; Choi, Y. Applications of unmanned aerial vehicles in mining from exploration to reclamation: A review. Minerals 2020, 10, 663. [Google Scholar] [CrossRef]
  19. Kršák, B.; Blišťan, P.; Pauliková, A.; Puškárová, P.; Kovanič, Ľ.; Palková, J.; Zelizňaková, V. Use of low-cost UAV photogrammetry to analyze the accuracy of a digital elevation model in a case study. Measurement 2016, 91, 276–287. [Google Scholar] [CrossRef]
  20. Outay, F.; Mengash, H.A.; Adnan, M. Applications of unmanned aerial vehicle (UAV) in road safety, traffic and highway infrastructure management: Recent advances and challenges. Transp. Res. Part A Policy Pract. 2020, 141, 116–129. [Google Scholar] [CrossRef]
  21. Kanellakis, C.; Nikolakopoulos, G. Evaluation of visual localization systems in underground mining. In Proceedings of the 2016 24th Mediterranean Conference on Control and Automation (MED), Athens, Greece, 21–24 June 2016; pp. 539–544. [Google Scholar]
  22. Shakhatreh, H.; Sawalmeh, A.H.; Al-Fuqaha, A.; Dou, Z.; Almaita, E.; Khalil, I.; Othman, N.S.; Khreishah, A.; Guizani, M. Unmanned aerial vehicles (UAVs): A survey on civil applications and key research challenges. IEEE Access 2019, 7, 48572–48634. [Google Scholar] [CrossRef]
  23. Singhal, G.; Bansod, B.; Mathew, L. Unmanned aerial vehicle classification, applications and challenges: A review. Preprints 2018, 2018110601, 1–19. [Google Scholar]
  24. Beretta, F.; Shibata, H.; Cordova, R.; Peroni, R.d.L.; Azambuja, J.; Costa, J.F.C.L. Topographic modelling using UAVs compared with traditional survey methods in mining. REM-Int. Eng. J. 2018, 71, 463–470. [Google Scholar] [CrossRef]
  25. Stead, D.; Donati, D.; Wolter, A.; Sturzenegger, M. Application of remote sensing to the investigation of rock slopes: Experience gained and lessons learned. ISPRS Int. J. Geo-Inf. 2019, 8, 296. [Google Scholar] [CrossRef]
  26. Filipova, S.; Filipov, D.; Raeva, P. Creating 3D model of an open pit quarry by UAV imaging and analysis in GIS. In Proceedings of the 6th International Conference on Cartography & GIS, Albena, Bulgaria, 13–17 June 2016; Volume 6, p. 652. [Google Scholar]
  27. Xiang, J.; Chen, J.; Sofia, G.; Tian, Y.; Tarolli, P. Open-pit mine geomorphic changes analysis using multi-temporal UAV survey. Environ. Earth Sci. 2018, 77, 220. [Google Scholar] [CrossRef]
  28. Nguyen, Q.L.; Le Thi, T.H.; Tong, S.S.; Kim, T.T.H. UAV photogrammetry-based for open pit coal mine large scale mapping, case studies in Cam Pha City, Vietnam. Sustainable Development of Mountain Territories 2020, 12, 501–509. [Google Scholar]
  29. Bui, D.T.; Long, N.Q.; Bui, X.-N.; Nguyen, V.-N.; Van Pham, C.; Van Le, C.; Ngo, P.-T.T.; Bui, D.T.; Kristoffersen, B. Lightweight unmanned aerial vehicle and structure-from-motion photogrammetry for generating digital surface model for open-pit coal mine area and its accuracy assessment. In Advances and Applications in Geospatial Technology and Earth Resources: Proceedings of the International Conference on Geo-Spatial Technologies and Earth Resources; Springer: Berlin/Heidelberg, Germany; pp. 17–33.
  30. Nguyen, N.V. Building DEM for deep open-pit coal mines using DJI Inspire 2. J. Min. Earth Sci. Vol. 2020, 61, 1–10. [Google Scholar]
  31. Jakob, S.; Zimmermann, R.; Gloaguen, R. Processing of drone-borne hyperspectral data for geological applications. In Proceedings of the 2016 8th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, CA, USA, 21–24 August 2016. [Google Scholar]
  32. Beretta, F.; Rodrigues, A.L.; Peroni, R.L.; Costa, J.F.C.L. Automated lithological classification using UAV and machine learning on an open cast mine. Appl. Earth Sci. 2019, 128, 79–88. [Google Scholar] [CrossRef]
  33. Madjid, M.Y.A.; Vandeginste, V.; Hampson, G.; Jordan, C.J.; Booth, A.D. Drones in carbonate geology: Opportunities and challenges, and application in diagenetic dolomite geobody mapping. Mar. Pet. Geol. 2018, 91, 723–734. [Google Scholar] [CrossRef]
  34. Kirsch, M.; Lorenz, S.; Zimmermann, R.; Tusa, L.; Möckel, R.; Hödl, P.; Booysen, R.; Khodadadzadeh, M.; Gloaguen, R. Integration of terrestrial and drone-borne hyperspectral and photogrammetric sensing methods for exploration mapping and mining monitoring. Remote Sens. 2018, 10, 1366. [Google Scholar] [CrossRef]
  35. Jakob, S.; Zimmermann, R.; Gloaguen, R. The need for accurate geometric and radiometric corrections of drone-borne hyperspectral data for mineral exploration: Mephysto—A toolbox for pre-processing drone-borne hyperspectral data. Remote Sens. 2017, 9, 88. [Google Scholar] [CrossRef]
  36. Lyons-Baral, J.; Kemeny, J. Applications of point cloud technology in geomechanical characterization, analysis and predictive modeling. Min. Eng. 2016, 68, 18–29. [Google Scholar]
  37. Raj, P. Use of Drones in an Underground Mine for Geotechnical Monitoring. Master’s Thesis, The University of Arizona, Tucson, AZ, USA, 2019. [Google Scholar]
  38. Turner, R.M.; Bhagwat, N.P.; Galayda, L.J.; Knoll, C.S.; Russell, E.A.; MacLaughlin, M.M. Geotechnical characterization of underground mine excavations from UAV-captured photogrammetric & thermal imagery. In Proceedings of the 52nd ARMA US Rock Mechanics/Geomechanics Symposium, Seattle, WA, USA, 17–20 June 2018; p. ARMA–2018. [Google Scholar]
  39. Lyu, M.; Zhao, Y.; Huang, C.; Huang, H. Unmanned aerial vehicles for search and rescue: A survey. Remote Sens. 2023, 15, 3266. [Google Scholar] [CrossRef]
  40. Daud, S.M.S.M.; Yusof, M.Y.P.M.; Heo, C.C.; Khoo, L.S.; Singh, M.K.C.; Mahmood, M.S.; Nawawi, H. Applications of drone in disaster management: A scoping review. Sci. Justice 2022, 62, 30–42. [Google Scholar] [CrossRef] [PubMed]
  41. Agarwal, R.; Singh, D.; Chauhan, D.S.; Singh, K.P. Detection of coal mine fires in the Jharia coal field using NOAA/AVHRR data. J. Geophys. Eng. 2006, 3, 212–218. [Google Scholar] [CrossRef]
  42. Dunnington, L.; Nakagawa, M. Fast and safe gas detection from underground coal fire by drone fly over. Environ. Pollut. 2017, 229, 139–145. [Google Scholar] [CrossRef]
  43. Bamford, T.; Esmaeili, K.; Schoellig, A.P. A real-time analysis of rock fragmentation using UAV technology. arXiv 2016, arXiv:1607.04243. [Google Scholar]
  44. Bamford, T.; Esmaeili, K.; Schoellig, A.P. Aerial rock fragmentation analysis in low-light condition using UAV technology. arXiv 2017, arXiv:1708.06343. [Google Scholar]
  45. Freire, G.R.; Cota, R.F. Capture of images in inaccessible areas in an underground mine using an unmanned aerial vehicle. In Proceedings of the UMT 2017: Proceedings of the First International Conference on Underground Mining Technology, Australian Centre for Geomechanics, Sudbury, Canada, 11–13 October 2017. [Google Scholar]
  46. Castendyk, D.N.; Straight, B.J.; Voorhis, J.C.; Somogyi, M.K.; Jepson, W.E.; Kucera, B.L. Using aerial drones to select sample depths in pit lakes. In Proceedings of the 13th International Conference on MINE Closure, Perth, WA, Australia, 3–5 September 2019; pp. 1113–1126. [Google Scholar]
  47. Padró, J.C.; Carabassa, V.; Balagué, J.; Brotons, L.; Alcañiz, J.M.; Pons, X. Monitoring opencast mine restorations using Unmanned Aerial System (UAS) imagery. Sci. Total Environ. 2019, 657, 1602–1614. [Google Scholar] [CrossRef]
  48. Jackisch, R.; Lorenz, S.; Zimmermann, R.; Möckel, R.; Gloaguen, R. Drone-borne hyperspectral monitoring of acid mine drainage: An example from the Sokolov lignite district. Remote Sens. 2018, 10, 385. [Google Scholar] [CrossRef]
  49. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  50. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  51. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2018, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  52. Perez, L.; Wang, J. The Effectiveness of Data Augmentation in Image Classification Using Deep Learning. arXiv 2017, arXiv:1712.04621. [Google Scholar]
Figure 1. DJI Matrix 300 RTK unmanned aerial vehicle.
Figure 2. Examples of images collected for automatic beach line recognition.
Figure 3. Data preparation process (dividing the image into square fragments and assigning them beach and water labels) and general decision scheme for automatic beach line detection using a neural network through classification.
Figure 4. Examples of image fragments classified as water and beach, divided into training, validation, and test samples.
Figure 5. Training and validation loss and accuracy for the case of the base model with the 10 last layers being unfrozen.
Figure 6. Training and validation loss and accuracy for the case of frozen base model training, followed by fine-tuning of the last 10 layers of the base model.
Figure 7. Examples of fragments of the test sample with the classification result.
Figure 8. Application of the developed algorithm to a test image.
Table 1. Model architecture.
Layer Type                 | Output Shape  | Number of Parameters
Input layer                | (100, 100, 3) | 0
Sequential                 | (100, 100, 3) | 0
True Divide                | (100, 100, 3) | 0
Subtract                   | (100, 100, 3) | 0
Functional—MobileNetV2     | (4, 4, 1280)  | 2,257,984
Global Average Pooling 2D  | (1280)        | 0
Dropout                    | (1280)        | 0
Dense                      | (1)           | 1281
Table 2. Confusion matrix obtained for the test sample.
      | beach | water
beach | 348   | 3
water | 39    | 665
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
