Article

Leveraging High-Resolution Long-Wave Infrared Hyperspectral Laboratory Imaging Data for Mineral Identification Using Machine Learning Methods

1 Geological Survey of Finland, Information Solutions Unit, P.O. Box 96, FI-02151 Espoo, Finland
2 Geological Survey of Finland, Information Solutions Unit, P.O. Box 77, FI-96101 Rovaniemi, Finland
3 Geological Survey of Finland, Mineral Economy Solutions Unit, P.O. Box 77, FI-96101 Rovaniemi, Finland
4 Geological Survey of Finland, Information Solutions Unit, P.O. Box 1237, FI-70211 Kuopio, Finland
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(19), 4806; https://doi.org/10.3390/rs15194806
Submission received: 25 July 2023 / Revised: 22 September 2023 / Accepted: 28 September 2023 / Published: 3 October 2023

Abstract:
Laboratory-based hyperspectral imaging (HSI) is an optical non-destructive technology used to extract mineralogical information from bedrock drill cores. In the present study, drill core scanning in the long-wave infrared (LWIR; 8000–12,000 nm) wavelength region was used to map the dominant minerals in HSI pixels. Machine learning classification algorithms, including random forest (RF) and support vector machine, have previously been applied to the mineral characterization of drill core hyperspectral data. The objectives of this study are to expand semi-automated mineral mapping by investigating the mapping accuracy, generalization potential, and classification ability of cutting-edge methods, such as various ensemble machine learning algorithms and deep learning semantic segmentation. In the present study, the mapping of quartz, talc, chlorite, and mixtures thereof in HSI data was performed using the ENVINet5 algorithm, which is based on the U-net deep learning network, and five decision tree ensemble algorithms: RF, gradient-boosting decision tree (GBDT), light gradient-boosting machine (LightGBM), AdaBoost, and bagging. Prior to training the classification models, endmember selection was performed using the Sequential Maximum Angle Convex Cone endmember extraction method to prepare the samples used in model training and in the evaluation of the classification results. The results show that the GBDT and LightGBM classifiers outperformed the other classification models with overall accuracies of 89.43% and 89.22%, respectively. The other classifiers achieved overall accuracies of 87.32%, 87.33%, 82.74%, and 78.32% for RF, bagging, ENVINet5, and AdaBoost, respectively. Therefore, the findings of this study confirm that ensemble machine learning algorithms are efficient tools to analyze drill core HSI data and map dominant minerals.
Moreover, the implementation of deep learning methods for mineral mapping from HSI drill core data should be further explored and adjusted.

Graphical Abstract

1. Introduction

Bedrock drilling plays a fundamental role in mineral exploration and resource exploitation [1]. In core drilling, a continuous underground rock core is obtained via a drill rig by using a cylindrical hollow diamond drill. A drill core is an important source of information for assessing chemistry, mineralogy, lithology, and the structure of the bedrock for the mining industry [2]. In recent years, laboratory-based hyperspectral core scanning has become an invaluable tool for quantifying the mineral abundance and distribution in drill core samples [3,4,5,6].
Hyperspectral imaging (HSI) enables geologists and mining companies to estimate detailed information over long core sections quickly and efficiently to replace work traditionally interpreted from laboratory chemistry [7,8]. Hyperspectral sensors are capable of data sampling at a very high spatial resolution and collecting detailed spectral information over a range of the electromagnetic spectrum [9], including visible/near-infrared (VNIR, 400–1000 nm), short-wave infrared (SWIR, 1000–2500 nm), and long-wave infrared (LWIR, >5000 nm). HSI has become a proven technology used to discriminate mineral species and mineral chemistry [10,11]. Such wealth of data has been demonstrated to be useful for geological research and applications including ore-waste discrimination [12], geometallurgy [13], and mineral exploration [14].
Common HSI systems used for drill core scanning provide high spatial resolution data with around a 1 mm pixel size [4,5]. Even at this level of spatial resolution, most pixels in HSI datasets contain a mixture of different minerals, resulting in mixed spectral responses. Spectral analysis and the exploration of absorption features have traditionally been used to recognize materials [15]. One of the most important challenges in extracting useful information from HSI data is that the high-dimensional data cube also includes redundant information. Although hyperspectral data provide a large number of bands, enabling materials to be discriminated based on their spectral signatures, this wealth of bands also brings complexity to data analysis [16,17].
In recent years, the availability of big complex datasets, for example, HSI, has made machine learning more prevalent in information extraction and in the analysis of geological data [9]. Machine learning algorithms are data-driven and are used to discover the correlations and connections between spectral information and targets that are too complex for humans to interpret subjectively and consistently. In addition, optimizing supervised machine learning algorithms has made them powerful in spectral data processing and in tackling issues related to data uncertainties [18,19] in comparison to the traditional mapping methods [2]. With regard to widely used traditional HSI mapping methods, the primary issue with spectral matching methods is related to the challenges of dealing with mixed spectra and large user interventions to find an optimal threshold [20]. Despite the fact that unmixing techniques determine the relative material abundance within each pixel, they are subjective to the selection of effective thresholds [21]. The benefits of semi-automatized mineral identification using machine learning algorithms are the efficiency and objectivity of the classification.
Recently, unsupervised and supervised machine learning algorithms have been widely applied to classify minerals from drill core HSI data. Unsupervised machine learning algorithms are methods that do not require any prior knowledge of referenced training samples [22]. Supervised machine learning-based classifiers produce classification models based on the training data with the goal of predicting the unseen data [23,24]. Ensemble machine learning algorithms tend to combine several classifiers, aiming to achieve enhanced classification accuracy compared with using a single classifier [25]. The key benefit of employing ensemble methods is to blend weak classifiers to generate a robust predictor. Bootstrap aggregating (bagging) and boosting are considered two main groups of ensemble algorithms using decision tree classifiers. One of the core disadvantages of the bagging ensemble is that there is no knowledge sharing between the classification models, and only the predictions of models are combined using averaging approaches. In boosting algorithms, the learning process is sequential and each classification model tends to reduce the prediction errors of the previous model [26]. Traditional decision tree classification models can be used to construct bagging ensembles as well as boosting ensembles, such as the AdaBoost algorithm. The random forest algorithm is considered one of the commonly used bagging techniques. Moreover, the gradient-boosting decision tree (GBDT) algorithm is one of the well-known boosting ensembles of decision trees [27]. In addition, the light gradient-boosting machine (LightGBM) is regarded as one of the latest decision tree-based boosting ensemble algorithms, and provides low computational complexity [28].
In recent years, a number of case studies have been published employing machine learning algorithms to map minerals from satellite, airborne, and laboratory-based HSI data. Lin et al. [29] used the RF and GBDT algorithms to classify limonite and chlorite using HSI ZY1-02D satellite data. According to the authors, using ensemble methods was efficient for mineral mapping. Wang et al. [30] investigated the mapping of various minerals, such as goethite, hematite, jarosite, kaolinite, calcite, epidote, and chlorite, from HySpex Airborne HSI data using the RF algorithm. The study demonstrated the ability of the RF algorithm to map minerals in a broad geographic area. In another study by Lobo et al. [31], the mapping of ore minerals, such as cassiterite, wolframite, chalcopyrite, malachite, muscovite, and quartz, was carried out using the RF, SVM, and linear discriminant algorithms from laboratory-based HSI data. The RF classifier’s overall accuracy was reported to be over 95% in the mentioned study. Furthermore, several mapping methods have been used in recent years to classify minerals from drill core HSI data. Laakso et al. [32] reported the implementation of an unsupervised machine learning approach based on the integration of self-organizing maps (SOM) and k-means clustering methods implemented in the GisSOM multivariate open-source tool [33,34] to classify the minerals of a single drill core box. In different studies, the RF classifier [2], RF regression [35], and Canonical Correlation Forest (CCF) [36] classifier, which are mainly considered decision tree ensemble methods, have been investigated. A previous study by Acosta et al. [2] used mineral liberation analysis (MLA) data for producing training labels for five drill core samples. The authors co-registered and resampled the MLA images to the spatial resolution of the VNIR-SWIR HSI data and conducted RF and support vector machine (SVM) algorithms to classify various minerals. In another study, Barker et al. 
[35] conducted mineral abundance predictions using the RF regression algorithm, which was trained based on labels extracted from resampled micro-X-ray fluorescence (μXRF) imaging. Acosta et al. [36] used the CCF classifier for drill core mineral mapping using fused RGB and HSI inputs and resampled MLA data as a source of training examples. These studies demonstrated that the decision tree ensemble methods were effective in retrieving detailed mineralogical information from HSI data.
Deep learning as a subdomain of machine learning has shown outstanding performance in different applications that use HSI data [37,38,39]. The attention paid to deep learning methods significantly increased when the AlexNet architecture [40], which is based on the Convolutional Neural Network (CNN), showed superior performance compared with conventional supervised classifiers. In contrast to supervised machine learning classifiers, a significant advantage of deep learning is its automated feature extraction process, which does not require manually created features. A recent study by Bell et al. [41] used the CNN semantic segmentation method based on Pyramid Scene Parsing Network (PSPNet) architecture [42] for drill core mineral mapping by including human-annotated labels. Another study by Abdolmaleki et al. [12] used the CNN ENVINet5 semantic segmentation architecture for ore-waste discrimination from drill core HSI data, which was trained using labels produced by the spectral angle mapper classifier. The results demonstrated that the ENVINet5 model has strong potential to classify ore and waste rock samples, and is capable of outperforming the traditional algorithms, such as the spectral angle mapper classifier and k-means clustering. However, the performance of advanced bagging and boosting algorithms was not assessed against the deep learning model in the mentioned study.
Although previous studies have shown promising methodologies to map minerals from drill core HSI data, the current literature lacks a hyperparameter optimization step with respect to supervised ensemble algorithms. Considering the application of deep learning methods for mineral identification from drill core rock samples, a major challenge is the requirement for a large amount of labeled training data, which is commonly not available for laboratory-based HSI data. In that regard, an endmember extraction approach should be studied in order to produce labeled training examples for said machine and deep learning methods. Nevertheless, the application of advanced ensemble algorithms and deep learning methods for mineral mapping from drill core HSI data is in its infancy and should be further investigated to evaluate their performance and reliability.
The overall objective of the present study is to evaluate the performance of different supervised classification algorithms in mapping dominant minerals based on HSI drill core data. Accordingly, this study aims at (1) expanding on the previous study of Laakso et al. [32] by mapping the dominant mineralogy of a drill core HSI LWIR dataset; (2) testing and comparing the performance of the CNN ENVINet5 architecture and five decision tree-based ensemble algorithms, namely AdaBoost, bagging, GBDT, LightGBM, and RF; (3) improving the reliability and predictive power of the ensemble models by performing hyperparameter optimization; and (4) evaluating the use of endmember extraction based on the Sequential Maximum Angle Convex Cone (SMACC) method to produce training samples for the previously mentioned machine learning algorithms.

2. Materials and Methods

The mapping and characterization of the dominant minerals were carried out in several steps (Figure 1). First, drill core rock scanning was conducted using the SisuROCK Gen2 HSI system, and the datasets in the LWIR region were used in the experimental section. Second, data preprocessing of the HSI data included: (1) generating an image mask to remove the wooden parts of the drill core trays in the HSI dataset; (2) calculating the drilling depth of each pixel in the HSI dataset to display the mineral classification result on a depth plot; (3) implementing the SMACC endmember extraction to identify the dominant mineral classes; and (4) performing stratified random sampling to divide the class endmembers into training and testing datasets with a ratio of 80/20%. The dominant mineral in a pixel was mapped by evaluating various classification algorithms, including the ENVINet5 deep learning model and several ensemble algorithms, such as bagging, AdaBoost, RF, LightGBM, and GBDT. A grid search method was applied to optimize the hyperparameters of the ensemble algorithms. To visualize the full drill core classification result, depth plots were displayed with respect to the calculated drill core depth. Finally, confusion matrices and different accuracy measures were used to evaluate the accuracy of each mapping output. The McNemar test was utilized to compare each pair of classification algorithms and to determine whether the improvements in the classification accuracy were statistically significant. Furthermore, the spectra from the ECOSTRESS spectral library [43] were utilized as a reference and compared with the average spectra produced based on the image classification results.

2.1. Drill Core Samples and Laboratory Hyperspectral Imaging Spectrometry

Drill core rock samples were acquired from the Hirvilavanmaa gold deposit, which represents a typical gold-only-type orogenic Au deposit [44] located in the NW part of the Central Lapland Greenstone Belt (CLGB) in the northern Fennoscandian Shield (Figure 2). The deposit (0.11 Mt with 2.9 g/t Au) is hosted by 2.05 Ga komatiitic rocks of the Sattasvaara formation (Savukoski group) along a NW-SE-trending belt between the Sirkka shear zone to the north, <1.88 Ga Kumpu group clastic metasedimentary rocks to the south, and approx. 2.00 Ga mafic volcanic rocks of the Kittilä suite to the east (Figure 1, [45]). The mineralization is hosted by a N-NE-trending alteration zone measuring approximately 290 m in length and 90 m in width. Native gold is hosted by pyrite, quartz, and carbonate. The original host rock was altered during the Svecofennian orogeny (1.92–1.79 Ga) to a talc–chlorite–carbonate rock/schist. Distal parts of the alteration zone are composed of talc–chlorite–amphibole schist, while the proximal, mineralized zone is composed of albite–carbonate–chlorite rock with quartz veins and breccias.
HSI data acquisition was conducted in a facility located in Rovaniemi, Finland, as a part of a large-scale drill core scanning campaign carried out using a SisuROCK Gen2 core imaging system by Specim (Spectral Imaging Ltd., Oulu, Finland). The system setup of the SisuROCK drill core scanner comprised an RGB camera, a FENIX visible (400–700 nm) to near-infrared (NIR; 700–1300 nm) and short-wave infrared (SWIR; 1300–2500 nm) camera, and an OWL long-wave infrared (LWIR; 7700–12,300 nm) hyperspectral camera. In the present study, 94 hyperspectral LWIR channels of the OWL camera with a 1.5 mm pixel resolution were acquired from 26 drill core boxes for mineral mapping. The LWIR dataset was used to particularly map chlorite, talc, and quartz, because the listed minerals are moderately-to-strongly active in the LWIR region [2]. The aluminum reference panel of the drill core scanner was imaged for each scan, and together with the dark reference, acquired by automatically closing the camera shutter; these were used to convert raw data into reflectance.

2.2. Pre-Processing of the Hyperspectral Image

The VNIR-SWIR and LWIR HSI data from the SisuROCK measurement system were acquired with the same resolution and image size, and with aligned pixels. A tray mask was obtained using the VNIR-SWIR images, and it was then applied to the LWIR images. The VNIR-SWIR images showed a distinct color difference between wood and rock, which was used to generate the mask. As indicated by the red box in Figure 3, the ratio of reflectance at 700 nm and 500 nm was approximately 1 for rock, while for wood, it was significantly larger, mostly 1.3–1.5. Thus, the tray mask was generated using a threshold of 1.3 for this ratio and by removing pixels that were too dark for this spectral feature to be distinguished. As the dark pixels in the VNIR-SWIR mostly represent surfaces nearly perpendicular to the imaging plane, they can be assumed to be non-informative in the LWIR as well. The masked individual drill core tray images were then merged together to facilitate their further processing. Saturated pixels and pixels with a reflectance value greater than or equal to 10,000 were removed from the LWIR images. Following this, the original number of bands in the image (n = 94) was reduced to 46 to lower data processing times. In practice, this was achieved by removing every other band from the full spectrum. Prior to this, the effects of band reduction were carefully considered to ensure that no spectral features would be lost as a result of the operation.
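The masking and band-reduction steps above can be sketched as follows. This is a simplified stand-in, since the actual pipeline was run in the vendor software: the band indices, the dark-pixel threshold, and the exact band subset (dropping every other band of 94 yields 47, while the study retained 46) are all assumptions for illustration.

```python
import numpy as np

def tray_mask(vnir_cube, idx_700, idx_500, ratio_thresh=1.3, dark_thresh=0.05):
    """Boolean mask: True for rock pixels, False for wooden tray or dark pixels."""
    r700 = vnir_cube[:, :, idx_700].astype(float)
    r500 = vnir_cube[:, :, idx_500].astype(float)
    ratio = np.divide(r700, r500, out=np.zeros_like(r700), where=r500 > 0)
    wood = ratio >= ratio_thresh                  # wood shows ratios of ~1.3-1.5
    dark = vnir_cube.mean(axis=2) < dark_thresh   # too dark to judge the feature
    return ~(wood | dark)

def reduce_bands(lwir_cube, saturation=10000):
    """Flag saturated values and keep every other band of the LWIR cube."""
    cube = lwir_cube[:, :, ::2].astype(float)
    return np.where(cube >= saturation, np.nan, cube)
```

In practice, the band index nearest each target wavelength would be looked up from the sensor's wavelength table rather than hard-coded.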
Finally, the analyzed hyperspectral drill core image was given depth information using the handwritten markings in the upper left corner (minimum depth) and lower right corner (maximum depth) of each drill core box. Each pixel in between these markings was linearly interpolated to a given depth using a Python (version 3.9.7) script written for this purpose using the following equation:
y = y1 + (x − x1) × (y2 − y1) / (x2 − x1)
where x1 and x2 are the start and end pixels, respectively; y1 and y2 are the start and end depths, respectively; x is the current pixel; and y is the interpolated depth.
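A minimal Python implementation of this linear interpolation (the study used a purpose-written script; this sketch is an equivalent formulation):

```python
def pixel_depth(x, x1, x2, y1, y2):
    """Depth of pixel x, linearly interpolated between the box's handwritten
    depth markings: (x1, y1) = start pixel/depth, (x2, y2) = end pixel/depth."""
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1)
```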

2.3. Training and Testing Data Used in Classification Algorithms

In order to have enough training data to perform the image classifications and to validate their performance, the drill core data was mosaiced into a single hyperspectral image with 835 × 7924 pixels. The Sequential Maximum Angle Convex Cone (SMACC) method [46], available in ENVI 5.6.2 (L3Harris Geospatial, Boulder, CO, USA; “ENVI” from herein), software was utilized for spectral endmember extraction from the HSI data. The SMACC method is based on linear unmixing and uses the convex cone model to search for pure pixels throughout the dataset. The SMACC algorithm identifies the endmembers that are most distinct and account for the spectral diversity of the material and variability in the illumination effects. The brightest pixel in the image is the first endmember that is selected, followed by pixels with different spectral characteristics. Compared to the previously added spectrum in the feature set, the algorithm constantly covers the most dissimilar spectrums. The selection of endmembers continues until the pixels with contrasting brightness are found in the image, and a user-defined number of endmembers is achieved [46,47].
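The greedy selection logic described above can be sketched as follows. This is a simplified stand-in that uses the spectral angle as the dissimilarity measure; the actual SMACC implementation in ENVI performs convex-cone unmixing with an RMS-error stopping rule, which is omitted here:

```python
import numpy as np

def greedy_endmembers(spectra, n_endmembers):
    """Greedy endmember picking: start from the brightest pixel, then keep
    adding the spectrum most dissimilar (largest spectral angle) to the
    already-selected set."""
    def angle(a, b):
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    selected = [int(np.argmax(spectra.sum(axis=1)))]  # brightest pixel first
    while len(selected) < n_endmembers:
        # for each pixel, distance to the closest already-selected endmember
        dist = np.array([min(angle(s, spectra[j]) for j in selected)
                         for s in spectra])
        selected.append(int(np.argmax(dist)))
    return selected
```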
The SMACC algorithm was parametrized to first select 17 endmembers from the merged image with an RMS error tolerance of 0 and a positivity-only unmixing constraint, meaning that all abundance images only have values greater than zero and all requested endmembers are returned. The original number of endmembers (n = 17) was chosen based on the theoretical number of abundant LWIR-active minerals in the sample set. Following this, a subset of 11 endmembers was selected from the total to represent the dominant minerals of the sample set. The merging of different endmembers was carried out due to their spectral similarity; different spectra represented the same minerals (e.g., quartz) with some slight variation. Because the data analysis did not benefit from having multiple representatives of the same mineral, these were merged. One of these endmembers was “aspectral”, meaning a class of pixels with no prominent spectral features to allow for mineral identification. Similar classes were merged together, and, at the end of the process, five endmembers remained to represent the mineralogy of the drill cores. The areas with the highest abundances of endmembers were then highlighted by selecting the pixels with abundances greater than or equal to three standard deviations above the mean. A subset of 20% of the samples in each endmember class was selected from the abundances to avoid over-fitting in later stages of the data analysis (machine learning classification) caused by a large number of training samples. This was achieved using the Generate Random Sample tool of ENVI. The endmembers were then labeled to represent the quartz, talc, chlorite, quartz–carbonate, and aspectral classes in the image.
The obtained endmember samples (n = 247,586) were next divided into training and testing groups. This analysis was conducted by separating 80% of the samples for training and 20% for testing such that the percentages were proportionate to the total number of samples in each endmember class. As the ensemble algorithms required training data to be input in the Python Pandas data frame structure, the generated endmembers with ENVI’s region-of-interest (roi) format were converted to an image using the ConvertROIs to Classification tool in ENVI software. The produced image was then transformed into the required data frame. Since some pixels might contain several endmembers, the mentioned conversion assigned one class label only as the final class, which is always the class type of the last endmember selected for this assignment. The order of mineral classes in this study was quartz, talc, aspectral, chlorite, and quartz–carbonate. For instance, if a pixel contains endmembers of talc and chlorite, the ConvertROIs to Classification tool selects the last endmember, which is chlorite, as the final class. In order to be consistent in comparing the classification models using the tested algorithms, the same training and testing samples were used for all the machine learning methods. The number of training and testing samples extracted from the endmember abundance maps is listed in Table 1.
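The 80/20 class-proportionate split described above can be reproduced with scikit-learn; the arrays below are synthetic stand-ins for the ~247,586 labeled pixels:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: X holds per-pixel LWIR spectra (46 bands), y the
# endmember labels (5 classes: quartz, talc, aspectral, chlorite,
# quartz-carbonate).
rng = np.random.default_rng(0)
X = rng.random((1000, 46))
y = rng.integers(0, 5, size=1000)

# 80/20 split, stratified so each class keeps its proportion in both sets
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42)
```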

2.4. Machine Learning Algorithms

2.4.1. Deep Learning

The CNN model used is a U-net [48]-type architecture named ENVINet5, implemented in the ENVI Deep Learning 2.0 module of ENVI version 5.6.2 (L3Harris Geospatial Solutions Inc., Boulder, CO, USA). The model performs pixel-based semantic segmentation and employs patch-based training.
In the ENVINet5 model, the training phase is controlled by several hyperparameters, such as patch size, number of epochs, number of patches per epoch, class weight, loss weight, blur distance, and solid distance. In general, there are no exact numbers to be used for these hyperparameters, and the values were selected based upon trial and error. Table 2 presents the information values used for each hyperparameter, along with their description and function.
Figure 4 shows the structure of ENVINet5 for mineral classification, where the blue bars represent varying numbers of 3 × 3 convolution layers. The top part of the ENVINet5 architecture is the encoder and consists of 4 blocks with 64, 128, 256, and 512 convolution filters. After each convolution block, a 2 × 2 max-pooling operation is performed to halve the image size. The bottom part of the architecture is the decoder, which also contains 4 blocks. The upsampling phase doubles the image size in 2D using transpose convolutions. The encoder reduces the spatial size of the input image patch while increasing the number of feature maps, and the decoder reduces the number of feature maps while restoring the matrix to the original dimensions. The final convolution makes the prediction and produces an image classification result containing the input classes.
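The shape bookkeeping of this encoder-decoder can be illustrated with a small sketch. The 64-pixel input patch is hypothetical (the real patch size is a tuned hyperparameter from Table 2); the filter counts per block follow the text above:

```python
# Hypothetical 64-pixel input patch; filters per encoder block as in the text.
size, filters = 64, [64, 128, 256, 512]

encoder = []
for f in filters:            # conv block with f filters, then 2x2 max-pool
    encoder.append((size, f))
    size //= 2               # pooling halves the spatial resolution

decoder = []
for f in reversed(filters):  # 2x2 transpose-conv upsampling
    size *= 2                # upsampling doubles the spatial resolution
    decoder.append((size, f))
# the decoder ends back at the original 64x64 resolution
```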

2.4.2. Ensemble Machine Learning Models

Although several studies have represented a promising application of ensemble algorithms for mineral mapping from HSI data, the current literature lacks the implementation and comparison of GBDT, LightGBM, bagging, and AdaBoost in dominant mineral mapping from drill core HSI data. In addition, special attention should be paid to studying the impact of the algorithm’s hyperparameters and their tuning on classification accuracy. The following subsections describe the different ensemble algorithms and model hyperparameters, and the hyperparameter optimization.

Random Forest

RF [50] is a type of non-parametric machine learning algorithm that does not consider the data distribution to be Gaussian and is thus not affected by the distribution of the data [51]. As an ensemble algorithm, RF consists of a large number of decision trees, produced based on randomly selected predictors from a randomly selected subset of training examples. This algorithm is able to provide robust generalization potential compared with traditional decision tree methods. Although RF is a powerful machine learning-based algorithm, it has several drawbacks, such as difficulty in visualizing the generated tree-based model and difficulty in interpreting the final ensembled decision trees.
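A minimal RF sketch using scikit-learn; the data and hyperparameter values are illustrative stand-ins, not the tuned settings from Table 3:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the labeled pixels: 46 "bands", 5 mineral classes
X, y = make_classification(n_samples=500, n_features=46, n_informative=10,
                           n_classes=5, random_state=0)

rf = RandomForestClassifier(
    n_estimators=200,      # number of bagged decision trees
    max_features="sqrt",   # random predictor subset evaluated at each split
    random_state=0,
).fit(X, y)
```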

Light Gradient-Boosting Machine (LightGBM)

LightGBM is an ensemble machine learning algorithm that is regarded as a tuned and regularized form of the gradient-boosting decision tree [52]. This algorithm aims to speed up the training process and mitigate memory consumption, resulting in improved performance compared with other gradient-boosting decision tree algorithms, such as extreme gradient boosting and gradient-boosting decision tree (GBDT). The LightGBM algorithm builds a set of decision trees sequentially, where the gradient of each subsequent decision tree is measured and reduced. This process continues until the algorithm reaches the ideal result with the lowest residual level. In this study, the LightGBM algorithm was run using the LightGBM Python Package [53].

Gradient-Boosting Decision Tree (GBDT)

GBDT is one of the most important ensemble machine learning algorithms based on boosting technology [54]. It generates a scalable and accurate classification model from a group of weak classifiers. GBDT uses decision trees as weak supervised classifiers and iteratively trains the base learners, increasing the emphasis on samples handled poorly by the current group of base learners [55]. Each tree starts from a root node and repeatedly expands its branches until a stopping criterion is fulfilled. This process continues until the maximum loss reduction is achieved.
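The sequential error-reduction behavior can be observed with scikit-learn's `GradientBoostingClassifier` (a standard GBDT implementation, used here as a stand-in) and its staged predictions, on synthetic data with illustrative hyperparameters:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

gbdt = GradientBoostingClassifier(n_estimators=50, learning_rate=0.1,
                                  random_state=0).fit(X, y)

# Each stage fits a new tree to the errors of the current ensemble, so the
# training error shrinks across the 50 stages.
errors = [np.mean(pred != y) for pred in gbdt.staged_predict(X)]
```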

AdaBoost

AdaBoost, i.e., Adaptive Boosting, is an ensemble-based boosting algorithm that incorporates multiple weak classifier models to enhance the accuracy of classification [56]. AdaBoost uses the following strategy to classify a dataset: Firstly, the algorithm produces a subset of training examples to generate a base classifier and to assign equal weights to each sample in a training subset. Secondly, the base classifier performs the classification on the training samples and gives higher weights to wrongly classified instances. Finally, the algorithm normalizes the weights of all the samples in the training set and generates a new training subset to establish the next classifier. After achieving satisfactory results, the final classification model is generated by summing the weights of all the classification models.
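The three-step strategy above can be sketched with scikit-learn's `AdaBoostClassifier` on synthetic data; by default its base classifier is a depth-1 decision tree ("stump"):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

# Samples misclassified in one round receive higher weight in the next
# round, and the final model sums the weighted votes of all stumps.
ada = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
```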

Bagging

Bagging, or Bootstrap Aggregation [57], is an ensemble modeling method that uses a bootstrap resampling approach to enhance the generalization capability of an unstable algorithm, such as decision trees, by merging the predictions of many decision trees. The main concept is to reduce variance and bias by averaging predictions made over randomly resampled subsets of the data. Bagging integrates the bootstrapping and aggregating approaches: bootstrapping draws random instances from the training set with replacement, and the aggregating phase produces the final result by combining the predictions made on these random instances.
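A minimal bagging sketch with scikit-learn's `BaggingClassifier` on synthetic data; by default its base estimator is an unpruned decision tree:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier

X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

# Each tree is fitted on a bootstrap resample and the predictions are
# combined by majority vote, with no knowledge shared between the trees.
bag = BaggingClassifier(n_estimators=50, random_state=0).fit(X, y)
```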

Tuning Ensemble Algorithm Hyperparameters

The reliable performance of any machine learning algorithm requires rigorous hyperparameter tuning. Hyperparameters are important variables that have an impact on the learning process of machine learning algorithms. Before model training, effective tuning of the hyperparameters is required for optimal performance and enhanced convergence of an algorithm. A variety of hyperparameter tuning techniques exist, including genetic algorithm, random search, the Bayesian method, grid search, etc. In the present study, the grid search approach with 5-fold cross-validation, which is a widely used hyperparameter tuning method, was used in the Scikit-learn Python module to obtain the optimal hyperparameter values for the ensemble models.
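A small grid search with 5-fold cross-validation using scikit-learn's `GridSearchCV`, as described above; the grid below is a toy example, not the search space of Table 3:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

# Every combination in the grid is evaluated with 5-fold cross-validation,
# and the combination with the best mean accuracy is retained.
param_grid = {"n_estimators": [50, 100], "max_depth": [5, None]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy").fit(X, y)
```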
Table 3 contains the hyperparameters of GBDT, LightGBM, RF, bagging, and AdaBoost that need to be optimized. The hyperparameters can be grouped into: (1) parameters that enable better mapping accuracy, such as the learning rate and the number of trees in the model; (2) parameters that are related to the decision tree configuration and associated with over-fitting issues, such as the maximum depth and minimum number of leaves in the tree, the minimum number of samples for node splitting, and the maximum number of features considered when searching for the best split in the tree; and (3) parameters that influence randomness in the tree and assist in managing over-fitting, including the subsample fraction of features used to fit the individual decision trees. During the optimization process, the bagging algorithms, such as RF and bagging, focus on mitigating model variance to reduce the potential for over-fitting. The boosting techniques, such as GBDT, LightGBM, and AdaBoost, combine weak learners to form a stronger model that prevents over-fitting and reduces the bias of the entire model. In the optimization phase, GBDT required more parameters to be tuned, and the selected LightGBM model had more trees than the other algorithms.
The RF, GBDT, bagging, and AdaBoost models were then run using the Scikit-learn Python module [53], and LightGBM was implemented in Python using its open-source Microsoft framework [52].

2.5. Accuracy Assessment Methods

An accuracy assessment of the classification results was carried out using the widely used confusion matrix [58]. The overall accuracy (OA), kappa coefficient [59,60], precision, recall, and F1-score [61] metrics were used to evaluate the performance of the machine learning models. OA indicates how effective a classifier is, measuring the proportion of correctly classified instances among all samples; it expresses how well a classifier learns data patterns and how accurately it predicts unseen instances. The kappa coefficient is another quantitative measure that evaluates the agreement between the classified image and the reference testing samples. The kappa coefficient ranges between 0 and 1: a value close to 1 represents a reliable image classification result and strong agreement between the classified categories and the reference samples, while values near 0 represent weak or no agreement between the classified targets and reference samples. Precision is the proportion of instances selected as positive that are truly positive. Recall is the proportion of all positive instances that were correctly classified as positive. The F1-score combines precision and recall by taking their harmonic mean, with 1 indicating the best result and 0 the worst.
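These metrics can be computed from a confusion matrix with Scikit-learn; the toy labels below are invented purely for illustration.

```python
# Minimal sketch (toy labels) of the accuracy-assessment metrics used above,
# computed from a confusion matrix with scikit-learn.
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix,
                             precision_recall_fscore_support)

labels = ["quartz", "talc", "chlorite"]
y_true = ["quartz", "talc", "talc", "chlorite", "quartz", "chlorite"]
y_pred = ["quartz", "talc", "chlorite", "chlorite", "quartz", "talc"]

cm = confusion_matrix(y_true, y_pred, labels=labels)
oa = accuracy_score(y_true, y_pred)          # overall accuracy (OA)
kappa = cohen_kappa_score(y_true, y_pred)    # chance-corrected agreement
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=labels, zero_division=0)

print(cm)
print(round(oa, 3), round(kappa, 3))  # 0.667 0.5
print(prec, rec, f1)                  # per-class precision, recall, F1
```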
To statistically explore the differences in classification accuracy obtained by the classifiers, an additional evaluation was performed using the McNemar test [58]. The McNemar test is a nonparametric statistical test that uses a chi-squared (χ²) statistic computed from a 2 × 2 matrix to determine the significance of differences between classification performances at a 95% confidence level (two-tailed p-value). If the p-value falls below 0.05, the null hypothesis is rejected. In this research, the null hypothesis states that the minor improvements in classification accuracy achieved by the best-performing classifiers are statistically insignificant compared with the second-best classifiers. Thus, the results of the classifiers with the highest OA are statistically significant compared to the others when the p-value is less than 0.05. The McNemar test statistic is calculated as follows:
χ² = (fi,j − fj,i)² / (fi,j + fj,i)
where fi,j is the number of ground truth samples correctly classified by classifier i but misclassified by classifier j, and fj,i is the number of ground truth samples correctly classified by classifier j but misclassified by classifier i.
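A hypothetical worked example of the test (the discordant counts below are invented, and no continuity correction is applied, matching the formula above) can be sketched as:

```python
# Hypothetical worked example of the McNemar test using the chi-squared
# form given above, without continuity correction. Counts are invented.
import math

def mcnemar_test(f_ij, f_ji):
    """f_ij: samples correct by classifier i only; f_ji: correct by j only."""
    stat = (f_ij - f_ji) ** 2 / (f_ij + f_ji)
    # Survival function of the chi-squared distribution with 1 degree of
    # freedom: P(X > stat) = erfc(sqrt(stat / 2)).
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value

stat, p = mcnemar_test(f_ij=60, f_ji=30)
print(stat)      # 10.0
print(p < 0.05)  # True -> the accuracy difference is significant
```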

3. Results

Image Classification and Accuracy Assessment

This section presents the results achieved with the different classification models utilizing the LWIR HSI data of the Hirvilavanmaa drill cores. The performance of the classification algorithms, including the precision, recall, and F1-score values for each class separately, is presented in Table 4. The results show that the GBDT and LightGBM classifiers outperformed the other classifiers in terms of OA. The GBDT and LightGBM classifiers achieved OAs of 89.43% and 89.22%, respectively, while RF, bagging, ENVINet5, and AdaBoost showed OAs of 87.32%, 87.33%, 82.74%, and 78.32%, respectively (Table 4).
According to the precision, recall, and F1-score values obtained using the GBDT, LightGBM, RF, and bagging algorithms, these classifiers adequately improved the inter-class separability compared to the ENVINet5 and AdaBoost classifiers. GBDT and LightGBM showed precision values close to 89% for the quartz class because of misclassifications involving the talc and quartz–carbonate classes. In addition, recall values near 92% show that some quartz pixels were wrongly classified as talc or as the mixed quartz–carbonate class. Due to spectral confusion and misclassification with the chlorite class, the talc class yielded precision and recall values near 89% and 90%, respectively. In addition, the mixed quartz–carbonate class showed precision and recall values close to 88% and 83%, respectively, which can mainly be attributed to its spectral similarities and overlaps with the quartz class.
For almost all mineral classes, the AdaBoost ensemble algorithm showed the lowest precision, recall, and F1-score. For example, AdaBoost yielded precision and recall values of 85.86% and 88.86%, respectively, for the quartz class. These commission and omission errors can be attributed to the inability of AdaBoost to adequately resolve the spectral mixing of the quartz class with the other classes and thereby reduce misclassification. Although the ENVINet5 model performed well in classifying the mineral classes and yielded relatively high precision, recall, and F1-score values for the different mineral classes, some pixels were left unclassified; thus, the ENVINet5 model obtained a lower OA than the best-performing classifiers. Its chlorite class resulted in the lowest recall value of 74.01% due to misclassification with the quartz, talc, and aspectral classes. The 92.58% precision value of the chlorite class is related to spectral overlaps with the other classes and to the classifier leaving many pixels of this class unclassified.
The McNemar test was performed to determine whether the small accuracy improvements of GBDT and LightGBM are statistically meaningful. The test results (Table 5) showed that the classification accuracies of GBDT and LightGBM were statistically significantly better than those of the RF and bagging classifiers. Based on this significance test, the p-value in most paired comparisons was less than 0.05, and therefore, the GBDT and LightGBM classifiers produced the most reliable results of all the machine learning algorithms assessed.
Figure 5 shows a comparison of the machine learning-based classification results averaged across all drill cores 1–26. Visual comparisons between the results suggest that they are almost identical, with the exception of the ENVINet5 result (Figure 5e). The main difference between the ENVINet5 result and the other classification results (Figure 5a–d,f) is the higher number of unclassified pixels in the ENVINet5 result. However, even in the ENVINet5 result, the main characteristics of the mineralogy can be observed: talc and chlorite dominate at depths of 8.5–55 m and 110–152.5 m, and an increasing number of pixels are classified as quartz and quartz–carbonate at a depth of 55–110 m. Figure 6 presents an example classification result for a drill core box, which demonstrates the similarities in the mapping results, the unclassified pixels in the ENVINet5 output, and the misclassifications in the AdaBoost result.
The visualization of the spectral averages calculated across all drill cores for the classification results reveals a high degree of spectral similarity between the different classification methods (Figure 7). Irrespective of the classification method, the five classes (quartz, talc, aspectral, chlorite, quartz–carbonate) are spectrally distinct, though the chlorite and talc classes resemble each other. The quartz and talc minerals exhibit an "M-shaped" spectral signature [9]. More specifically, the spectral shape of quartz, with two spectral peaks at 8367 nm and 9137 nm separated by a reflectance minimum at 8656 nm, is distinctive. The quartz, chlorite, and quartz–carbonate classes also have a slight peak near 11,255 nm, caused by spectral mixing and attributable to the fundamental bending modes of the CO₃²⁻ ion. In addition, the talc class has peaks around 9233 nm and 9714 nm, which are also prominently present in the chlorite class. These spectral features, as well as those of quartz, are caused by the asymmetric Si–O–Si stretching modes of the silicate minerals. This interpretation was made based on the known spectral features of minerals described, e.g., by Salisbury et al. [62,63]. As expected, the aspectral class does not have any distinct spectral features.
Comparing the reference spectra (Figure 7a) with the spectral class averages (Figure 7b,c) reveals a high degree of similarity. For instance, the spectral shape of the quartz class is very similar to that of the reference quartz spectrum. The quartz–carbonate class shows spectral features of both quartz and carbonate minerals (reference spectrum: calcite). Spectral similarity can also be detected between the reference chlorite spectrum and the chlorite class average. In contrast, the talc class average has a slightly different shape from the reference spectrum due to intimate mixtures of chlorite and talc in the samples; the spectral shape of talc is influenced by chlorite. In the quartz average spectra, there is a transparency peak located near 11,400 nm [64]; however, this peak cannot be observed in the reference spectrum. The volume scattering of very fine-grained particles is the reason for this peak [64,65].

4. Discussion

The present study assesses the performance of the GBDT and LightGBM ensemble algorithms for mineral mapping based on drill core HSI data. These algorithms produced higher overall accuracies than the RF, bagging, AdaBoost, and ENVINet5 classifiers. GBDT and LightGBM achieved higher precision, recall, and F1-score values for detecting the chlorite class and the mixed quartz–carbonate class, whereas the RF and bagging classifiers showed slightly better values for discriminating the talc class. According to the McNemar test, the minor overall accuracy improvements yielded by GBDT and LightGBM were statistically significant compared with the results of the second-best performing classifiers, RF and bagging. However, the latter classifiers also produced qualitatively and quantitatively accurate results. The accurate results of the RF classifier confirm the effectiveness of this algorithm for characterizing minerals from drill core HSI data [2].
Recently, machine learning algorithms, such as RF, SVM, and ANN, have been suggested for classification and regression tasks in mineral characterization using drill core hyperspectral data. Acosta et al. [2] reported better performance of the SVM than the RF classifier in mineral mapping. Tusa et al. [66] evaluated RF, SVM, and ANN for drill core mineral abundance estimation and achieved the most accurate results using the RF algorithm. One of the main disadvantages of the SVM algorithm for classifying high-dimensional data is its computational complexity and lengthy processing times [67,68]. RF has been shown to be robust to imbalanced and limited training datasets [66].
The existing literature on implementing supervised machine learning algorithms for classifying drill core HSI data lacks a common hyperparameter tuning step. In general, model parameters are learned during the training phase, whereas hyperparameter values are defined before the training process starts and do not change during a training run. The setting of the hyperparameters impacts the learning process and thereby influences the model parameters that a model learns. Selecting optimal hyperparameter values is essential for achieving accurate prediction outcomes [69]. In this study, instead of utilizing the default hyperparameter values, different sets of values were investigated using the grid search method to identify the optimal hyperparameter values for the ensemble algorithms.
To build a robust analytical model using machine learning and deep learning algorithms, it is important to have enough training data to construct an effective model with reasonable predictive power. To tackle the problem of limited training samples, the integration of MLA and μXRF data with drill core hyperspectral imaging was recently explored in several studies using conventional machine learning algorithms [2,35]. The inclusion of MLA and μXRF data as training data requires careful co-registration and resampling to the spatial resolution of the HSI data. Deep learning is a relatively new research domain for characterizing minerals in laboratory-based HSI data and offers great potential for mineral identification. The critical success factor for a deep learning model is the availability of a massive amount of labeled training data, which enables the model to effectively predict unknown pixels and provide reliable generalization potential. Although transfer learning can mitigate the need for large training datasets, it is not meaningful to adopt a pre-trained deep learning model that has been trained for a different mapping problem. In a drill core mineral mapping application, it may be difficult to develop massive amounts of labeled training data. A recent study by Bell et al. [41] prepared labeled training samples visually from drill core datasets. Although the authors presented an interesting and promising deep learning application for drill core mineral characterization, the manual labeling process is laborious, time consuming, and may not be practical for large drill core datasets. As an alternative, Abdolmaleki et al. [12] initially classified ore and waste rock classes using the spectral angle mapper (SAM) classifier and produced the required training examples for a deep learning model based on the SAM classification map.
In the present study, the integration of labeled endmembers derived from the SMACC method into an ENVINet5 deep learning model was examined. Despite the presence of some unclassified areas in the mapping output, the ENVINet5 model achieved an overall accuracy of 82.74% and relatively high precision, recall, and F1-score values.
Given the spectral diversity and non-linear relationships between the dominant mineral classes, the optimized GBDT and LightGBM were best suited to categorizing the various mineral classes. Although grid search optimization was employed for hyperparameter tuning on all the tested ensemble machine learning algorithms, GBDT and LightGBM still outperformed the others. The LightGBM algorithm is regarded as a faster, optimized implementation of the GBDT algorithm. The better performance of GBDT and LightGBM can be attributed to the following: (1) the effective weight adjustment in the training process, and (2) the assignment of features to each branch based on the Gini index, which contributed to an optimal decision tree structure. The low classification accuracy of AdaBoost is related to its reweighting of the improperly separated samples at each iteration, whereas GBDT uses the gradient descent approach as the foundation for model optimization. LightGBM resulted in a relatively large model due to its higher number of trees compared with GBDT, while GBDT included more parameters in the optimization and training phases. One reason for the lower accuracy of RF and bagging compared with GBDT is that these two algorithms have only a small number of hyperparameters to optimize. Accordingly, the number of trees and the depth of the decision tree appear to be more important than the other hyperparameters; the other parameters have a greater impact on the complexity of the models and contribute less to the model's predictive accuracy [70]. In comparison to the other ensemble algorithms, GBDT and LightGBM have a learning rate hyperparameter that scales the contribution of each tree in the model and mitigates over-fitting. A small learning rate increases the number of trees required and improves the generalization potential of the model [71,72].
Consequently, there is a strong link between the learning rate and the number of trees selected, which contributes to the quality of the prediction results. To construct decision trees, the RF classifier randomly selects subsamples from the training dataset using a bootstrap resampling approach. GBDT, on the other hand, focuses on reducing the loss function to select the parameters for the subsequent weak learner. Without parameter optimization in the training phase, GBDT does not incorporate the subsampling strategy, which results in higher variance and a greater chance of over-fitting [73]. By using the grid search approach, more stable GBDT parameters were selected, which resulted in higher predictive accuracy than that of the RF model. In the case of ENVINet5, the trial-and-error process required for the hyperparameter setting is a major hindrance when considering the use of this model. Optimizing ENVINet5's hyperparameters is challenging since it is a software-based algorithm, and the absence of systematic hyperparameter tuning in the ENVI software could be one of the major reasons for ENVINet5 producing lower accuracy than the other machine learning algorithms.
Finally, the issue of sample imbalance can affect the quality of the classification results. In this study, endmember extraction was used to identify the dominant minerals for training and testing the mapping algorithms. Performing endmember extraction to determine the class targets resulted in a sample imbalance issue due to the varying sample point distribution across the drill cores. Sample imbalance might force a classification algorithm to under-predict the less dominant classes given their lower sample proportions; machine learning algorithms generally tend to minimize the overall error rate, which can cause under-prediction of the relatively rare classes. Due to the higher number of sample points for the talc and chlorite classes, the issue of sample imbalance can be observed in this study. Future studies could explore different sample balancing strategies, such as equalized stratified random sampling, oversampling the minority class, producing synthetic samples for the minority class [74,75], and transfer learning [76], to tackle the sample imbalance issue.
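One of the balancing strategies mentioned above, random oversampling of the minority class, can be sketched with Scikit-learn's resample utility; the synthetic arrays below are illustrative and do not represent the study's samples.

```python
# Illustrative sketch (synthetic data, not the study's samples) of random
# oversampling of a minority class with scikit-learn's resample utility.
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(0)
X_major = rng.normal(size=(100, 5))   # e.g., abundant talc spectra
X_minor = rng.normal(size=(10, 5))    # e.g., rare quartz-carbonate spectra

# Sample the minority class with replacement until it matches the majority.
X_minor_up = resample(X_minor, replace=True, n_samples=len(X_major),
                      random_state=0)
X_balanced = np.vstack([X_major, X_minor_up])
print(X_balanced.shape)  # (200, 5)
```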

5. Conclusions

This study evaluated ensemble machine learning algorithms and a CNN model for mineral mapping from drill cores using laboratory-based LWIR HSI data. The hyperspectral data were acquired using the SisuROCK Gen2 core imaging system, providing detailed spatial and spectral properties for mineral mapping and characterization. In contrast to previous studies on this topic, hyperparameter optimization was performed for the ensemble algorithms to boost their performance and efficiency. To the best of our knowledge, this study evaluates GBDT, LightGBM, bagging, AdaBoost, and ENVINet5 for the first time for a drill core mapping application. Our findings indicate that the GBDT, LightGBM, RF, and bagging classifiers significantly outperformed the AdaBoost ensemble classifier and the ENVINet5 deep learning model. The best-performing classifiers yielded high precision, recall, and F1-score values for the studied mineral classes, including quartz, chlorite, and talc. GBDT and LightGBM achieved overall accuracies of approximately 89%, a slight improvement over the approximately 87% achieved by the RF and bagging classifiers. The McNemar test revealed that these minor mapping improvements of GBDT and LightGBM were statistically significant compared to the RF and bagging algorithms. The classes mapped using LightGBM and GBDT showed improved classification results and effective separation of the dominant mineralogy of the HSI drill core dataset at the Hirvilavanmaa study site. Our comparative assessment attests to the reliable application and potential of ensemble classifiers in the LWIR region for discriminating the quartz, chlorite, and talc classes and their mixtures.
According to the ENVINet5 accuracy values, this deep learning architecture is still applicable to drill core mineral mapping; however, the algorithm should be further optimized and evaluated with more labeled training data.

Author Contributions

Conceptualization, A.H. and K.L.; methodology, A.H., K.L. and J.T.; software, A.H., K.L. and J.T.; validation, A.H. and K.L.; formal analysis, A.H.; investigation, A.H., K.L. and J.T.; resources, M.M.; data curation, K.L. and A.H.; writing—original draft preparation, A.H.; writing—review and editing, A.H., K.L., M.M., T.T., J.K. and J.T.; visualization, A.H.; supervision, K.L.; project administration, M.M.; funding acquisition, M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was conducted in two projects of the Geological Survey of Finland (GTK). The HSI data were acquired in the GTK-funded MinMatka project. Data preprocessing and modeling were conducted in the Hyperspectral Lapland (HypeLAP, 2020-2022) project funded through the European Regional Development Fund by a Business Finland grant (#211751/31/2020).

Data Availability Statement

Not applicable.

Acknowledgments

The authors acknowledge Markku Korhonen, Vesa Nykänen, Soile Aatos, Panu Lintinen, and Irmeli Huovinen from GTK for carrying out project management and coordination. Furthermore, the comments from the anonymous reviewers are highly appreciated.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhou, Z.; Hu, Y.; Liu, B.; Dai, K.; Zhang, Y. Development of Automatic Electric Drive Drilling System for Core Drilling. Appl. Sci. 2023, 13, 1059. [Google Scholar] [CrossRef]
  2. Acosta, I.C.C.; Khodadadzadeh, M.; Tusa, L.; Ghamisi, P.; Gloaguen, R. A Machine Learning Framework for Drill-Core Mineral Mapping Using Hyperspectral and High-Resolution Mineralogical Data Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 4829–4842. [Google Scholar] [CrossRef]
  3. De La Rosa, R.; Tolosana-Delgado, R.; Kirsch, M.; Gloaguen, R. Automated Multi-Scale and Multivariate Geological Logging from Drill-Core Hyperspectral Data. Remote Sens. 2022, 14, 2676. [Google Scholar] [CrossRef]
  4. De La Rosa, R.; Khodadadzadeh, M.; Tusa, L.; Kirsch, M.; Gisbert, G.; Tornos, F.; Tolosana-Delgado, R.; Gloaguen, R. Mineral Quantification at Deposit Scale Using Drill-Core Hyperspectral Data: A Case Study in the Iberian Pyrite Belt. Ore Geol. Rev. 2021, 139, 104514. [Google Scholar] [CrossRef]
  5. Tuşa, L.; Kern, M.; Khodadadzadeh, M.; Blannin, R.; Gloaguen, R.; Gutzmer, J. Evaluating the Performance of Hyperspectral Short-Wave Infrared Sensors for the Pre-Sorting of Complex Ores Using Machine Learning Methods. Miner. Eng. 2020, 146, 106150. [Google Scholar] [CrossRef]
  6. Linton, P.; Kosanke, T.; Greene, J.; Porter, B. The Application of Hyperspectral Core Imaging for Oil and Gas. Geol. Soc. Lond. Spec. Publ. 2023, 527, SP527-2022. [Google Scholar] [CrossRef]
  7. Kruse, F.A. Identification and Mapping of Minerals in Drill Core Using Hyperspectral Image Analysis of Infrared Reflectance Spectra. Int. J. Remote Sens. 1996, 17, 1623–1632. [Google Scholar] [CrossRef]
  8. Okada, K. A Historical Overview of the Past Three Decades of Mineral Exploration Technology. Nat. Resour. Res. 2021, 30, 2839–2860. [Google Scholar] [CrossRef]
  9. Han, W.; Zhang, X.; Wang, Y.; Wang, L.; Huang, X.; Li, J.; Wang, S.; Chen, W.; Li, X.; Feng, R. A Survey of Machine Learning and Deep Learning in Remote Sensing of Geological Environment: Challenges, Advances, and Opportunities. ISPRS J. Photogramm. Remote Sens. 2023, 202, 87–113. [Google Scholar] [CrossRef]
  10. Van der Meer, F.D.; Van der Werff, H.M.A.; Van Ruitenbeek, F.J.A.; Hecker, C.A.; Bakker, W.H.; Noomen, M.F.; Van Der Meijde, M.; Carranza, E.J.M.; De Smeth, J.B.; Woldai, T. Multi-and Hyperspectral Geologic Remote Sensing: A Review. Int. J. Appl. Earth Obs. Geoinf. 2012, 14, 112–128. [Google Scholar] [CrossRef]
  11. Zaini, N.; Van der Meer, F.; Van der Werff, H. Determination of Carbonate Rock Chemistry Using Laboratory-Based Hyperspectral Imagery. Remote Sens. 2014, 6, 4149–4172. [Google Scholar] [CrossRef]
  12. Abdolmaleki, M.; Consens, M.; Esmaeili, K. Ore-Waste Discrimination Using Supervised and Unsupervised Classification of Hyperspectral Images. Remote Sens. 2022, 14, 6386. [Google Scholar] [CrossRef]
  13. Barton, I.F.; Gabriel, M.J.; Lyons-Baral, J.; Barton, M.D.; Duplessis, L.; Roberts, C. Extending Geometallurgy to the Mine Scale with Hyperspectral Imaging: A Pilot Study Using Drone-and Ground-Based Scanning. Mining, Metall. Explor. 2021, 38, 799–818. [Google Scholar] [CrossRef]
  14. Bedini, E. The Use of Hyperspectral Remote Sensing for Mineral Exploration: A Review. J. Hyperspectral Remote Sens. 2017, 7, 189–211. [Google Scholar] [CrossRef]
  15. Laukamp, C.; Rodger, A.; LeGras, M.; Lampinen, H.; Lau, I.C.; Pejcic, B.; Stromberg, J.; Francis, N.; Ramanaidou, E. Mineral Physicochemistry Underlying Feature-Based Extraction of Mineral Abundance and Composition from Shortwave, Mid and Thermal Infrared Reflectance Spectra. Minerals 2021, 11, 347. [Google Scholar] [CrossRef]
  16. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef]
  17. Manolakis, D.; Pieper, M.; Truslow, E.; Lockwood, R.; Weisner, A.; Jacobson, J.; Cooley, T. Longwave Infrared Hyperspectral Imaging: Principles, Progress, and Challenges. IEEE Geosci. Remote Sens. Mag. 2019, 7, 72–100. [Google Scholar] [CrossRef]
  18. Gewali, U.B.; Monteiro, S.T.; Saber, E. Machine Learning Based Hyperspectral Image Analysis: A Survey. arXiv 2018, arXiv:1802.08701. [Google Scholar]
  19. Shirmard, H.; Farahbakhsh, E.; Müller, R.D.; Chandra, R. A Review of Machine Learning in Processing Remote Sensing Data for Mineral Exploration. Remote Sens. Environ. 2022, 268, 112750. [Google Scholar] [CrossRef]
  20. Asadzadeh, S.; de Souza Filho, C.R. A Review on Spectral Processing Methods for Geological Remote Sensing. Int. J. Appl. Earth Obs. Geoinf. 2016, 47, 69–90. [Google Scholar] [CrossRef]
  21. Boardman, J.W.; Kruse, F.A. Analysis of Imaging Spectrometer Data Using N-Dimensional Geometry and a Mixture-Tuned Matched Filtering Approach. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4138–4152. [Google Scholar] [CrossRef]
  22. Halder, A.; Ghosh, A.; Ghosh, S. Supervised and Unsupervised Landuse Map Generation from Remotely Sensed Images Using Ant Based Systems. Appl. Soft Comput. 2011, 11, 5770–5781. [Google Scholar] [CrossRef]
  23. Hastie, T.; Tibshirani, R.; Friedman, J.H.; Friedman, J.H. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer: Berlin/Heidelberg, Germany, 2009; Volume 2. [Google Scholar]
  24. Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed]
  25. Wei, Y.; Li, X.; Pan, X.; Li, L. Nondestructive Classification of Soybean Seed Varieties by Hyperspectral Imaging and Ensemble Machine Learning Algorithms. Sensors 2020, 20, 6980. [Google Scholar] [CrossRef] [PubMed]
  26. Jafarzadeh, H.; Mahdianpari, M.; Gill, E.; Mohammadimanesh, F.; Homayouni, S. Bagging and Boosting Ensemble Classifiers for Classification of Multispectral, Hyperspectral and PolSAR Data: A Comparative Evaluation. Remote Sens. 2021, 13, 4405. [Google Scholar] [CrossRef]
  27. Dev, V.A.; Eden, M.R. Formation Lithology Classification Using Scalable Gradient Boosted Decision Trees. Comput. Chem. Eng. 2019, 128, 392–404. [Google Scholar] [CrossRef]
  28. Qi, M.L. A Highly Efficient Gradient Boosting Decision Tree. In Proceedings of the 2017 Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017. [Google Scholar]
  29. Lin, N.; Liu, H.; Li, G.; Wu, M.; Li, D.; Jiang, R.; Yang, X. Extraction of Mineralized Indicator Minerals Using Ensemble Learning Model Optimized by SSA Based on Hyperspectral Image. Open Geosci. 2022, 14, 1444–1465. [Google Scholar] [CrossRef]
  30. Wang, S.; Zhou, K.; Wang, J.; Zhao, J. Identifying and Mapping Alteration Minerals Using HySpex Airborne Hyperspectral Data and Random Forest Algorithm. Front. Earth Sci. 2022, 10, 871529. [Google Scholar] [CrossRef]
  31. Lobo, A.; Garcia, E.; Barroso, G.; Martí, D.; Fernandez-Turiel, J.-L.; Ibáñez-Insa, J. Machine Learning for Mineral Identification and Ore Estimation from Hyperspectral Imagery in Tin–Tungsten Deposits: Simulation under Indoor Conditions. Remote Sens. 2021, 13, 3258. [Google Scholar] [CrossRef]
  32. Laakso, K.; Haavikko, S.; Korhonen, M.; Köykkä, J.; Middleton, M.; Nykänen, V.; Rauhala, J.; Torppa, A.; Torppa, J.; Törmänen, T. Applying Self-Organizing Maps to Characterize Hyperspectral Drill Core Data from Three Ore Prospects in Northern Finland. In Proceedings of the Earth Resources and Environmental Remote Sensing/GIS Applications XIII, Berlin, Germany, 5–7 September 2022; SPIE: Bellingham, WA, USA, 2022; Volume 12268, pp. 239–243. [Google Scholar]
  33. Torppa, J.; Chudasama, B.; Hautala, S.; Kim, Y. GisSOM for Clustering Multivariate Data. 2021. Available online: https://tupa.gtk.fi/raportti/arkisto/52_2021.pdf (accessed on 4 September 2023).
  34. Torppa, J.; Chudasama, B. Gissom Software for Multivariate Clustering of Geoscientific Data. Mineral Prospectivity and Exploration Targeting–MinProXT 2021 Webinar 31. Available online: https://tupa.gtk.fi/raportti/arkisto/57_2021.pdf#page=32 (accessed on 4 September 2023).
  35. Barker, R.D.; Barker, S.L.L.; Cracknell, M.J.; Stock, E.D.; Holmes, G. Quantitative Mineral Mapping of Drill Core Surfaces II: Long-Wave Infrared Mineral Characterization Using ΜXRF and Machine Learning. Econ. Geol. 2021, 116, 821–836. [Google Scholar] [CrossRef]
  36. Contreras Acosta, I.C.; Khodadadzadeh, M.; Gloaguen, R. Resolution Enhancement for Drill-Core Hyperspectral Mineral Mapping. Remote Sens. 2021, 13, 2296. [Google Scholar] [CrossRef]
  37. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep Learning-Based Classification of Hyperspectral Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  38. Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. Deep Learning Classifiers for Hyperspectral Imaging: A Review. ISPRS J. Photogramm. Remote Sens. 2019, 158, 279–317. [Google Scholar] [CrossRef]
  39. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709. [Google Scholar] [CrossRef]
  40. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  41. Bell, A.; del-Blanco, C.R.; Jaureguizar, F.; Jurado, M.J.; García, N. Automatic Mineral Recognition in Hyperspectral Images Using a Semantic-Segmentation-Based Deep Neural Network Trained on a Hyperspectral Drill-Core Database. SSRN 2022. [Google Scholar] [CrossRef]
  42. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890. [Google Scholar]
  43. Meerdink, S.K.; Hook, S.J.; Roberts, D.A.; Abbott, E.A. The ECOSTRESS Spectral Library Version 1.0. Remote Sens. Environ. 2019, 230, 111196. [Google Scholar] [CrossRef]
  44. Goldfarb, R.J.; Groves, D.I. Orogenic Gold: Common or Evolving Fluid and Metal Sources through Time. Lithos 2015, 233, 2–26. [Google Scholar] [CrossRef]
  45. Hulkki, H.; Keinänen, V. The Alteration and Fluid Inclusion Characteristics of the Hirvilavanmaa Gold Deposit, Central Lapland Greenstone Belt, Finland. Geol. Surv. Finland, Spec. Pap. 2007, 44, 137–153. [Google Scholar]
  46. Gruninger, J.H.; Ratkowski, A.J.; Hoke, M.L. The Sequential Maximum Angle Convex Cone (SMACC) Endmember Model. In Proceedings of the Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery X, Orlando, FL, USA, 12–15 April 2004; SPIE: Bellingham, WA, USA, 2004; Volume 5425, pp. 1–14. [Google Scholar]
  47. Arroyo-Mora, J.P.; Kalacska, M.; Løke, T.; Schläpfer, D.; Coops, N.C.; Lucanus, O.; Leblanc, G. Assessing the Impact of Illumination on UAV Pushbroom Hyperspectral Imagery Collected under Various Cloud Cover Conditions. Remote Sens. Environ. 2021, 258, 112396. [Google Scholar] [CrossRef]
  48. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  49. Liu, L.-Y.; Wang, C.-K.; Huang, A.-T. A Deep Learning Approach for Building Segmentation in Taiwan Agricultural Area Using High Resolution Satellite Imagery. J. Photogramm. Remote Sens. 2022, 27, 1–14. [Google Scholar]
  50. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  51. Belgiu, M.; Drăguţ, L. Random Forest in Remote Sensing: A Review of Applications and Future Directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  52. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.-Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Volume 2017. [Google Scholar]
  53. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V. Scikit-Learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  54. Friedman, J.H. Stochastic Gradient Boosting. Comput. Stat. Data Anal. 2002, 38, 367–378. [Google Scholar] [CrossRef]
  55. Elith, J.; Leathwick, J.R.; Hastie, T. A Working Guide to Boosted Regression Trees. J. Anim. Ecol. 2008, 77, 802–813. [Google Scholar] [CrossRef]
  56. Freund, Y.; Schapire, R.E. A Decision-Theoretic Generalization of on-Line Learning and an Application to Boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar] [CrossRef]
  57. Bühlmann, P. Bagging, Boosting and Ensemble Methods; Springer: Berlin/Heidelberg, Germany, 2012; ISBN 3642215505. [Google Scholar]
  58. Foody, G.M. Thematic Map Comparison: Evaluating the Statistical Significance of Differences in Classification Accuracy. Photogramm. Eng. Remote Sens. 2004, 70, 627–634. [Google Scholar] [CrossRef]
  59. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices; CRC Press: Boca Raton, FL, USA, 2019; ISBN 0429629354. [Google Scholar]
  60. Stehman, S.V.; Czaplewski, R.L. Design and Analysis for Thematic Map Accuracy Assessment: Fundamental Principles. Remote Sens. Environ. 1998, 64, 331–344. [Google Scholar] [CrossRef]
  61. Goutte, C.; Gaussier, E. A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation. In Proceedings of the Advances in Information Retrieval: 27th European Conference on IR Research, ECIR 2005, Santiago de Compostela, Spain, 21–23 March 2005; Proceedings 27. Springer: Berlin/Heidelberg, Germany, 2005; pp. 345–359. [Google Scholar]
  62. Salisbury, J.W.; Walter, L.S.; Vergo, N.; D’Aria, D.M. Mid-Infrared (2.1–25 μm) Spectra of Minerals. 1987. Available online: https://pubs.usgs.gov/of/1987/0263/report.pdf (accessed on 4 September 2023).
  63. Salisbury, J.W.; D’Aria, D.M. Emissivity of Terrestrial Materials in the 8–14 μm Atmospheric Window. Remote Sens. Environ. 1992, 42, 83–106. [Google Scholar] [CrossRef]
  64. Salisbury, J.W. Infrared (2.1–25 μm) Spectra of Minerals; Johns Hopkins University Press: Baltimore, MD, USA, 1991; 267p. [Google Scholar] [CrossRef]
  65. Salisbury, J.W.; Walter, L.S. Thermal Infrared (2.5–13.5 μm) Spectroscopic Remote Sensing of Igneous Rock Types on Particulate Planetary Surfaces. J. Geophys. Res. Solid Earth 1989, 94, 9192–9202. [Google Scholar] [CrossRef]
  66. Tuşa, L.; Khodadadzadeh, M.; Contreras, C.; Rafiezadeh Shahi, K.; Fuchs, M.; Gloaguen, R.; Gutzmer, J. Drill-Core Mineral Abundance Estimation Using Hyperspectral and High-Resolution Mineralogical Data. Remote Sens. 2020, 12, 1218. [Google Scholar] [CrossRef]
  67. Pal, M. Support Vector Machine-based Feature Selection for Land Cover Classification: A Case Study with DAIS Hyperspectral Data. Int. J. Remote Sens. 2006, 27, 2877–2894. [Google Scholar] [CrossRef]
  68. Cracknell, M.J.; Reading, A.M. Geological Mapping Using Remote Sensing Data: A Comparison of Five Machine Learning Algorithms, Their Response to Variations in the Spatial Distribution of Training Data and the Use of Explicit Spatial Information. Comput. Geosci. 2014, 63, 22–33. [Google Scholar] [CrossRef]
  69. Jooshaki, M.; Nad, A.; Michaux, S. A Systematic Review on the Application of Machine Learning in Exploiting Mineralogical Data in Mining and Mineral Industry. Minerals 2021, 11, 816. [Google Scholar] [CrossRef]
  70. Yang, C.; Qiu, F.; Xiao, F.; Chen, S.; Fang, Y. CBM Gas Content Prediction Model Based on the Ensemble Tree Algorithm with Bayesian Hyper-Parameter Optimization Method: A Case Study of Zhengzhuang Block, Southern Qinshui Basin, North China. Processes 2023, 11, 527. [Google Scholar] [CrossRef]
  71. Luo, C.; Chen, X.; Xu, J.; Zhang, S. Research on Privacy Protection of Multi Source Data Based on Improved Gbdt Federated Ensemble Method with Different Metrics. Phys. Commun. 2021, 49, 101347. [Google Scholar] [CrossRef]
  72. Li, G.; Gong, D.; Lu, X.; Zhang, D. Ensemble Learning Based Methods for Crown Prediction of Hot-Rolled Strip. ISIJ Int. 2021, 61, 1603–1613. [Google Scholar] [CrossRef]
  73. Rong, G.; Alu, S.; Li, K.; Su, Y.; Zhang, J.; Zhang, Y.; Li, T. Rainfall Induced Landslide Susceptibility Mapping Based on Bayesian Optimized Random Forest and Gradient Boosting Decision Tree Models—A Case Study of Shuicheng County, China. Water 2020, 12, 3066. [Google Scholar] [CrossRef]
  74. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of Machine-Learning Classification in Remote Sensing: An Applied Review. Int. J. Remote Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef]
  75. Qin, R.; Liu, T. A Review of Landcover Classification with Very-High Resolution Remotely Sensed Optical Images—Analysis Unit, Model Scalability and Transferability. Remote Sens. 2022, 14, 646. [Google Scholar] [CrossRef]
  76. Al-Stouhi, S.; Reddy, C.K. Transfer Learning for Class Imbalance Problems with Inadequate Data. Knowl. Inf. Syst. 2016, 48, 201–228. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Methodology flowchart.
Figure 2. A geological map of the study area with Hirvilavanmaa and surrounding gold deposits. CLGB = Central Lapland Greenstone Belt (modified from Bedrock of Finland 1:200,000). The figure shows a generalized geology of the northern Fennoscandian Shield.
Figure 3. Typical VNIR-SWIR spectra of wood and rock. Drill core trays: the top tray is the RGB image showing wooden parts and the bottom tray represents the hyperspectral data with wooden parts removed.
Figure 4. A breakdown of ENVINet5 semantic segmentation architecture used for the drill core mineral classification (image modified from ENVI Deep Learning 1.1.1 Help).
Figure 5. Mineral classification results based on different algorithms ((a) GBDT; (b) LightGBM; (c) bagging; (d) RF; (e) ENVINet5; and (f) AdaBoost) presented along the sample drilling depth.
Figure 6. (a) RGB image of drill core tray #12. (b) Example of classification results achieved with each classification algorithm.
Figure 7. Example of comparing average spectra produced using classification results with the reference spectra: (a) the reference spectra of calcite, chlorite, quartz, and talc from the ECOSTRESS spectral library [43]; (b) GBDT average spectra; (c) ENVINet5 average spectra. Note: avg: average; ref: reference.
Table 1. The numbers of training and testing samples used for mapping and accuracy evaluation.

| Class | Training Set Size | Testing Set Size |
|---|---|---|
| Quartz | 19,797 | 4947 |
| Talc | 61,967 | 12,012 |
| Chlorite | 32,402 | 22,380 |
| Quartz–carbonate | 13,363 | 7084 |
| Aspectral | 8329 | 3094 |
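Such a split can be produced with a stratified hold-out, which preserves per-class proportions between the training and testing sets. The sketch below uses scikit-learn's train_test_split on synthetic stand-in arrays (the real data are labeled LWIR pixel spectra, and the authors' exact sampling scheme is described in the Methods):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 1000 "pixels", each a 45-band LWIR spectrum,
# labeled with one of five classes (quartz, talc, chlorite, ...).
rng = np.random.default_rng(0)
X = rng.random((1000, 45))
y = rng.integers(0, 5, size=1000)

# stratify=y keeps the per-class proportions similar in both subsets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
print(X_train.shape, X_test.shape)  # (800, 45) (200, 45)
```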
Table 2. ENVINet5 hyperparameters.

| Hyperparameter | Value or Choice | Description/Function |
|---|---|---|
| Image patch size | 464 × 464 pixels | HSI images were divided into equal-sized patches, which were fed to the ENVINet5 model training and validation steps. |
| Number of epochs | 25 | The number of times the algorithm runs over all training samples. |
| Patches per epoch | 100 | The number of patches processed per epoch; ENVI can set this automatically, or it can be tuned by trial and error to cover the diversity of features to be learned. |
| Class weight | (2, 3) | Useful because the dataset contains sparse training samples; class weighting assists in extracting image patches that contain more feature pixels [49]. |
| Loss weight | 0.9 | Biases training toward detecting feature pixels rather than categorizing the masked area from the drill core wooden boxes. |
| Blur distance | (1, 6) | Helps capture information about the borders of mineral classes by blurring the edges. |
| Solid distance | – | Provides buffering around the training points and adds neighborhood information to the training phase. |

Note: The solid distance was not useful in the present study, as it resulted in mixed features/objects.
Table 3. Optimal hyperparameters for the ensemble algorithms, selected via the grid search method.

| Classifier | Hyperparameter Search Space | Selected Value via Grid Search |
|---|---|---|
| RF | n_estimators: [100, 150, 200] | 150 |
| | max_depth: [3, 5, 8] | 8 |
| | min_samples_leaf: [1, 2, 4] | 1 |
| | min_samples_split: [2, 5, 10] | 10 |
| GBDT | loss: [log_loss] | log_loss |
| | learning_rate: [0.025, 0.05, 0.1] | 0.1 |
| | min_samples_split: [2, 5, 10] | 5 |
| | subsample: [0.15, 0.5, 1.0] | 1.0 |
| | max_depth: [3, 5, 8] | 8 |
| | max_features: ["log2", "sqrt"] | log2 |
| | n_estimators: [100, 150, 200] | 150 |
| LightGBM | n_estimators: [100, 150, 200] | 200 |
| | max_depth: [3, 5, 8] | 8 |
| | learning_rate: [0.025, 0.05, 0.1] | 0.1 |
| Bagging | max_depth: [3, 5, 8] | 8 |
| | max_features: ["auto", "sqrt"] | sqrt |
| | min_samples_split: [2, 5, 10] | 2 |
| | n_estimators: [100, 150, 200] | 100 |
| AdaBoost | max_depth: [3, 5, 8] | 5 |
| | max_features: ["auto", "sqrt"] | auto |
| | min_samples_split: [2, 5, 10] | 10 |
| | n_estimators: [100, 150, 200] | 200 |
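As a hedged illustration of how such a search can be run, the sketch below applies scikit-learn's GridSearchCV to the RF search space from the table above. The data are synthetic stand-ins for the labeled pixel spectra, so the selected values will generally differ from those reported:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the labeled LWIR pixel spectra (5 mineral classes).
X, y = make_classification(n_samples=200, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)

# The RF search space from Table 3 (81 combinations).
param_grid = {
    "n_estimators": [100, 150, 200],
    "max_depth": [3, 5, 8],
    "min_samples_leaf": [1, 2, 4],
    "min_samples_split": [2, 5, 10],
}

# Exhaustive search with 3-fold cross-validation over every combination.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=3, n_jobs=-1)
search.fit(X, y)
print(search.best_params_)
```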
Table 4. Precision (P), recall (R), F1-score, kappa coefficient, and overall accuracy (OA) of the classification algorithms. Rows are predicted classes; columns are reference (testing) classes.

A. GBDT

| Class | Quartz | Talc | Aspectral | Chlorite | Quartz–carbonate | Total | R % | P % | F1-s % |
|---|---|---|---|---|---|---|---|---|---|
| Quartz | 4549 | 49 | 0 | 85 | 448 | 5131 | 91.95 | 88.66 | 90.35 |
| Talc | 65 | 10,810 | 3 | 1247 | 51 | 12,176 | 89.99 | 88.78 | 89.51 |
| Aspectral | 0 | 1 | 2814 | 358 | 19 | 3192 | 90.95 | 88.16 | 89.64 |
| Chlorite | 53 | 1110 | 272 | 20,178 | 689 | 22,302 | 90.16 | 90.48 | 90.43 |
| Quartz–carbonate | 280 | 42 | 5 | 512 | 5877 | 6716 | 82.96 | 87.51 | 85.27 |
| Total | 4947 | 12,012 | 3094 | 22,380 | 7084 | 49,517 | | | |

Overall Accuracy: 89.31%; Kappa Coefficient: 0.848

B. LightGBM

| Class | Quartz | Talc | Aspectral | Chlorite | Quartz–carbonate | Total | R % | P % | F1-s % |
|---|---|---|---|---|---|---|---|---|---|
| Quartz | 4537 | 54 | 0 | 86 | 448 | 5125 | 91.71 | 88.53 | 90.18 |
| Talc | 66 | 10,868 | 4 | 1339 | 64 | 12,341 | 90.48 | 88.06 | 89.39 |
| Aspectral | 1 | 3 | 2799 | 374 | 17 | 3194 | 90.47 | 87.63 | 89.13 |
| Chlorite | 54 | 1051 | 288 | 20,067 | 703 | 22,163 | 89.66 | 90.54 | 90.21 |
| Quartz–carbonate | 289 | 36 | 3 | 514 | 5852 | 6694 | 82.61 | 87.52 | 85.04 |
| Total | 4947 | 12,012 | 3094 | 22,380 | 7084 | 49,517 | | | |

Overall Accuracy: 89.10%; Kappa Coefficient: 0.845

C. RF

| Class | Quartz | Talc | Aspectral | Chlorite | Quartz–carbonate | Total | R % | P % | F1-s % |
|---|---|---|---|---|---|---|---|---|---|
| Quartz | 4589 | 50 | 0 | 81 | 501 | 5221 | 92.76 | 87.90 | 90.34 |
| Talc | 62 | 11,283 | 2 | 2011 | 13 | 13,371 | 93.93 | 84.38 | 88.91 |
| Aspectral | 0 | 1 | 2897 | 517 | 19 | 3434 | 93.63 | 84.36 | 88.78 |
| Chlorite | 81 | 671 | 189 | 19,095 | 1191 | 21,227 | 85.32 | 89.96 | 87.62 |
| Quartz–carbonate | 215 | 7 | 6 | 676 | 5360 | 6264 | 75.66 | 85.57 | 80.36 |
| Total | 4947 | 12,012 | 3094 | 22,380 | 7084 | 49,517 | | | |

Overall Accuracy: 87.29%; Kappa Coefficient: 0.82

D. Bagging

| Class | Quartz | Talc | Aspectral | Chlorite | Quartz–carbonate | Total | R % | P % | F1-s % |
|---|---|---|---|---|---|---|---|---|---|
| Quartz | 4597 | 52 | 0 | 77 | 502 | 5228 | 92.93 | 87.93 | 90.45 |
| Talc | 62 | 11,287 | 2 | 2000 | 14 | 13,365 | 93.96 | 84.45 | 88.97 |
| Aspectral | 0 | 1 | 2898 | 527 | 21 | 3447 | 93.67 | 84.07 | 88.60 |
| Chlorite | 76 | 664 | 189 | 19,106 | 1206 | 21,241 | 85.37 | 89.95 | 87.64 |
| Quartz–carbonate | 212 | 8 | 5 | 670 | 5341 | 6236 | 75.40 | 85.65 | 80.24 |
| Total | 4947 | 12,012 | 3094 | 22,380 | 7084 | 49,517 | | | |

Overall Accuracy: 87.30%; Kappa Coefficient: 0.82

E. ENVINet5

| Class | Quartz | Talc | Aspectral | Chlorite | Quartz–carbonate | Total | R % | P % | F1-s % |
|---|---|---|---|---|---|---|---|---|---|
| Unclassified | 30 | 83 | 20 | 1104 | 110 | 1347 | | | |
| Quartz | 4533 | 62 | 0 | 141 | 614 | 5350 | 91.63 | 84.73 | 88.04 |
| Talc | 61 | 11,512 | 3 | 2706 | 47 | 14,329 | 95.84 | 80.34 | 87.40 |
| Aspectral | 0 | 7 | 3003 | 808 | 35 | 3853 | 97.06 | 77.94 | 86.45 |
| Chlorite | 54 | 299 | 59 | 16,564 | 915 | 17,891 | 74.01 | 92.58 | 82.25 |
| Quartz–carbonate | 269 | 49 | 9 | 1057 | 5363 | 6747 | 75.71 | 79.49 | 77.55 |
| Total | 4947 | 12,012 | 3094 | 22,380 | 7084 | 49,517 | | | |

Overall Accuracy: 82.74%; Kappa Coefficient: 0.764

F. AdaBoost

| Class | Quartz | Talc | Aspectral | Chlorite | Quartz–carbonate | Total | R % | P % | F1-s % |
|---|---|---|---|---|---|---|---|---|---|
| Quartz | 4396 | 73 | 0 | 104 | 547 | 5120 | 88.86 | 85.86 | 87.38 |
| Talc | 39 | 7755 | 1 | 1708 | 45 | 9548 | 64.56 | 81.22 | 71.92 |
| Aspectral | 1 | 3 | 3033 | 653 | 17 | 3707 | 98.03 | 81.82 | 89.14 |
| Chlorite | 46 | 4127 | 52 | 18,362 | 1253 | 23,840 | 82.05 | 77.02 | 79.51 |
| Quartz–carbonate | 465 | 54 | 8 | 1553 | 5222 | 7302 | 73.72 | 71.51 | 72.65 |
| Total | 4947 | 12,012 | 3094 | 22,380 | 7084 | 49,517 | | | |

Overall Accuracy: 78.29%; Kappa Coefficient: 0.689
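The reported metrics follow directly from these confusion matrices. The sketch below recomputes the overall accuracy, kappa coefficient, and per-class precision/recall from the GBDT matrix in panel A (rows taken as predicted classes, columns as reference classes, consistent with the column totals matching the testing set sizes in Table 1):

```python
import numpy as np

# GBDT confusion matrix from Table 4A: rows = predicted, columns = reference.
# Class order: quartz, talc, aspectral, chlorite, quartz-carbonate.
cm = np.array([
    [4549,    49,    0,    85,  448],
    [  65, 10810,    3,  1247,   51],
    [   0,     1, 2814,   358,   19],
    [  53,  1110,  272, 20178,  689],
    [ 280,    42,    5,   512, 5877],
])

n = cm.sum()
oa = np.trace(cm) / n                        # overall accuracy
recall = np.diag(cm) / cm.sum(axis=0)        # per reference-class column
precision = np.diag(cm) / cm.sum(axis=1)     # per predicted-class row
f1 = 2 * precision * recall / (precision + recall)

# Cohen's kappa: agreement beyond the chance agreement p_e.
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
kappa = (oa - pe) / (1 - pe)

print(round(oa * 100, 2), round(kappa, 3))   # ~89.3 and 0.848
```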
Table 5. Results of the McNemar's tests between classifier pairs.

| Classification 1 | Classification 2 | χ² | p-Value | Significant? |
|---|---|---|---|---|
| GBDT | LightGBM | 0.649 | 0.206 | No |
| GBDT | Bagging | 450.2 | 0 | Yes |
| GBDT | RF | 459 | 0 | Yes |
| GBDT | ENVINet5 | 2101.7 | 0 | Yes |
| GBDT | AdaBoost | 5055 | 0 | Yes |
| LightGBM | Bagging | 389.9 | 0 | Yes |
| LightGBM | RF | 398.3 | 0 | Yes |
| LightGBM | ENVINet5 | 2046 | 0 | Yes |
| LightGBM | AdaBoost | 4925 | 0 | Yes |
| Bagging | RF | 0.581 | 0.445 | No |
| Bagging | ENVINet5 | 1709 | 0 | Yes |
| Bagging | AdaBoost | 4206 | 0 | Yes |
| RF | ENVINet5 | 1729.2 | 0 | Yes |
| RF | AdaBoost | 4200 | 0 | Yes |
| ENVINet5 | AdaBoost | 830.8 | 0 | Yes |
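The χ² statistics above depend only on the discordant counts, i.e., the test pixels on which the two classifiers disagree about being right. A minimal sketch of one common form of the statistic (with Edwards' continuity correction; the paper's exact variant may differ, and the counts below are hypothetical):

```python
def mcnemar_chi2(n01: int, n10: int) -> float:
    """McNemar chi-square with continuity correction.

    n01: samples classifier 1 got right and classifier 2 got wrong;
    n10: the reverse.
    """
    return (abs(n01 - n10) - 1) ** 2 / (n01 + n10)

# Hypothetical discordant counts for an illustrative classifier pair.
chi2 = mcnemar_chi2(120, 80)
print(round(chi2, 3))   # 7.605

# With 1 degree of freedom, chi2 > 3.841 means the accuracies differ
# significantly at the 5% level.
print(chi2 > 3.841)     # True
```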

Hamedianfar, A.; Laakso, K.; Middleton, M.; Törmänen, T.; Köykkä, J.; Torppa, J. Leveraging High-Resolution Long-Wave Infrared Hyperspectral Laboratory Imaging Data for Mineral Identification Using Machine Learning Methods. Remote Sens. 2023, 15, 4806. https://doi.org/10.3390/rs15194806