Classification of Hyperspectral In Vivo Brain Tissue Based on Linear Unmixing

Featured Application: This paper describes the application of linear unmixing to classify intraoperative hyperspectral images of in vivo brain tissue with a reduced computational cost compared to a machine learning approach, without compromising precision.


Introduction
Hyperspectral imaging (HSI) is a non-destructive and non-contact optical technique used for multiple applications, such as agricultural and water resource monitoring, food quality analysis, military defense, or forensic medicine [1][2][3][4][5]. Due to its great potential to distinguish among materials, even when they look similar to the naked eye, HSI has been used as a non-invasive diagnostic tool in medical imaging, especially as a navigation tool in surgical procedures and in cancer detection [6,7]. HSI is a technology that combines conventional imaging and spectroscopy to simultaneously obtain the spatial and spectral information of a sample [8]. Hyperspectral (HS) images contain data at hundreds of wavelengths, providing more insight than conventional RGB images, and each pixel exhibits a continuous spectrum (radiance or reflectance). A band-reduction analysis based on optimization algorithms revealed that, with only 48 bands, the classification accuracy was not affected with respect to the reference results.
In this context, linear unmixing by EBEAE is proposed to classify intraoperative HS images of in vivo brain tissue. The main goal is to develop alternative methods that achieve accurate results while requiring far shorter training times than a supervised machine learning classifier (an SVM-based algorithm). Two methodologies are suggested, both relying on end-members identified by EBEAE for the studied tissue classes. In the first, each pixel in the HS image is labeled by computing the minimum distance to the end-member sets of the classes. The second first isolates the class related to external materials or substances present in the surgical scenario, and then labels the remaining pixels by the same minimum-distance criterion over the tissue end-member sets.
Our proposals could lead to future personalized classification approaches executed intraoperatively in real time, employing calibration spectral signatures obtained directly from the current patient.
The notation used in this work is described next. Scalars, vectors, and matrices are denoted by italic, boldface lower-case, and boldface upper-case letters, respectively. An L-dimensional vector with unitary entries is denoted 1_L. For a vector x, its transpose is represented by x^T, its l-th component by (x)_l, and its Euclidean norm by ‖x‖ = √(∑_l (x)_l²). For a set X, card(X) denotes its cardinality, i.e., the number of elements in the set. A vector x with independent and identically distributed (i.i.d.) zero-mean Gaussian entries of finite variance is denoted as x ∼ N(0).

Materials and Methods
In this section, we review the technical, clinical, and protocol details of the studied datasets of in vivo human brain HS images and their ground-truth maps for classification. Next, the pre-processing chain of the HS raw images is briefly outlined, which looks to homogenize and extract the relevant information in the datasets. The mathematical formulation of the studied BLU algorithm is also described, which relies on alternated least-squares and constrained optimization. To compare the proposed methodologies, an SVM-based algorithm is introduced with a supervised philosophy for tissue classification. Finally, some standard classification metrics are presented to quantify the performance of the analyzed algorithms.

In Vivo Human Brain HS Dataset
The in vivo human brain HS images were captured by using a customized intraoperative HS acquisition system developed in [30], which was part of the European project HELICoiD (HypErspectraL Imaging Cancer Detection) (618080) [31]. The system was formed by a push-broom HS camera operating in the VNIR (visible and near-infrared) spectral range from 400 to 1000 nm (Hyperspec® VNIR A-Series, Headwall Photonics Inc., Fitchburg, MA, USA); an illumination system based on a 150 W quartz tungsten halogen (QTH) lamp with a broadband emission between 400 and 2200 nm; and a scanning platform to provide the movement necessary for the push-broom scanning, capable of covering a scanning distance of up to 230 mm. The resulting HS cubes contained 826 spectral bands with a spectral sampling of 0.73 nm, a spectral resolution of 2-3 nm, and a spatial resolution of 128.7 µm. Each HS cube had a maximum size of 1004 × 1787 pixels and a maximum image size of 129 × 230 mm, where each pixel represents a sample area of 128.7 × 128.7 µm. Each HS image in the database was then manually segmented to the region of interest where the exposed brain tissue (parenchymal tissue) was present. The segmented images ranged from a minimum of 298 × 253 pixels to a maximum of 611 × 527 pixels.
The HS database employed in this study was composed of twenty-six images from sixteen adult patients, and it was described in [32]. Patients underwent craniotomy for resection of an intra-axial brain tumor or another type of brain surgery during clinical practice at the University Hospital Doctor Negrin in Las Palmas de Gran Canaria (Spain). Eight of the patients were diagnosed with a grade IV glioblastoma (GBM) tumor, and eleven HS images of exposed tumor tissue were captured from them. The remaining patients were affected by other types of tumors or by other pathologies that required performing a craniotomy to expose the brain surface. Written informed consent was obtained from all participant subjects, and the study protocol and consent procedures were approved by the Comité Ético de Investigación Clínica-Comité de Ética en la Investigación (CEIC/CEI) of the University Hospital Doctor Negrin.
To acquire the HS images during the surgical procedures, it was necessary to follow the protocol established in [30]. Craniotomy and resection of the dura were performed, and the operating surgeon initially identified the approximate location of the normal brain and tumor (if applicable). The surgeons placed sterilized rubber ring markers on the surface of the brain, where the presence of tumor and normal tissue was identified based on preoperative imaging data. Once the markers were located, the imaging operator captured the HS image, and the clinical expert performed a biopsy of the tissue located within the tumor markers, sending the sample to a pathologist. By this analysis, the physician confirmed the presence or absence of the tumor by histopathological diagnosis and also determined the tumor type and grade. The HS acquisition system was operated by an engineer in charge of positioning the HS camera over the exposed brain surface, controlling the illumination system, and setting the image size to be captured according to the size of the exposed brain surface. The environmental illumination in the operating room did not affect the capture process due to the strong and precise illumination system of the acquisition setup. However, the surgical lighthead had to be turned off during the acquisition, since it could interfere with the acquisition system illumination. The operator also had to avoid unintentional movements of the acquisition system during the capture; in case of an unexpected movement, the acquisition process was repeated. At the beginning of the surgical procedure, dark and white reference images were captured to perform the HS image calibration, as explained in the next section.
When possible, a first HS image was captured before tumor resection (e.g., P008-1 in Figure 1). After that, the HS acquisition system was moved out of the surgical area, and the tumor resection was performed by the surgeon. When the operating surgeon considered it safe to temporarily hold the surgery, a second HS image was captured during the tumor resection (e.g., P008-2 in Figure 1). After acquiring the HS images, a specific set of pixels was labeled using a semi-automatic tool based on the spectral angle mapper (SAM) algorithm [33], developed in [32]. The surgeon, after completing the clinical procedure, selected only a few sets of very reliable pixels using the semi-automatic tool to create the ground-truth map for each captured HS image. After that, the SAM was computed over the entire image with respect to the previously selected pixels, and, using a threshold manually established by the experts, other pixels with the most similar spectral properties could be identified. The pixels were labeled into four different classes: tumor tissue (TT), normal tissue (NT), hypervascularized tissue (HT) (mainly blood vessels), and background (BG). The background class comprises other materials or substances present in the surgical scenario that are not relevant for the tumor resection procedure, such as skull bone, dura, skin, or surgical materials.
In this work, GBM tumor pixels could be labeled in only four of the eight patients originally affected by a GBM tumor, due to inadequate image conditions that prevented the initial labeling in the others. The remaining images with GBM tumor were included in the database, but no tumor samples were considered in them. In total, six HS images (P008-01, P008-02, P012-01, P012-02, P015-01, and P020-01) were labeled with the four studied classes (NT, TT, HT, and BG) and were employed as test datasets. The studied datasets can be observed in Figure 1, where the tumor areas in the synthetic RGB images are surrounded by a yellow line, and the ground-truth map of each HS image is shown below it. Due to the low number of HS images with labeled tumor pixels, a leave-one-patient-out cross-validation methodology was used to evaluate the algorithms: the training dataset was composed of all patients' samples except those of the patient to be tested.
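The leave-one-patient-out splitting described above can be sketched as follows; the `(patient_id, image_id)` pair format is an illustrative assumption, not the actual dataset structure:

```python
from collections import defaultdict

def leave_one_patient_out(samples):
    """Yield (held_out, train, test) splits: the test set contains all images
    of one patient, the training set every other patient's images."""
    by_patient = defaultdict(list)
    for patient_id, image_id in samples:
        by_patient[patient_id].append((patient_id, image_id))
    for held_out in by_patient:
        test = by_patient[held_out]
        train = [s for p, imgs in by_patient.items() if p != held_out for s in imgs]
        yield held_out, train, test

# Example with the six labeled test images named in the text (four patients):
images = [("P008", "P008-01"), ("P008", "P008-02"),
          ("P012", "P012-01"), ("P012", "P012-02"),
          ("P015", "P015-01"), ("P020", "P020-01")]
splits = list(leave_one_patient_out(images))
```

With these six images, the generator produces four splits, one per patient.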

Data Pre-Processing
Before the classification step, each HS image was preprocessed. The HS image preprocessing chain was explained in [27,29] and consisted of four main steps: image calibration, spectral band removal and selection, noise reduction, and normalization of the spectral signatures. The first step was the HS image calibration, which was performed to smooth out the raw spectral signatures and to compensate for the nonlinear response of the HS camera. This step was carried out by using two reference images: the first one obtained on a white surface (W_ref) at the place where the clinical procedure was performed, under the same lighting conditions, and the second acquired by taking a capture with the shutter closed, generating a dark reference image (D_ref). Once these two images were obtained, the preprocessed HS image I(·) was computed at the k-th pixel y_k by the normalization step

I(y_k) = (R(y_k) − D_ref(y_k)) / (W_ref(y_k) − D_ref(y_k)),   k ∈ {1, …, K},

where R(·) is the raw HS image acquired from the HS camera and K is the number of pixels in the HS image.
In the second step, the spectral bands at the low and high ends of the range were removed due to the high noise generated by the CCD sensor, which produced a low SNR in the first and last bands; repeatability experiments performed in a previous work [32] support this observation. Bands 1 to 55 and 700 to 826 were therefore removed, resulting in an HS cube with 645 spectral bands covering the spectral range from 440 to 902 nm. After this process, the spectral signatures were reduced by a 1:5 decimation procedure to avoid redundant information between contiguous bands and also to reduce the execution time of the algorithms; hence, each HS image was reduced to 128 spectral bands. The next stage involved a smoothing process by a Gaussian filter in the spectral domain. Finally, the last step in the preprocessing chain consisted of a normalization, also in the spectral domain, to avoid different radiation intensities produced by the non-uniform surface of the brain.
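As a rough illustration, the four-step chain can be sketched as below. The cut-off bands, decimation step, and Gaussian width are illustrative values only (this sketch yields 129 retained bands rather than the 128 reported, since the authors' exact band selection is not specified):

```python
import numpy as np

def _gaussian_kernel(sigma=1.0, radius=3):
    """Small 1-D Gaussian kernel for spectral smoothing."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def preprocess(raw, white_ref, dark_ref, lo=55, hi=700, step=5, sigma=1.0):
    """Sketch of the pre-processing chain: calibration, band removal and
    1:5 decimation, spectral Gaussian smoothing, per-pixel normalization.
    All cubes are (rows, cols, 826); parameter values are illustrative."""
    eps = 1e-12
    # 1) Flat-field calibration against the white and dark references.
    cube = (raw - dark_ref) / (white_ref - dark_ref + eps)
    # 2) Remove noisy low/high bands, then decimate along the spectrum.
    cube = cube[:, :, lo:hi:step]
    # 3) Gaussian smoothing in the spectral domain.
    kernel = _gaussian_kernel(sigma)
    cube = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 2, cube)
    # 4) Normalize each pixel's spectrum to unit Euclidean norm.
    norm = np.linalg.norm(cube, axis=2, keepdims=True)
    return cube / (norm + eps)
```

The unit-norm normalization in step 4 is one common way to remove the intensity differences caused by the non-uniform brain surface; the authors' exact normalization may differ.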

Extended Blind End-Member and Abundance Extraction
The methodology called extended blind end-member and abundance extraction (EBEAE), proposed in [26], allows estimating end-members and their abundances by a linear mixing model in non-negative datasets. In addition, the BLU process by EBEAE is controlled by hyperparameters that adjust the resulting similarity among end-members and the entropy of the abundances.
In the EBEAE formulation, K̄ non-negative and normalized (sum-to-one) pixels are assumed for a certain class in the HS image. The spectral information in these pixels is expressed as an L-dimensional vector y_k ∈ R^L with k ∈ {1, …, K̄}. The set of labeled pixels is denoted as Y = {y_1, …, y_K̄}, and since the spatial information in the HS image is not relevant for the BLU process, the ordering in Y is indistinct. Each pixel is represented by a linear mixing model of order N (2 ≤ N < L):

y_k = ∑_{n=1}^{N} α_{k,n} p_n + η_k,

where p_n ∈ R^L is the n-th end-member, α_{k,n} ≥ 0 its abundance in the k-th pixel, and η_k ∈ R^L a noise or uncertainty vector (η_k ∼ N(0)). The EBEAE synthesis problem is defined as a constrained optimization that minimizes the normalized reconstruction error of this model together with two regularization terms: an entropy-like penalty on the abundances, weighted by µ ∈ [0, 1) (entropy weight), and a penalty on the differences among end-members, weighted by ρ ≥ 0 (similarity weight); the admissible value of µ is related to λ_min(·), the minimum eigenvalue of the argument matrix in the abundance subproblem [26]. The optimization is subject to the restrictions 1_N^T α_k = 1, 1_L^T p_n = 1, and α_k, p_n ≥ 0. Hence, the hyperparameters in EBEAE are (N, µ, ρ). To solve the optimization problem, an alternated least-squares approach was used [34] to overcome the bilinear dependence between the end-members {p_n}_{n=1}^N and the abundances {α_k}_{k=1}^K̄, iterating until a convergence condition was met or a maximum number of iterations was exceeded [26]. In this formulation, the end-members {p_n}_{n=1}^N identify N characteristic or representative components that reproduce all the pixels in Y.
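A toy alternated least-squares loop conveys the flavor of the estimation. It is heavily simplified: no entropy or similarity regularizers, and the constraints are enforced by clipping and renormalization rather than constrained optimization, so it is a sketch of the ALS idea, not the EBEAE algorithm of [26]:

```python
import numpy as np

def blind_unmix(Y, N, iters=50, seed=0):
    """Toy alternated least-squares blind linear unmixing.
    Y: (L, K) matrix of sum-to-one pixel spectra; N: model order.
    Returns end-members P (L, N) and abundances A (N, K), both with
    non-negative, sum-to-one columns (enforced by projection)."""
    rng = np.random.default_rng(seed)
    L, K = Y.shape
    P = Y[:, rng.choice(K, N, replace=False)]        # init from random pixels
    for _ in range(iters):
        # Abundance step: least squares, then clip and renormalize columns.
        A, *_ = np.linalg.lstsq(P, Y, rcond=None)
        A = np.clip(A, 1e-9, None)
        A /= A.sum(axis=0, keepdims=True)
        # End-member step: least squares on the transposed problem.
        P, *_ = np.linalg.lstsq(A.T, Y.T, rcond=None)
        P = np.clip(P.T, 1e-9, None)
        P /= P.sum(axis=0, keepdims=True)
    return P, A
```

On synthetic noiseless mixtures of a few end-members, this loop recovers a low reconstruction error while keeping the sum-to-one and non-negativity restrictions.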

Support Vector Machines Approach
The SVM strategy is a kernel-based supervised algorithm, which performs the classification by computing the probability of each pixel belonging to a certain class [35]. To compute this probability, the classifier calculates the hyperplane that best separates the data from different classes with a maximum margin; this hyperplane is computed by using a training dataset. The SVM classifier was selected because it has shown good performance in HS data classification for medical applications [29,36]. The results obtained by the BLU-based approaches were compared with the SVM strategy to evaluate the classification performance and execution time. The LIBSVM package developed by Chang and Lin [37] and MATLAB® (R2019b, The MathWorks Inc., Natick, MA, USA) were employed for the SVM implementation, and a linear kernel with the default hyperparameters was used for the SVM configuration.
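For illustration, an equivalent linear-kernel configuration in scikit-learn (rather than the MATLAB/LIBSVM setup the authors used) looks like this, on toy two-band "spectra":

```python
import numpy as np
from sklearn.svm import SVC

# Toy training data: two bands per pixel, two separable classes.
X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]])
y_train = np.array([0, 0, 1, 1])

# Linear kernel with default hyperparameters, mirroring the text's setup.
clf = SVC(kernel="linear").fit(X_train, y_train)
pred = clf.predict([[0.15, 0.15], [0.95, 0.95]])
```

In practice, each training sample would be a full 128-band preprocessed spectrum, and one SVM is trained per leave-one-patient-out split.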

Classification Performance Metrics
The classification results were evaluated by comparing the studied approaches with the ground-truth datasets. Different metrics were selected for a quantitative evaluation to analyze classifiers that work well with unbalanced data. These metrics were: accuracy, sensitivity, specificity, F1-score, and the Matthews correlation coefficient (MCC) [38,39]. To compute these metrics, the following variables identify the possible outcomes in a binary classification problem: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). As a result, the metrics are computed as:

Accuracy = (TP + TN) / (TP + TN + FP + FN),
Sensitivity = TP / (TP + FN),
Specificity = TN / (TN + FP),
F1-score = 2 TP / (2 TP + FP + FN),
MCC = (TP · TN − FP · FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)),

and the normalized MCC is obtained by rescaling the coefficient from [−1, 1] to [0, 1]. Accuracy, sensitivity, and specificity are common metrics in classification tasks that evaluate the precision with respect to positive and negative true values [38]. Meanwhile, the F1-score measures the precision of a test, where a value close to one means high precision and sensitivity [39]. The MCC is used as a measure of the classification quality, taking into account true and false positives together with true and false negatives, and it is generally regarded as a balanced measure that can be used even if the classes are of very different sizes. In other words, the normalized MCC is a correlation coefficient between the observed and predicted binary classifications, returning a value between 0 and 1: a coefficient of 1 represents a perfect prediction, 0.5 is no better than a random prediction, and 0 indicates total disagreement between prediction and observation [39].
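These formulas translate directly into code; here the normalized MCC is taken as (MCC + 1)/2, an assumption consistent with the 0/0.5/1 interpretation given in the text:

```python
import math

def binary_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity, F1-score, and normalized MCC
    from the four binary-classification outcome counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    # Rescale MCC from [-1, 1] to [0, 1] (0.5 = random, 1 = perfect).
    return accuracy, sensitivity, specificity, f1, (mcc + 1) / 2
```

For the four-class problem, each class is evaluated in turn as a one-vs-rest binary problem.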

Classification Methodology Based on Linear Unmixing
In the following sections, we propose two strategies for classifying the six in vivo brain tissue intraoperative HS images based on BLU by EBEAE. As previously described, in each HS image, some pixels were manually labeled by the clinical expert into four classes (NT, TT, HT, and BG) to form the ground-truth datasets. The BG class has a wide spectral range, since it covers different materials or substances in the surgical scenario [27,29,30]; in particular, the rubber ring markers placed by the surgeon were part of the BG class. However, these rings had flat spectral signatures, clearly distinguishable from the NT, TT, and HT classes. Therefore, to reduce the variability in the linear unmixing process by EBEAE, and to avoid classification errors due to the rubber material, humidity, and light scattering, the rubber ring markers were segmented first, based on their flat spectral signatures (especially in the lower frequency bands) [29]. For this purpose, the energy in the initial twenty spectral bands of each pixel was collected, and the value was raised to the power of 3/2 to enlarge the magnitude differences. The resulting image was normalized to grayscale tones, and finally, the Otsu method was applied to segment the areas assigned to the rubber ring markers [40]. After this initial segmentation, two classification methodologies were explored. The rationale behind both was to use BLU to identify the most representative spectral signatures (end-members) of the ground-truth datasets; once estimated, these end-members characterize each studied class at a lower dimension. To reduce the computational cost, each pixel was then labeled by the minimum distance to the classes' sets, which is feasible given the low dimension of these sets. The main difference of the second methodology was the assumption that an initial segmentation into the binary classes BG vs. no-BG (NT, TT, and HT) could improve the overall accuracy without largely increasing the computation time. Next, these methodologies are explained in detail.
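The marker segmentation step can be sketched as follows; the minimal Otsu implementation and the choice of thresholding side (markers assumed brighter than tissue in the first bands) are assumptions of this sketch:

```python
import numpy as np

def otsu_threshold(gray, bins=256):
    """Minimal Otsu threshold for a grayscale image in [0, 1]:
    maximize the between-class variance over all candidate splits."""
    hist, edges = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0
        m1 = (hist[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def ring_marker_mask(cube, n_bands=20):
    """Sketch of the marker segmentation: energy of the first `n_bands`
    bands, raised to 3/2, normalized to [0, 1], then Otsu-thresholded."""
    energy = (cube[:, :, :n_bands] ** 2).sum(axis=2) ** 1.5
    gray = (energy - energy.min()) / (energy.max() - energy.min() + 1e-12)
    return gray > otsu_threshold(gray)
```

The 3/2 exponent stretches the contrast between the flat, bright marker spectra and the darker tissue spectra before the binary split.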

Method A
The first methodology consisted of three main stages after the segmentation of the rubber ring markers, which are illustrated in the block diagram of Figure 2. In the initial stage, the characteristic end-members of the four studied classes (NT, TT, HT, and BG) were estimated by the EBEAE algorithm [26]. This estimation process used as training information all the manually labeled pixels in the ground-truth datasets of the remaining five HS images (inter-patient approach). Thus, each labeled pixel in the EBEAE formulation was a vector gathering the 128 spectral bands (i.e., L = 128). The number of labeled pixels differed among the ground-truth datasets, i.e., for each class, the number of measurements in the EBEAE procedure was variable. For each class, the number of representative end-members was related to the variability of the spectral signatures in the dataset [27,30], and this parameter was selected by the procedure in [41]: two for NT (N_NT = 2), two for TT (N_TT = 2), one for HT (N_HT = 1), and three for BG (N_BG = 3) for all HS images. Since only one end-member was needed for the HT class, this was equivalent to using the average of the HT labeled ground-truth pixels. The hyperparameters of EBEAE were manually selected to improve the classification outcome, following the guidelines in [26]. Since the goal was to reduce the estimation error, the entropy weight was always chosen as zero (µ = 0). Meanwhile, the similarity weight ρ should be small when there is large variability in the dataset; hence, it was set to 0.3 for NT (ρ_NT = 0.3), 0.2 for TT (ρ_TT = 0.2), and 0.01 for BG (ρ_BG = 0.01). Consequently, after this estimation step, we had the sets of characteristic end-members {P_NT, P_TT, P_HT, P_BG} for the NT, TT, HT, and BG classes, respectively. In this way, we had low-dimensional sets for each class, i.e., card(P_NT) = 2, card(P_TT) = 2, card(P_HT) = 1, and card(P_BG) = 3.
As a second step, for the k-th pixel y_k in each HS image, the distance to each end-member set was computed through the concept of the distance from a point to a set [42]:

d(y_k, P_i) = min_{p ∈ P_i} d(y_k, p), (11)

where d(·, ·) represents a distance or metric. In this study, five distances between a pixel y and an end-member p were evaluated:

Manhattan: d(y, p) = ∑_l |(y)_l − (p)_l|, (12)
Euclidean: d(y, p) = ‖y − p‖, (13)
Correlation: d(y, p) = 1 − corr(y, p), (14)
Mahalanobis: d(y, p) = √((y − p)^T Q^{−1} (y − p)), (15)
SAM: d(y, p) = arccos( y^T p / (‖y‖ ‖p‖) ), (16)

where corr(·, ·) denotes the Pearson correlation coefficient and Q ∈ R^{L×L} represents the covariance matrix of the dataset. In the last stage, the k-th pixel y_k was classified according to the minimum distance among the four classes:

c(y_k) = arg min_{i ∈ {NT, TT, HT, BG}} d(y_k, P_i). (17)
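The five metrics and the minimum-distance labeling rule can be sketched as:

```python
import numpy as np

def distances(x, y, Q_inv):
    """The five distances of Equations (12)-(16) between spectra x and y;
    Q_inv is the precomputed inverse covariance matrix of the dataset."""
    manhattan = np.abs(x - y).sum()
    euclidean = float(np.linalg.norm(x - y))
    correlation = 1.0 - np.corrcoef(x, y)[0, 1]
    mahalanobis = float(np.sqrt((x - y) @ Q_inv @ (x - y)))
    cos = np.clip(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)), -1.0, 1.0)
    sam = float(np.arccos(cos))
    return manhattan, euclidean, correlation, mahalanobis, sam

def classify_pixel(y_k, class_sets, metric):
    """Label y_k by Eq. (17): the distance from a point to a set is the
    minimum distance to its members, and the nearest class set wins."""
    return min(class_sets,
               key=lambda c: min(metric(y_k, p) for p in class_sets[c]))
```

Here `class_sets` maps each class name to its list of end-members, and `metric` is any of the five distances above fixed to two arguments.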

Method B
For the second methodology (see Figure 3), the fundamental aspect was to first make an accurate estimation of the BG class, which presented the largest spectral variability [27,29]. Initially, the calculation of the representative end-members for each HS image followed the same procedure as in Method A. Then, the set of all estimated end-members is defined as P = P_NT ∪ P_TT ∪ P_HT ∪ P_BG.
Next, for the k-th pixel y_k in the HS image, the distance to all M = card(P) estimated end-members in P was computed, resulting in M images constructed from the distance to each characteristic end-member. These images were segmented by applying the K-means algorithm into four different groups [43]. Then, using the pixels labeled as no-BG (NT, TT, and HT) for each studied patient (intra-patient approach), the regions belonging to the BG class were selected by means of the positions of the no-BG spectral signatures in the HS image. In this last step, a binary image was generated for the BG class, which was combined with the regions of the rubber ring markers to build the overall BG image for that patient; the pixels labeled as BG were discarded in the later stages. In the next step of Method B, the distance of each pixel y_k outside the BG mask to the end-member sets {P_NT, P_TT, P_HT} was calculated by (11). In the last stage, y_k was classified according to the minimum distance in (17), but among only the three classes {NT, TT, HT}.

Results and Discussion
In this section, we evaluate the performance of the proposed classification methodologies based on BLU by EBEAE, with respect to the SVM-based approach. We evaluate first the effect of different metrics in Method A, and next, we present the comparison results among Method A, Method B, and the SVM scheme.

Metrics' Evaluation for the Distance to End-Members' Sets
Method A is based on first detecting the rubber ring markers, used by the surgeon to identify tumor and healthy tissue, and then removing this information to estimate the characteristic end-members. The evaluation of this methodology was performed by using the six test HS images in Figure 1 (P008-01, P008-02, P012-01, P012-02, P015-01, and P020-01) and the different distances in Equations (12)-(16): the Manhattan, Euclidean, correlation, and Mahalanobis metrics, and SAM. Figure 4 shows the average classification performance obtained for each metric. Figure 4a shows the accuracy, sensitivity, and specificity results, where the accuracy value is similar for all metrics. Likewise, the sensitivity for the NT and BG classes is equivalent across metrics, while the main performance differences are observed in the TT and HT classes. The classification with the Manhattan metric achieves the best sensitivity in the HT class with a value of 70%, while the correlation metric offers the worst performance (~47%). Conversely, for the TT class, the correlation metric achieves the best sensitivity results (50%), and the worst performance is obtained by the Manhattan metric (~32%). Finally, the specificity results are mostly consistent across all classes; for the NT class, the performance is always higher than 70%, and higher than 90% for the TT, HT, and BG classes. Figure 4b shows the MCC results, where all metrics offer similar performance, except the correlation metric in the HT class, which exhibits a reduction of ~7% relative to the best solution. However, this distance achieves the best result in the TT class. For this reason, the correlation metric provides on average the best results, especially in the TT class, which is highly relevant given the nature of our clinical application.
Figure 5 shows the classification maps obtained with each metric for Method A. Figure 5a illustrates the synthetic RGB images with the tumor area marked by a yellow line, and Figure 5b-f shows the results with all the metrics in Equations (12)-(16).
As highlighted by the accuracy performance in Figure 4a, the classification is quite consistent across metrics. Nonetheless, we obtain a better definition of the tumor areas by using the correlation metric, for example in images P012-02 and P015-01. In particular, for image P012-02 with the Manhattan metric (Figure 5b), some HT pixels in the tumor area are misclassified; with the correlation metric (Figure 5d), the tumor area is correctly identified. Furthermore, the correlation metric in the same image yields a more homogeneous tumor area than the other distances. Note that for the P020-01 image, none of the metrics is able to distinguish the TT class, since the synthetic RGB image shows that the marked tumor area presents a colorization similar to the NT class, which is consistent with the result in [29]. As stated in that previous work, this misclassification of the TT class could be produced by the lack of a more complete database that accounts for the inter-patient variability of the spectra.

Comparison Results
In Method B, for the BG class extraction, our analysis shows that the correlation metric is the best option to improve accuracy, while for the tissue classification, the Mahalanobis metric offers better performance. Hence, Method A with the correlation metric and Method B with the correlation/Mahalanobis metrics are compared to the SVM-based approach. The six test HS images (P008-01, P008-02, P012-01, P012-02, P015-01, and P020-01) are used to evaluate both proposed methodologies against the machine learning approach by a leave-one-patient-out cross-validation. On average, Method A provides an overall accuracy of 67.2 ± 11.5%, while Method B achieves 76.1 ± 12.4%. These results are lower than the accuracy obtained by the SVM-based approach (79.2 ± 15.6%). However, as can be seen in Figure 6, the per-class results for the remaining performance metrics present some improvements. The most relevant result for this particular application is the increment in the TT class sensitivity (Figure 6a) with respect to the SVM-based method: Method A and Method B achieve a median sensitivity of 47.8% and 31.3%, respectively, which represents an increase of 26.2 and 9.7 percentage points over the result obtained with the SVM approach (21.6%). In addition, the median sensitivity of the NT class is quite constant among the three approaches, being higher than 97% and reaching 99.7% in Method A. On the contrary, the HT class is penalized in Methods A and B, with the median sensitivity decreasing to 46.5% and 18%, respectively, compared to the SVM performance (92.9%). Nonetheless, in this particular application, the accurate identification and differentiation of the NT and TT classes is more important than the identification of the hypervascularized tissue, which can be distinguished by the naked eye or identified with an image processing algorithm based on the morphological properties of the blood vessels.
With respect to the specificity results (Figure 6b), all the values for the three proposed methods are quite similar, except for the NT and TT classes, where there are some slight differences among the studied methods. In the NT class, Method A drops its median specificity value to 70.4%, while Method B and the SVM scheme reach 88.8% and 87.3%, respectively. The specificity of the TT class is also slightly decreased in the two proposed methods, showing higher interquartile ranges (IQR) than the SVM-based approach, but with median values higher than 92%.
Regarding the F1-score results, Figure 6c shows that, in the TT class, Methods A and B improve the median values by 30.7% and 31%, respectively, compared to the result obtained by the SVM-based approach (25.7%). For the NT and BG classes, Method B obtains the best median results of 90.7% and 98.3%. In contrast, the SVM-based approach reaches the best result in the HT class (91.5%), which is consistent with the results obtained for the sensitivity metric.
Finally, Figure 6d shows the normalized MCC results, which take into account the unbalanced dataset. In these results, the median values of the TT class are quite similar (~66%) among the three methods, while the median of the NT class in Method B reaches the best result of 90.4%. On the contrary, the median of the HT class is reduced in Methods A and B by ~19% and ~5%, respectively, with respect to the SVM-based approach.
The qualitative results, represented as classification maps, are shown in Figure 7 for Methods A and B and the SVM-based approach. These maps allow evaluating the classification of all the HS images, including the non-labeled pixels. Figure 7a shows the synthetic RGB images with the tumor area marked by a yellow line, while Figure 7b-d presents the classification maps obtained with the SVM-based approach, Method A, and Method B, respectively. In these classification results, the proposed methods improve the labeling of the pixels in the tumor area with respect to the SVM-based approach; however, they also present more false positives in the non-tumor areas. Regarding the other tissue classes, the qualitative results are quite similar, except for the BG class, where Method B shows an accurate identification of the parenchymal area (exposed brain surface) in images P008-01, P008-02, and P012-02. Moreover, these results reveal that the low performance of the HT class in Method A (see Figure 6) is due to misclassifications between the BG and HT classes in images P008-02 and P015-01, where the main blood vessels are identified as background. This phenomenon does not occur in Method B, where, in general, the hypervascularized areas are well identified.
Regarding the comparison of the execution time among the three methods, Figure 8 shows the average time obtained for the six test HS images; a logarithmic scale is used for easier comparison. This execution-time cost involves both the training and classification for the SVM-based approach and the complete execution of both proposed methods in Figures 2 and 3. These results were computed by using MATLAB® on an Intel i7-4790K with a working frequency of 4.00 GHz and 8 GB of RAM. The SVM-based approach requires almost four hours to train and classify one HS image, whereas the proposed methods based on the EBEAE algorithm only require an average of ~30 s to train and classify the datasets, achieving similar accuracy results as discussed in the previous paragraphs. In summary, the proposed Methods A and B offer speedup factors of ~459× and ~429×, respectively, compared to the SVM-based approach.
The results obtained in this preliminary study were compared with other works that employed the same database for classification purposes. In [27], five in vivo brain surface HS images with confirmed grade IV glioblastoma tumor were used for brain cancer detection. The SVM algorithm was employed using an intra-patient methodology for evaluating the supervised classifier, obtaining specificity and sensitivity results higher than 99% for the TT class. However, due to the intra-patient methodology, where data from the same patient were employed for both training and test, the results were highly optimistic, since the inter-patient variability of the spectral data was not taken into account. In [28], the authors presented the results obtained with different deep-learning techniques using the in vivo human brain cancer HS dataset employed in this work and evaluated eight HS images using a leave-one-patient-out cross-validation methodology. The sensitivity obtained for the NT and TT classes was 90% and 42%, respectively.
Furthermore, the deep-learning results were compared with a linear SVM-based classifier that obtained a sensitivity for the NT and TT classes of 95% and 26%, respectively. However, our results cannot be directly compared to these previous studies, since the test HS images in the dataset were not the same. In any case, future works with new HS datasets will perform a broad comparison among different classification approaches for this particular case of in vivo brain cancer detection using HSI.

Conclusions
In this work, we proposed two methods based on a BLU algorithm (EBEAE) to classify intraoperative HS images of in vivo brain tissue, and we compared the results with a machine learning classification method based on a supervised SVM strategy. The main contribution of this paper is achieving a competitive classification performance with respect to the SVM strategy at a much lower computational cost. One important feature of our proposal is that both Methods A and B require roughly the same computational cost, but Method B offers the advantage of less variability in the classification performance. Furthermore, as originally intended, Method B is more accurate in identifying the BG class (F1-score and MCC metrics), and both Methods A and B improve the sensitivity and F1-score in detecting the TT class compared to the SVM-based approach.
In this study, we were able to achieve speedup factors of ~459× and ~429× by using the proposed methods with respect to the SVM-based approach, while maintaining, and even slightly improving, the classification metrics. Note that in Method A, the training stage using the ground-truth datasets is equivalent to extracting the characteristic end-members per class, and in Method B, we add the segmentation of the background class. Meanwhile, the labeling of the pixels in the HS images is equivalent to computing the minimum distance to the classes' sets. Both processes are significantly less complex than the training and labeling steps of the SVM-based classifier. However, the accuracy results could not be significantly improved due to the intrinsic variability in the intraoperative HS images.
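The minimum-distance labeling step described above can be sketched as follows. This is a minimal Euclidean-distance illustration, assuming each class is represented by a set of end-member spectra already extracted (e.g., by EBEAE); the function name, the dictionary layout, and the choice of the Euclidean metric are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def classify_min_distance(pixels, endmember_sets):
    """Label each pixel with the class whose end-member set is closest.

    pixels:         (n_pixels, n_bands) array of spectral signatures.
    endmember_sets: dict mapping class label -> (n_endmembers, n_bands) array
                    of characteristic end-members for that class.
    Returns an array with one class label per pixel.
    """
    labels = list(endmember_sets)
    # For each class, the distance of every pixel to its nearest end-member
    dists = np.stack(
        [
            np.linalg.norm(
                pixels[:, None, :] - endmember_sets[c][None, :, :], axis=2
            ).min(axis=1)
            for c in labels
        ],
        axis=1,
    )  # shape: (n_pixels, n_classes)
    return np.array(labels)[dists.argmin(axis=1)]

# Hypothetical two-band example: one end-member per class
pixels = np.array([[0.1, 0.1], [0.9, 0.9]])
ems = {"NT": np.array([[0.0, 0.0]]), "TT": np.array([[1.0, 1.0]])}
print(classify_min_distance(pixels, ems))  # ['NT' 'TT']
```

In practice, each class set would contain several end-members per class extracted during training, and the distance metric could be replaced by, e.g., spectral angle without changing the structure of the labeling step.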
For this particular application, reducing the computational time could allow using the spectral data from the current patient to obtain personalized classification results. This information could be combined with previous patient datasets to develop a classification model that takes into account inter-patient and intra-patient spectral variability. Moreover, the massive parallelization capabilities of the proposed BLU methods, together with the use of so-called snapshot HS cameras (cameras that are able to capture spectral and spatial information in a single shot) that provide real-time HSI, could achieve classification results in real time during clinical procedures, improving the surgical outcome and, hence, the patient's prognosis and quality of life.
One of the main limitations of this preliminary study is the reduced number of patients included in the HS brain cancer database, which limits the generalization of the classification results. This lack of data is due to the challenges of obtaining good-quality HS images during surgical procedures. For this reason, in future works, we will include more patients in the HS database in order to further validate the proposed methodology. Furthermore, the inclusion of morphological post-processing methods will be explored to reduce the misclassifications found in the qualitative classification results. In particular, the false positives of the tumor class could be reduced if the method is combined with spatial filtering algorithms.
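One common form of the morphological post-processing mentioned above can be sketched as a majority (mode) filter over the classification map: each pixel's label is replaced by the most frequent label in its spatial neighborhood, which suppresses isolated false positives. This is an illustrative sketch, not the specific spatial filtering algorithm the future work will adopt; the function name and window size are assumptions.

```python
import numpy as np

def majority_filter(label_map, size=3):
    """Replace each pixel's label with the most frequent label in its
    size x size neighborhood (edge pixels use replicated borders)."""
    pad = size // 2
    padded = np.pad(label_map, pad, mode="edge")
    out = np.empty_like(label_map)
    h, w = label_map.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + size, j:j + size].ravel()
            vals, counts = np.unique(window, return_counts=True)
            out[i, j] = vals[counts.argmax()]
    return out

# An isolated misclassified pixel (label 1) inside a uniform region (label 0)
# is removed by the filter.
noisy = np.zeros((5, 5), dtype=int)
noisy[2, 2] = 1
clean = majority_filter(noisy)
print(clean[2, 2])  # 0
```

Such a filter only affects small, spatially isolated misclassifications; larger connected false-positive regions would require stronger spatial regularization.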
Moreover, further research will be carried out to reduce the processing time of the proposed methodologies by using specific hardware accelerators, such as GPUs (graphics processing units) or FPGAs (field-programmable gate arrays), exploiting the parallelism of these platforms. As stated before, the algorithm acceleration will assist neurosurgeons in identifying the tumor area and its boundaries during the surgical procedure, reaching real-time performance.

Funding:
This work was supported in part by a Basic Science Grant of CONACYT (Ref. #254637); the Canary Islands Government through the ACIISI (Canarian Agency for Research, Innovation and the Information Society); and the ITHACA project "Hyperspectral Identification of Brain Tumors" (ProID2017010164). Additionally, this work was completed while Samuel Ortega and Raquel Leon were beneficiaries of a pre-doctoral grant given by the "Agencia Canaria de Investigación, Innovación y Sociedad de la Información (ACIISI)" of the "Consejería de Economía, Industria, Comercio y Conocimiento" of the "Gobierno de Canarias", which is partly financed by the European Social Fund (FSE) (POC 2014-2020, Eje 3 Tema Prioritario 74 (85%)). Alejandro Cruz-Guerrero acknowledges the financial support of CONACYT through a doctoral fellowship (#865747).

Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.

Abbreviations
The following abbreviations are used in this manuscript: