Classification of Microscopic Hyperspectral Images of Blood Cells Based on Lightweight Convolutional Neural Network

: Hyperspectral imaging has emerged as a novel imaging modality in the medical field, offering the ability to acquire images of biological tissues while simultaneously providing biochemical insights for in-depth tissue analysis. This approach facilitates early disease diagnosis, presenting advantages over traditional medical imaging techniques. Addressing challenges such as the computational burden of existing convolutional neural networks (CNNs) and imbalances in sample data, this paper introduces a lightweight GhostMRNet for the classification of microscopic hyperspectral images of human blood cells. The proposed model employs Ghost Modules to replace conventional convolutional layers and a cascading approach with small convolutional kernels for multiscale feature extraction, aiming to enhance feature extraction capabilities while reducing computational complexity. Additionally, an SE (Squeeze-and-Excitation) module is introduced to selectively allocate weights to features in each channel, emphasizing informative features and efficiently achieving spatial–spectral feature extraction in microscopic hyperspectral imaging. We evaluated the performance of the proposed GhostMRNet and compared it with other state-of-the-art models using two real medical hyperspectral image datasets. The experimental results demonstrate that GhostMRNet exhibits a superior performance, with an overall accuracy (OA), average accuracy (AA), and Kappa coefficient reaching 99.965%, 99.565%, and 0.9925, respectively. In conclusion, the proposed GhostMRNet achieves a superior classification performance at a smaller computational cost, thereby providing a novel approach for blood cell detection.


Introduction
Blood analysis is commonly employed as a routine medical diagnostic method, playing a crucial role in the early detection and treatment of numerous diseases. Generally, variations in the content of blood cells and various components in the plasma, as well as morphological changes in blood cells, can provide valuable information when the internal environment of the body undergoes alterations. This aids healthcare professionals in assessing the patient's condition [1]. Traditional blood cell analysis involves the manual counting and recording of the number of red blood cells, white blood cells, and platelets in blood smears prepared through dilution under a microscope. However, the human eye cannot perform in-depth analyses or differentiate the chemical properties of cells. Prolonged microscopic observation is prone to observer fatigue, increasing the likelihood of misdiagnosis. The advent of blood analysis instruments has partially replaced manual work, allowing for accurate and rapid cell count statistics. Nevertheless, current instruments are not flawless in blood analysis, with errors occurring relatively frequently. Therefore, to ensure the accuracy and reliability of diagnostic results, re-examination is necessary. Moreover, certain pathological cells, such as abnormal lymphocytes, remain undetectable to blood analyzers. The detection of such pathological information is critical and indispensable for the early diagnosis of some diseases [2].
The primary contributions of this study are delineated as follows:
(1) A lightweight GhostMRNet model is proposed. The model was tested on a real dataset of blood cell microscopic hyperspectral images, achieving an overall classification accuracy, average classification accuracy, and Kappa coefficient of 99.965%, 99.565%, and 0.9925, respectively. These results substantiate the effectiveness of our proposed approach, underscoring its research value and practical significance in assisting medical professionals in disease diagnosis.
(2) To simultaneously achieve a lightweight design and enhance the network's feature extraction capability, the GhoMR block is introduced. A Ghost Module replaces conventional convolutional layers, effectively reducing the network's computational complexity, and multiscale feature extraction is realized through the cascading of convolutional kernels. Additionally, to further augment the feature representation capability and reduce redundant information, the SE module is incorporated to allocate weights to the features in each channel, facilitating the fusion of inter-channel features.
(3) To address the issue of imbalanced sample classes, focal loss is employed. By adjusting the weights of the loss function, focal loss focuses on hard-to-classify samples, contributing to a balanced emphasis on the different categories. This enhances the classification accuracy, particularly for rare categories, and mitigates the impact of class imbalance on model training.

Related Work

Ghost Module
The literature [25] posits that redundant feature maps in CNNs exhibit superior classification performances. Consequently, the Ghost Module was introduced as a mechanism to enhance classification efficacy by replacing conventional convolutions with computationally less intensive "inexpensive" linear operations, as depicted in Figure 1. The Ghost Module comprises two steps: in the first step, a limited number of convolutional kernels are employed for standard convolutional operations, generating a subset of feature maps. In the second step, linear operations are applied to the feature maps obtained in the first step, generating additional feature maps while maintaining the same number of channels.

Let the input feature map be of size h × w × c, the kernel size be k × k, and the convolutional output be h′ × w′ × n. The computational cost of ordinary convolution is then n · h′ · w′ · c · k · k. The Ghost Module instead generates m = n/s intrinsic feature maps by ordinary convolution; for the linear operations, there is one identity mapping and m · (s − 1) = (n/s) · (s − 1) linear operations, each with an average kernel size of d × d. The theoretical speedup ratio of replacing ordinary convolution with the Ghost Module can therefore be expressed as follows:

r = (n · h′ · w′ · c · k · k) / ((n/s) · h′ · w′ · c · k · k + (s − 1) · (n/s) · h′ · w′ · d · d) = (c · k · k) / ((1/s) · c · k · k + ((s − 1)/s) · d · d) ≈ (s · c) / (c + s − 1) ≈ s,

since the kernel size d × d is of the same magnitude as k × k and s ≪ c. From the above calculation, it is evident that, for the same operation, the Ghost Module reduces the computational cost by a factor of s compared with ordinary convolution. Overall, the Ghost Module achieves a light weight by replacing a larger convolutional kernel with multiple smaller ones through replication and grouping strategies. These reductions in the number of parameters and computational complexity are achieved while preserving network performance. Consequently, the Ghost Module proves to be an effective tool for enhancing network performance in resource-constrained environments.
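As a concrete illustration, the two-step structure described above can be sketched in PyTorch. The ratio s = 2, the cheap-operation kernel d = 3, and the class name are illustrative choices for this sketch, not the authors' implementation:

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Sketch of a Ghost Module: a standard convolution generates a small set
    of intrinsic feature maps, and a cheap depthwise convolution derives the
    remaining "ghost" maps from them (hypothetical defaults: s=2, d=3)."""
    def __init__(self, in_ch, out_ch, kernel_size=1, s=2, d=3):
        super().__init__()
        intrinsic = out_ch // s                  # m = n / s intrinsic maps
        cheap = out_ch - intrinsic               # m * (s - 1) ghost maps
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, intrinsic, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(intrinsic), nn.ReLU(inplace=True))
        # "inexpensive" linear operations: a grouped (depthwise) d x d conv
        self.cheap = nn.Sequential(
            nn.Conv2d(intrinsic, cheap, d, padding=d // 2,
                      groups=intrinsic, bias=False),
            nn.BatchNorm2d(cheap), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)                      # step 1: ordinary convolution
        return torch.cat([y, self.cheap(y)], dim=1)  # step 2: append ghost maps
```

With s = 2, half of the output channels come from the cheap depthwise branch, which is where the roughly s-fold cost reduction derives from.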

SE Block
SENet [26] is a representative channel-wise attention model. Essentially, it leverages contextual feature maps to learn weight distributions, subsequently applying the learned weights to the original feature maps through weighted summation. This process aims to extract the information deemed most crucial for the target task.
The principle of the SE block is depicted in Figure 2, comprising three steps: squeeze, excitation, and feature recalibration. The SE block first compresses the feature map of dimensions M × N × C through global average pooling, reducing it to 1 × 1 × C. This captures the global context of each channel. Subsequently, a fully connected layer predicts the importance of each channel, yielding weights corresponding to the significance of the different channels. Finally, these weights are applied to the corresponding channels of the input feature map, resulting in a feature map with distinct channel-wise weights.
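A minimal PyTorch sketch of the squeeze, excitation, and recalibration steps follows; the reduction ratio r = 4 used between the two fully connected layers is an assumed default, not a value taken from this paper:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation sketch: global average pooling squeezes each
    channel to a scalar, two FC layers predict per-channel weights, and the
    weights rescale the input (assumed reduction ratio r=4)."""
    def __init__(self, channels, r=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r), nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: M x N x C -> 1 x 1 x C
        w = self.fc(w).view(b, c, 1, 1)  # excitation: per-channel weights
        return x * w                     # feature recalibration
```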


GhostMRNet
Due to the limited number of samples in the microscopic hyperspectral blood cell data used in this study, and because the annotated samples fall into only two classes, red blood cells and white blood cells, the adoption of excessively deep or complex network structures could lead to overfitting. Therefore, based on the principle of simplicity, this paper proposes a lightweight GhostMRNet model for the classification of microscopic hyperspectral blood cell images.
The overall architecture of the network is illustrated in Figure 3. Initially, noise reduction and dimensionality reduction of the hyperspectral images are achieved through median filtering and principal component analysis (PCA). The preprocessed hyperspectral images are then partitioned into image blocks as inputs to the network. Feature extraction is first performed using a standard 3 × 3 convolutional layer. Subsequently, two GhoMR blocks are employed to extract features at different scales. The GhoMR block employs the Ghost Module instead of conventional convolutions, reducing the number of parameters and computational complexity while maintaining network performance. Multiscale feature extraction is achieved through the cascading of convolutional kernels. Furthermore, an SE block is utilized to recalibrate the weights of the features in each channel, learning the correlations between features in different channels and facilitating feature fusion. The introduction of residual connections helps prevent overfitting during deep feature extraction, thereby enhancing classification efficiency and accuracy. The final step outputs the image prediction results through fully connected layers. To address the issue of imbalanced sample classes, focal loss is employed to balance the model's focus on the different categories and improve the classification accuracy.
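The pipeline above can be outlined in PyTorch as follows. The layer widths, patch size, and the use of plain convolutional blocks as stand-ins for the GhoMR blocks are simplifying assumptions made for illustration only:

```python
import torch
import torch.nn as nn

class GhostMRNetSketch(nn.Module):
    """Schematic of the described pipeline (hypothetical widths):
    3x3 conv stem -> two blocks (stand-ins for GhoMR) -> global
    average pooling -> fully connected classifier."""
    def __init__(self, in_bands=10, num_classes=2, width=32):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_bands, width, 3, padding=1, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(width, width, 3, padding=1, bias=False),
                          nn.BatchNorm2d(width), nn.ReLU(inplace=True))
            for _ in range(2)])                    # stand-ins for GhoMR blocks
        self.head = nn.Linear(width, num_classes)  # fully connected output

    def forward(self, x):                          # x: (B, bands, 9, 9) patches
        f = self.blocks(self.stem(x))
        return self.head(f.mean(dim=(2, 3)))       # global average pool -> logits
```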

GhoMR
The spectral information in hyperspectral images is rich, and effectively extracting image features while maintaining a lightweight structure poses a challenge. The GhoMR block proposed in this paper employs the Ghost Module instead of conventional convolutional layers, effectively reducing the network's computational complexity. To achieve multiscale feature extraction, small convolutional kernels are cascaded instead of using large ones, enhancing the network's nonlinearity while reducing the computational load. The Ghost Module utilizes "inexpensive" linear operations to generate similar feature maps, yet the contributions of the different feature maps vary. Hence, the SE block was introduced to allocate weights to the features in each channel, selectively emphasizing informative features and suppressing irrelevant channel features.

The overall architecture of the GhoMR block is illustrated in Figure 4. Assuming that the model's input is I ∈ R^(W×H×C), the feature extraction of the GhoMR block is divided into the following three steps: (1) The feature Y1 ∈ R^(W×H×C) is extracted using a Ghost Module with a 1 × 1 kernel: Y1 = Ghost_{1×1}(I). (2) The Split operation partitions the N feature maps of Y1 into four subsets, denoted by n_i, where 1 ≤ i ≤ 4. Each subset, excluding n1, is processed by a 3 × 3 Ghost Module. The output o_{i−1} of the preceding Ghost Module undergoes hierarchical fusion through summation with the current subset n_i, generating the feature o_i:

o_i = n_i, i = 1;
o_i = Ghost_{3×3}(n_i + o_{i−1}), 2 ≤ i ≤ 4,

where + represents element-wise summation.
(3) Ultimately, the output mappings o1, o2, o3, and o4 are concatenated along their depth, creating a unified feature block encompassing all information. This consolidated feature block undergoes feature recalibration through a 1 × 1 Ghost Module and an SE block. Subsequently, it is fused with the input I through a residual link to generate the final output O. This operation is denoted as follows:

O = I + SE(Ghost_{1×1}(o1 ⊕ o2 ⊕ o3 ⊕ o4)),

where ⊕ represents concatenation, and + denotes element-wise summation.
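Under the assumption that plain convolutions stand in for the 1 × 1 and 3 × 3 Ghost Modules (and with the SE recalibration omitted for brevity), the three steps can be sketched as:

```python
import torch
import torch.nn as nn

class GhoMRSketch(nn.Module):
    """Illustrative GhoMR block: plain convolutions stand in for the Ghost
    Modules, and the SE step is omitted. C must be divisible by 4 so the
    Split into n1..n4 is even."""
    def __init__(self, C):
        super().__init__()
        self.reduce = nn.Conv2d(C, C, 1)        # step 1: Y1 via a 1x1 conv
        self.branch = nn.ModuleList(
            [nn.Conv2d(C // 4, C // 4, 3, padding=1) for _ in range(3)])
        self.fuse = nn.Conv2d(C, C, 1)          # step 3: 1x1 conv after concat

    def forward(self, x):
        n = torch.chunk(self.reduce(x), 4, dim=1)  # step 2: split into n1..n4
        outs = [n[0]]                              # o1 = n1 (identity branch)
        for i in range(1, 4):                      # o_i = Conv3x3(n_i + o_{i-1})
            outs.append(self.branch[i - 1](n[i] + outs[-1]))
        o = self.fuse(torch.cat(outs, dim=1))      # concatenate along depth
        return x + o                               # residual link to the input
```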

Focal Loss
Table 1 presents the proportions of the two types of blood cell samples. Considering the imbalance between the positive and negative classes of red and white blood cells in the hyperspectral images, the loss function employed is an improvement upon the cross-entropy loss known as the focal loss [27]. The formula for the focal loss is given by

FL = −α(1 − y_pre)^γ log(y_pre), y_label = 1;
FL = −(1 − α)(y_pre)^γ log(1 − y_pre), y_label = 0.

Here, y_label denotes the true sample labels, while y_pre represents the predicted sample labels; α is used to address the imbalance between positive and negative samples, and γ is employed to handle the imbalance between easy and difficult samples. Based on debugging experience, α was set to 0.6 and γ was set to 1.
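A sketch of this piecewise loss in PyTorch, using the paper's α = 0.6 and γ = 1 as defaults; the function name and the probability clamp are illustrative choices:

```python
import torch

def focal_loss(y_pre, y_label, alpha=0.6, gamma=1.0, eps=1e-7):
    """Binary focal loss matching the piecewise form above.
    y_pre: predicted probabilities in (0, 1); y_label: 0/1 targets."""
    y_pre = y_pre.clamp(eps, 1 - eps)            # guard the logarithms
    pos = -alpha * (1 - y_pre) ** gamma * torch.log(y_pre)       # y_label = 1
    neg = -(1 - alpha) * y_pre ** gamma * torch.log(1 - y_pre)   # y_label = 0
    return torch.where(y_label == 1, pos, neg).mean()
```

The (1 − y_pre)^γ factor shrinks the contribution of easy, confidently correct samples, so training gradients concentrate on the hard minority-class pixels.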

Dataset Preprocessing
The human blood cell dataset utilized in this study was collected using a microscopic hyperspectral imaging system consisting of a microscope and a silicon charge-coupled device. Blood smears used for dataset collection were provided by the Department of Hematology, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, and stained with Giemsa biological dye. Datasets 1-3 and 2-2 originated from distinct patients, and cellular categorization was performed under the supervision of expert medical professionals. As illustrated in Figure 5, the original size of the 1-3 human blood cell dataset is 974 × 799 × 33, and that of the 2-2 human blood cell dataset is 462 × 451 × 33. On the left are single-band dataset images, and on the right are the data labels. In the label images, black represents the background, red denotes red blood cells, and white signifies white blood cells. Since background data contribute insignificantly to the classification of red and white blood cells, this study excluded the background during classification, focusing solely on the classification of red and white blood cells. Because the original images were blurry, preprocessing began with median filtering and normalization; PCA was then employed to reduce the dimensionality of the hyperspectral images.
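The preprocessing chain (median filtering, normalization, PCA) might be sketched as follows. The 3 × 3 filter size and per-band filtering are assumptions, and scipy/scikit-learn stand in for whatever tooling the authors actually used:

```python
import numpy as np
from scipy.ndimage import median_filter
from sklearn.decomposition import PCA

def preprocess_hsi(cube, n_components=10):
    """Denoise each spectral band with a 3x3 median filter, min-max
    normalize the cube, then reduce the spectral dimension with PCA.
    cube: (H, W, bands) hyperspectral image; returns (H, W, n_components)."""
    cube = np.stack([median_filter(cube[..., b], size=3)
                     for b in range(cube.shape[-1])], axis=-1)
    cube = (cube - cube.min()) / (cube.max() - cube.min() + 1e-12)
    h, w, bands = cube.shape
    flat = cube.reshape(-1, bands)                 # pixels x bands
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(h, w, n_components)
```

The reduced cube would then be tiled into window-sized patches (e.g., 9 × 9) centered on each labeled pixel before being fed to the network.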

Experiment Setting
The GhostMRNet proposed in this paper was implemented in Python using the PyTorch deep learning framework. All experiments were conducted on a computer with a 64-bit Windows 10 operating system, 16 GB of RAM, and an NVIDIA GeForce GTX 1660 Ti 6 GB GPU (NVIDIA Corporation, Santa Clara, CA, USA). To mitigate biases introduced by the diversity of training samples, this paper reports the average results and standard deviations of 10 experiments conducted under identical conditions for the self-experimentation, ablation, parameter configuration, and model comparison experiments.
To accurately reflect the classification performance of the designed model, stochastic gradient descent was employed as the optimization algorithm, with the learning rate set to 0.001, the batch size set to 256, and 50 epochs. For training sample selection, 10% of the samples were randomly chosen, and the remaining 90% were designated as the test set. To assess the classification accuracy comprehensively, the overall accuracy (OA), average accuracy (AA), precision, recall, F1-score, and Kappa coefficient were utilized for the quantitative evaluation of the classification performance.
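The OA, AA, and Kappa metrics can be computed as sketched below, with scikit-learn supplying the confusion matrix and Kappa (precision, recall, and F1 are available analogously from the same library); the function name is illustrative:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

def evaluate(y_true, y_pred):
    """OA: overall pixel accuracy; AA: mean of per-class recall;
    Kappa: chance-corrected agreement between predictions and labels."""
    cm = confusion_matrix(y_true, y_pred)
    oa = np.trace(cm) / cm.sum()
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))   # per-class accuracy mean
    kappa = cohen_kappa_score(y_true, y_pred)
    return oa, aa, kappa
```

AA and Kappa are the more informative numbers here: with red blood cells vastly outnumbering white blood cells, OA alone can look excellent even if the minority class is misclassified.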

Classification Results of GhostMRNet
The GhostMRNet was applied for classification on the two hyperspectral images, and the classification results are illustrated in Figure 6 and presented in Table 2. In Figure 6, (a) represents the annotated image generated from medical experts' annotations, where black signifies the excluded background, red denotes red blood cells, and white signifies white blood cells. (b) displays the classification predictions from GhostMRNet. It is evident that, overall, the predictions from this method exhibit a high degree of alignment with the ground-truth labels.

From Table 2, it can be observed that there is minimal difference in the comprehensive evaluation metrics across the two images, with an average OA of 99.965% and an average AA of 99.565%. This indicates that the GhostMRNet model exhibits a consistently excellent classification performance for both cell types, demonstrating its robustness in addressing sample imbalance issues. The average Kappa coefficient of 0.9925 further signifies a high level of consistency between the predicted and true categories, emphasizing the model's accuracy in classification.

Experimental Parameters
In the classification of human blood cells based on microscopic hyperspectral imaging, the classification performance is primarily influenced by three factors: the window size, the number of principal components, and the proportion of training samples. This subsection evaluates the impact of different parameter settings on the blood cell classification results through a comparative analysis.

Effect of Window Size
To investigate the impact of the window size on the classification results, window sizes of 5 × 5, 7 × 7, 9 × 9, and 11 × 11 were employed while keeping all other parameters constant. The classification results are depicted in Figure 7. From the experimental results, it is evident that increasing the window size has a positive impact on the classification of human blood cells in the two microscopic hyperspectral images. However, when the window size exceeds 9 × 9, the improvement in the classification results becomes marginal, accompanied by a substantial increase in computational overhead. Therefore, in this study, the window size was set to 9 × 9.


Effect of the Number of Principal Components
To investigate the impact of the dimensionality of the reduced data on the classification results, the dimensionality was set to 5, 10, 15, and 20 using PCA while keeping all other parameters constant. The classification results are illustrated in Figure 8. From the experimental results, it is observed that the classification performance on the 2-2 blood cell dataset improves as the PCA dimensionality increases. For the 1-3 human blood cell dataset, the PCA dimensionality and classification performance exhibit a positive correlation from dimensions 5 to 10; however, when the PCA dimensionality exceeds 10, the classification performance begins to decline. Extremely low dimensions may discard spectral characteristics reflected in the images, while excessively high dimensions may introduce more redundant information and incur higher computational costs. Moreover, beyond a data dimensionality of 10, there is limited improvement in the accuracy and Kappa coefficient values. Therefore, reducing the dimensionality to around 10 is the more cost-effective choice.



Effect of Training Ratio
The small-sample problem is a prevalent issue in existing hyperspectral image (HSI) classification methods. To assess the classification performance of GhostMRNet under small training sets, we varied the training set proportion over 5%, 10%, 15%, and 20%. The classification results are depicted in Figure 9. From the experimental results, it is evident that the proposed classification method maintains good accuracy even in scenarios with small training sets. A substantial improvement in classification performance is observed when the training set proportion increases from 5% to 10%. However, beyond a training set proportion of 10%, the enhancement in classification performance becomes less noticeable. Therefore, considering GPU computing power and experimental time, a training set proportion of around 10% was deemed a suitable choice.

Ablation Experiments
To validate the effectiveness of the proposed GhoMR module, as well as the Ghost Module and SE blocks within it, ablation experiments were conducted. Experiment 1 replaces the Ghost Module in GhoMRNet with a regular convolutional layer, Experiment 2 removes the SE block from the GhoMR module, and Experiment 3 substitutes the multiscale feature extraction part of the GhoMR module with regular convolutional layers. The experiments used the same dataset partitioning method and hyperparameter settings, with the network input being preprocessed microscopic hyperspectral image blocks. The results are presented in Tables 3 and 4. From Tables 3 and 4, it can be observed that, after incorporating the Ghost Module, the network parameters were reduced by 24%, yet the classification performance did not decrease significantly. This suggests that the Ghost Module can substantially reduce the network parameters without compromising the classification performance. After the introduction of multiscale feature extraction in the GhoMR, the AAs for datasets 1-3 and 2-2 increased by 0.64% and 0.54%, respectively, and the Kappa coefficients increased by 0.007 and 0.004, indicating that the multiscale structure in the GhoMR effectively enhances feature extraction capabilities, thereby improving the classification performance for minority samples. However, upon removal of the SE block from the GhoMR module, a significant decline in model performance occurred, particularly evident in dataset 2-2. This observation indicates that the SE block effectively enhances the crucial feature channels extracted through multiscale feature extraction, suppressing redundant features and thereby strengthening the network's feature extraction capability.

Comparison Results of Different Methods
To further evaluate the classification performance of the proposed model on microscopic hyperspectral images, this subsection compares it with GhostNet [25], ResNet34 [28], a CNN-based classification model [24], and an SVM.
In the CNN-based classification model [24], there were eight weighted layers, including the input layer, two convolutional layers, two max-pooling layers, two fully connected layers, and a final output layer. The parameters for each layer were set as follows: in the second layer, the number of convolutional filters was 16, and the filter size was 3 × 3; in the third layer, the pooling filter size was 3 × 3; in the fourth layer, the number of convolutional filters was 32, and the filter size was 3 × 3; in the fifth layer, the pooling filter size was 3 × 3; and the numbers of neurons in the sixth and seventh layers were 256 and 2, respectively. The same preprocessing and experimental parameter settings as in the baseline experiment were applied to each model. The experimental results are shown in Tables 5 and 6. From the figures, it can be observed that GhostMRNet and ResNet34 exhibit superior classification performance, while GhostNet and the CNN-based classification model [24] struggle to identify white blood cells. The SVM shows the poorest classification performance, especially on dataset 1-3, where it fails to effectively classify red and white blood cells.
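For orientation, the weight count of the CNN described above can be sketched as follows. The number of input bands after PCA and the flattened feature size are illustrative assumptions, since the input patch size is not stated in this excerpt:

```python
def conv_w(c_in, c_out, k):
    # Weights plus biases of a k x k convolution
    return c_in * c_out * k * k + c_out

def fc_w(n_in, n_out):
    # Weights plus biases of a fully connected layer
    return n_in * n_out + n_out

bands = 10                   # assumed number of PCA components fed in
conv1 = conv_w(bands, 16, 3)  # layer 2: 16 filters, 3x3
conv2 = conv_w(16, 32, 3)     # layer 4: 32 filters, 3x3
# Spatial size after two 3x3 poolings depends on the patch size;
# assume a flattened vector of 32 * 3 * 3 = 288 elements for illustration.
fc1 = fc_w(288, 256)          # layer 6: 256 neurons
fc2 = fc_w(256, 2)            # layer 7: 2 output neurons
total = conv1 + conv2 + fc1 + fc2
print(conv1, conv2, fc1, fc2, total)
```

Under these assumptions almost all of the weights sit in the first fully connected layer, a common property of small CNN classifiers of this shape.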
Tables 5 and 6 indicate that, under the same focal loss framework, when faced with highly imbalanced class distributions, GhostNet and the CNN-based classification model [24] both struggle to perform well, particularly in recognizing white blood cells. The SVM, on the other hand, suffered from severe overfitting and could not correctly distinguish between red and white blood cells. In contrast, GhostMRNet accurately classified red and white blood cells and outperformed ResNet34 in all evaluation metrics. This demonstrates that the proposed GhostMRNet, utilizing multiscale feature extraction, effectively captures the features of a small amount of data, achieving excellent performance in the classification of imbalanced human blood cell datasets.
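The focal loss framework referred to above down-weights well-classified samples so that the scarce white-blood-cell pixels are not drowned out during training. A minimal NumPy sketch of the binary form, with illustrative α and γ values that are not necessarily those used in the paper:

```python
import numpy as np

def focal_loss(y_true, p_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: the (1 - p_t)**gamma factor shrinks the
    contribution of easy examples; alpha rebalances the two classes."""
    p = np.clip(p_pred, eps, 1.0 - eps)
    p_t = np.where(y_true == 1, p, 1.0 - p)          # prob of the true class
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)
    return -np.mean(alpha_t * (1.0 - p_t) ** gamma * np.log(p_t))

y = np.array([1, 1, 0, 0])
easy = focal_loss(y, np.array([0.95, 0.90, 0.05, 0.10]))  # confident, correct
hard = focal_loss(y, np.array([0.60, 0.55, 0.40, 0.45]))  # near the boundary
print(easy, hard)
```

Confidently correct predictions contribute far less loss than borderline ones, which is what lets minority-class errors dominate the gradient.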

Discussion
Blood detection is crucial for the early detection and treatment of many diseases. Although existing blood analyzers can accurately and rapidly perform cell counts, their analysis remains imperfect. Microscopic hyperspectral imaging provides spatial and spectral information to assist in blood detection. In this study, we propose a lightweight end-to-end network that utilizes multiscale feature extraction and SE modules to extract the spatial and channel features of microscopic hyperspectral images, while employing Ghost Modules to reduce network parameters, effectively achieving blood cell classification. We conducted extensive experiments to evaluate the effectiveness of our model. Compared to other well-known networks, the proposed network achieves better performance. The experimental results demonstrate the value of the GhoMR design and the introduction of SE modules.
Despite our model outperforming others, there are still limitations. During data preprocessing, we employed PCA for dimensionality reduction to reduce spectral redundancy and computational load. PCA is an effective dimensionality reduction method, but its linear transformation disrupts the original spatial geometry of medical hyperspectral images, which may adversely affect the dimensionality reduction. Therefore, incorporating blood cell biochemical properties into band selection for medical hyperspectral images would be a better dimensionality reduction approach.
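For reference, the PCA preprocessing step can be sketched with a plain SVD on the flattened pixel-by-band matrix. The cube shape and component count below are toy values, not the dimensions of the blood cell datasets:

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Project an H x W x B hyperspectral cube onto its top principal
    spectral components, returning an H x W x n_components cube."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    X -= X.mean(axis=0)                      # center each spectral band
    # Right singular vectors of the pixel-by-band matrix are the PCA axes
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return (X @ vt[:n_components].T).reshape(h, w, n_components)

rng = np.random.default_rng(0)
cube = rng.random((8, 8, 60))                # toy 60-band cube
reduced = pca_reduce(cube, 10)
print(reduced.shape)                         # (8, 8, 10)
```

Because each pixel's spectrum is treated as an independent vector, the projection ignores spatial neighborhoods entirely, which is exactly the geometric limitation noted above.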

Conclusions
This study primarily addresses human blood cell classification based on microscopic hyperspectral imaging. A lightweight GhostMRNet model was proposed for the classification of microscopic hyperspectral images of blood cells. The GhoMR block uses Ghost Modules instead of conventional convolutional layers and employs a cascading approach with small convolutional kernels to achieve multiscale feature extraction, aiming to enhance network feature extraction capabilities while reducing computational complexity. Additionally, the SE block was introduced to allocate weights to features in each channel, selectively emphasizing informative features while suppressing less crucial channel features. The GhoMR block effectively extracts spatial and spectral features from microscopic hyperspectral images, yielding an improved classification performance at lower computational cost.
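The SE block's channel reweighting described above can be sketched in a few lines of NumPy. The weight shapes, reduction ratio, and random inputs are illustrative, not the trained parameters of GhostMRNet:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation on a C x H x W feature map:
    global average pool -> FC-ReLU-FC-sigmoid -> per-channel gates."""
    z = x.mean(axis=(1, 2))                      # squeeze: shape (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))    # excitation: gates in (0, 1)
    return x * s[:, None, None]                  # reweight each channel

rng = np.random.default_rng(0)
c, r = 16, 4                                     # channels, reduction ratio
x = rng.random((c, 8, 8))
w1 = rng.standard_normal((c // r, c)) * 0.1      # squeeze FC weights
w2 = rng.standard_normal((c, c // r)) * 0.1      # excitation FC weights
y = se_block(x, w1, w2)
print(y.shape)                                   # (16, 8, 8)
```

Since the gates lie in (0, 1), the block can only attenuate channels relative to the input, learning to pass informative channels nearly unchanged while suppressing redundant ones.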
Several comparative experiments were conducted in this study. Firstly, by comparing models with and without the GhoMR block, as well as GhoMR blocks without multiscale feature extraction, the effectiveness of the GhoMR block in reducing network parameters while enhancing classification performance was demonstrated. Secondly, through comparisons with other deep learning and machine learning methods, the results indicate that GhostMRNet exhibits a superior classification performance in blood cell hyperspectral image tasks at lower computational cost, demonstrating its robustness.
It is worth noting that the experiments in this study were conducted solely on two microscopic hyperspectral images of blood cells from the same time period. Future research could extend the experiments to blood cell hyperspectral datasets from different time periods to comprehensively assess the model's performance and enhance its practical application value in medical diagnosis.

Figure 2.
Figure 2. Structure of SE block.

Figure 4.
Figure 4. The multiscale outputs o are concatenated along their depth, creating a unified feature block encompassing all information. This consolidated feature block undergoes feature recalibration through a 1 × 1 Ghost Module and an SE block. Subsequently, it is fused with the input I through a residual link to generate the final output.
In the focal loss, y_label denotes the true sample labels, while y_pre represents the predicted sample labels; α is used to address the imbalance between positive and negative samples.

Figure 5.
Figure 5. Single-band images and ground truth labels of blood cell hyperspectral images (images on the left are single-band images, and those on the right are the ground truth labels). (a) Single-band image and ground truth label of blood cell 1-3; (b) single-band image and ground truth label of blood cell 2-2.

Figure 6.
Figure 6. Classification result of GhostMRNet ((a,b) represent the label map and classification results for blood cells 1-3, while (c,d) represent the label map and classification results for blood cells 2-2).

Table 1.
The proportions of red and white blood cell samples.
Note: The standard deviation is retained to four decimal places.

Table 5.
Classification results of different models on dataset 1-3.