Article

Computer Viewing Model for Classification of Erythrocytes Infected with Plasmodium spp. Applied to Malaria Diagnosis Using Optical Microscope

1 Laboratorio de Investigación en Salud de Precisión, Departamento de Procesos Diagnósticos y Evaluación, Facultad de Ciencias de la Salud, Universidad Católica de Temuco, Temuco 4780000, Chile
2 Laboratorio SouthGenomics SpA, Temuco 4780000, Chile
3 Centro de Investigación, Innovación y Creación UCT (CIIC-UCT), Universidad Católica de Temuco, Temuco 4780000, Chile
4 Departamento de Procesos Terapéuticos, Facultad de Ciencias de la Salud, Universidad Católica de Temuco, Temuco 4780000, Chile
* Author to whom correspondence should be addressed.
Medicina 2025, 61(5), 940; https://doi.org/10.3390/medicina61050940
Submission received: 11 April 2025 / Revised: 6 May 2025 / Accepted: 13 May 2025 / Published: 21 May 2025
(This article belongs to the Section Hematology and Immunology)

Abstract

Background and Objectives: Malaria is a disease that can result in a variety of complications. Diagnosis is carried out using an optical microscope and depends on operator experience. The use of artificial intelligence to identify morphological patterns in erythrocytes would improve diagnostic capability. The objective of this study was therefore to establish computer viewing models able to classify blood cells infected with Plasmodium spp. to support malaria diagnosis by optical microscope. Materials and Methods: A total of 27,558 images of human blood sample extensions were obtained from a public data bank for analysis; half were of parasite-infected red cells (n = 13,779), and the other half were of uninfected erythrocytes (n = 13,779). Six models (five machine learning algorithms and one pre-trained convolutional neural network) were assessed, and the performance of each was measured using metrics such as accuracy (A), precision (P), recall, F1 score, and area under the curve (AUC). Results: The model with the best performance was VGG-19, with an AUC of 98%, accuracy of 93%, precision of 92%, recall of 94%, and F1 score of 93%. Conclusions: Based on the results, we propose a convolutional neural network model (VGG-19) for malaria diagnosis that can be applied in low-complexity laboratories thanks to its ease of implementation and high predictive performance.

1. Introduction

Malaria is a parasitosis caused by protozoa of the genus Plasmodium, present in tropical and subtropical regions [1]. According to the WHO, 249 million cases and 608,000 deaths were reported during 2022 in 85 countries where the disease is endemic, with Africa accounting for 94% of the confirmed cases [2]. Although the mortality rate of the disease has fallen, it is still considered one of the principal causes of death worldwide [3,4]. The described complications include important alterations in various organs, a consequence of the parasite invading the liver and the erythrocytes to complete its life cycle [5,6,7].
Although malaria can be diagnosed by rapid diagnostic tests or by detecting the parasite using molecular methods [3,8,9], these procedures must be carried out in specialized laboratories [10]. Optical microscopes are still considered to be the gold standard for diagnosis [11]. Although this is an accessible method for basic clinical laboratories, it depends on the observer’s expertise [8,11], which may have an impact on diagnostic capability.
In view of this difficulty, computerized microscope methods have recently become increasingly important [12,13]. The development of deep learning methodologies with convolutional neural networks (CNNs) has improved the results of parasite identification compared with traditional techniques [14]. These automatic learning tools, based on morphological characteristics and patterns, have proven themselves to be useful for diagnosing malaria [12,15,16].
The availability of open-access data and images is crucial for establishing prediction models, since access to a large dataset is fundamental to ensure robust and reliable results [17,18]. Therefore, the objective of this study was to compare the performance of different machine learning (ML) models (Support Vector Machine, Random Forest, Decision Tree, K-Nearest Neighbors, and GaussianNB) and one pre-trained convolutional neural network (CNN) (VGG-19) in classifying erythrocytes infected with Plasmodium spp.

2. Materials and Methods

2.1. Construction of Dataset

The methodology used to train the machine learning models and the layered structure of the convolutional neural network (CNN) is described as follows. A public dataset containing May–Grunwald–Giemsa-stained blood smear images was downloaded from the Malaria Project of the National Library of Medicine (LHNCBC) (https://lhncbc.nlm.nih.gov/LHC-research/LHC-projects/image-processing/malaria-datasheet.html, accessed on 7 May 2024) for the retrospective analysis. A detailed overview of the workflow and CNN structure is provided in Figure 1. This dataset included 27,558 images of human blood extensions, of which half were red cells infected with the Plasmodium spp. parasite (n = 13,779) and the other half were uninfected erythrocytes (n = 13,779), as described by Kassim et al. [19]. The images were recorded using a high-resolution camera and reviewed and filed by an expert morphologist.

2.2. Training of Machine Learning (ML) Model

The images were processed using the Open Computer Vision (OpenCV, V 4.11.0) and NumPy (V 2.0.2) libraries, which enabled the morphological and textural characteristics of the blood cells to be extracted. OpenCV was used for pre-processing the images, keeping the pixel values at their original scale (0–255). Statistical characteristics were also calculated, such as the standard deviation, mean, and median of pixel intensity, together with the histogram of the image, divided into 8 bins to capture the distribution of the intensities. The textural characteristics were extracted by Discrete Wavelet Transformation (DWT).
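As a minimal sketch of this feature-extraction step (not the authors' exact code), the following function computes the intensity statistics, an 8-bin histogram, and DWT-derived texture descriptors for one image. The PyWavelets package (pywt), the Haar wavelet, and the particular per-sub-band statistics are assumptions; with this choice the vector happens to contain 23 values, matching the count reported below, although the exact descriptors used by the authors are not specified.

```python
import cv2  # OpenCV
import numpy as np
import pywt  # assumption: PyWavelets used for the Discrete Wavelet Transform


def extract_features(image_path):
    # Read the image in grayscale, keeping pixel values on the original 0-255 scale
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Intensity statistics: mean, standard deviation, and median
    features = [img.mean(), img.std(), float(np.median(img))]

    # 8-bin intensity histogram capturing the distribution of pixel values
    hist, _ = np.histogram(img, bins=8, range=(0, 256))
    features.extend(hist.tolist())

    # Single-level 2D DWT; summary statistics of each sub-band as texture descriptors
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    for band in (cA, cH, cV, cD):
        features.extend([band.mean(), band.std(), float(np.abs(band).sum())])

    return np.array(features)  # 3 + 8 + 12 = 23 characteristics per image
```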
All the data extracted from the information available in the images were organized in a structured dataset, with each row representing an image and each column a characteristic extracted. The final dataset included 23 characteristics per image and a class label (1 for parasite-infected erythrocytes and 0 for uninfected cells).
The dataset was divided by stratification to ensure that the classes (infected and uninfected erythrocytes) were balanced in the training and test sets. Eighty percent (n = 22,045) of the data were assigned to the training set and 20% (n = 5513) to the test set.
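A stratified split of this kind can be reproduced with scikit-learn; the sketch below assumes a feature matrix X and label vector y built as in the previous step (the random seed is illustrative).

```python
from sklearn.model_selection import train_test_split

# Stratify on y so infected (1) and uninfected (0) classes stay balanced
# in both the 80% training and 20% test subsets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42
)
```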
Five machine learning models were trained and evaluated: Support Vector Machine, Random Forest, Decision Tree, K-Nearest Neighbors (KNNs), and GaussianNB (GNB). The performance of each model was assessed in terms of its accuracy (A), precision (P), recall, F1 score, and area under the curve (AUC). Five-fold (K = 5) cross-validation was applied to all of the models. This method divides the training set into five subsets (folds) and iteratively trains and validates the model on each of them. Cross-validation was used to obtain a more robust estimate of the model's performance, reducing the risk of overfitting and ensuring that the results did not depend on a specific division of the data.
After cross-validation, each model was trained again using 100% of the training dataset. Finally, the performance of the models was assessed with the 20% test set, which had not been used in any previous stage, and a confusion matrix was created to analyze the performance of the various models in terms of false positives, false negatives, true positives, and true negatives.
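The cross-validation and final evaluation described above could be reproduced along the following lines; the five classifiers are instantiated with default scikit-learn settings, which may differ from the authors' exact hyperparameters.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

models = {
    "RF": RandomForestClassifier(random_state=42),
    "DT": DecisionTreeClassifier(random_state=42),
    "KNN": KNeighborsClassifier(),
    "GNB": GaussianNB(),
    "SVC": SVC(probability=True, random_state=42),
}

for name, model in models.items():
    # 5-fold cross-validation on the training set only (Table A1)
    cv = cross_validate(model, X_train, y_train, cv=5,
                        scoring=["accuracy", "precision", "recall", "f1", "roc_auc"])

    # Refit on 100% of the training data, then evaluate on the held-out 20% test set
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    y_prob = model.predict_proba(X_test)[:, 1]
    print(name,
          accuracy_score(y_test, y_pred), precision_score(y_test, y_pred),
          recall_score(y_test, y_pred), f1_score(y_test, y_pred),
          roc_auc_score(y_test, y_prob))
    print(confusion_matrix(y_test, y_pred))  # TN, FP / FN, TP
```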

2.3. Training a Convolutional Neural Network Model

TensorFlow (V 2.18.0) and Keras (V 3.8.0) were used for image pre-processing and model training. Initially, the images were normalized by scaling pixel values to the range 0 to 1, dividing each pixel value by 255, to create a more computationally efficient model; the images were resized to 224 × 224 pixels.
Data augmentation techniques were applied to improve the model’s capacity to generalize and manage variations in the images. These included the rotation of the images through different angles, horizontal and vertical displacement, and zoom to simulate different degrees of proximity. These transformations enabled the model to recognize infected and uninfected erythrocytes under varying conditions, reducing the risk of overfitting and improving robustness in real scenarios. This process was performed with the following parameters: rescale = 1./255, shear_range = 0.2, zoom_range = 0.2, and horizontal_flip = True. The dataset was stratified and divided into two parts (80% training and 20% test).
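A sketch of the augmentation pipeline using the parameters quoted above; the directory layout (train/ and test/ folders with one subfolder per class) is an assumption, not something specified in the paper.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Training generator: rescaling to [0, 1] plus the augmentation parameters reported above
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
)
# Test images are only rescaled, never augmented
test_datagen = ImageDataGenerator(rescale=1.0 / 255)

train_generator = train_datagen.flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=32, class_mode="binary"
)
test_generator = test_datagen.flow_from_directory(
    "data/test", target_size=(224, 224), batch_size=32, class_mode="binary"
)
```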
The network was based on the VGG-19 pre-trained model, described by Simonyan et al. [20], which includes convolution layers, Max Pooling and activation functions, and has produced adequate performance in medical images [21,22]. All convolutional layers in our implementation were frozen (layer.trainable = False) to preserve pre-trained weights and reduce overfitting. A custom classifier was added, consisting of a Flatten layer followed by a single Dense neuron with a sigmoid activation function, suitable for binary classification.
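The transfer-learning architecture described above can be expressed roughly as follows; this is a sketch based on the description in the text, not the authors' exact code.

```python
from tensorflow.keras.applications import VGG19
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.models import Model

# Pre-trained VGG-19 convolutional base, without the original fully connected head
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze all convolutional layers to preserve the pre-trained weights
for layer in base.layers:
    layer.trainable = False

# Custom binary classifier: Flatten followed by a single sigmoid neuron
x = Flatten()(base.output)
output = Dense(1, activation="sigmoid")(x)
model = Model(inputs=base.input, outputs=output)
```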
The model was optimized using the Adam algorithm [23] with a default learning rate of 0.001, and the loss function used was binary cross entropy. Recent studies have shown that using Adam can significantly improve precision and stability in classifying medical images [24,25]. Precision, accuracy, recall, F1 score, and AUC were used to assess the model. Furthermore, a confusion matrix was created to analyze the results in terms of false positives, false negatives, true positives, and true negatives.
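Compiling the model with the optimizer and loss stated above might look like the sketch below; the specific metric objects tracked during training are an assumption that mirrors the evaluation metrics used in the paper.

```python
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import AUC, Precision, Recall

model.compile(
    optimizer=Adam(learning_rate=0.001),  # default Adam learning rate
    loss="binary_crossentropy",
    metrics=["accuracy", Precision(), Recall(), AUC()],
)
```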
The model was trained for 50 epochs using a batch size of 32. No explicit early stopping criterion was implemented. Local Interpretable Model–agnostic Explanations (V 0.2.0.1) (LIME) was used to check that the model focused on the correct area of the erythrocyte [26]. This method allows the model’s decisions to be interpreted by identifying the regions of the image which have the greatest influence on the classification. This interpretability approach strengthens the clinical trust in model decisions and ensures that predictions are based on meaningful morphological features rather than irrelevant artifacts or background noise.
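Continuing the sketches above, training and the LIME-based inspection could proceed along these lines. The calls follow the lime package's image-explainer API; the choice of num_samples, num_features, and the wrapper that turns the sigmoid output into per-class probabilities are illustrative assumptions.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# 50 epochs, batch size 32 (batch size is set in the generators), no early stopping
model.fit(train_generator, epochs=50, validation_data=test_generator)

explainer = lime_image.LimeImageExplainer()

def predict_fn(images):
    # LIME passes batches of perturbed RGB images; return per-class probabilities
    p = model.predict(images)
    return np.hstack([1 - p, p])

image_batch, _ = next(iter(test_generator))
explanation = explainer.explain_instance(
    image_batch[0].astype("double"), predict_fn,
    top_labels=2, hide_color=0, num_samples=1000
)
label = explanation.top_labels[0]
temp, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=True
)
highlighted = mark_boundaries(temp, mask)  # region the model relied on for its decision
```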

2.4. Statistical Analysis

We compared the performance of the models in the assessment metrics used (accuracy, precision, recall, F1 score, and AUC) using the Friedman test followed by the Nemenyi post hoc test. The Friedman test is a non-parametric method for comparing multiple algorithms or models based on their performance in different sets of data or measurements [27]. It is useful when working with more than two models as it avoids the problems associated with multiple comparisons and does not assume a normal distribution of the data.
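A sketch of this comparison using SciPy for the Friedman test. Treating the five metrics as blocks and the six models as groups is our reading of the description above, and the Nemenyi post hoc step assumes the third-party scikit-posthocs package, which may not be the exact tool the authors used; the values are those reported in Table 1.

```python
import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp  # assumption: scikit-posthocs for the Nemenyi post hoc test

# Rows = metrics (A, P, R, F1, AUC); columns = models
# (VGG-19, Decision Tree, GNB, KNN, RF, SVC), values taken from Table 1
scores = np.array([
    [93.14, 92.22, 63.86, 72.99, 95.77, 74.99],  # accuracy
    [92.04, 92.08, 78.40, 72.85, 96.42, 78.02],  # precision
    [94.22, 92.64, 39.79, 74.64, 95.21, 70.64],  # recall
    [93.11, 92.36, 52.78, 73.73, 95.81, 74.15],  # F1 score
    [97.76, 92.21, 64.24, 72.97, 95.78, 75.05],  # AUC
])

# Friedman test: do the models' rankings differ significantly across the metrics?
stat, p_value = friedmanchisquare(*scores.T)
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.3f}")

# Nemenyi post hoc pairwise comparisons (each column is one model)
print(sp.posthoc_nemenyi_friedman(scores))
```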

3. Results

A total of six models (five machine learning algorithms and one CNN) were assessed. Five-fold cross-validation was applied to all of the machine learning models, showing consistent, robust performance across all of the algorithms (results detailed in Table A1). The performance of the models was then assessed using the test set (20% of the dataset), the results of which are presented in Table 1. In the case of the VGG-19 model, 5-fold cross-validation was not performed in order to reduce the computational time associated with training deep neural networks.
The machine learning model which obtained the best performance was Random Forest, with A of 95.77%; P of 96.42%; R of 95.21%; F1 score of 95.81%; and AUC of 95.78%. The results observed for the VGG-19 convolutional neural network model were A of 93.14%; P of 92.04%; R of 94.22%; F1 score of 93.11%; and AUC of 97.76%.
The confusion matrices for each model are shown in Figure 2. Random Forest presented the lowest results for false negatives and false positives. Figure 3 presents the ROC curves of each algorithm, showing the good performance of Random Forest (AUC 96%) and VGG-19 (AUC 98%). The real classification of infected and uninfected red cells, compared with the prediction by VGG-19, is presented in Figure 4, showing the performance of the model. Figure 5 presents the result of the application of LIME, indicating the area of the image evaluated to classify the red cell.
In general terms, both Random Forest and VGG-19 were able to correctly classify 95% of the cases, achieving a mean area under the curve (AUC) of 97%. In the Friedman test, no statistically significant differences were observed in the performance metrics between the models assessed.

4. Discussion

Optical microscopy is still considered the gold standard for diagnosing malaria. Although this is an accessible method for basic clinical laboratories, it depends on the observer’s expertise [8,11], which may have an impact on diagnostic capability; accurate, timely, and species-specific diagnosis of malaria is essential for successful treatment. Malaria diagnosis can be improved by the use of prediction models based on computer viewing methods. In this study, we assessed various computer viewing models for classifying erythrocytes infected with Plasmodium spp.
Among the machine learning models assessed, Decision Tree and Random Forest proved to be robust classification models, with better performance observed for Random Forest. These results are consistent with previous evidence, which has highlighted the effectiveness of these models in medical classification due to their ability to handle complex characteristics and heterogeneous data [28,29]. The precision achieved by the models indicates that they successfully select the right region of the erythrocyte to carry out the prediction.
Various studies have established neural network models for diagnosing malaria [14,15,16,17,30,31,32]; however, the present study is distinguished by employing an innovative approach through the use of the LIME method. This technique allowed us to verify the internal functioning of the trained model and provide interpretability and transparency to the decisions of the VGG-19 model. We consider this interpretability to be a valuable differential contribution to strengthen confidence in the clinical implementation of AI in malaria diagnosis, and we suggest that this type of analysis should be pursued in future research related to medical images.
Furthermore, the use of the Adam optimizer in our model significantly increased its computing efficiency. Adam combines the advantages of the AdaGrad and RMSProp algorithms, allowing for the rapid and efficient updating of the model’s parameters, even in large, varied datasets. This characteristic reduces the need for manual adjustments in the learning rate, making the model lighter and more adaptable for implementation in limited hardware systems, for example in low-complexity laboratories or those with limited resources. In comparison with techniques like stochastic gradient descent (SGD), Adam achieves faster, more stable convergence, making model training more cost-efficient.
The use of public data gave us access to a large number of images, enabling us to establish robust prediction models. A crucial aspect to consider for the application of our model is the staining method used in microscopic preparations. The dataset employed for training consisted exclusively of images of blood smears stained with May–Grunwald–Giemsa (member of the Romanowsky group of stains). Consequently, the VGG-19 model has learned to recognize morphological patterns and textural features as they appear under this specific stain. This highlights the importance of standardizing the staining method to May–Grunwald–Giemsa in clinical laboratories implementing this model to ensure its optimal performance. Despite this specificity, May–Grunwald–Giemsa staining is a standard and widely used technique in routine hematology and for malaria diagnosis in various settings, including low-complexity laboratories, due to its efficacy and relatively low cost. The study does, however, present some limitations, since the models were trained mostly on images of erythrocytes infected with annular forms of the parasite. The exclusion of advanced stages of Plasmodium spp. implies the need to increase the size of the dataset in future studies to include different phases of the parasite’s cycle.
This model can be used in low-complexity laboratories to identify the parasite in a blood smear under optical microscope, as it does not require operator expertise. Furthermore, the use of web platforms or mobile applications could help to maximize the practical implementation of this tool, as proposed by Breslauer et al. [33]. This approach would give laboratories with limited resources access to diagnosis support tools.
As part of future work, we envision evaluating other, more recent convolutional neural network architectures, such as those specifically optimized for mobile devices or systems with limited computational resources (e.g., MobileNet, EfficientNet). It would be essential to comparatively analyze the computational cost and inference efficiency of these models in low-complexity or resource-constrained settings, contrasting them with the performance and requirements of the proposed VGG-19 model. This would allow for identifying the most suitable architecture not only in terms of diagnostic accuracy but also in its practical feasibility for large-scale implementation in malaria-endemic regions.

5. Conclusions

The convolutional neural network model VGG-19 exhibits superior performance in terms of false positives and negatives, making it a potentially more reliable model in more complex clinical scenarios. Based on these results, we propose a convolutional neural network model for malaria diagnosis that can be applied in low-complexity laboratories thanks to its high predictive capacity. Additional training with images of more advanced infected stages will help to determine the different life cycle stages of the parasite and contribute to improving model efficiency.

Author Contributions

Conceptualization, E.R. and N.G.; methodology, E.R., R.B., P.L., C.M. and N.G.; software, E.R., P.Á., M.M. and M.S.; validation, E.R. and N.G.; formal analysis, E.R., P.Á., M.M. and M.S.; investigation, E.R., P.L. and N.G.; resources, E.R. and N.G.; data curation, E.R., I.C.-E., P.Á., M.M. and M.S.; writing—original draft preparation, E.R., I.C.-E., P.Á., M.M., M.S., R.B., P.L., L.S.M., V.S.M., C.M. and N.G.; writing—review and editing, E.R., I.C.-E., P.Á., M.M., M.S., R.B., P.L., L.S.M., V.S.M., C.M. and N.G.; visualization, E.R., I.C.-E., P.Á., M.M., M.S., L.S.M., V.S.M., C.M. and N.G.; supervision, E.R. and N.G.; project administration, E.R. and N.G.; funding acquisition, R.B. and N.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Vicerrectoría de Investigación y Postgrado, Universidad Católica de Temuco, grant numbers 2024GI-AH-03 and 2023FEQUIP-RB-01. Thanks are extended to the Universidad Católica de Temuco for the 2024 Postdoctoral Fellowship (Cartas-Espinel I).

Institutional Review Board Statement

This study was exempt from ethical approval as all data used were obtained from the Malaria Project of the National Library of Medicine (LHNCBC). This is a public dataset that contains thick and thin blood smear images of P. falciparum in which all images are anonymized to protect patient confidentiality and privacy. The data were retrospectively analyzed, and no experiments were conducted directly on human or animal subjects.

Informed Consent Statement

Not applicable; the data were obtained from a free, publicly available database that does not contain any personal, identifiable, or sensitive patient information.

Data Availability Statement

The dataset used in this study was obtained from the Malaria Project of the National Library of Medicine (https://lhncbc.nlm.nih.gov/LHC-research/LHC-projects/image-processing/malaria-datasheet.html, accessed on 7 May 2024). The processed data can be obtained upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Results of 5-fold cross-validation.

| Metric | Decision Tree Mean (SD) | GNB Mean (SD) | KNN Mean (SD) | RF Mean (SD) | SVC Mean (SD) |
|---|---|---|---|---|---|
| A (%) | 92.82 (0.53) | 65.55 (0.92) | 71.63 (0.34) | 95.52 (0.33) | 73.88 (0.76) |
| P (%) | 93.12 (0.17) | 77.89 (1.64) | 70.99 (0.30) | 96.28 (0.32) | 75.66 (0.79) |
| R (%) | 92.85 (1.01) | 44.00 (3.75) | 73.58 (0.79) | 94.80 (0.68) | 70.72 (0.85) |
| F1 score (%) | 92.95 (0.39) | 56.09 (2.84) | 72.26 (0.42) | 95.56 (0.35) | 73.11 (0.80) |
| AUC (%) | 92.89 (0.38) | 73.74 (0.41) | 78.14 (0.45) | 98.66 (0.17) | 86.67 (0.48) |
Abbreviations: GNB = GaussianNB; KNNs = K-Nearest Neighbors; RF = Random Forest; SVC = Support Vector Machine; SD = standard deviation; A = accuracy; P = precision; R = recall; AUC = area under the curve.

References

  1. Fikadu, M.; Ashenafi, E. Malaria: An Overview. Infect. Drug Resist. 2023, 16, 3339–3347. [Google Scholar] [CrossRef] [PubMed]
  2. World Health Organization. World Malaria Report 2023, 1st ed.; World Health Organization, Ed.; World Health Organization: Geneva, Switzerland, 2023. [Google Scholar]
  3. Bouzayene, A.; Zaffaroullah, R.; Bailly, J.; Ciceron, L.; Sarrasin, V.; Cojean, S.; Argy, N.; Houzé, S.; Joste, V.; Angoulvant, A.; et al. Evaluation of Two Commercial Kits and Two Laboratory-Developed QPCR Assays Compared to LAMP for Molecular Diagnosis of Malaria. Malar. J. 2022, 21, 204. [Google Scholar] [CrossRef]
  4. Korenromp, E.; Hamilton, M.; Sanders, R.; Mahiané, G.; Briët, O.J.T.; Smith, T.; Winfrey, W.; Walker, N.; Stover, J. Impact of Malaria Interventions on Child Mortality in Endemic African Settings: Comparison and Alignment between LiST and Spectrum-Malaria Model. BMC Public Health 2017, 17, 781. [Google Scholar] [CrossRef] [PubMed]
  5. Balaji, S.; Deshmukh, R.; Trivedi, V. Severe Malaria: Biology, Clinical Manifestation, Pathogenesis and Consequences. J. Vector Borne Dis. 2020, 57, 1–13. [Google Scholar] [CrossRef]
  6. López, A.R.; Martins, E.B.; de Pina-Costa, A.; Pacheco-Silva, A.B.; Ferreira, M.T.; Mamani, R.F.; Detepo, P.J.T.; Lupi, O.; Bressan, C.S.; Calvet, G.A.; et al. A Fatal Respiratory Complication of Malaria Caused by Plasmodium Vivax. Malar. J. 2023, 22, 303. [Google Scholar] [CrossRef]
  7. Trivedi, S.; Chakravarty, A. Neurological Complications of Malaria. Curr. Neurol. Neurosci. Rep. 2022, 22, 499–513. [Google Scholar] [CrossRef]
  8. Panwar, V.; Bansal, S.; Chauhan, C.; Sinha, A. Cost Analyses for Malaria Molecular Diagnosis for Research Planners in India and Beyond. Expert. Rev. Mol. Diagn. 2024, 24, 549–559. [Google Scholar] [CrossRef] [PubMed]
  9. Lucchi, N.W.; Karell, M.A.; Journel, I.; Rogier, E.; Goldman, I.; Ljolje, D.; Huber, C.; Mace, K.E.; Jean, S.E.; Akom, E.E.; et al. PET-PCR Method for the Molecular Detection of Malaria Parasites in a National Malaria Surveillance Study in Haiti, 2011. Malar. J. 2014, 13, 462. [Google Scholar] [CrossRef]
  10. Torres, K.; Bachman, C.M.; Delahunt, C.B.; Alarcon Baldeon, J.; Alava, F.; Gamboa Vilela, D.; Proux, S.; Mehanian, C.; McGuire, S.K.; Thompson, C.M.; et al. Automated Microscopy for Routine Malaria Diagnosis: A Field Comparison on Giemsa-Stained Blood Films in Peru. Malar. J. 2018, 17, 339. [Google Scholar] [CrossRef]
  11. Fitri, L.E.; Widaningrum, T.; Endharti, A.T.; Prabowo, M.H.; Winaris, N.; Nugraha, R.Y.B. Malaria Diagnostic Update: From Conventional to Advanced Method. J. Clin. Lab. Anal. 2022, 36, e24314. [Google Scholar] [CrossRef]
  12. Das, D.K.; Mukherjee, R.; Chakraborty, C. Computational Microscopic Imaging for Malaria Parasite Detection: A Systematic Review. J. Microsc. 2015, 260, 1–19. [Google Scholar] [CrossRef] [PubMed]
  13. Hassan, A.; Al Moaraj, A.M.H.A. The Role of Artificial Intelligence in Entrepreneurship. In Proceedings of the Artificial Intelligence for Sustainable Finance and Sustainable Technology; Musleh Al-Sartawi, A.M.A., Ed.; Springer International Publishing: Cham, Switzerland, 2022; pp. 530–542. [Google Scholar]
  14. Maturana, C.R.; de Oliveira, A.D.; Nadal, S.; Bilalli, B.; Serrat, F.Z.; Soley, M.E.; Igual, E.S.; Bosch, M.; Lluch, A.V.; Abelló, A.; et al. Advances and Challenges in Automated Malaria Diagnosis Using Digital Microscopy Imaging with Artificial Intelligence Tools: A Review. Front. Microbiol. 2022, 13, 1006659. [Google Scholar] [CrossRef]
  15. Shashinkiran, S.; Srinivas, B.N.; Sunitha, H.D.; Sandip, M.; Sadaqathulla, S.; Pramod, K.S. Plasmodium Vivax Parasite Detection Using Transfer Learning Techniques. In Proceedings of the 2024 Asia Pacific Conference on Innovation in Technology (APCIT), Mysore, India, 26–27 July 2024; IEEE: New York, NY, USA, 2024; pp. 1–6. [Google Scholar]
  16. Aluvala, S.; Bhargavi, K.; Deekshitha, J.; Suresh, B.; Rao, G.N.; Sravani, A. A Web-Based Approach for Malaria Parasite Detection Using Deep Learning in Blood Smears. In Proceedings of the 2024 2nd World Conference on Communication & Computing (WCONF), Raipur, India, 12–14 July 2024; IEEE: New York, NY, USA, 2024; pp. 1–6. [Google Scholar]
  17. Moysis, E.; Brown, B.J.; Shokunbi, W.; Manescu, P.; Fernandez-Reyes, D. Leveraging Deep Learning for Detecting Red Blood Cell Morphological Changes in Blood Films from Children with Severe Malaria Anaemia. Br. J. Haematol. 2024, 205, 699–710. [Google Scholar] [CrossRef] [PubMed]
  18. Holmes, W.R.; Ambros-Ingerson, J.; Grover, L.M. Fitting Experimental Data to Models That Use Morphological Data from Public Databases. J. Comput. Neurosci. 2006, 20, 349–365. [Google Scholar] [CrossRef] [PubMed]
  19. Kassim, Y.M.; Palaniappan, K.; Yang, F.; Poostchi, M.; Palaniappan, N.; Maude, R.J.; Antani, S.; Jaeger, S. Clustering-Based Dual Deep Learning Architecture for Detecting Red Blood Cells in Malaria Diagnostic Smears. IEEE J. Biomed. Health Inform. 2021, 25, 1735–1746. [Google Scholar] [CrossRef]
  20. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 14–16 April 2014. [Google Scholar]
  21. Mateen, M.; Wen, J.; Nasrullah; Song, S.; Huang, Z. Fundus Image Classification Using VGG-19 Architecture with PCA and SVD. Symmetry 2018, 11, 1. [Google Scholar] [CrossRef]
  22. Faghihi, A.; Fathollahi, M.; Rajabi, R. Diagnosis of Skin Cancer Using VGG16 and VGG19 Based Transfer Learning Models. Multimed. Tools Appl. 2023, 83, 57495–57510. [Google Scholar] [CrossRef]
  23. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  24. Sun, H.; Zhou, W.; Yang, J.; Shao, Y.; Xing, L.; Zhao, Q.; Zhang, L. An Improved Medical Image Classification Algorithm Based on Adam Optimizer. Mathematics 2024, 12, 2509. [Google Scholar] [CrossRef]
  25. Jaber, M.M.; Abd, S.K.; Ali, S.M. Adam Optimized Deep Learning Model for Segmenting ROI Region in Medical Imaging. In Proceedings of the International Conference on Emerging Technologies and Intelligent Systems; Al-Emran, M., Al-Sharafi, M.A., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 669–691. [Google Scholar]
  26. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar]
  27. Friedman, M. The Use of Ranks to Avoid the Assumption of Normality Implicit in the Analysis of Variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
  28. Yang, F.; Poostchi, M.; Yu, H.; Zhou, Z.; Silamut, K.; Yu, J.; Maude, R.J.; Jaeger, S.; Antani, S. Deep Learning for Smartphone-Based Malaria Parasite Detection in Thick Blood Smears. IEEE J. Biomed. Health Inform. 2020, 24, 1427–1438. [Google Scholar] [CrossRef] [PubMed]
  29. Bhuiyan, M.; Islam, M.S. A New Ensemble Learning Approach to Detect Malaria from Microscopic Red Blood Cell Images. Sens. Int. 2023, 4, 100209. [Google Scholar] [CrossRef]
  30. Hemachandran, K.; Alasiry, A.; Marzougui, M.; Ganie, S.M.; Pise, A.A.; Alouane, M.T.-H.; Chola, C. Performance Analysis of Deep Learning Algorithms in Diagnosis of Malaria Disease. Diagnostics 2023, 13, 534. [Google Scholar] [CrossRef] [PubMed]
  31. Aggarwal, G.; Goyal, M.K. VL-M2C: Leveraging Deep Learning Approach for Stage Detection of Malaria Parasites. J. Integr. Sci. Technol. 2025, 13, 1055. [Google Scholar] [CrossRef]
  32. Nagendra, S.; Hayes, R.; Bae, D.; Dodge, K. Diagnosis of Plasmodium Infections Using Artificial Intelligence Techniques versus Standard Microscopy in a Reference Laboratory. J. Clin. Microbiol. 2025, 63, e00775-24. [Google Scholar] [CrossRef] [PubMed]
  33. Breslauer, D.N.; Maamari, R.N.; Switz, N.A.; Lam, W.A.; Fletcher, D.A. Mobile Phone Based Clinical Microscopy for Global Health Applications. PLoS ONE 2009, 4, e6320. [Google Scholar] [CrossRef]
Figure 1. Flowchart for the processing and training of machine learning models.
Figure 2. Confusion matrices of machine learning and CNN algorithms.
Figure 3. ROC curves of machine learning and CNN algorithms.
Figure 4. Classification and prediction by VGG-19 of infected and uninfected red cells.
Figure 5. Result of the use of LIME to check the functioning of VGG-19. Left: segmentation of the erythrocyte on a black background highlighting the critical region identified by the model as influencing its decision most strongly. Right: the original erythrocyte. The areas colored red indicate the characteristics assigned most importance in the classification by the model. These regions contain morphological characteristics potentially associated with infection by parasites of the genus Plasmodium. In this case, the erythrocyte was classified as uninfected.
Table 1. Performance of machine learning and CNN models assessed for the diagnosis of erythrocytes infected with malaria parasites.

| Metric | VGG-19 (CNN) | Decision Tree | GNB | KNN | RF | SVC |
|---|---|---|---|---|---|---|
| A (%) | 93.14 | 92.22 | 63.86 | 72.99 | 95.77 | 74.99 |
| P (%) | 92.04 | 92.08 | 78.40 | 72.85 | 96.42 | 78.02 |
| R (%) | 94.22 | 92.64 | 39.79 | 74.64 | 95.21 | 70.64 |
| F1 score (%) | 93.11 | 92.36 | 52.78 | 73.73 | 95.81 | 74.15 |
| AUC (%) | 97.76 | 92.21 | 64.24 | 72.97 | 95.78 | 75.05 |
Abbreviations: CNN = convolutional neural network; VGG-19 = Visual Geometry Group; GNB = GaussianNB; KNN = K-Nearest Neighbors; RF = Random Forest; SVC = Support Vector Machine; A = accuracy; P = precision; R = recall; AUC = area under the curve.