Article

Malaria Cell Image Classification Using Compact Deep Learning Architectures on Jetson TX2

by Adán-Antonio Alonso-Ramírez 1, Alejandro-Israel Barranco-Gutiérrez 1, Iris-Iddaly Méndez-Gurrola 2, Marcos Gutiérrez-López 3, Juan Prado-Olivarez 1, Francisco-Javier Pérez-Pinal 1, J. Jesús Villegas-Saucillo 1, Jorge-Alberto García-Muñoz 1 and Carlos-Hugo García-Capulín 4,*
1 Departamento de Ingeniería Electrónica, Línea de Investigación Bioelectrónica, Tecnológico Nacional de México en Celaya, Celaya 38010, Mexico
2 Departamento de Diseño, Instituto de Arquitectura, Diseño y Arte, Universidad Autónoma de Ciudad Juárez, Ciudad Juárez 32310, Mexico
3 Tecnológico Nacional de México en Morelia, TecNM-Morelia, Morelia 58120, Mexico
4 Departamento de Electrónica, Universidad de Guanajuato DICIS, Salamanca 36885, Mexico
* Author to whom correspondence should be addressed.
Technologies 2024, 12(12), 247; https://doi.org/10.3390/technologies12120247
Submission received: 3 October 2024 / Revised: 15 November 2024 / Accepted: 22 November 2024 / Published: 27 November 2024

Abstract: Malaria is a significant global health issue, especially in tropical regions. Accurate and rapid diagnosis is critical for effective treatment and reducing mortality rates. Traditional diagnostic methods, like blood smear microscopy, are time-intensive and prone to error. This study introduces a deep learning approach for classifying malaria-infected cells in blood smear images using convolutional neural networks (CNNs). Six CNN models were designed, trained on a large labeled dataset of malaria cell images (both infected and uninfected), and implemented on the Jetson TX2 board for evaluation. The best model, optimized for feature extraction and classification accuracy, achieved 97.72% accuracy and was evaluated using precision, recall, F1-score, and execution time. The results indicate that deep learning significantly improves diagnostic time efficiency on embedded systems. This scalable, automated solution is particularly useful in resource-limited areas without access to expert microscopic analysis. Future work will focus on clinical validation.

1. Introduction

Malaria is an infectious disease caused by parasites of the genus Plasmodium, transmitted to people through the bite of infected mosquitoes of the genus Anopheles. According to the World Health Organization (WHO), more than 240 million cases of malaria and approximately 627,000 deaths were estimated worldwide in 2020, making it a major cause of mortality in tropical and subtropical regions [1]. Timely and accurate diagnosis of malaria is essential for effective treatment and mortality reduction [2,3].
Traditionally, malaria is diagnosed through optical microscopy: a blood sample is observed under a microscope, as shown in Figure 1, and trained expert personnel identify and quantify the presence of parasites [4]. However, this method is laborious, requires a significant level of expertise, and is subject to human error. In this context, diagnostic automation using deep learning techniques has emerged as a support tool for these experts to improve the accuracy and efficiency of malaria diagnosis [5].
Deep learning, a machine learning subdiscipline, has demonstrated outstanding performance in various image classification tasks, leveraging convolutional neural networks (CNNs) to extract highly relevant features from images [6,7,8]. These techniques have been successfully applied in the classification of various diseases through medical images, showing potential to transform clinical diagnosis [9,10,11,12].
The Jetson TX2 is an embedded board developed by NVIDIA; its features [13] give it the capability to run powerful software in a portable form factor.
In previous work [14], we proposed two deep learning architectures for this detection task, based on convolutional-recurrent neural networks: the first implements a convolutional long short-term memory (LSTM), while the second uses a convolutional bidirectional LSTM architecture. Vijayalakshmi et al. [15] propose a deep neural network model for identifying infected falciparum malaria parasites using a transfer learning approach, achieved by unifying the existing Visual Geometry Group (VGG) network and a Support Vector Machine (SVM). Their VGG19-SVM model achieves 93.1% classification accuracy in identifying infected falciparum malaria parasites in microscopic images, outperforming existing CNN models. In [16], the authors propose a simple neural network training strategy for highlighting the infected pixel regions that are mainly responsible for malaria cell classification; the results show an improvement in classification accuracy, achieving 97.2% compared with 94.49% for a baseline model.
The methods developed in our previous work [14] achieved an accuracy of 99.89% in the detection of malaria-infected red blood cells. Another proposed method is shown in Ref. [17], where deep learning combined with VGG is used to classify parasitized and uninfected blood smear cell images; their approach achieved an accuracy of 96.02%. A similar work is Ref. [18], which presents progress on the highly accurate classification of malaria-infected cells using deep convolutional neural networks. On the other hand, Ref. [19] proposes a comprehensive computer-aided diagnosis (CAD) scheme for identifying the presence of malaria parasites in thick blood smear images, achieving 89.10% detection accuracy, 93.90% sensitivity, and 83.10% specificity. Ref. [20] presents a deep learning model based on convolutional neural networks that accurately differentiates malaria-infected red blood cells; this model was 99.5% accurate in classification and exhibited sensitivity and specificity values of 100% and 91.7%, respectively. Siłka et al. [21] show a novel convolutional neural network (CNN) architecture for detecting malaria from blood samples with 99.68% accuracy, and additionally analyze model performance on different subtypes of malaria. Embedded boards like the Jetson are widely used in many fields [22], including medicine, farming, speech recognition, robotics, image processing, autonomous driving, drones, and face recognition. Ref. [23] describes the use of a CNN for face recognition on a Jetson TX2, obtaining recognition results in an average time of 0.3 s with a minimum recognition rate above 83.67%. In another use case, Ref. [24] presents a convolutional neural network that robustly estimates the center of a gate so a drone can pass through it in autonomous drone racing. Ref. [25] describes benchmarking experiments that reveal the rules governing the GPU inside the Jetson TX2 board, characterizing features such as block resource requirements, kernel durations, and copy operations.
In medicine specifically, Ref. [26] studies chest pain and fall-posture-based vital sign detection using an intelligent surveillance camera to address emergencies during myocardial infarction. They use embedded convolutional neural networks (single-shot detector Inception V2 and single-shot detector MobileNet V2) on an NVIDIA Jetson Nano board, obtaining an accuracy of 76.4% and an average recall of 80%.
Ref. [27] focuses on the use of the deep learning model VGG19, achieving 97% accuracy on Jetson Nano and Jetson TX2 boards while classifying COVID-19 from computed tomography scans of the lungs. In Ref. [28], the authors use convolutional neural network models such as AlexNet and GoogleNet to classify benign and malignant moles on a Jetson TX2 board, with accuracy rates of up to 74%.
In Ref. [29], traffic flow is detected with an average processing speed of 37.9 FPS (frames per second) and an accuracy of 92%, using a vehicle detection algorithm based on YOLOv3 (You Only Look Once) on a Jetson TX2.
In Ref. [30], a benchmark analysis of 3D object detection is presented using Jetson boards such as the Nano, TX2, AGX, and NX. They explore the use of the TensorRT library to optimize a deep learning model for faster inference and lower resource utilization, reporting that, on average, each of the mentioned boards consumes 80% of its GPU resources.
A study on sugar beet seed classification is presented in Ref. [31]. It uses YOLOv4 and YOLOv4-tiny on Jetson Nano and TX2 boards; the reported accuracy is in the range of 81–99% for monogerm seeds and 89–99% for multigerm seeds on the Jetson Nano, and 88–99% for monogerm seeds and 90–99% for multigerm seeds on the Jetson TX2.
Finally, Ref. [32] presents a CNN proposal whose statistically validated results demonstrate that pre-trained CNNs are a promising feature extraction tool for malaria parasite detection.
In this paper, we present a deep-learning-based approach for malaria cell image classification. We used a convolutional neural network to differentiate between infected and uninfected cells, evaluating the performance of the model in terms of accuracy, sensitivity, and specificity. Furthermore, we compare the results obtained with previous work published in IEEE Access, which used large, heavy deep learning recognition systems [14], against the new approaches adapted to the Jetson TX2 board. We also discuss the clinical implications of our research.
Our goal is to provide an automated portable tool that can assist healthcare professionals in malaria diagnosis, improving accuracy and reducing the time required for sample analysis. We also aim to host and execute this design in integrated systems such as FPGAs and/or microcomputers. Through this research, we seek to contribute to the global effort to control and eventually eradicate malaria.

2. Materials and Methods

2.1. Dataset and Hardware

For this study, we used a malaria cell imaging dataset obtained from the National Library of Medicine and the Lister Hill National Center for Biomedical Communications, one of the most widely used databases in this type of analysis. It contains sets of images of blood samples from people probably infected with malaria, analyzed under a microscope, as shown in Table 1. The dataset comprises 27,560 color images of 96 × 96 pixels of Giemsa-stained blood samples obtained from 193 patients, distributed evenly between parasitized and uninfected RBCs. The research related to the data was approved by the Institutional Review Board of the Office of Human Subjects Research (OHSR) (protocol number 12972, approval date 25 June 2015) [33]. The model was implemented using the TensorFlow and Keras frameworks [34], running on Windows 10 Pro on a PC equipped with an Intel(R) Core(TM) i9-10900X CPU @ 3.70 GHz, manufactured in Dalian, Liaoning, China. We then adapted the code to run on the Jetson TX2 board, which has a 256-core NVIDIA Pascal GPU and a six-core ARM CPU complex (dual-core Denver 2 plus quad-core Cortex-A57), along with 8 GB of RAM [13].

2.2. Convolutional Neural Network Architecture

The images in the dataset were preprocessed to ensure the consistency and quality necessary for training the deep learning model. Preprocessing stages included:
  • Resizing: all images were resized to 64 × 64 pixels to reduce the computational load and ensure uniform input to the model.
  • Grayscale: to speed up processing and reduce the amount of data, the images were converted to grayscale.
  • Normalization: the pixel values of the images (0–255) were normalized to the range [0, 1].
This section details a compact and efficient convolutional neural network (CNN) architecture designed specifically for malaria cell image classification. As shown in Figure 2, the model architecture includes the following layers:
  • Input: an input layer for 64 × 64 × 1 images (width, height, and one gray channel).
  • Convolutional: six architectures were tested. The first three use two convolutional layers with 3 × 3 filters, with the number of filters set to 32, 48, and 64, respectively; the other three use three convolutional layers with the same filter variation (32, 48, and 64). Each convolutional layer is followed by a ReLU activation layer and a 2 × 2 max-pooling layer.
  • Dense: the first three architectures use one fully connected layer whose size matches the filter count (32, 48, or 64); the last three use two fully connected layers, the first with 128 units and the second with 32, 48, or 64 units, respectively.
  • Output: an output layer with a single unit and sigmoid activation for binary (parasitized/uninfected) classification.
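As an illustration, the preprocessing steps and the six architectures described above can be sketched in Keras as follows. This is our reconstruction for clarity, not the authors' published code (which is available in Ref. [35]); `preprocess` and `build_model` are hypothetical helper names.

```python
import tensorflow as tf

def preprocess(batch):
    """Grayscale conversion, 64 x 64 resizing, and [0, 1] normalization."""
    x = tf.image.rgb_to_grayscale(tf.cast(batch, tf.float32))
    x = tf.image.resize(x, (64, 64))
    return x / 255.0

def build_model(filters=32, depth=2):
    """One of the six tested architectures: filters in {32, 48, 64}, depth in {2, 3}."""
    layers = [tf.keras.Input(shape=(64, 64, 1))]
    for _ in range(depth):
        # 3 x 3 convolution with ReLU, followed by 2 x 2 max pooling.
        layers.append(tf.keras.layers.Conv2D(filters, (3, 3), activation="relu"))
        layers.append(tf.keras.layers.MaxPooling2D((2, 2)))
    layers.append(tf.keras.layers.Flatten())
    if depth == 3:
        # The three-block architectures add a 128-unit dense layer first.
        layers.append(tf.keras.layers.Dense(128, activation="relu"))
    layers.append(tf.keras.layers.Dense(filters, activation="relu"))
    # Single sigmoid unit for binary (parasitized/uninfected) classification.
    layers.append(tf.keras.layers.Dense(1, activation="sigmoid"))
    return tf.keras.Sequential(layers)
```

For example, `build_model(64, 3)` corresponds to the 64 × 64 × 64 configuration discussed later.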

2.3. Model Training

The model was trained using the preprocessed dataset with the following settings:
  • Loss function: binary cross-entropy.
  • Optimizer: Adam, with an initial learning rate of 0.001.
  • Evaluation metrics: accuracy, specificity, recall, precision, and F1-score.
  • Data split: the dataset was split into 80% for training and 20% for validation.
  • Epochs: the model was trained for 50 epochs with a batch size of 32.
Evaluation and validation: model performance was evaluated using a separate test dataset not used during training. Performance metrics included overall accuracy, specificity, recall, precision, and F1-score. In addition, confusion matrices were generated to analyze false positives and false negatives. The code is available in the repository of Ref. [35].
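The training settings listed above can be sketched with standard Keras APIs as follows; `train` is a hypothetical helper of our own, not the authors' code.

```python
import tensorflow as tf

def train(model, images, labels, epochs=50, batch_size=32):
    """Compile and fit with the settings listed above."""
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # Adam, lr = 0.001
        loss="binary_crossentropy",                               # binary cross-entropy
        metrics=["accuracy",
                 tf.keras.metrics.Precision(),
                 tf.keras.metrics.Recall()],
    )
    n_train = int(0.8 * len(images))  # 80% training / 20% validation split
    return model.fit(
        images[:n_train], labels[:n_train],
        validation_data=(images[n_train:], labels[n_train:]),
        epochs=epochs, batch_size=batch_size, verbose=0,
    )
```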

3. Results

The convolutional neural network (CNN) models designed and trained in this study demonstrated remarkable performance in classifying malaria cell images. Below, in Figure 3, are detailed results of key evaluation metrics obtained during testing:
In binary classification, the following metrics are commonly used to evaluate the performance of a model: accuracy, precision, sensitivity, specificity, and F1-score (see Appendix A).
The cross-validation averages of the precision and loss curves during training and validation are presented in Figure 4. The plots are organized by filter quantity. The curves indicate stable convergence and good generalization of the model, without significant signs of overfitting. The smallest architecture is remarkable: its average cross-validated metrics are accuracy 93.11%, specificity 94.59%, recall 91.63%, precision 94.42%, and F1-score 93.01%, while the largest achieves accuracy 94.28%, specificity 95.45%, recall 93.11%, precision 95.34%, and F1-score 94.21%. Figure 3 illustrates the behavior of each architecture during the validation stage.
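The K-fold cross-validation scheme behind these averages (K = 5, per Figure 7) can be reproduced with a simple index split. This is a generic sketch with a helper name of our own (`kfold_indices`), not the authors' implementation:

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) index pairs for K-fold cross-validation."""
    rng = np.random.default_rng(seed)
    # Shuffle once, then partition into k disjoint folds.
    folds = np.array_split(rng.permutation(n_samples), k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

The per-fold metrics are then averaged to obtain figures like those reported above.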
According to the properties of the models shown in Figure 5 and Figure 6, it is worth noting that the time required to execute the network for the smallest architecture is 1131.77 s and the model weighs 1.52 MB. The confusion matrices are shown in Figure 7. Relevant information obtained using the Jetson TX2 as the device to load the model and classify the images in the dataset is shown in Table 2 and Table 3.
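The per-sample classification times of the kind reported in Table 3 can be measured with a sketch like the following (an illustrative helper of our own, assuming an already loaded Keras model; not the authors' benchmarking code):

```python
import time
import numpy as np
import tensorflow as tf

def mean_inference_time(model, images, warmup=3):
    """Average per-image classification time, excluding warm-up runs."""
    for _ in range(warmup):
        model.predict(images[:1], verbose=0)  # warm up kernels/allocations
    start = time.perf_counter()
    probs = model.predict(images, verbose=0)
    elapsed = time.perf_counter() - start
    labels = (probs.ravel() > 0.5).astype(int)  # threshold the sigmoid output
    return elapsed / len(images), labels
```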

4. Discussion

The obtained results indicate that a small and efficient CNN architecture can be used effectively for malaria cell image classification, with a feed-forward execution speed 33.98 times faster than the designs published in 2022 [14]. Given the model sizes shown in Figure 6, the approach has the potential to be implemented on portable devices for use in resource-limited areas (see Table 4). A comparison with work on another disease is instructive: Ref. [36] proposes a hybrid CNN architecture combining InceptionV3, ResNet-50, VGG16, and DenseNet to classify brain tumors, reporting accuracies from 71.54% to 95.5%, runtimes from 3.2 to 5.6 min, and memory utilization from 2.7 to 4.8 GB. It would also be a worthwhile challenge to compare against the results of Ref. [37], which reports 94.82% accuracy, a 97.34 F1-score, 96.74 precision, 97.10 sensitivity, and 84.75 specificity for lung nodule classification, with a model occupying 2.61 MB.

5. Conclusions

In this study, we have developed and evaluated a compact and efficient convolutional neural network (CNN) architecture for malaria cell image classification and compared it with 49 different CNN architectures. Our results demonstrate that the proposed 64 × 64 × 64 architecture achieves high accuracy (97.72%), sensitivity (93.4%), specificity (95.1%), and F1-score (94.2%), with a significant reduction in computational processing and execution time compared to the work we published in 2022. In Figure 6, we observed that appending another convolutional layer and its corresponding max-pooling layer reduces the dimensions of the weight matrices, yielding a lighter model. The use of a compact CNN architecture not only optimizes the computational load but also facilitates implementation on portable or embedded devices, which is crucial for application in environments with limited resources. The computational efficiency of the model, with an inference time of 0.0038254 s and 2.65 megabytes of model weights in memory, underlines its potential to provide fast and accurate diagnoses in real time. Notably, the new proposal is 33.98 times faster than the previous one.
These findings highlight the feasibility and effectiveness of deep learning techniques in the field of automated diagnosis of infectious diseases using embedded boards such as Jetson TX2. Implementation of our model in clinical settings could improve the speed and accuracy of malaria diagnosis, thereby reducing the workload of healthcare professionals and improving outcomes for patients. Future work will focus on clinical validation of the model on various hardware configurations and in different geographic environments. Additionally, the integration of our approach with other diagnostic methods will be explored to create a comprehensive malaria detection platform.
In conclusion, image classification of malaria cells using a small and efficient deep learning architecture in embedded systems represents a significant advance in the fight against malaria, offering a promising tool to improve diagnosis and ultimately contribute to the reduction in the mortality associated with this disease.

Author Contributions

Conceptualization, A.-A.A.-R. and A.-I.B.-G.; methodology, I.-I.M.-G.; software, A.-A.A.-R. and M.G.-L.; validation, J.P.-O.; formal analysis, F.-J.P.-P.; investigation, J.J.V.-S.; resources, J.-A.G.-M.; data curation, C.-H.G.-C.; writing—original draft preparation, A.-A.A.-R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by CONAHCyT, TecNM Celaya, and Universidad de Guanajuato grant number Becas Nacionales 725022.

Data Availability Statement

No new data were generated in this research; the code created to obtain the results is available at: https://github.com/adanantonio07A/MalariaClassification_JetsonTX2, accessed on 27 September 2024.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Accuracy is the ratio of correctly predicted observations to the total observations.
Accuracy = (TP + TN) / (TP + TN + FP + FN)
where
  • TP = True Positives;
  • TN = True Negatives;
  • FP = False Positives;
  • FN = False Negatives.
Precision (also called positive predictive value) is the ratio of correctly predicted positive observations to the total predicted positives.
Precision = TP / (TP + FP)
Sensitivity, also known as recall or true positive rate, is the ratio of correctly predicted positive observations to all observations in the actual class.
Sensitivity (Recall) = TP / (TP + FN)
Specificity, also called the true negative rate, measures the proportion of correctly identified negatives out of the actual negatives.
Specificity = TN / (TN + FP)
The F1-score is the harmonic mean of precision and recall, providing a balance between the two.
F1 = (2 × Precision × Recall) / (Precision + Recall)
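For concreteness, the five metrics above can be computed directly from confusion-matrix counts; `metrics_from_counts` is an illustrative helper name of our own.

```python
def metrics_from_counts(tp, tn, fp, fn):
    """Evaluation metrics from confusion-matrix counts, as defined above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)                    # positive predictive value
    recall = tp / (tp + fn)                       # sensitivity / true positive rate
    specificity = tn / (tn + fp)                  # true negative rate
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1}
```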

References

  1. World Health Organization. Fact Sheet About Malaria. Available online: https://www.who.int/news-room/fact-sheets/detail/malaria (accessed on 19 September 2024).
  2. Landier, J.; Parker, D.M.; Thu, A.M.; Lwin, K.M.; Delmas, G.; Nosten, F.H. The role of early detection and treatment in malaria elimination. Malar. J. 2016, 15, 363. [Google Scholar] [CrossRef] [PubMed]
  3. Gonçalves, D.; Hunziker, P. Transmission-blocking strategies: The roadmap from laboratory bench to the community. Malar. J. 2016, 15, 95. [Google Scholar] [CrossRef] [PubMed]
  4. Shahbodaghi, S.; Rathjen, N. Malaria: Prevention, Diagnosis, and Treatment. Am. Fam. Physician 2022, 106, 270–278. [Google Scholar] [PubMed]
  5. Chima, J.S.; Shah, A.; Shah, K.; Ramesh, R. Malaria Cell Image Classification using Deep Learning. Int. J. Recent Technol. Eng. 2020, 8, 5553–5559. [Google Scholar] [CrossRef]
  6. Cai, Z.; Ma, C.; Li, J.; Liu, C. Hybrid Amplitude Ordinal Partition Networks for ECG Morphology Discrimination: An Application to PVC Recognition. IEEE Trans. Instrum. Meas 2024, 73, 4008113. [Google Scholar] [CrossRef]
  7. Ibrahim, E.; Zaghden, N.; Mejdoub, M. Semantic Analysis System to Recognize Moving Objects by Using a Deep Learning Model. IEEE Access 2024, 12, 80740–80753. [Google Scholar] [CrossRef]
  8. Malu, G.; Uday, N.; Sherly, E.; Abraham, A.; Bodhey, N.K. CirMNet: A Shape-based Hybrid Feature Extraction Technique using CNN and CMSMD for Alzheimer’s MRI Classification. IEEE Access 2024, 12, 80491–80504. [Google Scholar] [CrossRef]
  9. Tseng, C.H.; Chien, S.J.; Wang, P.S.; Lee, S.J.; Pu, B.; Zeng, X.J. Real-time Automatic M-mode Echocardiography Measurement with Panel Attention. IEEE J. Biomed. Health Inform. 2024, 28, 5383–5395. [Google Scholar] [CrossRef]
  10. Salah, S.; Chouchene, M.; Sayadi, F. FPGA implementation of a Convolutional Neural Network for Alzheimer’s disease classification. In Proceedings of the 2024 21st International Multi-Conference on Systems, Signals & Devices (SSD), Erbil, Iraq, 22–25 April 2024; pp. 193–198. [Google Scholar] [CrossRef]
  11. Gondkar, R.R.; Gondkar, S.R.; Kavitha, S.; RV, S.B. Hybrid Deep Learning Based GRU Model for Classifying the Lung Cancer from CT Scan Images. In Proceedings of the 2024 Third International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE), Ballari, India, 26–27 April 2024; pp. 1–8. [Google Scholar] [CrossRef]
  12. Preetha, R.; Priyadarsini, M.J.P.; Nisha, J.S. Automated Brain Tumor Detection from Magnetic Resonance Images Using Fine-Tuned EfficientNet-B4 Convolutional Neural Network. IEEE Access 2024, 12, 112181–112195. [Google Scholar] [CrossRef]
  13. NVIDIA. NVIDIA Jetson TX2: High Performance AI at the Edge. Available online: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-tx2/ (accessed on 26 November 2024).
  14. Alonso-Ramírez, A.A.; Mwata-Velu, T.; García-Capulín, C.H.; Rostro-González, H.; Prado-Olivarez, J.; Gutiérrez-López, M.; Barranco-Gutiérrez, A.I. Classifying Parasitized and Uninfected Malaria Red Blood Cells Using Convolutional-Recurrent Neural Networks. IEEE Access 2022, 10, 97348–97359. [Google Scholar] [CrossRef]
  15. Arunagiri, V.; Rajesh, B. Deep Learning Approach to Detect Malaria from Microscopic Images. Multimed. Tools Appl. 2020, 79, 15297–15317. [Google Scholar] [CrossRef]
  16. Yebasse, M.; Cheoi, K.; Ko, J. Malaria Disease Cell Classification with Highlighting Small Infected Regions. IEEE Access 2023, 11, 15945–15953. [Google Scholar] [CrossRef]
  17. Suraksha, S.; Santhosh, C.; Vishwa, B. Classification of Malaria Cell Images Using Deep Learning Approach. In Proceedings of the 2023 Third International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT), Bhilai, India, 5–6 January 2023; pp. 1–5. [Google Scholar] [CrossRef]
  18. Pan, W.D.; Dong, Y.; Wu, D. Classification of Malaria-Infected Cells Using Deep Convolutional Neural Networks. In Machine Learning; Farhadi, H., Ed.; IntechOpen: Rijeka, Croatia, 2018; Chapter 8. [Google Scholar] [CrossRef]
  19. Pattanaik, P.; Mittal, M.; Khan, M. Unsupervised Deep Learning CAD Scheme for the Detection of Malaria in Blood Smear Microscopic Images. IEEE Access 2020, 8, 94936–94946. [Google Scholar] [CrossRef]
  20. Molina-Borrás, A.; Rojas, C.; del Río, J.; Bermejo, J.; Gutiérrez, J. Automatic Identification of Malaria and Other Red Blood Cell Inclusions Using Convolutional Neural Networks. Comput. Biol. Med. 2021, 136, 104680. [Google Scholar] [CrossRef]
  21. Siłka, W.; Sobczak, J.; Duda, J.; Wieczorek, M. Malaria Detection Using Advanced Deep Learning Architecture. Sensors 2023, 23, 1501. [Google Scholar] [CrossRef]
  22. Mittal, S. A Survey on Optimized Implementation of Deep Learning Models on the NVIDIA Jetson Platform. J. Syst. Archit. 2019, 97, 428–442. [Google Scholar] [CrossRef]
  23. Saypadith, S.; Aramvith, S. Real-Time Multiple Face Recognition using Deep Learning on Embedded GPU System. In Proceedings of the APSIPA Annual Summit and Conference, Honolulu, HI, USA, 12–15 November 2018; pp. 1318–1324. [Google Scholar] [CrossRef]
  24. Jung, S.; Kim, Y.; Lee, H.; Jang, J.; Hwang, J. Perception, Guidance, and Navigation for Indoor Autonomous Drone Racing Using Deep Learning. IEEE Robot. Autom. Lett. 2018, 3, 2539–2544. [Google Scholar] [CrossRef]
  25. Amert, T.; Otterness, N.; Yang, M.; Anderson, J.H.; Smith, F.D. GPU Scheduling on the NVIDIA TX2: Hidden Details Revealed. In Proceedings of the 2017 IEEE Real-Time Systems Symposium (RTSS), Paris, France, 5–8 December 2017; pp. 104–115. [Google Scholar] [CrossRef]
  26. Mohan, H.M.; Singh, D.; Sadiq, M.; Dey, P.; Maji, S.; Pati, S.K. Edge Artificial Intelligence: Real-Time Noninvasive Technique for Vital Signs of Myocardial Infarction Recognition Using Jetson Nano. Adv. Hum.-Comput. Interact. 2021, 2021, 6483003. [Google Scholar] [CrossRef]
  27. Lou, L.; Liang, H.; Wang, Z. Deep-Learning-Based COVID-19 Diagnosis and Implementation in Embedded Edge-Computing Device. Diagnostics 2023, 13, 1329. [Google Scholar] [CrossRef]
  28. Shihadeh, J.; Ansari, A.; Ozunfunmi, T. Deep Learning Based Image Classification for Remote Medical Diagnosis. In Proceedings of the 2018 IEEE Global Humanitarian Technology Conference (GHTC), San Jose, CA, USA, 18–21 October 2018; pp. 1–8. [Google Scholar] [CrossRef]
  29. Liu, B.; Chen, C.; Wan, S.; Qiao, P.; Pei, Q. An Edge Traffic Flow Detection Scheme Based on Deep Learning in an Intelligent Transportation System. IEEE Trans. Intell. Transp. Syst. 2020, 22, 1840–1852. [Google Scholar] [CrossRef]
  30. Choe, C.; Choe, M.; Jung, S. Run Your 3D Object Detector on NVIDIA Jetson Platforms: A Benchmark Analysis. Sensors 2023, 23, 4005. [Google Scholar] [CrossRef] [PubMed]
  31. Beyaz, A.; Saripinar, Z. Sugar Beet Seed Classification for Production Quality Improvement by Using YOLO and NVIDIA Artificial Intelligence Boards. Sugar Tech 2024. [Google Scholar] [CrossRef]
  32. Rajaraman, S.; Antani, S.K.; Poostchi, M.; Silamut, K.; Hossain, M.A.; Maude, R.J.; Jaeger, S.; Thoma, G.R. Pre-trained convolutional neural networks as feature extractors toward improved malaria parasite detection in thin blood smear images. PeerJ 2018, 6, e4568. [Google Scholar] [CrossRef] [PubMed]
  33. U.S. National Library of Medicine. Malaria Datasheet. Available online: https://lhncbc.nlm.nih.gov/LHC-research/LHC-projects/image-processing/malaria-datasheet.html (accessed on 27 September 2024).
  34. Chollet, F. Deep Learning with Python, 1st ed.; Manning Publications Co.: Shelter Island, NY, USA, 2017; ISBN 1617294438. [Google Scholar]
  35. Alonso-Ramírez, A.-A.; Barranco-Gutiérrez, A.-I.; Méndez-Gurrola, I.-I.; Gutiérrez-López, M.; Prado-Olivarez, J.; Pérez-Pinal, F.-J.; Villegas-Saucillo, J.J.; García-Muñoz, J.-A.; García-Capulín, C.-H. MalariaClassification_JetsonTX2. Available online: https://github.com/adanantonio07A/MalariaClassification_JetsonTX2 (accessed on 26 November 2024).
  36. Ramakrishnan, A.B.; Sridevi, M.; Vasudevan, S.K.; Manikandan, R.; Gandomi, A.H. Optimizing brain tumor classification with hybrid CNN architecture: Balancing accuracy and efficiency through oneAPI optimization. Inform. Med. Unlocked 2024, 44, 101436. [Google Scholar] [CrossRef]
  37. Lv, E.; Kang, X.; Wen, P.; Tian, J.; Zhang, M. A novel benign and malignant classification model for lung nodules based on multi-scale interleaved fusion integrated network. Sci. Rep. 2024, 14, 27506. [Google Scholar] [CrossRef]
Figure 1. Malaria diagnosis process using images of a patient’s blood sample.
Figure 2. Architectures used in the experiments (32 × 32, 32 × 32 × 32, 48 × 48, 48 × 48 × 48, 64 × 64, and 64 × 64 × 64).
Figure 3. Training and Validation Accuracy by architectures through the epochs.
Figure 4. Average of cross-validation results.
Figure 5. Execution time by architectures.
Figure 6. Model weight by architectures.
Figure 7. Confusion matrix, by each K of cross-validation, using K = 5.
Table 1. Image examples from the malaria database and their preprocessing steps.
Parasitized
Uninfected
PR: Parasitized (resized to 64 × 64)
PU: Uninfected (resized to 64 × 64)
PR (grayscale)
PU (grayscale)
Table 2. Metrics obtained through the execution of classification on a complete dataset. Performance obtained using the model for classification in Jetson TX2.
Model           K-Fold   Accuracy   Specificity   Recall   Precision   F1-Score
32 × 32         1        97.27      98.64         95.98    98.68       97.31
32 × 32         2        97.32      98.73         95.99    98.77       97.36
32 × 32         3        97.44      98.72         96.22    98.75       97.47
32 × 32         4        97.11      98.89         95.46    98.93       97.16
32 × 32         5        97.28      98.84         95.82    98.88       97.32
32 × 32 × 32    1        97.12      99.03         95.36    99.06       97.18
32 × 32 × 32    2        97.64      98.97         96.39    99.00       97.68
32 × 32 × 32    3        97.71      99.04         96.46    99.06       97.74
32 × 32 × 32    4        97.59      98.52         96.70    98.55       97.61
32 × 32 × 32    5        97.70      98.93         96.53    98.95       97.73
48 × 48         1        97.27      98.83         95.80    98.87       97.31
48 × 48         2        97.23      98.79         95.77    98.83       97.27
48 × 48         3        97.46      98.73         96.26    98.77       97.50
48 × 48         4        97.19      98.83         95.65    98.87       97.23
48 × 48         5        97.05      98.88         95.35    98.93       97.10
48 × 48 × 48    1        96.67      98.66         94.83    98.72       96.73
48 × 48 × 48    2        97.48      98.97         96.08    99.00       97.52
48 × 48 × 48    3        97.80      99.00         96.66    99.02       97.83
48 × 48 × 48    4        97.67      99.04         96.37    99.06       97.70
48 × 48 × 48    5        97.75      98.87         96.67    98.90       97.77
64 × 64         1        97.26      98.86         95.77    98.90       97.31
64 × 64         2        97.25      98.79         95.80    98.82       97.29
64 × 64         3        97.40      98.68         96.19    98.72       97.44
64 × 64         4        97.50      98.89         96.19    98.92       97.53
64 × 64         5        97.32      98.82         95.91    98.86       97.36
64 × 64 × 64    1        97.67      99.07         96.36    99.09       97.71
64 × 64 × 64    2        97.66      98.93         96.46    98.96       97.69
64 × 64 × 64    3        97.88      99.06         96.75    99.08       97.90
64 × 64 × 64    4        97.72      99.05         96.47    99.07       97.76
64 × 64 × 64    5        97.67      98.96         96.44    98.99       97.70
Table 3. Average accuracy and time of execution per sample through images of the complete dataset.
Model           Accuracy   Classification Execution (s)
32 × 32         97.28      0.0014876
48 × 48         97.55      0.0015972
64 × 64         97.24      0.0023
32 × 32 × 32    97.47      0.0025032
48 × 48 × 48    97.35      0.0034522
64 × 64 × 64    97.72      0.0038254
Table 4. Comparative results with previous work, using images of 64 × 64 pixels.
Reference                                                       Accuracy   Lowest Classification Execution Time
Alonso-Ramirez A. A. et al. (2022) Ref. [14] first approach     99.89%     0.125 s
Alonso-Ramirez A. A. et al. (2022) Ref. [14] second approach    99.89%     0.130 s
Alonso-Ramirez A. A. et al. (2024) minimal architecture         97.28%     0.0014876 s
Alonso-Ramirez A. A. et al. (2024) maximum architecture         97.72%     0.0038254 s
