Article

YOLOv5-FPN: A Robust Framework for Multi-Sized Cell Counting in Fluorescence Images

1 Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
2 School of Computer Science (SCS), Taylor’s University, Subang Jaya 47500, Malaysia
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(13), 2280; https://doi.org/10.3390/diagnostics13132280
Submission received: 9 May 2023 / Revised: 2 June 2023 / Accepted: 11 June 2023 / Published: 5 July 2023
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

Cell counting in fluorescence microscopy is an essential task in biomedical research for analyzing cellular dynamics and studying disease progression. Traditional methods for cell counting involve manual counting or threshold-based segmentation, which are time-consuming and prone to human error. Recently, deep learning-based object detection methods have shown promising results in automating cell counting tasks. However, existing methods mainly focus on segmentation-based techniques that require a large amount of labeled data and extensive computational resources. In this paper, we propose a novel approach to detect and count multiple-sized cells in a fluorescence image slide using You Only Look Once version 5 (YOLOv5) with a feature pyramid network (FPN). Our proposed method can efficiently detect multiple cells of different sizes in a single image, eliminating the need for pixel-level segmentation. We show that our method outperforms state-of-the-art segmentation-based approaches in terms of accuracy and computational efficiency. The experimental results on publicly available datasets demonstrate that our proposed approach achieves an average precision of 0.8 and a processing time of 43.9 ms per image. Our approach addresses a research gap in the literature by providing a more efficient and accurate method for cell counting in fluorescence microscopy that requires fewer computational resources and less labeled data.

1. Introduction

Cell counting is a fundamental task in biological research and clinical practice. It is an essential step for various applications, such as cell culture [1,2,3], drug discovery [4], disease diagnosis [5,6,7], and treatment monitoring [8,9]. Accurate cell counting is crucial for understanding cellular behavior, identifying cellular abnormalities, and evaluating the efficacy of interventions [10]. Traditionally, cell counting is performed manually by trained personnel using a microscope and a counting chamber, which is time-consuming, labor-intensive, and prone to errors [11]. Therefore, there is a growing need for robust and automated cell counting methods [12,13].
Several traditional methods have been developed for cell counting, such as manual counting [14,15], hemocytometry [16,17], and flow cytometry [18]. Manual counting is the most straightforward method, but it is subject to inter-observer variability, low throughput, and poor reproducibility [19]. Hemocytometry is more accurate and reliable, but it is still labor-intensive and requires skilled operators [15,20]. Flow cytometry is a high-throughput and automated method, but it requires expensive equipment and specialized training [21]. Moreover, these methods are often limited in their ability to distinguish different cell types and sizes, especially in complex biological samples.
Fluorescence cell counting is a more advanced method that uses fluorescent dyes to label specific cell populations and quantify them with the help of fluorescence microscopy [22,23]. This method allows for higher accuracy and specificity, especially in complex biological samples, but it requires specialized equipment and expertise. Figure 1, adapted from [24], shows a fluorescence microscopy image of A549 human lung adenocarcinoma cells after incubation. Fluorescence microscopy enables the segmentation of individual cells and the removal of artifacts such as cellular debris by highlighting the contour of objects based on their fluorescent signal. The image demonstrates how fluorescence-based cell counting methods can be used to visualize and examine particular cell populations. Fluorescence cell counting has gained popularity in biological research due to the expanding availability of fluorescent dyes and imaging technologies [25].
In summary, conventional cell counting techniques have a number of drawbacks and difficulties, including poor accuracy, low throughput, high cost, and subjectivity. Additionally, they might not be appropriate for complex samples, such as tissues, cell aggregates, or mixed cell populations [26]. Therefore, more accurate and automated cell counting methods utilizing a fluorescence mechanism are becoming increasingly popular.
More reliable and automated cell counting techniques have emerged as a result of recent developments in computer vision [27] and machine learning [28]. These techniques leverage deep learning algorithms to detect and count cells in fluorescence microscopy images. Deep learning-based counting approaches are more precise and reproducible than conventional approaches because they can handle a wide range of objects with varying types, sizes, and complex materials and textures [29]. However, most automated cell counting methods in the literature rely on segmentation-based strategies, which require extensive training, tuning, and parameter optimization [30,31,32,33]. These techniques use image processing algorithms such as edge detection, thresholding, morphological operations, and watershed segmentation to separate cells from the background and from each other [34,35,36,37,38]. While these techniques have shown promising results, they are often computationally intensive and require skilled personnel for optimization and validation. More advanced methods use machine learning algorithms such as CNNs to accurately identify cells. One of the latest such algorithms is YOLO (You Only Look Once), which uses a single neural network to detect and classify objects in real time [39,40,41]. Although YOLO has been widely used in object detection and counting tasks [42,43,44,45,46,47], its application to automated cell counting is still being explored, and there is potential for further research in this area. Using a single neural network, YOLO predicts the bounding boxes and class probabilities of objects in an image: it divides the input image into a grid and predicts bounding boxes and class probabilities for each grid cell. This makes it well suited to cell counting, where the objective is to identify and count multiple cells in a given fluorescence image. Moreover, feature pyramid networks have been shown to improve the object detection accuracy of YOLO by leveraging feature maps at different resolutions [48].
In this study, we propose to use YOLOv5 with a feature pyramid network to detect and count multiple-sized cells in fluorescence microscopy images. An FPN is a popular architecture for object detection that utilizes feature maps of different resolutions to improve the accuracy of object detection. We believe that the combination of YOLOv5 and an FPN can enhance the detection and counting of cells of different sizes in a single image.
Our main contributions are highlighted as follows:
  • We utilized YOLOv5, a state-of-the-art object detection algorithm, for cell counting in fluorescence microscopy images.
  • We employed the FPN as a feature extractor to handle cells of different sizes in the images.
  • We annotated the cell images with bounding boxes using a labeling tool for training the YOLOv5 model.
  • We augmented the original dataset of 283 images to 600 images with rotation, scaling, and flipping to improve the model’s performance.
  • We evaluated the performance of the YOLOv5 model with an FPN on the cell counting task and compared it to other YOLOv5 model versions.
The remainder of this paper is structured in the following manner. Section 2 presents a comprehensive literature review of previous studies on cell counting using traditional methods and deep learning techniques. Section 3 introduces the two main components of our approach, YOLO and the FPN, and explains their technical details. Section 4 outlines our proposed approach in detail, including dataset annotation, data preprocessing, customizing YOLO’s hyperparameters, and the experimental settings. Section 5 presents the results of our experiments and an evaluation of our approach’s performance in terms of accuracy and efficiency. Section 6 discusses the implications of our findings and compares our approach’s performance with existing methods. Finally, Section 7 concludes the paper by summarizing the main contributions, discussing the limitations and future directions, and providing a final remark on the potential applications of our approach in the field of cell counting.

2. Literature Review

In many biomedical applications, including cancer detection, drug discovery, and toxicity testing, cell counting is a critical step [45,49,50,51]. In traditional methods, trained personnel manually count cells using a microscope and a counting chamber [18]. Although manual cell counting is the industry standard, it is labor-intensive, time-consuming, and prone to human error, making it challenging to standardize and replicate results across various samples [52]. Additionally, it can be difficult to distinguish between cells that are of similar size and shape or that cluster together, which can result in errors in cell counts [53].
Automated cell counting techniques have been developed to address these issues [34,54,55]. One of the earlier techniques was based on electronic particle counting, which detects cells as they pass through a small aperture using impedance or light scattering. Although this method is quick and precise, it cannot tell the difference between live and dead cells and needs a high cell density [56].
Another automated approach counts cells by analyzing digital images of cells with computer algorithms [57]. These techniques can be divided into two basic groups: direct and indirect methods. In direct methods, cells are labeled with stains or dyes and counted according to their fluorescence or absorbance. Indirect methods instead analyze morphological characteristics of cells, such as size, shape, and texture, to recognize and count cells in digital images. Automated cell counting techniques can also be categorized as image processing-, machine learning-, or deep learning-based methods. The classification tree is shown in Figure 2.
Image processing-based methods involve the application of mathematical operations to enhance, segment, and analyze cell images [58,59]. These methods are computationally efficient but require expert knowledge to design and tune the algorithms [60]. Machine learning-based methods use statistical models to learn patterns and features from the data to classify and count cells. These methods require extensive feature engineering and parameter tuning, making them time-consuming and computationally expensive [61,62]. On the other hand, deep learning-based methods use artificial neural networks with multiple layers to automatically learn and extract features from the data. These methods have shown superior performance in terms of accuracy and speed compared to other methods, and they have produced remarkable results in various other domains, particularly in medical image analysis [63,64,65]. Table 1 summarizes the advantages and limitations of the three categories of methods and highlights their applications in cell detection and counting.
Morelli et al. [70] suggest using deep learning to automate cell counting in fluorescence microscopy. To localize cells and obtain counts as the number of observed objects, the method employs a fully convolutional network known as c-ResUnet. Kayasandik et al. [72] provide a unique image analysis framework for automatic astrocyte recognition and segmentation in 2D fluorescent brain tissue images. The method contains two significant innovations: an automated cell detection method based on multiscale directional filters and astrocyte segmentation using a modified CNN architecture. In another work [73], a deep learning-based approach is proposed for creating pseudo-nuclear stained images from phase contrast images of cells. To recognize the nuclei of cells, the suggested method employs a simple deep neural network; it also determines the relative position of the cells, counting and tracking them at different cell densities. Using fluorescence microscopy images, Ref. [74] demonstrates an automated pipeline for recognizing and counting Mycobacterium tuberculosis (Mtb) bacteria in sputum samples. The pipeline is divided into four stages: annotation with generative adversarial networks (GANs), extraction of key image patches, classification of extracted patches, and regression to obtain the final bacteria count. In a very similar work, Ref. [75] proposes a framework that uses a DCNN for automated cell counting in time-lapse microscopy images of developing human embryos. The study uses a dataset of 265 human embryos to demonstrate the effectiveness of the approach; the results show that the proposed framework provides robust estimates of the number of cells in a developing embryo up to the 5-cell stage, i.e., 48 hours post-fertilization. Meanwhile, Ref. [76] uses a fully convolutional regression network to estimate cell density maps from images. The method includes auxiliary convolutional neural networks to improve performance on unseen datasets.
Furthermore, Refs. [71,77] propose an automated method for blood cell counting and categorization that employs instance segmentation, transfer learning, and Mask R-CNN. The proposed approach successfully detects a wide range of blood cells, including overlapped and faded cells. Similarly, Ref. [78] establishes a 3D cell counting method based on U-net deep learning to effectively identify original seed cell numbers in extracellular matrix (ECM) aggregated cells. When compared to standard contour and watershed segmentation methods, the proposed method has a smaller counting error.
Finally, Ref. [79] presents a deep learning approach for the detection and segmentation of macrophage cells in fluorescence microscopy images using feature pyramid fusion. The proposed approach shows superior performance compared to a state-of-the-art Mask R-CNN approach and provides a novel dataset of macrophage cells for public use. Likewise, Ref. [80] proposes a fully convolutional neural network-based approach for automatic cell counting in fluorescent microscopy images. The proposed method shows human-level performance and satisfactory performance in terms of the counting task, with mean and median absolute errors of 0.8 and 1, respectively.
Despite all the preceding literature, the task of cell counting in fluorescence images has yet to be studied using a single-stage object detection method. One such object detection technology is You Only Look Once (YOLO), a cutting-edge deep learning-based object detection framework. Because it uses a single convolutional neural network to estimate bounding boxes and class probabilities directly from entire images, YOLO is faster and more accurate than earlier object detection algorithms [40]. Numerous applications, including self-driving cars, pedestrian detection, and face detection, have effectively exploited YOLO for object detection [81,82,83].
FPNs are another deep learning-based approach for image object detection, segmentation, and feature extraction. An FPN is a multiscale pyramid network that detects objects of varying sizes in images by using feature maps of varying resolutions [84]. An FPN has been used to recognize and segment small objects in aerial photos and to segment buildings in satellite images [85,86,87].

3. Baseline Architecture

Figure 3 illustrates the baseline architecture of our model.

3.1. Overview of YOLOv5 Architecture

The architecture of YOLOv5 follows a similar concept to previous versions of YOLO, where a single neural network is trained to directly predict bounding boxes and class probabilities for each object in the image. However, YOLOv5 has undergone several improvements to make it faster and more accurate. The network architecture of YOLOv5 is based on a backbone of CSP convolutional layers and a neck of PAN layers, followed by three different-sized detection heads. This architecture allows the model to capture features at different scales, leading to improved accuracy in object detection tasks.

3.2. Overview of FPN Architecture and Implementation

The basic idea behind an FPN is to construct a pyramid of multiscale feature maps from a single input image. This pyramid is built by applying a series of convolutional layers with decreasing spatial resolution to the input image. The resulting feature maps at each level of the pyramid contain different levels of semantic information and different levels of detail. The top-level feature map has the lowest spatial resolution but the highest semantic information, while the bottom-level feature map has the highest spatial resolution but the lowest semantic information.
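To make this construction concrete, the following minimal PyTorch sketch builds a small bottom-up pyramid by repeatedly halving the spatial resolution of the input; the layer types, channel counts, and number of levels are illustrative assumptions rather than the exact configuration of our backbone.

```python
import torch
import torch.nn as nn

class BottomUpPyramid(nn.Module):
    """Toy bottom-up pathway: each stage halves the spatial resolution."""
    def __init__(self, in_ch=3, chs=(64, 128, 256)):
        super().__init__()
        stages, prev = [], in_ch
        for c in chs:
            stages.append(nn.Sequential(
                nn.Conv2d(prev, c, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(c),
                nn.SiLU(),
            ))
            prev = c
        self.stages = nn.ModuleList(stages)

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)   # from high-resolution/low-semantics to low-resolution/high-semantics
        return feats

# Example: a 640x640 image yields 320x320, 160x160, and 80x80 feature maps.
feats = BottomUpPyramid()(torch.randn(1, 3, 640, 640))
print([f.shape for f in feats])
```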

3.3. Combining YOLOv5 and FPN

The combined architecture, YOLOv5 with an FPN and a YOLOv5 head, consists of three main parts: the backbone, the neck, and the head.

3.3.1. Backbone

The backbone is responsible for extracting feature maps from the input image. YOLOv5 uses a modified CSP backbone that is composed of a series of convolutional layers with shortcut connections. The backbone consists of four stages, where each stage performs downsampling of the feature maps. The stem is the initial part of the backbone that processes the input image and generates the first set of feature maps. In YOLOv5, the stem consists of a series of convolutional layers and a pooling layer that downsamples the image to a smaller size. The stem is responsible for extracting low-level features such as edges and corners from the input image. The stages are responsible for progressively extracting more complex features from the input image by processing the feature maps generated by the previous stage. Each stage typically performs downsampling of the feature maps to increase their receptive field and reduce their spatial resolution. The stem and stages in the YOLOv5 backbone are responsible for extracting increasingly complex features from the input image, which are then used by the feature pyramid network (FPN) and YOLOv5 head to generate bounding box predictions and class probabilities.

3.3.2. Neck

The neck connects the backbone to the head and is responsible for fusing feature maps of different resolutions. We use a feature pyramid network (FPN) as the neck, which generates a pyramid of feature maps at different scales by combining feature maps from different levels of the backbone. The FPN consists of two parts: a bottom-up pathway that generates the feature maps from the backbone, and a top-down pathway that combines the feature maps to create a pyramid of features. After the last stage of the backbone, the feature map has a very small spatial resolution, which makes it difficult to detect small objects. To address this, the FPN upsamples the feature maps from the lower scales and fuses them with the feature maps from the higher scales to create a set of feature maps with varying scales and resolutions. In the FPN, the upsampling operation is used to increase the spatial resolution of the feature maps from the lower scales to match the resolution of the feature maps from the higher scales. The upsampled feature maps are then merged with the higher-scale feature maps using an addition operation to create a fused feature map.
The merged feature map is then passed through a convolution layer to refine the features and reduce the channel depth to match the desired output size. The resulting feature map is then used as the input to the next level of the pyramid.
The downsampling operation, which reduces the spatial resolution of the feature maps, is typically implemented using pooling layers, such as max pooling or average pooling. This operation is used in the backbone to reduce the spatial resolution of the input image and generate the initial feature maps.
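The sketch below illustrates a single top-down merge step of the kind described above: the coarser feature map is upsampled, fused with the finer map, and refined with a convolution. The channel sizes and the element-wise addition used for merging follow the standard FPN design and are assumptions for illustration, not necessarily the exact operations in our customized neck.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNMerge(nn.Module):
    """One top-down FPN step: upsample the coarser map and fuse it with the finer one."""
    def __init__(self, coarse_ch, fine_ch, out_ch=256):
        super().__init__()
        self.lateral_coarse = nn.Conv2d(coarse_ch, out_ch, kernel_size=1)  # match channel depth
        self.lateral_fine = nn.Conv2d(fine_ch, out_ch, kernel_size=1)
        self.refine = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)  # smooth after fusion

    def forward(self, coarse, fine):
        top_down = F.interpolate(self.lateral_coarse(coarse),
                                 size=fine.shape[-2:], mode="nearest")     # upsample to finer size
        fused = top_down + self.lateral_fine(fine)                         # element-wise merge
        return self.refine(fused)                                          # refined pyramid level

# Example: fuse a 20x20 (coarse, 512-channel) map into a 40x40 (fine, 256-channel) map.
p4 = FPNMerge(512, 256)(torch.randn(1, 512, 20, 20), torch.randn(1, 256, 40, 40))
print(p4.shape)  # torch.Size([1, 256, 40, 40])
```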

3.3.3. Head

The head is responsible for predicting the bounding boxes and class probabilities for the objects in the input image. The YOLOv5 head architecture is a single-stage object detection system that predicts bounding boxes and class probabilities directly from the fused feature maps generated by the FPN. The head consists of a series of convolutional layers that reduce the dimensionality of the fused feature maps, followed by two fully connected layers that output the bounding box coordinates and class probabilities.
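As an illustration of how such head outputs can be turned into detections, the hedged sketch below decodes raw per-anchor predictions (box offsets, objectness, and class scores) into pixel-space boxes and confidences using a generic sigmoid/exponential scheme; this decoding is a common single-stage convention used here for clarity and is not copied from the YOLOv5 source.

```python
import torch

def decode_predictions(raw, anchors, stride, conf_thres=0.5):
    """Decode raw head outputs into (x, y, w, h, confidence) rows in pixels.

    raw: tensor of shape (grid_h, grid_w, num_anchors, 5 + num_classes),
         laid out as (tx, ty, tw, th, objectness, class scores...).
    anchors: tensor of shape (num_anchors, 2) with anchor (w, h) in pixels.
    """
    gh, gw, na, _ = raw.shape
    gy, gx = torch.meshgrid(torch.arange(gh), torch.arange(gw), indexing="ij")
    grid = torch.stack((gx, gy), dim=-1).unsqueeze(2).float()       # (gh, gw, 1, 2)

    xy = (torch.sigmoid(raw[..., 0:2]) + grid) * stride             # box centers in pixels
    wh = anchors.view(1, 1, na, 2) * torch.exp(raw[..., 2:4])       # box sizes in pixels
    obj = torch.sigmoid(raw[..., 4:5])                              # objectness score
    cls = torch.sigmoid(raw[..., 5:])                               # per-class score(s)
    conf = obj * cls.max(dim=-1, keepdim=True).values               # final confidence

    boxes = torch.cat((xy, wh, conf), dim=-1).reshape(-1, 5)
    return boxes[boxes[:, 4] > conf_thres]                          # keep confident detections

# Example with a 20x20 grid, 3 anchors, 1 class ("cell"), and stride 32.
dets = decode_predictions(torch.randn(20, 20, 3, 6),
                          torch.tensor([[10., 13.], [16., 30.], [33., 23.]]), stride=32)
print(dets.shape)
```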

4. Methodology

4.1. Dataset Preparation

The dataset utilized for cell counting in this study was obtained from the study conducted by [70], consisting of 283 images of cultured cells from mouse brain slices. As the dataset was not initially suitable for object detection tasks using bounding boxes, we preprocessed the data by annotating individual cells with the LabelImg tool in the YOLOv5 PyTorch format. This involved converting the bounding box coordinates to the format (x, y, w, h), where (x, y) represents the center coordinates of the bounding box and (w, h) represents the width and height of the bounding box relative to the image size. The class label for each bounding box was also encoded as an integer. A thorough quality check was conducted to ensure the accuracy and consistency of the annotations. Following annotation, we randomly divided the dataset into training (80%), validation (10%), and testing (10%) sets in order to train and evaluate our deep learning model for autonomous cell counting.
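For clarity, the snippet below shows the coordinate conversion described above, from a corner-style bounding box to a normalized YOLO (x, y, w, h) label line; the corner-box input format is an assumption for illustration, since LabelImg can also export YOLO-format labels directly.

```python
def voc_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h, class_id=0):
    """Convert a corner-format box to a normalized YOLO (class, x, y, w, h) label line."""
    x_center = (xmin + xmax) / 2.0 / img_w   # box center, relative to image width
    y_center = (ymin + ymax) / 2.0 / img_h   # box center, relative to image height
    width = (xmax - xmin) / img_w            # box width, relative to image width
    height = (ymax - ymin) / img_h           # box height, relative to image height
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: a 40x60-pixel cell near the top-left corner of a 640x640 image.
print(voc_to_yolo(100, 120, 140, 180, 640, 640))  # "0 0.187500 0.234375 0.062500 0.093750"
```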

4.2. Dataset Augmentation

In order to increase the diversity of the dataset and prevent overfitting, data augmentation techniques were applied to the original dataset of 283 images. The following data augmentation techniques were used:
  • Horizontal flipping: The images were horizontally flipped to generate new images.
  • Rotation: The images were rotated at different angles to create variations in the cell positions and orientations.
  • Brightness and contrast adjustment: The brightness and contrast of the images were adjusted within a range of −40 to +40 to simulate different lighting conditions and highlight the dim and dull cells.
The augmented dataset was generated by applying these techniques randomly to the original dataset. The size of the augmented dataset was increased from 283 to 600 images. The effectiveness of the data augmentation techniques was evaluated by training the custom YOLOv5 model on both the original and augmented datasets and comparing their performance.
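A possible implementation of this augmentation pipeline is sketched below using the albumentations library; the choice of library, the rotation limit, and the transform probabilities are assumptions for illustration rather than the exact settings used to build our augmented dataset.

```python
import albumentations as A

# Illustrative pipeline mirroring the transformations listed above (assumed parameters).
augment = A.Compose(
    [
        A.HorizontalFlip(p=0.5),                             # horizontal flipping
        A.Rotate(limit=30, p=0.5),                           # rotation at random angles
        A.RandomBrightnessContrast(brightness_limit=0.4,     # roughly -40% to +40%
                                   contrast_limit=0.4, p=0.5),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

# Usage: `image` is an HxWx3 numpy array, `bboxes` are normalized (x, y, w, h) tuples.
# out = augment(image=image, bboxes=bboxes, class_labels=["cell"] * len(bboxes))
# aug_image, aug_boxes = out["image"], out["bboxes"]
```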

4.3. Customizing YOLOv5

To customize YOLOv5 for cell counting, we needed to modify the architecture and parameters of the model. The YOLOv5 architecture consists of a backbone network and a detection head. The backbone network is responsible for feature extraction from the input image, while the detection head is responsible for predicting the bounding boxes and confidence scores for objects in the image. We customized both parts of the network to improve its accuracy for cell counting. We modified the number of classes in the detection head to one, as we are only interested in detecting one type of object, i.e., cells. We also modified the anchor box sizes and aspect ratios to better match the size and shape of cells in our images. In addition, we modified the output layer of the network to predict the number of cells in the image instead of detecting their bounding boxes. To configure the YOLOv5 architecture with the FPN for cell counting, the baseline architecture was modified to include the FPN module for multiscale feature extraction. The number of feature levels and the feature map sizes for each level were determined based on the input image size and the cell size. The procedure for this modification is presented in Algorithm 1.
Algorithm 1 Customized YOLOv5 Model with FPN.
1: Load the pre-trained YOLOv5 model and remove the detection head.
2: Add the FPN layers to the model architecture by implementing the bottom-up and top-down pathways.
3: Concatenate the feature maps generated by the bottom-up pathway with the corresponding feature maps generated by the top-down pathway.
4: Apply convolutional layers to the concatenated feature maps to generate the final multiscale feature maps.
5: Add the detection head back to the model architecture and train the model on the annotated and augmented dataset.
6: Evaluate the performance of the customized YOLOv5 model with FPN.
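The structural sketch below mirrors the steps of Algorithm 1 in PyTorch, assuming hypothetical Backbone, FPNNeck, and DetectionHead modules; in practice the modification is made in the YOLOv5 model definition itself rather than by composing classes this way.

```python
import torch.nn as nn

class YOLOv5WithFPN(nn.Module):
    """Structural sketch of Algorithm 1 with assumed (hypothetical) sub-modules."""
    def __init__(self, backbone, fpn_neck, head):
        super().__init__()
        self.backbone = backbone      # step 1: pretrained backbone, original head removed
        self.fpn_neck = fpn_neck      # steps 2-4: bottom-up/top-down pathways plus fusion convs
        self.head = head              # step 5: detection head attached to the fused features

    def forward(self, x):
        c3, c4, c5 = self.backbone(x)            # multi-stage backbone feature maps
        p3, p4, p5 = self.fpn_neck(c3, c4, c5)   # fused multiscale feature maps
        return self.head(p3, p4, p5)             # box and class predictions per scale
```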

5. Results

After fine-tuning the hyperparameters and experimenting with various optimization algorithms, we settled on the Adam optimizer with an initial learning rate of 0.001, a weight decay of 0.0003, and a batch size of 80 for our customized YOLOv5 model with an FPN. We trained the model on images of different sizes, namely 416 × 416, 640 × 640, and 840 × 840, to generate multiscale feature maps that can detect cells of varying sizes. Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 illustrate the results obtained from the evaluation of the model on the validation and test datasets. On the validation dataset, the precision was 0.796, the recall was 0.741, and the mAP was 0.79. On the test dataset, the precision was 0.79, the recall was 0.829, and the mean average precision was 0.837. The experiments in this study were conducted on the Google Colab platform, using a free Tesla T4 GPU to evaluate the performance of the proposed model. On average, the processing time for each image, encompassing detection and counting, was 43.9 ms. The confusion matrix in Figure 8 shows that the model correctly predicted the “Cell” class 91% of the time.
The F1 curve illustrates the trade-off between precision and recall for different confidence thresholds. The highest F1 score achieved was 80%, indicating a good balance between precision and recall. The training and validation accuracy and loss curves show the progress of the model’s performance during training. The validation accuracy steadily increased over time, while the training loss steadily decreased, indicating that the model was effectively learning from the data. The precision and confidence curves show the relationship between precision and confidence for different confidence thresholds. The highest precision was achieved at a confidence threshold of 0.7, indicating that the model was highly confident in its predictions at this threshold.
Furthermore, we retrained the model using the initial weights and conducted training for 300 epochs, implementing an early stopping technique. Early stopping involves monitoring the validation loss; if there is no improvement in the validation loss over a consecutive number of epochs, training is halted. This approach ensures that the model achieves the best performance while preventing overfitting. We also compared the performance of our model with and without data augmentation, and with different image sizes and YOLOv5 architectures (YOLOv5s, YOLOv5n, YOLOv5l, and YOLOv5fpn). Our experiments demonstrated that YOLOv5 with an FPN achieves the best performance, with an mAP of 0.799. The comparison of results from the different experiments provides a comprehensive view of the performance achieved by our model. Table 2 shows that the model trained with data augmentation achieved higher precision, recall, and F1 scores than the model trained without data augmentation, and that the best results were achieved with the YOLOv5fpn architecture at an image size of 416 × 416, which yielded the highest mAP on both the validation and test sets.
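For reference, a generic early stopping loop of the kind described above can be sketched as follows; the patience value and the callback-style training and validation functions are illustrative assumptions, not the exact implementation used in our experiments.

```python
def train_with_early_stopping(model, train_one_epoch, validate, max_epochs=300, patience=20):
    """Generic early stopping: halt when the validation loss stops improving.

    train_one_epoch(model) runs one epoch of training; validate(model) returns
    the validation loss. The patience value is an illustrative assumption.
    """
    best_loss, best_state, epochs_without_improvement = float("inf"), None, 0
    for epoch in range(max_epochs):
        train_one_epoch(model)
        val_loss = validate(model)
        if val_loss < best_loss:
            best_loss, epochs_without_improvement = val_loss, 0
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:   # no improvement for `patience` epochs
                print(f"Early stopping at epoch {epoch}, best validation loss {best_loss:.4f}")
                break
    if best_state is not None:
        model.load_state_dict(best_state)                # restore the best-performing weights
    return model
```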
In the final stage of our evaluation, we quantified the number of cells detected in the test dataset. A few exemplary images from the test dataset are shown in Figure 9, Figure 10, Figure 11 and Figure 12, along with their corresponding detected cells. The number displayed with each box, alongside the word “cell”, represents the confidence score associated with that detection. In object detection, a confidence score is a numerical value that expresses the algorithm’s estimate of the likelihood that a detected region or bounding box contains the object of interest, in this case a cell. This analysis allowed us to assess the effectiveness of our customized YOLOv5 model with an FPN in detecting cells accurately and reliably, and to further confirm its potential for use in practical applications.
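Given the per-image detections, the cell count is simply the number of boxes whose confidence score exceeds the chosen threshold. The minimal sketch below illustrates this, assuming detections are provided as (x, y, w, h, confidence) tuples; the threshold of 0.5 is an illustrative assumption.

```python
def count_cells(detections, conf_thres=0.5):
    """Count detected cells as the number of boxes above the confidence threshold.

    detections: iterable of (x, y, w, h, confidence) tuples for one image.
    Returns the cell count and the kept boxes, so positions and sizes can be analyzed.
    """
    kept = [d for d in detections if d[4] >= conf_thres]
    return len(kept), kept

# Example: three raw detections, one below the threshold -> counted cells = 2.
count, boxes = count_cells([(120, 88, 34, 31, 0.91),
                            (300, 412, 40, 37, 0.83),
                            (77, 515, 29, 28, 0.42)])
print(count)  # 2
```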

6. Discussion

Our paper presents a machine learning approach for automated cell counting in fluorescence microscopy images using YOLOv5fpn. In comparison to the approach proposed in the UNet paper [80], which achieves an F1 score of 0.87, our method achieves a true positive cell prediction rate of 0.94, demonstrating high precision and accuracy. Our method employs a single-shot detection approach, which eliminates the need for a separate segmentation step and makes the model faster and more efficient. By directly detecting cells in a single pass, our approach significantly reduces the computational complexity and processing time, making it suitable for large-scale analyses and real-time applications. Furthermore, we conducted several experiments to optimize our model’s performance, including retraining the model with early stopping and comparing the performance with and without data augmentation, and with different image sizes and YOLOv5 architectures.
Another notable advantage of our approach is its ability to handle cells of varying sizes. Evident from Figure 9, Figure 10, Figure 11 and Figure 12, the cells in the images exhibit size variations and our method successfully detects and counts them. This capability is crucial in biomedical research, as it enables the analysis of cellular populations with diverse sizes and facilitates the study of cellular dynamics and disease progression. Moreover, our method can extract the bounding box coordinates for each detected cell. These coordinates provide the spatial location of the cells within the image, enabling further spatial analysis and characterization. Researchers can use these coordinates to study the distribution and clustering patterns of cells in the image, which can provide valuable insights into cellular dynamics and interactions. Additionally, by measuring the width and height of the bounding box, researchers can obtain an approximation of the size of each cell. This information can be used to analyze cell size distribution, track changes in cell size over time, or compare the sizes of cells under different experimental conditions.
Overall, our suggested method detects and counts cells accurately while also providing valuable parameters, such as bounding box coordinates, confidence scores, and estimated cell sizes. These parameters enable the comprehensive analysis and characterization of cellular features, facilitating further investigations into cellular dynamics, disease progression, and the effects of various treatments or interventions. In summary, our single-shot detection method shows promise for automated cell counting in fluorescence microscopy images, offering a fast and accurate solution that could be employed in a range of research and clinical settings.

7. Conclusions

In conclusion, our study presents a promising solution for automated cell counting in fluorescence microscopy images using the YOLOv5fpn model. Our experiments demonstrate the effectiveness of the single-shot detection approach, which eliminates the need for a separate segmentation step, making the model faster and more efficient. We also show that the model’s performance can be further optimized by retraining with early stopping and utilizing data augmentation, achieving good results.
However, our study also has limitations that can be addressed in future work. One limitation is the size and variety of the dataset used for training and validation. While our dataset includes a large number of images, it is limited to a specific type of cell, and future studies could benefit from incorporating additional cell types and imaging conditions. Additionally, our study focuses on detecting and counting cells in 2D images, and future work could explore extending this approach to 3D images or time-lapse microscopy. Overall, our study provides a foundation for further research in the automated analysis of fluorescence microscopy images, with potential applications in various research and clinical settings.

Author Contributions

Data curation, B.A.; funding acquisition, B.A.; investigation, F.A.; methodology, B.A.; project administration, M.H.; supervision, N.Z.J.; validation, F.A.; writing—original draft, F.A.; writing—review and editing, N.Z.J. and M.H. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deputyship for Research & Innovation, the Ministry of Education in Saudi Arabia for funding this research work through project number 223202.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through project number 223202.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
2D    Two-Dimensional
3D    Three-Dimensional
CNN   Convolutional Neural Network
CSP   Cross-Stage Partial
DCNN  Deep Convolutional Neural Network
FCN   Fully Convolutional Network
FPN   Feature Pyramid Network
GAN   Generative Adversarial Network
IoU   Intersection over Union
PAN   Path Aggregation Network
mAP   Mean Average Precision
Mtb   Mycobacterium tuberculosis
SSD   Single-Shot Detector
UNET  U-Shaped Convolutional Network
YOLO  You Only Look Once

References

  1. Piccinini, F.; Tesei, A.; Arienti, C.; Bevilacqua, A. Cell counting and viability assessment of 2D and 3D cell cultures: Expected reliability of the trypan blue assay. Biol. Proced. Online 2017, 19, 1–12. [Google Scholar] [CrossRef] [PubMed]
  2. Ongena, K.; Das, C.; Smith, J.L.; Gil, S.; Johnston, G. Determining cell number during cell culture using the Scepter cell counter. JoVE (J. Vis. Exp.) 2010, e2204. [Google Scholar] [CrossRef]
  3. Barbosa, M.A.; Xavier, C.P.; Pereira, R.F.; Petrikaitė, V.; Vasconcelos, M.H. 3D cell culture models as recapitulators of the tumor microenvironment for the screening of anti-cancer drugs. Cancers 2022, 14, 190. [Google Scholar] [CrossRef] [PubMed]
  4. Zhang, W.; Lee, A.M.; Jena, S.; Huang, Y.; Ho, Y.; Tietz, K.T.; Miller, C.R.; Su, M.C.; Mentzer, J.; Ling, A.L.; et al. Computational drug discovery for castration-resistant prostate cancers through in vitro drug response modeling. Proc. Natl. Acad. Sci. USA 2023, 120, e2218522120. [Google Scholar] [CrossRef] [PubMed]
  5. Briggs, C. Quality counts: New parameters in blood cell counting. Int. J. Lab. Hematol. 2009, 31, 277–297. [Google Scholar] [CrossRef]
  6. Almufareh, M.F.; Tehsin, S.; Humayun, M.; Kausar, S. A Transfer Learning Approach for Clinical Detection Support of Monkeypox Skin Lesions. Diagnostics 2023, 13, 1503. [Google Scholar] [CrossRef]
  7. Zou, L.h.; Chen, J.; Zhang, J.; Garcia, N. Malaria cell counting diagnosis within large field of view. In Proceedings of the 2010 International Conference on Digital Image Computing: Techniques and Applications, Sydney, Australia, 1–3 December 2010; pp. 172–177. [Google Scholar]
  8. Guo, J.; Ge, J.; Guo, Y. Recent advances in methods for the diagnosis of Corona Virus Disease 2019. J. Clin. Lab. Anal. 2022, 36, e24178. [Google Scholar] [CrossRef]
  9. Pillai, S.; Kunze, E.; Sordillo, L.; Jayarao, B. Application of differential inflammatory cell count as a tool to monitor udder health. J. Dairy Sci. 2001, 84, 1413–1420. [Google Scholar] [CrossRef]
  10. Phelan, M.C.; Lawler, G. Cell counting. Curr. Protoc. Cytom. 1997, A.3A.1–A.3A.4. [Google Scholar] [CrossRef]
  11. Hedreen, J.C. What was wrong with the Abercrombie and empirical cell counting methods? A review. Anat. Rec. Off. Publ. Am. Assoc. Anat. 1998, 250, 373–380. [Google Scholar] [CrossRef]
  12. Kickler, T.S. Clinical analyzers. Advances in automated cell counting. Anal. Chem. 1999, 71, 363–365. [Google Scholar] [CrossRef] [PubMed]
  13. Alcaide Martín, M.J.; Altimira Queral, L.; Sahuquillo Frías, L.; Valiña Amado, L.; Merino, A.; García de Guadiana-Romualdo, L. Automated cell count in body fluids: A review. Adv. Lab. Med. Med. Lab. 2021, 2, 149–161. [Google Scholar] [CrossRef] [PubMed]
  14. Cadena-Herrera, D.; Esparza-De Lara, J.E.; Ramírez-Ibañez, N.D.; López-Morales, C.A.; Pérez, N.O.; Flores-Ortiz, L.F.; Medina-Rivero, E. Validation of three viable-cell counting methods: Manual, semi-automated, and automated. Biotechnol. Rep. 2015, 7, 9–16. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Jindal, D.; Singh, M. Counting of Cells. In Animal Cell Culture: Principles and Practice; Springer: Cham, Switzerland, 2023; pp. 131–145. [Google Scholar]
  16. Green, M.R.; Sambrook, J. Estimation of cell number by hemocytometry counting. Cold Spring Harb. Protoc. 2019, 2019, pdb-prot097980. [Google Scholar] [CrossRef] [PubMed]
  17. Pioli, P.D. Protocol: Hemocytometer Cell Counting, 2019. Available online: https://med.wmich.edu/sites/default/files/Hemacytometer_Cell_Counting.pdf (accessed on 8 May 2023).
  18. Vembadi, A.; Menachery, A.; Qasaimeh, M.A. Cell cytometry: Review and perspective on biotechnological advances. Front. Bioeng. Biotechnol. 2019, 7, 147. [Google Scholar] [CrossRef]
  19. Khan, S.; Khan, A.; Khattak, F.S.; Naseem, A. An accurate and cost effective approach to blood cell count. Int. J. Comput. Appl. 2012, 50, 18–24. [Google Scholar] [CrossRef] [Green Version]
  20. Mattern, C.F.; Brackett, F.S.; Olson, B.J. Determination of number and size of particles by electrical gating: Blood cells. J. Appl. Physiol. 1957, 10, 56–70. [Google Scholar] [CrossRef]
  21. Avci, M.B.; Yasar, S.D.; Cetin, A.E. An Optofluidic Platform for Cell-Counting Applications. Anal. Methods 2023, 15, 2244–2252. [Google Scholar] [CrossRef]
  22. Philipson, L. Adenovirus assay by the fluorescent cell-counting procedure. Virology 1961, 15, 263–268. [Google Scholar] [CrossRef]
  23. Chiu, R.J.; Black, L. Assay of wound tumor virus by the fluorescent cell counting technique. Virology 1969, 37, 667–677. [Google Scholar] [CrossRef]
  24. Levina, A.; Repkova, M.; Shikina, N.; Ismagilov, Z.; Yashnik, S.; Semenov, D.; Savinovskaya, Y.; Mazurkova, N.; Pyshnaya, I.; Zarytova, V. Non-agglomerated silicon-organic nanoparticles and their nanocomplexes with oligonucleotides: Synthesis and properties. Beilstein J. Nanotechnol. 2018, 9, 2516–2525. [Google Scholar] [CrossRef] [Green Version]
  25. Lo, C.H.; Schaaf, T.M.; Thomas, D.D.; Sachs, J.N. Fluorescence-based TNFR1 biosensor for monitoring receptor structural and conformational dynamics and discovery of small molecule modulators. In The TNF Superfamily: Methods and Protocols; Humana: New York, NY, USA, 2021; pp. 121–137. [Google Scholar]
  26. Sciences, C.L. Introduction to Fluorescence Cell Counting. 2021. Available online: https://www.corning.com/catalog/cls/documents/application-notes/CLS-AN-495.pdf (accessed on 30 April 2023).
  27. Parvaiz, A.; Khalid, M.A.; Zafar, R.; Ameer, H.; Ali, M.; Fraz, M.M. Vision transformers in medical computer vision—A contemplative retrospection. Eng. Appl. Artif. Intell. 2023, 122, 106126. [Google Scholar] [CrossRef]
  28. Greener, J.G.; Kandathil, S.M.; Moffat, L.; Jones, D.T. A guide to machine learning for biologists. Nat. Rev. Mol. Cell Biol. 2022, 23, 40–55. [Google Scholar] [CrossRef] [PubMed]
  29. Bereciartua-Pérez, A.; Gómez, L.; Picón, A.; Navarra-Mestre, R.; Klukas, C.; Eggers, T. Insect counting through deep learning-based density maps estimation. Comput. Electron. Agric. 2022, 197, 106933. [Google Scholar] [CrossRef]
  30. Asha, S.; Gopakumar, G.; Subrahmanyam, G.R.S. Saliency and ballness driven deep learning framework for cell segmentation in bright field microscopic images. Eng. Appl. Artif. Intell. 2023, 118, 105704. [Google Scholar] [CrossRef]
  31. Chen, X.; Kandel, M.E.; He, S.; Hu, C.; Lee, Y.J.; Sullivan, K.; Tracy, G.; Chung, H.J.; Kong, H.J.; Anastasio, M.; et al. Artificial confocal microscopy for deep label-free imaging. Nat. Photonics 2023, 17, 250–258. [Google Scholar] [CrossRef]
  32. Flórez, S.L.; González-Briones, A.; Hernández, G.; de la Prieta, F. Automated Counting via Multicolumn Network and CytoSMART Exact FL Microscope. In Ambient Intelligence—Software and Applications—13th International Symposium on Ambient Intelligence; Springer: Cham, Switzerland, 2023; pp. 207–218. [Google Scholar]
  33. Venkatapathiah, S.K.; Selvan, S.S.; Nanda, P.; Shetty, M.; Swamy, V.M.; Awasthi, K. Deep learning based object detection in nailfold capillary images. IAES Int. J. Artif. Intell. 2023, 12, 931. [Google Scholar] [CrossRef]
  34. Kothari, S.; Chaudry, Q.; Wang, M.D. Automated cell counting and cluster segmentation using concavity detection and ellipse fitting techniques. In Proceedings of the 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Boston, MA, USA, 28 June–1 July 2009; pp. 795–798. [Google Scholar]
  35. Mohapatra, S.; Patra, D. Automated cell nucleus segmentation and acute leukemia detection in blood microscopic images. In Proceedings of the 2010 International Conference on Systems in Medicine and Biology, Kharagpur, India, 16–18 December 2010; pp. 49–54. [Google Scholar]
  36. Fenistein, D.; Lenseigne, B.; Christophe, T.; Brodin, P.; Genovesio, A. A fast, fully automated cell segmentation algorithm for high-throughput and high-content screening. Cytom. Part A J. Int. Soc. Anal. Cytol. 2008, 73, 958–964. [Google Scholar] [CrossRef]
  37. Kesler, B.; Li, G.; Thiemicke, A.; Venkat, R.; Neuert, G. Automated cell boundary and 3D nuclear segmentation of cells in suspension. Sci. Rep. 2019, 9, 10237. [Google Scholar] [CrossRef] [Green Version]
  38. Mandal, S.; Uhlmann, V. Splinedist: Automated cell segmentation with spline curves. In Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), Nice, France, 13–16 April 2021; pp. 1082–1086. [Google Scholar]
  39. Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of Yolo algorithm developments. Procedia Comput. Sci. 2022, 199, 1066–1073. [Google Scholar] [CrossRef]
  40. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  41. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar]
  42. Humayun, M.; Ashfaq, F.; Jhanjhi, N.Z.; Alsadun, M.K. Traffic management: Multi-scale vehicle detection in varying weather conditions using yolov4 and spatial pyramid pooling network. Electronics 2022, 11, 2748. [Google Scholar] [CrossRef]
  43. Alwakid, G.; Gouda, W.; Humayun, M.; Jhanjhi, N.Z. Diagnosing Melanomas in Dermoscopy Images Using Deep Learning. Diagnostics 2023, 13, 1815. [Google Scholar] [CrossRef] [PubMed]
  44. Aldughayfiq, B.; Ashfaq, F.; Jhanjhi, N.; Humayun, M. YOLO-Based Deep Learning Model for Pressure Ulcer Detection and Classification. Healthcare 2023, 11, 1222. [Google Scholar] [CrossRef] [PubMed]
  45. Alwakid, G.; Gouda, W.; Humayun, M. Deep Learning-based prediction of Diabetic Retinopathy using CLAHE and ESRGAN for Enhancemen. Healthcare 2023, 11, 863. [Google Scholar] [CrossRef]
  46. Aldughayfiq, B.; Ashfaq, F.; Jhanjhi, N.; Humayun, M. Explainable AI for Retinoblastoma Diagnosis: Interpreting Deep Learning Models with LIME and SHAP. Diagnostics 2023, 13, 1932. [Google Scholar] [CrossRef]
  47. Ashfaq, F.; Ghoniem, R.; Jhanjhi, N.; Khan, N.; Algarni, A. Using Dual Attention BiLSTM to Predict Vehicle Lane Changing Maneuvers on Highway Dataset. Systems 2023, 11, 196. [Google Scholar] [CrossRef]
  48. Peng, F.; Miao, Z.; Li, F.; Li, Z. S-FPN: A shortcut feature pyramid network for sea cucumber detection in underwater images. Expert Syst. Appl. 2021, 182, 115306. [Google Scholar] [CrossRef]
  49. Blay, V.; Tolani, B.; Ho, S.P.; Arkin, M.R. High-throughput screening: Today’s biochemical and cell-based approaches. Drug Discov. Today 2020, 25, 1807–1821. [Google Scholar] [CrossRef]
  50. Kuusanmäki, H.; Leppä, A.M.; Pölönen, P.; Kontro, M.; Dufva, O.; Deb, D.; Yadav, B.; Brück, O.; Kumar, A.; Everaus, H.; et al. Phenotype-based drug screening reveals association between venetoclax response and differentiation stage in acute myeloid leukemia. Haematologica 2020, 105, 708. [Google Scholar] [CrossRef]
  51. Costa, A.F.; Campos, D.; Reis, C.A.; Gomes, C. Targeting glycosylation: A new road for cancer drug discovery. Trends Cancer 2020, 6, 757–766. [Google Scholar] [CrossRef]
  52. Alam, M.M.; Islam, M.T. Machine learning approach of automatic identification and counting of blood cells. Healthc. Technol. Lett. 2019, 6, 103–108. [Google Scholar] [CrossRef] [PubMed]
  53. Schüffler, P.J.; Fuchs, T.J.; Ong, C.S.; Wild, P.J.; Rupp, N.J.; Buhmann, J.M. TMARKER: A free software toolkit for histopathological cell counting and staining estimation. J. Pathol. Informatics 2013, 4, 2. [Google Scholar] [CrossRef] [PubMed]
  54. Guo, X.; Yu, F. A method of automatic cell counting based on microscopic image. In Proceedings of the 2013 5th International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, China, 26–27 August 2013; Volume 1, pp. 293–296. [Google Scholar]
  55. O’Brien, J.; Hayder, H.; Peng, C. Automated quantification and analysis of cell counting procedures using ImageJ plugins. JoVE (J. Vis. Exp.) 2016, e54719. [Google Scholar] [CrossRef]
  56. Santen, R.J. Automated estimation of diploid and tetraploid nuclei with an electronic particle counter. Exp. Cell Res. 1965, 40, 413–420. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  57. Ertürk, İ.F. Automatic Cell Counting from Microchannel Images. Master’s Thesis, Middle East Technical University, Ankara, Turkey, 2022. [Google Scholar]
  58. Akkoyun, F.; Öcçelik, A. Rapid characterization of cell and bacteria counts using computer vision. Türk Doğa Ve Fen Dergisi 2021, 10, 269–274. [Google Scholar] [CrossRef]
  59. Biswas, S.; Ghoshal, D. Blood cell detection using thresholding estimation based watershed transformation with Sobel filter in frequency domain. Procedia Comput. Sci. 2016, 89, 651–657. [Google Scholar] [CrossRef] [Green Version]
  60. Pandit, A.; Kolhar, S.; Patil, P. Survey on automatic rbc detection and counting. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 2015, 4, 128–131. [Google Scholar]
  61. Li, C.; Ma, X.; Deng, J.; Li, J.; Liu, Y.; Zhu, X.; Liu, J.; Zhang, P. Machine learning-based automated fungal cell counting under a complicated background with ilastik and ImageJ. Eng. Life Sci. 2021, 21, 769–777. [Google Scholar] [CrossRef]
  62. Hagen, G.M.; Bendesky, J.; Machado, R.; Nguyen, T.A.; Kumar, T.; Ventura, J. Fluorescence microscopy datasets for training deep neural networks. GigaScience 2021, 10, giab032. [Google Scholar] [CrossRef]
  63. Zhang, Q.; Yun, K.K.; Wang, H.; Yoon, S.W.; Lu, F.; Won, D. Automatic cell counting from stimulated Raman imaging using deep learning. PLoS ONE 2021, 16, e0254586. [Google Scholar] [CrossRef]
  64. Caicedo, J.C.; Roth, J.; Goodman, A.; Becker, T.; Karhohs, K.W.; Broisin, M.; Molnar, C.; McQuin, C.; Singh, S.; Theis, F.J.; et al. Evaluation of deep learning strategies for nucleus segmentation in fluorescence images. Cytom. Part A 2019, 95, 952–965. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Sun, J.; Tárnok, A.; Su, X. Deep learning-based single-cell optical image studies. Cytom. Part A 2020, 97, 226–240. [Google Scholar] [CrossRef]
  66. Dvanesh, V.D.; Lakshmi, P.S.; Reddy, K.; Vasavi, A.S. Blood cell count using digital image processing. In Proceedings of the 2018 International Conference on Current Trends towards Converging Technologies (ICCTCT), Coimbatore, India, 1–3 March 2018; pp. 1–7. [Google Scholar]
  67. Grishagin, I.V. Automatic cell counting with ImageJ. Anal. Biochem. 2015, 473, 63–65. [Google Scholar] [CrossRef]
  68. Huang, X.; Jiang, Y.; Liu, X.; Xu, H.; Han, Z.; Rong, H.; Yang, H.; Yan, M.; Yu, H. Machine learning based single-frame super-resolution processing for lensless blood cell counting. Sensors 2016, 16, 1836. [Google Scholar] [CrossRef] [Green Version]
  69. Falk, T.; Mai, D.; Bensch, R.; Çiçek, Ö.; Abdulkadir, A.; Marrakchi, Y.; Böhm, A.; Deubner, J.; Jäckel, Z.; Seiwald, K.; et al. U-Net: Deep learning for cell counting, detection, and morphometry. Nat. Methods 2019, 16, 67–70. [Google Scholar] [CrossRef] [PubMed]
  70. Morelli, R.; Clissa, L.; Amici, R.; Cerri, M.; Hitrec, T.; Luppi, M.; Rinaldi, L.; Squarcio, F.; Zoccoli, A. Automating cell counting in fluorescent microscopy through deep learning with c-ResUnet. Sci. Rep. 2021, 11, 22920. [Google Scholar] [CrossRef] [PubMed]
  71. Dhieb, N.; Ghazzai, H.; Besbes, H.; Massoud, Y. An automated blood cells counting and classification framework using mask R-CNN deep learning model. In Proceedings of the 2019 31st International Conference on Microelectronics (ICM), Cairo, Egypt, 15–18 December 2019; pp. 300–303. [Google Scholar]
  72. Kayasandik, C.B.; Ru, W.; Labate, D. A multistep deep learning framework for the automated detection and segmentation of astrocytes in fluorescent images of brain tissue. Sci. Rep. 2020, 10, 5137. [Google Scholar] [CrossRef] [Green Version]
  73. Tsuzuki, Y.; Sanami, S.; Sugimoto, K.; Fujita, S. Pseudo-nuclear staining of cells by deep learning improves the accuracy of automated cell counting in a label-free cellular population. J. Biosci. Bioeng. 2021, 131, 213–218. [Google Scholar] [CrossRef]
  74. Zachariou, M.; Arandjelović, O.; Sabiiti, W.; Mtafya, B.; Sloan, D. Tuberculosis bacteria detection and counting in fluorescence microscopy images using a multi-stage deep learning pipeline. Information 2022, 13, 96. [Google Scholar] [CrossRef]
  75. Khan, A.; Gould, S.; Salzmann, M. Deep convolutional neural networks for human embryonic cell counting. In Proceedings of the Computer Vision–ECCV 2016 Workshops, Amsterdam, The Netherlands, 8–10 and 15–16 October 2016; Proceedings, Part I 14. Springer: Cham, Switzerland, 2016; pp. 339–348. [Google Scholar]
  76. He, S.; Minn, K.T.; Solnica-Krezel, L.; Anastasio, M.A.; Li, H. Deeply-supervised density regression for automatic cell counting in microscopy images. Med Image Anal. 2021, 68, 101892. [Google Scholar] [CrossRef]
  77. Lugagne, J.B.; Lin, H.; Dunlop, M.J. DeLTA: Automated cell segmentation, tracking, and lineage reconstruction using deep learning. PLoS Comput. Biol. 2020, 16, e1007673. [Google Scholar] [CrossRef] [Green Version]
  78. Jeong, E.J.; Choi, D.; Lee, D.W. U-Net Deep-Learning-Based 3D Cell Counter for the Quality Control of 3D Cell-Based Assays through Seed Cell Measurement. SLAS Technol. Transl. Life Sci. Innov. 2021, 26, 468–476. [Google Scholar] [CrossRef]
  79. Korfhage, N.; Mühling, M.; Ringshandl, S.; Becker, A.; Schmeck, B.; Freisleben, B. Detection and segmentation of morphologically complex eukaryotic cells in fluorescence microscopy images via feature pyramid fusion. PLoS Comput. Biol. 2020, 16, e1008179. [Google Scholar] [CrossRef] [PubMed]
  80. Morelli, R.; Clissa, L.; Dalla, M.; Luppi, M.; Rinaldi, L.; Zoccoli, A. Automatic cell counting in flourescent microscopy using deep learning. arXiv 2021, arXiv:2103.01141. [Google Scholar]
  81. Bie, M.; Liu, Y.; Li, G.; Hong, J.; Li, J. Real-time vehicle detection algorithm based on a lightweight You-Only-Look-Once (YOLOv5n-L) approach. Expert Syst. Appl. 2023, 213, 119108. [Google Scholar] [CrossRef]
  82. Kumar, A. SEAT-YOLO: A Squeeze-Excite and Spatial Attentive You Only Look Once Architecture for Shadow Detection. Optik 2023, 273, 170513. [Google Scholar] [CrossRef]
  83. Gallo, I.; Rehman, A.U.; Dehkordi, R.H.; Landro, N.; La Grassa, R.; Boschetti, M. Deep Object Detection of Crop Weeds: Performance of YOLOv7 on a Real Case Dataset from UAV Images. Remote Sens. 2023, 15, 539. [Google Scholar] [CrossRef]
  84. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
  85. Min, K.; Lee, G.H.; Lee, S.W. Attentional feature pyramid network for small object detection. Neural Netw. 2022, 155, 439–450. [Google Scholar] [CrossRef] [PubMed]
  86. Zhu, L.; Lee, F.; Cai, J.; Yu, H.; Chen, Q. An improved feature pyramid network for object detection. Neurocomputing 2022, 483, 127–139. [Google Scholar] [CrossRef]
  87. Yang, Q.; Zhang, C.; Wang, H.; He, Q.; Huo, L. SV-FPN: Small Object Feature Enhancement and Variance-Guided RoI Fusion for Feature Pyramid Networks. Electronics 2022, 11, 2028. [Google Scholar] [CrossRef]
Figure 1. Fluorescence microscopy images of A549 human lung adenocarcinoma cells after their incubation with Si–NH2 Flu nanoparticles and Si–NH2·ODN(3)Flu nanocomplexes. (a) Fluorescein-labeled samples were detected in the green channel (488 nm). (b) The blue channel (405 nm) revealed cell nuclei stained with DAPI. (c) All channels superimposed. (Scale bar: 25 μm for all.) Figure adapted from [24].
Figure 2. Classification of Methods of Automated Cell Counting.
Figure 3. Architecture of our YOLOv5 model with FPN.
Figure 4. F1 curve illustrating the model’s performance in terms of the F1 score at different confidence thresholds. The F1 curve represents the harmonic mean of precision and recall, providing a balanced measure of the model’s accuracy. Higher values on the curve indicate better overall performance. At a confidence threshold of 38.7%, the model achieves an F1 score of 79%, indicating a good balance between precision and recall.
Figure 5. Precision curve illustrating the model’s performance in terms of precision at different confidence thresholds. The precision curve represents the precision values obtained by the model at various confidence levels. Higher values on the curve indicate better precision performance. At a confidence threshold of 60.9%, the model achieves a precision of 100%, indicating perfect accuracy in its positive predictions.
Figure 6. Recall curve illustrating the model’s performance in terms of recall at different confidence thresholds. The recall curve measures the ability of the model to correctly identify positive instances (cells) at varying confidence levels. Higher values on the curve indicate better recall performance. The model achieves a recall of 94%, reflecting its high accuracy in identifying positive instances.
Figure 7. Precision–recall curve illustrating the performance of the proposed model in cell detection. The curve showcases the trade-off between precision and recall, with higher values indicating better performance. At the classification threshold of 0.5, the model achieves a precision of 83.7% and a corresponding recall value.
Figure 8. Confusion matrix evaluating the proposed model’s performance. Rows represent actual classes, and columns represent predicted classes. Values in the matrix indicate the number of instances per class. Higher diagonal values indicate accurate predictions, while off-diagonal elements represent misclassifications. Notably, the model achieved a 91% correct detection rate for cell detection.
Figure 9. Detected and counted cells in a sample image from the test dataset (cell count = 3).
Figure 10. Detected and counted cells in a sample image from the test dataset (cell count = 44).
Figure 11. Detected and counted cells in a sample image from the test dataset (cell count = 4).
Figure 12. Detected and counted cells in a sample image from the test dataset (cell count = 9).
Table 1. Pros and Cons of Automated Cell Detection and Counting Methods.

Image processing-based (applied to cell detection and counting in [54,66,67])
  Advantages: simple and computationally efficient; no need for large datasets or complex algorithms; easy to implement and interpret.
  Limitations: limited accuracy and robustness; struggles with complex cell morphologies and low SNR.

Machine learning-based (applied to cell detection and counting in [52,61,68])
  Advantages: can handle complex cell morphologies and low SNR; more accurate and robust than image processing-based methods; can be adapted to different imaging modalities.
  Limitations: requires labeled training data; sensitive to variability in data and imaging protocol; requires feature engineering, which can be time-consuming.

Deep learning-based (applied to cell detection and counting in [69,70,71])
  Advantages: state-of-the-art accuracy for cell detection and counting; highly robust to variability in data and imaging protocol; does not require feature engineering, saving time and effort.
  Limitations: highly dependent on the quality and quantity of training data; can be computationally expensive; may be less interpretable than traditional methods.
Table 2. Comparison of performance metrics (precision, recall, mAP) for YOLOv5s, YOLOv5n, and YOLOv5fpn on different input image sizes (416 × 416, 640 × 640, and 840 × 840).

Model        416 × 416                    640 × 640                    840 × 840
             Precision  Recall  mAP       Precision  Recall  mAP       Precision  Recall  mAP
YOLOv5s      0.741      0.701   0.732     0.787      0.744   0.764     0.756      0.723   0.741
YOLOv5n      0.738      0.661   0.681     0.779      0.695   0.730     0.759      0.734   0.749
YOLOv5fpn    0.796      0.741   0.799     0.758      0.740   0.748     0.748      0.708   0.732
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
