Article

A Computer Vision-Based Approach for Tick Identification Using Deep Learning Models

Department of Microbiology, University of Massachusetts, Amherst, MA 01003, USA
* Author to whom correspondence should be addressed.
Insects 2022, 13(2), 116; https://doi.org/10.3390/insects13020116
Submission received: 6 December 2021 / Revised: 18 January 2022 / Accepted: 19 January 2022 / Published: 22 January 2022

Simple Summary

Ticks are ectoparasites of humans, livestock, and wild animals and, as such, are both a nuisance and vectors of disease. Since the risk of tick-borne disease varies with the tick species, tick identification is vitally important in assessing threats. Standard taxonomic approaches are time-consuming and require skilled microscopy. Computer vision may provide a tenable solution to this problem. The emerging field of computer vision already has many practical applications, such as medical image analysis, facial recognition, and object detection, and it may also help with the identification of ticks. Training a computer vision model requires a substantial number of images. In the present study, tick images were obtained from a passive tick surveillance program that receives ticks from members of the public, partnering agencies, and veterinary clinics. We developed a computer vision method to identify common tick species, and our results indicate that this tool could provide an accurate, affordable, and real-time means of discriminating tick species, offering an alternative to present tick identification strategies.

Abstract

Ticks transmit a wide range of pathogens, including bacteria, viruses, and parasites, which can cause diseases such as Lyme disease, anaplasmosis, and Rocky Mountain spotted fever. Landscape and climate changes are driving the geographic range expansion of important tick species. The morphological identification of ticks is critical for the assessment of disease risk; however, this process is time-consuming, costly, and requires qualified taxonomic specialists. To address this issue, we constructed a tick identification tool that can differentiate the most commonly encountered human-biting ticks, Amblyomma americanum, Dermacentor variabilis, and Ixodes scapularis, by implementing artificial intelligence methods with deep learning algorithms. Many convolutional neural network (CNN) models (such as VGG, ResNet, or Inception) have been used for image recognition, but their application to tick identification remains limited. Here, we describe modified CNN-based models that were trained on a large-scale, molecularly verified dataset to identify tick species. The best CNN model achieved 99.5% accuracy on the test set. These results demonstrate that a computer vision system is a potential alternative tool to help prescreen ticks for identification and enable earlier assessment of disease risk and, as such, could be a valuable resource for health professionals.


1. Introduction

Ticks are obligate blood-sucking ectoparasites and are considered second only to mosquitoes as vectors of human disease. Ticks are notorious for their ability to transmit a wide variety of pathogens to humans, including viruses, bacteria, and protozoa. Tick-borne diseases (TBDs) have rapidly become a serious and growing threat to public health in the USA. A total of 649,628 cases of six TBDs were reported to the CDC during 2004–2019 [1]. Moreover, these diseases are often difficult to diagnose or go unreported, so these figures are likely an underestimate [2]. Diagnostic approaches rely, in part, on patient symptoms after a confirmed, or suspected, tick bite. Since different tick species are associated with different TBDs, species identification is an essential component of diagnosis [3]. The TickSpotters program at the University of Rhode Island, for example, provides an online, photograph-based tick identification service [4]. Tick species are determined either by morphological identification using a taxonomic key [5,6] or by molecular marker analysis using tools such as real-time quantitative polymerase chain reaction (qPCR) [7]. Both approaches, however, require trained experts, specialized equipment, and time to mail or process the tick samples, resulting in a lag to timely treatment. In many instances, the tick identity remains unknown, which may lead to unnecessary antibiotic administration.
With the development of automated image analysis technology, artificial intelligence has been shown to be a promising solution for various challenges that require specialized and labor-intensive image analysis, including medical imaging (X-ray [8], CT scan [9], fMRI [10]), cell image classification [11,12], the monitoring of insects [13], and insect classification [14,15,16]. There are many machine learning algorithms available in the computer vision field. Among them, deep learning algorithms tend to show substantially higher accuracy when the sample size is relatively large [17] and achieve greater accuracy in image classification than traditional computer vision techniques [18]. Deep neural networks extract important features from the data automatically, without human supervision, and in some cases have been shown to classify images proficiently, surpassing human-level performance [19,20]. Unsurprisingly, a recent review of 69 image-based insect identification studies published between 2010 and 2020 showed a trend towards applying deep learning techniques [21]. Thus, computer vision and deep learning algorithms could be a powerful tool to replace current tick identification methods. To date, two studies have applied computer vision to tick species identification [22,23]. The first focused on Ixodes scapularis (the blacklegged tick) [22], while the second compared slightly fewer than 2000 images of four tick species captured in the state of Indiana, USA [23]. To improve on this, we sought to develop computer vision algorithms to discern the three major human-biting ticks of North America: Amblyomma americanum (lone star tick), Dermacentor variabilis (American dog tick), and Ixodes scapularis.
The categorization of tick species poses two main challenges: different tick species can share morphological similarities (high inter-species similarity), and tick samples from the same species can differ markedly (high intra-species variability). Furthermore, ticks undergo a life cycle accompanied by dramatic morphological changes. For instance, female ticks can increase their weight up to a hundred-fold after blood-feeding [24], and larval ticks have only six legs, whereas nymphs and adults possess eight. The scutum (a hard chitinous covering) on nymphal and female ticks covers roughly the anterior half of the dorsal surface, while the scutum on male ticks occupies the entire dorsal surface. Moreover, the pattern on the scutum changes drastically across sexes and developmental stages. The images in Figure 1 show that ticks within the same species differ in developmental stage (nymph, adult), sex (male, female), and feeding status (flat, partially fed, engorged), which causes high variance in visual appearance within a species as well as high similarity across different tick species. To train a deep learning model, we used a novel large-scale tick dataset consisting of 12,000 high-resolution micrographs collected from a passive surveillance system. In this dataset, all tick species were molecularly confirmed by a species-specific TaqMan PCR assay, which prevents the human error that may occur with visual identification methods [22,23]. With these ground truth labels, the large tick image dataset was applied to several well-known CNN models, such as VGG [25], ResNet [26], Inception [27], MobileNet [28], and DenseNet [29]. Our results show that a 99.5% classification accuracy can be achieved by applying transfer learning with pre-trained deep convolutional neural networks. We expect that our results will motivate further study on the automatic image-based classification of tick species for the timely detection of potential tick-borne diseases.

2. Materials and Methods

2.1. Data Sources

Tick samples analyzed in the present study were submitted to a passive surveillance tick identification and pathogen testing program (TickReport) from January 2018 through December 2020 at the Laboratory of Medical Zoology, University of Massachusetts, Amherst. A total of 43,230 tick specimens from across the United States were received, and 91.4% of them belonged to the three major tick species, as shown in Figure 2a.
All images were captured using a Leica S9i stereo microscope; two high-resolution micrographs (3648 × 2736 pixels), of the dorsal and ventral surfaces, were taken for each tick against a white background. Each image contains a single tick. To prevent human error, the tick species was first determined by morphological characterization and then confirmed by a species-specific TaqMan PCR assay [30]. Owing to limits on computing capacity, 12,000 tick images were used: the images were divided into three groups (A. americanum, D. variabilis, and I. scapularis), with 4000 images randomly selected per group, as shown in Figure 2b. This tick image dataset covers different developmental stages (larva, nymph, adult), sexes (male, female, unknown), feeding statuses (flat, partially fed, engorged, replete), and hosts (human, dog, cat, others). In the experiment, ninety percent (10,800/12,000) of the dataset was used for training and validation, and the remaining ten percent (1200/12,000) was used for testing. We split the dataset into subsets using stratified random sampling, ensuring that the frequency distribution of species was the same in all subsets so that the dataset remained balanced. Image augmentation was implemented to increase the diversity and number of training images [31]; images were augmented by random rotation of up to 20 degrees and random zooming of up to a 0.2-scale change. To minimize the likelihood of overfitting and to minimize selection bias, the images in the test dataset were never seen by the neural network model during the training and validation phases.
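The split and augmentation settings described above can be reproduced with standard scikit-learn and Keras utilities. The following is only a minimal sketch, not the authors' actual pipeline; the file paths and label lists are hypothetical placeholders.

```python
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical stand-in for the 12,000-image dataset: one file path and one label per image.
species = ["A. americanum", "D. variabilis", "I. scapularis"]
labels = [s for s in species for _ in range(4000)]
image_paths = [f"ticks/{s}/{i:04d}_dorsal.jpg" for s in species for i in range(4000)]

# Stratified 90/10 split so that species frequencies are identical in both subsets.
train_paths, test_paths, train_labels, test_labels = train_test_split(
    image_paths, labels, test_size=0.10, stratify=labels, random_state=42)

# Training-time augmentation described above: random rotation up to 20 degrees
# and random zoom up to a 0.2-scale change.
train_datagen = ImageDataGenerator(rotation_range=20, zoom_range=0.2)
```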

2.2. DCNN Model Architectures

Deep learning models require a sufficiently large dataset to achieve reliable results, and, in some cases, it may be challenging to collect enough data to train an effective model. With a limited number of images, large deep convolutional neural network (DCNN) models may overfit during training, resulting in a model that does not generalize well beyond the training dataset. A popular and practical approach to address this issue is transfer learning [32], which reuses knowledge previously learned on related tasks and is widely utilized in the computer vision field [33]. Considering the available computing resources and time, we applied five well-known pre-trained models along with their weights (VGG16, ResNet50, InceptionV3, DenseNet121, and MobileNetV2) as our transfer learning architectures. All five models had previously been trained on the ImageNet dataset (1000 categories, 1.2 million images) [34].
The VGG16 architecture was developed by the Visual Geometry Group at the University of Oxford [25] and won the object localization task of the 2014 ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [35]. This model contains a stack of 13 convolutional layers with 3 fully connected layers and represents a classical deep learning model with roughly 138 million parameters. In contrast, the ResNet50 architecture is a residual network with skip connections [26] and represents a large deep learning model; although ResNet50 is 50 layers deep, it has only 25.6 million parameters. The Inception-V3 architecture is the third version of GoogLeNet [36]; it is 48 layers deep with 23 million parameters, and its memory requirement and computational cost are much lower than those of VGG16. DenseNet121 is a densely connected convolutional network that achieved state-of-the-art classification results on the ImageNet validation dataset [29]; its parameter count is reduced to 8 million, and its design also improves computational efficiency. Finally, the MobileNetV2 architecture, also developed by Google, targets mobile computer vision applications and has relatively light computational requirements [37]. It builds on the prior version, MobileNetV1 [28], using depthwise separable convolutions as its basic unit.
All the pre-trained models we used have, by default, 1000 classes at the output layer. We replaced the fully connected layer of each original model with our own fully connected layer that outputs a 3-unit tensor, using the softmax activation function, to classify each image into its corresponding class (A. americanum, D. variabilis, or I. scapularis). We then trained our models with the Adam optimizer [38], using a batch size of 32 and an initial learning rate of 1 × 10−5, to minimize the categorical cross-entropy loss in all the training processes. TensorFlow [39] provides a Python application programming interface and tutorials for retraining models with transfer learning. At this stage, our work does not aim to classify other tick species or non-tick samples.
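As an illustration of the head replacement and training configuration just described (a 3-unit softmax output, the Adam optimizer with a 1 × 10−5 learning rate, and categorical cross-entropy loss), a Keras sketch might look as follows. The global average pooling layer is an assumption, since the exact layer arrangement above the backbone is not specified, and InceptionV3 stands in for any of the five backbones.

```python
import tensorflow as tf

# InceptionV3 stands in for any of the five ImageNet-pre-trained backbones.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),        # assumed pooling before the new head
    tf.keras.layers.Dense(3, activation="softmax"),  # A. americanum, D. variabilis, I. scapularis
])

# Training configuration reported in the text: Adam, learning rate 1e-5,
# categorical cross-entropy loss (batch size 32 is passed at fit time).
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_generator, validation_data=val_generator, epochs=20)  # generators are hypothetical
```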

2.3. The Hardware and Software Environment

All the experiments were conducted in Keras and TensorFlow deep learning frameworks. A single workstation PC was employed in the entire process of training, validating, and testing the deep learning models described herein. All the captured tick images were resized to 224 × 224 pixels to reduce the memory usage in the procedures. To avoid over-fitting, a simple data augmentation was used that has been proven to enhance the accuracy of classification problems [31]. The hardware applied included a LENOVO ThinkStation P720 Workstation with 80 GB RAM and an NVIDIA GeForce GTX 1080. The software used to train our model was based on Python 3.8.5, Keras 2.4.0, and TensorFlow 2.4.1. More information about the configuration of hardware and software environments can be found in Table 1.
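A quick way to confirm the framework versions and GPU visibility against Table 1 is shown below; this is a generic sketch, and the printed values will differ on other machines.

```python
import tensorflow as tf

# Compare against the environment in Table 1:
# Python 3.8.5, Keras 2.4.0, TensorFlow 2.4.1, NVIDIA GeForce GTX 1080.
print("TensorFlow:", tf.__version__)
print("Keras:", tf.keras.__version__)
print("GPUs visible:", tf.config.list_physical_devices("GPU"))
```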

2.4. Training Process

To verify the experimental results, a 10-fold cross-validation technique was adopted to evaluate each model's performance. The tick dataset was divided into two parts: a training dataset and a test dataset. Of all the data, 90% was used for training and validation, and the remaining 10% was reserved for testing; the test dataset was used only to evaluate the final models. The training dataset was divided into ten subsets containing an equal number of ticks of each species. The training process was carried out 10 times, each time holding out one subset as validation data and using the remaining subsets to train the model. Figure 3 shows how the dataset was partitioned.
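The fold scheme can be sketched with scikit-learn's StratifiedKFold, which keeps the species proportions equal in each fold; the authors do not state which utility they used, so this is only an illustration, reusing the hypothetical training split from the earlier sketch.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# train_paths / train_labels are the 90% training portion from the earlier split sketch.
X = np.array(train_paths)
y = np.array(train_labels)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(X, y), start=1):
    # One subset is held out for validation; the remaining nine are used for training.
    print(f"Fold {fold}: {len(train_idx)} training images, {len(val_idx)} validation images")
```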

3. Results

3.1. Performance Metrics

After training, the accuracies from all folds were averaged to obtain a mean accuracy and loss. Accuracy was calculated as the ratio between the number of correct classifications and the total number of classifications. The test accuracy at the end of 20 epochs for each model shows how well the models performed on data they had not previously seen (Table 2). The Inception-V3 model achieved the best results, with an accuracy of 99.5% and a loss of 0.01. The ResNet50, VGG16, DenseNet121, and MobileNetV2 models achieved accuracies of 99.42%, 99.37%, 99.2%, and 98.73%, respectively. Overall, the results indicate that all five architectures applied in this experiment performed exceptionally well.
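For reference, the accuracy metric as defined above can be computed directly from predicted and true labels; the arrays below are toy values for illustration, not the reported results.

```python
import numpy as np

# Accuracy = correct classifications / total classifications (toy example).
y_true = np.array([0, 1, 2, 0, 1, 2])   # true species indices
y_pred = np.array([0, 1, 2, 0, 2, 2])   # predicted species indices
accuracy = np.mean(y_true == y_pred)
print(f"accuracy = {accuracy:.4f}")      # 0.8333 for this toy example
```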

3.2. Classification Results

The performance of our best model, InceptionV3, was also evaluated by a confusion matrix (N = 400 images per group) as illustrated in Figure 4. In this experiment, three Amblyomma ticks were misidentified as Dermacentor, one Dermacentor tick was misidentified as Amblyomma, and two Ixodes ticks were misidentified as Amblyomma. Overall, out of 1200 tick images in the test set, only 6 samples were incorrectly identified.
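A confusion matrix like the one in Figure 4 can be computed from the test-set predictions with scikit-learn; the prediction array below is a stand-in for the model output, constructed only to mirror the misclassification pattern described above.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

class_names = ["A. americanum", "D. variabilis", "I. scapularis"]
y_true = np.repeat([0, 1, 2], 400)   # 400 test images per species
y_pred = y_true.copy()               # stand-in for model predictions over the test set
y_pred[:3] = 1                       # e.g., three Amblyomma images predicted as Dermacentor

cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])
print(cm)  # rows = true species, columns = predicted species
```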
The loss reduction and accuracy during the training process are shown in Figure 5. The loss on the training and validation sets decreased substantially in the first 5 epochs and then stabilized as the number of epochs increased, indicating that the model had converged. The accuracy of the training and validation sets increased greatly in the first few epochs and stabilized at about 99%, indicating that model training was complete.

4. Discussion

DCNN models are a promising solution for computer vision applications, but they require considerable data for training. The present study adopted transfer learning in addition to data augmentation methods, such as zooming and rotation, to increase the effective data size and improve the models' generalization ability. The weight parameters of the pre-trained DCNN models were transferred to the new models, which were then further modified to suit the tick classification task. This technique has been heavily used in medical image analysis [40], remote sensing [41], and many other computer vision-related fields. To obtain accurate ground truth labels for the training process, our specimens were received through a passive surveillance tick identification and pathogen testing program, and the tick species identification was confirmed by a species-specific TaqMan real-time PCR. The molecular assays ruled out the possibility of misidentification caused by human error or by damaged body parts, which are critical for morphological identification.
To our knowledge, only two prior studies have applied computer vision to tick classification. One used a DCNN model with attention transfer and label smoothing regularization techniques to differentiate Ixodes scapularis from other ticks and reported an accuracy of 92% [22]. The other used ResNet50 and custom-built models to predict four tick species and reported accuracies of 75% and 80%, respectively [23]. Our study focuses on identifying the three most commonly encountered tick species, including Ixodes scapularis, and the best accuracy achieved was 99.5%. The ResNet50 model achieved better accuracy in this study than in the previous report (99.42% vs. 75%). This improvement may be due to providing close to six times more data to the model, with photos captured in a standardized setting. Compared to the classical deep learning model VGG16, ResNet50 has less than one-fifth the number of parameters but reaches slightly higher accuracy. The automated tick identification tool developed in this work using the Inception-V3 model obtained the best results, even though its computational cost and memory requirements are much lower than those of VGG16 and ResNet50. The DenseNet121 architecture has far fewer parameters, yet its accuracy and loss are very similar to those of VGG16. Lastly, MobileNetV2 has relatively light computational requirements and still achieves considerable accuracy.
Our work has so far focused on automated tick identification in a laboratory setting and has achieved noteworthy results. Future developments may aim towards lighter models that can be deployed in a mobile application for general use. By combining our automated tick identification tool with long-term passive surveillance data, users could acquire real-time tick species information, as well as the risk of exposure to tick-borne diseases, after uploading tick photos and the specimen location. The number of image labels could also be increased to identify more tick characteristics (e.g., developmental stage, sex, and feeding status), which are important factors for risk assessment. A photograph-based tick surveillance program conducted by entomology experts has been shown to be a valid method for risk assessment and monitoring of commonly encountered ticks [42]. Our computer vision models could also be integrated into this type of surveillance, without the need for human identification, to provide accurate and timely information for tick control and prevention and to further combat the rising number of tick-borne disease cases. As tick-borne diseases continue to have an enormous impact, we hope our approach will accelerate the tick identification process and facilitate the early detection of potential tick-borne diseases.

5. Conclusions

Deep learning has demonstrated exceptional performance in many computer vision tasks [43] and has also been applied to image-based animal species identification in recent studies [44]. To improve the labor-intensive and time-consuming tick identification process, five pre-trained deep learning models were used to predict tick species and their results were compared. Data augmentation and transfer learning methods were applied during the training phase, and the images in the test dataset were never seen by the neural networks during training. All the pre-trained DCNN models with transfer learning achieved recognition rates above 98%, indicating that DCNN-based classification approaches are effective for tick species identification.

Author Contributions

Conceptualization, C.-Y.L.; methodology, C.-Y.L.; software, C.-Y.L.; validation, C.-Y.L.; formal analysis, C.-Y.L.; investigation, C.-Y.L.; resources, S.M.R. and G.X.; data curation, C.-Y.L. and P.P.; writing—original draft preparation, C.-Y.L.; writing—review and editing, S.M.R., G.X. and P.P.; visualization, C.-Y.L.; supervision, G.X.; project administration, S.M.R. and G.X.; funding acquisition, S.M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Tickborne Disease Surveillance Data Summary | Ticks | CDC. Available online: https://www.cdc.gov/ticks/data-summary/index.html (accessed on 20 July 2021).
2. Rosenberg, R.; Lindsey, N.P.; Fischer, M.; Gregory, C.J.; Hinckley, A.F.; Mead, P.S.; Paz-Bailey, G.; Waterman, S.H.; Drexler, N.A.; Kersh, G.J.; et al. Vital Signs: Trends in Reported Vectorborne Disease Cases—United States and Territories, 2004–2016. Morb. Mortal. Wkly. Rep. 2018, 67, 496–501.
3. Dantas-Torres, F.; Chomel, B.B.; Otranto, D. Ticks and tick-borne diseases: A One Health perspective. Trends Parasitol. 2012, 28, 437–446.
4. Kopsco, H.L.; Duhaime, R.J.; Mather, T.N. An analysis of companion animal tick encounters as revealed by photograph-based crowdsourced data. Vet. Med. Sci. 2021, 7, 2198–2208.
5. Barker, S.C.; Walker, A.R. Ticks of Australia. The Species that Infest Domestic Animals and Humans. Zootaxa 2014, 3816, 1–144.
6. Walker, A.R.; Bouattour, A.; Camicas, J.-L.; Estrada-Peña, A.; Horak, I.G.; Latif, A.A.; Pegram, R.G.; Preston, P.M. Ticks of Domestic Animals in Africa: A Guide to Identification of Species; Bioscience Reports: Edinburgh, Scotland, UK, 2003; ISBN 095451730X.
7. Xu, G.; Pearson, P.; Dykstra, E.; Andrews, E.S.; Rich, S.M. Human-Biting Ixodes Ticks and Pathogen Prevalence from California, Oregon, and Washington. Vector-Borne Zoonotic Dis. 2019, 19, 106–114.
8. Hemdan, E.E.D.; Shouman, M.A.; Karar, M.E. COVIDX-Net: A Framework of Deep Learning Classifiers to Diagnose COVID-19 in X-ray Images. arXiv 2020, arXiv:2003.11055.
9. Chilamkurthy, S.; Ghosh, R.; Tanamala, S.; Biviji, M.; Campeau, N.G.; Venugopal, V.K.; Mahajan, V.; Rao, P.; Warier, P. Deep learning algorithms for detection of critical findings in head CT scans: A retrospective study. Lancet 2018, 392, 2388–2396.
10. Wen, D.; Wei, Z.; Zhou, Y.; Li, G.; Zhang, X.; Han, W. Deep learning methods to process fMRI data and their application in the diagnosis of cognitive impairment: A brief overview and our opinion. Front. Neuroinform. 2018, 12, 23.
11. Shifat-E-Rabbi, M.; Yin, X.; Fitzgerald, C.E.; Rohde, G.K. Cell Image Classification: A Comparative Overview. Cytom. Part A 2020, 97, 347–362.
12. Hamilton, N.A.; Pantelic, R.S.; Hanson, K.; Teasdale, R.D. Fast automated cell phenotype image classification. BMC Bioinform. 2007, 8, 110.
13. Høye, T.T.; Ärje, J.; Bjerge, K.; Hansen, O.L.P.; Iosifidis, A.; Leese, F.; Mann, H.M.R.; Meissner, K.; Melvad, C.; Raitoharju, J. Deep learning and computer vision will transform entomology. Proc. Natl. Acad. Sci. USA 2021, 118, 1–10.
14. Xia, D.; Chen, P.; Wang, B.; Zhang, J.; Xie, C. Insect detection and classification based on an improved convolutional neural network. Sensors 2018, 18, 4169.
15. Spiesman, B.J.; Gratton, C.; Hatfield, R.G.; Hsu, W.H.; Jepsen, S.; McCornack, B.; Patel, K.; Wang, G. Assessing the potential for deep learning and computer vision to identify bumble bee species from images. Sci. Rep. 2021, 11, 7580.
16. Okayasu, K.; Yoshida, K.; Fuchida, M.; Nakamura, A. Vision-Based Classification of Mosquito Species: Comparison of Conventional and Deep Learning Methods. Appl. Sci. 2019, 9, 3935.
17. Liu, T.; Abd-Elrahman, A.; Morton, J.; Wilhelm, V.L. Comparing fully convolutional networks, random forest, support vector machine, and patch-based deep convolutional neural networks for object-based wetland mapping using images from small unmanned aircraft system. GISci. Remote Sens. 2018, 55, 243–264.
18. O’Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Hernandez, G.V.; Krpalkova, L.; Riordan, D.; Walsh, J. Deep Learning vs. Traditional Computer Vision. Adv. Intell. Syst. Comput. 2020, 943, 128–144.
19. Ciregan, D.; Meier, U.; Schmidhuber, J. Multi-column deep neural networks for image classification. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3642–3649.
20. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1026–1034.
21. Amarathunga, D.C.K.; Grundy, J.; Parry, H.; Dorin, A. Methods of insect image capture and classification: A systematic literature review. Smart Agric. Technol. 2021, 1, 100023.
22. Akbarian, S.; Cawston, T.; Moreno, L.; Patel, S.; Allen, V.; Dolatabadi, E. A computer vision approach to combat Lyme disease. arXiv 2020, arXiv:2009.11931.
23. Omodior, O.; Saeedpour-Parizi, M.R.; Rahman, M.K.; Azad, A.; Clay, K. Using convolutional neural networks for tick image recognition—A preliminary exploration. Exp. Appl. Acarol. 2021, 84, 607–622.
24. Anderson, J.F.; Magnarelli, L.A. Biology of Ticks. Infect. Dis. Clin. N. Am. 2008, 22, 195–215.
25. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
26. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
27. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
28. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
29. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269.
30. Xu, G.; Mather, T.N.; Hollingsworth, C.S.; Rich, S.M. Passive Surveillance of Ixodes scapularis (Say), Their Biting Activity, and Associated Pathogens in Massachusetts. Vector-Borne Zoonotic Dis. 2016, 16, 520–527.
31. Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60.
32. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359.
33. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? Adv. Neural Inf. Process. Syst. 2014, 4, 3320–3328.
34. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Li, F.F. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
35. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
36. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
37. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
38. Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
39. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv 2016, arXiv:1603.04467.
40. Raghu, M.; Zhang, C.; Kleinberg, J.; Bengio, S. Transfusion: Understanding transfer learning for medical imaging. arXiv 2019, arXiv:1902.07208.
41. Chen, Z.; Zhang, T.; Ouyang, C. End-to-end airplane detection using transfer learning in remote sensing images. Remote Sens. 2018, 10, 139.
42. Kopsco, H.L.; Xu, G.; Luo, C.Y.; Rich, S.M.; Mather, T.N. Crowdsourced photographs as an effective method for large-scale passive tick surveillance. J. Med. Entomol. 2020, 57, 1955–1963.
43. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90.
44. Wäldchen, J.; Mäder, P. Machine learning for image based species identification. Methods Ecol. Evol. 2018, 9, 2216–2225.
Figure 1. Tick species from the dataset. Inter-species similarity (rows) shows similar traits between different species (A. americanum, D. variabilis, and I. scapularis) and intra-species variability (columns) shows differences such as size, color, and developmental stages within the same species. Row I shows adult female ticks; row II shows ventral view of nymph ticks; row III shows male ticks at adult stage; row IV shows dorsal view of nymph ticks; row V shows ticks with missing body parts; and row VI shows engorged adult ticks. Scale bar corresponds to 1 mm.
Figure 2. (a) Proportion of each tick species in the overall submissions; (b) proportion of each tick species used in the training process.
Figure 3. Schematic overview of the 10-fold cross-validation and the excluded test dataset.
Figure 4. Example confusion matrix from the Inception-V3 architecture.
Figure 5. (a) Loss reduction during training for the final cross-validation fold; (b) accuracy during training for the final cross-validation fold. Train = training, Val = validation, acc = accuracy.
Table 1. Hardware and software environment.

Configuration Item | Value
Type and specification | LENOVO ThinkStation P720 Workstation
CPU | Intel Xeon Silver 4110 2.10 GHz
GPU | NVIDIA GeForce GTX 1080
Memory | 80 GB
Hard disk | 1 TB
Operating system | Microsoft Windows 10 Pro
Image acquisition device | Leica S9i stereo microscope
Programming language | Python 3.8.5
Deep learning framework | TensorFlow 2.4.1
Table 2. Comparison of different deep learning architectures.

Architecture | Number of Parameters | Accuracy (SD) | Loss
VGG16 | 138 M | 99.37% (±0.29) | 0.02
ResNet50 | 25.6 M | 99.42% (±0.17) | 0.03
InceptionV3 | 23.8 M | 99.5% (±0.15) | 0.01
DenseNet121 | 8 M | 99.2% (±0.29) | 0.03
MobileNetV2 | 3.5 M | 98.73% (±0.37) | 0.04