A Comparative Study of Hybrid Machine-Learning vs. Deep-Learning Approaches for Varroa Mite Detection and Counting
Abstract
1. Introduction
2. Materials and Methods
2.1. Data Acquisition and Calibration
- Reflectance calculation: For each pixel, raw intensity values were converted to reflectance as $R = (I_{\text{raw}} - I_{\text{dark}})/(I_{\text{white}} - I_{\text{dark}})$, i.e., by subtracting the dark reference (to remove sensor dark current) and dividing by the difference between the white and dark references.
- Demosaicing: Each channel was demosaiced by linearly interpolating missing values along rows and columns, producing a complete reflectance data cube.
- Spectral correction: A correction matrix, built from IMEC-supplied virtual-band coefficients, was used to adjust for spectral overlap between adjacent channels. Each pixel’s raw spectrum was multiplied by this matrix to obtain spectrally refined data (see the sketch after this list).
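Below is a minimal NumPy sketch of this calibration chain. The array shapes, variable names (`raw`, `dark`, `white`, `corr_matrix`), and the simple per-band gathering are illustrative assumptions standing in for the per-band linear interpolation described above; this is not the authors' implementation.

```python
import numpy as np

def calibrate_cube(raw, dark, white, corr_matrix, eps=1e-6):
    """raw, dark, white: (H, W) mosaic frames; corr_matrix: (25, 25) spectral-correction matrix."""
    # 1) Reflectance: remove dark current, normalize by the white reference.
    reflectance = (raw.astype(np.float32) - dark) / (white - dark + eps)

    # 2) Demosaicing (simplified): gather each of the 25 bands from the 5x5
    #    spectral mosaic; the study instead linearly interpolates the missing
    #    values so the cube keeps the full sensor resolution.
    h, w = reflectance.shape
    cube = np.zeros((h // 5, w // 5, 25), dtype=np.float32)
    for band in range(25):
        r, c = divmod(band, 5)
        cube[..., band] = reflectance[r::5, c::5][: h // 5, : w // 5]

    # 3) Spectral correction: multiply every pixel spectrum by the matrix
    #    built from the vendor-supplied virtual-band coefficients.
    return cube @ corr_matrix.T

# Example with synthetic frames (values are arbitrary).
raw = np.random.rand(1088, 2048).astype(np.float32)
dark = np.zeros_like(raw)
white = np.ones_like(raw)
corrected = calibrate_cube(raw, dark, white, np.eye(25, dtype=np.float32))
print(corrected.shape)  # (217, 409, 25)
```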
2.2. Annotation and Labeling
2.3. Data Splits and Sampling
2.4. Software and Hardware Environment
3. Conducted Experiments
3.1. Deep Learning Pipeline
- Preprocessing: Data augmentation (random horizontal flips, rotations) was applied using Albumentations.
- Feature Extraction: The core model is Faster R-CNN with either a ResNet-50 or ResNet-101 backbone, modified to accept 25-channel input and equipped with a Feature Pyramid Network (FPN). The first convolution layer was reinitialized for compatibility with the hyperspectral data (see the sketch after this list).
- Region Proposal Network (RPN): The RPN generates anchor boxes and proposes candidate regions likely to contain Varroa.
- RoI Pooling and Head Network: Features corresponding to proposed regions are pooled and passed through classification and regression heads to predict class labels and bounding box coordinates.
- Post-processing: Non-maximum suppression (NMS) with IoU threshold 0.5 was applied to filter overlapping detections and generate the final set of bounding boxes and Varroa counts.
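A minimal PyTorch/torchvision sketch of this detector configuration (25-channel input, FPN backbone, NMS at IoU 0.5) is shown below. It is an assumption-laden illustration rather than the authors' code: the anchor settings, normalization statistics, and initialization used in the study are not reproduced here, and the ResNet-101 variant would be built analogously from a `resnet101` FPN backbone.

```python
import torch
import torch.nn as nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn

NUM_BANDS = 25  # hyperspectral channels

# Detector: Faster R-CNN with ResNet-50 + FPN, two classes (background, Varroa),
# and the NMS IoU threshold of 0.5 used in post-processing.
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2, box_nms_thresh=0.5)

# Re-initialize the first convolution so the backbone accepts 25-band cubes.
model.backbone.body.conv1 = nn.Conv2d(
    NUM_BANDS, 64, kernel_size=7, stride=2, padding=3, bias=False
)

# The built-in transform normalizes 3-channel images; give it 25 per-band
# statistics (the values below are placeholders, not the study's statistics).
model.transform.image_mean = [0.5] * NUM_BANDS
model.transform.image_std = [0.25] * NUM_BANDS

# Smoke test on a small dummy cube shaped (C, H, W).
model.eval()
with torch.no_grad():
    detections = model([torch.rand(NUM_BANDS, 272, 512)])
print(detections[0]["boxes"].shape, detections[0]["scores"].shape)
```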
3.2. Machine Learning Pipeline
- Input and Annotation: Each hyperspectral cube is manually annotated with a binary mask distinguishing Varroa from background pixels.
- Feature Extraction: From each annotated pixel in the six training images, the 25-band spectral signature was extracted, resulting in a high-dimensional data matrix.
- Dimensionality Reduction: Principal Component Analysis (PCA) was performed on the pixel-wise spectra, retaining the top five principal components.
- Pixel-wise Segmentation: The reduced feature matrix was classified using a 1-nearest-neighbor (1-NN) classifier, assigning each pixel to Varroa or non-Varroa class.
- Blob Generation and Shape Feature Extraction: Segmented images were used to identify blobs (connected Varroa regions), from which eleven geometric shape descriptors were computed for each blob.
- Feature Selection and Shape Classification: Six of the shape features were selected, and a linear SVM (C = 1.0) was then trained to distinguish true Varroa blobs from false positives, yielding the final Varroa count per image (see the sketch after this list).
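The sketch below condenses this hybrid pipeline with scikit-learn and scikit-image (PCA → 1-NN pixel segmentation → blob shape features → linear SVM). The variable names and the particular shape descriptors are assumptions chosen for illustration; the study computes eleven descriptors and selects six of them.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from skimage.measure import label, regionprops

def segment_pixels(train_cube, train_mask, test_cube, n_components=5):
    """Pixel-wise Varroa/background segmentation in PCA space with a 1-NN classifier."""
    h, w, b = train_cube.shape
    X_train = train_cube.reshape(-1, b)
    y_train = train_mask.reshape(-1)

    pca = PCA(n_components=n_components).fit(X_train)
    knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)

    X_test = test_cube.reshape(-1, test_cube.shape[-1])
    return knn.predict(pca.transform(X_test)).reshape(test_cube.shape[:2])

def blob_shape_features(binary_map):
    """Geometric descriptors per connected blob (a subset of those used in the paper)."""
    feats = []
    for region in regionprops(label(binary_map)):
        feats.append([
            region.area, region.perimeter, region.eccentricity,
            region.solidity, region.extent,
            region.major_axis_length / max(region.minor_axis_length, 1e-6),
        ])
    return np.asarray(feats)

# Final blob classifier: linear SVM with C = 1.0, trained on labelled blobs.
svm = SVC(kernel="linear", C=1.0)
# svm.fit(train_blob_features, train_blob_labels)
# varroa_count = int(svm.predict(blob_shape_features(segmentation_map)).sum())
```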
4. Results
4.1. Quantitative Evaluation
4.2. Visual Comparison of Detection Results
4.3. Hyperparameter Sensitivity Analysis
5. Discussion
- ResNet-50 + FPN registered false negatives in Image1 (2TP/1FN), Image2 (4TP/2FN), Image4 (3TP/3FN), and Image5 (5TP/2FN), and produced one false positive in Image3 (7 detections for 6 GT mites, i.e., 6TP/1FP).
- ResNet-101 + FPN missed three mites in Image4 (3TP/3FN) and one in Image5 (6TP/1FN), and detected only 7 of 9 mites in Image6 (7TP/2FN).
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
ML | Machine Learning |
DL | Deep Learning |
kNN | k-Nearest Neighbors |
NIR | Near Infrared |
SVM | Support Vector Machine |
HSI | Hyperspectral Imaging |
References
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Halevy, A.; Norvig, P.; Pereira, F. The Unreasonable Effectiveness of Data. IEEE Intell. Syst. 2009, 24, 8–12. [Google Scholar] [CrossRef]
- Huh, M.; Agrawal, P.; Efros, A.A. What makes ImageNet good for transfer learning? arXiv 2016, arXiv:1608.08614. [Google Scholar] [CrossRef]
- Oquab, M.; Bottou, L.; Laptev, I.; Sivic, J. Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
- Sun, C.; Shrivastava, A.; Singh, S.; Gupta, A. Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 843–852. [Google Scholar]
- Sanchez-Bayo, F.; Goka, K. Pesticide residues and bees—A risk assessment. PLoS ONE 2014, 9, e94482. [Google Scholar] [CrossRef]
- Goulson, D.; Nicholls, E.; Botías, C.; Rotheray, E.L. Bee declines driven by combined stress from parasites, pesticides, and lack of flowers. Science 2015, 347, 1255957. [Google Scholar] [CrossRef] [PubMed]
- Potts, S.G.; Biesmeijer, J.C.; Kremen, C.; Neumann, P.; Schweiger, O.; Kunin, W.E. Global pollinator declines: Trends, impacts and drivers. Trends Ecol. Evol. 2010, 25, 345–353. [Google Scholar] [CrossRef]
- Di Pasquale, G.; Salignon, M.; Le Conte, Y.; Belzunces, L.P.; Decourtye, A.; Kretzschmar, A.; Suchail, S.; Brunet, J.L.; Alaux, C. Influence of pollen nutrition on honey bee health: Do pollen quality and diversity matter? PLoS ONE 2013, 8, e72016. [Google Scholar] [CrossRef] [PubMed]
- Meixner, M.D. A historical review of managed honey bee populations in Europe and the United States and the factors that may affect them. J. Invertebr. Pathol. 2010, 103, S80–S95. [Google Scholar] [CrossRef] [PubMed]
- Le Conte, Y.; Navajas, M. Climate change: Impact on honey bee populations and diseases. Rev. Sci. Tech.-Off. Int. Epizoot. 2008, 27, 499–510. [Google Scholar]
- Rosenkranz, P.; Aumeier, P.; Ziegelmann, B. Biology and control of Varroa destructor. J. Invertebr. Pathol. 2010, 103 (Suppl. 1), S96–S119. [Google Scholar] [CrossRef]
- Popovska Stojanov, D.; Dimitrov, L.; Danihlík, J.; Uzunov, A.; Golubovski, M.; Andonov, S.; Brodschneider, R. Direct economic impact assessment of winter honeybee colony losses in three European countries. Agriculture 2021, 11, 398. [Google Scholar] [CrossRef]
- Hafi, A.; Millist, N.; Morey, K.; Caley, P.; Buetre, B. A Benefit-Cost Framework for Responding to an Incursion of Varroa Destructor; ABARES Research Report 12.5; Australian Bureau of Agricultural and Resource Economics and Sciences: Canberra, Australia, 2012. Available online: https://www.agriculture.gov.au/abares/research-topics/biosecurity/biosecurity-economics/benefit-cost-framework-responding-varroa (accessed on 13 August 2025).
- Dietemann, V.; Nazzi, F.; Martin, S.J.; Anderson, D.L.; Locke, B.; Delaplane, K.S.; Wauquiez, Q.; Tannahill, C.; Frey, E.; Ziegelmann, B.; et al. Standard Methods for Varroa Research. J. Apic. Res. 2013, 52, 1–54, The COLOSS BEEBOOK Part 1. [Google Scholar] [CrossRef]
- Voudiotis, G.; Moraiti, A.; Kontogiannis, S. Deep learning beehive monitoring system for early detection of the varroa mite. Signals 2022, 3, 506–523. [Google Scholar] [CrossRef]
- Picek, L.; Novozamsky, A.; Frydrychova, R.C.; Zitova, B.; Mach, P. Monitoring of varroa infestation rate in beehives: A simple ai approach. In Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 16–19 October 2022; pp. 3341–3345. [Google Scholar]
- Chazette, L.; Becker, M.; Szczerbicka, H. Basic algorithms for bee hive monitoring and laser-based mite control. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; pp. 1–8. [Google Scholar]
- Bjerge, K.; Frigaard, C.E.; Mikkelsen, P.H.; Nielsen, T.H.; Misbih, M.; Kryger, P. A computer vision system to monitor the infestation level of Varroa destructor in a honeybee colony. Comput. Electron. Agric. 2019, 164, 104898. [Google Scholar] [CrossRef]
- Schurischuster, S.; Kampel, M. Image-based classification of honeybees. In Proceedings of the 2020 Tenth International Conference on Image Processing Theory, Tools and Applications (IPTA), Paris, France, 9–12 November 2020; pp. 1–6. [Google Scholar]
- Bilik, S.; Kratochvila, L.; Ligocki, A.; Bostik, O.; Zemcik, T.; Hybl, M.; Horak, K.; Zalud, L. Visual diagnosis of the varroa destructor parasitic mite in honeybees using object detector techniques. Sensors 2021, 21, 2764. [Google Scholar] [CrossRef]
- Divasón, J.; Romero, A.; Martinez-de Pison, F.J.; Casalongue, M.; Silvestre, M.A.; Santolaria, P.; Yániz, J.L. Analysis of Varroa Mite Colony Infestation Level Using New Open Software Based on Deep Learning Techniques. Sensors 2024, 24, 3828. [Google Scholar] [CrossRef]
- Ghezal, A.; Peña, C.J.L.; König, A. Varroa Mite Counting Based on Hyperspectral Imaging. Sensors 2024, 24, 4437. [Google Scholar] [CrossRef]
- Photonfocus. Available online: https://www.photonfocus.com/de/support/software/ (accessed on 13 August 2025).
- Akiba, T.; Sano, S.; Yanase, T.; Ohta, T.; Koyama, M. Optuna: A Next-generation Hyperparameter Optimization Framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 2623–2631. [Google Scholar]
- Kaufman, L.; Rousseeuw, P.J. Finding Groups in Data: An Introduction to Cluster Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
- Alahmari, S.S.; Goldgof, D.B.; Mouton, P.R.; Hall, L.O. Challenges for the repeatability of deep learning models. IEEE Access 2020, 8, 211860–211868. [Google Scholar] [CrossRef]
- Yániz, J.; Casalongue, M.; Martinez-de Pison, F.J.; Silvestre, M.A.; Consortium, B.; Santolaria, P.; Divasón, J. An AI-Based Open-Source Software for Varroa Mite Fall Analysis in Honeybee Colonies. Agriculture 2025, 15, 969. [Google Scholar] [CrossRef]
- Thongpull, K.; König, A. Advance and case studies of the DAICOX framework for automated design of multi-sensor intelligent measurement systems. Tm-Tech. Mess. 2016, 83, 234–243. [Google Scholar] [CrossRef]
- Peters, S.; Koenig, A. A Contribution to Automatic Design of Image Processing Systems–Breeding Optimized Non-Linear and Oriented Kernels for Texture Analysis. In Proceedings of the 2006 Sixth International Conference on Hybrid Intelligent Systems (HIS’06), Rio de Janeiro, Brazil, 13–15 December 2006; p. 19. [Google Scholar]
- Peters, S.; Koenig, A. Optimized texture operators for the automated design of image analysis systems: Non-linear and oriented kernels vs. gray value co-occurrence matrices. Int. J. Hybrid Intell. Syst. 2007, 4, 185–202. [Google Scholar] [CrossRef]
- Konig, A.; Eberhardt, M.; Wenzel, R. A transparent and flexible development environment for rapid design of cognitive systems. In Proceedings of the 24th EUROMICRO Conference (Cat. No.98EX204), Vasteras, Sweden, 27 August 1998; Volume 2, pp. 655–662. [Google Scholar] [CrossRef]
Method | Hardware Used | GPU Required | Training Time | Inference Time/Image |
---|---|---|---|---|
ML (kNN + SVM) | Intel i7 CPU | No | ∼68.7 s | ∼0.0080 s |
Faster R-CNN (ResNet-50) | NVIDIA Tesla V100-SXM2 (16 GB) | Yes | ∼1043.7 s | ∼0.6646 s |
Faster R-CNN (ResNet-101) | NVIDIA Tesla V100-SXM2 (16 GB) | Yes | ∼1055.3 s | ∼0.6531 s |
Model | Backbone | LR | Weight Decay | Scheduler |
---|---|---|---|---|
Faster R-CNN | ResNet-50 + FPN | | | StepLR (step = 50) |
Faster R-CNN | ResNet-101 + FPN | | | StepLR (step = 100) |
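The exact learning-rate and weight-decay values are not reproduced in this table; the sketch below only illustrates how an SGD optimizer with the tabulated StepLR schedules (step = 50 for the ResNet-50 run, step = 100 for the ResNet-101 run) might be configured. All numeric values shown are placeholders, not the study's settings.

```python
import torch
import torch.nn as nn

# Stand-in module; in practice this would be the Faster R-CNN model described above.
model = nn.Conv2d(25, 64, kernel_size=3)

# Placeholder learning rate, momentum, and weight decay (not the study's values).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9, weight_decay=1e-4)

# StepLR schedule: step_size=50 for the ResNet-50 run, 100 for the ResNet-101 run.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)

for epoch in range(150):
    # ... forward pass, loss computation, and loss.backward() would go here ...
    optimizer.step()
    scheduler.step()
```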
Step | ML Pipeline | DL Pipeline |
---|---|---|
Input and annotation | 1088 × 2048 × 25 bands + masks | 1088 × 2048 × 25 bands + box & class labels |
Feature extraction | Flatten pixels → N = 177,630 × 25-D spectral vectors | conv1→1 × 64 × 400 × 672, C2→1 × 256 × 200 × 336, C3→1 × 512 × 100 × 168, C4→1 × 1024 × 50 × 84, C5→1 × 2048 × 25 × 42 |
Feature selection/fusion | PCA on N × 25 → N × 5 | FPN fuses C2–C5 → P2:1 × 256 × 200 × 336; P3:1 × 256 × 100 × 168; P4:1 × 256 × 50 × 84; P5:1 × 256 × 25 × 42 |
Proposal/regionization | Segmented images → M = 116 blobs | Anchors → objectness logits + box deltas |
Region features | Blobs → M × 6 shape-feature matrix | RoIAlign top-100 proposals → 1000 × 256 × 7 × 7 pooled features |
Classifier heads | SVM on 92 × 6 (train)/M × 6 (test) → M labels | Cls-head on 1000 × (256 × 7 × 7) = 1000 × 12,544 → 1000 × 2 class logits; Reg-head on 1000 × 12,544 → 1000 × 8 box offsets |
Post-processing and output | Sum M “Varroa” labels → final count | NMS (IoU = 0.5) → final boxes + final count |
Model | Precision | Recall | F1-Score | TP/FP/FN |
---|---|---|---|---|
ML (SVM + kNN) | 0.9983 | 0.9947 | 0.9918 | 37/0/1 |
Faster R-CNN (ResNet-50) | 0.966 | 0.757 | 0.848 | 28/1/9 |
Faster R-CNN (ResNet-101) | 0.971 | 0.829 | 0.894 | 30/5/7 |
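For reference, the precision, recall, and F1 values follow directly from the TP/FP/FN counts in this table; for instance, for the ResNet-50 row (28 TP, 1 FP, 9 FN):

$$P = \frac{TP}{TP+FP} = \frac{28}{29} \approx 0.966, \qquad R = \frac{TP}{TP+FN} = \frac{28}{37} \approx 0.757, \qquad F_1 = \frac{2PR}{P+R} \approx 0.848.$$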
Model | Precision (%) | Recall (%) | F1-Score (%) | mAP@0.5 (%) |
---|---|---|---|---|
ResNet-50 + FPN | ||||
ResNet-101 + FPN |
Image ID | GT Mites | ML (TP/FP/FN) | R50 (TP/FP/FN) | R101 (TP/FP/FN) |
---|---|---|---|---|
1 | 3 | 6/0/0 | 2/0/1 | 2/1/1 * |
2 | 6 | 6/0/0 | 4/0/2 | 6/0/0 |
3 | 6 | 6/0/0 | 6/1/0 | 6/0/0 * |
4 | 7 | 6/0/1 | 3/0/3 | 3/0/3 |
5 | 7 | 7/0/0 | 5/0/2 | 6/0/1 * |
6 | 9 | 9/0/0 | 8/0/1 | 7/0/2 * |
Study | Resolution | Area/Frame (cm²) | Varroa/img | Mean Varroa Size (Pixels) | Notes |
---|---|---|---|---|---|
Ours | 2048 × 1088 × 25 | 1.85 × 3.55 | 1–20 | 6000–9000 | HSI, FRCNN |
[22] | 8064 × 6048 | 24 × 17.5 | 1–60 | Not reported | Smartphone, FRCNN |
[28] | 48 MP/108 MP | 23.5 × 18.5 | 1–351 | Not reported | iPhone/Xiaomi, YOLOv11 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).