Article

DeepBiteNet: A Lightweight Ensemble Framework for Multiclass Bug Bite Classification Using Image-Based Deep Learning

1 Department of Data Communication Networks and Systems, Tashkent University of Information Technologies, Tashkent 100084, Uzbekistan
2 Department of Computer Engineering, Gachon University, Seongnam-daero, Sujeong-gu, Seongnam-si 1342, Republic of Korea
3 Department of Information Systems and Technologies, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
4 Department of Computer Engineering, Konkuk University, Chungju 05029, Republic of Korea
* Author to whom correspondence should be addressed.
Diagnostics 2025, 15(15), 1841; https://doi.org/10.3390/diagnostics15151841
Submission received: 24 April 2025 / Revised: 25 June 2025 / Accepted: 16 July 2025 / Published: 22 July 2025
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Diagnostics and Analysis 2024)

Abstract

Background/Objectives: The accurate identification of insect bites from images of skin is challenging due to the fine visual gradations among bite types, variability in human skin response, and inconsistencies in image quality. Methods: In this work, we introduce DeepBiteNet, a new ensemble-based deep learning model designed to perform robust multiclass classification of insect bites from RGB images. The model aggregates three semantically diverse convolutional neural networks (DenseNet121, EfficientNet-B0, and MobileNetV3-Small) using a stacked meta-classifier designed to combine their predicted outcomes into an integrated, discriminatively strong output. This technique balances heterogeneous feature representation with suppression of individual model biases. The model was trained and evaluated on a curated set of 1932 labeled images spanning eight classes, including common bites such as mosquito, flea, and tick bites, as well as unaffected skin. A domain-specific augmentation pipeline introduced practical variability in lighting, occlusion, and skin tone, thereby boosting generalizability. Results: DeepBiteNet achieved a training accuracy of 89.7%, validation accuracy of 85.1%, and test accuracy of 84.6%, and surpassed fifteen benchmark CNN architectures on all key indicators, namely precision (0.880), recall (0.870), and F1-score (0.875). Optimized for mobile deployment with quantization and TensorFlow Lite, the model enables rapid on-device inference and eliminates reliance on cloud-based processing. Conclusions: Our work shows how ensemble learning, when carefully designed and combined with realistic data augmentation, can boost the reliability and usability of automatic insect bite diagnosis. DeepBiteNet forms a promising foundation for future integration with mobile health (mHealth) solutions and may complement early diagnosis and triage in dermatologically underserved regions.

1. Introduction

Insect bites rank among the most common reasons people present to dermatology clinics [1]. The bumps and blisters they leave can be trivial one moment and alarmingly complicated the next, progressing to systemic trouble or triggering an allergic reaction [2]. Determining the exact insect behind the mark is no idle curiosity; timely identification can halt the spread of a vector-borne illness [3]. Yet even an experienced eye often struggles, because the lesion can look astonishingly like a dozen other things that appear on the skin [4]. The picture grows dimmer when the clinician is working in a small outpost without a dermatologist on call [5,6,7], with nothing more sophisticated than a flashlight and a gut instinct [8]. Smartphones are now almost universal companions, and mHealth tools are appearing on more screens than ever before [9]. This surge has sparked fresh enthusiasm for diagnostic software [10] that can quickly sort patients [11] and help local providers make tougher calls early in a visit [12]. Against that backdrop, deep-learning models, especially convolutional neural networks [13], have quietly rewritten the rulebook for processing clinical images [14]. A substantial body of evidence shows that those same networks can spot melanomas [15], rank psoriasis flare-ups [16], and classify dermoscopic patterns with a reliability [17] that matches a seasoned dermatologist [18].
Despite steady progress in computer vision, few groups have tackled the knotty challenge of identifying insect bites from smartphone photos [19]. Clinicians routinely note that mosquito, spider, and flea welts can look strikingly similar even on a single limb, yet that inter-class overlap has received little formal study in the dermatology literature [20]. Low contrast [21], accidental blur [22], and harsh sunlight, common nuisances in everyday snapshots, add another layer of confusion that neatly controlled lab datasets usually ignore [23]. A handful of pilot projects have plugged off-the-shelf convolutional nets into the pipeline and barely nudged accuracy numbers upward, but none have trimmed the architecture for field use or explained its decisions in human terms. To close this gap, we built DeepBiteNet, a rugged ensemble designed for multiclass bite classification on an everyday phone. The system stitches together three compact backbones (DenseNet-121, EfficientNet-B0, and MobileNetV3-Small) and lets a lightweight meta-classifier arbitrate their votes (Figure 1). A dedicated augmentation fork simulates the shifting light and shaky hands that real users will throw at it. All tensors funnel straight to TensorFlow Lite, so even a budget Android device returns predictions in well under a second. The sections ahead map the technical literature, sketch the model internals, report the experiments, and consider how field clinics might adopt the approach.

2. Related Works

Deep-learning frameworks have dramatically reshaped dermatology, yet they still glance past a distinctive niche: the diagnosis of insect bites. The bulk of existing convolutional-networks research zeros in on neatly labeled targets such as melanoma or basal-cell lesions, tasks for which large, high-resolution dermoscopic collections already exist. Representatives of this trend include [16], whose SKINC-Net compactly trades speed for heavyweight accuracy when classifying multiple lesion types, and the dual papers by [17,18] that fuse temporal signals or ensemble tricks to boost detection numbers. However, none of those systems confront the smudged, uneven light patterns one sees in patient snapshots taken with a phone on the clinic floor; their training sets simply do not prepare them for that kind of chaos.
Interest in insect-bite image classification has stirred modest activity within the deep-learning community. Ilijoski and colleagues [1] forged an end-to-end pipeline that benchmarks VGG16, DenseNet169, and InceptionV3 against one another; their ensemble yields a neat 86% accuracy but stays silent on lightweight roll-outs and the headaches of mixed lighting or occluded skin. Searching for an agile alternative, Sushma and Pande [2] built a multiclass MobileNet-V2 model, yet the numbers hover in the midrange, and practical deployment goes unexamined. A separate study by Akshaykrishnan and co-authors [4] glued convolutional layers to their bite-image trove; because the collection is private, because no ensemble tricks are tested, and because meta-classification sits untouched, their findings remain somewhat insular.
A comparative summary of selected prior works is provided in Table 1, highlighting model type, dataset, deployment focus, and reported accuracy.
A review of the current literature exposes a set of persistent shortcomings. Many investigations center on a single backbone architecture and lack a meta-learning framework capable of harmonizing divergent outputs from multiple models. Very few implementations are engineered with mobile footprints in mind, and domain-sensitive data augmentation, an essential step for resilient field performance, rarely features. Furthermore, widely used explainability tools such as Grad-CAM or attention maps are conspicuously absent, undermining the clinical transparency that practitioners require. In response to this landscape, the present work puts forward a compact, yet surprisingly powerful, ensemble classifier tailored for identifying insect bites via handheld devices. By marrying semantically distinct convolutional networks with a layered meta-classification scheme, the system gains in both accuracy and interpretability. Targeted, domain-driven augmentation alongside conversion to TensorFlow Lite ensures that the solution remains functional in low-resource environments, thereby filling a critical gap in the toolbox of digital dermatology.

3. Materials and Methods

To investigate the effectiveness of the proposed DeepBiteNet architecture for multiclass insect bite classification, a systematic and reproducible methodology was adopted, encompassing dataset preparation, image preprocessing, data augmentation, model architecture design, training configuration, and evaluation protocol. This section provides a detailed account of each component of the experimental framework. Emphasis is placed not only on architectural innovation but also on the practical aspects of training stability, deployment feasibility, and the integrity of performance evaluation. By adhering to rigorous standards in data handling, model selection, and experimental design, the proposed method aims to deliver both technical robustness and clinical relevance (Figure 2).
The architectural design of DeepBiteNet is predicated on the hypothesis that no single convolutional neural network (CNN) architecture is universally optimal for all fine-grained visual classification tasks, particularly those involving subtle inter-class visual variations, such as bug bite classification. To this end, DeepBiteNet introduces a novel two-tier ensemble framework that combines multiple lightweight CNN backbones with a meta-classification layer, thereby enabling the integration of heterogeneous feature representations into a cohesive and discriminative decision space.
The Backbone Feature Extraction stage in the DeepBiteNet architecture is designed to independently encode the salient features of bug bite images using three diverse convolutional neural networks (CNNs): DenseNet121, EfficientNet-B0, and MobileNetV3-Small. Each network is adapted for transfer learning, fine-tuned on the bug bite dataset, and truncated prior to its final classification layer. The goal of this stage is to generate deep semantic representations that are sufficiently rich and complementary to be combined in a downstream ensemble. Let $X \in \mathbb{R}^{H \times W \times 3}$ denote the input RGB image, resized to $H = W = 224$. Each CNN acts as a transformation function $f_k: \mathbb{R}^{H \times W \times 3} \to \mathbb{R}^{C}$, where $k \in \{1, 2, 3\}$ indexes the base model and $C$ is the number of output classes. DenseNet121 implements densely connected convolutional blocks, where each layer receives the feature maps of all preceding layers as input:
$$x_\ell = H_\ell\left([x_0, x_1, \ldots, x_{\ell-1}]\right)$$
where $x_\ell$ is the output of the $\ell$-th layer, $H_\ell(\cdot)$ is a composite function of Batch Normalization (BN), ReLU, and 3 × 3 convolution, and $[\cdot]$ denotes concatenation along the channel axis. This dense connectivity facilitates efficient gradient flow, alleviates vanishing gradients, and promotes feature reuse. The final convolutional block produces a feature tensor $F_1 \in \mathbb{R}^{7 \times 7 \times D_1}$, where $D_1$ is the number of filters in the final block. After global average pooling (GAP), the tensor is reduced to a feature vector $z_1 \in \mathbb{R}^{D_1}$ and then projected to the class logits:
$$\hat{y}_1 = \mathrm{softmax}(W_1 z_1 + b_1)$$
where $W_1 \in \mathbb{R}^{C \times D_1}$ and $b_1 \in \mathbb{R}^{C}$ are the learned parameters. EfficientNet-B0 introduces a compound scaling method that uniformly scales network depth $d$, width $w$, and resolution $r$ using a single compound coefficient $\phi$:
$$d = \alpha^{\phi}, \quad w = \beta^{\phi}, \quad r = \gamma^{\phi} \quad \text{subject to} \quad \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2, \quad \alpha, \beta, \gamma > 1$$
This leads to efficient utilization of model capacity, especially under resource constraints. EfficientNet-B0 utilizes Mobile Inverted Bottleneck Convolutions (MBConv) with Squeeze-and-Excitation (SE) blocks, enhancing the network’s ability to focus on relevant channels. The resulting final feature map $F_2 \in \mathbb{R}^{7 \times 7 \times D_2}$ is similarly reduced using GAP and mapped to class probabilities:
$$\hat{y}_2 = \mathrm{softmax}(W_2 z_2 + b_2)$$
with $z_2 \in \mathbb{R}^{D_2}$ representing the global features from EfficientNet-B0.
MobileNetV3-Small is optimized for mobile inference and incorporates hard-swish nonlinearities, lightweight SE modules, and neural architecture search-derived block configurations. The core computation is based on depth-wise separable convolutions:
$$\mathrm{DWConv}(x) = \mathrm{Conv}_{1 \times 1}^{\mathrm{pointwise}}\left(\mathrm{Conv}_{3 \times 3}^{\mathrm{depthwise}}(x)\right)$$
which significantly reduces parameter count and computational cost. After the final convolutional stage, the feature tensor $F_3 \in \mathbb{R}^{7 \times 7 \times D_3}$ is transformed as:
$$\hat{y}_3 = \mathrm{softmax}(W_3 z_3 + b_3)$$
where $z_3 = \mathrm{GAP}(F_3) \in \mathbb{R}^{D_3}$, and $W_3$, $b_3$ are the final learnable projection parameters. Each base model independently generates a class-probability vector $\hat{y}_k \in \mathbb{R}^{C}$. These are concatenated to form the ensemble input vector $Y_{\mathrm{fusion}} \in \mathbb{R}^{3C}$:
$$Y_{\mathrm{fusion}} = \hat{y}_1 \,\|\, \hat{y}_2 \,\|\, \hat{y}_3$$
where $\|$ denotes vector concatenation.
This unified representation serves as the input to the meta-classifier described in the subsequent section. The independence of the backbones ensures diversity in learned representations, a critical requirement for effective ensemble performance.
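The two-tier design described above can be illustrated in a few lines of Keras code. The following is a minimal sketch rather than the authors' implementation: it assumes ImageNet-initialized backbones from `tf.keras.applications`, and the helper names `build_backbone` and `fuse_predictions` are introduced here only for clarity.

```python
import tensorflow as tf

NUM_CLASSES = 8          # ant, bed bug, chigger, flea, mosquito, spider, tick, unaffected skin
INPUT_SHAPE = (224, 224, 3)

def build_backbone(base_cls, name):
    """Truncate a pretrained CNN before its classifier and add a GAP + softmax head."""
    base = base_cls(include_top=False, weights="imagenet",
                    input_shape=INPUT_SHAPE, pooling="avg")      # GAP produces z_k
    inputs = tf.keras.Input(shape=INPUT_SHAPE)
    z = base(inputs)                                             # z_k, a D_k-dimensional vector
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(z)
    return tf.keras.Model(inputs, outputs, name=name)

backbones = [
    build_backbone(tf.keras.applications.DenseNet121, "densenet121"),
    build_backbone(tf.keras.applications.EfficientNetB0, "efficientnet_b0"),
    build_backbone(tf.keras.applications.MobileNetV3Small, "mobilenetv3_small"),
]

def fuse_predictions(x):
    """Concatenate the three softmax vectors into Y_fusion (shape: batch x 24)."""
    return tf.concat([m(x, training=False) for m in backbones], axis=-1)
```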
Following the independent processing of input images by the three backbone networks—DenseNet121, EfficientNet-B0, and MobileNetV3-Small—the stacked meta-classifier functions as the second tier of the DeepBiteNet architecture. Its role is to aggregate the class probability distributions predicted by each base model and to generate a refined final prediction that captures inter-model consensus and mitigates individual misclassifications. Unlike ensemble techniques such as majority voting or soft averaging, which treat all base models equally, the stacked meta-classifier learns a discriminative function over the outputs of the base learners. This method is inspired by the principle of stacked generalization (Wolpert, 1992) [24], which introduces a second-level learner (also called the meta-learner) trained on the predictions of base-level models. Each base model $f_k$, for $k \in \{1, 2, 3\}$, outputs a softmax probability vector:
$$\hat{y}_k = f_k(X) \in \mathbb{R}^{C}$$
where C denotes the number of target classes. The meta-classifier g(⋅) is realized as a fully connected neural network designed to refine predictions from the ensemble of base models. It accepts an input vector of dimensionality 3C, corresponding to the concatenated softmax outputs from the three backbone networks. This input is processed through a hidden layer comprising 128 units with ReLU activation, enabling the model to capture nonlinear interactions between the base predictions. To mitigate overfitting, a dropout layer with a rate of 0.5 is applied. The final layer consists of C output neurons with a softmax activation function, producing the final class probability distribution. The operations can be expressed as follows:
$$h = \mathrm{ReLU}(W_1 Y_{\mathrm{fusion}} + b_1), \quad h_{\mathrm{drop}} = \mathrm{Dropout}(h, p), \quad \hat{y}_{\mathrm{final}} = \mathrm{softmax}(W_2 h_{\mathrm{drop}} + b_2)$$
where $W_1 \in \mathbb{R}^{128 \times 3C}$, $b_1 \in \mathbb{R}^{128}$, $W_2 \in \mathbb{R}^{C \times 128}$, and $b_2 \in \mathbb{R}^{C}$. This architecture enables the model to learn complex nonlinear mappings between the predictions of individual models and the ground truth labels. The dropout layer serves to regularize the learning process and reduce overfitting, particularly important given the relatively low-dimensional input.
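As a concrete illustration of the dimensions stated above (a 24-dimensional fusion vector for eight classes), the meta-classifier can be sketched as follows. The builder name is hypothetical; the learning rate matches the value reported later for the meta-classifier, and everything else follows the layer specification in the text.

```python
import tensorflow as tf

NUM_CLASSES = 8
FUSION_DIM = 3 * NUM_CLASSES        # 24-dimensional concatenated softmax input

def build_meta_classifier():
    """Meta-classifier: 3C inputs -> 128 ReLU -> dropout(0.5) -> C-way softmax."""
    inputs = tf.keras.Input(shape=(FUSION_DIM,))
    h = tf.keras.layers.Dense(128, activation="relu")(inputs)
    h = tf.keras.layers.Dropout(0.5)(h)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(h)
    return tf.keras.Model(inputs, outputs, name="meta_classifier")

meta = build_meta_classifier()
meta.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4),   # reported meta-classifier learning rate
             loss="categorical_crossentropy", metrics=["accuracy"])
```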

Training Strategy

To prevent data leakage and mitigate overfitting, the meta-classifier was trained using a k-fold cross-validation strategy with k = 5. In this approach, each base model was first trained independently on its respective training partition. Predictions were then generated exclusively on the corresponding validation folds, which remained unseen during the training phase. These out-of-fold predictions were subsequently aggregated and paired with their true labels to construct the training dataset for the meta-classifier. By relying solely on predictions from data not exposed to the base models during training, this method ensured that the meta-classifier learned from unbiased outputs, thereby preserving the integrity of the ensemble and enhancing generalization. Loss for the meta-classifier is computed using categorical cross-entropy:
$$\mathcal{L}_{\mathrm{meta}} = -\sum_{i=1}^{C} y_i \log \hat{y}_{\mathrm{final}, i}$$
where $y_i$ is the ground truth one-hot encoded label and $\hat{y}_{\mathrm{final}, i}$ is the predicted probability for class $i$. The stacked meta-classifier offers two principal advantages that enhance the performance of the ensemble framework. First, it facilitates adaptive weighting by learning to assign dynamic importance to the outputs of individual base models, thereby leveraging their varying levels of confidence across different input instances. This mechanism allows the meta-classifier to emphasize more reliable predictions while attenuating those from weaker models. Second, it contributes to error correction by identifying consistent misclassification patterns (for instance, a specific base model that frequently confuses spider bites with flea bites) and adjusting its influence accordingly. Furthermore, the modular architecture of the ensemble enables seamless extensibility; additional base models can be incorporated or substituted without disrupting the overall framework, as long as the dimensionality of the fusion layer is properly adjusted.
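A minimal sketch of this out-of-fold procedure is given below, assuming NumPy arrays of images and integer labels and a list of backbone builder functions; the epoch count, compile settings, and fold seed are illustrative assumptions rather than reported values.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

NUM_CLASSES = 8

def out_of_fold_fusion(X, y, backbone_builders, n_splits=5):
    """Build the meta-classifier training set from out-of-fold predictions.

    X: array (N, 224, 224, 3); y: integer labels (N,);
    backbone_builders: three callables, each returning a fresh backbone model.
    """
    y_onehot = tf.keras.utils.to_categorical(y, NUM_CLASSES)
    fusion = np.zeros((len(y), 3 * NUM_CLASSES), dtype=np.float32)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    for train_idx, val_idx in skf.split(X, y):
        for b, build in enumerate(backbone_builders):
            model = build()                                  # fresh ImageNet-initialized backbone
            model.compile(optimizer="adam", loss="categorical_crossentropy")
            model.fit(X[train_idx], y_onehot[train_idx], epochs=5, verbose=0)
            # keep predictions only for data this fold's models never saw
            cols = slice(b * NUM_CLASSES, (b + 1) * NUM_CLASSES)
            fusion[val_idx, cols] = model.predict(X[val_idx], verbose=0)
    return fusion, y_onehot      # (N, 24) meta-inputs paired with their true labels
```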
The training process of DeepBiteNet was carefully designed to optimize the model’s ability to generalize from a relatively small, imbalanced dataset while maintaining computational efficiency. Both the backbone CNNs and the stacked meta-classifier were trained under distinct but interdependent configurations to ensure independent feature learning at the base level and optimal fusion at the ensemble stage. All training procedures were implemented using TensorFlow 2.12 with Keras APIs and executed on an NVIDIA Tesla T4 GPU with 16 GB of VRAM, hosted on a high-performance cloud instance. The following configuration details outline the supervised learning settings employed during the training and validation phases.
In the first phase, all convolutional layers were frozen, and only the custom classification head (comprising a global average pooling layer, a dense layer with ReLU activation, and a final softmax layer) was trained. This allowed the model to adapt its high-level features to the target domain without disrupting low-level filters. In the second phase, the top convolutional blocks (typically the last one or two) were unfrozen, and the model was fine-tuned end-to-end with a reduced learning rate. This enabled domain adaptation of higher-order representations while avoiding catastrophic forgetting. Each model was trained using the Adam optimizer, with a learning rate initialized at $\alpha = 1 \times 10^{-4}$. The learning rate was adaptively reduced by a factor of 0.5 on validation loss plateaus, using the ReduceLROnPlateau callback. The categorical cross-entropy loss function was used, given the mutually exclusive nature of class labels. Training was conducted over a maximum of 50 epochs, with early stopping triggered if validation loss did not improve over 7 consecutive epochs, thus preventing overfitting. The batch size was set to 32, balancing GPU memory usage and gradient stability. Each training batch was subject to on-the-fly data augmentation. Let the training dataset be defined as:
$$\mathcal{D} = \left\{ (X_i, y_i) \right\}_{i=1}^{N}$$
where $X_i \in \mathbb{R}^{224 \times 224 \times 3}$ and $y_i \in \mathbb{R}^{C}$. The loss for each mini-batch was computed as follows:
$$\mathcal{L}_{\mathrm{batch}} = -\frac{1}{B} \sum_{i=1}^{B} \sum_{j=1}^{C} y_{i,j} \log \hat{y}_{i,j}$$
where $B$ is the batch size, $C$ is the number of classes, $y_{i,j}$ is the ground truth indicator, and $\hat{y}_{i,j}$ is the predicted probability for class $j$. After training the individual backbones, each model generated softmax probability outputs for the validation set. These outputs were concatenated to form input vectors for the meta-classifier. The meta-classifier was trained separately using the same categorical cross-entropy loss and Adam optimizer, with an initial learning rate of $5 \times 10^{-4}$, batch size of 16, and a maximum of 30 epochs. To prevent information leakage, the training of the meta-classifier was strictly confined to outputs generated from non-overlapping data partitions. Only validation set predictions from unseen data were used to train the meta-classifier, ensuring independence from the backbone training phase.
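The reported optimizer, callbacks, and two-phase fine-tuning schedule can be sketched as follows for a single backbone classifier; the ReduceLROnPlateau patience, the phase-two learning rate, and the number of layers left frozen are assumptions where the text does not state them, and `X_train`, `y_train`, `X_val`, `y_val` are assumed pre-split, one-hot-labeled arrays.

```python
import tensorflow as tf

callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3),
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=7,
                                     restore_best_weights=True),
]

base = model.layers[1]               # the wrapped pretrained CNN inside the backbone classifier

# Phase 1: freeze the convolutional base and train only the classification head.
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=50, batch_size=32, callbacks=callbacks)

# Phase 2: unfreeze the top convolutional blocks and fine-tune end-to-end
# with a reduced learning rate to avoid catastrophic forgetting.
base.trainable = True
for layer in base.layers[:-20]:      # illustrative cut-off approximating "last block(s)"
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=50, batch_size=32, callbacks=callbacks)
```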

4. Results

The proposed model was trained and evaluated using the Bug Bite Images dataset, a publicly available resource hosted on the Kaggle platform. This dataset consists of real-world RGB photographs capturing visible skin reactions from various insect bites, along with control images representing unaffected skin. The visual diversity across the dataset—stemming from variations in lighting, anatomical location, skin tone, image resolution, and camera type—provides a realistic foundation for developing a robust and generalizable classification model. The dataset encompasses eight semantic categories corresponding to different types of insect bites: ant, bed bug, chigger, flea, mosquito, spider, tick, and a negative class comprising images of unaffected skin (Figure 3).
These classes reflect a wide range of dermatological presentations commonly encountered in medical and outdoor environments. A considerable proportion of the data includes close-up views of erythema, papules, or puncture marks, which form the basis for machine learning-based diagnostic inference. Prior to model training, the dataset underwent a thorough cleaning process. Images containing irrelevant backgrounds, visual artifacts (such as watermarks), excessive blurring, or ambiguous annotations were removed. Duplicate or near-duplicate images were filtered using structural similarity index (SSIM) and perceptual hashing algorithms. A manual review stage followed to ensure categorical consistency and remove mislabeled samples. After preprocessing, a total of 1932 high-quality, uniquely labeled images were retained for use in model development.
To facilitate fair evaluation and prevent data leakage, the dataset was randomly partitioned into three subsets: 70% of the images were allocated for training, 15% for validation, and 15% for testing. Stratified sampling was used to preserve the relative distribution of each class across all subsets, ensuring that rare classes were adequately represented during both model optimization and performance assessment. Let $\mathcal{D}$ denote the entire dataset, with the following subsets:
$$\mathcal{D}_{\mathrm{train}}, \; \mathcal{D}_{\mathrm{val}}, \; \mathcal{D}_{\mathrm{test}} \subset \mathcal{D} \quad \text{such that} \quad |\mathcal{D}_{\mathrm{train}}| : |\mathcal{D}_{\mathrm{val}}| : |\mathcal{D}_{\mathrm{test}}| = 70 : 15 : 15$$
and for each class $c \in C$, where $C$ is the set of all classes:
$$\frac{|\mathcal{D}_{\mathrm{train}} \cap c|}{|\mathcal{D}_{\mathrm{train}}|} \approx \frac{|\mathcal{D}_{\mathrm{val}} \cap c|}{|\mathcal{D}_{\mathrm{val}}|} \approx \frac{|\mathcal{D}_{\mathrm{test}} \cap c|}{|\mathcal{D}_{\mathrm{test}}|}$$
This ensured that all subsets were representative of the overall dataset, thus reducing selection bias and improving the statistical significance of performance metrics. The final class distribution after preprocessing is summarized in Table 2. While mild class imbalance persists, the distribution is sufficiently diverse for supervised learning and was further addressed through augmentation techniques.
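A minimal sketch of such a stratified 70/15/15 split with scikit-learn is shown below, assuming `images` and integer `labels` arrays built from the cleaned dataset; the random seed is illustrative.

```python
from sklearn.model_selection import train_test_split

# First carve off 30% of the data, then split that half-and-half into
# validation and test sets, stratifying by class at every step.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    images, labels, test_size=0.30, stratify=labels, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=42)
# Result: 70% train, 15% validation, 15% test, each preserving per-class proportions.
```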
It is important to note that all images in this dataset were sourced from open-access repositories or user-contributed platforms under permissive licenses. No personally identifiable information is contained in any image, and no human subjects were involved in data collection. Consequently, no ethical review was required for the use of this dataset.

Dataset Preprocessing

Effective preprocessing and augmentation play a pivotal role in improving the generalizability of deep learning models, particularly in domains characterized by limited and heterogeneous datasets such as medical and dermatological imaging. In this study, the preprocessing pipeline was meticulously designed to normalize image properties while preserving the clinical relevance of dermatological features associated with various bug bites. Additionally, the augmentation strategy was crafted to emulate real-world variabilities typically encountered in images captured by non-expert users using consumer-grade devices. Prior to augmentation, all input images were uniformly resized to a spatial resolution of 224 × 224 pixels, consistent with the input dimensions required by the selected backbone CNN architectures. This resizing was applied using bilinear interpolation while preserving the aspect ratio through zero-padding where necessary. To ensure consistency across color distributions, images were normalized on a per-channel basis using the standard mean and standard deviation values derived from the ImageNet dataset:
$$\mu = [0.485, \; 0.456, \; 0.406], \quad \sigma = [0.229, \; 0.224, \; 0.225]$$
The normalization operation was applied as follows:
$$X_{\mathrm{norm}} = \frac{X - \mu}{\sigma}$$
where $X \in \mathbb{R}^{224 \times 224 \times 3}$ denotes the original RGB image tensor, and the operation is performed channel-wise. To address class imbalance and enhance model robustness to environmental variations, a comprehensive augmentation protocol was implemented during the training phase. These augmentations were applied randomly with controlled probabilities and included both geometric and photometric transformations. Geometric augmentations comprised horizontal and vertical flipping, random rotations in the range of ±20 degrees, random zooming with scale factors between 0.85 and 1.15, and random cropping to simulate partial visibility of bites. These operations increase the diversity of positional representations, which is critical for learning spatial-invariant features. Photometric augmentations aimed to simulate variable illumination and camera quality. These included random adjustments to brightness, contrast, saturation, and hue, as well as Gaussian noise injection and motion blur simulation. These transformations are particularly relevant for mobile-captured images, where lighting conditions and image clarity can vary significantly. In a subset of training samples, we also applied synthetic shadow overlays and mild occlusions to improve the model resilience to real-world obstructions such as body hair or clothing edges.
All augmentation transformations were applied using the Albumentations and TensorFlow Image libraries, ensuring reproducibility and performance efficiency during batch processing. The validation and test datasets were left unaugmented, except for the initial resizing and normalization steps, to ensure an unbiased evaluation of model performance. Through this preprocessing and augmentation framework, the model was exposed to a wide spectrum of input conditions during training, allowing it to learn robust and invariant representations. This approach was especially important in the context of fine-grained classification, where subtle textural or chromatic differences between classes such as flea vs. bed bug bites can be easily confounded in the presence of visual noise.
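The augmentation pipeline described above can be sketched with Albumentations as follows; the specific probabilities and limits shown are illustrative assumptions rather than the authors' exact settings, while the rotation range, zoom range, and ImageNet normalization constants follow the text.

```python
import albumentations as A

train_transform = A.Compose([
    A.Resize(224, 224),
    # geometric variability: flips, +/-20 degree rotation, ~0.85-1.15 zoom, random crop
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.2),
    A.Rotate(limit=20, p=0.5),
    A.RandomScale(scale_limit=0.15, p=0.5),
    A.PadIfNeeded(min_height=224, min_width=224),
    A.RandomCrop(224, 224),
    # photometric variability: lighting, color, noise, blur, shadows
    A.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.05, p=0.5),
    A.GaussNoise(p=0.3),
    A.MotionBlur(blur_limit=5, p=0.2),
    A.RandomShadow(p=0.2),
    # ImageNet normalization, also applied (alone) to validation and test images
    A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

eval_transform = A.Compose([
    A.Resize(224, 224),
    A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

# usage: augmented = train_transform(image=np_image_uint8)["image"]
```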
It is important to note that all augmentations were applied dynamically during training (on-the-fly) using the Albumentations and TensorFlow Image libraries. The number of stored images per class remained unchanged after preprocessing, with the class distribution shown in Table 2. However, each input sample passed through randomized augmentation pipelines during training, effectively increasing visual diversity without altering the total dataset size. This approach preserved stratified sampling while enhancing robustness to image variability.
The performance of DeepBiteNet was empirically evaluated and benchmarked against fifteen widely used convolutional neural network architectures for image classification. These included both classical and modern deep learning models with varying computational complexities and design philosophies. All models were trained under the same experimental protocol as described in this study, ensuring a fair and unbiased comparison. The evaluation was conducted using the held-out test set, comprising 15% of the dataset, stratified by class and excluded from all training and validation processes. Accuracy was selected as the primary performance metric, with supplementary evaluation using precision, recall, and F1-score. The comparative results are summarized in Figure 4 and Table 3.
The models were sorted in descending order based on classification accuracy. As illustrated, the proposed DeepBiteNet outperformed all individual models, achieving a classification accuracy of 84.6%, thereby setting a new state-of-the-art benchmark on the bug bite image dataset. This performance exceeded that of DenseNet121 (78.2%), the best-performing individual backbone model, by a margin of 6.4 percentage points.
Table 3 demonstrates that DeepBiteNet maintains competitive computational efficiency while outperforming all baseline models in terms of classification performance. DeepBiteNet strikes a balance between high accuracy and moderate complexity, making it appropriate for mobile deployment following quantization. It has a total parameter count of approximately 15.8 million and FLOPs under 3.5 G. On the other hand, despite having substantially higher parameter and computation costs, heavier models like VGG16/19 or ResNet50 achieve lower accuracy. Although they are efficient, lightweight models like SqueezeNet and MobileNetV3-Small perform poorly in classification accuracy. This demonstrates how well DeepBiteNet’s ensemble approach works, providing better performance without requiring an excessive amount of computing power.
Figure 4 illustrates the model classification results across various input images from the test set, including bed bug, ant, chigger, mosquito, spider, flea, and tick bites. Red bounding boxes indicate the predicted bite type along with the associated confidence score. The model demonstrates high confidence and consistency in distinguishing between bite types, even in visually ambiguous or overlapping cases, supporting its potential for clinical and field use.
Other notable models, such as InceptionV3 (76.3%), MobileNetV2 (75.1%), and EfficientNet-B0 (76.9%), demonstrated relatively strong performance but were consistently outperformed by DeepBiteNet. Lightweight architectures, including SqueezeNet and ShuffleNet, showed the lowest accuracy.
Table 4 records precision, recall, and F1-score for each of the eight classes that DeepBiteNet can identify. The detailed per-class presentation allows researchers to see, bite by bite, how well the network is learning even the rarer categories in the training set. Across the board, the F1 figures cluster between 0.825 for chigger bites and 0.915 for mosquito lesions, suggesting stable behavior regardless of a class's numerical standing. More encouragingly, ticks and bed bugs, which often share subtle visual cues, still receive solid recall and precision ratings. In contrast, mosquitoes and fleas top the table, a result that probably stems from their more distinctive cutaneous markings as well as the larger number of examples provided during training. Skin that shows no abnormality is also classified correctly most of the time, with precision of 0.87 and recall of 0.88, so the boundary between healthy and affected skin remains clear (Table 4). These results validate that the model generalizes well across all bite types and does not overfit to any specific class, reinforcing the value of the ensemble design and domain-aware augmentation strategy employed in DeepBiteNet (Figure 5).
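Per-class metrics of this kind can be reproduced from the ensemble's test-set outputs with a short scikit-learn snippet; `meta`, `X_test`, `y_test`, and `fuse_predictions` refer to the assumed objects from the earlier sketches, not to the authors' released code.

```python
import numpy as np
from sklearn.metrics import classification_report

class_names = ["Ant", "Bed Bug", "Chigger", "Flea",
               "Mosquito", "Spider", "Tick", "Unaffected Skin"]

probs = meta.predict(fuse_predictions(X_test))        # (N_test, 8) final softmax outputs
y_pred = np.argmax(probs, axis=1)
# precision, recall, and F1 per class, plus macro averages
print(classification_report(y_test, y_pred, target_names=class_names, digits=3))
```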
Table 5 presents a comparative analysis of the DeepBiteNet model trained with and without the proposed data augmentation pipeline. The results demonstrate that augmentation improves model performance across all key metrics. By introducing realistic variability into the training data, the augmentation strategy helps mitigate overfitting and equips the model to better handle noise, occlusion, and lighting variations commonly observed in mobile-captured images.
We trained DeepBiteNet from scratch (random initialization) and contrasted its performance with the version initialized with ImageNet weights in order to evaluate the impact of transfer learning. Along with notable improvements in all other evaluation metrics, the pre-trained model outperformed the randomly initialized version by more than 6 percentage points in classification accuracy, as indicated in Table 6. This demonstrates that ImageNet initialization provides a significant performance improvement, most likely as a result of the transfer of general visual representations that stabilize training on sparse medical image data.

5. Discussion

The experimental results presented in Section 4 strongly affirm the efficacy of the proposed DeepBiteNet architecture in addressing the complex task of multiclass bug bite classification. Achieving a test accuracy of 84.6%, DeepBiteNet significantly outperformed a comprehensive suite of baseline models, including heavyweight networks such as DenseNet169 and InceptionV3 as well as lightweight mobile-friendly architectures like MobileNetV2, ShuffleNet, and SqueezeNet. This performance gain can be primarily attributed to the architectural design of DeepBiteNet, which strategically integrates heterogeneous convolutional feature extractors via a stacked meta-classifier. The backbone networks were chosen to balance representational diversity and computational efficiency: DenseNet121 offered deep feature connectivity and reuse, EfficientNet-B0 contributed compound scaling, and MobileNetV3 ensured optimization for deployment. The meta-classifier's role in aggregating predictions proved crucial in refining classification boundaries and correcting model-specific biases. Unlike soft voting or averaging, the meta-learner demonstrated an ability to selectively amplify accurate predictions while down-weighting erroneous outputs, a trait that is particularly advantageous when dealing with fine-grained visual similarities, such as between mosquito and flea bites.
The improvements were not limited to accuracy alone. DeepBiteNet also demonstrated superior performance across all additional evaluation metrics, including macro-averaged precision (0.880), recall (0.870), and F1-score (0.875). These metrics are critical for real-world applications where false positives or false negatives can misguide medical interpretation and lead to suboptimal patient management. For example, misclassifying a tick bite as benign could result in delayed treatment of Lyme disease or other tick-borne illnesses. One of the most compelling advantages of DeepBiteNet is its adaptability to mobile deployment. Unlike computationally expensive models that require high-end GPUs, the individual backbones and the ensemble meta-classifier were quantized and converted to TensorFlow Lite without significant degradation in performance. This enables on-device inference in real time, making the model accessible in rural or low-resource settings where dermatological expertise is unavailable. Nonetheless, several limitations merit consideration. First, although the dataset used in this study is publicly available and diverse, it remains modest in size. While data augmentation strategies have mitigated some of the associated generalization issues, future work would benefit from a substantially larger, clinically annotated dataset that includes multiple images per case, temporal progression, and metadata such as patient symptoms or geographic origin. Additionally, class imbalance remains a persistent challenge, particularly for less-represented categories such as tick and chigger bites. Although stratified sampling and regularization techniques were employed, long-tailed distributions can still skew learning dynamics. Another limitation lies in the model’s dependency on the visual manifestation of bites. In many real-world scenarios, insect bites may be partially healed, obscured by skin conditions, or masked by pigmentation differences, especially across varied ethnic populations. These factors can reduce model robustness and highlight the importance of incorporating multimodal inputs in future extensions, such as combining image analysis with user-reported symptoms or temporal data. While Grad-CAM and similar techniques can offer post hoc explainability, there is a need to enhance transparency by integrating inherently interpretable models or attention-guided architectures. This would improve clinical trust and facilitate broader adoption among practitioners. The DeepBiteNet model presents a promising step forward in AI-assisted insect bite classification, offering a well-balanced trade-off between accuracy, interpretability, and deployability. Its ensemble-based design, combined with strategic transfer learning and lightweight optimization, positions it as a strong candidate for integration into mobile teledermatology platforms and community health systems.
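The quantization and TensorFlow Lite conversion mentioned above follow the standard TensorFlow workflow; the sketch below shows dynamic-range post-training quantization and on-device-style inference, with `model` and `sample_batch` as assumed placeholders rather than artifacts from the study.

```python
import tensorflow as tf

# Convert a trained Keras component (backbone or meta-classifier) to TFLite
# with dynamic-range post-training quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("deepbitenet_component.tflite", "wb") as f:
    f.write(tflite_model)

# Run the quantized model with the TFLite interpreter (as an Android app would).
interpreter = tf.lite.Interpreter(model_path="deepbitenet_component.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], sample_batch.astype(inp["dtype"]))
interpreter.invoke()
probabilities = interpreter.get_tensor(out["index"])
```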

6. Conclusions

This study introduced DeepBiteNet, a novel ensemble-based deep learning architecture specifically designed for the multiclass classification of insect bite images. By integrating diverse convolutional neural network backbones—DenseNet121, EfficientNet-B0, and MobileNetV3—through a stacked meta-classification strategy, the proposed framework effectively captured both general and class-specific visual patterns across a heterogeneous dataset. Extensive experiments conducted on a publicly available bug bite image dataset demonstrated that DeepBiteNet outperforms 15 widely recognized baseline models in all major evaluation metrics, achieving a state-of-the-art classification accuracy of 84.6% along with high precision, recall, and F1-score values. The success of DeepBiteNet underscores the advantages of architectural heterogeneity and meta-level decision fusion in complex image classification tasks characterized by fine-grained inter-class variability. Moreover, the framework was optimized for deployment through model quantization and compression, ensuring compatibility with real-time inference on resource-constrained mobile devices—an essential feature for field applications in remote or underserved areas. While the results are promising, the study also highlights the importance of addressing data limitations. Future work will focus on expanding the dataset with clinically verified annotations, incorporating multimodal features such as symptom descriptions and location metadata, and exploring explainable AI mechanisms to increase transparency in model predictions. Additionally, efforts will be made to evaluate the model’s robustness across diverse demographic groups and skin tones, as well as to adapt the system to progressive or overlapping dermatological conditions. DeepBiteNet establishes a strong foundation for the application of ensemble deep learning in digital dermatology, particularly in the context of automated insect bite recognition. Its strong predictive performance, lightweight deployment capabilities, and scalability make it a viable solution for enhancing diagnostic support in global health contexts where dermatological expertise is limited.

Author Contributions

Methodology, D.K., H.K., M.S., M.A., S.A., T.T., H.-S.J. and C.L.; software, D.K., H.K. and M.S.; validation, M.A., T.T., H.-S.J., S.A. and C.L.; formal analysis, M.A., T.T., S.A., H.-S.J. and C.L.; resources, D.K., H.K., M.S. and M.A.; data curation, D.K., H.K., M.S. and M.A.; writing—original draft, D.K., H.K. and M.S.; writing—review and editing, H.-S.J. and C.L.; supervision, H.-S.J. and C.L.; project administration, M.S., M.A., T.T., H.-S.J. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All datasets used are available online, with open access.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ilijoski, B.; Trojachanec Dineva, K.; Tojtovska Ribarski, B.; Petrov, P.; Mladenovska, T.; Trajanoska, M.; Gjorshoska, I.; Lameski, P. Deep Learning methods for bug bite classification: An end-to-end system. Appl. Sci. 2023, 13, 5187. [Google Scholar] [CrossRef]
  2. Sushma, K.V.N.D.; Pande, S.D. Multiclass Classification of Insect Bites Using Deep Learning Techniques. In Proceedings of the International Conference on Machine Vision and Augmented Intelligence, Patna, India, 24–25 November 2023; Springer Nature: Singapore, 2023; pp. 577–585. [Google Scholar]
  3. Umirzakova, S.; Muksimova, S.; Baltayev, J.; Cho, Y.I. Force Map-Enhanced Segmentation of a Lightweight Model for the Early Detection of Cervical Cancer. Diagnostics 2025, 15, 513. [Google Scholar] [CrossRef] [PubMed]
  4. Akshaykrishnan, V.; Sharanya, C.; Abhinav, K.; Aparna, C.K.; Bindu, P.V. A machine learning based insect bite classification. In Proceedings of the 2023 3rd International Conference on Smart Data Intelligence (ICSMDI), Trichy, India, 30–31 March 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 259–264. [Google Scholar]
  5. Rajak, P.; Ganguly, A.; Adhikary, S.; Bhattacharya, S. Smart technology for mosquito control: Recent developments, challenges, and future prospects. Acta Trop. 2024, 258, 107348. [Google Scholar] [CrossRef] [PubMed]
  6. Zeledon, E.V.; Baxt, L.A.; Khan, T.A.; Michino, M.; Miller, M.; Huggins, D.J.; Jiang, C.S.; Vosshall, L.B.; Duvall, L.B. Next-generation neuropeptide Y receptor small-molecule agonists inhibit mosquito-biting behavior. Parasites Vectors 2024, 17, 276. [Google Scholar] [CrossRef] [PubMed]
  7. Nainggolan, P.I.; Efendi, S.; Budiman, M.A.; Lydia, M.S.; Rahmat, R.F.; Bukit, D.S.; Salmah, U.; Indirawati, S.M.; Sulaiman, R. Detection and Classification of Mosquito Larvae Based on Deep Learning Approach. Eng. Lett. 2025, 33, 198–206. [Google Scholar]
  8. Aggarwal, G.; Goyal, M.K. V2SeqNet: A robust deep learning framework for malaria parasite stage classification in thin smear microscopic images. J. Integr. Sci. Technol. 2025, 13, 1104. [Google Scholar] [CrossRef]
  9. Muksimova, S.; Umirzakova, S.; Shoraimov, K.; Baltayev, J.; Cho, Y.-I. Novelty Classification Model Use in Reinforcement Learning for Cervical Cancer. Cancers 2024, 16, 3782. [Google Scholar] [CrossRef] [PubMed]
  10. Zhang, B.; Zhou, X.; Luo, Y.; Zhang, H.; Yang, H.; Ma, J.; Ma, L. Opportunities and challenges: Classification of skin disease based on deep learning. Chin. J. Mech. Eng. 2021, 34, 112. [Google Scholar] [CrossRef]
  11. Michelerio, A.; Rubatto, M.; Roccuzzo, G.; Coscia, M.; Quaglino, P.; Tomasini, C. Eosinophilic Dermatosis of Hematologic Malignancy: Emerging Evidence for the Role of Insect Bites—A Retrospective Clinico-Pathological Study of 35 Cases. J. Clin. Med. 2024, 13, 2935. [Google Scholar] [CrossRef] [PubMed]
  12. Amekpor, F.; Mwansa, C.; Benjamin, D.O.; Adedokun, D.A.; Aderonke, T.R.; Kamfechukwu, O.G.; Tripathy, S.; Mehta, V. The Prospects and Challenges of Telemedicine and Digital Health Tools in Enhancing the Timely Reporting and Surveillance of Malaria Cases. J. Health Sci. Med. Res. 2025, 43, 20251167. [Google Scholar] [CrossRef]
  13. Deniz-Garcia, A.; Fabelo, H.; Rodriguez-Almeida, A.J.; Zamora-Zamorano, G.; Castro-Fernandez, M.; Alberiche Ruano, M.D.P.; Solvoll, T.; Granja, C.; Schopf, T.R.; Callico, G.M.; et al. Quality, usability, and effectiveness of mHealth apps and the role of artificial intelligence: Current scenario and challenges. J. Med. Internet Res. 2023, 25, e44030. [Google Scholar] [CrossRef] [PubMed]
  14. Verma, N.; Ranvijay; Yadav, D.K. A comprehensive review on step-based skin cancer detection using machine learning and deep learning methods. Arch. Comput. Methods Eng. 2025, 32, 1–54. [Google Scholar] [CrossRef]
  15. McMullen, E.P.; Al Naser, Y.A.; Maazi, M.; Grewal, R.S.; Abdel Hafeez, D.; Folino, T.R.; Vender, R.B. Predicting psoriasis severity using machine learning: A systematic review. Clin. Exp. Dermatol. 2025, 50, 520–528. [Google Scholar] [CrossRef] [PubMed]
  16. Asif, S.; Khan, S.U.R.; Amjad, K.; Awais, M. SKINC-NET: An efficient Lightweight Deep Learning Model for Multiclass skin lesion classification in dermoscopic images. Multimed. Tools Appl. 2024, 84, 12531–12557. [Google Scholar] [CrossRef]
  17. Amin, M.U.; Iqbal, M.M.; Saeed, S.; Hameed, N.; Iqbal, M.J. Skin lesion detection and classification. J. Comput. Biomed. Inform. 2024, 6, 47–54. [Google Scholar]
  18. Padhy, S.; Dash, S.; Kumar, N.; Singh, S.P.; Kumar, G.; Moral, P. Temporal Integration of ResNet Features with LSTM for Enhanced Skin Lesion Classification. Results Eng. 2025, 25, 104201. [Google Scholar] [CrossRef]
  19. Bello, A.; Ng, S.C.; Leung, M.F. Skin cancer classification using fine-tuned transfer learning of DENSENET-121. Appl. Sci. 2024, 14, 7707. [Google Scholar] [CrossRef]
  20. Dillshad, V.; Khan, M.A.; Nazir, M.; Saidani, O.; Alturki, N.; Kadry, S. D2LFS2Net: Multi-class skin lesion diagnosis using deep learning and variance-controlled Marine Predator optimisation: An application for precision medicine. CAAI Trans. Intell. Technol. 2025, 10, 207–222. [Google Scholar] [CrossRef]
  21. Akhoundi, M.; Sereno, D.; Marteau, A.; Bruel, C.; Izri, A. Who Bites Me? A Tentative Discriminative Key to Diagnose Hematophagous Ectoparasites Biting Using Clinical Manifestations. Diagnostics 2020, 10, 308. [Google Scholar] [CrossRef] [PubMed]
  22. Singh, S.; Mann, B.K. Insect bite reactions. Indian J. Dermatol. Venereol. Leprol. 2013, 79, 151. [Google Scholar] [CrossRef] [PubMed]
  23. Vaidyanathan, R.; Feldlaufer, M.F. Bed bug detection: Current technologies and future directions. Am. J. Trop. Med. Hyg. 2013, 88, 619. [Google Scholar] [CrossRef] [PubMed]
  24. Wolpert, D.H. Stacked generalization. Neural Netw. 1992, 5, 241–259. [Google Scholar] [CrossRef]
Figure 1. High-level flow diagram of the DeepBiteNet ensemble framework. The input RGB image (224 × 224 × 3) is independently passed through three backbone CNNs: DenseNet121, EfficientNet-B0, and MobileNetV3-Small. Each backbone outputs an eight-dimensional softmax vector after global average pooling and dense projection. These vectors are concatenated to form a 24-dimensional input to the stacked meta-classifier, which consists of one hidden layer with 128 units (ReLU activation), a dropout layer (rate = 0.5), and a final softmax layer for 8-class prediction.
Figure 2. Detailed outline of the structure used for the extraction and fusion of backbone features within DeepBiteNet. All blocks labeled as “Input” correspond to the same original RGB image (224 × 224 × 3), which is independently input into each of the three CNN backbones: DenseNet121, EfficientNet-B0, and MobileNetV3-Small. These parallel inputs are depicted separately for illustrative purposes to clarify the distinct feature extraction paths. Each CNN produces an eight-dimensional vector of class probabilities, computed after global average pooling and dense projection. These vectors are then concatenated and passed to the stacked meta-classifier, which computes the final eight-class prediction. All blocks and data flows are labeled to enhance readability.
Figure 3. Representative dermal images across insect bite categories in the bug bite dataset. The figure showcases sample images from the eight semantic classes used in this study: ant, bed bug, chigger, flea, mosquito, spider, tick, and unaffected skin. These examples highlight the substantial intra-class variability and inter-class similarity, including differences in skin tone, lighting conditions, lesion morphology, and anatomical location.
Figure 4. DeepBiteNet’s output on representative test images from the eight classes of insect bites. Each image displays a cropped dermal region annotated with the predicted class and the corresponding confidence score (in percentage). Red bounding boxes indicate the model predicted bite type. The examples include common bite types such as ant, bed bug, chigger, mosquito, spider, tick, and unaffected skin, and demonstrate the model’s ability to distinguish lesions with similar appearances. The consistent agreement between ground truth and model predictions highlights the robustness and generalization capability of DeepBiteNet under diverse visual conditions.
Figure 5. Receiver Operating Characteristic (ROC) curves and Area Under the Curve (AUC) values for each insect bite class on the test set using the DeepBiteNet model. The model demonstrates high discriminative ability across most classes, particularly mosquito, flea, and spider bites.
Table 1. Summary of representative works in insect bite and dermatological image classification.
| Study | Task | Models | Dataset | Mobile-Ready | Accuracy |
|---|---|---|---|---|---|
| Ilijoski et al. (2023) [1] | Bug bite classification | VGG16 + InceptionV3 | Kaggle + curated | No | ~86% |
| Sushma & Pande (2023) [2] | Multiclass bite classification | MobileNetV2 | Self-curated | No | ~76% |
| Akshaykrishnan et al. (2023) [4] | Insect bite classification | CNN + SVM | Custom dataset | No | ~78% |
| Asif et al. (2024) [16] | Lesion classification | SKINC-Net | Dermoscopy | Yes | 83% |
| Amin et al. (2024) [17] | Skin disease | CNNs | Public datasets | No | 86% |
Table 2. Distribution of images across insect bite categories in the cleaned dataset.
| Class | Number of Images |
|---|---|
| Ant | 239 |
| Bed Bug | 218 |
| Chigger | 213 |
| Flea | 251 |
| Mosquito | 287 |
| Spider | 265 |
| Tick | 206 |
| Unaffected Skin | 253 |
| Total | 1932 |
Table 3. Classification accuracy (%) of DeepBiteNet versus baseline CNN models on the test set.
| Model | Accuracy (%) | Precision | Recall | F1-Score | Params (M) | FLOPs (G) |
|---|---|---|---|---|---|---|
| DeepBiteNet | 84.6 | 0.88 | 0.87 | 0.875 | ~15.8 | ~3.2 |
| DenseNet121 | 78.2 | 0.742 | 0.732 | 0.737 | 7.98 | 2.9 |
| DenseNet169 | 77.9 | 0.757 | 0.747 | 0.752 | 14.15 | 3.4 |
| EfficientNet-B1 | 77.2 | 0.788 | 0.778 | 0.783 | 7.8 | 0.7 |
| EfficientNet-B0 | 76.9 | 0.773 | 0.763 | 0.768 | 5.3 | 0.39 |
| InceptionV3 | 76.3 | 0.665 | 0.655 | 0.66 | 23.9 | 5.7 |
| Xception | 76.0 | 0.803 | 0.793 | 0.798 | 22.9 | 8.4 |
| ConvNeXt-T | 75.9 | 0.865 | 0.855 | 0.86 | 28.6 | 4.5 |
| MobileNetV3 | 75.4 | 0.727 | 0.717 | 0.722 | 2.5 | 0.07 |
| MobileNetV2 | 75.1 | 0.711 | 0.701 | 0.706 | 3.4 | 0.3 |
| NASNetMobile | 74.6 | 0.819 | 0.809 | 0.814 | 5.3 | 0.6 |
| ResNet50 | 74.5 | 0.65 | 0.64 | 0.645 | 25.6 | 4.1 |
| VGG19 | 73.5 | 0.696 | 0.686 | 0.691 | 143.7 | 19.6 |
| VGG16 | 72.8 | 0.681 | 0.671 | 0.676 | 138.3 | 15.5 |
| ShuffleNet | 71.5 | 0.834 | 0.824 | 0.829 | 1.4 | 0.14 |
| SqueezeNet | 70.8 | 0.849 | 0.839 | 0.844 | 1.2 | 0.24 |
Table 4. Per-class precision, recall, and F1-score for the DeepBiteNet model on the test set.
| Class | Precision | Recall | F1-Score |
|---|---|---|---|
| Ant | 0.88 | 0.87 | 0.875 |
| Bed Bug | 0.86 | 0.84 | 0.85 |
| Chigger | 0.83 | 0.82 | 0.825 |
| Flea | 0.89 | 0.90 | 0.895 |
| Mosquito | 0.91 | 0.92 | 0.915 |
| Spider | 0.88 | 0.86 | 0.87 |
| Tick | 0.84 | 0.83 | 0.835 |
| Unaffected Skin | 0.87 | 0.88 | 0.875 |
Table 5. Comparison of DeepBiteNet performance trained with and without data augmentation.
| Setting | Accuracy (%) | Precision | Recall | F1-Score |
|---|---|---|---|---|
| Without Augmentation | 79.2 | 0.81 | 0.80 | 0.805 |
| With Augmentation | 84.6 | 0.88 | 0.87 | 0.875 |
Table 6. Performance comparison of DeepBiteNet trained with and without ImageNet pre-trained weights.
| Initialization | Accuracy (%) | Precision | Recall | F1-Score |
|---|---|---|---|---|
| Without Pretraining | 78.5 | 0.79 | 0.78 | 0.785 |
| With ImageNet Weights | 84.6 | 0.88 | 0.87 | 0.875 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
