Article

Deep Learning-Based Detection of Honey Storage Areas in Apis mellifera Colonies for Predicting Physical Parameters of Honey via Linear Regression

by Watit Khokthong 1,2,3,†, Panpakorn Kritangkoon 3,†, Chainarong Sinpoo 1,2,4, Phuwasit Takioawong 2, Patcharin Phokasem 1,2,4 and Terd Disayathanoowat 1,2,*

1 Department of Biology, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 Research Center of Deep Technology in Beekeeping and Bee Products for Sustainable Development Goals (SMART BEE SDGs), Chiang Mai University, Chiang Mai 50200, Thailand
3 Bachelor of Environmental Science Program, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
4 Office of Research Administration, Chiang Mai University, Chiang Mai 50200, Thailand
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Insects 2025, 16(6), 575; https://doi.org/10.3390/insects16060575
Submission received: 30 April 2025 / Revised: 23 May 2025 / Accepted: 26 May 2025 / Published: 29 May 2025
(This article belongs to the Special Issue Precision Apicultures)

Simple Summary

This study addresses the challenge of accurately and efficiently monitoring honey production in beehives. Using high-resolution photographs captured under controlled lighting with a commercial digital camera, we analyzed honeycomb images with a deep learning model to automatically detect and quantify honey-filled cells as a proportion of the total comb area. The deep learning system performed best in identifying uncapped honey cells. Additionally, we validated the pixel-based classification results against measured honey physical properties—including pH, conductivity, moisture content, and color. Although the automated method revealed only weak correlations with these physical parameters, the deep learning-based classification offers a promising solution for real-time, scalable monitoring of hive productivity, supporting modern and data-driven beekeeping practices.

Abstract

Traditional methods for assessing honey storage in beehives predominantly rely on manual visual inspection, which often leads to inconsistencies and inefficiencies. This study presents an automated deep learning approach utilizing the YOLOv11 model to detect, classify, and quantify honey cells within Apis mellifera frames across monthly sampling periods. The model’s performance varied depending on image resolution and dataset partitioning. Using the freely available YOLOv11 with high-resolution images (960 × 960 pixels) and a 90:5:5 dataset split for training, validation, and testing, the model achieved a mean average precision at an IoU threshold of 0.5 (mAP@0.5) of 83.4% for uncapped honey cells and 80.5% for capped honey cells. A strong correlation (r = 0.94) was observed between the 90:5:5 and 80:10:10 dataset splits, indicating that increasing the volume of training data enhances classification accuracy. In parallel, the study investigated the relationship between the physical properties of honey and image-based honey storage detection. Of the four tested properties, electrical conductivity (R2 = 0.19) and color (R2 = 0.21) showed weak predictive power for honey storage area estimation, with even weaker associations found for pH and moisture content. Honey storage areas estimated from the 90:5:5 and 80:10:10 datasets correlated moderately (r = 0.44–0.46) with increasing electrical conductivity and color. In particular, electrical conductivity exhibited statistically significant correlations with estimated honey storage areas across different dataset splits (p < 0.05), suggesting some potential influence of chemical composition on model accuracy. Our findings demonstrate the viability of image-based honey classification as a reliable technique for monitoring beehive productivity. Additionally, image-based honey detection can serve as a non-invasive solution for improved honey production, beehive productivity, and optimized beekeeping practices.

1. Introduction

Beekeeping has been an important part of world agriculture and environmental stewardship for centuries, with honeybees delivering valuable pollination services that enhance food production and biological diversity [1]. Western honeybees (Apis mellifera) play a vital role in maintaining agricultural ecosystems through pollination of flowering plants, contributing significantly to global food production from crops used for human consumption [2,3]. Given the global expansion of the beekeeping industry, a critical concern for stakeholders is the implementation of routine physical inspections in accordance with modern apicultural practices, particularly to ensure proper honey storage, maintain hive health, and enhance the quality and marketability of bee products [4]. Monitoring the health of honeybee colonies is crucial for sustainable beekeeping practices. However, although physical colony inspections during the summer are important for time-based management of honey production, frequent inspection should be avoided because it disturbs colonies during the major nectar flows [4].
Traditional beehive inspections of honey storage rely on visual estimation by beekeepers, which is fast but can be inaccurate. Estimates of honey or nectar provision per comb frame are obtained by subtracting the weight of the foundation and other components (such as capped brood, larvae, and bee bread) from the total weight of the frame without bees [5,6]. To enable new functionalities in precision beekeeping—such as honey production monitoring, pollination optimization, and bee health assessment—developing intelligent hives equipped with sensors for audio and image data analysis is considered a best management practice [5,6,7,8,9]. Although the cost of camera sensor systems continues to decrease while their capabilities improve, image acquisition in beehives still presents challenges. For example, capturing high-quality images often requires a wooden tunnel to block external light [10], and inspections remain time-consuming and are not typically conducted continuously. In addition, technological developments should incorporate analytical overlays to provide beekeepers with precisely processed data. Although digital images of honey and nectar cells can be successfully quantified using DeepBee© software (https://github.com/AvsThiago/DeepBee-source, accessed on 12 January 2025), the actual weight of honey or nectar varies depending on the depth of the cells [10]. Consequently, modern honeybee colony monitoring systems are expected to become more prescriptive, providing data that are analyzed before being displayed to users. This will allow beekeepers to process digital data more precisely, particularly when analyzing information from multiple colonies to support informed decision-making.
In modern apiculture, as opposed to wild nests, beehives consist of several honeycomb frames, where bees store brood, pollen, and honey [7]. Recent studies have addressed pollen detection in beehives, including pollen color variation and texture, using image-based classification to enable suitable interventions and management strategies [7,11]. Counting the comb cells that hold bee food reserves offers information on colony health status [10]. To improve colony health assessments, methods such as CombCount (https://github.com/jakebruce/CombCount, accessed on 12 January 2025), a semi-automated brood counting tool, enhance accuracy by efficiently detecting empty comb cells and providing more reliable estimates of brood area [12]. Furthermore, quantitative assessment of honey yield is equally important, as it reflects the productivity of the hive and directly impacts economic returns. Deep learning algorithms embedded in computer vision, which offer precise automated honey detection, are considered the next step toward improving the beekeeping industry by enabling researchers to analyze high-resolution photographs of honeycomb structures through digital image processing [7,13]. Automated segmentation techniques can estimate the proportion of comb cells filled with honey, offering a rapid and objective alternative to manual counting. Precision apiculture relies on image databases and segmentation algorithms; however, spectral signatures derived directly from low-resolution images still provide only limited information [13,14].
In precision beekeeping, images are commonly used to train convolutional neural networks (CNNs), a technique widely adopted in deep learning algorithms for object detection [15,16]. Object detection methods are generally categorized into one-stage and two-stage approaches [17]. Redmon et al. (2016) [18] introduced YOLO (You Only Look Once), a single-stage deep learning detection algorithm. The latest version, YOLOv11, incorporates the C3K2 module to improve the accuracy of small object detection [19]. Previous CNN-based object detection models, such as ResNet and AlexNet [20], as well as region proposal-based models like Faster R-CNN [21] and Mask R-CNN [22], have been applied to computer vision-based honeybee inspections. While these two-stage models rely on region proposals and deliver strong accuracy, YOLO’s single-stage architecture performs object classification and localization within a single network, directly extracting features to make predictions [17]. Additionally, the single-shot multibox detector (SSD), another single-stage CNN algorithm, has been used in honeybee inspection systems; however, SSD showed inferior performance compared to YOLOv5 in detecting Varroa destructor mites in image datasets [23]. These advantages make single-stage deep learning detectors a promising next step toward precise, automated honey detection in the beekeeping industry.
Data augmentation is essential for enhancing the robustness of deep learning computer vision models [14,24], such as those for honey and pollen cell detection. In particular, detecting uncapped and capped brood, as well as honey cells, remains a challenging task for automated computer vision algorithms. Earlier, the circular Hough transform (CHT) demonstrated some capability in detecting honeybee cells [25,26]; however, bee comb cells are naturally hexagonal, not circular. To address this limitation, object-based detection using the CHT method was enhanced with semantic segmentation techniques, as implemented in the free software DeepBee© (https://github.com/AvsThiago/DeepBee-source, accessed on 12 January 2025), which can detect hexagon-shaped cells in bee combs [10]. The detection methods used in Alves et al. (2020) [10] demonstrated high accuracy in identifying honey cells by distinguishing between cells containing eggs, larvae, capped brood, pollen, nectar, honey, and other materials. Data augmentation also plays a crucial role in improving feature extraction for hexagonally shaped honey cells [24]. The X-AnyLabeling v.2.5.3 tool, distributed as a PyPI package, supports a wide range of annotation shapes, such as freeform multi-vertex polygons, which facilitate detailed data annotation and more accurate feature extraction for YOLO base models.
Honey quality is influenced by various physical and chemical properties, including pH, electrical conductivity, moisture content, and color. These parameters are often used to assess honey’s freshness, stability, and purity, which depend primarily on the botanical origin—determined through pollen analysis [27]—as well as on microbiological properties [28,29]. While these factors play a crucial role in honey properties, the extent to which the honey area extracted from image segmentation relates to these properties remains unclear. Our aims were to (1) apply digital image processing and deep learning to enhance precision beekeeping by determining the percentage of honey area within the beehive, and (2) explore its relationship with physical parameters. The four measured variables were pH, electrical conductivity (EC), moisture content, and color, allowing us to investigate the feasibility of applying deep learning-derived honey data to honeybee product and yield estimation. This information will help beekeepers and researchers assess comb conditions in honeybee farms in Thailand, with potential applications in other countries as well.

2. Materials and Methods

2.1. Dataset

2.1.1. Experimental Setup

The experiment was conducted over six months (July 2024–January 2025, excluding October 2024) in Chiang Mai province, Thailand. The setup involved visits to three apiary locations (Figure 1): the Agricultural Technology Promotion Center for Economic Insects (coordinates: 18.73729, 98.92272), Chiang Mai Healthy Product Co., Ltd. (coordinates: 18.68635, 99.05318), and the Faculty of Agriculture, Chiang Mai University (coordinates: 18.79348, 98.96000). At each site, data were collected from queenright Apis mellifera colonies, each consisting of 8 to 10 frames. All locations were selected for their active beekeeping practices and diverse colony compositions, which provided ideal conditions for controlled data collection from every single frame, ensuring a robust dataset for analysis.

2.1.2. Image Acquisition

To ensure consistent lighting conditions, a portable DIY wooden studio box (Figure 2) was used together with a Godox TT685 TTL flash. A Sony A7R4 digital camera was employed for image capture. The camera was placed 50 cm from the beehive frame, and images were captured of both sides of each frame. During the process, the studio box was closed on the remaining sides to minimize external light interference. The captured images had a resolution of 60 megapixels (9504 × 6336 pixels) and were taken without adult bees on the frame. In this study, four A. mellifera colonies were selected from each site. From each colony, four frames were chosen, resulting in a total of 16 frames per site. All selected frames were photographed on both sides, yielding 96 images per visit across the three sites. Data collection was conducted monthly using the same frames, resulting in a total of 464 high-resolution images over the entire study. However, not all captured images were used, as some frames lacked honey and sampling was interrupted by the flood event in Chiang Mai in October 2024 (Upper Northern Region Irrigation Hydrology Center, https://www.hydro-1.net, accessed on 12 January 2025). The final dataset consisted of 300 images that met the necessary criteria for this research.

2.1.3. Image Annotation

A total of 300 images were annotated using X-AnyLabeling v.2.5.3 and uploaded to the Roboflow platform. Annotation involved drawing a polygon around each object of interest and assigning an object class (Figure 3) to the polygon. Table 1 lists all of the classes annotated in the dataset.
To calculate the percentage of honey in a honeycomb, it is essential to capture both the honey classes and the total area of the honeycomb. While the honey classes alone are sufficient to identify the honey-filled cells, the total area of the honeycomb is needed to determine the proportion of honey relative to the entire structure. To do this, we differentiated between capped and uncapped honey cells, two of the three formal cell types previously tested with machine learning algorithms [30].
The uncap class has the highest number of instances because the selected images exclusively feature honeycombs; given that honeycombs consist of numerous cells, the number of honey-filled cells is naturally high. After annotation and labeling, all images were exported to the YOLO format with the configuration [class_id, x, y, w, h]. These parameters represent an object in a computer vision system. The annotation files were saved with the same names as their images, and the file-path configurations that feed the model for training, validation, and testing were saved in a YAML file. Figure 4 shows an example of an annotated image on Roboflow.
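To make the exported format concrete, the following sketch parses one YOLO-format label line and writes a minimal dataset YAML. The directory layout, class names, and coordinate values are illustrative assumptions, not the exact files produced in this study.

```python
from pathlib import Path

# One exported label line per object: class_id followed by the normalized
# bounding-box center (x, y) and size (w, h), all in [0, 1].
label_line = "0 0.512 0.430 0.061 0.058"
class_id, x, y, w, h = label_line.split()
print(f"class {class_id}: center=({x}, {y}), size=({w}, {h})")

# Minimal dataset YAML pointing the trainer at the splits (hypothetical layout).
Path("dataset.yaml").write_text(
    "train: images/train\n"
    "val: images/val\n"
    "test: images/test\n"
    "names:\n"
    "  0: uncap\n"
    "  1: cap\n"
    "  2: other\n"
)
```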

2.1.4. Data Augmentation

Before model training, the annotated dataset was preprocessed and resized to five distinct dimensions: 960 × 960, 800 × 800, 640 × 640, 512 × 512, and 256 × 256 pixels. This resizing ensured compatibility with the input requirements of the pre-trained models. Additionally, data augmentation techniques were employed to increase the dataset size and improve generalization by reducing the risk of overfitting. These techniques create new training examples by generating augmented versions of each image in the training set, such as flipped, rotated, and property-altered copies, effectively enhancing dataset diversity. Table 2 summarizes the preprocessing and augmentation settings applied.
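The augmentations in Table 2 were configured on the Roboflow platform; an approximately equivalent pipeline can be expressed with the open-source albumentations library, as in the sketch below. The parameter values only approximate Table 2 (exposure is folded into the brightness term), so this should be read as an illustration rather than the platform's exact internals.

```python
import numpy as np
import albumentations as A

# Approximate reimplementation of the Table 2 augmentation settings.
transform = A.Compose([
    A.HorizontalFlip(p=0.5),                                    # Flip
    A.HueSaturationValue(hue_shift_limit=5, sat_shift_limit=10,
                         val_shift_limit=0, p=0.5),             # Hue +/-5, Saturation +/-10%
    A.RandomBrightnessContrast(brightness_limit=0.1,
                               contrast_limit=0.0, p=0.5),      # Brightness +/-10%
    A.GaussNoise(p=0.3),                                        # light pixel noise
])

comb_image = np.random.randint(0, 255, (960, 960, 3), dtype=np.uint8)  # stand-in image
augmented = transform(image=comb_image)["image"]
```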

2.2. Object Detection Model

Model Training and Validation

This study utilized the YOLOv11s-seg framework for object detection and instance segmentation. The dataset was divided into training (90%, 80%, 70%), validation (5%, 10%, 15%), and testing (5%, 10%, 15%) portions, giving three splits: 90:5:5, 80:10:10, and 70:15:15. YOLOv11, the latest version of the YOLO algorithm, released by Ultralytics on 30 September 2024, was used.
Training was conducted on an NVIDIA GeForce RTX 3070 laptop GPU (NVIDIA Ampere architecture) with 8 GB of GDDR6 memory, 5120 CUDA cores, and 40 RT cores, offering modest compute (~20–25 TFLOPS FP32) yet still capable of AI tasks. Pretrained weights from the COCO dataset were leveraged to accelerate and improve training. The model was trained and tested across combinations of input image size and the dataset splits described previously. The input image sizes included 960 × 960, 800 × 800, 640 × 640, 512 × 512, and 256 × 256 pixels, with a batch size of 4. The performance of these combinations was evaluated to determine the optimal model and parameter settings for real-time applications. To eliminate early stopping as a source of variance and enable direct performance comparison, all training runs were fixed to exactly 200 epochs, regardless of individual early-stop points.
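A minimal training sketch under these settings, using the Ultralytics Python API, is shown below. The dataset file name is an assumption, and passing patience=0 reflects our reading of how a fixed 200-epoch budget can be enforced (Ultralytics treats a falsy patience value as disabling early stopping).

```python
from ultralytics import YOLO

model = YOLO("yolo11s-seg.pt")   # YOLOv11s segmentation weights pretrained on COCO

model.train(
    data="dataset.yaml",   # YAML describing the 90:5:5 (or 80:10:10, 70:15:15) split
    imgsz=960,             # input resolution; repeated for 800, 640, 512, and 256
    batch=4,               # batch size used in this study
    epochs=200,            # fixed epoch budget across all runs
    patience=0,            # assumption: disables early stopping in Ultralytics
)

metrics = model.val()      # reports precision, recall, and mAP@0.5 per class
```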

2.3. Data Extraction

After prediction, we extracted the segmented area of each class from the predicted image and measured its region properties using the scikit-image library. This process enables calculation of the honey percentage by analyzing the honey area in pixels. Figure 5 shows an example of extracted data plotted with matplotlib.
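The sketch below outlines this extraction step, assuming the predicted segmentation has already been rasterized into one boolean mask per class; the mask names and sizes are placeholders.

```python
import numpy as np
from skimage import measure

def class_area_px(mask: np.ndarray) -> int:
    """Sum the pixel areas of all connected regions in a boolean class mask."""
    labeled = measure.label(mask)
    return int(sum(region.area for region in measure.regionprops(labeled)))

# Stand-ins for the rasterized predictions of one comb image.
capped = np.zeros((960, 960), dtype=bool)
uncapped = np.zeros((960, 960), dtype=bool)
total = np.ones((960, 960), dtype=bool)   # whole-comb mask

honey_pct = 100 * (class_area_px(capped) + class_area_px(uncapped)) / class_area_px(total)
print(f"honey area: {honey_pct:.1f}%")
```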

2.4. Performance Metrics

After completing the training and validation phases, the performance of the models is assessed by testing them on a designated test dataset. Selecting appropriate evaluation metrics for object detection models can be complex.

2.4.1. Precision and Recall

Precision measures the proportion of correctly predicted instances, reflecting the model’s reliability in producing accurate predictions. Recall, on the other hand, quantifies the proportion of relevant instances correctly identified by the model. These metrics are calculated using the following equations:
P = TP/(TP + FP) (1)
R = TP/(TP + FN) (2)
Here, TP represents true positives (correctly detected objects), FP indicates false positives (incorrectly predicted objects), and FN denotes false negatives (missed detections). A detection is classified as a TP if the Intersection over Union (IoU) between the predicted bounding box and the ground truth exceeds a predetermined threshold (commonly 0.5). Otherwise, it is considered an FP.

2.4.2. Average Precision and Mean Average Precision

Average precision (AP) is computed as the area under the precision-recall curve, as defined by the following:
AP = Σ_{k=0}^{n−1} [Recall(k) − Recall(k+1)] × Precision(k) (3)
In Equation (3), n denotes the total number of discrete precision–recall evaluation points (one for each unique detection-score threshold), and k indexes each of those points from 0 to n–1. The IoU, which evaluates the overlap between the detected and actual bounding boxes, is given by the following:
IoU = (Area of Overlap between Object and Detected Box)/(Area of Union between Object and Detected Box) (4)
The AP score ranges between 0 and 1, providing a single value that summarizes precision across recall levels. Mean average precision (mAP) is calculated as the mean of AP values across all object classes:
mAP = (1/n) Σ_{i=1}^{n} AP_i (5)
In this expression, n refers to the total number of object classes under evaluation, and i runs from 1 to n, indexing each class’s individual average precision (AP_1 to AP_n) before they are averaged to yield mAP. Object detection models often use two mAP thresholds: mAP@0.5 (the mean AP at an IoU threshold of 0.5) and mAP@0.5:0.95 (the mean AP averaged over IoU thresholds from 0.5 to 0.95). In this study, the performance of all selected models was evaluated using precision, recall, and mAP@0.5, which are widely recognized as standard benchmarks for evaluating object detection models.
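To make Equations (1)–(5) concrete, the self-contained sketch below matches detections to ground truth at an IoU threshold, accumulates precision and recall over the ranked detections, and sums the precision-weighted recall increments to obtain AP. The greedy one-pass matching is a simplification of what full evaluation toolkits do.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def average_precision(detections, truths, thr=0.5):
    """detections: list of (score, box); truths: non-empty list of boxes (one class)."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched, tp, fp = set(), 0, 0
    ap, prev_recall = 0.0, 0.0
    for _, box in detections:
        # Best still-unmatched ground-truth box for this detection.
        candidates = [(iou(box, t), i) for i, t in enumerate(truths) if i not in matched]
        best_iou, best_i = max(candidates, default=(0.0, None))
        if best_i is not None and best_iou >= thr:   # TP: IoU above threshold
            matched.add(best_i)
            tp += 1
        else:                                        # FP otherwise
            fp += 1
        precision = tp / (tp + fp)                   # Equation (1)
        recall = tp / len(truths)                    # Equation (2)
        ap += precision * (recall - prev_recall)     # Equation (3), ascending recall
        prev_recall = recall
    return ap

# mAP (Equation (5)) is then the mean of per-class APs, e.g. over {uncap, cap}.
```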

2.4.3. Honey Area Acquisition and Dataset Fittings Along Linear Regression Line

The percentage of honey area was computed by comparing the pixel counts of honey presence to the total comb area, following ongoing research in automated beehive monitoring, using the equation below:
Honey area (%) = [(Capped + Uncapped)/Total] × 100 (6)
Image datasets were processed to retrieve pixel data from different sides of the hive (Side A, Side B, and Both Sides), contributing to the calculation of honey area percentage (Supplementary Table S2). Here, we selected the 960 × 960 resolution because it gave the highest mAP@0.5 results (Table 3, Figure 6). With the 960 × 960 model trained and tested under three different data-splitting schemes (90:5:5, 80:10:10, and 70:15:15), scatter plots compared the percentage of honey area estimated by one dataset split on the x-axis to that of another split on the y-axis. The fitted regression lines, along with their corresponding slopes, intercepts, Pearson correlation coefficients (r), coefficients of determination (R2), and p-values, quantified how well these different splits agree with each other (Figure 7).
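For reference, the agreement statistics reported in Figure 7 can be reproduced with scipy.stats.linregress; the arrays below are placeholder values standing in for the per-frame honey area percentages of Supplementary Table S2.

```python
import numpy as np
from scipy import stats

# Placeholder honey area percentages for the same frames under two splits.
area_90_5_5 = np.array([12.1, 30.4, 55.2, 8.7, 41.9])
area_80_10_10 = np.array([13.0, 29.5, 52.8, 9.9, 40.1])

fit = stats.linregress(area_80_10_10, area_90_5_5)
print(f"y = {fit.slope:.4f}x + {fit.intercept:.4f}, "
      f"r = {fit.rvalue:.2f}, R^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.3g}")
```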

2.5. Assessment and Measurement of Honey Physical Parameters in Honeybee Hives

We took a 20 mL honey sample from each beehive after photography to test four physical parameters. The pH of honey influences its flavor profile, fermentation rate, and microbial stability, while moisture content is a key parameter influencing honey’s shelf life, viscosity, and susceptibility to fermentation [31]. Electrical conductivity provides insights into the mineral and organic acid content of honey [32] and also helps identify adulteration or contamination, thus ensuring honey purity [33]. Color is a key indicator of honey type that is often linked to floral origin and consumer preference, supporting both quality control and product marketing [34]. Here, we tested pH, EC, moisture content, and color for all honey samples and then fitted them in linear regressions against the honey storage areas estimated under the three data-splitting schemes: 90:5:5, 80:10:10, and 70:15:15.

2.5.1. pH Measurement

The pH of honey samples was measured under controlled laboratory conditions using a digital benchtop pH meter (SUNTEX SP-2100, Taipei, Taiwan). The meter comprises a sensitive electrode probe and a digital display unit, allowing precise and rapid measurement of honey samples or hive-derived fluids. For each honey sample, four grams of honey were mixed with 30 mL of DI water until a homogeneous solution was obtained. The pH was then measured, with three replications performed for each sample.

2.5.2. Electrical Conductivity (EC)

The HI99300 Hanna meter (Hanna Instruments, Nusfalau, Romania) was used to measure the EC of honey using a sample of four grams dissolved in 20 mL of DI water. These portable meters feature a probe that detects the ionic content of the solution, displaying results in millisiemens per centimeter (mS/cm) or parts per million (ppm).

2.5.3. Moisture Content

A digital handheld honey refractometer (ATAGO PAL-22S, Tokyo, Japan) was employed to measure the moisture content. In this device, a small sample of honey is placed on the sensor, and an internal light source measures the refraction of the sample, which is then correlated to moisture or Brix values.

2.5.4. Color Measurement

The HI96785 Honey Color Photometer (Hanna Instruments, Nusfalau, Romania) measured the honey color on the Pfund scale, which ranges from water-white to dark amber. This device uses a tungsten lamp and a silicon photodetector to determine the transmittance of the sample, thereby classifying the honey’s color.

3. Results

3.1. Model Performance

The model’s performance in detecting capped and uncapped honey cells was evaluated across different image sizes and data splits using mAP@0.5 as the primary performance metric. A dataset of 300 images was used to assess the effectiveness of the models under varying conditions. Table 3 and Figure 6 show the impact of image resolution and data split on model performance for detecting uncapped and capped honey cells; 960 × 960 input images yielded higher mAP@0.5 values than the other resolutions.
Figure 6. Testing accuracy according to the different image input sizes: (A) bounding box, and (B) mask.

3.2. Comparison of Honey Area Estimates Among Different Datasets

Figure 7 illustrates the relationships among the honey area predictions. In Figure 7A, the regression between the 90:5:5 and 70:15:15 splits is described by the equation y = 0.7103x + 2.8094, with a correlation coefficient of r = 0.81 and a coefficient of determination of R2 = 0.66. In Figure 7B, the comparison between the 80:10:10 and 70:15:15 splits yielded the regression line y = 0.9701x − 0.2051, also with r = 0.81 and R2 = 0.66. In Figure 7C, the comparison between the 90:5:5 and 80:10:10 splits produced the regression equation y = 0.8955x + 1.0858, with a correlation coefficient of r = 0.94, indicating a strong linear relationship. The equality line (y = x) fitted this data-splitting scheme well (R2 = 0.87, p < 0.01), a near one-to-one correspondence, demonstrating that the 80:10:10 split yielded predictions highly consistent with those of the 90:5:5 split.
Figure 7. Honey areas estimated from the selected 960 × 960 resolution splits (A) 90:5:5 vs. 70:15:15, (B) 70:15:15 vs. 80:10:10, and (C) 80:10:10 vs. 90:5:5. The equality line (gray dashed line) indicates a 1:1 relationship.

3.3. Relationship Between the Physical Parameters of Honey and the Honey Area

Figure 8 illustrates the relationships between honey area and the four physical parameters (pH, EC, moisture content, and color) to determine whether the honey area tracks these measurable physical properties. Based on the Pearson correlation coefficients, honey areas estimated from all data-splitting schemes showed negative trends with pH (r = −0.12 to −0.24) and moisture content (r = −0.16 to −0.17), whereas EC (r = 0.28–0.44) and color (r = 0.30–0.46) showed positive trends. In particular, the 80:10:10 and 90:5:5 datasets were more strongly correlated, positively or negatively, than the 70:15:15 dataset (Figure 8). Weak to moderate linear relationships between honey area and the physical parameters were observed for EC and color. These R2 results (R2 ≈ 0.2 for the 80:10:10 and 90:5:5 splits) indicate the extent to which the honey area can serve as a predictor of EC and color.

4. Discussion

4.1. Model Performance and Scalability

Higher resolutions (960 × 960, 800 × 800) with a 90:5:5 split yielded the best results, while lower resolutions reduced accuracy. Bounding box detection outperformed segmentation masks, especially for uncapped cells, though segmentation remained competitive at higher resolutions. Although a 640 × 640 resolution has previously been recommended as efficient, accuracy for capped honey areas is usually low because the wax layer makes these areas highly complex [13]. A larger and more diverse dataset would improve the generalization of the model, making it more robust under different conditions. In large-scale apiaries, optimized hardware and refined data acquisition strategies are essential to manage the increased data volume effectively [35,36]. We also note that integrating cameras within hives requires more appropriate non-invasive methods to minimize disturbance to bee colonies, because collecting data from a large number of beehives could increase disturbance during data capture, as the combs must be lifted and then replaced. Nevertheless, this approach could yield a greater volume of input data from which deep learning techniques can generate meaningful insights. While our method is suitable for small-scale farms, where the number of beehives ranges from 5 to 50 [4], we suggest that improvements in in-hive sensors and digital data processing of the hive environment are essential to minimize disturbance of the colony’s honeycombs. In addition, our computational resources were limited to an NVIDIA RTX 3070 laptop GPU, which constrained the ability to train large models with higher-resolution images and more complex architectures.
The analysis of the honey area estimates across the different data splits provides a robust evaluation of the model’s consistency. As illustrated in Figure 7, the correlations among the three distinct splits highlighted strong relationships, as evidenced by the high correlation coefficients: r = 0.94 for 90:5:5 versus 80:10:10, r = 0.81 for 90:5:5 versus 70:15:15, and r = 0.81 for 80:10:10 versus 70:15:15. These comparisons also yielded statistically significant p-values below the 0.05 threshold. The highest R2 (0.84, p < 0.01) indicates that the 90:5:5 and 80:10:10 splits consistently captured the overall honey areas.

4.2. Association Between Regional Factors and Honey Quality Parameters

The results indicate that the honey area quantified via deep learning segmentation presents varied relationships with the physical parameters of honey. In Figure 8A, the pH of honey exhibits weak negative correlations with the honey area (r = −0.12 to −0.24), with low R2 values (0.05, 0.06, and 0.01 for the 90:5:5, 80:10:10, and 70:15:15 splits, respectively) and p-values below 0.05, indicating that the acid content is largely independent of the spatial extent of honey within the comb. This suggests that acidity is predominantly determined by the intrinsic chemical composition of the nectar rather than by the extent of honey deposition in the comb. Previous studies have shown that floral origin and the organic acids present (e.g., gluconic acid) strongly influence honey pH [31,34,37,38]. Similarly, in Figure 8C, moisture content appears nearly unaffected by the honey area, with R2 values below 0.03 and p-values exceeding 0.05. This result aligns with findings that moisture levels are mainly controlled by environmental conditions during nectar collection and post-harvest processing rather than by the spatial deposition of honey [31,33,37]. In contrast, in Figure 8B, electrical conductivity shows a modest positive correlation (R2 = 0.19, r ≈ 0.45, p < 0.05, except for the 70:15:15 dataset), implying that larger honey areas may be associated with a slight increase in mineral and organic acid content. This observation is in line with external research indicating that darker honeys, which typically have higher mineral contents, also exhibit higher conductivity [32,33,39]. Furthermore, in Figure 8D, the color of honey demonstrates a moderate relationship with the honey area (R2 = 0.21, r ≈ 0.45, p < 0.05, except for the 70:15:15 dataset). Honey color is widely recognized as an indicator of both floral source and mineral content [33,35]. Additional studies have confirmed that darker honeys generally contain higher levels of pigments and phenolic compounds, contributing to both color intensity and antioxidant capacity [39,40]. Overall, automated image-based quantification is improving honey area detection and offers a rapid, non-invasive tool for hive monitoring. However, the relationships between the honey area and the physical parameters of honey are limited; the moderate correlations observed for electrical conductivity and color suggest that further refinement is needed. This aligns with previous findings indicating that honey characteristics are influenced by the nectar source [41,42]. As this study focused on the physical properties of multifloral honeys, future work should aim to characterize honey samples more precisely. Subsequently, the current image-based protocol could be applied to both monofloral and multifloral honeys. This approach may provide valuable supplementary information and support the development of automated methods while maintaining high model accuracy. The weak correlations for pH and moisture content reinforce the necessity of combining image analysis with other direct physicochemical measurements for a comprehensive evaluation of honey quality. Measuring sugar content may be one of the key nutrient factors [43] that could help relate honey classification to image processing.
Furthermore, combining plant-source identification based on floral pollen with the nutrient composition of honey may help detect incoming honey quantity in honeybee colonies more effectively. We therefore recommend deploying such systems progressively during production phases, rather than during periods when bees are starving or dying, given the method’s dependence on the physical properties of honey; this would improve honeybee farm management and promote sustainable apiculture practices.

5. Conclusions

The deep learning approach based on YOLOv11 proved highly effective for the automated classification of honeycomb structures. When trained using high-resolution images (960 × 960 pixels) and an optimal dataset split of 90:5:5, the model achieved a mAP@0.5 of 83.4% for uncapped honey cells and 80.5% for capped honey cells. These results underscore the role of image quality in enhancing classification accuracy. The linear regression analyses showed weak relationships between the quantified honey area and the four physical parameters of honey, which ultimately makes it difficult to interpret the quality and efficiency of honey production from images alone. Electrical conductivity and color showed weak to moderate relationships, suggesting that image-based measurements can provide supplementary insight into these properties, particularly honey color.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/insects16060575/s1; Table S1. Physical parameter and image-processed parameters (honey area estimation from beehive); Table S2. Full image-processed parameters estimation in pixels (honey area estimation from the beehive).

Author Contributions

Conceptualization, W.K. and P.K.; methodology, W.K., P.K., C.S., and P.T.; validation, W.K. and P.K.; formal analysis, W.K. and P.K.; investigation, W.K., P.K., C.S., and P.T.; resources, C.S., P.P., and T.D.; data curation, W.K., P.K., C.S., and P.P.; writing—original draft preparation, W.K. and P.K.; writing—review and editing, W.K., P.K., C.S., and P.P.; visualization, W.K. and P.K.; supervision, W.K. and T.D.; project administration, C.S., P.P., and T.D.; funding acquisition, T.D. All authors have read and agreed to the published version of the manuscript.

Funding

Mekong-Republic of Korea Cooperation Fund number 8.

Data Availability Statement

The data are available through a public repository on GitHub, which provides the code for the honeybee hive classification dataset: https://github.com/Panpakornk/honeybeehiveclassification (accessed on 12 January 2025). YOLOv11 is available online at https://github.com/ultralytics/ultralytics (accessed on 12 January 2025).

Acknowledgments

This research work was partially supported by Chiang Mai University. We extend our gratitude to the Mekong-Republic of Korea Cooperation Fund for supporting this project’s materials. We also acknowledge the SMART BEE SDGs Lab, Faculty of Science, Chiang Mai University, for providing essential materials and research support. Additionally, we would like to thank Chiang Mai Agricultural Technology Promotion Center (Economic Insects), MARL Honey, Faculty of Agriculture, Chiang Mai University, and Chiang Mai Healthy Product Co., Ltd. for their generous provision of bee farm facilities.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
YOLO     You Only Look Once
AI       Artificial Intelligence
mAP      Mean average precision
mAP@0.5  Mean average precision calculated at an intersection over union (IoU) threshold of 0.50
RGB      Red, Green, Blue
R2       Coefficient of determination
r        Correlation coefficient
p        Statistical significance

References

  1. Patel, V.; Pauli, N.; Biggs, E.; Barbour, L.; Boruff, B. Why Bees Are Critical for Achieving Sustainable Development. Ambio 2021, 50, 49–59.
  2. Khalifa, S.A.M.; Elshafiey, E.H.; Shetaia, A.A.; El-Wahed, A.A.A.; Algethami, A.F.; Musharraf, S.G.; AlAjmi, M.F.; Zhao, C.; Masry, S.H.D.; Abdel-Daim, M.M.; et al. Overview of Bee Pollination and Its Economic Value for Crop Production. Insects 2021, 12, 688.
  3. Ghosh, S.; Jeon, H.; Jung, C. Foraging Behaviour and Preference of Pollen Sources by Honey Bee (Apis mellifera) Relative to Protein Contents. J. Ecol. Environ. 2020, 44, 4.
  4. Wakgari, M.; Yigezu, G. Honeybee Keeping Constraints and Future Prospects. Cogent Food Agric. 2021, 7, 1872192.
  5. Capela, N.; Dupont, Y.L.; Rortais, A.; Sarmento, A.; Papanikolaou, A.; Topping, C.J.; Arnold, G.; Pinto, M.A.; Rodrigues, P.J.; More, S.J.; et al. High Accuracy Monitoring of Honey Bee Colony Development by a Quantitative Method. J. Apic. Res. 2023, 62, 741–750.
  6. Urban, M.; Chlebo, R. Current Status and Future Outlooks of Precision Beekeeping Systems and Services. Rev. Agric. Sci. 2024, 12, 165–181.
  7. Hadjur, H.; Ammar, D.; Lefèvre, L. Toward an Intelligent and Efficient Beehive: A Survey of Precision Beekeeping Systems and Services. Comput. Electron. Agric. 2022, 192, 106604.
  8. Odemer, R. Approaches, Challenges and Recent Advances in Automated Bee Counting Devices: A Review. Ann. Appl. Biol. 2022, 180, 73–89.
  9. Alleri, M.; Amoroso, S.; Catania, P.; Lo Verde, G.; Orlando, S.; Ragusa, E.; Sinacori, M.; Vallone, M.; Vella, A. Recent Developments on Precision Beekeeping: A Systematic Literature Review. J. Agric. Food Res. 2023, 14, 100726.
  10. Alves, T.S.; Pinto, M.A.; Ventura, P.; Neves, C.J.; Biron, D.G.; Junior, A.C.; De Paula Filho, P.L.; Rodrigues, P.J. Automatic Detection and Classification of Honey Bee Comb Cells Using Deep Learning. Comput. Electron. Agric. 2020, 170, 105244.
  11. Ngo, T.N.; Rustia, D.J.A.; Yang, E.-C.; Lin, T.-T. Automated Monitoring and Analyses of Honey Bee Pollen Foraging Behavior Using a Deep Learning-Based Imaging System. Comput. Electron. Agric. 2021, 187, 106239.
  12. Colin, T.; Bruce, J.; Meikle, W.G.; Barron, A.B. The Development of Honey Bee Colonies Assessed Using a New Semi-Automated Brood Counting Method: CombCount. PLoS ONE 2018, 13, e0205816.
  13. Rodriguez-Lozano, F.J.; Geninatti, S.R.; Flores, J.M.; Quiles-Latorre, F.J.; Ortiz-Lopez, M. Capped Honey Segmentation in Honey Combs Based on Deep Learning Approach. Comput. Electron. Agric. 2024, 227, 109573.
  14. Sehar, U.; Naseem, M.L. How Deep Learning Is Empowering Semantic Segmentation. Multimed. Tools Appl. 2022, 81, 30519–30544.
  15. Abdollahi, M.; Giovenazzo, P.; Falk, T.H. Automated Beehive Acoustics Monitoring: A Comprehensive Review of the Literature and Recommendations for Future Work. Appl. Sci. 2022, 12, 3920.
  16. Robles-Guerrero, A.; Gómez-Jiménez, S.; Saucedo-Anaya, T.; López-Betancur, D.; Navarro-Solís, D.; Guerrero-Méndez, C. Convolutional Neural Networks for Real Time Classification of Beehive Acoustic Patterns on Constrained Devices. Sensors 2024, 24, 6384.
  17. Nnadozie, E.; Iloanusi, O.; Ani, O.; Yu, K. Detecting Cassava Plants under Different Field Conditions Using UAV-Based RGB Images and Deep Learning Models. Remote Sens. 2023, 15, 2322.
  18. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
  19. Wang, Z.; Su, Y.; Kang, F.; Wang, L.; Lin, Y.; Wu, Q.; Li, H.; Cai, Z. PC-YOLO11s: A Lightweight and Effective Feature Extraction Method for Small Target Image Detection. Sensors 2025, 25, 348.
  20. Schurischuster, S.; Kampel, M. Image-Based Classification of Honeybees. In Proceedings of the 2020 Tenth International Conference on Image Processing Theory, Tools and Applications (IPTA), Paris, France, 9–12 November 2020; pp. 1–6.
  21. Yang, C.R. The Use of Video to Detect and Measure Pollen on Bees Entering a Hive; Auckland University of Technology: Auckland, New Zealand, 2018.
  22. Kriouile, Y.; Ancourt, C.; Wegrzyn-Wolska, K.; Bougueroua, L. Nested Object Detection Using Mask R-CNN: Application to Bee and Varroa Detection. Neural Comput. Appl. 2024, 36, 22587–22609.
  23. Bilík, Š.; Kratochvila, L.; Ligocki, A.; Boštík, O.; Zemčík, T.; Hybl, M.; Horak, K.; Zalud, L. Visual Diagnosis of the Varroa destructor Parasitic Mite in Honeybees Using Object Detector Techniques. Sensors 2021, 21, 2764.
  24. Alomar, K.; Aysel, H.I.; Cai, X. Data Augmentation in Classification and Segmentation: A Survey and New Strategies. J. Imaging 2023, 9, 46.
  25. Liew, L.H.; Lee, B.Y.; Chan, M. Cell Detection for Bee Comb Images Using Circular Hough Transformation. In Proceedings of the 2010 International Conference on Science and Social Research (CSSR 2010), Kuala Lumpur, Malaysia, 5–7 December 2010; pp. 191–195.
  26. Janota, J.; Blaha, J.; Rekabi-Bana, F.; Ulrich, J.; Stefanec, M.; Fedotoff, L.; Arvin, F.; Schmickl, T.; Krajník, T. Towards Robotic Mapping of a Honeybee Comb. In Proceedings of the 2024 International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), Delft, The Netherlands, 1–5 July 2024; pp. 1–6.
  27. Raweh, H.S.A.; Badjah-Hadj-Ahmed, A.Y.; Iqbal, J.; Alqarni, A.S. Physicochemical Composition of Local and Imported Honeys Associated with Quality Standards. Foods 2023, 12, 2181.
  28. Matović, K.; Ćirić, J.; Kaljević, V.; Nedić, N.; Jevtić, G.; Vasković, N.; Baltić, M.Ž. Physicochemical Parameters and Microbiological Status of Honey Produced in an Urban Environment in Serbia. Environ. Sci. Pollut. Res. 2018, 25, 14148–14157.
  29. Loredana Elena, V.; Mazilu, I.; Enache, C.; Enache, S.; Topala, C. Botanical Origin Influence on Some Honey Physicochemical Characteristics and Antioxidant Properties. Foods 2023, 12, 2134.
  30. Knauer, U.; Zautke, F.; Bienefeld, K.; Meffert, B. A Comparison of Classifiers for Prescreening of Honeybee Brood Cells. In Proceedings of the International Conference on Computer Vision Systems, Rio de Janeiro, Brazil, 14–21 October 2007.
  31. Bogdanov, S. Harmonised Methods of the International Honey Commission; Swiss Bee Research Centre, FAM: Liebefeld, Switzerland, 2009; pp. 1–62.
  32. Acquarone, C.; Buera, P.; Elizalde, B. Pattern of pH and Electrical Conductivity upon Honey Dilution as a Complementary Tool for Discriminating Geographical Origin of Honeys. Food Chem. 2007, 101, 695–703.
  33. Alimentarius, C. Revised Codex Standard for Honey. Codex Stan. 2001, 12, 1982.
  34. El Sohaimy, S.A.; Masry, S.H.D.; Shehata, M.G. Physicochemical Characteristics of Honey from Different Origins. Ann. Agric. Sci. 2015, 60, 279–287.
  35. Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790.
  36. Thompson, N.C.; Greenewald, K.; Lee, K.; Manso, G.F. The Computational Limits of Deep Learning. arXiv 2020, arXiv:2007.05558.
  37. Moniruzzaman, M.; Sulaiman, S.A.; Khalil, M.; Gan, S. Evaluation of Physicochemical and Antioxidant Properties of Sourwood and Other Malaysian Honeys: A Comparison with Manuka Honey. Chem. Cent. J. 2013, 7, 138.
  38. Schiassi, M.C.E.V.; de Souza, V.R.; Lago, A.M.T.; Carvalho, G.R.; Curi, P.N.; Guimarães, A.S.; Queiroz, F. Quality of Honeys from Different Botanical Origins. J. Food Sci. Technol. 2021, 58, 4167–4177.
  39. González-Miret, M.L.; Terrab, A.; Hernanz, D.; Fernández-Recamales, M.Á.; Heredia, F.J. Multivariate Correlation between Color and Mineral Composition of Honeys and by Their Botanical Origin. J. Agric. Food Chem. 2005, 53, 2574–2580.
  40. Shekilango, S.G.; Mongi, R.J.; Shayo, N.B. Colour and Antioxidant Activities of Honey from Different Floral Sources and Geographical Origins in Tanzania. Int. J. Basic Appl. Res. 2016, 15. Available online: https://www.ajol.info/index.php/tjags/article/view/177785 (accessed on 12 January 2025).
  41. Akgün, N.; Çelik, Ö.F.; Kelebekli, L. Physicochemical Properties, Total Phenolic Content, and Antioxidant Activity of Chestnut, Rhododendron, Acacia and Multifloral Honey. Food Meas. 2021, 15, 3501–3508.
  42. Pop, I.; Simeanu, D.; Cucu-Man, S.-M.; Pui, A.; Albu, A. Quality Profile of Several Monofloral Romanian Honeys. Agriculture 2022, 13, 75.
  43. Puścion-Jakubik, A.; Borawska, M.H.; Socha, K. Modern Methods for Assessing the Quality of Bee Honey and Botanical Origin Identification. Foods 2020, 9, 1028.
Figure 1. Apiary locations (A) Agricultural Technology Promotion Center for Economic Insects, (B) Chiang Mai Healthy Product Co., Ltd. (Chiang Mai, Thailand), and (C) Faculty of Agriculture, Chiang Mai University.
Figure 2. Experimental setups: (A) the Apis mellifera frame is placed on the holders inside the studio box, (B) the researcher made adjustments to the camera before image capture, and (C) an example of a frame from the image captured.
Figure 3. Comb by differentiating classes (A) uncapped honey cells, (B) capped honey cells, and (C) others (empty cell, pollen, larva, and pupa).
Figure 4. Data annotation consisting of uncap (cyan polygon), cap (white polygon), and total (blue polygon) classes from the frame sample ID: _DSC5335 (20 December 2024).
Figure 5. Extracted data from sample ID: _DSC5335 (20 December 2024) (A) capped honey cells, (B) uncapped honey cells, and (C) others. Yellow represents the extracted regions of interest, and purple indicates the background.
Figure 8. Correlation between honey area (%) and physicochemical properties of honey (A) pH vs. honey area (%), (B) electrical conductivity (ms ∙ ppt) vs. honey area (%), (C) moisture content (% brix) vs. honey area (%), and (D) color (mm) vs. honey area (%).
Table 1. Class features annotated in the image dataset.

Class   Description                               Number of Annotations
Uncap   Polygon around the honey-uncapped cell    62,520
Cap     Polygon around the honey-capped cell      607
Other   Polygon around the area of the beeswax    300
Table 2. Preprocessing and augmentation implemented on the dataset.

Category        Technique Used         Description
Preprocessing   Auto-orient            Ensured all images were correctly oriented
                Resize                 Stretched images to 960 × 960 pixels
                Auto-adjust contrast   Applied Adaptive Equalization for contrast
                Filter null            Ensured all images contained annotations
Augmentation    Flip                   Horizontal flipping
                Hue                    Between −5° and +5°
                Saturation             Between −10% and +10%
                Brightness             Between −10% and +10%
                Exposure               Between −5% and +5%
                Noise                  Up to 1% of the pixels
Table 3. Summary of mAP@0.5 from model training results based on image resolutions and data splitting.

Input Resolution   Dataset Split                   Uncapped Honey Cells   Capped Honey Cells
                   (Training:Validation:Testing)   Box      Mask          Box      Mask
960 × 960          90:5:5                          0.834    0.743         0.805    0.805
                   80:10:10                        0.830    0.725         0.749    0.730
                   70:15:15                        0.777    0.634         0.659    0.635
800 × 800          90:5:5                          0.842    0.513         0.775    0.681
                   80:10:10                        0.810    0.530         0.655    0.568
                   70:15:15                        0.774    0.482         0.604    0.618
640 × 640          90:5:5                          0.773    0.434         0.790    0.720
                   80:10:10                        0.756    0.437         0.639    0.616
                   70:15:15                        0.709    0.405         0.591    0.547
512 × 512          90:5:5                          0.655    0.273         0.685    0.638
                   80:10:10                        0.658    0.280         0.590    0.552
                   70:15:15                        0.597    0.255         0.559    0.582
256 × 256          90:5:5                          0.229    0.028         0.505    0.378
                   80:10:10                        0.234    0.034         0.415    0.363
                   70:15:15                        0.209    0.028         0.428    0.466
