Article

Duck Egg Crack Detection Using an Adaptive CNN Ensemble with Multi-Light Channels and Image Processing

by Vasutorn Chaowalittawin 1, Woranidtha Krungseanmuang 1, Posathip Sathaporn 1 and Boonchana Purahong 2,*
1 Department of Robotics and Computational Intelligent Systems, School of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
2 Department of IoT and Information Engineering, School of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(14), 7960; https://doi.org/10.3390/app15147960
Submission received: 23 May 2025 / Revised: 7 July 2025 / Accepted: 15 July 2025 / Published: 17 July 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Duck egg quality classification is critical in farms, hatcheries, and salted egg processing plants, where cracked eggs must be identified before further processing or distribution. However, duck eggs present a unique challenge due to their white eggshells, which make cracks difficult to detect visually. In current practice, human inspectors use standard white light for crack detection, and many researchers have focused primarily on improving detection algorithms without addressing lighting limitations. Therefore, this paper presents duck egg crack detection using an adaptive convolutional neural network (CNN) model ensemble with multi-light channels. We began by developing a portable crack detection system capable of controlling various light sources to determine the optimal lighting conditions for crack visibility. A total of 23,904 images were collected and evenly distributed across four lighting channels (red, green, blue, and white), with 1494 images per channel. The dataset was then split into 836 images for training, 209 images for validation, and 449 images for testing per lighting condition. To enhance image quality prior to model training, several image pre-processing techniques were applied, including normalization, histogram equalization (HE), and contrast-limited adaptive histogram equalization (CLAHE). The Adaptive MobileNetV2 was employed to evaluate the performance of crack detection under different lighting and pre-processing conditions. The results indicated that, under red lighting, the model achieved 100.00% accuracy, precision, recall, and F1-score across almost all pre-processing methods. Under green lighting, the highest accuracy of 99.80% was achieved using the image normalization method. For blue lighting, the model reached 100.00% accuracy with the HE method. Under white lighting, the highest accuracy of 99.83% was achieved using both the original and HE methods.

1. Introduction

Duck eggs are oval and white, light green, or occasionally pink, depending on the breed. Their physical characteristics are similar to those of chicken eggs, but the shell is considerably thicker and the egg white is slightly thicker as well. The global duck egg market was valued at approximately 100 million USD in 2023 and is projected to grow at a CAGR of 6% from 2024 to 2030 [1]. The main duck egg businesses and production bases are in Asia, particularly for exporting duck eggs and for pickling egg yolks with salt [2] or boiling eggs. One important step in these processes is inspecting the eggshell for cracks. Preventing cracks in the shell of duck eggs before making salted and preserved eggs [3] is crucial for several reasons. If the shell is cracked, the egg white and yolk can leak out, leading to loss of product and flavor. Cracked eggs can also become a breeding ground for bacteria, increasing the risk of spoilage and foodborne illness, whereas an intact shell provides a barrier against contaminants. Maintaining the integrity of the shell also helps preserve the desired texture and appearance of the preserved eggs, resulting in a more appealing final product for the market. Traditionally, cracks were identified by listening to the sound of the eggs, and this technique has been developed extensively in research. For example, Chia-Chun Lai et al. [4] studied duck eggshell crack detection by nondestructive sonic measurement and analysis; sound measurement provided a simple and quantitative method for duck egg crack detection and classification, with overall accuracy rates of 89.7% and 87.6% for the calibration and prediction models, respectively, using five frequency bands (1500, 5000, 6000, 8500, and 10,000 Hz). Ke Sun et al. [5] presented a sequenced wave signal extraction and classification algorithm for duck egg crack detection; signals were captured from cracks, the air cell membrane, and shell texture, achieving 92.5% and 93.1% accuracy for duck eggs and salted duck eggs, respectively. Caguioa et al. [6] applied a transfer learning approach with ResNet-50 for duck egg quality classification based on shell appearance; the experiment covered three classes, namely Balut/Penoy, salted egg, and table egg, and the model accuracy averaged 83%. Li Sun et al. [7] demonstrated a method for detecting cracks in hen and duck eggs using acoustic resonance analysis, achieving a crack detection accuracy of 95.5%.
Numerous studies on eggshell crack detection using AI and image processing have also been conducted on chicken eggs [8]. Bhavya Botta et al. [9] presented eggshell crack detection using deep convolutional neural networks; the proposed CNN achieved 95.38% accuracy using 64-bit images, and the authors also proposed a deep transfer learning approach for detecting cracks on eggs [10]. Chenbo Shi et al. [11] introduced a real-time ConvNext-based U-Net with feature infusion for egg microcrack detection, reporting a Crack-IoU of 65.51% for cracks smaller than 20 μm and, for even smaller cracks (<5 μm), a Crack-IoU of 60.76% and an mIoU of 80.22%. Haokang Chen et al. [12] performed submillimeter crack detection of eggs with an improved light source, enhancing crack information using a laser light source together with fixed-threshold and adaptive-threshold methods to filter noise. Bao Guanjun et al. [13] proposed a machine vision method for identifying cracks on eggshells that is effective for complicated egg surfaces with dark spots and invisible micro-cracks, achieving a cracked egg recognition rate of 92.5%. Zekeriya Balci et al. [14] presented an artificial intelligence-based determination of cracks in eggshells using sound signals; their study showed the potential of SVM and ANN to significantly improve the efficiency of egg crack detection in automated systems. Yousef Abbaspour-Gilandeh et al. [15] reported the identification of cracks in eggshells using computer vision and the Hough transform, reaching 90.1% classification accuracy for intact and cracked eggs with an average processing time of 0.7 s. There are also applications of real-time monitoring [16] and classification of eggs based on industrial concepts. Muammer Turkoglu [17] developed a real-time system based on a continuously rotating setup that combines a retrained CNN with BiLSTM; the proposed solution classified cracked, bloody, dirty, and robust eggs with good computational performance. Boonchana Purahong et al. [18] presented image processing and computer vision for eggshell crack detection that minimizes unexpected noise in the image, and Woranidtha Krungseanmuang et al. [19] classified overlapping eggs based on image processing.
Crack detection using deep learning [20,21] is a significant area of research and application, not only in the agriculture industry but also in fields such as civil engineering, infrastructure maintenance, and materials science. Several applications use deep learning models to detect cracks in various materials, including concrete [22], steel [23], and even pipelines [24,25]. For example, Luqman Ali et al. [26] evaluated the performance of deep CNN-based models (VGG-16, VGG-19, ResNet-50, and Inception V3) for crack detection on concrete structures. Zhun Fan et al. [27] proposed an ensemble of deep convolutional neural networks for automatic pavement crack detection and measurement; the method is efficient for both crack detection and measurement, offering significant improvements in precision, recall, and F1-score on public datasets. Xinlin Chen et al. [28] proposed an improved YOLOv5 detection model that efficiently detects cracks in real time as an early warning system for coal mine safety.
The reviewed literature highlights that duck egg crack detection has been explored using various methods, including acoustic techniques, image processing, and transfer learning. However, several limitations and research gaps remain, such as sensitivity to environmental noise, limited accuracy of deep learning models, low processing efficiency, and uncontrolled lighting conditions. To address these gaps, this paper presents a comparative analysis of multi-light sources in deep learning-based duck egg crack detection by developing a controlled lighting environment that simulates different light frequencies. Subsequently, adaptive CNN models were used to evaluate and compare the performance and accuracy of crack detection. The models were assessed using a confusion matrix as well as evaluation metrics, including accuracy, precision, recall, and F1-score.

2. Materials and Methods

The experimental workflow began with the development of a hardware system for dataset acquisition. Next, image processing techniques were applied to reduce noise and enhance image quality before training the model. We then proceeded with model selection to identify the most suitable CNN architecture for the duck egg dataset. After selecting the best-performing model, adaptive techniques were applied to further optimize its performance. Finally, we evaluated the results to assess the system’s effectiveness. Figure 1 shows the experimental workflow.
We implemented a portable egg crack detection system by simulating environmental lighting and conditions similar to those found in a duck egg processing plant. The control system, built on a Raspberry Pi, manages four distinct lighting channels and is powered by a custom-designed backup battery. An integrated electrical circuit was developed to combine all components onto a universal PCB. Additionally, the egg tray was 3D printed using filament material and designed to match the dimensions of standard industry-grade trays. Figure 2 shows the overall system diagram.

2.1. Hardware

Image acquisition began under different lighting conditions directed at eggs placed on a tray. Figure 3 shows the structural layout, including the positioning of the power supply, LED components, circuitry, and the egg tray, which holds four eggs per batch. To ensure clear visibility of the reflected light from the eggs, all image captures were conducted in a dark room.
The system uses a high-power 3 W RGB LED as the light source. Due to the potential heat generated during prolonged use, the LED is mounted on an aluminum plate for effective heat dissipation. Each color channel of the LED requires a different operating voltage: 2.2–2.4 V for red and 3.2–3.4 V for both green and blue. To ensure stable voltage supply for each channel, a step-down switching regulator is employed.
In the experiment, the LED is lit inside a box. To prevent external factors, such as fluorescent room lighting, from affecting the experiment, a battery powers both the Raspberry Pi and the LED. This setup allows the box to remain as enclosed as possible, ensuring consistent lighting conditions during image acquisition.
Figure 4 presents the circuit diagram used to control the LED colors via a Raspberry Pi and AO3400A MOSFETs. The LED color can be changed by pressing a tactile switch. The AO3400A MOSFET was chosen for its compatibility with the low-voltage GPIO signals of the Raspberry Pi, making it ideal for switching operations and low-voltage power control.
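The paper does not list the exact GPIO pin assignments, so the pins, the active-high gate wiring, and the color-cycling behavior in the sketch below are assumptions; it only illustrates how a Raspberry Pi can switch the three MOSFET-driven LED channels and cycle colors with the tactile switch.

```python
# Minimal lighting-controller sketch using gpiozero; BCM pin numbers are hypothetical.
from gpiozero import LED, Button
from signal import pause

# One GPIO line per AO3400A gate driving the R, G, and B channels of the 3 W LED.
channels = {"red": LED(17), "green": LED(27), "blue": LED(22)}  # assumed pins
button = Button(4, pull_up=True)  # tactile switch that cycles the light color

modes = ["red", "green", "blue", "white"]  # white = all three channels on
state = {"index": 0}

def apply_mode():
    mode = modes[state["index"]]
    for name, led in channels.items():
        # Turn a channel on if it matches the mode, or turn all on for white.
        led.on() if (mode == "white" or name == mode) else led.off()
    print(f"Lighting mode: {mode}")

def next_mode():
    state["index"] = (state["index"] + 1) % len(modes)
    apply_mode()

button.when_pressed = next_mode
apply_mode()  # start in red
pause()       # keep the script alive, waiting for button presses
```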

2.2. Dataset

The dataset used in this research was collected in collaboration with Consolutech Company (Bangkok, Thailand) from a real duck egg processing plant environment. A total of 23,904 images were acquired under four different lighting conditions: red, green, blue, and white. Each lighting condition included 1494 images, consisting of 750 cracked eggs and 744 control (non-cracked) eggs. The images were further processed using image enhancement techniques. For each condition, the dataset was divided into 836 images for training, 209 images for validation, and 449 images for testing. Then, the images went through a series of image processing steps. First, the original images were normalized. Next, histogram equalization (HE) was applied to enhance contrast. Finally, contrast limited adaptive histogram equalization (CLAHE) was used for further enhancement. To increase the dataset size and improve model robustness, data augmentation techniques, such as rotation, zoom in, and zoom out, were applied. Examples of the processed images are shown in Figure 5.
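The augmentation parameters (rotation and zoom ranges) are not reported in the text, so the values below are illustrative assumptions; the sketch uses Keras' ImageDataGenerator, and the folder layout is hypothetical.

```python
# Illustrative augmentation pipeline (rotation, zoom in/out); ranges are assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # scale pixel values to [0, 1]
    rotation_range=15,       # random rotation in degrees (assumed value)
    zoom_range=(0.9, 1.1),   # zoom out / zoom in by up to 10% (assumed value)
)

train_gen = train_datagen.flow_from_directory(
    "dataset/red/train",     # hypothetical layout: one directory per lighting channel
    target_size=(224, 224),  # MobileNetV2 default input size
    batch_size=32,
    class_mode="binary",     # cracked vs. control
)
```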

2.2.1. Normalization

Image normalization is used to adjust the pixel intensity values of an image to a predefined range. This range is typically between 0 and 255 for images with 8-bit depth, where 0 represents black and 255 represents white. Normalization can be performed to improve the contrast of an image or to standardize the pixel values for further processing. The normalization is calculated as shown in Equation (1):
I_{normalization} = \frac{I - I_{\min}}{I_{\max} - I_{\min}} \times (n_{\max} - n_{\min}) + n_{\min} \quad (1)
where I is the original pixel intensity, I_min and I_max are the minimum and maximum intensities in the image, and n_min and n_max define the target intensity range.
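A minimal NumPy sketch of Equation (1), assuming an 8-bit target range of [0, 255]:

```python
import numpy as np

def normalize(image: np.ndarray, n_min: float = 0.0, n_max: float = 255.0) -> np.ndarray:
    """Min-max normalization of pixel intensities to [n_min, n_max], as in Equation (1)."""
    i_min, i_max = float(image.min()), float(image.max())
    scaled = (image.astype(np.float32) - i_min) / (i_max - i_min)
    return scaled * (n_max - n_min) + n_min
```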

2.2.2. Histogram Equalization (HE)

HE is used to enhance image contrast by redistributing pixel intensity values. It spreads out the most frequent intensity levels, allowing for areas with low contrast to gain more definition and visual clarity. The process involves computing the cumulative distribution function (CDF) of the image histogram and using it to map the original pixel intensities to new values. This results in a more uniform brightness distribution, making hidden details more visible. The HE is calculated as shown in Equation (2):
T(r_k) = (L - 1) \sum_{j=0}^{k} \frac{n_j}{n} \quad (2)
r_k is the original intensity;
n_j is the number of pixels with intensity r_j;
n is the total number of pixels in the image;
L is the number of possible intensity levels;
T(r_k) is the new intensity value.
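The paper does not state whether HE was applied per color channel or to a luminance channel; the OpenCV sketch below assumes equalization of the Y channel of a YCrCb conversion so that hue is preserved.

```python
import cv2

def equalize_histogram(image_bgr):
    """Global histogram equalization (Equation (2)) applied to the luminance channel."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])  # equalize Y only
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```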

2.2.3. Contrast Limited Adaptive Histogram Equalization (CLAHE)

CLAHE is an advanced contrast enhancement technique that builds upon standard histogram equalization. It is particularly effective in situations where normal equalization leads to over-enhancement or amplifies noise. The CLAHE process is calculated step-by-step as shown in Equations (3)–(6).
First, the histogram H ( r k ) is computed, where r k represents the pixel intensity level. If any bin in the histogram exceeds the clip limit T C , it is clipped. The total amount of excess pixels is denoted as E . This excess is then redistributed equally among all histogram bins, where L is the number of gray levels. Finally, histogram equalization is applied within each tile using the normalized cumulative distribution function (CDF), where n is the number of pixels in the tile.
H_{clipped}(r_k) = \min\left(H(r_k), T_C\right) \quad (3)
E = \sum_{r_k} \left(H(r_k) - T_C\right), \quad \text{where } H(r_k) > T_C \quad (4)
H_{final}(r_k) = H_{clipped}(r_k) + \frac{E}{L} \quad (5)
T(r_k) = (L - 1) \sum_{j=0}^{k} \frac{H_{final}(r_j)}{n} \quad (6)
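A corresponding OpenCV sketch for CLAHE; the clip limit and 8 × 8 tile grid below are assumed defaults, not values reported in the paper.

```python
import cv2

def apply_clahe(image_bgr, clip_limit=2.0, tile_grid=(8, 8)):
    """CLAHE (Equations (3)-(6)) on the luminance channel; parameters are assumptions."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = clahe.apply(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```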

2.3. Model Selection

Given the large size of the dataset, multiple convolutional neural network (CNN) models were each trained for only one epoch; this was deemed sufficient to observe the preliminary performance trends of each model, as shown in Table 1.
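A screening loop of this kind might look like the following sketch; the candidate list is truncated for brevity, and train_gen and val_gen stand for the data generators outlined in Section 2.2 (their exact construction is an assumption).

```python
import tensorflow as tf

# Subset of the candidate architectures from Table 1 (illustrative only).
candidates = {
    "MobileNetV2": tf.keras.applications.MobileNetV2,
    "DenseNet121": tf.keras.applications.DenseNet121,
    "ResNet50V2": tf.keras.applications.ResNet50V2,
}

results = {}
for name, builder in candidates.items():
    base = builder(input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(2, activation="softmax")(x)  # cracked vs. control
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # One epoch only, to observe preliminary trends; generators are assumed to exist.
    history = model.fit(train_gen, validation_data=val_gen, epochs=1, verbose=0)
    results[name] = history.history["val_accuracy"][-1]

print(sorted(results.items(), key=lambda kv: kv[1], reverse=True))
```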
The results indicated that MobileNetV2 [29] achieved the highest accuracy when processing images. Thus, MobileNetV2 was selected as the base model for transfer learning. Its compact architecture includes a convolutional layer with 32 filters, 17 residual bottleneck blocks, and an inverted residual structure.
To improve model performance in duck egg crack detection, an adaptive MobileNetV2 was developed based on the original MobileNetV2 architecture using a transfer learning approach. Blocks 0 to 13 were frozen to retain the pretrained weights from ImageNet, while only blocks 14 to 16 were unfrozen and fine-tuned using task-specific data. This adaptation enabled the model to better capture the distinctive features of egg images under various lighting conditions. The architecture also retains the bottleneck module structure, which includes pointwise and depthwise convolutions with residual connections, helping to reduce the number of parameters and increase model depth without causing gradient vanishing issues as shown in Figure 6.
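A hedged Keras sketch of this adaptation follows: the first 116 layers (blocks 0 to 13) are frozen as described in Section 3.2.2, while the classification head (global average pooling, dropout, and a two-unit softmax) and the learning rate are assumptions, since the paper does not specify them.

```python
import tensorflow as tf

# ImageNet-pretrained backbone without its classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)

# Freeze blocks 0-13 (the first 116 layers); blocks 14-16 remain trainable for fine-tuning.
for layer in base.layers[:116]:
    layer.trainable = False

# Hypothetical classification head (not described in the paper).
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)  # cracked vs. control

model = tf.keras.Model(inputs=base.input, outputs=outputs)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # assumed learning rate
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```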

2.4. Experimental Environment Settings and Model Evaluation Indicator

The proposed model was implemented using Python 3.10 on Ubuntu 22.04 LTS, with an Intel(R) Xeon(R) CPU @ 2.20 GHz, 53 GB of RAM, and a GPU with 22.5 GB of memory. The performance of the model was measured using the confusion matrix, F1-score, accuracy, precision, and recall.
A confusion matrix is a visual tool that helps assess the effectiveness of a classification model. It presents a summary of the actual versus predicted classifications, making it easier to identify how well the model is performing for each class. The matrix includes four key components as explained below.
  • TP (true positives): Correctly predicted positive cases;
  • TN (true negatives): Correctly predicted negative cases;
  • FP (false positives): Incorrectly predicted positive cases;
  • FN (false negatives): Incorrectly predicted negative cases.
Accuracy is the proportion of total correct predictions made by the model, including both true positives and true negatives, out of all predictions. It can be calculated using Formula (7):
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (7)
Precision is the proportion of correctly predicted positive cases out of all cases that the model predicted as positive, measuring the accuracy of positive predictions. It is calculated using Formula (8):
\text{Precision} = \frac{TP}{TP + FP} \quad (8)
Recall (sensitivity) is the proportion of actual positive cases that the model correctly identified, measuring the model’s ability to detect positive instances. It is calculated using Formula (9):
\text{Recall} = \frac{TP}{TP + FN} \quad (9)
The F1-score is the harmonic mean of precision and recall, providing a single metric that balances both precision and recall, especially useful when dealing with imbalanced datasets. It is calculated using Formula (10):
\text{F1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \quad (10)
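All four metrics can be computed directly from the confusion-matrix counts, as in the sketch below; the counts in the usage line are illustrative only.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the metrics of Formulas (7)-(10) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Example with made-up counts for a 449-image test set.
print(classification_metrics(tp=224, tn=224, fp=1, fn=0))
```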

3. Results

3.1. Hardware Implementation

The plate for mounting the LED power bank, Raspberry Pi, and universal PCB board was made from 5 mm thick acrylic. The egg tray and the battery case were printed with a 3D printer using ABS material. Figure 7 shows the hardware: (a) the PCB with the switch that controls the light color; (b) the step-down module under test; (c) the step-down module and battery, which serve as the power sources for the light; (d) the step-down switching regulator protected with insulating tape; (e) the LED operation check confirming that the LED works; and (f) the battery in its case. When everything is assembled, the result is a light source whose color can be changed, as shown in Figure 8.

3.2. Model Training

3.2.1. MobileNetV2

MobileNetV2 was selected as the base model for transfer learning. To prevent overfitting, the early stopping technique was applied. The training results showed that the red light achieved a true positive (TP) rate of 1.00 for all image processing techniques, except for the original, which yielded a TP of 0.99, as shown in Figure 9. This demonstrates the model’s excellent performance in accurately predicting the red light. Among the techniques, histogram equalization (HE) and CLAHE produced the lowest false positive (FP) rate of 0.0039. When comparing the image processing methods from best to worst based on overall performance, CLAHE and HE are tied for first, followed by normalization and, lastly, the original.
For the green light, all image processing techniques provide TP values between 0.98 and 0.99, which shows very good performance. HE and CLAHE slightly increase the false negative (FN) values compared to the original. Normalization gives a TP of 0.99, as shown in Figure 10, which is better than both CLAHE and HE. Therefore, the performance for the green light from best to worst is normalization, HE equals CLAHE, and, lastly, the original.
For the blue light, the original image gives a TP of 1 and an FN of 0. However, the FP is as high as 0.098, showing a strong tendency toward over-predicting positive cases, as shown in Figure 11. This means that many good eggs are predicted as bad ones. On the other hand, CLAHE reduces the FP to 0.041 while still maintaining a TP of 1. HE increases the FN and also gives the lowest true negative (TN) value at 0.94. Based on these results, the performance from best to worst for the blue light is CLAHE, original, normalization, and, lastly, HE.
For the white light, all image processing techniques give the lowest TP values compared to the other colors. The original and normalization give TP values of 0.98 and 0.97, respectively. HE and CLAHE reduce the TP to 0.94 and 0.93, and they also increase FP. This shows that HE and CLAHE are not suitable for white lighting conditions as shown in Figure 12. The performance order for this class from best to worst is original, normalization, HE, and, lastly, CLAHE. Table 2 presents the detailed analysis results.
Figure 13 shows the training and validation loss curves of the MobileNetV2 model, illustrating the application of the early stopping technique. The validation loss is monitored throughout the training process, and training is terminated when no improvement is observed for three consecutive epochs. As seen in the graph, the validation loss begins to increase after a certain point, indicating a potential risk of overfitting. Early stopping addresses this issue by restoring the model weights from the epoch where the validation loss was at its minimum, thereby preserving the best-performing version of the MobileNetV2 model.
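In Keras terms, the described behavior corresponds to an EarlyStopping callback with a patience of three epochs and best-weight restoration; the epoch budget in the commented call is an assumption.

```python
import tensorflow as tf

# Stop after three epochs without validation-loss improvement and restore the
# weights from the epoch with the minimum validation loss, as described above.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=3,
    restore_best_weights=True,
)

# history = model.fit(train_gen, validation_data=val_gen, epochs=50, callbacks=[early_stop])
# "model", "train_gen", and "val_gen" refer to the objects sketched earlier; epochs=50 is assumed.
```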
The results of the experiment, shown in Table 3, explain how the MobileNetV2 model performed under four different lighting conditions, which are red, green, blue, and white. Different image pre-processing techniques were applied, including using raw images with no pre-processing, CLAHE, and HE. Under red lighting, both CLAHE and HE gave the highest scores, reaching 99.80% for accuracy, precision, recall, and F1-score. For green lighting, HE worked best and helped the model achieve 98.22% accuracy and F1-score, while the other methods produced lower results. Under blue lighting, the model gave the best results when no pre-processing was applied, achieving 99.55% in all evaluation scores. The same result occurred with white lighting, where raw images without pre-processing again gave the best performance at 98.34%, and CLAHE gave the lowest performance.

3.2.2. Adaptive MobileNetV2

To preserve the general feature extraction capabilities of the pretrained MobileNetV2, the first 116 layers were frozen during training. This ensures that low-level and mid-level visual features, which are transferable across tasks, remain unchanged. The remaining layers were fine-tuned to adapt the model to the specific characteristics of the target dataset. The training results showed that the red light achieved very high accuracy across all pre-processing methods. Both true positive (TP) and true negative (TN) rates remained at 1 in every case, with no errors from false negatives (FN) or false positives (FP), except when using CLAHE. In that case, FP slightly increased to 0.0077, causing TN to drop to 0.99 as shown in Figure 14. However, this is still considered an excellent result and does not significantly affect the model’s overall performance. Therefore, it can be concluded that red-light data is highly stable regardless of the pre-processing technique applied.
For the green light, all image pre-processing techniques result in TP values ranging from 0.97 to 1.0, indicating good overall performance. However, HE and CLAHE slightly reduce the TP values and increase the FN values compared to the original and normalization methods. Specifically, CLAHE yields the lowest TP at 0.97, with the highest FN of 0.031, while normalization achieves a TP of 1.0 and FP of 0, which outperforms all other methods. As shown in Figure 15, the best performance under green light conditions is achieved with normalization, followed by the original, with HE and CLAHE performing the worst.
For the blue light, both the original and HE pre-processing methods result in excellent performance, with TP = 1 and FN = 0 in both cases. In particular, blue HE achieves the best result, with FP = 0, while the original method has a slightly higher FP of 0.0043, which is still acceptable. In contrast, the normalization method produces the highest FP value of 0.035 and the lowest TN of 0.97 among all cases, indicating a significant drop in performance. Blue CLAHE shows similar performance to the original and HE methods but still introduces a small FP. As shown in Figure 16, the best methods for blue light data are HE and the original, while normalization should be avoided.
For the white light, most pre-processing methods maintain high performance, with TP = 1 in nearly all cases. Only CLAHE slightly reduces the TP to 0.99. All approaches produce low FP values, and white HE achieves the best result, with FP = 0 and FN = 0.0034. Both normalization and CLAHE result in slightly higher FP values of 0.0064 and 0.01, respectively, leading to a small reduction in TN. However, the performance remains acceptable. As shown in Figure 17, the best-performing methods for white light data are HE and the original, while CLAHE offers slightly lower performance. Table 4 presents the detailed analysis results of adaptive MobileNetV2.
Figure 18 shows the training and validation curves of the adaptive MobileNetV2. Both accuracy and loss graphs show desirable trends, with the accuracy increasing steadily and the loss decreasing consistently across epochs, indicating effective learning and model stability.
The results of the experiment, shown in Table 5, explain how the adaptive MobileNetV2 model performed under four different lighting conditions, which are red, green, blue, and white. Different image pre-processing techniques were applied, including using raw images with no pre-processing, CLAHE, and HE. Under red lighting, all pre-processing methods gave perfect results, except CLAHE, with the model achieving 100.00% for accuracy, precision, recall, and F1-score. For green lighting, the normalization methods produced the highest scores at 99.80%, while original, HE, and CLAHE resulted in slightly lower performance, with CLAHE giving the lowest values. Under blue lighting, HE provided the best performance, reaching 100.00% across all evaluation metrics, while normalization caused the most significant drop, with the lowest accuracy at 98.22%. For white lighting, both the original and HE methods gave the highest scores at 99.83%, whereas normalization and CLAHE showed slightly lower but still high results.

4. Conclusions

This paper presents a deep learning approach for detecting cracks in duck eggs by applying transfer learning on MobileNetV2 and introducing an adaptive version of the model designed for four different lighting channels: red, green, blue, and white. Image pre-processing techniques, specifically normalization, histogram equalization (HE), and CLAHE, were applied to enhance image quality. Additionally, a model comparison was conducted to identify the most suitable architecture for the dataset.
The original MobileNetV2 model was evaluated under all four lighting conditions using three pre-processing methods: raw images (no pre-processing), CLAHE, and HE. Under red lighting, both CLAHE and HE yielded the highest scores, reaching 99.80% in accuracy, precision, recall, and F1-score. For green lighting, HE performed best, achieving 98.22% in both accuracy and F1-score, while the other methods showed lower performance. Under blue lighting, the model performed best with raw images, achieving 99.55% across all evaluation metrics. Similarly, under white lighting, raw images again delivered the best performance, with 98.34% accuracy, whereas CLAHE resulted in the lowest performance.
To further improve performance and accuracy, an adaptive version of MobileNetV2 was introduced through fine-tuning. The model was tested under the same four lighting conditions with the same pre-processing methods. Under red lighting, all pre-processing techniques except CLAHE achieved 100% across all evaluation metrics. For green lighting, normalization provided the highest scores at 99.80%, with the original method close behind, while HE and CLAHE performed slightly worse, with CLAHE producing the lowest values. Under blue lighting, HE achieved 100%, while normalization resulted in the lowest accuracy at 98.22%. For white lighting, both the original and HE methods achieved the highest accuracy at 99.83%, with normalization and CLAHE trailing slightly behind.
These findings highlight the importance of selecting appropriate pre-processing methods tailored to each lighting condition to enhance model accuracy and reliability, especially for tasks that rely on subtle visual differences.
Moreover, the experiment was conducted in a real-world duck egg processing facility, demonstrating the practical reliability of the system and earning trust from both researchers and business stakeholders. Future work may expand on this foundation by scaling the algorithm for full integration into industrial duck egg processing lines. This advancement would contribute to increased production efficiency by reducing losses from cracked eggs and improving overall product quality and processing speed.

Author Contributions

Validation, P.S.; Writing—original draft & editing, V.C.; Writing—review & editing, W.K.; Funding acquisition, B.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the co-authors. The dataset was obtained through a collaboration with Consolutech Co., Ltd. and is not publicly available due to privacy and ethical restrictions.

Acknowledgments

We sincerely thank Consolutech Co., Ltd. for their support and collaboration, particularly for providing resources and assistance essential to this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Virtue Market Research. Duck Eggs Market. Virtue Market Research. Available online: https://virtuemarketresearch.com/report/duck-eggs-market (accessed on 23 May 2025).
  2. Kaewmanee, T.; Benjakul, S.; Visessanguan, W. Changes in chemical composition, physical properties and microstructure of duck egg as influenced by salting. Food Chem. 2009, 112, 560–569. [Google Scholar] [CrossRef]
  3. Arthur, J.; Wiseman, K.; Cheng, K.M. Salted and preserved duck eggs: A consumer market segmentation analysis. J. Food Process Eng. 2015, 94, 1942–1956. [Google Scholar] [CrossRef] [PubMed]
  4. Lai, C.-C.; Li, C.-H.; Huang, K.-J.; Cheng, C.-W. Duck Eggshell Crack Detection by Nondestructive Sonic Measurement and Analysis. Sensors 2021, 21, 7299. [Google Scholar] [CrossRef] [PubMed]
  5. Sun, K.; Ma, L.; Pan, L.; Tu, K. Sequenced wave signal extraction and classification algorithm for duck egg crack on-line detection. Comput. Electron. Agric. 2017, 142, 429–439. [Google Scholar] [CrossRef]
  6. Caguioa, J.V.B.D.P.; Guinto, R.N.E.; Mesias, L.R.T.; De Goma, J.C. Duck Egg Quality Classification Based on its Shell Visual Property Through Transfer Learning Using ResNet-50. In Proceedings of the 12th Annual International Conference on Industrial Engineering and Operations Management, Istanbul, Turkey, 7–10 March 2022. [Google Scholar]
  7. Sun, L.; Feng, S.; Chen, C.; Liu, X.; Cai, J. Identification of eggshell crack for hen egg and duck egg using correlation analysis based on acoustic resonance method. J. Food Process Eng. 2020, 43, e13430. [Google Scholar] [CrossRef]
  8. Liu, C.; Wen, H.; Yin, G.; Ling, X.; Ibrahim, S.M. Research on Intelligent Recognition Method of Egg Cracks Based on EfficientNet Network Model. J. Phys. Conf. Ser. 2023, 2560, 012015. [Google Scholar] [CrossRef]
  9. Botta, B.; Gattam, S.S.R.; Datta, A.K. Eggshell crack detection using deep convolutional neural networks. J. Food Eng. 2022, 315, 110798. [Google Scholar] [CrossRef]
  10. Botta, B.; Datta, A.K. Deep transfer learning-based approach for detection of cracks on eggs. J. Food Process Eng. 2023, 46, e14425. [Google Scholar] [CrossRef]
  11. Shi, C.; Li, Y.; Jiang, X.; Sun, W.; Zhu, C.; Mo, Y.; Yan, S.; Zhang, C. Real-Time ConvNext-Based U-Net with Feature Infusion for Egg Microcrack Detection. Agriculture 2024, 14, 1655. [Google Scholar] [CrossRef]
  12. Chen, H.; Ma, J.; Zhuang, Q.; Zhao, S.; Xie, Y. Submillimeter Crack Detection Technology of Eggs Based on Improved Light Source. In Proceedings of the IOP Conference Series: Earth and Environmental Science, Guangzhou, China, 22–25 January 2021; Volume 697, p. 012018. [Google Scholar]
  13. Bao, G.; Jia, M.; Xun, Y.; Cai, S.; Yang, Q. Cracked egg recognition based on machine vision. Comput. Electron. Agric. 2019, 158, 159–166. [Google Scholar] [CrossRef]
  14. Balci, Z.; Yabanova, I. Artificial Intelligence Based Determination of Cracks in Eggshell Using Sound Signals. Sak. Univ. J. Sci. 2022, 26, 579–589. [Google Scholar] [CrossRef]
  15. Abbaspour-Gilandeh, Y.; Omid, M.; Alimardani, R. Identification of cracks in eggs shell using computer vision and hough transform. Yüzüncü Yıl Üniversitesi Tarım Bilim. Derg. 2018, 28, 375–383. [Google Scholar] [CrossRef]
  16. Kanjanasurat, I.; Krungseanmuang, W.; Chaowalittawin, V.; Purahong, B. Egg-Counting System Using Image Processing and a Website for Monitoring. In Proceedings of the 7th International Conference on Engineering, Applied Sciences and Technology (ICEAST), Pattaya, Thailand, 1–3 July 2021. [Google Scholar]
  17. Türkoğlu, M. Defective egg detection based on deep features and Bidirectional Long-Short-Term-Memory. Comput. Electron. Agric. 2021, 185, 106152. [Google Scholar] [CrossRef]
  18. Purahong, B.; Chaowalittawin, V.; Krungseanmuang, W.; Sathaporn, P.; Anuwongpinit, T.; Lasakul, A. Crack Detection of Eggshell Using Image Processing and Computer Vision. J. Phys. Conf. Ser. 2022, 2261, 012021. [Google Scholar] [CrossRef]
  19. Purahong, B.; Krungseanmuang, W.; Chaowalittawin, V.; Pumee, T.; Kanjanasurat, I.; Lasakul, A. Classification of Overlapping Eggs Based on Image Processing. J. Phys. Conf. Ser. 2022, 2261, 012023. [Google Scholar] [CrossRef]
  20. Li, H.; Wang, W.; Wang, M.; Li, L.; Vimlund, V. A review of deep learning methods for pixel-level crack detection. J. Traffic Transp. Eng. (Engl. Ed.) 2022, 9, 945–968. [Google Scholar] [CrossRef]
  21. Cha, Y.J.; Choi, W.; Büyüköztürk, O. Deep Learning-Based Crack Damage Detection Using Convolutional Neural Networks. Comput.-Aided Civ. Infrastruct. Eng. 2017, 32, 361–378. [Google Scholar] [CrossRef]
  22. Ali, R.; Chuah, J.H.; Talip, M.S.A.; Mokhtar, N.; Shoaib, M.A. Structural crack detection using deep convolutional neural networks. Autom. Constr. 2022, 133, 103989. [Google Scholar] [CrossRef]
  23. Chen, K.; Huang, Z.; Chen, C.; Cheng, Y.; Shang, Y.; Zhu, P.; Jv, H.; Li, L.; Li, W.; Wang, S. Surface Crack Detection of Steel Structures in Railroad Industry Based on Multi-Model Training Comparison Technique. Processes 2023, 11, 1208. [Google Scholar] [CrossRef]
  24. Shen, Y.; Wu, J.; Chen, J.; Weiwei, Z.; Yang, X.; Ma, H. Quantitative Detection of Pipeline Cracks Based on Ultrasonic Guided Waves and Convolutional Neural Network. Sensors 2024, 24, 1204. [Google Scholar] [CrossRef] [PubMed]
  25. Wu, J.; Hao, H.; Li, J.; Wang, Y.; Wu, Z.; Ma, H. Defect detection in pipe structures using stochastic resonance of Duffing oscillator and ultrasonic guided waves. Int. J. Press. Vessel. Pip. 2020, 187, 104168. [Google Scholar] [CrossRef]
  26. Ali, L.; Alnajjar, F.; Jassmi, H.A.; Gocho, M.; Khan, W.; Serhani, M.A. Performance Evaluation of Deep CNN-Based Crack Detection and Localization Techniques for Concrete Structures. Sensors 2021, 21, 1688. [Google Scholar] [CrossRef] [PubMed]
  27. Fan, Z.; Li, C.; Chen, Y.; Di Mascio, P.; Chen, X.; Zhu, G.; Loprencipe, G. Ensemble of Deep Convolutional Neural Networks for Automatic Pavement Crack Detection and Measurement. Coatings 2020, 10, 152. [Google Scholar] [CrossRef]
  28. Chen, X.; Meng, F.; Zhang, C.; Hu, D.; Yang, F.; Lu, J. Surface Crack Detection Method for Coal Rock Based on Improved YOLOv5. Appl. Sci. 2022, 12, 9695. [Google Scholar] [CrossRef]
  29. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520. [Google Scholar]
Figure 1. Experimental workflow.
Figure 2. Overview system diagram.
Figure 3. CAD data.
Figure 4. Circuit diagram.
Figure 5. Duck egg dataset.
Figure 6. Adaptive MobileNetV2 architecture.
Figure 7. Hardware. (a) PCB with switch, (b) step-down module, (c) step-down module with battery, (d) protection of the step-down switching regulator with insulating tape, (e) LED operation check, (f) battery in the case.
Figure 8. Light control and egg tray.
Figure 9. Confusion matrix of MobileNetV2 under red light. (a) Original image, (b) normalization, (c) HE, and (d) CLAHE.
Figure 10. Confusion matrix of MobileNetV2 under green light. (a) Original image, (b) normalization, (c) HE, and (d) CLAHE.
Figure 11. Confusion matrix of MobileNetV2 under blue light. (a) Original image, (b) normalization, (c) HE, and (d) CLAHE.
Figure 12. Confusion matrix of MobileNetV2 under white light. (a) Original image, (b) normalization, (c) HE, and (d) CLAHE.
Figure 13. Training and validation accuracy and loss curves of MobileNetV2. (a) Red light, (b) green light, (c) blue light, and (d) white light.
Figure 14. Confusion matrix of adaptive MobileNetV2 under red light. (a) Original image, (b) normalization, (c) HE, and (d) CLAHE.
Figure 15. Confusion matrix of adaptive MobileNetV2 under green light. (a) Original image, (b) normalization, (c) HE, and (d) CLAHE.
Figure 16. Confusion matrix of adaptive MobileNetV2 under blue light. (a) Original image, (b) normalization, (c) HE, and (d) CLAHE.
Figure 17. Confusion matrix of adaptive MobileNetV2 under white light. (a) Original image, (b) normalization, (c) HE, and (d) CLAHE.
Figure 18. Training and validation accuracy and loss curves of the adaptive MobileNetV2. (a) Red light, (b) green light, (c) blue light, and (d) white light.
Table 1. Comparison of CNN model accuracy after one epoch of training on original images.

Model               Validation Accuracy (%)
                    Red      Green    Blue     White
DenseNet121         98.50    93.28    89.96    90.00
DenseNet169         99.62    85.82    92.89    91.87
DenseNet201         98.87    93.28    82.01    91.87
EfficientNetB0      52.63    54.85    44.35    45.31
EfficientNetB1      57.14    45.15    44.35    45.94
EfficientNetB2      47.37    45.15    47.28    54.69
EfficientNetB3      52.63    55.22    52.72    60.94
EfficientNetB4      54.51    54.85    55.23    55.94
EfficientNetB5      56.39    56.34    50.63    46.56
EfficientNetB6      47.37    45.15    51.05    53.75
EfficientNetB7      56.02    54.85    51.05    47.50
InceptionResNetV2   98.50    90.30    90.79    80.31
InceptionV3         98.12    91.79    80.75    85.62
MobileNet           99.62    92.16    86.19    93.12
MobileNetV2         99.62    91.04    94.14    93.75
MobileNetV3Large    59.77    69.40    63.60    61.56
MobileNetV3Small    52.63    54.85    55.65    49.38
NASNetMobile        98.87    85.45    87.87    88.44
ResNet101           48.87    67.54    57.74    59.38
ResNet101V2         98.50    91.79    90.79    90.00
ResNet152           63.53    58.96    64.44    61.87
ResNet152V2         96.99    92.16    87.45    91.25
ResNet50            56.39    54.85    70.71    54.06
ResNet50V2          99.25    92.91    93.72    93.44
VGG16               92.11    74.25    67.78    75.94
VGG19               92.86    83.58    69.87    74.37
Xception            96.99    87.69    85.36    90.94
Bold numbers represent the highest accuracy for each light source.
Table 2. Summary of the best pre-processing results for each color of MobileNetV2.

Color    Best Pre-Processing   Details
Red      HE and CLAHE          Lowest FP and FN = 0
Green    Normalization         Highest TP, FP lower than HE
Blue     CLAHE                 Reduces FP most effectively without reducing TP
White    Original              HE/CLAHE degrade performance
Table 3. Output of the experiments with MobileNetV2.

Light Color   Pre-Processing   Model         Accuracy (%)   Precision (%)   Recall (%)   F1-Score (%)
Red           Original         MobileNetV2   99.60          99.63           99.57        99.60
              Normalization                  99.20          99.16           99.23        99.19
              HE                             99.80          99.80           99.80        99.80
              CLAHE                          99.80          99.80           99.80        99.80
Green         Original                       97.82          97.83           98.80        97.82
              Normalization                  97.23          97.36           97.16        97.22
              HE                             98.22          98.21           98.22        98.22
              CLAHE                          97.23          97.24           97.19        97.21
Blue          Original                       99.55          99.55           99.56        99.55
              Normalization                  97.33          97.34           97.32        97.33
              HE                             96.21          96.21           96.29        96.21
              CLAHE                          97.77          97.89           97.71        97.77
White         Original                       98.34          98.34           98.34        98.34
              Normalization                  97.67          97.68           97.69        97.67
              HE                             94.68          94.68           94.65        94.67
              CLAHE                          94.02          94.02           94.00        94.01
Bold numbers represent the highest accuracy for each light source.
Table 4. Summary of the best pre-processing results for each color of adaptive MobileNetV2.

Color    Best Pre-Processing           Details
Red      Original, Normalization, HE   Stable, no need to customize
Green    Normalization                 CLAHE has highest FN
Blue     HE                            Normalization has highest FP
White    Original, HE                  CLAHE has a slight TP reduction effect
Table 5. Output of the experiments with adaptive MobileNetV2.

Light Color   Pre-Processing   Model                  Accuracy (%)   Precision (%)   Recall (%)   F1-Score (%)
Red           Original         Adaptive MobileNetV2   100.00         100.00          100.00       100.00
              Normalization                           100.00         100.00          100.00       100.00
              HE                                      100.00         100.00          100.00       100.00
              CLAHE                                   99.60          99.59           99.61        99.60
Green         Original                                99.60          99.58           99.63        99.60
              Normalization                           99.80          99.79           99.81        99.80
              HE                                      99.01          99.01           99.02        99.01
              CLAHE                                   98.42          98.44           98.44        98.42
Blue          Original                                99.78          99.77           99.79        99.78
              Normalization                           98.22          98.25           98.25        98.22
              HE                                      100.00         100.00          100.00       100.00
              CLAHE                                   99.33          99.32           99.34        99.33
White         Original                                99.83          99.83           99.84        99.83
              Normalization                           99.50          99.50           99.51        99.50
              HE                                      99.83          99.84           99.83        99.83
              CLAHE                                   99.50          99.51           99.50        99.50
Bold numbers represent the highest accuracy for each light source.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
