Author Contributions
Conceptualization, M.S.B., R.P., S.S., D.S. and G.G.-M.; methodology, S.S.; software, D.S. and M.A.; validation, S.S.; formal analysis, D.S.; investigation, M.S.B., R.P., S.S., D.S., M.A. and G.G.-M.; data curation, R.P.; writing—original draft preparation, R.P.; writing—review and editing, G.G.-M.; supervision, G.G.-M.; project administration, R.P.; funding acquisition, R.P. All authors have read and agreed to the published version of the manuscript.
Figure 1.
An example image showing the preparation of a tea sample.
Figure 2.
Examples of the tea samples prepared for the present paper, distributed across four classes: 0%, 15%, 45%, and 100% tea waste.
Figure 3.
Comparison of training time, model size (number of parameters), and obtained accuracy for all proposed models.
Figure 4.
Loss, accuracy, and learning rate plots of the RegNetY800MF model. (a) Patched and (b) non-patched approach.
Figure 5.
Confusion matrices of the patched and non-patched approaches for the RegNetY800MF model.
Figure 6.
Confusion matrices of the patched and non-patched MobileNetV3Large model.
Figure 7.
Confusion matrices of the patched and non-patched EfficientNetV2S model.
Figure 8.
Confusion matrices of the patched and non-patched ShuffleNetV2X10 model.
Figure 9.
Confusion matrices of the patched and non-patched SwinV2T model.
Figure 10.
Confusion matrices of the patched and non-patched EfficientNetV2S model for different sizes and resolutions. (a) Unpatched version, size 330 × 220 pixels; (b) unpatched version, size 160 × 110 pixels; (c) patched version, size 220 × 220 pixels; (d) patched version, size 110 × 110 pixels.
Figure 11.
A comparison of the models using the evaluation criteria, for the non-patched approach.
Figure 12.
A comparison of the models using the evaluation criteria, for the patched approach.
Table 1.
An example of the values of TP, FP, FN, and TN for a case of four classes.
| True Label \ Predicted Label | Class 1 | Class 2 | Class 3 | Class 4 |
|---|---|---|---|---|
| Class 1 | | | | |
| Class 2 | | | | |
| Class 3 | | | | |
| Class 4 | | | | |
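For reference, the per-class counts behind such a confusion matrix (TP, FP, FN, and TN for each class) can be extracted as in this minimal Python sketch. The matrix values below are made up for illustration only and are not the paper's data:

```python
# Per-class TP/FP/FN/TN from a 4-class confusion matrix.
# Rows = true labels, columns = predicted labels.
# Hypothetical values, for illustration only.
conf = [
    [40,  0,  0,  0],   # true class 1 (0% tea waste)
    [ 0, 27, 13,  0],   # true class 2 (15%)
    [ 1,  2, 33,  4],   # true class 3 (45%)
    [ 0,  0,  0, 40],   # true class 4 (100%)
]

def per_class_counts(cm, k):
    """Return (TP, FP, FN, TN) for class index k."""
    total = sum(sum(row) for row in cm)
    tp = cm[k][k]
    fp = sum(cm[i][k] for i in range(len(cm))) - tp  # predicted k, true != k
    fn = sum(cm[k]) - tp                             # true k, predicted != k
    tn = total - tp - fp - fn
    return tp, fp, fn, tn

for k in range(4):
    print(k, per_class_counts(conf, k))
```

Per-class precision (TP / (TP + FP)) and recall (TP / (TP + FN)) in Tables 3–7 follow directly from these four counts.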
Table 2.
Comparison of training time, model size (number of parameters), and obtained accuracy for proposed models.
| Model and Approach | Model Size (MB) | Training Time (s) | Accuracy (%) |
|---|---|---|---|
| Patched EfficientNetV2S | 77.578 | 20.062 | 83.13 |
| Non-patched EfficientNetV2S | 77.578 | 5.855 | 90.63 |
| Patched MobileNetV3Large | 16.142 | 7.605 | 95.00 |
| Non-patched MobileNetV3Large | 16.142 | 3.532 | 82.50 |
| Patched RegNetY800MF | 21.672 | 11.465 | 87.50 |
| Non-patched RegNetY800MF | 21.672 | 3.711 | 87.50 |
| Patched ShuffleNetV2X10 | 4.860 | 4.821 | 85.00 |
| Non-patched ShuffleNetV2X10 | 4.860 | 3.387 | 66.88 |
| Patched SwinV2T | 105.626 | 22.946 | 85.00 |
| Non-patched SwinV2T | 105.626 | 9.081 | 81.25 |
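For quick inspection, the reported figures can be ranked programmatically; this sketch simply re-uses the values from Table 2 (no new measurements are introduced):

```python
# Table 2 results: (model/approach, size MB, training time s, accuracy %).
results = [
    ("Patched EfficientNetV2S",      77.578, 20.062, 83.13),
    ("Non-patched EfficientNetV2S",  77.578,  5.855, 90.63),
    ("Patched MobileNetV3Large",     16.142,  7.605, 95.00),
    ("Non-patched MobileNetV3Large", 16.142,  3.532, 82.50),
    ("Patched RegNetY800MF",         21.672, 11.465, 87.50),
    ("Non-patched RegNetY800MF",     21.672,  3.711, 87.50),
    ("Patched ShuffleNetV2X10",       4.860,  4.821, 85.00),
    ("Non-patched ShuffleNetV2X10",   4.860,  3.387, 66.88),
    ("Patched SwinV2T",             105.626, 22.946, 85.00),
    ("Non-patched SwinV2T",         105.626,  9.081, 81.25),
]

# Most accurate configuration overall.
best = max(results, key=lambda r: r[3])
print(best[0], best[3])  # Patched MobileNetV3Large 95.0
```

Sorting the same list by size or training time reproduces the trade-off visible in Figure 3: the most accurate configuration (patched MobileNetV3Large) is also among the smallest models.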
Table 3.
The per-class precision, recall, and F1-score of the patched and non-patched approach for the four classes (0%, 15%, 45%, and 100% tea waste) for the RegNetY800MF model.
| Metric | Approach | 0% | 15% | 45% | 100% | Avg |
|---|---|---|---|---|---|---|
| Precision | Patched | 97.6% | 93.1% | 71.7% | 90.9% | 88.3% |
| Precision | Unpatched | 95.1% | 83.3% | 100.0% | 80.0% | 89.6% |
| Recall | Patched | 100.0% | 67.5% | 82.5% | 100.0% | 87.5% |
| Recall | Unpatched | 97.5% | 100.0% | 52.5% | 100.0% | 87.5% |
| F1-Score | Patched | 98.8% | 78.3% | 76.7% | 95.2% | 87.3% |
| F1-Score | Unpatched | 96.3% | 90.9% | 68.9% | 88.9% | 86.2% |
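The Avg columns in Tables 3–7 are consistent with unweighted (macro) averaging over the four classes. A minimal check, using the patched RegNetY800MF recall values reported in Table 3:

```python
# Macro (unweighted) average of the per-class recall for the
# patched RegNetY800MF approach, using the values from Table 3.
recall = {"0%": 100.0, "15%": 67.5, "45%": 82.5, "100%": 100.0}
macro_recall = sum(recall.values()) / len(recall)
print(macro_recall)  # 87.5, matching the Avg column
```

If the test set is balanced (the same number of samples per class), macro-averaged recall coincides with overall accuracy, which would explain why this value also matches the 87.50% reported for RegNetY800MF in Table 2.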
Table 4.
The per-class precision, recall, and F1-score of the patched and non-patched approach for the four classes (0%, 15%, 45%, and 100% tea waste) for the MobileNetV3Large model.
| Metric | Approach | 0% | 15% | 45% | 100% | Avg |
|---|---|---|---|---|---|---|
| Precision | Patched | 100.0% | 90.5% | 94.6% | 95.2% | 95.1% |
| Precision | Unpatched | 91.9% | 69.0% | 100.0% | 85.1% | 86.5% |
| Recall | Patched | 97.5% | 95.0% | 87.5% | 100.0% | 95.0% |
| Recall | Unpatched | 85.0% | 100.0% | 45.0% | 100.0% | 82.5% |
| F1-Score | Patched | 98.7% | 92.7% | 90.9% | 97.6% | 95.0% |
| F1-Score | Unpatched | 88.3% | 81.6% | 62.1% | 92.0% | 81.0% |
Table 5.
The per-class precision, recall, and F1-score of the patched and non-patched approach for the four classes (0%, 15%, 45%, and 100% tea waste) for the EfficientNetV2S model.
| Metric | Approach | 0% | 15% | 45% | 100% | Avg |
|---|---|---|---|---|---|---|
| Precision | Patched | 78.4% | 87.2% | 76.0% | 88.9% | 82.6% |
| Precision | Unpatched | 97.6% | 92.5% | 90.6% | 83.0% | 90.9% |
| Recall | Patched | 100.0% | 85.0% | 47.5% | 100.0% | 83.1% |
| Recall | Unpatched | 100.0% | 92.5% | 72.5% | 97.5% | 90.6% |
| F1-Score | Patched | 87.9% | 86.1% | 58.5% | 94.1% | 81.6% |
| F1-Score | Unpatched | 98.8% | 92.5% | 80.6% | 89.7% | 90.4% |
Table 6.
The per-class precision, recall, and F1-score of the patched and non-patched approach for the four classes (0%, 15%, 45%, and 100% tea waste) for the ShuffleNetV2X10 model.
| Metric | Approach | 0% | 15% | 45% | 100% | Avg |
|---|---|---|---|---|---|---|
| Precision | Patched | 95.1% | 91.3% | 65.5% | 97.6% | 87.4% |
| Precision | Unpatched | 100.0% | 58.7% | 87.5% | 60.6% | 76.7% |
| Recall | Patched | 97.5% | 52.5% | 90.0% | 100.0% | 85.0% |
| Recall | Unpatched | 57.5% | 92.5% | 17.5% | 100.0% | 66.9% |
| F1-Score | Patched | 96.3% | 66.7% | 75.8% | 98.8% | 84.4% |
| F1-Score | Unpatched | 73.0% | 71.8% | 29.2% | 75.5% | 62.4% |
Table 7.
The per-class precision, recall, and F1-score of the patched and non-patched approach for the four classes (0%, 15%, 45%, and 100% tea waste) for the SwinV2T model.
| Metric | Approach | 0% | 15% | 45% | 100% | Avg |
|---|---|---|---|---|---|---|
| Precision | Patched | 100.0% | 86.4% | 90.9% | 71.4% | 87.2% |
| Precision | Unpatched | 94.1% | 69.0% | 75.0% | 100.0% | 84.5% |
| Recall | Patched | 95.0% | 95.0% | 50.0% | 100.0% | 85.0% |
| Recall | Unpatched | 80.0% | 100.0% | 75.0% | 70.0% | 81.3% |
| F1-Score | Patched | 97.4% | 90.5% | 64.5% | 83.3% | 83.9% |
| F1-Score | Unpatched | 86.5% | 81.6% | 75.0% | 82.4% | 81.4% |
Table 8.
A comparison of the training time, accuracy, precision, recall, and F1-score achieved by the proposed approaches and image sizes, for the MobileNetV3Large model.
| Approach and Image Size (Pixels) | Training Time (s) | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|---|
| Unpatched 660 × 440 | 3.53 | 0.825 | 0.865 | 0.825 | 0.810 |
| Unpatched 330 × 220 | 2.32 | 0.725 | 0.544 | 0.725 | 0.621 |
| Unpatched 160 × 110 | 1.94 | 0.469 | 0.272 | 0.469 | 0.332 |
| Patched 440 × 440 | 7.61 | 0.950 | 0.951 | 0.950 | 0.950 |
| Patched 220 × 220 | 8.02 | 0.757 | 0.877 | 0.756 | 0.680 |
| Patched 110 × 110 | 21.98 | 0.794 | 0.864 | 0.794 | 0.782 |
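One driver of the training-time differences in Table 8 is the number of patches produced per image. Assuming non-overlapping square tiles taken from the full 660 × 440 image (an assumption for illustration; the paper's exact patch-extraction scheme is not restated here), the counts would be:

```python
# Count of non-overlapping square patches that fit in a W x H image.
# A generic sketch; the actual pipeline may resize or overlap patches.
def patch_count(width, height, patch):
    return (width // patch) * (height // patch)

print(patch_count(660, 440, 220))  # 6
print(patch_count(660, 440, 110))  # 24
```

Under this assumption, halving the patch side quadruples the number of samples per image, which is consistent with the smaller patch sizes requiring markedly longer training times.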
Table 9.
A comparison of the correct classification rates (CCRs) achieved by the proposed classification models and other similar works in the literature for adulteration detection using computer vision systems.
| Authors | Product | Method | CCR (%) |
|---|---|---|---|
| Present paper | Tea | RegNetY800MF | 87.5–87.5 |
| Present paper | Tea | MobileNetV3Large | 82.5–95.0 |
| Present paper | Tea | EfficientNetV2S | 90.6–83.1 |
| Present paper | Tea | ShuffleNetV2X10 | 66.9–85.0 |
| Present paper | Tea | SwinV2T | 81.2–85.0 |
| Malyjurek et al. [39] | Tea | Partial Least Squares | 78 |
| Kelis et al. [40] | Starch | SVM | 86.9 |
| Zheng et al. [41] | Flour | CNN | 92.4 |

For the present paper, each CCR is reported as a non-patched–patched pair (cf. the accuracies in Table 2).