BBSNet: An Intelligent Grading Method for Pork Freshness Based on Few-Shot Learning
Abstract
1. Introduction
2. Materials and Methods
2.1. Data Set Acquisition
2.1.1. Pork Freshness Grading Criteria
2.1.2. Pork Freshness Dataset
2.2. Few-Shot Learning Method Based on BBSNet
2.2.1. Composition of BBSNet
2.2.2. Upgrading of ShuffleNetV2 Module
2.2.3. Accelerating Feature Fitting with Batch Channel Normalization
2.2.4. Upgrading of BiFormer Module
2.2.5. Probability Distribution Function Based on Cosine Similarity
2.3. Fine-Tuning Strategy
2.3.1. Updating Cross-Entropy Loss Function
2.3.2. Updating Entropy Regularization Function
2.4. Model Training
2.4.1. Experimental Design
2.4.2. Pre-Training Setting
2.5. Model Evaluation Metrics
3. Results and Discussion
3.1. Performance Comparison with Classic Algorithms
3.1.1. Comparison with Classic Few-Shot Models
3.1.2. Comparison with Classical General-Purpose Algorithms
3.2. Impact of Batch Channel Normalization
3.3. Impact of the BiFormer Attention Mechanism
3.4. The Impact of Backbone Networks on Model Performance
3.5. Impact of the Number of Support Set Samples
3.6. Impact of the Number of Query Set Samples
3.7. Validation of Model Generalization on Large-Scale Unknown Samples
3.7.1. Model Performance Across Different Datasets
3.7.2. Model Interpretability Across Different Datasets
3.7.3. Cross-Model Robustness of BBSNet on Food101
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| BBSNet | BCN-BiFormer-ShuffleNetV2 |
| BCN | Batch Channel Normalization |
| BRA | Bi-level Routing Attention |
| BN | Batch Normalization |
| CCD | Charge-Coupled Device |
| CNNs | Convolutional Neural Networks |
| LN | Layer Normalization |
| MAML | Model-Agnostic Meta-Learning |
| TVB-N | Total Volatile Basic Nitrogen |
Appendix A
Derivation of the Output Vector Formula for the BiFormer Attention Mechanism
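The appendix derives the output vector of BiFormer's bi-level routing attention (BRA). As a compact orientation, the standard BRA formulation from Zhu et al. (notation assumed; this is a sketch, not a reproduction of the appendix's own derivation) first computes region-level affinities between pooled queries and keys, keeps the top-k routed regions, then applies token-level attention over the gathered key-value pairs:

```latex
A^{r} = Q^{r} \left(K^{r}\right)^{\top}, \qquad
I^{r} = \operatorname{topk}\!\left(A^{r}\right)
```

```latex
K^{g} = \operatorname{gather}\!\left(K, I^{r}\right), \qquad
V^{g} = \operatorname{gather}\!\left(V, I^{r}\right)
```

```latex
O = \operatorname{Softmax}\!\left(\frac{Q \left(K^{g}\right)^{\top}}{\sqrt{d}}\right) V^{g} + \operatorname{LCE}(V)
```

Here \(\operatorname{LCE}(\cdot)\) is the local context enhancement term (a depthwise convolution over the values) used in the original BiFormer paper.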
| Freshness Grade | Microbial Concentration (×10³ CFU/g) | Storage Time (h) |
|---|---|---|
| First-grade fresh pork | 4.168 | 0 |
| Second-grade fresh pork | 13.182 | 24 |
| Third-grade fresh pork | 301.995 | 48 |
| First-grade spoiled pork | 1778.279 | 72 |
| Second-grade spoiled pork | 5370.317 | 96 |
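The grade-versus-storage-time mapping in the table above can be expressed as a simple step-function lookup. The sketch below mirrors the tabulated values; the microbial concentrations are the measured aerobic plate counts reported per grade, not regulatory cutoffs, and the lookup function is illustrative rather than the paper's grading procedure:

```python
# Grade labels keyed by storage time (h), mirroring the table above.
# Concentrations are the reported measured values (CFU/g), not thresholds.
GRADES = [
    (0,  "First-grade fresh pork",    4.168e3),
    (24, "Second-grade fresh pork",   13.182e3),
    (48, "Third-grade fresh pork",    301.995e3),
    (72, "First-grade spoiled pork",  1778.279e3),
    (96, "Second-grade spoiled pork", 5370.317e3),
]

def grade_by_storage_time(hours):
    """Return the grade whose tabulated storage time is the latest one
    not exceeding `hours` (a simple step-function lookup)."""
    label = GRADES[0][1]
    for t, name, _cfu in GRADES:
        if hours >= t:
            label = name
    return label

assert grade_by_storage_time(30) == "Second-grade fresh pork"
```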
| Class | Image |
|---|---|
| First-grade fresh meat | (sample image not reproduced) |
| Second-grade fresh meat | (sample image not reproduced) |
| Third-grade fresh meat | (sample image not reproduced) |
| First-grade spoiled meat | (sample image not reproduced) |
| Second-grade spoiled meat | (sample image not reproduced) |
| | Training Set (83%) | Validation Set (12%) | Test Set (8%) |
|---|---|---|---|
| Number of Categories | 80 | 12 | 8 |
| Number of Samples | 49,800 | 7200 | 4800 |
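Episodic few-shot training draws N-way K-shot tasks from the training categories above. A minimal sketch of such episode sampling (the query-set size of 15 per class is illustrative, not the paper's setting):

```python
import random

def sample_episode(data, n_way=5, k_shot=1, q_queries=15):
    """Sample one N-way K-shot episode from {class_id: [sample, ...]}.

    Returns (support, query) lists of (sample, episode_label) pairs.
    """
    classes = random.sample(sorted(data), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        picks = random.sample(data[cls], k_shot + q_queries)
        support += [(x, label) for x in picks[:k_shot]]
        query   += [(x, label) for x in picks[k_shot:]]
    return support, query

# 80 training categories with 100 samples each, as in the split above
toy = {c: list(range(100)) for c in range(80)}
s, q = sample_episode(toy)
assert len(s) == 5 and len(q) == 75  # 5-way 1-shot, 15 queries per class
```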
| Model | Backbone | Comparison Function | 5-Way 1-Shot Accuracy (%) | t-Value (vs. Baseline) | p-Value (α = 0.05) | 5-Way 5-Shot Accuracy (%) | t-Value (vs. Baseline) | p-Value (α = 0.05) |
|---|---|---|---|---|---|---|---|---|
| Matching Nets | ResNet-18 | Cosine similarity | 48.12 ± 1.25 | −16.58 | <0.001 | 67.20 ± 1.18 | −20.87 | <0.001 |
| Prototypical Networks | ResNet-18 | Euclidean distance | 44.56 ± 1.37 | −22.01 | <0.001 | 54.31 ± 1.42 | −43.15 | <0.001 |
| Relation Networks | ResNet-18 | — | 51.44 ± 1.21 | −14.12 | <0.001 | 63.12 ± 1.16 | −27.01 | <0.001 |
| Ours (Baseline) | ShuffleNetV2 + BiFormer + BCN | Cosine similarity | 59.72 ± 0.98 | — | — | 78.84 ± 0.87 | — | — |
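The "comparison function" column refers to how query embeddings are scored against class prototypes. As an illustrative sketch of a cosine-similarity probability distribution over a 5-way episode (not the paper's implementation; the temperature value is an assumption):

```python
import numpy as np

def cosine_prototype_probs(query, prototypes, tau=10.0):
    """Class probabilities from scaled cosine similarity to class prototypes.

    query: (d,) embedding of one query sample.
    prototypes: (n_way, d) mean support embeddings per class.
    tau: temperature applied before the softmax (illustrative value).
    """
    q = query / np.linalg.norm(query)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = tau * (p @ q)            # scaled cosine similarities
    logits -= logits.max()            # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum()

# 5-way toy example: the query is a noisy copy of prototype 2
rng = np.random.default_rng(0)
protos = rng.normal(size=(5, 16))
q = protos[2] + 0.1 * rng.normal(size=16)
probs = cosine_prototype_probs(q, protos)
assert probs.argmax() == 2
```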
| Method | Accuracy (%) | t-Value (vs. Baseline) | p-Value |
|---|---|---|---|
| AlexNet | 52.13 ± 1.78 | −60.92 | <0.001 |
| VGG16 | 52.42 ± 1.64 | −55.98 | <0.001 |
| GoogLeNet | 58.24 ± 1.39 | −44.86 | <0.001 |
| ResNet50 | 68.43 ± 1.26 | −37.21 | <0.001 |
| Ours (Baseline) | 96.36 ± 0.82 | — | — |
| Model | 5-Way 1-Shot Accuracy (%) | t-Value (vs. Baseline) | p-Value | 5-Way 5-Shot Accuracy (%) | t-Value (vs. Baseline) | p-Value |
|---|---|---|---|---|---|---|
| ShuffleNetV2 + BN, BiFormer + LN (Baseline) | 52.44 ± 0.15 | — | — | 69.52 ± 0.21 | — | — |
| ShuffleNetV2 + BCN, BiFormer + LN | 55.73 ± 0.18 | 12.67 | <0.001 | 69.62 ± 0.19 | 0.42 | 0.689 |
| ShuffleNetV2 + BN, BiFormer + BCN | 57.91 ± 0.12 | 24.92 | <0.001 | 71.61 ± 0.23 | 7.85 | <0.001 |
| ShuffleNetV2 + BCN, BiFormer + BCN | 59.72 ± 0.20 | 31.80 | <0.001 | 78.84 ± 0.25 | 28.64 | <0.001 |
| Model | 5-Way 1-Shot Accuracy (%) | t-Value (vs. Baseline) | p-Value | 5-Way 5-Shot Accuracy (%) | t-Value (vs. Baseline) | p-Value |
|---|---|---|---|---|---|---|
| ShuffleNetV2, Backbone (Baseline) | 52.44 ± 0.15 | — | — | 69.52 ± 0.22 | — | — |
| ShuffleNetV2 + BCN-BiFormer, Backbone | 55.73 ± 0.18 | 18.32 | <0.001 | 69.62 ± 0.25 | 0.89 | 0.423 |
| ShuffleNetV2, Backbone + BCN-BiFormer | 57.91 ± 0.18 | 24.56 | <0.001 | 71.61 ± 0.18 | 18.45 | <0.001 |
| ShuffleNetV2 + BCN-BiFormer, Backbone + BCN-BiFormer | 59.72 ± 0.12 | 31.75 | <0.001 | 78.84 ± 0.15 | 48.21 | <0.001 |
| Model | Normalization Method | 5-Way 80-Shot Accuracy (%) | t-Value (vs. Baseline) | p-Value |
|---|---|---|---|---|
| AlexNet (8 layers) | Batch Normalization | 32.03 ± 1.80 | 58.72 | <0.0001 |
| GoogLeNet (27 layers) | Batch Normalization | 42.54 ± 1.50 | 45.18 | <0.0001 |
| VGG-16 (16 layers) | Batch Normalization | 38.36 ± 1.65 | 52.63 | <0.0001 |
| BBSNet (Baseline) | Batch Channel Normalization | 96.36 ± 0.82 | — | — |
| Support Set Size | Accuracy Without Fine-Tuning (%) | Accuracy With Fine-Tuning (%) | t-Value (vs. 1-Shot Baseline) | p-Value |
|---|---|---|---|---|
| 1-Shot (Baseline) | 56.64 ± 1.02 | 59.72 ± 0.98 | — | — |
| 5-Shot | 71.55 ± 1.15 | 78.84 ± 0.87 | −12.34 | <0.0001 |
| 10-Shot | 76.29 ± 1.21 | 83.44 ± 0.92 | −14.86 | <0.0001 |
| 20-Shot | 83.46 ± 0.98 | 91.25 ± 0.85 | −18.21 | <0.0001 |
| 40-Shot | 85.81 ± 0.89 | 94.44 ± 0.76 | −21.34 | <0.0001 |
| 80-Shot | 87.21 ± 0.75 | 96.36 ± 0.82 | −23.57 | <0.0001 |
| 100-Shot | 87.19 ± 0.72 | 96.32 ± 0.81 | −23.61 | <0.0001 |
| 120-Shot | 87.20 ± 0.71 | 96.33 ± 0.80 | −23.65 | <0.0001 |
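The t-values against the 1-shot baseline can be reproduced in form (though not necessarily in value, since the number of evaluation episodes n is not stated in the table and is assumed below) with Welch's two-sample statistic computed from the mean ± std summaries:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t-statistic for two independent samples given as mean/std/n.

    The per-condition episode count n is an assumption; the reported
    mean +/- std values are treated as sample statistics over n runs.
    """
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return (m1 - m2) / se

# e.g. 5-shot accuracy without vs. with fine-tuning (n = 10 assumed)
t = welch_t(71.55, 1.15, 10, 78.84, 0.87, 10)
assert t < 0  # fine-tuning has the higher mean, so the statistic is negative
```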
| Query Set Sample Size | Accuracy (%) | Training Time (min) | t-Value (vs. 5-Sample Baseline) | p-Value |
|---|---|---|---|---|
| 5 (Baseline) | 56.64 ± 1.15 | 12.3 | — | — |
| 10 | 68.55 ± 1.32 | 15.1 | 4.21 | <0.05 |
| 15 | 71.29 ± 1.28 | 18.9 | 5.34 | <0.01 |
| 20 | 73.46 ± 1.19 | 22.5 | 6.87 | <0.001 |
| 25 | 78.84 ± 1.02 | 26.8 | 8.76 | <0.001 |
| 30 | 77.21 ± 1.05 | 31.4 | 3.12 | >0.05 |
| 35 | 77.19 ± 1.08 | 35.6 | 2.98 | >0.05 |
| Dataset Name | Accuracy (%) | Sensitivity (%) | Specificity (%) | Precision (%) |
|---|---|---|---|---|
| Food101 | 92.4 ± 1.85 | 89.6 ± 2.30 | 94.1 ± 1.65 | 91.8 ± 1.95 |
| Pork freshness | 96.36 ± 0.82 | 78.85 ± 3.15 | 85.71 ± 2.80 | 96.35 ± 0.88 |
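The sensitivity, specificity, and precision columns follow the standard one-vs-rest confusion-matrix definitions. A minimal sketch with hypothetical counts:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, and precision from a 2x2
    confusion matrix (one-vs-rest view of a single class)."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall of the positive class
    specificity = tn / (tn + fp)   # recall of the negative class
    precision   = tp / (tp + fp)
    return accuracy, sensitivity, specificity, precision

# hypothetical counts for illustration
acc, sen, spe, pre = binary_metrics(tp=90, fp=10, tn=85, fn=15)
assert abs(acc - 0.875) < 1e-9
```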
| Model | Backbone | Accuracy (%) |
|---|---|---|
| ResNet50 | ResNet50 | 87.42 ± 0.83 |
| WS-DAN | Inception-v3 | 88.90 ± 0.77 |
| SGLANet | SENet154 | 89.69 ± 0.65 |
| Swin-B | Transformer | 89.78 ± 0.71 |
| DAT | Transformer | 90.04 ± 0.62 |
| VOLO-D3 | ViT | 90.53 ± 0.58 |
| Ours | BBSNet | 89.32 ± 0.61 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, C.; Zhang, J.; Chen, K.; Huang, J. BBSNet: An Intelligent Grading Method for Pork Freshness Based on Few-Shot Learning. Foods 2025, 14, 2480. https://doi.org/10.3390/foods14142480