Machine Vision Applications in Food

A special issue of Foods (ISSN 2304-8158). This special issue belongs to the section "Food Analytical Methods".

Deadline for manuscript submissions: closed (25 January 2024) | Viewed by 5008

Special Issue Editors


Prof. Dr. Qiaohua Wang
Guest Editor
College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
Interests: non-destructive intelligent detection; origin traceability of agricultural and livestock products; agricultural electrification and automation; machine vision

Dr. Zhihui Zhu
Co-Guest Editor
College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
Interests: intelligent detection and information processing; real-time online monitoring; automatic control technology

Prof. Dr. Meihu Ma
Co-Guest Editor
National Research and Development Center for Egg Processing, College of Food Science and Technology, Huazhong Agricultural University, Wuhan 430070, China
Interests: films; egg white protein emulsions; food hydrocolloids; egg

Special Issue Information

Dear Colleagues,

Food production is becoming increasingly automated as consumers demand greater quality, safety, and efficiency. Applied to food, machine vision enables continuous, stable, and reliable non-destructive detection, overcoming the time-consuming sampling, fatigue, limited repeatability, and inter-operator variability of manual inspection. However, deploying machine vision as a high-quality solution in the food sector is challenging because the industry has zero tolerance for food safety failures; reducing recall costs through machine-vision-based food traceability is a further challenge. This Special Issue therefore aims to explore and discuss how machine vision technology can ensure food quality and safety through innovative, high-quality detection.

Prof. Dr. Qiaohua Wang
Dr. Zhihui Zhu
Prof. Dr. Meihu Ma
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Foods is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2900 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine vision
  • non-destructive detection
  • food traceability
  • food safety detection
  • food defect detection
  • food labeling detection
  • food sorting

Published Papers (3 papers)

Research

20 pages, 9805 KiB  
Article
Research on Automatic Classification and Detection of Mutton Multi-Parts Based on Swin-Transformer
by Shida Zhao, Zongchun Bai, Shucai Wang and Yue Gu
Foods 2023, 12(8), 1642; https://doi.org/10.3390/foods12081642 - 14 Apr 2023
Cited by 2 | Viewed by 1323
Abstract
To realize real-time classification and detection of multiple mutton parts, this paper proposes a classification and detection method based on the Swin Transformer. First, image augmentation techniques are adopted to enlarge the sample sets of sheep thoracic vertebrae and scapulae and thereby mitigate the long-tailed, imbalanced distribution of the dataset. Then, the performances of three structural variants of the Swin Transformer (Swin-T, Swin-B, and Swin-S) are compared through transfer learning, and the optimal model is selected. On this basis, the robustness, generalization, and anti-occlusion abilities of the model are tested and analyzed using the pronounced multiscale features of the lumbar and thoracic vertebrae, by simulating different lighting environments and occlusion scenarios, respectively. Furthermore, the model is compared with five methods commonly used in object detection, namely Sparse R-CNN, YOLOv5, RetinaNet, CenterNet, and HRNet, and its real-time performance is tested at pixel resolutions of 576 × 576, 672 × 672, and 768 × 768. The results show that the proposed method achieves a mean average precision (mAP) of 0.943, while the mAP values in the robustness, generalization, and anti-occlusion tests are 0.913, 0.857, and 0.845, respectively. Moreover, the model outperforms the five aforementioned methods, with mAP values higher by 0.009, 0.027, 0.041, 0.050, and 0.113, respectively. The average processing time for a single image is 0.25 s, which meets production line requirements. In summary, this study presents an efficient and intelligent method for classifying and detecting multiple mutton parts, providing technical support for the automatic sorting of mutton as well as for the processing of other livestock meat.
(This article belongs to the Special Issue Machine Vision Applications in Food)
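
As a rough illustration of the transfer-learning comparison described in this abstract, the sketch below fine-tunes the three Swin Transformer variants on an image-folder dataset using the timm library. The dataset path, class count, and training settings are assumptions, and the authors' full detection pipeline (a Swin backbone inside an object detector evaluated by mAP) is not reproduced; this only shows how the backbone variants could be compared.

# Minimal sketch: comparing Swin-T/S/B by transfer learning with timm.
# Assumptions: an ImageFolder-style dataset of mutton-part crops and a
# hypothetical 6-class problem; not the paper's actual detection pipeline.
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

VARIANTS = {
    "Swin-T": "swin_tiny_patch4_window7_224",
    "Swin-S": "swin_small_patch4_window7_224",
    "Swin-B": "swin_base_patch4_window7_224",
}
NUM_CLASSES = 6            # hypothetical number of mutton part classes
DATA_DIR = "mutton_parts"  # hypothetical dataset root

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder(f"{DATA_DIR}/train", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)

def finetune(model_name: str, epochs: int = 3) -> nn.Module:
    """Load ImageNet-pretrained weights and fine-tune on the new classes."""
    model = timm.create_model(model_name, pretrained=True, num_classes=NUM_CLASSES)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in train_dl:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

for name, arch in VARIANTS.items():
    finetune(arch)
    print(f"fine-tuned {name}")  # evaluate each variant on a held-out set to pick the best

In practice, each fine-tuned variant would then be evaluated on a held-out set within the chosen detection framework to select the optimal model, as the authors report.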

13 pages, 12983 KiB  
Article
Single-View Measurement Method for Egg Size Based on Small-Batch Images
by Chengkang Liu, Qiaohua Wang, Meihu Ma, Zhihui Zhu, Weiguo Lin, Shiwei Liu and Wei Fan
Foods 2023, 12(5), 936; https://doi.org/10.3390/foods12050936 - 22 Feb 2023
Cited by 1 | Viewed by 1452
Abstract
Egg size is a crucial indicator for consumer evaluation and quality grading. The main goal of this study is to measure the major and minor axes of eggs based on deep learning and single-view metrology. An egg-carrying component was designed to obtain the actual outline of each egg, and the Segformer algorithm was used to segment egg images in small batches. On this basis, a single-view measurement method suitable for eggs is proposed. Experimental results verified that Segformer achieves high segmentation accuracy on small-batch egg images: the mean intersection over union of the segmentation model was 96.15%, and the mean pixel accuracy was 97.17%. With the proposed single-view measurement method, the R-squared values were 0.969 for the major axis and 0.926 for the minor axis.
(This article belongs to the Special Issue Machine Vision Applications in Food)
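
To make the measurement step concrete, the following sketch shows one way to extract the major and minor axes from a binary egg mask (such as a segmentation prediction) and convert them to millimetres with a known pixel-to-millimetre scale. The scale factor, mask source, and ellipse-fitting step are illustrative assumptions; the paper's actual single-view metrology model, which relies on the designed egg-carrying component, is not reproduced here.

# Minimal sketch: egg axis estimation from a binary segmentation mask.
# Assumptions: the mask comes from a segmentation model (e.g. Segformer)
# and a pixel-to-millimetre scale is known from a reference object.
import cv2
import numpy as np

MM_PER_PX = 0.12  # hypothetical scale obtained from the carrying component

def egg_axes_mm(mask: np.ndarray) -> tuple[float, float]:
    """Return (major_axis_mm, minor_axis_mm) from a 0/255 uint8 mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    egg = max(contours, key=cv2.contourArea)      # largest blob = egg outline
    (_, _), (w, h), _ = cv2.fitEllipse(egg)       # fitted ellipse axis lengths (px)
    major_px, minor_px = max(w, h), min(w, h)
    return major_px * MM_PER_PX, minor_px * MM_PER_PX

# Example: a synthetic elliptical mask standing in for a segmentation output
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.ellipse(mask, (320, 240), (150, 110), 30, 0, 360, 255, -1)
print(egg_axes_mm(mask))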

24 pages, 8480 KiB  
Article
Surface Defect Detection System for Carrot Combine Harvest Based on Multi-Stage Knowledge Distillation
by Wenqi Zhou, Chao Song, Kai Song, Nuan Wen, Xiaobo Sun and Pengxiang Gao
Foods 2023, 12(4), 793; https://doi.org/10.3390/foods12040793 - 13 Feb 2023
Cited by 3 | Viewed by 1670
Abstract
Carrots are a highly nutritious vegetable, and detecting and sorting out surface defects before carrots enter the market can greatly improve food safety and quality. To detect surface defects on carrots during the combine harvest stage, this study proposes an improved knowledge distillation network structure that takes YOLOv5s as the teacher network and a lightweight network, in which the backbone is replaced with MobileNetV2 and channel pruning is applied, as the student network (mobile-slimv5s). To adapt the student network to the image blur caused by vibration of the carrot combine harvester, the ordinary dataset Dataset (T) and the motion-blurred dataset Dataset (S) were fed to the teacher network and the improved lightweight network, respectively, for learning. Knowledge distillation was carried out by connecting multi-stage features of the teacher network, with a different weight assigned to each feature so that the teacher's multi-stage features guide the single-layer output of the student network. The resulting lightweight network, mobile-slimv5s, has a model size of 5.37 MB. The experimental results show that, with a learning rate of 0.0001, a batch size of 64, and a dropout rate of 0.65, mobile-slimv5s reaches an accuracy of 90.7%, significantly higher than the other algorithms, and can perform carrot harvesting and surface defect detection simultaneously. This study lays a theoretical foundation for applying knowledge distillation structures to simultaneous crop combine harvesting and surface defect detection in a field environment, effectively improves the accuracy of in-field crop sorting, and contributes to the development of smart agriculture.
(This article belongs to the Special Issue Machine Vision Applications in Food)
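
The multi-stage distillation idea described above can be sketched as a weighted feature-matching loss in which several teacher stages are projected and compared against a single student feature map. The channel sizes, stage weights, and adapter layers below are illustrative assumptions rather than the paper's exact configuration.

# Minimal sketch: multi-stage feature distillation in PyTorch.
# Assumptions: teacher_feats is a list of feature maps taken from several
# stages of a teacher detector (e.g. a YOLOv5s-style network) and
# student_feat is one feature map from a lightweight student.
import torch
from torch import nn
import torch.nn.functional as F

class MultiStageDistillLoss(nn.Module):
    def __init__(self, teacher_channels, student_channels, weights):
        super().__init__()
        # 1x1 convolutions project each teacher stage to the student's channel size
        self.adapters = nn.ModuleList(
            nn.Conv2d(c_t, student_channels, kernel_size=1) for c_t in teacher_channels
        )
        self.weights = weights

    def forward(self, teacher_feats, student_feat):
        loss = 0.0
        for adapter, w, t in zip(self.adapters, self.weights, teacher_feats):
            t = adapter(t)
            # match spatial size before comparing feature maps
            t = F.interpolate(t, size=student_feat.shape[-2:], mode="bilinear",
                              align_corners=False)
            loss = loss + w * F.mse_loss(student_feat, t)
        return loss

# Example with dummy feature maps (batch size 2); weights mimic stage-specific guidance
distill = MultiStageDistillLoss(teacher_channels=[128, 256, 512],
                                student_channels=96,
                                weights=[0.2, 0.3, 0.5])
teacher_feats = [torch.randn(2, c, s, s) for c, s in [(128, 80), (256, 40), (512, 20)]]
student_feat = torch.randn(2, 96, 40, 40)
print(distill(teacher_feats, student_feat).item())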
