Article

Machine Vision in Human-Centric Manufacturing: A Review from the Perspective of the Frozen Dough Industry

by Vasiliki Balaska 1,*,†, Anestis Tserkezis 1,*,†, Fotios Konstantinidis 2, Vasileios Sevetlidis 3, Symeon Symeonidis 1, Theoklitos Karakatsanis 1 and Antonios Gasteratos 1

1 Department of Production and Management Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
2 Institute of Communication and Computer Systems (ICCS), 15773 Athens, Greece
3 Athena Research Center, 15125 Maroussi, Greece
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2025, 14(17), 3361; https://doi.org/10.3390/electronics14173361
Submission received: 21 July 2025 / Revised: 19 August 2025 / Accepted: 21 August 2025 / Published: 24 August 2025

Abstract

Machine vision technologies play a critical role in the advancement of modern human-centric manufacturing systems. This study investigates their practical applications in improving both safety and productivity within industrial environments. Particular attention is given to areas such as quality assurance, worker protection, and process optimization, illustrating how intelligent visual inspection systems and real-time data analysis contribute to increased operational efficiency and higher safety standards. The research methodology combines an in-depth analysis of industrial case studies, including one from the frozen dough industry, with a systematic review of the current literature on machine vision technologies in manufacturing. The findings highlight the potential of such systems to reduce human error, maintain consistent product quality, minimize material waste, and promote safer and more adaptable work environments. This study offers valuable insights into the integration of advanced visual technologies within human-centered production environments, while also addressing key challenges and future opportunities for innovation and technological evolution.

1. Introduction

The transition from Industry 4.0 to Industry 5.0 signifies a critical reorientation of industrial paradigms, shifting from a technology-driven emphasis on automation, connectivity, and efficiency to a more holistic framework that incorporates sustainability, resilience, and human-centric innovation [1,2]. In this context, advanced technologies such as artificial intelligence (AI), digital twins, and computer vision are not merely tools to replace human labor but enablers of augmented human–machine collaboration. In particular, computer vision systems constitute a foundational component of real-time perception, contextual awareness, and intelligent decision-making in dynamic manufacturing environments. Their deployment in quality assurance, anomaly detection, and occupational safety directly contributes to realizing adaptive, transparent, and inclusive production systems. In addition, when coupled with explainable AI methodologies, these systems enhance interpretability and foster trust, prerequisites for the safe and effective human-in-the-loop industrial operations envisioned by Industry 5.0.
While Industry 4.0 introduced automation, cyber-physical systems, and data-driven optimization into manufacturing processes, its focus remained primarily on efficiency and scalability. However, these approaches often overlooked critical aspects such as worker well-being, environmental impact, and organizational resilience. Industry 5.0 builds upon the technological foundation of Industry 4.0, but redefines priorities to include human-centric innovation, sustainable production, and collaborative intelligence [3]. In this paradigm, technologies such as artificial intelligence, extended reality, and computer vision are not deployed to replace humans, but rather to enhance human capabilities, promote safer work environments, and support ethical and inclusive industrial transformation. The present review is situated firmly within this Industry 5.0 perspective, exploring how machine vision can support not just automation and quality assurance, but also human–machine collaboration, real-time ergonomic feedback, and worker empowerment in semi-structured production environments such as frozen dough manufacturing [4].
Despite the promise of vision-enabled manufacturing, most implementations remain concentrated in highly structured environments such as automotive or pharmaceutical production. Less is known about their performance in more variable and human-dependent settings, such as food manufacturing, where real-world conditions challenge the robustness and usability of AI systems. In this paper, we investigate the role of computer vision in supporting human-centric manufacturing through a real-world case study in the frozen dough industry. We examine how deep learning-based vision inspection systems enhance operational efficiency, product quality, and worker support, aligning with the core values of Industry 5.0 [5]. Unlike other sectors in food manufacturing, frozen dough presents a distinctive set of challenges for machine vision applications. Its deformable and non-rigid structure, combined with high surface reflectivity due to moisture and texture variability across fermentation stages, complicates standard image analysis techniques. These factors demand robust, adaptive, and often AI-powered inspection methods to ensure reliable detection and classification. This review specifically addresses these challenges and explores vision-based approaches tailored to the frozen dough context.
This shift toward human-centered design is particularly relevant in the food manufacturing sector, which faces increasing demands for flexibility, traceability, and compliance with hygiene standards. As consumer expectations evolve, manufacturers are challenged to maintain consistent product quality while adapting to frequent product variation and dynamic market conditions [6]. Machine vision has emerged as a critical enabler of this transformation, offering real-time feedback, non-invasive inspection, and intelligent automation tailored to semi-structured and human-in-the-loop environments [7].
Recent studies have demonstrated that machine vision systems, when integrated with artificial intelligence (AI) and extended reality (XR), can enhance human performance in production environments by reducing cognitive load, improving decision-making accuracy, and increasing task safety [8]. In the food sector specifically, computer vision has been applied for defect detection, texture and moisture analysis, contamination control, and packaging inspection [9,10]. However, its deployment in high-variability domains like frozen dough production remains limited in the literature, signaling the need for domain-specific analysis and applied case studies.
The remainder of this paper is organized as follows: Section 2 provides the theoretical background and a review of related work in machine vision applications for human-centric manufacturing. Section 3 outlines the adopted research methodology, including a systematic literature review protocol. Section 4 presents an in-depth case study of the frozen dough industry, detailing the implementation of machine vision technologies across various production stages. Section 5 discusses the results of the study, focusing on the contributions of machine vision to human–machine collaboration, safety, and productivity. Finally, Section 6 concludes the paper and suggests directions for future research and industrial deployment.

2. Background and Related Work

In Industry 4.0, computer vision technologies were designed primarily to automate inspection tasks, detect defects, and optimize processes with minimal human involvement [11]. However, the emergence of Industry 5.0 has redefined the role of machine vision by introducing a human-centric perspective, in which intelligent systems are expected to support, augment, and collaborate with human operators [12,13].

2.1. Human-Aided Quality Control and Process Monitoring

Recent work has demonstrated that computer vision can enhance quality assurance by increasing accuracy, consistency, and integration with human decision-making. For example, Jiang et al. [14] describe a real-time visual analytics system deployed in an assembly line that provides continuous feedback to human operators, enabling early detection of quality deviations and significantly reducing rework rates. This system achieved a 25% improvement in first-pass yield, demonstrating the value of human–AI collaboration. Similarly, Da Silva Ferreira et al. [2] emphasize the importance of explainable AI (XAI) in visual inspection, noting that systems with transparent decision processes increase operator trust and facilitate more accurate interventions during ambiguous detections.
Other studies have explored hybrid inspection frameworks, where computer vision handles repetitive surface checks, while humans verify edge cases or ambiguous patterns. These mixed-initiative systems not only reduce inspection fatigue but also serve as training feedback loops for AI models. Moreover, vision systems integrated with digital twins allow for continuous synchronization between physical production and virtual quality standards, offering a new paradigm for data-driven quality control in Industry 5.0 environments [9].

2.2. Safety and Risk Prevention

Occupational safety is one of the most impactful areas where machine vision contributes to human-centric manufacturing. Vision-based systems are increasingly used to monitor hazardous zones, detect unsafe worker behaviors (e.g., bending, improper lifting, or PPE non-compliance), and provide real-time alerts to prevent accidents. For instance, Zhao et al. [15] developed a smart surveillance system that achieved 94% accuracy in identifying hazardous postures and triggered automated responses such as visual/auditory alarms or equipment shutdown. These systems contribute to proactive safety management, reducing reliance on manual supervision while improving incident response time.
Furthermore, vision-based monitoring is often integrated with ergonomic risk assessment tools, such as REBA or RULA, to continuously evaluate posture quality during repetitive or strenuous tasks. Agote-Garrido et al. [16] demonstrated how vision-enabled digital twins can model worker motion in 3D and simulate potential fatigue or injury risks under different scenarios. When combined with AI prediction models, these systems can even anticipate risky conditions before they manifest, enabling real-time adaptation of workflows, such as reducing task speed or reassigning operations. Such approaches reflect the core principles of Industry 5.0, where worker well-being and system resilience are embedded into the operational logic of smart factories [17].

2.3. Ergonomics and Operator Assistance

In addition to preventing accidents, machine vision plays a growing role in ergonomic evaluation and real-time operator assistance. Vision systems can continuously monitor body posture, joint angles, and movement trajectories, enabling automated assessments of ergonomic risk based on models such as REBA, RULA, or OWAS. This allows early detection of strain-inducing behaviors and supports dynamic task adjustments. For example, sensor-fusion systems combining RGB-D cameras with skeletal tracking have been used in manufacturing lines to flag repetitive stress postures, reducing the long-term risk of musculoskeletal disorders.
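As a concrete illustration of the kind of posture check described above, the following Python sketch derives a joint angle from 2D pose keypoints and maps it to a coarse risk flag. The keypoint names and the threshold are illustrative assumptions, not the REBA or RULA scoring tables, and a real system would aggregate several joints over time.

```python
# Hedged sketch: a single trunk-angle check from 2D pose keypoints
# (e.g., output of an off-the-shelf pose estimator). Landmark names
# and the 120-degree threshold are illustrative, not REBA/RULA scores.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c."""
    ba = np.asarray(a, float) - np.asarray(b, float)
    bc = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def flag_posture(keypoints):
    """keypoints: dict of (x, y) pixel coordinates per landmark."""
    trunk = joint_angle(keypoints["shoulder"], keypoints["hip"], keypoints["knee"])
    # Illustrative rule: strong trunk flexion flags a strain-inducing posture
    return "high-risk" if trunk < 120 else "ok"

# Upright posture: shoulder-hip-knee nearly collinear -> angle ~180 -> "ok"
print(flag_posture({"shoulder": (0, 0), "hip": (0, 100), "knee": (0, 200)}))
```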
Operator assistance systems also use vision to enhance task execution by projecting guidance overlays or enabling adaptive interaction with cobots. Modoni et al. [12] present a hybrid computer vision and augmented reality platform that provides contextual instructions during assembly tasks. This approach significantly reduced cognitive load and physical exertion, especially in high-mix low-volume (HMLV) environments. Such vision-guided assistance platforms support adaptive collaboration, allowing workers to focus on decision-making while delegating repetitive or physically demanding actions to AI-driven support tools. These innovations demonstrate how computer vision aligns with Industry 5.0 values by augmenting rather than replacing human effort, and by promoting well-being and inclusion in industrial workflows [18].

2.4. Integration into Cyber-Physical Systems

Vision systems form a crucial component of cyber-physical production systems (CPPS), allowing a seamless bridge between physical actions and digital representations. In advanced CPPS architectures, vision modules act as perceptual interfaces that continuously capture spatial and semantic information about the production environment. This allows real-time synchronization between human activity, machine state, and digital twin models [19]. For example, Zhao et al. [20] proposed a system where vision-based gesture recognition and context-aware monitoring allow machines to adapt behavior dynamically, responding to operator intent and environmental changes in milliseconds.
Moreover, Cuéllar et al. [21] implemented a computer vision layer within a modular CPPS framework that allowed production systems to identify component mismatches, detect bottlenecks, and optimize process flow. These systems are particularly effective in reconfigurable or human-interactive manufacturing cells, where predefined rules are insufficient due to variability and human intervention. Vision-enhanced CPPS can also support self-diagnosis, predictive maintenance, and contextual awareness, thereby reinforcing the adaptability and resilience central to Industry 5.0 paradigms. When paired with digital twins and AI reasoning engines, vision data can guide autonomous decision-making that includes the human-in-the-loop, rather than excluding it.

2.5. Tactile Sensing and Multimodal Perception

Vision-based systems, while powerful, often face limitations when dealing with deformable, transparent, or occluded objects—conditions frequently encountered in food manufacturing [22]. To overcome these challenges, researchers are increasingly integrating tactile, force, and proximity sensors with vision modules to form multimodal perception systems. For example, soft robotic grippers equipped with embedded force sensors and artificial “skins” can measure compliance, detect slippage, and respond to minute physical variations in contact dynamics [23]. These capabilities enable real-time correction during delicate operations such as dough manipulation, where visual cues may not fully capture texture or resistance changes.
In collaborative settings, combining visual and tactile feedback allows robots to estimate interaction intent and adjust behavior accordingly, supporting safer and more intuitive human–robot interaction. Recent developments include sensor fusion frameworks that integrate RGB-D imaging with haptic and inertial signals to improve object classification, grasp stability, and motion prediction. Such systems are particularly relevant in semi-structured environments like food factories, where unpredictability and variability are high. The fusion of modalities not only increases task robustness but also reflects Industry 5.0’s emphasis on systems that adapt to and collaborate with humans rather than operate independently.

2.6. Challenges of Vision in Food Manufacturing

Machine vision systems face substantial technical and practical challenges in food manufacturing environments [24]. Products are typically non-rigid, exhibit high intra-class variability, and may be affected by moisture, temperature, or handling-induced deformation. For instance, dough products may change shape during fermentation or transport, making geometric-based inspection unreliable. Additionally, lighting conditions, conveyor speed, and product overlap introduce dynamic variability that complicates real-time detection and classification. These factors reduce the performance of traditional computer vision algorithms trained on rigid, uniform datasets [22].
Moreover, food safety and hygiene constraints limit the use of physical markers or invasive sensing, increasing reliance on passive, vision-only solutions. Occlusions, specular reflections, and transparency (e.g., in packaging films or glazes) further degrade image quality. To address these limitations, adaptive learning methods, multispectral imaging, and context-aware segmentation techniques are being explored. However, few of these solutions have been robustly validated in real industrial settings, particularly in flexible or semi-structured environments. This underscores the need for applied domain-specific approaches that can operate reliably under such constraints, an issue that this review seeks to address. As shown in Table 1, frozen dough production combines semi-structured environments with high product variability, which increases vision system complexity.

2.7. Research Gap in Real-World, Human-Centric Applications

Although these studies illustrate the technical potential of vision-enabled manufacturing, most are developed and tested in structured, high-precision industries such as automotive or pharmaceuticals [25]. In contrast, domains such as food manufacturing present challenges, including high product variability, frequent human intervention, and unstructured environments. These factors limit the transferability of current solutions and highlight a research gap in developing practical, explainable, and robust implementations for human-centric factories [26]. To address this gap, our work investigates the deployment of a deep learning-based vision inspection system in a real-world frozen dough production facility. This study aims to evaluate how such systems can enhance efficiency, quality, and worker support in a complex and dynamic manufacturing environment.
While prior reviews have explored machine vision in general food processing contexts, few studies focus explicitly on frozen dough or ready-to-bake products, which present unique technical and operational demands. Furthermore, limited attention has been given to the integration of vision systems with AR interfaces and Human Digital Twins in these environments. This paper addresses that gap by combining thematic analysis with a perspective rooted in the frozen dough industry [27].

3. Review Strategy and Sources

This study employs a Systematic Literature Review (SLR) methodology to explore and synthesize the current body of scientific knowledge regarding the role of machine vision in Industry 5.0, particularly within the context of human-centric manufacturing environments. The qualitative evaluation of the selected articles was conducted by two independent reviewers with expertise in computer vision and manufacturing systems. Articles were assessed based on predefined thematic clusters and relevance to machine vision applications in human-centric environments. Disagreements were resolved through discussion. Tools included keyword clustering via VOSviewer (a software tool for constructing and visualising bibliometric networks, used during April–May 2025; https://www.vosviewer.com/, accessed on 20 August 2025) and manual content analysis using a shared review matrix. The review is structured to address two key research questions:
  • What is the role of machine vision in enhancing human–machine collaboration within the frozen dough manufacturing environment?
  • What are the applications of machine vision that can contribute to improved safety and productivity in human-centric factories?
These research questions provided the foundation for the article search strategy and the analytical framework of the review.

3.1. Stage I—Review Design

This initial stage defined the review scope and ensured methodological rigor by following an approach composed of three main stages and eight phases.

3.1.1. Phase 0—Identifying the Need for the Review

The increasing complexity of food industry operations and the need to comply with strict quality and safety standards have made machine vision technologies indispensable. By enabling real-time visual inspection and decision-making, machine vision supports quality control and process optimisation, reducing errors and enhancing food hygiene compliance [28]. Within the framework of Industry 5.0, the development of human-centric factories, production units that integrate technology with human well-being and ergonomics, is imperative. These factories promote cooperative work environments in which smart systems and robotics reduce physical strain while enhancing overall efficiency [29]. In this context, machine vision serves as the “eyes” of the smart factory, providing data that feeds into automated decision-making processes and reduces the need for human involvement in high-risk operations [30]. Given the rapid evolution of machine vision systems, a systematic review is essential to consolidate recent innovations and assess how these technologies are shaping the future of food manufacturing. Furthermore, the human-centered design perspective proposed by Industry 5.0 is examined for its influence on the deployment and implementation of such technologies [31].

3.1.2. Phase 1—Formulating the Review Proposal

In this phase, the review protocol was defined to ensure objectivity and validity. The review focuses on machine vision technologies in the food industry and their impact on safety, productivity, and human–machine interaction. The databases used for article retrieval included IEEE Xplore, ScienceDirect, ACM Digital Library, and Scopus. Studies were selected based on recency (published between 2017 and 2024), relevance to food manufacturing, and alignment with Industry 5.0 principles.
This systematic literature review was conducted in accordance with the PRISMA 2020 guidelines. A completed PRISMA checklist and flow diagram are provided in the Supplementary Materials.

3.1.3. Phase 2—Developing the Review Protocol

To ensure impartiality and reliability in the review process, the PRISMA protocol (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) was adopted. PRISMA is an internationally recognized framework that supports the improvement of the quality of systematic reviews and meta-analyses. It provides guidelines for maintaining transparency and methodological rigor, minimizing the risk of bias, and enhancing the reliability and reproducibility of research findings. This protocol includes a sequence of steps for defining and applying inclusion and exclusion criteria, selecting and evaluating sources, and synthesizing findings into analyses. By adhering to PRISMA guidelines, the review ensures comprehensive and methodical coverage of the research topic.

The review protocol comprises multiple stages that facilitate a systematic and accurate analysis of the data. Initially, key research questions were defined to guide the review process. Subsequently, a source search was conducted using specific search terms and keywords to collect relevant literature from reputable academic databases. The selection and evaluation of sources were based on predefined inclusion and exclusion criteria, focusing on pertinent studies of human–machine collaboration and productivity in the food industry. Quality assessment was conducted, considering the validity and relevance of the data in accordance with PRISMA standards. This systematic approach ensures that the review is both comprehensive and grounded in credible sources, allowing for an organized synthesis and comparison of findings to answer the research questions with maximum precision and detail. Although no duplicate entries were identified during the search, a deduplication step was performed to ensure dataset integrity.

3.2. Stage II—Conducting the Review

Stage II consists of five phases focusing on the systematic collection, selection, evaluation, and synthesis of data from relevant studies. By applying strict criteria, the relevance and quality of sources were ensured, and the data were organized to provide accurate answers to the research questions.

3.2.1. Phase 3—Research Identification

This phase marks the beginning of data collection for the systematic review. The previously formulated research questions led to the development of appropriate search terms to identify relevant studies and articles. The search terms were chosen to comprehensively cover domains such as machine vision, robotic vision, computer vision, human-centric factory, smart manufacturing, frozen dough, baking industry, and the food industry. These keywords were combined using Boolean operators to form robust and effective search queries. For instance, search strings included expressions such as: (“computer vision” OR “machine vision” OR “robotic vision”) AND (“human-centric factory” OR “smart manufacturing” OR “frozen dough” OR “baking industry” OR “food industry”). The collection and selection of studies involved screening titles, abstracts, and keywords to identify relevant studies. Articles that did not meet the predefined inclusion criteria were excluded. Specifically, studies not written in English or those that were not peer-reviewed were removed. Those that met the requirements proceeded to Phase 4 for further quality and relevance assessment. This process ensured that the sources included in the review were directly related to the research questions and provided a solid foundation for exploring computer vision in human-centric food industry environments.

3.2.2. Phase 4—Study Selection

Following the identification of relevant studies in Phase 3, the selection process focused on applying strict criteria to assess the quality and relevance of the data to be used in the review. A total of 2186 studies were identified from four major academic databases. Figure 1 presents the distribution of articles per database, with the majority sourced from Scopus and IEEE Xplore.
The selection of studies was based on well-defined inclusion and exclusion criteria. The selected studies met the following criteria:
  • Topical relevance: Articles had to focus on or be directly related to the role of computer vision in human-centric frozen dough manufacturing environments.
  • Publication timeframe: Studies published between 2017 and 2024 were selected to ensure up-to-date findings.
  • Language: Only English-language publications were included to ensure consistency and comprehensibility in the review process.
  • Full-text availability: Only articles with full access were included, to allow for in-depth reading and analysis.
After applying these criteria, the most relevant and high-quality studies were retained. This process narrowed down the pool to a refined set of articles that offered reliable data and were capable of answering the research questions comprehensively and accurately (Table 2).

3.2.3. Phase 5—Quality Assessment

The quality assessment is a critical step in the systematic review process, ensuring that the selected articles meet high standards of scientific validity and are directly relevant to the research questions. One of the primary criteria is the clarity of research objectives: the article must clearly articulate the study’s purpose and provide transparent, well-defined goals. Methodological rigor is evaluated based on the presentation of a comprehensive methodology, including a clear description of the analytical techniques and procedures used, particularly in the context of computer vision applications in human-centered environments.
The reliability of the findings is assessed by the presence of verifiable data and statistical evidence supporting the results. Relevance to the research questions is also examined to ensure that the studies focus on the scope of the review. Furthermore, the conclusions are evaluated for coherence and completeness, ensuring that they provide clear answers to the research questions identified. Only articles that satisfied these criteria were retained for analysis, thereby guaranteeing the validity and reliability of the review’s conclusions.

3.2.4. Phase 6—Data Extraction and Monitoring

The data extraction phase focuses on collecting and recording relevant information from the selected studies to comprehensively and accurately address the research questions. To facilitate this process, a structured form was developed to enable systematic gathering and documentation of each study’s key characteristics. The form included essential information, such as the reference and year of publication, for citation and bibliography purposes. It also captured the study objectives and research questions, with an emphasis on their alignment with issues related to human collaboration and safety in the food industry.
Additionally, the methodology and computer vision techniques employed, such as object detection and image analysis (e.g., YOLO algorithms for object recognition), were documented. The main findings and conclusions were recorded to highlight their implications for safety, productivity, and human–machine collaboration. The studies were also analyzed in terms of their relation to the Industry 5.0 principles, particularly in the application of computer vision to human-centric industrial environments. The data extraction process was organized using electronic spreadsheets (e.g., Excel, with columns for Author, Year, Techniques, and Findings), ensuring systematic and accurate analysis and comparison.
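As a minimal illustration of such a review matrix, the sketch below mirrors the extraction form in pandas rather than a spreadsheet; the sample row is a placeholder, not an actual study from the corpus.

```python
# Minimal sketch of the shared review matrix described above, using pandas
# instead of Excel; column names mirror the extraction form. The sample
# row is a hypothetical placeholder.
import pandas as pd

matrix = pd.DataFrame(columns=["Author", "Year", "Techniques", "Findings"])
matrix.loc[len(matrix)] = ["Example et al.", 2023, "YOLO object detection",
                           "Improved defect detection on a packaging line"]

# Grouping by technique supports the clustering used in the synthesis phase
print(matrix.groupby("Techniques").size())
```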

3.2.5. Phase 7—Data Synthesis

Following data extraction, the next phase focused on synthesizing the findings from the selected studies. As shown in Table 1, this synthesis involved grouping and categorizing the results to effectively address the research questions and provide a holistic view. Computer vision applications were classified into core areas, including quality control, food safety, production automation, and support for human–machine collaboration. Each category encompasses applications designed to serve specific objectives, aiming to enhance efficiency and safety in food production. The main findings from the studies were organized and clustered to offer concrete answers to the research questions. For example, applications such as defect detection and quality monitoring were found to be crucial in improving safety. At the same time, classification and product recognition technologies contributed to increased efficiency and precision in production processes. The synthesis of results was conducted through qualitative analysis, presenting the findings in a manner that facilitates understanding of how computer vision contributes to the food industry. This approach enabled the comparison of different technological strategies and highlighted the advantages of each application.

3.3. Stage III—Reporting and Dissemination of Results

Stage III focuses on the reporting and dissemination of the findings from the systematic review through a comprehensive presentation and analysis of the results. In this stage, clear answers to the research questions are articulated, while key themes emerging from the study are highlighted. Furthermore, this stage includes suggestions for future research and practical applications, emphasizing the continuous advancement of computer vision technologies and the enhancement of human–machine collaboration within the food industry.

Phase 8—Reporting and Recommendations

In the final phase of the review, findings are presented in a detailed report that offers explicit answers to the research questions and proposes directions for future research and implementation. The report is structured into two main components: a descriptive analysis and a thematic analysis.

The descriptive analysis provides an overview of the current landscape of computer vision applications in the food industry. It includes the classification of applications into categories such as quality control, product safety, and automation, as well as an examination of the temporal evolution of related technologies. This analysis delivers a comprehensive “mapping” of the field, identifying areas with the highest concentration of innovation and the primary domains of research activity.

The thematic analysis delves into key issues revealed through the review of the data. These include the contribution of computer vision to enhancing worker safety, improving productivity through automation, and strengthening human–machine collaboration. This part of the analysis offers a deeper understanding of how computer vision technologies positively impact the food industry and support the principles of Industry 5.0, with a specific focus on human-centered approaches.

Based on the findings, several recommendations are made for future research and practical deployment. Research may focus on emerging computer vision technologies, particularly the integration of artificial intelligence and deep learning, to further enhance performance and accuracy. Additionally, expanding human–machine collaboration remains a crucial area for further development, especially in environments that demand high levels of safety and precision. Finally, computer vision applications are highly recommended for enhancing quality control, particularly in detecting micro-defects and addressing the increasing demands for hygiene and safety in food products. The Table titled Catalog of Scientific Studies and Application Overview—Categories, Methods, and System Integration in Industrial Contexts is provided in Appendix A.

4. The Frozen Dough Industry

The frozen dough industry focuses on the large-scale production of ready-to-bake dough products that meet various consumer needs, including convenience, quality, and consistency. The frozen dough production process involves a series of sequential stages, from raw material inspection to final quality control. A simplified representation of the overall workflow is shown in Figure 2. Technological advancements in production have transformed the sector, establishing it as a cornerstone of the global bakery industry. Production processes in this industry emphasize efficiency, scalability, and maintaining dough quality throughout the entire manufacturing, freezing, and distribution process.

High-quality raw materials are crucial for producing frozen dough. Flours, yeasts, sugars, fats, and water are sourced and blended using industrial mixers to ensure uniform consistency. The mixing process is tightly controlled, as deviations can impact the texture, flavor, and baking performance of the final product. Fermentation plays a critical role in developing the dough’s structure and flavor. In frozen dough production, this process is often modified or shortened to accommodate freezing requirements. Partial fermentation is commonly employed, ensuring the dough remains stable during freezing while retaining its ability to rise after thawing and baking.
Following mixing and partial fermentation, the dough is shaped and portioned into standardized sizes and forms. Automated systems are typically used for these operations, ensuring uniformity, a key factor in achieving consistent baking results in subsequent stages. Freezing is one of the most critical stages in the production of frozen dough. Rapid freezing methods, such as air-blast freezing, are used to preserve the dough’s structure and halt yeast activity without compromising quality. Proper freezing techniques enable long-term storage while maintaining the dough’s functional properties. After freezing, the dough is vacuum-packed or sealed in airtight packaging to prevent degradation. Packaging is designed not only to preserve quality but also to provide clear consumer instructions, including baking temperatures and times, ensuring a seamless baking experience.
Throughout the production process, strict quality control measures are implemented to ensure consistency and safety. Techniques such as visual inspection and texture analysis are frequently employed to maintain quality standards. At the same time, more advanced methods, such as hyperspectral imaging, are being increasingly adopted to assess moisture content, elasticity, and the presence of contaminants. These measures help uphold high manufacturing standards, ensuring the dough meets the expectations of both producers and consumers.
Mass production of frozen dough offers numerous advantages. Centralized production reduces costs through economies of scale, automation of labor-intensive processes, and minimization of waste. For consumers and food service providers, frozen dough offers consistent quality and convenience, allowing for on-demand baking without the need for extensive preparation. Nevertheless, the industry faces challenges such as maintaining product quality during freezing and thawing, optimizing production efficiency, and responding to consumer preferences for healthier and more sustainable products. Manufacturers are addressing these challenges through innovative approaches, including the use of enzyme-based dough improvers and sustainable packaging solutions, to remain competitive in an evolving market.

4.1. Machine Vision Techniques in Frozen Dough Production

This section examines how machine vision systems are utilized across various production stages, highlighting their role in optimizing processes and ensuring adherence to quality standards. Specifically, this section analyzes the use of machine vision across the eight main stages of frozen dough manufacturing: (1) raw material inspection, (2) mixing and kneading, (3) shaping and forming, (4) freezing, (5) packaging, (6) quality control, (7) human–robot collaboration, and (8) traceability. Each of these stages presents distinct challenges and requirements for visual sensing and AI-enabled assistance. In the following subsections, we examine each stage in detail, discussing relevant vision applications, typical system configurations, and their alignment with human-centric and Industry 5.0 principles.

4.1.1. Raw Material Inspection

The inspection of raw materials is a critical step in frozen dough manufacturing, ensuring product quality, safety, and consistency. Computer vision systems, integrating technologies such as hyperspectral imaging (HSI) and machine learning, have emerged as transformative tools for automating quality assessment. These systems provide a non-destructive, rapid, and accurate method for evaluating raw materials, enabling manufacturers to maintain high production standards.
Contamination by foreign materials is a significant risk in the food industry, compromising both safety and product integrity. Hyperspectral imaging has proven highly effective in detecting foreign objects that may be visually or physically similar to raw ingredients. In cereal processing, HSI systems capture both spatial and spectral data simultaneously, enabling the identification of contaminants and ensuring a cleaner product flow. Furthermore, machine learning-enhanced HSI improves detection accuracy by classifying contamination levels and visualizing affected areas through pseudo-color maps. This approach has demonstrated particular efficacy in detecting fungal contamination in rice grains, where support vector machine (SVM) algorithms using Gaussian distributions achieved classification accuracies above 93% [32].
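For illustration, the following sketch shows the general shape of such a pixel-spectrum classifier, an RBF-kernel SVM over hyperspectral bands; the data here are synthetic stand-ins, not the cited rice-grain dataset, and the band count is an arbitrary assumption.

```python
# Illustrative sketch (not the cited pipeline): classifying HSI pixel
# spectra as clean vs. contaminated with an RBF-kernel SVM. X stands in
# for an (n_pixels, n_bands) matrix of reflectance spectra.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 120))                  # synthetic 120-band spectra
y = (X[:, 40:60].mean(axis=1) > 0).astype(int)   # synthetic contamination label

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X, y)

# Per-pixel predictions can be reshaped back to image space to render the
# pseudo-color contamination maps mentioned above.
pixel_labels = clf.predict(X)
```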
Consistency in the quality of ingredients, such as flour and yeast, is essential for producing frozen dough with a uniform texture and flavor. HSI facilitates the evaluation of cereal quality by analyzing parameters like moisture content, fungal contamination, and chemical composition. Fourier-transform mid-infrared (FT-MIR) spectroscopy, combined with chemometric methods, has been effective in classifying flour types, achieving 100% accuracy in identifying flour samples using models such as support vector machines (SVM) and artificial neural networks (ANN) [33]. HSI has also proven effective in classifying grains based on microstructure and biochemical composition, enabling non-destructive assessment of protein, starch, and moisture levels in wheat flour [34]. These systems have been used for rapid contamination detection in cereals, offering visual representations of spoilage levels and enabling early identification of nutritional degradation in wheat and other grains [35].
HSI’s ability to simultaneously capture spatial and spectral data makes it a valuable tool for real-time monitoring of these parameters. During processing, its capacity to detect fluctuations in moisture and chemical composition enables manufacturers to perform immediate adjustments. This real-time capability not only improves the consistency of the final product but also reduces waste and improves operational efficiency [34].
In a recent study, Kang et al. [36] developed a predictive model for the chemical properties of wheat flour during aleurone removal using machine learning and seed imagery. Their research focused on analyzing seed chemical traits over various removal intervals to forecast flour quality. The developed model reliably predicted ash content but faced challenges in predicting starch and protein content.
Despite extensive research on HSI-based food quality control, methods for certifying whole grain content in bread are lacking. To address this gap, a novel methodology was developed that combines HSI with chemometric tools to certify the whole grain flour content in bread. The proposed method, Quantification by Pixel Count with Classification (QPC), leverages the heterogeneity of bread samples containing flour mixtures to estimate whole grain ratios [37].
The QPC method is based on a binary classification model that categorizes image pixels as either whole grain or white flour based on their spectral signature. The model is trained using bread samples prepared with 100% whole grain or 100% white flour. Then it is applied to bread samples of unknown composition, estimating the content of the whole grain based on the proportion of pixels classified as whole grain [37].
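A minimal sketch of the QPC logic, assuming a pixel-level binary classifier has already been trained on the two pure-flour classes, might look as follows; the function interface is hypothetical.

```python
# Minimal sketch of the QPC idea: apply a trained binary pixel classifier
# (1 = whole grain, 0 = white flour) to every spectrum in a hyperspectral
# cube and take the whole grain fraction as the pixel-count ratio.
# "classifier" is any fitted model exposing predict(); the interface is
# a hypothetical stand-in for the cited chemometric model.
import numpy as np

def qpc_whole_grain_ratio(classifier, hsi_cube: np.ndarray) -> float:
    """hsi_cube: (H, W, bands) hyperspectral image of a bread sample."""
    h, w, bands = hsi_cube.shape
    labels = classifier.predict(hsi_cube.reshape(-1, bands))
    return float(np.mean(labels))  # fraction of pixels labelled whole grain
```

Under the reported deviation bound, calling this on a sample baked with a 30% whole grain mixture should return a value near 0.3.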
Results demonstrated that the quantification model could successfully predict whole grain content with a maximum deviation of 8 g of whole grain flour per 100 g of total flour from the actual value. The method performed well regardless of the type of cereal used in breadmaking. Given common commercial whole grain flour ratios (30%, 50%, and 70%), the QPC method offers significant potential to address the current lack of official certification techniques for whole grain composition in bread [37].
Computer vision systems offer several advantages over traditional quality control methods. First, they are non-destructive, allowing raw materials to remain intact during inspection—a crucial factor in the food industry. Second, they provide rapid and accurate evaluations, making them suitable for high-throughput industrial environments. Third, HSI is environmentally friendly, eliminating the need for chemical agents commonly used in traditional testing. Finally, the adaptability of vision systems enables their integration into in-line production processes, allowing continuous monitoring and control [32,34].

4.1.2. Fermentation and Dough Preparation

The stages of fermentation and dough preparation in frozen dough manufacturing require meticulous attention, as they form the foundation for the texture, flavor, and overall quality of the final baked product. Fermentation involves the metabolic activity of yeast, which converts carbohydrates into carbon dioxide and ethanol, forming gas bubbles that expand and structure the dough. This biochemical process governs dough rise, elasticity, and moisture retention, key attributes for producing high-quality baked goods. Proper monitoring and control of fermentation conditions, such as temperature and humidity, are crucial for optimizing dough properties for freezing and subsequent baking [38].
Traditional methods of assessing dough fermentation, often manual and subjective, have been replaced by automated, non-destructive techniques that enhance accuracy and consistency. RGB imaging, combined with machine learning algorithms such as YOLOv8s, enables real-time surface analysis of dough, including its area, texture, and uniformity. These data are processed by advanced models, such as stacked ensemble models (SEM), to classify the fermentation status into under-fermented, optimally fermented, or over-fermented. Such automated systems achieve classification accuracies of up to 83%, facilitating precise adjustments to fermentation conditions and ensuring product uniformity [16,38].
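To make the two-stage pipeline concrete, the sketch below pairs an Ultralytics YOLOv8 detector with a generic three-class classifier standing in for the cited stacked ensemble; the weight file name and the region features are assumptions, not the published configuration.

```python
# Hedged sketch of the two-step pipeline described above: YOLOv8s localizes
# the dough surface, then simple region features feed a fermentation-status
# classifier. "dough_yolov8s.pt" is a hypothetical fine-tuned weight file,
# and "ensemble" stands in for the cited stacked ensemble model (SEM).
import numpy as np
from ultralytics import YOLO

detector = YOLO("dough_yolov8s.pt")  # assumed fine-tuned weights

def fermentation_status(frame, ensemble):
    result = detector(frame)[0]
    if len(result.boxes) == 0:
        return "no dough detected"
    x1, y1, x2, y2 = result.boxes.xyxy[0].int().tolist()
    roi = frame[y1:y2, x1:x2]
    # Crude area/texture proxies; the real SEM uses richer surface features
    features = np.array([[roi.mean(), roi.std(), (x2 - x1) * (y2 - y1)]])
    label = int(ensemble.predict(features)[0])  # assumed labels 0/1/2
    return ["under-fermented", "optimally fermented", "over-fermented"][label]
```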
Dough preparation also benefits from precise mixing and portioning technologies that incorporate machine learning to ensure consistent ingredient distribution. Deep learning models analyze image-based data to predict and standardize dough texture and volume, optimizing its readiness for freezing. The integration of these technologies reduces variability in production and facilitates scaling of frozen dough manufacturing [39,40].
Recently, a computer vision system was developed to classify fried dough products based on their furan content, a potentially carcinogenic compound. The system used image data to extract color and texture features, which were then input into classification models. The results showed that the system could accurately classify samples into low and high furan content categories, achieving 95% accuracy using only eight image features [40,41].
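The study's exact eight features are not enumerated here, but a comparable color-plus-texture feature vector can be extracted as in the following sketch, using mean channel values and common GLCM descriptors as stand-ins.

```python
# Illustrative feature extraction in the spirit of the furan study above:
# eight color and texture features per image, suitable as input to any
# scikit-learn classifier. These are common stand-ins, not the cited
# feature set.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops

def color_texture_features(rgb: np.ndarray) -> list:
    gray = (rgb2gray(rgb) * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, normed=True)
    return [
        rgb[..., 0].mean(), rgb[..., 1].mean(), rgb[..., 2].mean(),  # mean R, G, B
        graycoprops(glcm, "contrast")[0, 0],
        graycoprops(glcm, "homogeneity")[0, 0],
        graycoprops(glcm, "energy")[0, 0],
        graycoprops(glcm, "correlation")[0, 0],
        float(gray.std()),                                           # intensity spread
    ]
```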
These innovations highlight the significance of intelligent systems in addressing traditional challenges in dough production. Specifically, they reduce the subjectivity associated with manual evaluation processes and facilitate the adaptation of production operations for industrial scale. As the frozen dough industry continues to expand, leveraging these technologies becomes critical for meeting demand for high-quality, consistent products while optimizing operational efficiency.

4.1.3. Product Shaping

Product shaping is a critical stage in the frozen dough production process, ensuring uniformity in size, weight, and form, factors that directly affect both product quality and process efficiency. Automated dough-shaping systems, often incorporating robotic arms and advanced sensor integration, have transformed traditional manual shaping methods. These systems improve consistency and reduce labor costs, particularly in large-scale production environments. Robotic systems equipped with tactile sensors and computer vision technologies are increasingly used to manage dough, a material known for its deformability and non-uniform behavior. Vision-guided robotic solutions employ rolling and shaping techniques to achieve precise dough forms. These systems rely on data from RGB-D cameras and haptic feedback to dynamically adjust force and movement, minimizing structural damage to the dough while achieving the desired shape [42,43].
An additional advancement in product shaping is the use of multi-sensor systems that combine visual and tactile data to optimize dough handling and forming. These technologies enable robots to analyze physical dough properties, such as elasticity and moisture content, and to adjust shaping techniques accordingly. This results in enhanced production throughput and improved efficiency in creating frozen dough products [44]. The application of computer vision in shaping also extends to real-time quality control. Vision-based systems can detect shape or size anomalies and adjust the process to maintain uniformity. This integration not only ensures product consistency but also minimizes defects, reducing resource waste and enhancing overall production efficiency [45,46]. In summary, the adoption of advanced robotics and computer vision in dough shaping has significantly improved the accuracy and reliability of frozen dough production processes. By leveraging these technologies, manufacturers can achieve greater product consistency, reduce operational costs, and meet the high standards demanded in the food industry.

4.1.4. Filling and Assembly

Filling and assembly processes in frozen dough production require meticulous control to maintain product quality, consistency, and operational efficiency. Automation is increasingly integrated into these stages to address challenges related to precision and uniformity. Computer vision systems enable automated product inspection through techniques such as surface analysis, defect detection, and measurement of geometric characteristics, significantly enhancing quality control in industrial applications. Robotic systems equipped with tactile sensors and advanced vision technology have revolutionized filling and assembly operations. These technologies enable robots to detect variations in dough consistency or shape and dynamically adjust their actions to minimize structural damage and ensure uniform filling distribution [47]. Multi-functional sensor systems further enhance the accuracy of shaping and filling, significantly reducing material waste during production [48]. A key challenge in filling processes is ensuring uniformity, particularly when working with high-viscosity ingredients or components that have varying physical properties. Automated filling machines, combined with real-time monitoring systems, effectively address this issue by delivering consistent filling across products. These systems optimize production and integrate smoothly with other stages such as folding, rolling, and sealing, ensuring a high-quality final product with improved operational throughput [47]. The assembly stage also benefits significantly from advancements in automation. Tasks such as folding and sealing, which require delicate manipulation, are performed by robotic systems designed to handle complex product designs without compromising quality. According to [49], autonomous assembly systems that utilize learning from demonstration and integrate robotic skills can significantly reduce setup time and adaptation costs in highly flexible production environments. The role of computer vision in this context is to detect deviations in shape or size and provide immediate feedback for corrective action. This ensures consistency and minimizes waste even in high-volume production settings [44].
Improvements in these processes have also been highlighted in studies such as that of Ylikoski [50], who analyzed production line bottlenecks. By implementing targeted modifications, such as enhanced blade-cutting mechanisms, manufacturers were able to reduce material loss and improve operational reliability. Although the study focused on a specific production line, the findings emphasize how targeted machine upgrades can enhance overall process performance. The integration of advanced automation and quality control systems has transformed the filling and assembly stages of frozen dough production. These technologies address longstanding challenges, enabling manufacturers to scale operations efficiently while maintaining high product quality standards.

4.1.5. Parbaking

Parbaking significantly influences crumb stability and moisture migration across different layers of bread, ultimately affecting both the overall quality and shelf life of the product during storage [51]. Modern computer vision systems, which leverage advanced image processing and deep learning technologies, play an increasingly important role in enhancing food quality and safety throughout all stages of production. These systems enable continuous monitoring and inspection, thereby improving quality assurance processes [52].
Computer vision systems collect image data through various methods and perform numerous tasks, including quality inspection, classification of agricultural products, detection of foreign objects, and crop monitoring. Specifically in the food industry, these systems can measure product parameters such as size, weight, shape, texture, and color, capturing details often imperceptible to the human eye [52]. A notable example of such technology is an adaptive bread-making system that employs machine learning to optimize the baking process. This system integrates a range of sensors, including a temperature sensor (MAX6675), an ethanol sensor (MQ3), and a distance sensor (GP2Y0A02YK0F), along with a high-speed camera (ELP-USBFHD08S-MFV) for capturing RGB images, depth data, and the Sum of Pixel Grayscale Values (SPGV). These sensors provide critical real-time data for monitoring and controlling the baking process.
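The SPGV itself is straightforward to compute from a camera frame; a minimal OpenCV sketch is given below, with the calibration linking SPGV to crust browning left out as product-specific.

```python
# Minimal sketch of the Sum of Pixel Grayscale Values (SPGV) mentioned
# above, computed from a BGR camera frame with OpenCV. Thresholds tying
# SPGV to browning state would be calibrated per product and are omitted.
import cv2
import numpy as np

def spgv(frame_bgr: np.ndarray) -> int:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return int(gray.sum())  # scalar proxy for overall surface brightness

# Logging spgv() per frame yields a browning curve that a Baking Process
# Prediction Model can regress against alongside the other sensor streams.
```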
Machine learning algorithms enable the development of Baking Process Prediction Models (BPPMs), which dynamically adjust baking parameters based on sensor and vision data. This results in notable improvements in product quality, such as an increase in loaf volume and in specific crumb density [36]. Furthermore, the combination of deep learning and computer vision technologies facilitates automated bread quality assessment, including color and texture analysis.

4.1.6. Freezing Process

In frozen dough production, the freezing stage is critical for preserving product quality, texture, and shelf life. Robotic systems equipped with vision technologies and sensors enhance this process by enabling precise handling and minimizing damage during the transition to freezing. These robots can dynamically adjust their operations in real-time, modifying parameters such as airflow and temperature to maintain the desired dough properties throughout storage [53]. The integration of computer vision systems into freezing operations enables the detection of defects, such as cracks or variations in thickness, which may compromise the structural integrity of the final product [54]. Real-time quality control ensures that only products meeting specifications proceed to subsequent production stages.
Cryogenic freezing methods, which utilize gases such as nitrogen or carbon dioxide for rapid temperature reduction, are increasingly applied in food production. These methods preserve product structure and moisture content. Robotic systems working in conjunction with cryogenic technologies ensure careful handling and uniform freezing, thereby minimizing the risk of deformation. Furthermore, collaborative robotic arms (cobots) operate alongside human workers, automating repetitive tasks such as loading and unloading dough into freezers. This reduces worker exposure to cold environments and enhances production safety. These innovations make the freezing process more efficient and better adapted to rising production demands.

4.1.7. Packaging

The packaging stage ensures that the frozen dough products are properly positioned, protected from external factors, and ready for distribution. The integration of computer vision and robotics has revolutionized this process, enabling the execution of complex tasks with greater precision and consistency. These systems play a crucial role in addressing challenges in the food industry, including maintaining hygiene, handling diverse packaging materials, and ensuring consistent product quality, and they support real-time monitoring and quality control. Using advanced imaging techniques, systems evaluate the integrity and uniformity of packaging materials, detect defects such as tears or misalignments, and ensure that labels are correctly placed and legible. 3D reconstruction based on structured light enables precise measurement of packaging dimensions, ensuring compliance with required specifications—particularly relevant in frozen dough packaging, where size and shape variations due to freezing may impact package fit and sealing [55]. Deep learning algorithms have further enhanced the capabilities of vision systems. Mask R-CNN, for example, has been successfully applied to identify packaging elements, including reflective or transparent materials often used in food products. These materials, which can cause overexposure or shadowing in conventional imaging, are effectively handled by adaptive deep learning models, ensuring accurate detection and classification [21,55].
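As an illustration of how such instance segmentation could be prototyped, the sketch below applies torchvision's off-the-shelf Mask R-CNN to a packaging image; a deployed system would be fine-tuned on package and label classes rather than the default COCO categories, and the 0.7 confidence gate is an arbitrary assumption.

```python
# Hedged sketch: off-the-shelf Mask R-CNN (torchvision) run on a packaging
# image. A production inspector would be fine-tuned on package/label/defect
# classes; here the pretrained COCO model only illustrates the interface.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def inspect_packaging(image):  # image: PIL.Image or HxWx3 uint8 array
    with torch.no_grad():
        out = model([to_tensor(image)])[0]
    keep = out["scores"] > 0.7          # illustrative confidence gate
    return out["masks"][keep], out["labels"][keep]
```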
Vision-guided robotic systems are essential for modern packaging line automation. These systems use vision to identify the orientation and position of packaging components and products, enabling precise handling and assembly. Robotic arms with integrated structured light and vision algorithms can adjust their movements in real time to account for misalignments or product shifting [16,55]. Flexible robotic arms effectively handle delicate or non-standard items during the packaging process. These arms use soft suction and flexible gripping techniques to manipulate frozen dough without damage. They are ideal for high-mix, low-volume production environments that demand greater adaptability. Moreover, their modularity facilitates integration into various configurations, enhancing operational flexibility [56]. Human–robot collaboration, supported by user-friendly interfaces and advanced safety systems, is a key factor in packaging automation. Collaborative robots (cobots) are designed to work alongside human operators, taking over tasks such as material placement and quality inspections. By handling repetitive and physically demanding tasks, cobots reduce the risk of errors and injuries, allowing human workers to focus on more complex aspects of the packaging process [22]. Despite these advancements, applying vision and robotics in packaging presents notable challenges. The diversity of packaging materials, which vary in opacity, reflectivity, and texture, requires ongoing adaptation of vision algorithms and end-effector designs. Furthermore, maintaining strict hygiene standards in food packaging environments demands robust cleaning and sterilization protocols for robotic systems, which can increase both operational costs and system complexity [56,57].

4.1.8. Quality Control

Quality control in the food industry is fundamental to ensuring product safety, quality, and compliance with both consumer expectations and regulatory standards. Machine Vision Systems (MVS) are now essential tools in modern quality control frameworks, providing non-invasive and efficient inspection methods that meet the increasing demands of the industry. Using technologies such as hyperspectral imaging, X-rays, and high-resolution cameras, MVS can collect detailed data on product size, shape, texture, and color. Hyperspectral imaging, in particular, extends the capabilities of traditional imaging by capturing data across multiple wavelengths, allowing for the detection of both internal and external defects, such as bruises on fruits or inconsistencies in dough structure [9,52]. X-ray imaging is also widely used for detecting foreign materials, such as glass, metal, or calcified particles, which contributes to food safety compliance [58]. In addition to vision-based systems, other non-invasive technologies are also used for food quality assessment, including spectroscopic methods (infrared, Raman, and terahertz), magnetic resonance imaging, and electronic sensors (e-nose and e-tongue). These methods enable rapid, non-destructive, and sensitive analysis of numerous food quality parameters [35,59].
The integration of advanced software algorithms has significantly expanded the functionality of MVS. Deep learning models, particularly Convolutional Neural Networks (CNNs), excel at automating defect detection and classification tasks. For example, CNNs trained on large datasets can autonomously detect and localize problem areas such as rot in apples or imperfections in dough, with minimal human intervention [52]. Size and dimensional analysis also play a central role in quality control, with techniques such as blob analysis used to compute parameters like perimeter, roundness, and aspect ratio. One example includes a vision system that achieved high accuracy in assessing complex geometries in additive manufacturing products, ensuring consistent results [40,43]. Such capabilities are particularly relevant to frozen dough production, where uniformity in shape and size is critical to downstream processes, such as packaging.
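As a concrete illustration of blob analysis, the following OpenCV sketch computes the shape descriptors mentioned above (perimeter, roundness, and aspect ratio) for each segmented dough piece; the file name, noise threshold, and pass/fail limits are hypothetical.

```python
# Illustrative blob analysis for dough-piece uniformity checks.
import cv2
import numpy as np

gray = cv2.imread("dough_line.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for i, c in enumerate(contours):
    area = cv2.contourArea(c)
    if area < 500:                      # ignore small noise blobs
        continue
    perimeter = cv2.arcLength(c, True)  # closed contour
    roundness = 4 * np.pi * area / perimeter**2   # 1.0 for a perfect circle
    x, y, w, h = cv2.boundingRect(c)
    aspect_ratio = w / h
    ok = roundness > 0.85 and 0.9 < aspect_ratio < 1.1
    print(f"blob {i}: roundness={roundness:.2f}, aspect={aspect_ratio:.2f}, pass={ok}")
```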
Intermediate-stage image processing, particularly segmentation and feature extraction, is equally crucial for isolating regions of interest (ROIs) for further analysis. Segmentation algorithms, such as watershed methods, have proven effective for distinguishing between defective and non-defective regions, e.g., detecting decay in fruit, with detection rates exceeding 99% [52]. Nevertheless, the large-scale application of MVS in food production presents ongoing challenges. The inherent variability in food products—including differences in texture, color, or shape—demands advanced and adaptable algorithms. Furthermore, the need for specialized maintenance and calibration of imaging systems increases operational complexity [1]. Despite this, advances in artificial intelligence, such as reinforcement learning and transfer learning, show promise. CNNs have been successfully applied to feature extraction and food quality classification, though there remains a need for larger datasets and improved model generalization [60].
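The marker-based watershed recipe referenced above can be sketched as follows; the distance-transform fraction and kernel size are indicative defaults rather than tuned values.

```python
# Sketch of marker-based watershed segmentation for isolating candidate
# defect regions (ROIs); parameters are illustrative.
import cv2
import numpy as np

img = cv2.imread("product.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Sure background via dilation, sure foreground via the distance transform.
kernel = np.ones((3, 3), np.uint8)
sure_bg = cv2.dilate(binary, kernel, iterations=3)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)

# Label markers and let watershed resolve the ambiguous (unknown) band.
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)
print("segments found:", markers.max() - 1)   # label 1 is the background
```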
In conclusion, the integration of machine vision systems into food industry quality control processes marks a significant technological advancement. By combining innovative imaging technologies with intelligent software algorithms, these systems reduce the need for manual inspection, minimize errors, and improve the accuracy and reliability of production workflows.

5. Results

5.1. Trends in Publications on Machine Vision Applications

The analysis of the selected articles highlights significant publication trends over the past decade. The year 2024 marked a peak in activity, with 24 published articles, reflecting the rapid advancements in machine vision technologies and their industrial applications [61]. This increase indicates the growing adoption of artificial intelligence (AI) and vision systems to enhance production processes and product quality. In contrast, 2018 recorded the fewest publications, with only two articles. During this period, Mark et al. [61] emphasized the contribution of machine vision to improving operator efficiency through the use of advanced automation technologies.
Topically, the publications focus on defect detection, automation, and human–robot collaboration. Fan et al. [62] highlighted the role of machine vision in facilitating “holistic scene understanding,” enabling mass customization in industrial environments through human–robot interaction. Concurrently, Sharma et al. [63] explored the use of vision systems for real-time monitoring and quality control, underscoring the potential of AI in defect detection. Further analysis indicates that machine vision applications primarily target areas such as quality inspection and automation, with a strong emphasis on productivity enhancement in industries like automotive manufacturing and food production. Leng et al. [64] described the implementation of machine vision in quality inspection processes, showcasing its capability to detect and correct defects autonomously.
Machine vision technologies have been categorized into six key application domains, as identified through a literature review. These categories reflect the diverse use of vision systems across industries, highlighting critical areas of focus (Figure 3). The largest category, AI-driven vision systems, accounts for 42.3% of the analyzed studies. These systems utilize deep convolutional neural networks to extract features from large volumes of data, outperforming traditional approaches in terms of object recognition and defect detection capabilities [40]. Extensively deployed in automated environments, AI enhances object identification, defect detection, and the effectiveness of decision-making. Vision systems supported by AI, such as deep learning networks, optimize defect detection and improve process efficiency, thereby strengthening quality assurance across various industrial sectors [62,65].
The second-largest category, quality inspection, represents 27.6% of the studies. Vision-based quality inspection significantly enhances efficiency and reliability while reducing labor-related costs and risks [46]. These systems utilize high-resolution cameras, AI, and cutting-edge computational technologies to automatically inspect products and provide real-time data throughout the production line [66]. They can detect defects, perform dimensional analysis, and make autonomous decisions based on collected data [46]. The technology proves especially effective in identifying surface defects, verifying compliance with production standards, and reducing error rates through automated inspections. Vision-based quality inspection systems have demonstrated their ability to improve product quality and productivity, thereby increasing overall manufacturing competitiveness [67]. These systems are particularly valuable in precision industries, where real-time defect detection ensures consistent quality throughout the production process.
Vision systems in the food industry account for 16% of the analyzed studies, reflecting their growing role in ensuring product safety and quality. Applications include contaminant detection, verification of packaging integrity, and optimization of production flows for frozen goods. These innovations help maintain high food production standards while minimizing waste. Real-time vision processing systems, representing 7.1% of applications, have transformed modern manufacturing environments. These technologies achieve remarkable processing speeds, reducing decision latency from 185 ms to 45 ms [20]. Real-time visual data analysis is carried out using high-resolution cameras and real-time processing capabilities [68]. In industrial settings, such systems provide continuous monitoring of production lines, delivering immediate feedback on quality and process parameters. This significantly reduces downtime and improves overall efficiency [69]. The technology is especially effective in scenarios requiring rapid decision-making, such as quality and process control, where immediate response to visual data is critical to maintaining production standards [70]. These systems can interpret complex visual data streams and adjust in real-time to ensure optimal production outcomes [68].
Human–robot collaboration, representing 3.8% of vision system applications, focuses on establishing effective interactions within industrial environments. The integration of IT, OT, AI, and human intelligence enables bidirectional, proactive, and globally optimized collaboration between human operators and robots [71]. Through vision-based systems and advanced sensory feedback, these collaborative setups achieve comprehensive situational awareness and adapt to dynamic production conditions via digital twin modeling approaches [62]. Such collaborative functions improve workplace safety and efficiency, although their deployment may be appropriate only in contexts where human–robot interaction can be sufficiently monitored and controlled [54]. These systems require capabilities for continuous learning and evolution to adapt to diverse tasks and dynamic manufacturing environments, supporting more flexible and human-centered production processes [72]. Furthermore, they employ perception technologies that enable natural task execution, where human operators make flexible decisions, and robots interpret their intentions, adapting their behavior through advanced sensing and control mechanisms [71].
Defect detection accounts for 3.2% of machine vision applications. These systems enable rapid analysis of visual data through high-resolution cameras and real-time processing, helping manufacturers identify and eliminate defects before they affect the final product [47]. They significantly contribute to achieving zero-defect manufacturing by providing continuous monitoring and real-time quality analysis through integrated visual assessment systems [13].

Bibliometric Analysis

A structured bibliometric methodology was employed to analyze the collected data. The dataset was derived from 85 articles selected for their relevance to the research scope. For each article, the title, abstract, and keywords were exported from a bibliographic database in Excel format.
Subsequently, Python 3.12.2 was employed for data preprocessing. Using libraries such as pandas 2.3.2, the dataset was cleaned by removing irrelevant information and duplicate entries. The CountVectorizer tool from scikit-learn 1.7.1 was used to extract the most frequent keywords, while the NLTK library facilitated stopword removal and overall text cleaning. After preprocessing, the data were imported into VOSviewer for further analysis. Two files were generated: a map file, which contains the keywords and their corresponding cluster allocations, and a network file, indicating the co-occurrence links among the keywords.
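A condensed version of this preprocessing pipeline is sketched below; the column names and file paths are illustrative, as the exact export format depends on the bibliographic database used.

```python
# Sketch of the keyword-extraction step feeding the VOSviewer analysis;
# "Title", "Abstract", "Keywords", and the file name are hypothetical.
import nltk
import pandas as pd
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import CountVectorizer

nltk.download("stopwords", quiet=True)

df = pd.read_excel("exported_records.xlsx")
df = df.drop_duplicates(subset="Title").dropna(subset=["Abstract"])

corpus = df["Title"] + " " + df["Abstract"] + " " + df["Keywords"].fillna("")
vectorizer = CountVectorizer(stop_words=stopwords.words("english"),
                             lowercase=True, max_features=200)
counts = vectorizer.fit_transform(corpus)

# Most frequent terms; co-occurrence clustering is then done in VOSviewer.
freq = pd.Series(counts.toarray().sum(axis=0),
                 index=vectorizer.get_feature_names_out()).sort_values(ascending=False)
print(freq.head(15))
```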
The analysis conducted via VOSviewer (see Figure 4) produced a keyword map that visualizes relationships between concepts and their corresponding clusters. These clusters represent thematic areas, including machine vision, automation, image analysis, and human–machine collaboration. The connections between keywords reveal the relevance and frequency of terms within the context of the literature review. The process resulted in the identification of five main clusters, organized based on thematic similarity. The analysis of these clusters contributed to aligning the results with the research questions, offering valuable insights into core concepts related to machine vision and its applications in frozen dough manufacturing environments.
Specifically, the green cluster highlights applications of image analysis and feature recognition in defect detection and product quality assessment. Keywords such as inspection, efficiency, and quality underline the role of machine vision in enhancing production and facilitating human–machine collaboration. This aligns with RQ1, as presented in Figure 5, as machines operate in a supportive role to human operators, enhancing efficiency and minimizing errors. The red cluster focuses on automation and robotics, with terms like automation, robotics, and industrial. The strong association of these terms with machine vision demonstrates how these technologies enhance safety and efficiency in production processes. This supports RQ2, suggesting that machine vision-enabled automation can detect potential hazards and ensure safer working conditions in factories.
The blue cluster, centered on computational and vision systems, highlights the ability to monitor and analyze data in real-time. Keywords such as vision, systems, detection, and computer underscore the importance of visual systems in anomaly detection and risk prevention. This supports both RQ1, by facilitating human–machine collaboration through data availability, and RQ2, by laying the groundwork for increased safety and productivity. The brown cluster emphasizes control and production processes, with terms such as production, process, and safety. Machine vision is integrated into quality control and production monitoring processes, ensuring high standards and preventing potential errors. This again supports RQ2, confirming that machine vision is crucial for safety and production optimization. In conclusion, the interconnection of these clusters demonstrates that machine vision plays a supportive role in both human–machine collaboration (RQ1) and the enhancement of safety and productivity in human-centered manufacturing systems (RQ2), leading to safer, more efficient, and flexible production environments.

5.2. Research Question 1: What Is the Role of Machine Vision in Enhancing Human–Machine Collaboration in the Frozen Dough Factory?

The development of advanced machine vision systems has transformed human–machine collaboration, particularly in industrial settings such as frozen dough manufacturing. These systems offer precision and adaptability, facilitating seamless interaction between human workers and machines. Machine vision is increasingly applied in tasks such as object detection and classification, thereby improving overall efficiency and enabling more effective management of production processes [73]. The following subsections examine key aspects of this collaboration, including human–machine interaction, automated task support, the impact on worker skills, the development of collaborative solutions, and the role of real-time feedback, highlighting the contribution of machine vision to an optimized working environment.

5.2.1. Human–Machine Collaboration

Machine vision plays a crucial role in effective human–machine collaboration by assisting operators in tasks associated with frozen dough production. High-resolution cameras can accurately detect the position and orientation of products on the production line, providing visual cues to operators for optimal placement [71]. This significantly reduces errors and enhances final product quality. Additionally, machine vision systems enhance workplace safety by continuously monitoring both worker and machine movements. If a potential hazard is detected, the system can automatically activate safety mechanisms or issue alerts to employees [47]. Machine vision also facilitates rapid onboarding and training of new employees. Augmented reality (AR) systems, powered by machine vision, can project step-by-step instructions directly into the worker’s field of view, accelerating learning and reducing training-related errors [74]. This is particularly valuable in settings with high staff turnover or seasonal fluctuations in production.

5.2.2. Automated Task Support

Automated task support is foundational to achieving human-centered smart manufacturing. Machine vision enables machines to adapt dynamically to workers’ movements, fostering more intuitive and fluid human–machine collaboration [75]. These systems also verify correct component placement in real time, enabling early error detection and correction [57], which leads to improved product quality and reduced waste.
Extended Reality (XR), encompassing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), further supports automated task execution by reducing cognitive load and enhancing the clarity of instructions, particularly in high-mix low-volume (HMLV) production environments [76]. However, the effectiveness of XR depends on task complexity, interface design, and the mode of information delivery.
Worker gesture recognition is also a key feature of task automation. Systems such as Kinect and Leap Motion have demonstrated high accuracy in gesture detection, with success rates of 95–98.9% [76]. Furthermore, the integration of new materials and form factors, such as nanoparticles, nanowires, and graphene, enables more flexible and ergonomic human–machine interaction devices [77], thereby improving user experience and enhancing the effectiveness of automated task support in Industry 5.0 environments.
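As a simplified illustration of landmark-based gesture detection, the sketch below uses MediaPipe Hands (a camera-only alternative to the depth sensors cited above) with a deliberately crude open-palm heuristic; it is not the pipeline evaluated in [76].

```python
# Minimal landmark-based gesture check using MediaPipe Hands; the open-palm
# heuristic (fingertip above its PIP joint) is illustrative only.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)
frame = cv2.imread("operator_hand.jpg")
result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

if result.multi_hand_landmarks:
    lm = result.multi_hand_landmarks[0].landmark
    # Fingertips (8, 12, 16, 20) above their PIP joints (6, 10, 14, 18)
    # in image coordinates indicate extended fingers for an upright hand.
    extended = sum(lm[tip].y < lm[pip].y
                   for tip, pip in [(8, 6), (12, 10), (16, 14), (20, 18)])
    gesture = "open_palm" if extended >= 3 else "fist"
    print("detected gesture:", gesture)
```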

5.2.3. Impact on Worker Skills

The integration of machine vision and AR technologies in industry is reshaping worker roles and the skill sets required. Workers are transitioning from manual tasks to supervising and managing automated systems. Instead of manually conducting repetitive quality inspections or assembly tasks, operators now oversee the functioning of machine vision systems that perform these tasks [78]. This shift necessitates new competencies in system configuration, data interpretation, and troubleshooting. As a result, workers take on more strategic roles, focusing on problem-solving, process optimization, and decision-making [58]. Human–machine collaboration becomes central, with workers developing skills for effective interaction with vision systems and robots, leveraging human flexibility alongside machine precision.
AR technology can support workers by providing real-time visual instructions during tasks involving robots, thereby enhancing both safety and performance [69]. The introduction of Human Digital Twins (HDT) in Human–Robot Collaboration (HRC) is redefining the skill requirements in Industry 5.0. By monitoring workers holistically, HDTs support the acquisition of complex skills such as ergonomic movement, intention detection, and effective interaction with robots. They enable 3D posture analysis, action intent prediction, and ergonomic risk assessment [62]. HDTs also support cross-functional expertise, equipping workers with insights that enhance both safety and productivity. Moreover, HDTs promote adaptability and critical thinking. With the help of machine learning algorithms, workers can identify and correct ergonomic risks, calculated using assessments such as REBA. HDTs can also predict and coordinate robot actions based on human intent, requiring strategic thinking for effective collaboration with intelligent machines [62]. This suite of tools enables the management of complex, dynamic production environments. HDTs reduce the need for physical prototyping and lower ergonomic risks, while workers are expected to understand and interpret HDT-generated data to optimize collaboration and efficiency. Empirical studies show that HDTs can achieve 98.54% accuracy in intention recognition and a mean error of only 0.66 in ergonomic risk assessment, demonstrating their potential to improve safety- and productivity-related competencies [62].
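One low-level ingredient of such ergonomic assessments, computing a joint angle from 3D pose keypoints, can be sketched as follows; mapping angles to a full REBA score involves posture lookup tables not reproduced here, and the keypoint values are hypothetical.

```python
# Sketch: joint angle from 3D pose keypoints, a building block of
# vision-based ergonomic scoring. Keypoints below are made-up examples.
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Hypothetical shoulder-elbow-wrist keypoints from a pose estimator (metres).
shoulder = np.array([0.0, 1.4, 0.2])
elbow = np.array([0.1, 1.1, 0.3])
wrist = np.array([0.4, 1.0, 0.3])
print(f"elbow flexion: {joint_angle(shoulder, elbow, wrist):.1f} deg")
# A monitoring loop could flag sustained angles outside an ergonomic band.
```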

5.2.4. Development of Collaborative Solutions

Machine vision enhances collaborative robot (cobot) systems by improving their perception of human actions and workspace conditions. Cameras mounted on robotic arms continuously monitor the workspace and adjust the cobot’s movements to avoid collisions [11]. Machine vision also enables gesture and motion recognition, supporting intuitive human–robot communication [79]. For example, a cobot can detect when a worker reaches out for a component and automatically deliver it, enabling seamless and efficient cooperation.
Furthermore, vision-equipped cobots can perform high-precision quality inspections, detecting subtle defects that may go unnoticed by human inspectors. This frees human workers from repetitive tasks [72]. Machine vision also allows cobots to adapt to changing production conditions, such as identifying and handling different product types or packaging formats without manual intervention, enhancing production line flexibility [11]. Finally, the motion data captured by machine vision systems can be analyzed to optimize workflows and ergonomics, thereby contributing to improved worker safety and process efficiency [79].

5.2.5. The Role of Real-Time Feedback

Advanced machine vision systems ensure continuous oversight and real-time analysis of production processes. This enables immediate detection and correction of deviations or quality issues, significantly reducing response times and minimizing the output of defective products [68]. Real-time feedback allows for dynamic adjustment of production parameters. For example, vision systems can detect variations in product characteristics and automatically fine-tune machine settings to compensate [13]. Such feedback also facilitates rapid decision-making by operators and supervisors. Clear visual indicators and alerts enable personnel to respond promptly to emerging issues, reducing downtime and streamlining production flow [80]. The data collected in real-time can be leveraged for predictive maintenance, identifying early signs of equipment wear and preventing unplanned outages [70]. Continuous feedback also enables long-term process improvement. By analyzing data trends and patterns, manufacturers can identify opportunities to enhance product quality and production efficiency [68].
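The dynamic parameter adjustment described above can be reduced to a schematic control loop, sketched below with a simple proportional correction; the target dimension, gain, and measurement sequence are illustrative rather than values from a real line.

```python
# Schematic closed-loop adjustment driven by vision measurements: a
# proportional correction of a machine setting when the measured product
# dimension drifts from target. All names and numbers are illustrative.
TARGET_DIAMETER_MM = 90.0
GAIN = 0.5                      # proportional gain, tuned per line

def adjust_setpoint(current_setpoint: float, measured_mm: float) -> float:
    """Nudge the forming-station setpoint toward the target dimension."""
    error = TARGET_DIAMETER_MM - measured_mm
    return current_setpoint + GAIN * error

setpoint = 100.0
for measured in (92.3, 91.1, 90.4):        # successive vision measurements
    setpoint = adjust_setpoint(setpoint, measured)
    print(f"measured={measured} mm -> new setpoint={setpoint:.2f}")
```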

5.3. Research Question 2: What Are the Applications of Machine Vision That Can Enhance Safety and Productivity in Human-Centered Factories?

By analyzing data collected across various stages of the frozen dough production process, as illustrated in Figure 6, we identified the interrelations between different machine vision applications and production stages. The diagram categorizes the system into three main areas: production stages, intermediate and final products, and relevant machine vision applications. The connections between categories highlight how machine vision technologies are mapped across the production workflow. Overall, 18 distinct applications of machine vision were identified, spanning 12 production stages and product checkpoints, as detailed in Table 3.

5.3.1. Enhancing Worker Safety

Machine vision is a critical enabler of safety and productivity in human-centered smart factories. One of its key applications is the real-time detection of obstacles and hazards in the workspace, using cameras and sensors to alert workers to potential risks [45]. The concept of Operator 4.0 emerges as a cornerstone of Industry 4.0 and the transition to Industry 5.0. It describes a skilled, intelligent worker who collaborates with human–machine interaction technologies to achieve synergy between humans and automation [81].
In this context, mobile industrial robots are increasingly integrated into smart manufacturing, working alongside operators to perform complex tasks. However, these robots often have limited visual fields and may lose track of dynamic human targets. To address this, a proactive human-tracking approach has been proposed using Interacting Multiple Models (IMM) to predict human motion paths based on limited sensor input [82].
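A full IMM tracker is beyond the scope of this review, but its core building block, a single constant-velocity Kalman filter predicting a worker's planar position one step ahead, can be sketched as follows; an IMM would run several such motion models in parallel and mix their estimates, and all noise settings here are illustrative.

```python
# Constant-velocity Kalman filter as a simplified stand-in for one IMM
# motion model; sensor period, noise matrices, and fixes are illustrative.
import numpy as np

dt = 0.1                                  # sensor period (s)
F = np.array([[1, 0, dt, 0],              # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]])              # only position is observed
Q = np.eye(4) * 0.01                      # process noise
R = np.eye(2) * 0.05                      # measurement noise

x, P = np.zeros(4), np.eye(4)

for z in ([0.0, 0.0], [0.1, 0.02], [0.21, 0.05]):   # position fixes (m)
    # Predict
    x, P = F @ x, F @ P @ F.T + Q
    # Update with the new measurement
    y = np.asarray(z) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P

print("predicted next position:", (F @ x)[:2])
```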
Machine vision also supports ergonomic monitoring via AR, enabling the creation of digital twins of human skeletal structures that analyze posture and movements. Augmented reality visualizes ergonomic instructions and real-time warnings to mitigate risk-prone behavior [45]. Additional applications include ensuring the use of proper personal protective equipment, monitoring vital signs to prevent fatigue, and detecting safety hazards such as smoke or leaks. AR enhances the integration of humans into Human Cyber-Physical Systems (HCPS), with digital twins playing a central role. AR offers a natural and flexible interface, enabling visualization of geometry, animations, and enriched data through AR headsets to improve workflows, safety, and skill training [83]. When combined with deep learning and computer vision, AR can support decision-making and empower operators as intelligent collaborators. The synergy between AR and digital twins has sparked growing research and industrial interest. In line with the human-centric vision of Industry 5.0, AR-assisted digital twins are now used across the product lifecycle, from design and production to maintenance and recycling. For instance, in the design phase, AR enables real-time prototyping through interactive visualizations. In production, AR-twin integration supports task planning, monitoring, and adaptive assembly instructions [83].
Collaborative robots (cobots), envisioned as core agents of Industry 5.0, aim to complement rather than replace human workers. Technologies such as functional near-infrared spectroscopy (fNIRS) can be used for real-time recognition of human intent. At the same time, roles like Chief Robotics Officer (CRO) may emerge to manage robot–human collaboration. A novel HMI application utilizing XR technologies is demonstrated through the control of an overhead crane, where HoloLens 2 is used to interact with a digital twin of the crane, visualizing design specifications, status data, and issuing commands [77].

5.3.2. Improving Productivity

Dough quality control has traditionally relied on subjective, labor-intensive manual methods. Machine vision offers a non-invasive, automated alternative. Murengami et al. [38] proposed a system using RGB imaging and YOLOv8s deep learning to classify dough into under-fermented, properly fermented, or over-fermented categories. Features such as surface area, contrast, and homogeneity were fed into a stacked ensemble model (SEM) that combined SVM, AdaBoost, KNN, and RF classifiers, achieving up to 83% accuracy. In another experiment, 93.4% accuracy was achieved in contamination classification using Gaussian SVM. Another promising approach involves hyperspectral imaging, which captures spatial and spectral data simultaneously to assess dough moisture and chemical composition with high precision [35]. These methods facilitate continuous monitoring, reducing reliance on manual checks and improving product consistency. Dimensional and uniformity monitoring is another productivity booster. Ondras et al. [42] demonstrated how RGB-D cameras and shape comparison algorithms can enable robotic systems to adaptively shape dough to match a target form with an IoU > 0.90.
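In the spirit of the stacked ensemble described by Murengami et al. [38], the following scikit-learn sketch combines the same four base classifiers; the feature vectors and labels are synthetic placeholders, not the published dataset.

```python
# Sketch of an SEM-style stacked ensemble over dough texture features;
# the synthetic data stand in for [surface_area, contrast, homogeneity].
import numpy as np
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))                 # placeholder feature vectors
y = rng.integers(0, 3, size=300)              # under / properly / over-fermented

sem = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("ada", AdaBoostClassifier()),
                ("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),
)
sem.fit(X[:240], y[:240])
print("held-out accuracy:", sem.score(X[240:], y[240:]))
```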
Packaging process optimization benefits from machine vision through high-accuracy classification of packaging types, utilizing systems that combine structured light and deep learning (e.g., Mask R-CNN for 2D recognition and 3D reconstruction). These systems reduce sorting errors, enhance speed, and enable OCR/OCV for label verification and traceability [22]. Combined with robotic systems, they allow for fully automated packaging lines, guided by vision-based gripper evaluation systems for food handling [56]. Predictive maintenance is another domain where vision systems provide value. High-resolution and thermal cameras collect data on equipment performance (e.g., temperature, vibration), which is then fed to machine learning models to anticipate faults [69]. CNNs can identify tool wear, and multimodal data from acoustic and visual sources improve maintenance planning. Lastly, spoilage detection and freezing control are critical in food safety. Vision systems outperform manual inspections in identifying mold or discoloration, while thermal cameras ensure proper cold-chain compliance [20]. CNNs, such as DenseNet121 or ResNet-50, have proven effective in defect recognition, with applications extending to smart storage and the automated rejection of compromised goods [52].
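As an example of the transfer-learning approach mentioned above, the sketch below adapts a pre-trained ResNet-50 to a binary spoilage-detection task by freezing the backbone and training a new classification head; the dummy batch, labels, and hyperparameters are placeholders.

```python
# Minimal transfer-learning setup for spoilage/defect recognition;
# a real pipeline would train on labeled product crops, not random tensors.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights="DEFAULT")
for p in model.parameters():
    p.requires_grad = False                    # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # new head: ok / defective

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB crops.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("step loss:", loss.item())
```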

5.4. Business Intelligence and AI for Vision-Driven Decision Support

The convergence of Business Intelligence (BI) and Artificial Intelligence (AI) within machine vision systems presents significant advantages for decision-making processes in human-centric manufacturing environments. Vision systems produce high-resolution, structured data streams that extend beyond traditional quality control applications, enabling their integration into BI dashboards to monitor critical performance indicators, such as defect rates, process variability, and operator-system interactions. Recent studies have demonstrated that advanced AI models enhance weakly supervised defect detection by employing anomaly-informed training methodologies [84], thereby yielding essential insights that facilitate predictive maintenance and real-time alerting. These functionalities can be seamlessly integrated into BI platforms to enhance responsiveness and operational efficiency in dynamic production environments. Furthermore, BI frameworks can incorporate external data sources, including consumer sentiment, to refine production strategies. For instance, Symeonidis et al. [85] demonstrated the application of unsupervised sentiment analysis to social media data as an effective mechanism for aligning manufacturing outputs with evolving market demands. This approach holds the potential for extension into closed-loop decision-making systems within vision-enabled food processing operations. Collectively, the integration of BI and AI bridges the gap between granular perceptual data and strategic-level decisions, thereby reinforcing the Industry 5.0 paradigm characterized by intelligent, explainable, and human-centered automation.
While both research questions address the role of machine vision in human-centric manufacturing, they target distinct dimensions of its implementation. RQ1 focuses on the collaborative dynamic between human workers and vision-enabled systems—emphasizing usability, operator support, and human–machine interaction. In contrast, RQ2 investigates the technical and operational impact of machine vision, particularly in enhancing process safety and productivity across different production stages. By separating these questions, the study distinguishes between the human-facing aspects of machine vision (RQ1) and its system-level performance contributions (RQ2), providing a comprehensive view of its relevance in modern industrial contexts.

Case Study Limitation Note

While this paper draws on the practical experience of deploying machine vision systems in a frozen dough context, specific quantitative results from the pilot system are part of an ongoing evaluation and will be reported separately. At this stage, the case study is best described as a proof-of-concept deployment, supporting conceptual validation rather than fully benchmarked implementation. As such, this manuscript focuses primarily on a structured review, conceptual analysis, and qualitative assessment framework, rather than empirical system benchmarking.

6. Assessment Framework

In order to evaluate the overall impact of machine vision systems within the frozen dough manufacturing process, this study adopts a structured assessment framework grounded in the principles of Industry 5.0. The goal is to assess, in a structured qualitative manner, how each application of machine vision contributes to key performance dimensions, namely human–machine collaboration, occupational safety, and productivity outcomes.
This framework builds upon five thematic clusters identified through bibliometric mapping (see Section 3), namely: safety and occupational health, real-time monitoring, food inspection and quality control, collaborative systems and AI, and industrial automation. These clusters inform the evaluation dimensions used in this section.

6.1. Assessment Criteria and Methodology

The assessment is based on three main criteria:
  • Human–Machine Collaboration: the extent to which machine vision technologies enhance the interaction between human operators and automated systems. This includes aspects such as task guidance, ergonomic support, and intuitive feedback mechanisms.
  • Safety Impact: the effectiveness of machine vision in reducing operational risks, detecting unsafe behaviors or anomalies, and enhancing real-time safety monitoring throughout the production process.
  • Productivity Outcomes: the contribution of vision systems to process efficiency, error reduction, waste minimization, and overall throughput improvement.
Each application area, mapped to specific stages of the frozen dough production process, is evaluated against these criteria. Assessment levels are assigned on a three-point scale: 1 = Low, 2 = Medium, 3 = High.
The scoring is based on findings derived from the literature review, case study analysis, and practical use cases as detailed in Section 5 and Section 6.

6.2. Evaluation Results

Table 4 summarizes the assessment scores across eight major production stages, while Figure 7 visualizes the same data using a color-coded heatmap for quick reference and comparison.

6.3. Analysis and Interpretation

The results reveal several key trends:
  • Safety is the strongest impact area for machine vision, with consistently high scores across stages such as freezing, packaging, and final quality control. These systems are particularly effective in hazard detection, posture monitoring, and foreign object identification.
  • Productivity is notably enhanced in stages like mixing & fermentation, shaping, and freezing, where visual feedback and real-time analysis improve consistency and reduce processing errors.
  • Human–Machine Collaboration is strongest in tasks that involve dynamic interaction, such as filling & assembly, where adaptive guidance and co-robotic systems are implemented. However, early stages such as raw material inspection or fermentation still exhibit relatively low collaborative integration, indicating opportunities for improvement.

6.4. Summary of Insights

This assessment highlights the multi-dimensional value of machine vision systems in smart food manufacturing. While the technology is mature in areas related to quality control and safety, human-centric features, especially in early production stages, require further development. These insights underline the importance of designing explainable, adaptable, and user-oriented vision systems in alignment with the human-centered goals of Industry 5.0.

7. Technological Readiness Evaluation Framework

While the previous section evaluated the impact of machine vision systems against functional criteria such as safety, productivity, and human–machine collaboration, this section focuses on their technological maturity. Specifically, we assess the readiness level of each vision application in terms of its development stage, deployment feasibility, and integration potential, drawing on the Technology Readiness Level (TRL) framework. Assessing readiness across multiple dimensions is essential for the practical implementation of machine vision systems in human-centric manufacturing; the evaluation framework introduced here enables both researchers and industry stakeholders to determine the maturity and applicability of vision-enabled technologies in real-world industrial settings. The framework builds upon TRL principles while extending them to accommodate the socio-technical characteristics of Industry 5.0 environments. It evaluates readiness along four key axes:
(1) Algorithmic Robustness: This dimension assesses the reliability, adaptability, and generalizability of the vision algorithm under real-world conditions, including variable lighting, product heterogeneity, and operational disturbances. For instance, deep learning models like YOLOv8 must demonstrate high accuracy not only in laboratory settings but also in high-speed production lines with fluctuating visual inputs.
(2) Human–Machine Integration: A crucial factor in human-centric manufacturing is the seamless interaction between operators and intelligent systems. This axis evaluates how effectively the vision system communicates outputs, enables intervention, and supports explainability. Systems that provide interpretable visual feedback and ergonomic interfaces are rated higher in readiness for Industry 5.0 use cases.
(3) Infrastructure Compatibility: This refers to the degree of interoperability between the vision system and existing production infrastructure, including programmable logic controllers (PLCs), digital twins, cyber-physical systems (CPS), and industrial IoT platforms. Readiness increases with the ease of integration, modularity, and standard protocol support (e.g., OPC UA, MQTT).
(4) Ethical and Social Compliance: Technologies in Industry 5.0 must also align with ethical principles, including worker privacy, data transparency, and fairness. This dimension evaluates whether the deployment of machine vision respects workplace norms and safeguards against surveillance misuse. A system’s readiness is contingent on its compliance with legal and ethical frameworks.
Each axis is rated on a scale from 1 (early-stage research) to 5 (fully deployed and integrated in industrial practice), allowing for a nuanced evaluation of technological maturity. The framework was applied in the case study of the frozen dough industry to assess the real-time inspection system developed, revealing high readiness levels in algorithmic robustness and infrastructure compatibility, with ongoing work in improving explainability and ethical assurance.
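A minimal way to operationalize this four-axis rating is sketched below; the aggregation (an unweighted mean) and the example scores are hypothetical, and a real assessment would weight the axes per deployment context.

```python
# Illustrative aggregation of the four readiness axes into a single profile;
# all scores and the unweighted mean are hypothetical design choices.
from dataclasses import dataclass

@dataclass
class ReadinessProfile:
    algorithmic_robustness: int       # each axis rated 1 (research) .. 5 (deployed)
    human_machine_integration: int
    infrastructure_compatibility: int
    ethical_social_compliance: int

    def overall(self) -> float:
        scores = (self.algorithmic_robustness, self.human_machine_integration,
                  self.infrastructure_compatibility, self.ethical_social_compliance)
        return sum(scores) / len(scores)

    def bottleneck(self) -> str:
        axes = vars(self)
        return min(axes, key=axes.get)   # lowest-rated axis limits deployment

# Example scoring of a frozen-dough inspection system (values illustrative).
profile = ReadinessProfile(4, 3, 4, 2)
print(f"overall readiness: {profile.overall():.2f}, bottleneck: {profile.bottleneck()}")
```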
This readiness framework supports decision-making for scaling vision technologies in other sectors and provides a foundation for further standardization of evaluation practices in human-centric smart manufacturing.

7.1. Conceptual Framework for Vision-Driven Human-Centric Manufacturing

To synthesize the findings of this review, we propose a conceptual framework that organizes machine vision applications in human-centric manufacturing environments across three functional dimensions: process visibility, human-centric collaboration, and adaptive intelligence. As shown in Figure 8, the intersections of these dimensions highlight areas where machine vision technologies provide integrated value.
Applications such as final quality control or packaging are positioned at the intersection of high visibility and safety, while more advanced AI-based interventions (e.g., adaptive shaping or anomaly detection) lie closer to the intelligence axis. This framework can serve as a reference for evaluating and designing vision-based systems that align with Industry 5.0 principles.

7.2. Limitations and Open Challenges

Despite rapid advances, several limitations constrain the broader adoption of machine vision in human-centric manufacturing. Challenges include limited generalizability across different production contexts, high implementation costs, lack of standardization in vision interfaces, and difficulties in integrating vision with human behavioral variability. Moreover, real-time adaptability and explainability remain underdeveloped, particularly in collaborative tasks where human intent must be inferred dynamically.

8. Conclusions and Future Work

This study investigated the role of machine vision in human-centric manufacturing environments, with a particular focus on the frozen dough industry. The integration of machine vision systems into such contexts contributes significantly to both safety and productivity improvements. Based on a systematic literature review and a real-world case study, two primary research questions were addressed, leading to the key findings outlined below.
First, regarding human–machine collaboration, machine vision enhances operator efficiency by offering real-time visual guidance, enabling safer and more accurate task execution. Technologies such as augmented reality (AR), gesture recognition, and Human Digital Twins (HDTs) facilitate intuitive interactions between workers and machines. These advancements shift the role of workers from manual labor to strategic oversight, requiring new skills such as systems supervision, problem-solving, and interpretation of visual data.
Second, in terms of productivity and safety enhancement, machine vision technologies support quality control at various stages of frozen dough production, from raw material inspection to packaging. Automated visual inspection enables real-time defect detection, early identification of fermentation issues, and monitoring of packaging integrity. Predictive maintenance applications and AI-driven systems ensure minimal downtime and consistent product quality.
The proposed technological maturity framework provides a roadmap for integrating machine vision into food production environments. It enables organizations to evaluate current capabilities and identify opportunities for innovation, automation, and human-centered design. The maturity model spans from basic image capture functions to full cyber-physical integration in line with Industry 5.0 principles.
Additionally, the bibliometric analysis revealed increasing research interest in machine vision applications over the last decade, especially in areas such as AI-based quality control and human–robot collaboration. A clustering analysis of 85 research articles confirmed thematic convergence around automation, defect detection, and scene understanding, validating the relevance of the study’s focus. Ultimately, the findings demonstrate that machine vision plays a crucial enabling role in the transition toward resilient, safe, and adaptive production systems. It contributes to the creation of factories where human labor is augmented—not replaced—by intelligent technologies. The successful integration of machine vision depends on robust system design, operator training, and ethical considerations regarding transparency and data use.
Future research should focus on:
  • Developing lightweight, ergonomic interfaces for long-term human use,
  • Extending the role of digital twins in real-time decision support,
  • Exploring the interoperability of vision systems with IoT infrastructures,
  • Ensuring cybersecurity in increasingly connected environments.
These directions are essential for realizing the full potential of machine vision in supporting safe, productive, and sustainable human-centric manufacturing.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/electronics14173361/s1.

Author Contributions

Conceptualization, V.B. and A.T.; methodology, V.B.; validation, F.K., V.S., S.S. and T.K.; formal analysis, V.B., A.T., F.K., V.S., S.S., T.K. and A.G.; investigation, V.B., A.T., F.K., V.S., S.S., T.K. and A.G.; resources, V.B., A.T. and F.K.; data curation, V.B., A.T. and F.K.; writing—original draft preparation, V.B.; writing—review and editing, V.B., A.T., F.K., V.S., S.S., T.K. and A.G.; visualization, V.B. and S.S.; supervision, A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Catalog of Scientific Studies and Application Overview

Table A1. Catalog of scientific studies and application overview—categories, methods, and system integration in industrial contexts.
Reference | Category | Focus Area | Methodology | Key Contribution | Application Context
Zhao et al. (2024) [20] | Smart Production | Vision-Guided Robots | Semi-Supervised Learning | Improved accuracy and efficiency in robot control, reducing response time. | Textile industry for quality inspection and automation.
Mandapaka et al. (2023) [67] | Automated Quality Inspection | Product Dimension Analysis | Computer Vision | Presents an automated inspection system for analyzing product dimensions. | Industrial production and quality control.
Nadon et al. (2018) [48] | Robotics | Manipulation of Non-Rigid Objects | Systematic Review | Robotics offers solutions for automation, precision, and safety in the food sector. | Automated food processing and packaging.
Zafar et al. (2024) [58] | Collaborative Robotics | Digital Twins and Human Augmentation | Systematic Literature Review | Synergies between robots, AI, and Industry 5.0 enhance human–robot collaboration. | Human–robot collaborative manufacturing in Industry 5.0.
Yousif et al. (2024) [78] | Safety | Safety and Vision Systems | Framework Development and Application | The Safety 4.0 framework uses machine vision for proactive hazard prevention. | Industrial worker safety and protection systems.
Yang et al. (2024) [76] | Computer Vision | Mass Estimation for Kimchi Cabbage | Experimental Study | A novel hybrid vision technique significantly improves mass estimation accuracy. | Agricultural processing and food quality control.
Wu et al. (2020) [82] | Robotics | Human Guidance by Mobile Robots | Adaptive Trajectory Prediction | Improved prediction of human movement to support mobile robots in industrial tasks. | Industrial robots for human–robot collaborative operations.
Akundi et al. (2021) [43] | Automated Quality Inspection | Product Dimension Analysis | Computer Vision | Presents an automated system for dimension-based quality inspection. | Industrial production and quality assurance.
Wang et al. (2024) [53] | Computer Vision | Inspection Systems in the Tobacco Industry | Case Study | Machine vision enhances inspection accuracy and reduces human error. | Automation for quality control in tobacco manufacturing.
Wang et al. (2024) [53] | Medical Manufacturing | Smart Production of Medical Devices | Literature Review | Explores challenges and future trends in smart manufacturing for medical applications. | Advanced systems for customized and precise medical device production.
Wang et al. (2023) [15] | Smart Manufacturing | Comparative Review of SM and IM | Bibliometric Review | Identifies key traits and evolution of Smart and Intelligent Manufacturing models. | Industry 4.0 and cyber-physical production systems.
Wang et al. (2019) [47] | Quality Assurance | Assembly Defect Detection | Image Processing & Deep Learning | Combines traditional and AI-based methods to detect defects on assembly lines. | Automated defect detection in industrial production.
Velesaca et al. (2021) [86] | Food Processing | Grain Classification | Comprehensive Review | Reviews vision-based techniques for grain quality assessment and classification. | Grain sorting and quality control systems.
Pereira et al. (2022) [54] | Service Robotics | Action Taxonomy in Food Services | Critical Review | Proposes a taxonomy of human actions for robotic automation in food services. | Optimization of food service operations through robotics.
Steger et al. (2018) [87] | Computer Vision Algorithms | Integrated Vision Techniques | Framework Development | Provides foundational techniques for applying computer vision in industrial contexts. | Quality inspection, defect detection, and robotic automation.
Sontakke et al. (2024) [74] | Smart Manufacturing Education | Teaching Smart Manufacturing Techniques | Case Study | Applies real-world defect detection data to enhance engineering education. | Undergraduate education in chemical engineering.
Siripatrawan et al. (2024) [32] | Food Safety Inspection | Fungal Contamination Classification in Rice | Hyperspectral Imaging & SVM | Achieved 93.4% accuracy in classifying fungal contamination using Gaussian SVM. | Rapid, non-destructive food safety inspection.
Sharma et al. (2023) [63] | Vision for Smart Manufacturing | Vision Systems in Industry 4.0 | Review | Combines sensors and neural networks to reduce errors and enhance productivity. | Cable manufacturing and defect detection systems.
Aviara et al. (2022) [35] | Food Quality Control | Hyperspectral Imaging for Grain Inspection | Review | Enables accurate quality analysis and defect detection in grains. | Agricultural grain quality assurance and classification.
Tzampazaki et al. (2024) [11] | Machine Vision | Transition from Industry 4.0 to Industry 5.0 | Systematic Review | Analyzes how machine vision contributes to productivity and product quality. | Industrial production using vision for automation and control.
Fan et al. (2022) [62] | Digital Twins | Human-Centric Digital Twin | Case Study | Integrates human data to adapt robotic behavior for effective collaboration. | Smart manufacturing and ergonomic optimization.
Modoni et al. (2023) [12] | Digital Twins | Human-Centered Industry | Implementation Framework | Proposes a framework integrating digital systems to enhance human–machine collaboration. | Furniture manufacturing with optimized interactions.
Sahoo et al. (2022) [66] | Smart Manufacturing | Technological Advancements in Manufacturing | Systematic Review | Reviews how IoT, AI, and automation enhance efficiency and reduce human error. | Industry 4.0 technologies in smart factories.
Nivelle et al. (2017) [51] | Food Science | Par-Baking Effects on Bread | Experimental Study | Demonstrates that par-baking reduces crumb hardening, improving bread quality and texture. | Bread production and quality stability.
Rokhva et al. (2024) [31] | Food Industry | Food Recognition with AI | Experimental Study | MobileNetV2 enables accurate, efficient food recognition, reducing waste. | Food production and monitoring.
Ren et al. (2024) [72] | Smart Manufacturing | Embedded Intelligence | Framework Development | Introduces AI-driven intelligence for flexible production systems. | Human-centered production and robotics.
Ondras et al. (2022) [42] | Robotics & Vision | Robotic Dough Shaping | Experimental Control Policies | Touch and vision-based control improves accuracy in dough shaping. | Digital services and product-service lifecycle management.
Lullien-Pellerin (2024) [88] | Grain Characterization | Wheat Quality Assessment | Omics Systems & Microscopy | Links wheat quality to microstructure and composition using multivariate analysis. | High-quality product development and cereal assurance.
Prasad et al. (2024) [68] | Smart Monitoring | Real-Time Material Tracking & Cloud Integration | Case Study | Cobots and vision systems improve process control and decision-making. | Real-time tracking in machining operations.
Agote-Garrido et al. (2023) [16] | Human-Centered Production | Sociotechnical Systems | Theoretical Model | Proposes a model combining sustainable technology with a human-centered approach. | Sustainable and resilient industrial systems.
Zhong et al. (2017) [89] | Smart Manufacturing | CPS and IoT Integration | Systematic Review | Analyzes IoT and CPS integration in smart factories with automated real-time interactions. | Digital twins and predictive analytics in factories.
Medina-García et al. (2024) [37] | Food Quality & Authenticity | Whole Grain Bread Authenticity | Hyperspectral Imaging & Chemometrics | Accurately verifies whole grain content for quality and fraud prevention. | Quality control in bakery production.
Nilsson et al. (2020) [30] | Smart Manufacturing | Categorization of Indicator Lights | Experimental Study | YOLOv2 and AlexNet achieved over 99% accuracy in detecting indicator lights in legacy systems. | Machine monitoring in old factories without IT integration.
Liu et al. (2023) [55] | Robotics & Vision | Food Package Recognition & Sorting | Machine Vision & Structured Light | Structured light improved packaging classification and reduced errors. | Logistics and food product management.
Mark et al. (2021) [61] | Worker Assistance Systems | Operator 4.0 & Cognitive Support | Systematic Review | Reviews systems enhancing worker effectiveness without replacing human roles. | Worker support in complex production tasks.
Nahavandi (2019) [90] | Industry 5.0 | Human–Machine Collaboration | Systematic Review | Introduced the concept of Industry 5.0, focusing on human–robot collaboration to enhance productivity. | Robotic collaboration and industrial process optimization.
Leiva-Valenzuela et al. (2018) [41] | Pattern Recognition | Detection of Undesirable Food Compounds | Computer Vision-Based Recognition | Accurate classification of undesirable food compounds using statistical methods. | Food quality control and defect prevention.
Lin et al. (2023) [80] | Agricultural Automation | Vision in Modern Agriculture | Experimental Study | Achieved 11.8% error in automated sorting and evaluation of apples. | Automated classification and quality monitoring of fruit.
Li et al. (2023) [91] | Human–Robot Collaboration | Proactive Human–Robot Interaction with AI | Framework Proposal | Robots with cognitive and predictive capabilities for proactive cooperation. | Collaborative robotics in flexible production lines.
Li et al. (2023) [71] | Deep Learning in Production | Reinforcement Learning Applications | Systematic Review | Explores adaptive decision-making in smart production using reinforcement learning. | Intelligent automation and decision-making in design and production.
Li et al. (2024) [92] | Smart Warehousing | Vision in Logistics | Experimental System Design | Enhances logistics efficiency and reduces human error using machine vision. | Automated sorting and storage in warehouses.
Leng et al. (2024) [64] | Industry 5.0 | Collaborative AI in Industry | Foresight Review | Examines integration of collaborative, self-learning AI in Industry 5.0 systems. | Product design, production, and maintenance.
Kang et al. (2024) [36] | Artificial Intelligence | Predicting Flour Properties during Milling | Machine Learning | Predicts flour properties using image-based analysis of grain during peeling. | Flour quality control and product customization.
Konstantinidis et al. (2023) [1] | Dairy Automation | Yogurt Cup Recognition | Experimental Analysis | YOLO and Mask R-CNN models achieved nearly perfect classification. | Dairy product packaging automation.
Konstantinidis et al. (2023) [13] | Zero-Defect Manufacturing | Digital Twin for Defect Prevention | Framework Development | Combines vision systems and digital twins to eliminate defects in dairy. | Quality assurance in dairy production.
Konstantinidis et al. (2021) [25] | Automotive Industry | Vision Systems in Industry 4.0 | Review | Highlights AI-powered vision systems for defect detection and optimization. | Process automation and quality control in automotive.
Kanth et al. (2023) [65] | Robotics | AI and Vision in Collaborative Robots | Case Studies | Demonstrates object detection in collaborative environments. | Smart factories for assembly and product handling.
Jia et al. (2023) [14] | Food Safety | Colorimetric Sensors for Food Quality | Comprehensive Review | Vision systems enable real-time classification of food safety parameters. | Real-time food quality and safety monitoring.
Page et al. (2021) [93] | Industrial Policy | Resilience and Sustainability | Strategic Policy | Proposes a human-centric, green and digital industry transformation in Europe. | European policy for sustainable, worker-centered industry.
Ji et al. (2021) [49] | Smart Manufacturing | Robotic Assembly Automation | Experimental Framework | Uses learning-by-observation for assembly with minimal human intervention. | Flexible assembly lines for small-batch manufacturing.
Qiu et al. (2023) [56] | Robotics & Manipulation | End-Effector Evaluation for Food Handling | Metric Evaluation | Proposes metrics to assess the efficiency and flexibility of robotic systems in food handling. | Food processing and packaging.
Javaid et al. (2021) [75] | Robotics in Industry 4.0 | Enhancing Industry 4.0 Applications | Review | Examines 18 robotics applications, focusing on automation, safety, and data collection. | Smart production systems and risk mitigation.
Vasudevan et al. (2024) [22] | Robotics & Vision | Primary Food Handling & Packaging | Systematic Review | Highlights computer vision and robotics in food material handling and automated packaging. | Industrial automation and error reduction.
Maddikunta et al. (2022) [79] | Industry 5.0 | Technologies and Applications | Systematic Review | Analyzes 6G, edge computing, and digital twins as key Industry 5.0 enablers. | Human-centric smart manufacturing and healthcare.
Jimoh et al. (2024) [59] | Non-Invasive Techniques | Food Quality Assessment | Review of Advanced Techniques | Non-invasive technologies improve accuracy and speed in evaluating food quality. | Food fraud detection and chemical safety.
Wakchaure et al. (2023) [94] | Robotics | Robotics in the Food Industry | Systematic Review | Robotics offers precise, safe, and automated solutions for food processing. | Automated food handling and packaging.
Ylikoski (2022) [50] | Production Optimization | Bakery Production Lines | Process Analysis & Experimental Study | Proposes workflow and machinery improvements in cinnamon roll lines. | Small-scale bakery production.
Yang et al. (2024) [76] | Human–Machine Interaction | Industry 5.0 and Smart Production | Systematic Review | Emphasizes HMI for human-centered, resilient, and sustainable production. | Human–machine collaboration in smart factories.
Murengami et al. (2025) [38] | Artificial Intelligence | Dough Monitoring via Color Imaging | Deep Learning (YOLOv8s & SEM) | Achieved 83% accuracy in dough monitoring using color features. | Bread production and quality assurance.
Ghobakhloo (2020) [57] | Digitization & Sustainability | Industry 4.0 Operations | Interpretive Structural Modeling | Shows the sustainability benefits (economic, social, environmental) of Industry 4.0. | Clean energy, emissions reduction, and wellbeing.
Fan et al. (2022) [62] | Vision-Based Collaboration | Holistic Scene Understanding | Systematic Review | Reviews vision-based approaches to proactive HRC focusing on people, objects, and environments. | Personalized manufacturing and HRC.
Fattahi et al. (2024) [33] | Food Quality Control | Wheat Flour Variety Classification | FT-MIR Spectroscopy & Chemometrics | Achieved 100% classification accuracy of Iranian wheat varieties. | Quality control and fraud prevention in flour industry.
Deng et al. (2024) [46] | Computer Vision | Aerospace Quality Inspection | Review | Vision systems improve detection of drilling and assembly defects. | Aerospace component inspection.
da Silva Ferreira et al. (2024) [2] | Agricultural Quality Control | Dragonfruit Maturity Classification | Comparative Study | Vision transformers outperformed ResNet in maturity classification. | Fruit grading in agriculture.
Derossi et al. (2023) [44] | Robotics | Unconventional Robotics in Food Industry | Analysis & Framework Proposal | Presents novel directions to improve flexibility and processes using robotics. | Food production and packaging.
Cuellar et al. (2023) [21] | Industry 4.0 in Construction | Construction and Infrastructure Technologies | Mixed Methods Analysis | Highlights gaps in AI and robotics adoption in construction. | Building information modeling and infrastructure planning.
Ciccarelli et al. (2023) [81] | Human-Centered Industry | Operator 4.0 | Systematic Literature Review | Explores how AR/VR technologies support human roles in Industry 5.0. | Worker empowerment and safety in smart factories.
Chakravartula et al. (2023) [28] | Food Processing | Smart Monitoring in Food Drying | Experimental Prototype | Vision systems reduce over- and under-drying using preventive PAT tools. | Agricultural food drying and quality monitoring.
Castillo-Ortiz et al. (2024) [95] | Gastronomy Education | Standardization in Culinary Training | Case Study | Computer vision ensures hygiene and consistency in culinary education. | Culinary training and hygiene standard compliance.
Barthwal et al. (2024) [39] | Artificial Intelligence | AI in the Food Industry | Comprehensive Review | Analyzes current AI trends for automation in food processing. | Automated production and process optimization.
Bhana et al. (2023) [45] | Industrial Safety | PPE Compliance with Vision | Experimental Study | YOLOv8 achieved 86% accuracy in detecting correct PPE usage. | Workplace safety monitoring and PPE detection.
Liu et al. (2023) [55] | Deep Learning & Vision | AI and Vision in Food Processing | Review | Highlights deep learning trends and challenges in food automation. | Food processing and classification in industry.
Azadnia et al. (2023) [73] | Agricultural Waste Reduction | Hawthorn Ripeness Detection | Experimental Study | Inception-V3 achieved 100% accuracy in ripeness classification, reducing waste. | Agricultural product grading and waste reduction.
Hassoun et al. (2024) [4] | Smart Manufacturing | Vision Sensors for In-Process Control | Prototype Development | Vision sensors improve autonomy in quality control with real-time adjustments. | Precision manufacturing and industrial quality assurance.
Zhao et al. (2024) [20] | Food Industry Applications | Vision-Based Food Quantification | Literature Review | Addresses safety, quality, and nutrition challenges using machine vision. | Automated food processing and quality assurance.
Xiao et al. (2022) [52]Computer VisionFood Detection via VisionSystematic ReviewReviews vision capabilities for food quality and safety control.Quality control and fraud detection in food.
Alimam et al. (2023) [29]Digital TwinsDigital Triplets in Industry 5.0Systematic ReviewIntroduces digital triplet framework for advanced human-machine integration.Cognitive collaboration in Industry 5.0.
Mehdizadeh (2022) [96]Food Quality ControlHyperspectral Imaging for Grain InspectionReviewHyperspectral imaging enables precise grain quality and defect detection.Grain classification and quality assurance in agriculture.
Ahmad et al. (2022) [40]Vision for Smart ManufacturingDeep Learning for Object DetectionReviewDeep learning enhances defect detection accuracy in production lines.Automated quality inspection with DL.
Hamza et al. (2024) [70]Cyber-Physical SystemsReal-Time Vision IntegrationPrototype ApplicationReal-time vision improves feature detection and operation accuracy.High-speed industrial quality inspection.
Adjogble et al. (2023) [69]Intelligent ManufacturingAI in Production ProcessesFramework ProposalSmart AI systems enhance sustainability, efficiency, and decision-making.Sustainable and intelligent production systems.
Villani et al. (2024) [26]Digital TransformationIndustry 4.0 & Food QualitySystematic ReviewExplores the impact of digital transformation on quality and waste reduction.Digitalization in quality control and food waste minimization.
Hashmi et al. (2022) [60]Surface Quality ControlVision for Surface RoughnessComprehensive ReviewVision systems offer high-speed, automated surface quality measurements.Surface inspection in manufacturing processes.
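Several of the detection-oriented studies summarized above (e.g., [1,45]) employ single-stage detectors from the YOLO family for tasks such as yogurt cup recognition and PPE compliance monitoring. The following minimal sketch illustrates how such a detector can be queried on a production-line frame; the weights file, image path, and confidence threshold are illustrative assumptions, not the setup of any cited study.

```python
# Illustrative sketch only: runs a YOLO detector on a single production-line
# frame, in the spirit of the detection pipelines surveyed above (e.g., [1,45]).
# "ppe_yolov8.pt" is a hypothetical fine-tuned weights file; a model trained on
# domain images (PPE items, yogurt cups, dough trays) would be required in practice.
from ultralytics import YOLO

model = YOLO("ppe_yolov8.pt")                       # hypothetical custom weights
results = model.predict(source="line_frame.jpg", conf=0.5)

# Report each detection: class label, confidence, and bounding box corners.
for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]
    print(f"{cls_name}: conf {float(box.conf):.2f}, xyxy {box.xyxy.tolist()}")
```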

References

1. Konstantinidis, F.K.; Balaska, V.; Symeonidis, S.; Psarommatis, F.; Psomoulis, A.; Giakos, G.; Mouroutsos, S.G.; Gasteratos, A. Achieving zero defected products in dairy 4.0 using digital twin and machine vision. In Proceedings of the 16th International Conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 5–7 July 2023; pp. 528–534. [Google Scholar]
  2. da Silva Ferreira, M.V.; Junior, S.B.; da Costa, V.G.T.; Barbin, D.F.; Barbosa, J.L., Jr. Deep computer vision system and explainable artificial intelligence applied for classification of dragon fruit (Hylocereus spp.). Sci. Hortic. 2024, 338, 113605. [Google Scholar] [CrossRef]
  3. Sheikh, R.A.; Ahmed, I.; Faqihi, A.Y.A.; Shehawy, Y.M. Global perspectives on navigating Industry 5.0 knowledge: Achieving resilience, sustainability, and human-centric innovation in manufacturing. J. Knowl. Econ. 2024, 1–36. [Google Scholar] [CrossRef]
4. Hassoun, A.; Jagtap, S.; Trollman, H.; Garcia-Garcia, G.; Duong, L.N.; Saxena, P.; Bouzembrak, Y.; Treiblmaier, H.; Parra-López, C.; Carmona-Torres, C.; et al. From Food Industry 4.0 to Food Industry 5.0: Identifying technological enablers and potential future applications in the food sector. Compr. Rev. Food Sci. Food Saf. 2024, 23, e370040. [Google Scholar] [CrossRef] [PubMed]
  5. Guruswamy, S.; Pojić, M.; Subramanian, J.; Mastilović, J.; Sarang, S.; Subbanagounder, A.; Stojanović, G.; Jeoti, V. Toward better food security using concepts from industry 5.0. Sensors 2022, 22, 8377. [Google Scholar] [CrossRef]
  6. Sood, S.; Singh, H. Computer vision and machine learning based approaches for food security: A review. Multimed. Tools Appl. 2021, 80, 27973–27999. [Google Scholar] [CrossRef]
  7. Madhavan, M.; Sharafuddin, M.A.; Wangtueai, S. Measuring the Industry 5.0-readiness level of SMEs using Industry 1.0–5.0 practices: The case of the seafood processing industry. Sustainability 2024, 16, 2205. [Google Scholar] [CrossRef]
  8. Roy, S.; Singh, S. XR and digital twins, and their role in human factor studies. Front. Energy Res. 2024, 12, 1359688. [Google Scholar] [CrossRef]
  9. Deeba, K.; Chinnpa Prabu Shankar, K.; Gnanavel, S.; Elsisi, M. Artificial Intelligence, Computer Vision, and Robotics for Industry 5.0. In Next Generation Data Science and Blockchain Technology for Industry 5.0: Concepts and Paradigms; John Wiley & Sons: Hoboken, NJ, USA, 2025; pp. 295–324. [Google Scholar]
  10. Licardo, J.T.; Domjan, M.; Orehovački, T. Intelligent robotics—A systematic review of emerging technologies and trends. Electronics 2024, 13, 542. [Google Scholar] [CrossRef]
  11. Tzampazaki, M.; Zografos, C.; Vrochidou, E.; Papakostas, G.A. Machine vision—Moving from Industry 4.0 to Industry 5.0. Appl. Sci. 2024, 14, 1471. [Google Scholar] [CrossRef]
  12. Modoni, G.E.; Sacco, M. A human digital-twin-based framework driving human centricity towards industry 5.0. Sensors 2023, 23, 6054. [Google Scholar] [CrossRef]
  13. Konstantinidis, F.K.; Balaska, V.; Symeonidis, S.; Tsilis, D.; Mouroutsos, S.G.; Bampis, L.; Psomoulis, A.; Gasteratos, A. Automating dairy production lines with the yoghurt cups recognition and detection process in the Industry 4.0 era. Procedia Comput. Sci. 2023, 217, 918–927. [Google Scholar] [CrossRef]
  14. Jia, X.; Ma, P.; Tarwa, K.; Wang, Q. Machine vision-based colorimetric sensor systems for food applications. J. Agric. Food Res. 2023, 11, 100503. [Google Scholar] [CrossRef]
  15. Wang, G.; Liu, B.; Wang, J.; Wang, J. Intelligent Inspection System of Tobacco Enterprise Measuring Equipment Based on Machine Vision. In Proceedings of the 2023 4th International Conference on Computer Science and Management Technology, Xi’an, China, 13–15 October 2023; pp. 291–296. [Google Scholar]
  16. Agote-Garrido, A.; Martín-Gómez, A.M.; Lama-Ruiz, J.R. Manufacturing system design in industry 5.0: Incorporating sociotechnical systems and social metabolism for human-centered, sustainable, and resilient production. Systems 2023, 11, 537. [Google Scholar] [CrossRef]
  17. Panghal, A.; Chhikara, N.; Sindhu, N.; Jaglan, S. Role of Food Safety Management Systems in safe food production: A review. J. Food Saf. 2018, 38, e12464. [Google Scholar] [CrossRef]
  18. Pang, J.; Zheng, P.; Fan, J.; Liu, T. Towards cognition-augmented human-centric assembly: A visual computation perspective. Robot. Comput.-Integr. Manuf. 2025, 91, 102852. [Google Scholar] [CrossRef]
  19. Lins, T.; Oliveira, R.A.R. Cyber-physical production systems retrofitting in context of industry 4.0. Comput. Ind. Eng. 2020, 139, 106193. [Google Scholar] [CrossRef]
  20. Zhao, Z.; Wang, R.; Liu, M.; Bai, L.; Sun, Y. Application of machine vision in food computing: A review. Food Chem. 2024, 463, 141238. [Google Scholar] [CrossRef]
  21. Cuellar, S.; Grisales, S.; Castaneda, D.I. Constructing tomorrow: A multifaceted exploration of Industry 4.0 scientific, patents, and market trend. Autom. Constr. 2023, 156, 105113. [Google Scholar] [CrossRef]
  22. Vasudevan, S.; Mekhalfi, M.L.; Blanes, C.; Lecca, M.; Poiesi, F.; Chippendale, P.I.; Fresnillo, P.M.; Mohammed, W.M.; Lastra, J.L.M. Robotics and Machine Vision for Primary Food Manipulation and Packaging: A Survey. IEEE Access 2024, 12, 152579–152613. [Google Scholar] [CrossRef]
  23. Navarro-Guerrero, N.; Toprak, S.; Josifovski, J.; Jamone, L. Visuo-haptic object perception for robots: An overview. Auton. Robot. 2023, 47, 377–403. [Google Scholar] [CrossRef]
  24. Palanikumar, K.; Natarajan, E.; Ponshanmugakumar, A. Application of machine vision technology in manufacturing industries—A study. In Machine Intelligence in Mechanical Engineering; Elsevier: Amsterdam, The Netherlands, 2024; pp. 91–122. [Google Scholar]
  25. Konstantinidis, F.K.; Mouroutsos, S.G.; Gasteratos, A. The role of machine vision in industry 4.0: An automotive manufacturing perspective. In Proceedings of the 2021 IEEE International Conference on Imaging Systems and Techniques (IST), Kaohsiung, Taiwan, 24–26 August 2021; pp. 1–6. [Google Scholar]
  26. Villani, V.; Picone, M.; Mamei, M.; Sabattini, L. A digital twin driven human-centric ecosystem for industry 5.0. IEEE Trans. Autom. Sci. Eng. 2024, 22, 11291–11303. [Google Scholar] [CrossRef]
  27. Chai, J.J.; O’Sullivan, C.; Gowen, A.A.; Rooney, B.; Xu, J.L. Augmented/mixed reality technologies for food: A review. Trends Food Sci. Technol. 2022, 124, 182–194. [Google Scholar] [CrossRef]
  28. Chakravartula, S.S.N.; Bandiera, A.; Nardella, M.; Bedini, G.; Ibba, P.; Massantini, R.; Moscetti, R. Computer vision-based smart monitoring and control system for food drying: A study on carrot slices. Comput. Electron. Agric. 2023, 206, 107654. [Google Scholar] [CrossRef]
  29. Alimam, H.; Mazzuto, G.; Tozzi, N.; Ciarapica, F.E.; Bevilacqua, M. The resurrection of digital triplet: A cognitive pillar of human-machine integration at the dawn of industry 5.0. J. King Saud Univ.-Comput. Inf. Sci. 2023, 35, 101846. [Google Scholar] [CrossRef]
  30. Nilsson, F.; Jakobsen, J.; Alonso-Fernandez, F. Detection and classification of industrial signal lights for factory floors. In Proceedings of the 2020 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 9–11 June 2020; pp. 1–6. [Google Scholar]
  31. Rokhva, S.; Teimourpour, B.; Soltani, A.H. Computer vision in the food industry: Accurate, real-time, and automatic food recognition with pretrained MobileNetV2. Food Humanit. 2024, 3, 100378. [Google Scholar] [CrossRef]
  32. Siripatrawan, U.; Makino, Y. Assessment of food safety risk using machine learning-assisted hyperspectral imaging: Classification of fungal contamination levels in rice grain. Microb. Risk Anal. 2024, 27, 100295. [Google Scholar] [CrossRef]
  33. Fattahi, S.H.; Kazemi, A.; Khojastehnazhand, M.; Roostaei, M.; Mahmoudi, A. The classification of Iranian wheat flour varieties using FT-MIR spectroscopy and chemometrics methods. Expert Syst. Appl. 2024, 239, 122175. [Google Scholar] [CrossRef]
  34. Liu, Y.; Pu, H.; Sun, D.W. Hyperspectral imaging technique for evaluating food quality and safety during various processes: A review of recent applications. Trends Food Sci. Technol. 2017, 69, 25–35. [Google Scholar] [CrossRef]
  35. Aviara, N.A.; Liberty, J.T.; Olatunbosun, O.S.; Shoyombo, H.A.; Oyeniyi, S.K. Potential application of hyperspectral imaging in food grain quality inspection, evaluation and control during bulk storage. J. Agric. Food Res. 2022, 8, 100288. [Google Scholar] [CrossRef]
  36. Kang, S.; Kim, Y.; Ajani, O.S.; Mallipeddi, R.; Ha, Y. Predicting the properties of wheat flour from grains during debranning: A machine learning approach. Heliyon 2024, 10, e36472. [Google Scholar] [CrossRef]
  37. Medina-García, M.; Roca-Nasser, E.A.; Martínez-Domingo, M.A.; Valero, E.M.; Arroyo-Cerezo, A.; Cuadros-Rodríguez, L.; Jiménez-Carvelo, A.M. Towards the establishment of a green and sustainable analytical methodology for hyperspectral imaging-based authentication of wholemeal bread. Food Control 2024, 166, 110715. [Google Scholar] [CrossRef]
  38. Murengami, B.G.; Jing, X.; Jiang, H.; Liu, X.; Mao, W.; Li, Y.; Chen, X.; Wang, S.; Li, R.; Fu, L. Monitor and classify dough based on color image with deep learning. J. Food Eng. 2025, 386, 112299. [Google Scholar]
  39. Barthwal, R.; Kathuria, D.; Joshi, S.; Kaler, R.; Singh, N. New trends in the development and application of artificial intelligence in food processing. Innov. Food Sci. Emerg. Technol. 2024, 92, 103600. [Google Scholar] [CrossRef]
  40. Ahmad, H.M.; Rahimi, A. Deep learning methods for object detection in smart manufacturing: A survey. J. Manuf. Syst. 2022, 64, 181–196. [Google Scholar] [CrossRef]
  41. Leiva-Valenzuela, G.A.; Mariotti, M.; Mondragón, G.; Pedreschi, F. Statistical pattern recognition classification with computer vision images for assessing the furan content of fried dough pieces. Food Chem. 2018, 239, 718–725. [Google Scholar] [CrossRef] [PubMed]
  42. Ondras, J.; Ni, D.; Deng, X.; Gu, Z.; Zheng, H.; Bhattacharjee, T. Robotic dough shaping. In Proceedings of the 2022 22nd International Conference on Control, Automation and Systems (ICCAS), Jeju, Republic of Korea, 27 November–1 December 2022; pp. 300–307. [Google Scholar]
  43. Akundi, A.; Reyna, M. A machine vision based automated quality control system for product dimensional analysis. Procedia Comput. Sci. 2021, 185, 127–134. [Google Scholar] [CrossRef]
  44. Derossi, A.; Di Palma, E.; Moses, J.; Santhoshkumar, P.; Caporizzi, R.; Severini, C. Avenues for non-conventional robotics technology applications in the food industry. Food Res. Int. 2023, 173, 113265. [Google Scholar] [CrossRef]
  45. Bhana, R.; Mahmoud, H.; Idrissi, M. Smart industrial safety using computer vision. In Proceedings of the 2023 28th International Conference on Automation and Computing (ICAC), Birmingham, UK, 30 August–1 September 2023; pp. 1–6. [Google Scholar]
46. Deng, L.; Liu, G.; Zhang, Y. A review of machine vision applications in aerospace manufacturing quality inspection. In Proceedings of the 2024 4th International Conference on Computer, Control and Robotics (ICCCR), Shanghai, China, 19–21 April 2024; pp. 31–39. [Google Scholar]
  47. Wang, J.; Hu, H.; Chen, L.; He, C. Assembly defect detection of atomizers based on machine vision. In Proceedings of the 2019 4th International Conference on Automation, Control and Robotics Engineering, Shenzhen, China, 19–21 July 2019; pp. 1–6. [Google Scholar]
  48. Nadon, F.; Valencia, A.J.; Payeur, P. Multi-modal sensing and robotic manipulation of non-rigid objects: A survey. Robotics 2018, 7, 74. [Google Scholar] [CrossRef]
  49. Ji, S.; Lee, S.; Yoo, S.; Suh, I.; Kwon, I.; Park, F.C.; Lee, S.; Kim, H. Learning-based automation of robotic assembly for smart manufacturing. Proc. IEEE 2021, 109, 423–440. [Google Scholar] [CrossRef]
  50. Ylikoski, M. Optimization of Gateau Fazer’s Production Line. Bachelor’s Thesis, Metropolia University of Applied Sciences, Helsinki, Finland, 2022. [Google Scholar]
  51. Nivelle, M.A.; Bosmans, G.M.; Delcour, J.A. The impact of parbaking on the crumb firming mechanism of fully baked tin wheat bread. J. Agric. Food Chem. 2017, 65, 10074–10083. [Google Scholar] [CrossRef]
  52. Xiao, Z.; Wang, J.; Han, L.; Guo, S.; Cui, Q. Application of machine vision system in food detection. Front. Nutr. 2022, 9, 888245. [Google Scholar] [CrossRef]
  53. Wang, X.V.; Xu, P.; Cui, M.; Yu, X.; Wang, L. A literature survey of smart manufacturing systems for medical applications. J. Manuf. Syst. 2024, 76, 502–519. [Google Scholar] [CrossRef]
  54. Pereira, D.; Bozzato, A.; Dario, P.; Ciuti, G. Towards Foodservice Robotics: A taxonomy of actions of foodservice workers and a critical review of supportive technology. IEEE Trans. Autom. Sci. Eng. 2022, 19, 1820–1858. [Google Scholar] [CrossRef]
  55. Liu, X.; Liang, J.; Ye, Y.; Song, Z.; Zhao, J. A food package recognition and sorting system based on structured light and deep learning. In Proceedings of the 2023 International Joint Conference on Robotics and Artificial Intelligence, Shanghai, China, 7–9 July 2023; pp. 19–25. [Google Scholar]
  56. Qiu, Z.; Paul, H.; Wang, Z.; Hirai, S.; Kawamura, S. An evaluation system of robotic end-effectors for food handling. Foods 2023, 12, 4062. [Google Scholar] [CrossRef] [PubMed]
  57. Ghobakhloo, M. Industry 4.0, digitization, and opportunities for sustainability. J. Clean. Prod. 2020, 252, 119869. [Google Scholar] [CrossRef]
  58. Zafar, M.H.; Langås, E.F.; Sanfilippo, F. Exploring the synergies between collaborative robotics, digital twins, augmentation, and industry 5.0 for smart manufacturing: A state-of-the-art review. Robot. Comput.-Integr. Manuf. 2024, 89, 102769. [Google Scholar] [CrossRef]
  59. Jimoh, K.A.; Hashim, N. Recent advances in non-invasive techniques for assessing food quality: Applications and innovations. Adv. Food Nutr. Res. 2025, 114, 301–352. [Google Scholar]
  60. Hashmi, A.W.; Mali, H.S.; Meena, A.; Hashmi, M.F.; Bokde, N.D. Surface Characteristics Measurement Using Computer Vision: A Review. CMES-Comput. Model. Eng. Sci. 2023, 135, 917–1005. [Google Scholar]
  61. Mark, B.G.; Rauch, E.; Matt, D.T. Worker assistance systems in manufacturing: A review of the state of the art and future directions. J. Manuf. Syst. 2021, 59, 228–250. [Google Scholar] [CrossRef]
  62. Fan, J.; Zheng, P.; Li, S. Vision-based holistic scene understanding towards proactive human–robot collaboration. Robot. Comput.-Integr. Manuf. 2022, 75, 102304. [Google Scholar] [CrossRef]
  63. Sharma, A.; Kulkarni, A. Vision System for Smart Manufacturing: A Review. In Proceedings of the 2023 IEEE Engineering Informatics, Melbourne, Australia, 22–23 November 2023; pp. 1–9. [Google Scholar]
  64. Leng, J.; Zhu, X.; Huang, Z.; Li, X.; Zheng, P.; Zhou, X.; Mourtzis, D.; Wang, B.; Qi, Q.; Shao, H.; et al. Unlocking the power of industrial artificial intelligence towards Industry 5.0: Insights, pathways, and challenges. J. Manuf. Syst. 2024, 73, 349–363. [Google Scholar] [CrossRef]
  65. Kanth, R.; Heikkonen, J. Machine Vision and Artificial Intelligence in Robotics for Smart Factory. In Proceedings of the 2023 IEEE International Conference on Emerging Trends in Engineering, Sciences and Technology (ICES&T), Bahawalpur, Pakistan, 9–11 January 2023; pp. 1–4. [Google Scholar]
  66. Sahoo, S.; Lo, C.Y. Smart manufacturing powered by recent technological advancements: A review. J. Manuf. Syst. 2022, 64, 236–250. [Google Scholar] [CrossRef]
  67. Mandapaka, S.; Diaz, C.; Irisson, H.; Akundi, A.; Lopez, V.; Timmer, D. Application of automated quality control in smart factories—A deep learning-based approach. In Proceedings of the 2023 IEEE International Systems Conference (SysCon), Vancouver, BC, Canada, 17–20 April 2023; pp. 1–8. [Google Scholar]
  68. Prasad, P.D.; Patel, D.; Muthuswamy, S.; Karumbu, P. In Situ Material Identification, Machining Monitoring, and Cloud Logging Integrated with AI and Machine Vision. In Proceedings of the 2024 5th International Conference on Innovative Trends in Information Technology (ICITIIT), Kottayam, India, 15–16 March 2024; pp. 1–6. [Google Scholar]
  69. Adjogble, F.K.; Warschat, J.; Hemmje, M. Advanced Intelligent Manufacturing in Process Industry Using Industrial Artificial Intelligence. In Proceedings of the 2023 Portland International Conference on Management of Engineering and Technology (PICMET), Monterrey, Mexico, 23–27 July 2023; pp. 1–16. [Google Scholar]
  70. Hamza, S.A.; Jesser, A. Advancing Industry 4.0 with real-time machine vision integration in cyber-physical systems. In Proceedings of the 2024 IEEE 3rd International Conference on Computing and Machine Intelligence (ICMI), Mt. Pleasant, MI, USA, 13–14 April 2024; pp. 1–5. [Google Scholar]
  71. Li, C.; Zheng, P.; Yin, Y.; Wang, B.; Wang, L. Deep reinforcement learning in smart manufacturing: A review and prospects. CIRP J. Manuf. Sci. Technol. 2023, 40, 75–101. [Google Scholar] [CrossRef]
  72. Ren, L.; Dong, J.; Liu, S.; Zhang, L.; Wang, L. Embodied intelligence toward future smart manufacturing in the era of AI foundation model. IEEE ASME Trans. Mechatron. 2024, 30, 2632–2642. [Google Scholar] [CrossRef]
  73. Azadnia, R.; Fouladi, S.; Jahanbakhshi, A. Intelligent detection and waste control of hawthorn fruit based on ripening level using machine vision system and deep learning techniques. Results Eng. 2023, 17, 100891. [Google Scholar] [CrossRef]
  74. Sontakke, M.; Yerimah, L.E.; Rebmann, A.; Ghosh, S.; Dory, C.; Hedden, R.; Bequette, B.W. Integrating smart manufacturing techniques into undergraduate education: A case study with heat exchanger. Comput. Chem. Eng. 2024, 191, 108858. [Google Scholar] [CrossRef]
  75. Javaid, M.; Haleem, A.; Singh, R.P.; Suman, R. Substantial capabilities of robotics in enhancing industry 4.0 implementation. Cogn. Robot. 2021, 1, 58–75. [Google Scholar] [CrossRef]
  76. Yang, J.; Liu, Y.; Morgan, P.L. Human-machine interaction towards Industry 5.0: Human-centric smart manufacturing. Digit. Eng. 2024, 2, 100013. [Google Scholar] [CrossRef]
  77. Yang, H.I.; Min, S.G.; Yang, J.H.; Eun, J.B.; Chung, Y.B. A novel hybrid-view technique for accurate mass estimation of kimchi cabbage using computer vision. J. Food Eng. 2024, 378, 112126. [Google Scholar] [CrossRef]
  78. Yousif, I.; Samaha, J.; Ryu, J.; Harik, R. Safety 4.0: Harnessing computer vision for advanced industrial protection. Manuf. Lett. 2024, 41, 1342–1356. [Google Scholar] [CrossRef]
  79. Maddikunta, P.K.R.; Pham, Q.V.; Prabadevi, B.; Deepa, N.; Dev, K.; Gadekallu, T.R.; Ruby, R.; Liyanage, M. Industry 5.0: A survey on enabling technologies and potential applications. J. Ind. Inf. Integr. 2022, 26, 100257. [Google Scholar] [CrossRef]
  80. Lin, S.; Qi, X. Development of Intelligent Agricultural Automation Based on Computer Vision. In Proceedings of the 2023 International Conference on Integrated Intelligence and Communication Systems (ICIICS), Kalaburagi, India, 24–25 November 2023; pp. 1–6. [Google Scholar]
  81. Ciccarelli, M.; Papetti, A.; Germani, M. Exploring how new industrial paradigms affect the workforce: A literature review of Operator 4.0. J. Manuf. Syst. 2023, 70, 464–483. [Google Scholar] [CrossRef]
  82. Wu, H.; Xu, W.; Yao, B.; Hu, Y.; Feng, H. Interacting multiple model-based adaptive trajectory prediction for anticipative human following of mobile industrial robot. Procedia Comput. Sci. 2020, 176, 3692–3701. [Google Scholar] [CrossRef]
  83. Yin, Y.; Zheng, P.; Li, C.; Wang, L. A state-of-the-art survey on Augmented Reality-assisted Digital Twin for futuristic human-centric industry transformation. Robot. Comput.-Integr. Manuf. 2023, 81, 102515. [Google Scholar] [CrossRef]
  84. Sevetlidis, V.; Pavlidis, G.; Balaska, V.; Psomoulis, A.; Mouroutsos, S.G.; Gasteratos, A. Enhancing Weakly Supervised Defect Detection Through Anomaly-Informed Weighted Training. IEEE Trans. Instrum. Meas. 2024, 73, 3538310. [Google Scholar] [CrossRef]
  85. Symeonidis, S.; Peikos, G.; Arampatzis, A. Unsupervised consumer intention and sentiment mining from microblogging data as a business intelligence tool. Oper. Res. 2022, 22, 6007–6036. [Google Scholar] [CrossRef]
  86. Velesaca, H.O.; Suárez, P.L.; Mira, R.; Sappa, A.D. Computer vision based food grain classification: A comprehensive survey. Comput. Electron. Agric. 2021, 187, 106287. [Google Scholar] [CrossRef]
  87. Steger, C.; Ulrich, M.; Wiedemann, C. Machine Vision Algorithms and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2018. [Google Scholar]
  88. Lullien-Pellerin, V. How can we evaluate and predict wheat quality? J. Cereal Sci. 2024, 104001. [Google Scholar] [CrossRef]
  89. Zhong, R.Y.; Xu, X.; Klotz, E.; Newman, S.T. Intelligent manufacturing in the context of industry 4.0: A review. Engineering 2017, 3, 616–630. [Google Scholar] [CrossRef]
  90. Nahavandi, S. Industry 5.0—A human-centric solution. Sustainability 2019, 11, 4371. [Google Scholar] [CrossRef]
  91. Li, S.; Zheng, P.; Liu, S.; Wang, Z.; Wang, X.V.; Zheng, L.; Wang, L. Proactive human–robot collaboration: Mutual-cognitive, predictable, and self-organising perspectives. Robot. Comput.-Integr. Manuf. 2023, 81, 102510. [Google Scholar] [CrossRef]
  92. Li, M.; Liu, Y.; Xu, G.; Ma, Z. The intelligent warehousing system combined with machine vision is constructed. In Proceedings of the 2024 3rd International Symposium on Control Engineering and Robotics, Changsha, China, 24–26 May 2024; pp. 104–108. [Google Scholar]
  93. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  94. Wakchaure, M.; Patle, B.K.; Mahindrakar, A.K. Application of AI techniques and robotics in agriculture: A review. Artif. Intell. Life Sci. 2023, 3, 100057. [Google Scholar] [CrossRef]
  95. Castillo-Ortiz, I.; Villar-Patiño, C.; Guevara-Martínez, E. Computer vision solution for uniform adherence in gastronomy schools: An artificial intelligence case study. Int. J. Gastronomy Food Sci. 2024, 37, 100997. [Google Scholar] [CrossRef]
  96. Mehdizadeh, S.A. Machine vision based intelligent oven for baking inspection of cupcake: Design and implementation. Mechatronics 2022, 82, 102746. [Google Scholar] [CrossRef]
Figure 1. Distribution of articles by scientific source.
Figure 2. Overview of the frozen dough production process. The sequence includes quality inspection of raw materials, automated dough preparation, shaping, optional pre-baking, freezing, packaging, and final quality control.
Figure 3. Number of publications per year on computer vision applications, 2010–2024. Data source: [Scopus/Web of Science], query “computer vision applications”; data retrieved in May 2025.
Figure 4. Percentage distribution of machine vision categories based on their application.
Figure 5. Examples of vision-based analysis in frozen dough production. (a) Keyword co-occurrence network from literature analysis. (b) Machine vision in human-centric manufacturing.
Figure 6. Application of detection and analysis techniques in different stages of the bakery production process. The colored bands represent the main production stages: raw material preparation (purple), mixing and fermentation (blue), proofing (green), baking (red), packaging (cyan), and quality control (orange/red). Note: the overlapping labels “Formed products” correspond to intermediate (proofing) and final (baking) product stages.
Figure 7. Heatmap of assessment levels for machine vision applications across different production stages.
Figure 8. Conceptual framework of machine vision application areas across three human-centric manufacturing dimensions.
Table 1. Comparison of vision system requirements in industrial domains.
Domain | Environment | Product Variability | Vision Complexity
Automotive | Structured | Low | Medium
Pharmaceutical | Cleanroom | Very Low | Low
Frozen Dough | Semi-structured | High | High
Electronics | Structured | Medium | Medium
Agriculture | Unstructured | Very High | Very High
Table 2. Study selection process summary.
Review Stage | Number of Studies
Identified via IEEE Xplore, ScienceDirect, ACM, Scopus | 2186
Removed prior to screening | 162
Screened | 2024
Excluded after screening | 334
Full-text retrieved | 1690
Could not be retrieved | 30
Assessed for eligibility | 1660
Excluded after full-text assessment | 1575
Final included studies | 85
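The counts in Table 2 follow a PRISMA-style selection flow [93], in which each stage should equal the previous stage minus the records removed at that step. The short sketch below (variable names are ours; all counts are taken verbatim from the table) makes this bookkeeping explicit:

```python
# Illustrative consistency check of the PRISMA-style counts in Table 2.
# All numbers come directly from the table; only the variable names are ours.
identified, removed_pre = 2186, 162
screened, excluded_screen = 2024, 334
retrieved, not_retrieved = 1690, 30
assessed, excluded_full = 1660, 1575
included = 85

assert screened == identified - removed_pre      # 2186 - 162 = 2024
assert retrieved == screened - excluded_screen   # 2024 - 334 = 1690
assert assessed == retrieved - not_retrieved     # 1690 - 30 = 1660
assert included == assessed - excluded_full      # 1660 - 1575 = 85
print("Table 2 selection flow is internally consistent.")
```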
Table 3. Machine vision applications in frozen dough production.
Application (%) | Description
Foreign object detection (13.33%) | Detects contaminants in raw materials using high-resolution imaging, ensuring food safety.
Packaging defect inspection (6.67%) | Identifies issues like misaligned labels and damaged seals during final packaging.
Dimensional inspection (6.67%) | Ensures uniform shape and size of products during shaping.
Elasticity and texture analysis (6.67%) | Evaluates dough elasticity in real-time to maintain consistency.
Uniformity and moisture control (6.67%) | Monitors dough homogeneity and moisture to ensure stable production.
Crack and flaw detection (6.67%) | Detects visible cracks and imperfections.
Dimensional analysis (6.67%) | Supports consistent size measurements across production stages.
Fermentation monitoring (6.67%) | Observes fermentation progress via structural and moisture analysis.
Moisture and chemical composition detection (6.67%) | Monitors raw material composition for quality assurance.
Dynamic shaping (6.67%) | Adjusts shaping parameters in real-time using robotic feedback.
Visual quality inspection (6.67%) | Ensures visual appeal and quality consistency of products.
Thickness uniformity monitoring (6.67%) | Controls and verifies even product thickness.
Robotic mechanism adaptation (6.67%) | Enables robots to adapt to production changes dynamically.
Dough condition classification (6.67%) | Classifies dough as under-, well-, or over-fermented for process optimization.
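Dough condition classification, the final entry in Table 3, is typically cast as a small image classification problem, for example with a pretrained backbone such as MobileNetV2 [31] fine-tuned on dough images. The sketch below is an illustrative transfer-learning setup under that assumption, not the pipeline of any specific study cited here:

```python
# Minimal sketch, assuming a transfer-learning setup: a pretrained MobileNetV2
# backbone with a three-class head for the dough-condition labels in Table 3
# (under-, well-, over-fermented). Dataset, paths, and hyperparameters are omitted.
import torch
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.last_channel, 3)  # 3 fermentation classes

# Freeze the backbone so only the new head is trained on a small dough dataset.
for p in model.features.parameters():
    p.requires_grad = False

x = torch.randn(1, 3, 224, 224)   # dummy dough image batch (normalization omitted)
logits = model(x)
print(logits.shape)               # torch.Size([1, 3])
```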
Table 4. Qualitative assessment of machine vision applications in frozen dough production.
Production Stage | Human–Machine Collaboration | Safety Impact | Productivity Outcomes
Raw Material Inspection | Medium | High | Medium
Mixing & Fermentation | Low | Medium | High
Shaping | Medium | Medium | High
Filling & Assembly | High | Medium | Medium
Parbaking | Medium | Medium | Medium
Freezing | Low | High | High
Packaging | Medium | High | Medium
Final Quality Control | Medium | High | High
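The qualitative levels in Table 4 can be mapped to ordinal scores to produce a heatmap of the kind shown in Figure 7. The following sketch is one illustrative rendering; the Low/Medium/High to 1/2/3 encoding and the color map are our own simplifications:

```python
# Illustrative rendering of Table 4 as an ordinal heatmap (cf. Figure 7).
# The 1/2/3 encoding of Low/Medium/High is an assumed simplification.
import matplotlib.pyplot as plt

stages = ["Raw Material Inspection", "Mixing & Fermentation", "Shaping",
          "Filling & Assembly", "Parbaking", "Freezing", "Packaging",
          "Final Quality Control"]
dims = ["HM Collaboration", "Safety Impact", "Productivity"]
score = {"Low": 1, "Medium": 2, "High": 3}
levels = [("Medium", "High", "Medium"), ("Low", "Medium", "High"),
          ("Medium", "Medium", "High"), ("High", "Medium", "Medium"),
          ("Medium", "Medium", "Medium"), ("Low", "High", "High"),
          ("Medium", "High", "Medium"), ("Medium", "High", "High")]

grid = [[score[v] for v in row] for row in levels]   # 8 stages x 3 dimensions
fig, ax = plt.subplots()
im = ax.imshow(grid, cmap="YlOrRd", vmin=1, vmax=3)
ax.set_xticks(range(len(dims)), labels=dims)
ax.set_yticks(range(len(stages)), labels=stages)
fig.colorbar(im, ticks=[1, 2, 3])
plt.tight_layout()
plt.show()
```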