Review

Converging Extended Reality and Robotics for Innovation in the Food Industry

Department of Food Engineering, Dankook University, Cheonan 31116, Chungcheongnam-do, Republic of Korea
* Author to whom correspondence should be addressed.
AgriEngineering 2025, 7(10), 322; https://doi.org/10.3390/agriengineering7100322
Submission received: 6 August 2025 / Revised: 6 September 2025 / Accepted: 19 September 2025 / Published: 1 October 2025

Abstract

Extended Reality (XR) technologies—including Virtual Reality, Augmented Reality, and Mixed Reality—are increasingly applied in the food industry to simulate sensory environments, support education, and influence consumer behavior, while robotics addresses labor shortages, hygiene, and efficiency in production. This review uniquely synthesizes their convergence through digital twin frameworks, combining XR’s immersive simulations with robotics’ precision and scalability. A systematic literature review and keyword co-occurrence analysis of over 800 titles revealed research clusters around consumer behavior, nutrition education, sensory experience, and system design. In parallel, robotics has expanded beyond traditional pick-and-place tasks into areas such as precision cleaning, chaotic mixing, and digital gastronomy. The integration of XR and robotics offers synergies including risk-free training, predictive task validation, and enhanced human–robot interaction but faces hurdles such as high hardware costs, motion sickness, and usability constraints. Future research should prioritize interoperability, ergonomic design, and cross-disciplinary collaboration to ensure that XR–robotics systems evolve not merely as tools, but as a paradigm shift in redefining the human–food–environment relationship.

1. Introduction

This review focuses on the intersection of Extended Reality (XR) technologies and robotics in the food industry, an emerging field where immersive simulation and intelligent automation converge. The modern food supply chain has evolved into a vast and complex global system, encompassing farming, processing, distribution, and consumption. Advances in food science and technology, integrating multiple disciplines, have enabled the production of safe, nutritious, and accessible food while ensuring sustainable monitoring and control of food safety and quality [1].
Today’s food industry faces the critical challenge of meeting the growing demand for food in a sustainable manner [2]. This demand has intensified the need for efficient automation systems in food production. Robotics plays a key role by executing planned tasks with high precision and repeatability, thereby improving product quality and consistency [3]. Additionally, the use of robots reduces the risk of workplace injuries caused by repetitive motions and enhances overall working conditions. Increased efficiency helps reduce production time and costs while minimizing waste [4]. These characteristics make robotics particularly well-suited for food industry tasks that require speed, uniformity, and high-frequency repetition [5]. In food processing, robots are commonly applied in pick-and-place operations for sorting, packaging, and material handling. Robotic automation is most effective when implemented to solve or optimize specific manufacturing and processing scenarios. Among its many benefits, flexibility is one of the most critical advantages of robotics in food and beverage production. Robotics inherently enables reconfigurability and rapid adaptation to new workflows and environments [6].
Robotics has accelerated the automation of key food industry operations, including processing, quality control, and material handling, highlighting its integral role in enhancing efficiency, precision, and scalability across modern food manufacturing systems [7]. In parallel with advancements in robotics, immersive technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) provide intuitive and sensory-rich interfaces that simulate complex food processes and enhance user engagement [8]. Progress in display and computing technologies has enabled the development of devices that either superimpose digital elements onto the real world or integrate real-world components into immersive virtual environments. This convergence of digital and physical domains is collectively termed XR [9].
The term “extended reality” has recently gained prominence as an overarching label for technologies such as VR, AR, and MR [10]. VR enables users to perceive a sense of presence within an immersive, three-dimensional environment that is entirely computer-generated [11]. AR refers to the real-time enhancement of the physical environment, in which virtual, computer-generated elements are overlaid onto a direct or indirect view of the real world [12]. MR, sometimes referred to as merged reality, describes an environment in which real and virtual elements are seamlessly integrated, allowing digital and physical objects to coexist and interact in real time [13].
XR facilitates the simulation of spatial environments under controlled conditions, enabling users to interact with, manipulate, and isolate variables, objects, and scenarios in a time- and cost-efficient manner [14]. XR technologies, encompassing VR, AR, and MR, have been applied across a variety of industrial sectors such as construction, education and training, automotive, energy, and manufacturing [15]. In fact, Herur-Raman et al. [16] utilized XR to enhance medical education, focusing on anatomy learning, communication training, and surgical simulation. Sharma [17] reviewed the use of XR in education and reported that its integration across primary, secondary, and higher education levels enhances student motivation, facilitates experiential learning, and supports deeper engagement through project-based and gamified instruction.
A growing number of studies have been conducted in the food industry utilizing XR technologies. Xu et al. [18] reported that VR has been actively applied in food consumer behavior research, particularly in sensory evaluation, shopping behavior analysis, virtual food cue exposure, and food choice studies, demonstrating its validity as a tool for simulating realistic environments and capturing consumer responses. Styliaras [19] presented a review of 34 AR applications in the food industry, highlighting their use in food promotion and analysis, including nutritional labeling and restaurant ordering. Ahn et al. [20] developed MR-FoodCoach, a mixed reality shopping system using the HoloLens 2 that overlays nutrition facts, NutriScore labels, and supply chain data onto virtual food items. This application demonstrates MR’s potential to support healthier food choices by enhancing information accessibility and interactivity in retail settings.
Despite rapid advances in XR and robotics individually, there is a lack of integrated frameworks that connect immersive interfaces with robotic automation in food systems. This limitation hinders the development of intelligent and scalable food technologies.
XR technologies can also support robotic control in food processing environments, providing safe and intuitive platforms for simulation and remote operation. Extended reality platforms offer a virtual environment in which robotic path planning and collision avoidance algorithms can be tested prior to real-world deployment, ensuring operational safety without compromising efficiency. Such capabilities are particularly important in contexts where direct human–robot interaction may pose safety risks, underscoring XR’s role in creating risk-free testing and training environments. Moreover, XR-enhanced teleoperation enables intuitive human–robot interaction, granting operators depth perception, situational awareness, and even haptic feedback while managing robotic systems in hazardous or inaccessible environments [21,22]. Together, these capabilities position XR as a critical enabler for integrating robotics into sensitive food industry processes, where safety and precision are paramount.
Existing reviews have addressed XR in food contexts or robotics in food automation separately, yet few have explored their convergence. Moreover, the role of digital twins as a bridge between XR-based design and robotic implementation remains underexplored. This review aims to provide a systematic synthesis of XR and robotics applications in the food industry, highlighting their integration through digital twin frameworks. In addition to examining recent empirical studies, we conduct a keyword co-occurrence analysis of over 800 titles to identify emerging research clusters and trends. This dual approach—empirical synthesis plus computational mapping—represents a novel contribution compared to prior reviews.

2. Overview of Extended Reality Technologies

2.1. Definition and Classification of XR Technologies (VR, AR, MR)

VR immerses users in a fully computer-generated environment, replacing their perception of the real world through headsets, smart glasses, or multi-display systems [23]. AR, by contrast, has gained significant attention for its ability to transform the physical environment into an interactive interface for digital content; the seamless integration of real and virtual elements within a single interface has enabled innovative applications and services across various fields [24]. Although often mistaken for AR, MR represents a convergence of AR and VR technologies: users can interact with both real and virtual objects in real time, and these objects can influence one another. This environment-aware capability enables virtual elements to respond to the physical world, and vice versa, regardless of the user’s physical location [25]. Distinguishing the three modalities is essential, as they differ fundamentally in how they engage with and alter the user’s perception of the environment. VR replaces the real world with a simulated one; AR overlays virtual elements onto a view of the physical world using devices such as head-up displays, AR glasses, smartphones, or tablets, thereby augmenting the real environment with additional digital information [26]; and MR, unlike AR’s one-way overlay, supports two-way interaction between real and virtual entities [27]. These three technologies can be positioned along a continuum of immersion and interaction within the XR spectrum, as illustrated in Figure 1.

2.2. Key Devices Used in XR Applications

XR technologies either enhance or completely replace the user’s perception of reality with computer-generated content, typically delivered via head-mounted displays (HMDs) [28]. Leading technology companies are actively investing in XR development, with notable headsets including Microsoft’s HoloLens, Apple’s Vision Pro, Meta’s Oculus Quest, and ByteDance’s Pico. New XR headsets with enhanced features are introduced annually, reflecting the rapid pace of technological advancement in the field [29].

2.2.1. Oculus Quest by Meta

According to Yoon et al. [30], Meta’s Quest series dominates the VR headset market, accounting for approximately 58 percent of the global share. The Meta Quest 3 is notable for its dual capability to support both AR and VR, enabling users to interact either with their physical environment or within fully immersive virtual spaces. As one of the most advanced MR-enabled devices, it offers seamless integration between physical and digital environments, delivering a highly immersive user experience [31,32].

2.2.2. Vision Pro by Apple

The Apple Vision Pro (AVP) is a next-generation spatial computing device that integrates VR and AR technologies into a unified XR platform. Featuring dual 4K displays and advanced spatial computing capabilities, it marks a major milestone in the evolution of XR technologies [33,34]. The device enables seamless integration between digital content and the real world, supporting natural interaction via eye tracking, hand gestures, and voice input. Its array of advanced cameras and sensors enables precise environmental mapping, spatial awareness, and hand tracking. The spatial audio system enhances immersion by blending digital sound with ambient noise, while personalized micro-OLED displays provide high pixel density per eye, resulting in exceptional visual clarity [35].

2.2.3. HoloLens by Microsoft

The Microsoft HoloLens, introduced in 2016, is a state-of-the-art head-mounted display (HMD) that operates on MR technology. It allows users to interact with their surroundings via holographic projections, offering a multisensory and immersive experience [36]. The HoloLens 2 is equipped with a sophisticated sensor array, including eight cameras and multiple inertial measurement units (IMUs), to facilitate precise user and environmental understanding. Its components include four visible-light cameras for visual-inertial simultaneous localization and mapping (SLAM), a depth sensor for spatial mapping and hand tracking, two infrared cameras for eye tracking, and an RGB camera for MR capture. Additionally, the device integrates an accelerometer, gyroscope, and magnetometer to support motion tracking and absolute orientation estimation [37].

3. Keyword Association Analysis of XR in the Food Industry

3.1. Methodology for Web Crawling and Keyword Analysis

To establish the research scope before examining XR applications in the food industry, we conducted a keyword analysis of article titles. These titles were collected using web crawling and web scraping techniques to capture how the terms extended reality and food are applied across different domains. A web crawler is a core component of search systems that systematically navigates through webpages to gather data for indexing and retrieval. A search ranking algorithm subsequently prioritizes the indexed content based on its relevance to user queries [38]. In contrast, web scraping refers to the targeted extraction of information from structured web content, commonly used for tasks such as price tracking, product review analysis, online presence monitoring, and content aggregation [39].
Using the search query “virtual” OR “augmented” OR “mixed” AND “reality” AND “food,” a total of 900 academic article titles were retrieved from Google Scholar, along with 418 media headlines, yielding 1318 titles in total. After removing duplicates, 805 unique titles were retained for keyword co-occurrence analysis.
To ensure consistency, the terms extended, virtual, augmented, and mixed were unified under the umbrella term extended. Additionally, common English stopwords—including articles, prepositions, and conjunctions (e.g., the, of, in, for, and)—were removed to enhance keyword interpretability. To increase clarity and analytical focus, only the top 45 most frequently occurring keywords were included in the final network analysis.
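To make these preprocessing and network-construction steps concrete, the following minimal Python sketch reproduces the pipeline described above under stated assumptions: the titles are already collected into a list, and the stopword set, unification map, and function names are illustrative rather than the actual analysis code.

```python
# Minimal sketch of the keyword co-occurrence pipeline described above.
# Assumes titles are already collected; names and word lists are illustrative.
import itertools
import re
from collections import Counter

STOPWORDS = {"the", "of", "in", "for", "and", "a", "an", "to", "on", "with"}
# Unify the XR-related variants under the umbrella term, as in the text.
UNIFY = {"virtual": "extended", "augmented": "extended", "mixed": "extended"}

def tokenize(title: str) -> set[str]:
    words = re.findall(r"[a-z]+", title.lower())
    # Set semantics: each keyword counts at most once per title.
    return {UNIFY.get(w, w) for w in words if w not in STOPWORDS}

def build_network(titles: list[str], top_n: int = 45):
    token_sets = [tokenize(t) for t in titles]
    freq = Counter(w for toks in token_sets for w in toks)
    keep = {w for w, _ in freq.most_common(top_n)}
    edges = Counter()
    for toks in token_sets:
        for a, b in itertools.combinations(sorted(toks & keep), 2):
            edges[(a, b)] += 1  # two kept keywords co-occur in one title
    return freq, edges

titles = ["Virtual reality and food choice in a supermarket",
          "Augmented reality nutrition labels for consumer education"]
freq, edges = build_network(titles)
print(edges.most_common(3))
```

Counting each keyword at most once per title prevents long titles from dominating the frequency ranking, mirroring standard co-occurrence mapping practice.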
The resulting visualization (Figure 2) presents a keyword co-occurrence network revealing several interconnected thematic clusters at the intersection of XR and food.
A prominent cluster centers on user-centered experiences, featuring terms such as immersive, experience, perception, and sensory, which suggest a strong research interest in how XR technologies enhance food-related perceptions and interactions. This reflects the growing attention paid to affective and experiential dimensions of food consumption in XR environments.
Another cluster includes education, training, learning, and tool, underscoring the use of XR as a pedagogical or behavioral intervention tool. This aligns with applications in nutrition education, safety training, and health promotion.
A third cluster contains infrastructure- and design-related terms such as system, technology, design, application, and development, indicating ongoing focus on the technical and implementation aspects of XR deployment in the food sector.
Finally, terms like consumer, behavior, choices, and evaluation highlight an emphasis on understanding consumer decision-making and behavioral responses in XR-mediated food environments.
Together, these thematic clusters reveal that XR and food are connected not only through technological innovation, but also through multidimensional concerns including user experience, educational value, sensory design, and behavioral impact—illustrating a rapidly evolving research agenda in this emerging field.
Notably, the clusters associated with education, training, and experiential learning highlight how XR can function as a preparatory environment for complex or high-risk tasks. These immersive features are particularly relevant when considering the integration of robotics into food systems, where operator training, risk mitigation, and procedural simulation are critical. By extending XR’s strengths in engagement and experiential learning to robotic applications, immersive platforms can provide safer and more efficient pathways for skill acquisition and system familiarization before real-world deployment.

3.2. Key Research Trends and Emerging Topics

Research trends in the application of XR in the food industry were examined using articles published between 1995 and 2024. A total of 662 articles were included in the final analysis, excluding 36 papers from 2025 and 25 entries for which the publication year could not be identified through web crawling.
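A minimal sketch of this filtering and tallying step is given below, assuming each crawled record carries an optional year field; the record structure and values are illustrative.

```python
# Illustrative tally of publications per year, excluding records outside the
# 1995-2024 window or lacking an identifiable publication year.
from collections import Counter

records = [{"title": "...", "year": 2021},
           {"title": "...", "year": None},   # year not identifiable
           {"title": "...", "year": 2024}]

counts = Counter(r["year"] for r in records
                 if r["year"] is not None and 1995 <= r["year"] <= 2024)
for year in sorted(counts):
    print(year, counts[year])
```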
Figure 3 displays the yearly distribution of XR-related publications in the food industry over this 30-year period. Between 1995 and 2009, research activity was sporadic, with fewer than five papers published annually. Interest began to grow modestly in the early 2010s, with a notable increase in 2011 (13 publications). This upward trajectory continued steadily through the mid-2010s.
From 2016 onward, the number of publications rose substantially each year, reflecting increased academic and industry attention to immersive and XR technologies in food contexts. A key inflection point occurred in 2020, with more than 60 articles published for the first time. This upward trend continued, with 94 articles in 2021, 88 in 2022, 82 in 2023, and peaking at 114 in 2024—the highest annual total to date.
These findings suggest that research on XR applications in the food industry has evolved from an exploratory phase into a rapidly expanding field, particularly over the last five years. The sharp increase in publications since 2020 may be attributed to broader advancements in XR technology, greater accessibility of immersive tools, and the pandemic-driven demand for virtual experiences in education, retail, and food-related consumer engagement.

4. Application of XR in the Food Industry

4.1. Methodology of Literature Review

A literature search was conducted in April 2025 using Google Scholar to examine the intersection of XR technologies and the food industry. The search employed the keywords “Virtual Reality,” “Augmented Reality,” “Mixed Reality,” and “Extended Reality” in combination with the term “food.” Articles that exhibited conceptual ambiguity between XR, VR, AR, and MR, as well as review papers, were excluded during the screening process to ensure conceptual clarity and topical relevance. Studies were included if they involved empirical applications of XR technologies in food-related contexts such as production, marketing, education and training, or consumer behavior.

4.2. Application of VR in the Food Industry

Within the XR spectrum, VR is uniquely characterized by its ability to fully immerse a user in a computer-generated, three-dimensional environment, effectively replacing their perception of the real world. This high degree of immersion fosters a powerful sense of presence, providing researchers and practitioners with a sophisticated and controlled simulation environment that transcends the physical, temporal, and ethical constraints of real-world settings [40].
These inherent properties of VR have positioned it as a highly effective tool for addressing several challenges within the food industry. Its potential is particularly evident in four key domains: (1) Simulating and Validating Consumer Food Choice Behavior, where precise control over variables is critical; (2) Enhancing Food Education and Promoting Sustainable Behavior, which benefits from safe and repeatable learning scenarios; (3) Stimulating Appetite and Sensory Perception, enabling the study of sensory experiences without actual food stimuli; and (4) Measuring Disgust, Bias, and Eating-Related Psychopathology, for investigating sensitive psychological responses in a controlled manner.
The following subsections will provide an in-depth examination of the empirical studies within each domain, focusing on their specific technical implementations and key findings. This detailed review will serve as the foundation for the broader comparative analysis against other XR technologies presented in Section 4.5.

4.2.1. Simulating and Validating Consumer Food Choice Behavior

Recent advances in VR technology have enabled the creation of immersive environments that closely replicate real-life settings such as supermarkets and food courts. These simulated environments allow researchers to systematically observe and measure food choice behavior while controlling for contextual variables that are difficult to manipulate in real-world conditions. Participants typically wear VR head-mounted displays, as shown in Figure 4a, to experience immersive food environments such as virtual supermarkets or dining tables (Figure 4b).
In a foundational study validating VR for consumer research, Siegrist et al. [41] found that cereal selections and gaze patterns in a virtual supermarket (using an Oculus Rift DK2) were highly similar to a real-life setting. While the VR task took significantly longer (123 s vs. 71 s, p < 0.001), a second experiment successfully replicated a real-world finding: health-motivated participants examined nutrition labels significantly more than taste-motivated ones (p < 0.05). These results provide strong evidence for VR’s ability to simulate realistic consumer decision-making in a controlled environment.
Furthering the ecological validation of VR, Cheah et al. [42] created a hyper-realistic buffet using photogrammetry and an HTC Vive Pro. Despite a one-week interval and intentionally unmatched food items, their analysis revealed strong correlations in nutrient intake (e.g., total calories, r = 0.58) between the virtual and real-world sessions. These objective findings, supported by high user ratings for the experience’s realism (M = 70.97/100), confirmed the platform’s value in predicting real-world dietary behavior.
To test VR’s potential for policy simulation, Allman-Farinelli et al. [43] designed a large-scale Virtual Reality Food Court and assessed its usability. The platform achieved a high presence score (mean 144/196) and acceptable usability (mean SUS 69/100), confirming its promise as a research tool. A key limitation, however, was its reliance on unnatural keyboard-and-mouse navigation, signaling a need for future interaction enhancements.
Collectively, these studies demonstrate that VR offers a robust and versatile platform for simulating consumer food choice behavior, combining ecological realism with experimental control to support behavioral nutrition research and policy simulation.
In addition to the aforementioned studies, several other works categorized under the SVC (Simulating and Validating Consumer Food Choice Behavior) domain have also examined VR’s role in food choice simulation, as summarized in Table 1.

4.2.2. Enhancing Food Education and Promoting Sustainable Behavior

In addition to simulating food choices, VR has been employed as an educational tool to influence food-related knowledge, attitudes, and behaviors. Recent studies have explored how immersive VR environments can promote healthier dietary habits, support environmentally sustainable food choices, and enhance learning in nutrition and food technology education. These applications highlight VR’s potential not only as a behavioral observation tool, but also as a persuasive medium for educational and behavioral interventions in food contexts.
Gorman et al. [50] demonstrated VR’s potential in education by creating a virtual classroom for food safety training on an Oculus Go. Using 360-degree video and a gaze-click interface, the system was highly rated by middle school students, achieving a strong System Usability Scale (SUS) score of 80.4/100. The universally positive feedback on engagement and enjoyment highlighted VR’s value as a motivating tool for practical learning, especially where physical resources are limited.
To promote sustainable food choices, Plechatá et al. [51] designed an Oculus Quest experience for middle school students. They found that an ‘Awareness + Efficacy’ condition—where students could re-select their food choices to see a positive environmental impact—was significantly more effective at increasing pro-environmental intentions (p = 0.003) than an ‘Awareness-only’ condition. The study concluded that this effect was driven by an increase in self-efficacy, highlighting the importance of empowering users to experience their own positive impact within VR interventions.
Overall, the use of VR in food-related education and sustainability initiatives shows strong potential for enhancing engagement, increasing awareness, and shaping long-term behavioral intentions, thereby demonstrating value beyond traditional instructional methods. Additional studies classified under the EFPB (Enhancing Food Education and Promoting Sustainable Behavior) category further support these findings, as summarized in Table 1.

4.2.3. Stimulating Appetite and Sensory Perception in VR

Beyond its applications in food choice and education, VR has also been employed to investigate whether digital food stimuli can evoke appetite and replicate sensory experiences. Recent studies have explored how visual realism, interactivity, and olfactory cues in VR influence food cravings, taste expectations, and sensory responses, especially in the absence of real food stimuli.
Investigating multi-sensory effects on appetite in VR, Harris et al. [52] tested how adding scent and interaction to a virtual cookie influenced cravings. Using an HTC Vive and a custom olfactory device, they found that the addition of a chocolate scent significantly increased the ‘Urge to Eat’ (p < 0.001) compared to a visual-only condition. Interaction alone was not a significant factor, but when combined with scent, it yielded the highest craving responses—though not significantly greater than scent alone—highlighting the central role of olfaction in creating believable appetitive experiences in VR.
Investigating the role of visual fidelity, Ramousse et al. [53] presented food models at seven different quality levels to participants using a Varjo XR3 headset. They found that higher visual quality significantly increased the desire to eat (p < 0.001), but the effect plateaued beyond a certain threshold. The study identified both a minimum quality level required to elicit appetite and a ceiling level beyond which further realism offered no additional benefit, thereby providing practical guidelines for designing effective virtual food stimuli.
Together, these studies suggest that VR can elicit robust appetitive and sensory responses through carefully calibrated visual and olfactory cues. These findings support the use of VR in digital sensory research, craving modulation, and the design of immersive interventions in food-related contexts. Additional research categorized under the SSV (Stimulating Sensory Perception and Appetite in VR) label further reinforces these observations, as summarized in Table 1.

4.2.4. Measuring Disgust, Bias, and Eating-Related Psychopathology

Beyond its use in consumer behavior and education, VR has also been explored as a tool for measuring emotional and cognitive responses to food-related stimuli. In particular, researchers have used VR to assess reactions such as disgust, implicit food-related biases, and traits associated with eating disorders. These studies highlight VR’s potential to offer ecologically valid yet controlled environments for investigating complex psychological phenomena that are challenging to study in real-world settings.
To test if virtual disgust could alter real-world behavior, Ammann et al. [54] had participants witness a virtual dog defecate a chocolate-like object before being offered a real piece of chocolate. The virtual disgust cue worked: the experimental group was significantly more likely to refuse the chocolate than a control group (26% vs. 4% refusal rate, p < 0.01). The study also found that this effect was mediated by the participants’ sense of physical presence, confirming that immersive disgust cues in VR can effectively influence tangible eating decisions.
Investigating food disgust in a clinical sample, Bektas et al. [55] used an Oculus Quest 2 to place individuals with anorexia nervosa in one of three virtual kitchen scenarios. They found that both trait and state food disgust were positively correlated with eating disorder severity (trait: rs = 0.45, p < 0.001). However, behavioral measures within VR, such as eye gaze and touching virtual food, showed inconsistent correlations with self-reported disgust—emerging only in specific scenarios—suggesting that while the psychological link is clear, the current VR paradigm did not fully capture its behavioral expression.
These studies demonstrate VR’s potential for eliciting and measuring food-related emotional responses, such as disgust, and for examining their associations with disordered eating behaviors. By combining immersive stimuli with behavioral tracking, VR presents a novel method for investigating food rejection and eating-related psychopathology within controlled yet ecologically valid environments. Additional investigations categorized under the MBE (Measuring Disgust, Bias, and Eating-related Psychopathology) category are summarized in Table 1.

4.3. Application of AR in the Food Industry

Distinct from the fully immersive environments of VR, AR enriches the user’s perception by superimposing computer-generated digital information—such as 3D models, text, or graphics—directly onto their view of the real world [40]. This seamless integration of virtual content with the physical environment, typically experienced through smartphones or tablets, makes AR a highly accessible and powerful tool for delivering context-aware information and interactive experiences at the precise moment of need [12].
In the food industry, this capability has been leveraged across several key domains: (1) Stimulating Consumer Behavior and Sensory Engagement by enhancing the appeal of real food products and their packaging; (2) Enhancing Nutrition and Sustainability Awareness by providing immediate access to complex nutritional and environmental data; and (3) Designing Intelligent and Personalized AR Food Systems by integrating AR with AI to create adaptive and customized experiences. The following subsections will review empirical studies that demonstrate how AR is being used to influence consumer perception, improve educational outcomes, and pave the way for more personalized food interactions.

4.3.1. Stimulating Consumer Behavior and Sensory Engagement Through AR

AR is increasingly being explored as a tool for influencing consumer behavior in food-related contexts. By enhancing sensory engagement, facilitating mental simulation, and evoking emotional responses, AR applications influence how consumers perceive, desire, and choose food. Recent studies have reported on AR’s potential to increase food appeal, guide healthier decisions, and enrich the consumer experience at the moment of interaction.
Investigating interactive marketing, Gu et al. [56] analyzed the effect of AR-enhanced takeaway packaging through a multi-study design. Across consumer surveys and experimental comparisons, they found that AR packaging—featuring interactive 3D content accessible via smartphone scanning—significantly increased trust, satisfaction, and purchase intention compared to standard packaging (p < 0.05). Their structural model further showed that AR functions as an effective marketing tool by enhancing perceptions of interactivity, vividness, and novelty, ultimately leading to more positive consumer evaluations.
Dong et al. [57] investigated how AR environments affect consumer responses to dairy, coconut-based, and mixed yogurts. Using a Microsoft HoloLens 2 headset, participants tasted real yogurts in three different settings: a standard sensory booth, an AR coconut beach scene, and an AR dairy farm scene. The results showed a significant interaction effect between the AR context and the yogurt type on overall liking and purchase intent (p < 0.05). Specifically, dairy and mixed yogurts were rated more favorably when experienced in the congruent dairy farm environment, demonstrating AR’s potential to actively shape food experiences by modifying the surrounding visual context.
Collectively, these studies demonstrate that AR can be strategically employed to shape food-related decisions by enhancing sensory engagement, emotional context, and environmental congruence. By modifying visual surroundings to align with product characteristics, AR offers novel opportunities to influence perception, increase product appeal, and guide consumer behavior in food consumption settings. Additional investigations categorized under the SCSA (Stimulating Consumer Behavior and Sensory Engagement through AR) category are summarized in Table 2.

4.3.2. Enhancing Nutrition and Sustainability Awareness with AR

In addition to shaping consumer behavior, AR has been applied to promote nutrition and sustainability awareness. Recent studies demonstrate that AR can support learning in nutrition, food labeling, and sustainable consumption by presenting information in an interactive and engaging format across both educational and everyday contexts. For instance, as shown in Figure 4c, AR can overlay nutritional information directly onto physical food items, enabling real-time, context-aware learning.
To improve nutrition literacy, Juan et al. [62] created an AR mobile app using Unity and Vuforia that guides users to nutritional labels on real packaged foods and helps them interpret the information. A study with 40 participants demonstrated the app’s strong educational impact, showing a significant increase in knowledge about carbohydrate information after use (p < 0.001), with average correct answers jumping from nearly zero to near-perfect. The work confirms AR’s potential for making complex food label data more accessible and understandable to consumers.
Integrating AR with generative AI, Capecchi et al. [63] developed ARFood, a serious game where middle school students receive feedback on virtual shopping from two ChatGPT 4o-powered characters (one for nutrition, one for sustainability). To ensure the feedback was educationally sound, the authors used a novel method: a RoBERTa classifier evaluated the AI responses against predefined goals, allowing for an iterative refinement of the AI prompts. This process successfully balanced the AI-generated feedback, proving a viable method for creating adaptive and educationally aligned AR learning tools.
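The classifier-in-the-loop refinement described for ARFood can be sketched as follows. Since the study’s fine-tuned RoBERTa judge is not reproduced here, a public sentiment classifier serves as a runnable stand-in, and the goal labels, threshold, and refinement loop are illustrative assumptions.

```python
# Hedged sketch of classifier-guided evaluation of LLM feedback, in the spirit
# of the ARFood pipeline; the public sentiment model below is only a runnable
# stand-in for the study's fine-tuned RoBERTa goal classifier.
from transformers import pipeline

judge = pipeline("text-classification",
                 model="distilbert-base-uncased-finetuned-sst-2-english")

def meets_goal(feedback: str, target: str = "POSITIVE",
               threshold: float = 0.8) -> bool:
    pred = judge(feedback)[0]  # {"label": ..., "score": ...}
    return pred["label"] == target and pred["score"] >= threshold

def refine(generate, prompt: str, max_rounds: int = 5) -> str:
    """Regenerate feedback until the judge approves (schematic loop)."""
    feedback = generate(prompt)
    for _ in range(max_rounds):
        if meets_goal(feedback):
            break
        prompt += "\nAlign the feedback more closely with the educational goal."
        feedback = generate(prompt)
    return feedback

# generate() would wrap the LLM call; a canned reply keeps the demo offline.
demo = lambda prompt: "Good pick: whole grains add fibre and balance the meal."
print(refine(demo, "Give nutrition feedback on the shopping cart."))
```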
Collectively, these studies suggest that AR technologies can effectively support nutrition initiatives and foster sustainable consumption behaviors. By delivering information in more interactive and accessible formats, AR opens new avenues for enhancing consumer education in both informal settings and structured learning environments. Additional research classified under the ENSA (Enhancing Nutrition and Sustainability Awareness with AR) category is summarized in Table 2.

4.3.3. Designing Intelligent and Personalized AR Food Systems

AR technologies are increasingly being integrated with AI and generative models to enable more intelligent and personalized food experiences. Recent studies have examined how AR systems can dynamically generate, modify, or adapt food-related content in real time, providing customized visual and interactive experiences beyond static representations.
Nakano et al. [64] developed DeepTaste, an AR system using GAN-based real-time food-to-food translation (e.g., somen → ramen, rice → curry/fried rice). Participants ate real food while viewing the visually transformed version through an HTC Vive Pro. The altered appearance significantly shifted perceived taste and texture (p < 0.05), though effects varied by food type and familiarity. This demonstrates the potential of generative AR to customize sensory experiences without altering the actual food.
Han et al. [65] proposed an immersive AR narrative framework to promote sustainable food behaviors through participatory co-creation. In workshops with Generation Z consumers, participants created storyboards for sustainable food narratives, which were later evaluated by AR developers for feasibility on devices such as the Microsoft HoloLens 2. Findings showed that effective narratives must be relatable, emotionally engaging, and convey a clear, empowering call to action. The authors conclude with a novel framework that incorporates back-loop validation, emphasizing that tailoring story paths to users’ personal contexts is crucial for fostering engagement and sustainable attitudes.
These studies illustrate how the integration of AR with intelligent systems and personalized content generation can enable more immersive, emotionally engaging, and behaviorally impactful food experiences. By enabling real-time customization and narrative-driven interactions, AR technologies offer new pathways for tailoring food-related interventions to individual users. Additional applications falling under the DPAF (Designing Intelligent and Personalized AR Food Systems) category are presented in Table 2.

4.4. Applications of MR in the Food Industry

Positioned between AR and VR, MR creates a hybrid environment where physical and digital objects interact in real time [66]. MR technologies are increasingly being explored in the food industry to enhance both the dining experience and the scientific investigation of food-related behaviors. As illustrated in Figure 4d, MR can merge physical elements such as tables with immersive virtual scenes, enabling hybrid environments for both realistic interaction and controlled experimentation. By blending real-world actions, such as eating, with controllable virtual contexts, MR enables new forms of emotional engagement, social interaction, and sensory observation that were previously limited by the constraints of traditional laboratory setups.
Low et al. [67] investigated the impact of context on emotional responses to snack consumption across three settings: a traditional sensory booth, a real-world café, and an MR simulation of that café. Using a Microsoft HoloLens headset displaying a 360-degree video of the real café, the MR condition allowed participants to see the real snack they were eating while immersed in the virtual context. The study found that for the majority of consumers, the emotional profiles elicited in the MR café were not significantly different from those in the real café. In contrast, emotional responses recorded in the sensory booth differed significantly from both the real and MR environments, demonstrating that MR can replicate real-world emotional responses with high ecological validity for affective food research.
Similarly, Fujii et al. [68] introduced an MR-based co-eating system where a NAO humanoid robot, synchronized with a HoloLens display, appeared to pick up and eat virtual food. Compared with a talking-only control condition, participants in the robot-eating condition reported greater enjoyment (p = 0.0238), enhanced perceived taste (p = 0.0480), and consumed more food (1.59 vs. 1.31 chocolates, p = 0.00441). These findings demonstrate that MR-enabled shared eating behaviors can enhance dining experiences and modulate consumption.
Beyond emotional and behavioral outcomes, MR technologies have also been applied to improve food monitoring and sensory research. Nair and Fernandez [69] proposed a cost-effective monocular 3D reconstruction framework for fresh produce, employing a GLPN-based depth estimation model to generate detailed point clouds that were rendered as interactive digital twins on Microsoft HoloLens 2. The system successfully tracked bruise progression on apples over 10 days, quantifying its expansion from 3.32% to 7.72% of the surface area. This work illustrates the potential of MR-enabled digital twins for real-time food quality monitoring and defect detection.
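A minimal sketch of the depth-estimation stage of such a pipeline is shown below, assuming an off-the-shelf GLPN checkpoint from the Hugging Face hub; the image file, pinhole focal length, and back-projection constants are illustrative assumptions rather than the authors’ calibrated setup.

```python
# Sketch of monocular depth estimation with a GLPN model, followed by a naive
# pinhole back-projection to a point cloud; constants are placeholders.
import numpy as np
import torch
from PIL import Image
from transformers import GLPNForDepthEstimation, GLPNImageProcessor

processor = GLPNImageProcessor.from_pretrained("vinvino02/glpn-nyu")
model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-nyu")

image = Image.open("apple.jpg")  # hypothetical produce photo
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    depth = model(**inputs).predicted_depth[0].numpy()  # (H, W) depth map

# Back-project each pixel assuming a pinhole camera with focal length f.
h, w = depth.shape
f = 500.0  # focal length in pixels; placeholder, not a calibrated value
u, v = np.meshgrid(np.arange(w), np.arange(h))
x = (u - w / 2) * depth / f
y = (v - h / 2) * depth / f
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
print(points.shape)  # (H*W, 3) point cloud for downstream meshing/rendering
```

Meshing the cloud and streaming it to a HoloLens 2 as an interactive digital twin are device-specific steps omitted here.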
Collectively, these studies highlight MR’s potential to advance both consumer-oriented applications and scientific investigations within the food industry. By combining immersive realism with technical precision, MR technologies offer promising new approaches for enhancing emotional satisfaction, improving behavioral measurement, and increasing the accuracy of food quality assessment.

4.5. Synthesis and Comparative Analysis of XR Modalities

The preceding sections have detailed the individual applications of VR, AR, and MR within the food industry. This section now provides a comparative synthesis, analyzing the distinct strengths, limitations, and optimal use cases for each modality based on the reviewed empirical evidence.
Comparative findings across XR modalities highlight modality-specific advantages and constraints in food-related applications. As summarized in Table 3, studies on the eating experience reveal clear distinctions. Oliver and Hollis [44] showed that VR environments using the HTC Vive and Unity increased presence (p < 0.006) and arousal (p = 0.02), yet did not alter intake (p = 0.98) or sensory ratings, indicating strong experimental control but limited impact on consumption. AR applications, as reported by Ghavamian et al. [70], overlaid chromatic filters on real beverages through Meta Quest 3, demonstrating that a pink filter reduced bitterness perception, although the effect reversed when combined with auditory cues (p = 0.044). MR investigations by Long et al. [71] used passthrough in Meta Quest Pro to merge real food handling with a virtual café scene, with expert evaluations rating ecological validity at 72.6/100—higher than laboratory booths but still lower than real cafés.
Evidence from supermarket choice studies further illustrates these contrasts. Waterlander et al. [72] validated a VR-based 3D supermarket built in Unity, finding close correspondence between virtual purchases and real grocery receipts, although significant deviations occurred in categories such as dairy (+6.5%, p < 0.001). By comparison, AR interventions tested by Pini et al. [73] employed Microsoft HoloLens overlays to present nutritional information during shopping tasks. Results showed increased selection of high-nutrition products (p < 0.001) and longer exploration times (p = 0.02), demonstrating AR’s capacity to guide attention and shift choices, albeit with limited ecological realism due to the absence of real pricing contexts.
In the domain of nutrition education, the findings in Table 3 also show notable divergence between VR and AR. Karkar et al. [74] applied Oculus DK2 and Vizard in a VR breakfast preparation task, reporting quiz accuracy of 87%, similar to narrative (88%) and paper-based methods (85%), though task completion time was substantially longer (112 s vs. 38 s). In contrast, Kalimuthu et al. [75] implemented an eight-week AR curriculum using Vuforia-based mobile applications, which significantly improved adolescents’ knowledge (+3.82), attitudes (+1.88), and behaviors (+1.04), all with p < 0.001.
Taken together, the results outlined in Table 3 indicate that VR provides immersive control environments for behavioral validation, AR supports context-rich decision-making and scalable education, and MR offers hybrid approaches that combine immersion with ecological validity.
Collectively, XR applications in the food industry highlight not only their potential for enhancing consumer experience, education, and sensory engagement but also their role as preparatory platforms for real-world implementation. Immersive environments allow researchers and practitioners to design, simulate, and validate scenarios under safe and controlled conditions. Yet, these benefits reach their full impact when extended into physical systems—particularly robotics—which can execute and refine the very processes conceived in XR. This convergence sets the stage for the next section, where robotics is examined as the natural continuation of XR-enabled innovation.

5. Overview of Robotics in Food Processing

5.1. Roles and Types of Robots in Food Processing

The food processing industry has traditionally maintained a labor-intensive structure, but it is now undergoing structural transformation driven by a combination of technological, economic, and social factors. At the center of this shift lie robotics and automation technologies, which are increasingly recognized not merely as tools for improving efficiency, but as essential components for ensuring sustainability and maintaining competitiveness in the industry. This technological evolution, from simple mechanical systems to intelligent robotics, is shown in Figure 5.
This section first examines how robotics is currently being applied across food production and processing, focusing on their necessity, types, and technological characteristics. Building on the discussion of XR in Section 4, it further considers how robotics, when combined with immersive and mixed reality systems, can create additional advantages in areas such as training, simulation, and system design. These integrative aspects are addressed in detail in Section 5.4, where XR–robotic convergence is explored through the lens of digital twin technologies.

5.1.1. The Necessity of Automation

The food processing industry has long faced chronic labor shortages due to the repetitive, ergonomically demanding, and relatively low-wage nature of many of its tasks [76]. This issue is particularly acute in processes such as meat processing, packaging, and palletizing, which are labor-intensive and physically taxing [77]. The COVID-19 pandemic further exposed the vulnerability of this labor dependency, as factory shutdowns and workforce shortages underscored the critical role of automation as a risk management strategy for ensuring operational continuity. Rising labor costs have also become a strong economic driver for automation adoption [78].
Minimizing human contact during food production is crucial for reducing the risk of microbial contamination and enhancing food safety. Robots can be constructed from food-grade materials such as stainless steel and are designed to withstand strong cleaning agents and high-pressure washdowns, making them well-suited for compliance with stringent hygiene regulations [79].
Robots can operate continuously 24 h a day, seven days a week, maximizing productivity. Furthermore, they eliminate errors caused by human fatigue or mistakes, thereby improving product quality and consistency. Precision control of processes also contributes to reduced raw material waste and greater overall efficiency [80].

5.1.2. Components and Technological Characteristics of Food Automation Solutions

Food automation solutions are not composed of a single robotic arm, but rather consist of complex systems integrating various types of robotic platforms optimized for specific applications, sophisticated end-of-arm tooling (EOAT) tailored to the physical characteristics of food products, and software systems that coordinate and control their interactions [81]. Robots, which are the core technology of these systems, can generally be defined as programmable self-controlling devices composed of electronic, electrical, or mechanical components [82].
Robots used in the food industry can be classified into several types. Articulated robots feature kinematic structures similar to the human upper limb, offering high degrees of freedom and a broad working envelope. These characteristics make them widely applicable in processes such as meat processing, packaging, and palletizing [83].
Parallel robots are composed of three to four linked arms connected to a single mobile platform, enabling high-speed and high-precision movements within a limited hemispherical workspace. Their dynamic performance makes them ideal for high-speed pick-and-place operations in material handling and primary packaging processes [84].
Cartesian robots operate along linear X, Y, and Z axes, offering high precision and payload capacity due to their simple kinematic design. Although they lack flexibility in workspace coverage, their structural rigidity and cost-effectiveness make them suitable for downstream applications such as packaging and palletizing [85].
Collaborative robots (cobots) are designed to work safely alongside human operators without the need for safety fencing. Key features include intuitive programming through lead-through teaching and graphical user interfaces, fast installation and redeployment, and high operational flexibility. These advantages make cobots particularly well-suited for small-batch, high-mix production environments and for small- to medium-sized enterprises facing capital and space constraints [86]. The key features, suitability for the food industry, and potential for XR integration for each robot type are summarized in Table 4.
The operational effectiveness of robotic systems heavily depends on the performance of their EOAT, especially gripper systems. Due to the irregular shapes and delicate physical properties of many food products, gripper technology is considered a core component in food automation systems. A variety of gripping mechanisms are deployed based on the specific physical characteristics of the food being handled [91]. The specific details regarding each gripper’s operating method, applicable food textures, hygiene considerations, and XR integration capabilities are summarized in Table 5.
Figure 6 illustrates various types of robotic end-effectors categorized by their respective contact positions on food products, highlighting the diversity of gripping strategies required for different handling scenarios.

5.1.3. Key Advantages and Limitations of Robotics in Food Processing

Robotic automation in food processing offers significant advantages. Robots can operate continuously 24/7, performing tasks with greater precision and speed than humans, which has been reported to increase productivity by over 25% [95]. Robotic automation also eliminates human errors caused by fatigue, ensuring consistent product quality, and enhances workplace safety by removing workers from hazardous and repetitive tasks [81]. Additionally, minimizing human contact reduces the risk of microbial contamination, enhancing food safety and ensuring compliance with strict hygiene regulations [96].
However, adopting robotics in this sector faces significant challenges. The primary limitation stems from the nature of food itself. Food is often fragile, irregularly shaped, and variable in texture, making the design of a universal and delicate end-effector a major technical hurdle [81]. Furthermore, the high initial investment in robotic systems presents a significant economic barrier, particularly for small and medium-sized enterprises [97]. All robotic equipment must comply with strict sanitary design standards and require easily cleanable, corrosion-resistant, seamless surfaces to prevent contamination [98].

5.2. Robotic Selection Methodology by Bader and Rahimifard

Section 5.1 examined the pressing challenges faced by the food industry—such as chronic labor shortages and the need for improved hygiene and productivity—and highlighted the necessity of automation as a strategic response. It also introduced various robotic technologies, including articulated and parallel robots, along with EOAT as a key performance determinant, thereby establishing a technical foundation for potential solutions.
However, it also stressed that the decision to adopt robotics is not straightforward. The significant advantages in terms of productivity and safety are offset by limitations. These include high initial investment costs that act as barriers for small and medium-sized enterprises, technical difficulties in handling delicate and variable food products, a shortage of skilled engineers with interdisciplinary knowledge, and strict hygienic design requirements. Consequently, the decision to adopt robotic automation becomes a complex task that requires a comprehensive evaluation of various economic, technical, and operational factors, going beyond a simple technology selection. Therefore, the need for a structured framework emerges to systematically analyze these multifaceted issues and derive the most suitable solution for a specific process.

Core Structure of the FIRM Methodology

Bader and Rahimifard developed the FIRM (Food Industrial Robot Methodology), a four-stage framework designed to systematically analyze food characteristics and production requirements in order to select appropriate robotic configurations and end-effectors. The first step, the food characteristic definition phase, establishes the basic profile of the food by defining three key elements. Food type classifies product ingredients into major categories such as meat and poultry, vegetables and fruits, seafood, baked goods and confectionery, dairy products, and others. Food status is defined as raw, cooked, or frozen. Food form is defined as either whole or segmented. The second step analyzes the physical characteristics of the food in greater depth based on the information obtained in Step 1 and classifies it into one of six Food Variety (FV) groups. This step is crucial for determining the design requirements of the end-effector, which is the most technically challenging part of the robotic system. Classification is based on the food’s rigidity and deformability: rigidity is categorized as rigid, semi-rigid, or non-rigid according to the food’s ability to maintain shape under pressure, while deformability is classified as deformable or non-deformable depending on whether the structure deforms under external forces.
The third step specifies the types of tasks the robot must perform. Tasks are divided into three categories: material handling, assembly, and finishing processes. Material handling involves moving food from one location to another, including picking, sorting, and machine loading. Assembly is the process of combining two or more food items into a single product (e.g., sandwich making). The finishing process includes primary, secondary, and tertiary packaging as well as palletizing operations after the product is completed. The final step synthesizes the analysis results from the preceding three steps to identify the most suitable physical specification for the robot. At this step, two key elements are determined: the robot body and the gripper mechanism. The robot body is selected from among articulated, parallel, and Cartesian robots, considering factors such as workspace, speed, and payload. The gripper mechanism is selected from options such as pincer-type, enclosing-type, pinning-type, pneumatic, and freezing grippers, based on the physical characteristics of the food and the type of task. The decision-making process, which was presented in a dispersed manner throughout the original paper, is visualized here as a single integrated decision-making framework to aid the reader’s intuitive understanding, as shown in Figure 7. A simplified encoding of this selection logic is sketched below.
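As a schematic illustration only, the selection flow might be encoded as follows; the mapping rules are deliberately simplified stand-ins for the full FIRM decision process, and all class and function names are illustrative.

```python
# Simplified, illustrative encoding of a FIRM-style selection flow; the
# rules below are stand-ins, not the complete published methodology.
from dataclasses import dataclass

@dataclass
class FoodProfile:
    rigidity: str      # "rigid" | "semi-rigid" | "non-rigid"
    deformable: bool
    task: str          # "material handling" | "assembly" | "finishing"

def select_robot_body(task: str) -> str:
    # Toy rule: high-speed handling favors parallel kinematics, finishing and
    # palletizing favor Cartesian frames, everything else an articulated arm.
    if task == "material handling":
        return "parallel"
    if task == "finishing":
        return "cartesian"
    return "articulated"

def select_gripper(p: FoodProfile) -> str:
    if p.rigidity == "rigid" and not p.deformable:
        return "pincer-type"
    if p.rigidity == "semi-rigid" and not p.deformable:
        return "enclosing-type"  # cf. the FV4 fillet-steak case below
    return "pneumatic" if p.deformable else "pinning-type"

steak = FoodProfile(rigidity="semi-rigid", deformable=False, task="finishing")
print(select_robot_body(steak.task), select_gripper(steak))
```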
The FIRM methodology has demonstrated practical value, as shown in a UK meat processing case where it successfully selected an enclosing-type gripper for packaging a 170 g fillet steak classified as FV4 (semi-rigid, non-deformable). However, as a structured decision-making tool focused primarily on traditional food processing tasks, the methodology gives limited consideration to rapidly evolving non-traditional robotic applications.

5.3. Expanding Application Domains and Emerging Challenges

Building upon the structured approach outlined in Section 5.2 presents a comprehensive approach for selecting appropriate robotic solutions by systematically analyzing the physical characteristics of food products and processing requirements. This framework lays the foundation for extending food robotics applications beyond traditional domains such as pick-and-place operations and meat processing. However, with the rapid advancement of robotic technologies, the food industry is now witnessing the emergence of innovative and non-traditional application areas that go beyond conventional handling and packaging tasks.

5.3.1. Non-Traditional Applications and Technological Demands

Derossi et al. [99] conducted a comprehensive review highlighting how the application of robotics is expanding beyond traditional boundaries into areas such as mixing, cleaning, precision cooking, 3D printing, and gastronomy—domains that had previously received limited attention. These non-traditional applications share a common demand for advanced precision control, real-time environmental awareness, and complex physical interaction beyond simple repetitive tasks.
First, robotic cleaning holds the potential to dramatically improve hygiene levels in food processing facilities [100]. Robots can autonomously navigate complex 3D structures, sloped surfaces, and stairways that are difficult for human workers to access, performing cleaning and disinfection tasks [101]. For instance, a robot developed by the Fraunhofer Institute uses UV sensors to detect contamination types, thickness, and moisture levels, adjusting disinfectant type and concentration in real time. Such intelligent systems demonstrate the potential for precision cleaning, which can reduce costs and environmental impact associated with excessive chemical usage [99]. This application introduces new technological challenges, requiring robots not only to follow predefined paths but also to perceive and interpret their environment to optimize operations.
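As a rough illustration of this sensing-driven behavior, the sketch below maps sensed contamination attributes to cleaning parameters. The soil categories, agents, and scaling factors are hypothetical stand-ins inspired by the Fraunhofer example, not its actual control rules.

```python
def plan_cleaning(contamination: str, thickness_mm: float, moisture: float) -> dict:
    """Choose a cleaning agent and concentration from sensed soil properties.
    All categories and coefficients here are illustrative assumptions."""
    base = {"protein": ("alkaline", 1.0),
            "fat": ("alkaline", 1.5),
            "mineral": ("acidic", 1.2)}
    agent, concentration = base.get(contamination, ("neutral", 0.5))
    concentration *= 1.0 + 0.2 * thickness_mm   # thicker soil -> stronger dose
    if moisture > 0.5:                           # wet soil rinses more easily
        concentration *= 0.8
    return {"agent": agent, "concentration_pct": round(concentration, 2)}

print(plan_cleaning("fat", 2.0, 0.3))
# -> {'agent': 'alkaline', 'concentration_pct': 2.1}
```

The point of such a mapping is that dosing is derived from perception rather than fixed in advance, which is what allows chemical usage to be reduced to the minimum the sensed soil actually requires.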
Second, in the realm of precision food manufacturing, robots are driving innovation at the process level. Unlike the standardized motions of conventional mixers, the unconstrained 3D movements of robotic arms can enable new mixing techniques such as chaotic mixing, maximizing ingredient uniformity and allowing precise control over dough rheology [102]. A study using a Delta robot for chaotic mixing showed that it achieved higher homogeneity in 20% less time than traditional circular stirring methods [103]. Additionally, by integrating robots with unconventional heat sources like lasers, precision cooking becomes possible—delivering targeted thermal energy to specific locations on the food surface to digitally design Maillard reactions [104], thereby programming flavor, aroma, and color [105]. These applications demand that robots move beyond simple position control to a level where they can understand and manipulate the internal properties and chemical reactions of food.
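The appeal of chaotic mixing is that a robotic arm's end effector can follow aperiodic, space-filling trajectories rather than fixed circles. A minimal way to generate such a path is to integrate a chaotic dynamical system and use its states as stirring waypoints; the sketch below does this with the Lorenz system, scaled into a hypothetical mixing-bowl workspace. It illustrates the trajectory-generation idea only, not the control scheme of Kalayci et al. [103].

```python
import numpy as np

def lorenz_path(n_steps=5000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with forward Euler; its aperiodic orbit
    serves as a chaotic stirring trajectory."""
    xyz = np.empty((n_steps, 3))
    x, y, z = 1.0, 1.0, 1.0
    for i in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        xyz[i] = (x, y, z)
    return xyz

def scale_to_workspace(path, radius_mm=80.0, depth_mm=60.0):
    """Rescale the attractor into a cylindrical bowl workspace (hypothetical
    dimensions) so the waypoints become feasible end-effector targets."""
    p = path - path.mean(axis=0)
    p[:, :2] *= radius_mm / np.abs(p[:, :2]).max()
    p[:, 2] *= depth_mm / np.abs(p[:, 2]).max()
    return p

waypoints = scale_to_workspace(lorenz_path())
print(waypoints.shape)  # (5000, 3) stirring targets in mm
```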
Third, digital gastronomy and additive manufacturing represent cases where robotics is extending into the domain of creativity. By equipping robotic arms with 3D printing extruders, it becomes possible to create meals tailored to individual nutritional needs or to craft intricate food designs beyond human capability [99]. An illustrative example of this creative application is the robotic scoring of dough, where programmable cutting paths enable consistent and intricate surface patterns, as shown in Figure 8. Moreover, robots are advancing to replicate complex human cooking skills through Learning from Demonstration (LfD), such as making pancakes [106], stir-frying in Chinese cuisine [107], or even adjusting seasoning in scrambled eggs using salinity sensors [108]. These examples signify a leap from pre-programmed tasks to robots that can learn from human demonstrations, interpret sensor feedback, and make real-time decisions during culinary execution.
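Sensor-in-the-loop cooking of the kind demonstrated with the salinity sensor [108] is, at its core, a feedback control loop: measure, compare against a target, act. The sketch below expresses that loop as a simple proportional controller around a simulated salinity reading; the sensor model, plant response, target value, and gain are hypothetical stand-ins, not the published system of Sochacki et al. [108].

```python
import random

TARGET_SALINITY = 0.8   # hypothetical target reading (arbitrary units)
GAIN = 5.0              # proportional gain: grams of salt per unit of error

def read_salinity(true_salinity: float) -> float:
    """Stand-in for a salinity probe: true value plus measurement noise."""
    return true_salinity + random.gauss(0.0, 0.02)

def season_and_stir(true_salinity: float, grams_salt: float) -> float:
    """Stand-in plant model: added salt raises salinity after stirring."""
    return true_salinity + 0.1 * grams_salt

salinity = 0.2  # under-seasoned starting point
for step in range(20):
    error = TARGET_SALINITY - read_salinity(salinity)
    if abs(error) < 0.05:          # close enough to target: stop seasoning
        break
    dose = max(0.0, GAIN * error)  # the robot can add salt but never remove it
    salinity = season_and_stir(salinity, dose)
    print(f"step {step}: dosed {dose:.2f} g, salinity now {salinity:.2f}")
```

The one-way actuator constraint (salt can be added but not removed) is exactly the kind of asymmetry that makes real-time sensing valuable in culinary robotics: overshooting is unrecoverable, so the controller must approach the target incrementally.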

5.3.2. Implications of Expanded Application Areas

The emergence of these non-traditional robotic applications is fundamentally transforming the roles and requirements of robotic systems in the food industry [7,99]. One major implication is a significant increase in the cost and risk of developing and validating advanced robotic systems. In environments involving complex physical interactions and numerous variables, real-world trial-and-error approaches are limited in both economic efficiency and operational safety. Addressing these challenges requires new technological approaches that enable system design in virtual environments, simulation-based verification, and safe transfer to real-world deployment. Such needs can be met by integrating XR environments with digital twin technologies, as discussed in the following section.

5.4. Integrating XR and Robotic Digital Twins as a New Paradigm for Food Systems

As explored in Section 5.3, the application domains of food robotics have expanded into areas such as precision cleaning, chaotic mixing, and digital gastronomy, driving the evolution of robotic systems from simple repetitive task tools to intelligent systems capable of complex physical interactions and real-time decision-making. This shift highlights the need for a new approach that goes beyond conventional selection methodologies to encompass system design, verification, and optimization.
In traditional food robotics development, linear processes involving physical prototyping and on-site testing have been the norm [109]. However, for modern, highly complex food robotic systems requiring multivariable optimization, such approaches expose inherent limitations including high costs, long development timelines, and safety risks. These concerns are particularly critical when robots interact directly with food, where failed experiments can compromise food safety and cause production halts [110].
To overcome these challenges, the integration of Digital Twin (DT) and XR technologies has emerged as a promising new paradigm. According to Abdurrahman & Ferrari [109], a digital twin is “a living model that looks and behaves like a physical object or system and can be continuously updated with data from the operating environment.” This technology enables the creation of fully virtual replicas of food robotic systems and provides immersive XR interfaces through which designers and operators can design, test, and optimize systems without real-world constraints [111,112]. The architecture of such an integrated system, which outlines the interaction between the physical, digital twin, and XR interface layers, is shown in Figure 9.
In the context of food robotics, this integration offers transformative solutions such as virtual commissioning, real-time control, predictive maintenance, and the optimization of complex robot–food interactions [109].
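The three layers outlined in Figure 9 can be read as a data-flow contract: the physical robot streams state, the digital twin mirrors and simulates it, and the XR layer renders the twin and returns operator commands. The skeleton below sketches that contract in Python; the class and method names are our illustrative choices, not an API from the cited works [109,111,112].

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RobotState:
    joint_angles: List[float]
    gripper_closed: bool

class PhysicalLayer:
    """Stand-in for the real robot: publishes sensor state, accepts commands."""
    def __init__(self) -> None:
        self.state = RobotState([0.0] * 6, False)

    def read_sensors(self) -> RobotState:
        return self.state

    def apply(self, command: dict) -> None:
        self.state.joint_angles = command.get("joints", self.state.joint_angles)

class DigitalTwin:
    """A 'living model' continuously updated with operating data [109]."""
    def __init__(self) -> None:
        self.mirror: Optional[RobotState] = None
        self.history: List[RobotState] = []

    def sync(self, state: RobotState) -> None:
        self.mirror = state
        self.history.append(state)   # accumulated data for predictive maintenance

    def validate(self, command: dict) -> bool:
        # Predictive task validation before real-world execution
        # (here only a joint-limit check; a real twin would simulate physics).
        return all(-3.14 <= a <= 3.14 for a in command.get("joints", []))

class XRInterface:
    """Immersive front end: renders the twin and collects operator intent."""
    def render(self, twin: DigitalTwin) -> None:
        print("XR view of twin state:", twin.mirror)

    def operator_command(self) -> dict:
        return {"joints": [0.1, -0.2, 0.3, 0.0, 0.0, 0.0]}

# One cycle of the physical -> twin -> XR -> physical loop.
robot, twin, xr = PhysicalLayer(), DigitalTwin(), XRInterface()
twin.sync(robot.read_sensors())     # physical layer streams state to the twin
xr.render(twin)                     # XR layer visualizes the twin
command = xr.operator_command()     # operator acts through the XR interface
if twin.validate(command):          # verify in the twin before deployment
    robot.apply(command)
```

The essential design point is that the twin sits between intent and execution: every command is checked against the virtual replica before it reaches hardware, which is what enables virtual commissioning and safe experimentation.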

5.4.1. Enhancing Simulation, Control, and Training Through XR–Robotic Twins

Applying XR to robotic programming and operator training offers the potential to provide highly effective learning environments [113]. An early example of this approach is the VR-based robot control system proposed by Miner & Stansfield [114] in 1994, which allowed operators to intuitively control and train complex robotic systems through immersive virtual environments and voice commands. Later, Burdea [115] systematically analyzed the synergistic relationship between VR and robotics, demonstrating how VR could contribute not only to CAD design, robot programming, and factory simulation in manufacturing but also to overcoming poor visual feedback and communication delays in teleoperation tasks.
A practical implementation in educational settings is the work of Crespo et al. [116], who developed a VR-based offline programming system for Mitsubishi robots using the Oculus Rift and Unity 3D. Their system allowed students to program robots in a virtual environment and then transfer the code to real robots for execution. It also incorporated the A* (A-star) algorithm, which finds the minimum-cost path from a start point to a goal point [117], to generate collision-free paths. The results showed that students using the VR simulator completed tasks more quickly than those using traditional trial-and-error methods and gained a faster understanding of robotic joint structures and spatial parameters. An example of such an implementation is shown in Figure 10, which presents both the physical Mitsubishi Movemaster RV-M1 robot and its corresponding 3D model used in the VR simulation.
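For readers unfamiliar with A*, the sketch below gives a minimal grid-based version of the idea referenced above: repeatedly expand the node with the lowest f = g + h, where g is the cost accumulated so far and h an admissible heuristic (here, Manhattan distance). It illustrates the algorithm generically, not the specific implementation of Crespo et al. [116].

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 2D occupancy grid (0 = free, 1 = obstacle).
    Returns the minimum-cost path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    best_g = {start: 0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nb, float("inf")):
                    best_g[nb] = ng
                    heapq.heappush(open_set, (ng + h(nb), ng, nb, path + [nb]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # path routed around the obstacle row
```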

5.4.2. XR and Robotic Digital Twin Integration in the Food Industry

Recent developments in XR, robotics, and DT technologies are driving a new wave of innovation in the food industry. By integrating immersive visualization, real-time data feedback, and robotic control into unified systems, XR–robotic digital twins offer powerful tools for improving efficiency, accuracy, and interactivity across various food production environments.
A leading example is the immersive virtual reality simulator developed by González de Cosío Barrón et al. [118], which replicates robotic milk production processes using a digital twin of the DeLaval VMS system. Built with Unity 3D and deployed on HTC Vive, the system provides real-time simulation of cow behavior, robot operation, and milk yield analytics through a user-friendly dashboard. This approach not only enhances skill acquisition and process understanding but also enables remote training and system monitoring in dairy automation.
Tian et al. [119] proposed a reinforcement learning-based training framework for a fruit-picking robotic arm using a Unity-based digital twin. A six-degree-of-freedom arm mounted on an automated guided vehicle (AGV) was trained in a virtual orchard using the Unity ML-Agents toolkit. The task was formulated as a Markov Decision Process (MDP) so that the robot could learn optimal picking trajectories through trial and error, with virtual apples repositioned randomly to support multi-scenario learning. Although XR hardware was not used, the study shows that immersive digital twins can effectively support robotic path optimization and low-cost training in smart agriculture.
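The MDP framing used here amounts to defining states, actions, and a reward that encourages the end effector to approach and grasp the fruit. The minimal, self-contained environment below illustrates that formulation; the state encoding, reward shaping, and grid dimensions are our hypothetical simplifications, not the paper's Unity ML-Agents setup.

```python
import random

class PickingEnv:
    """Toy MDP for a picking arm: state = end-effector and apple positions on an
    integer grid, actions = axis-aligned moves, reward = progress toward the apple."""
    ACTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    def reset(self):
        self.effector = [0, 0, 0]
        # Randomized apple position, echoing the multi-scenario repositioning.
        self.apple = [random.randint(2, 5) for _ in range(3)]
        return tuple(self.effector + self.apple)

    def step(self, action_idx):
        before = self._distance()
        for axis, delta in enumerate(self.ACTIONS[action_idx]):
            self.effector[axis] += delta
        after = self._distance()
        done = after == 0                                  # reached the apple: grasp
        reward = 10.0 if done else float(before - after)   # shaped progress reward
        return tuple(self.effector + self.apple), reward, done

    def _distance(self):
        return sum(abs(e - a) for e, a in zip(self.effector, self.apple))

# Random-policy rollout; a learner (e.g., Q-learning or PPO) would replace this.
env = PickingEnv()
state, total = env.reset(), 0.0
for _ in range(500):
    state, reward, done = env.step(random.randrange(len(PickingEnv.ACTIONS)))
    total += reward
    if done:
        break
print("episode return:", total)
```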
Singh et al. [120] developed a ROS–Gazebo-based digital twin system for automated strawberry harvesting using a mobile manipulator named MARTA, which combines a differential drive base with a telescopic-link arm. The system integrates a super-resolution-enhanced YOLOv9-GLEAN model for robust fruit detection and employs RGB-D sensing with visual servoing for accurate grasping. Simulation results showed high detection accuracy (Precision: 0.996; Recall: 0.991), highlighting the effectiveness of combining deep learning with digital twin-driven manipulation in precision agriculture.
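For reference, precision and recall summarize detector performance as TP/(TP + FP) and TP/(TP + FN), respectively. The snippet below shows the computation on hypothetical detection counts chosen only to land near the values reported above.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts, not the study's actual confusion matrix.
p, r = precision_recall(tp=991, fp=4, fn=9)
print(f"precision = {p:.3f}, recall = {r:.3f}")  # precision = 0.996, recall = 0.991
```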
Collectively, these studies illustrate a growing convergence of XR, robotics, and digital twins in food-specific environments. They demonstrate applications ranging from harvesting and dairy automation to food processing and training, underscoring how immersive digital twins are shaping the next generation of intelligent, interactive food systems.

6. Conclusions

This review has examined the convergence of XR technologies and robotics in the food industry, synthesizing empirical applications, technological advancements, and methodological frameworks. Unlike prior reviews that have addressed XR and robotics separately, this work uniquely highlights their integration through digital twin systems, offering a conceptual bridge between immersive simulation and intelligent automation. By combining a systematic literature review with a large-scale keyword network analysis, it provides both qualitative and computational insights, uncovering emerging research clusters that inform the future trajectory of this interdisciplinary field.
The unique strength of XR–robotics convergence lies in its capacity to combine the immersive, intuitive, and safe simulation environments of XR with the precision, repeatability, and scalability of robotics. This integration enables risk-free training and experimentation, predictive validation of robotic tasks before real-world deployment, and more efficient optimization of complex food processes. In parallel, it enhances human–robot interaction by providing depth perception, haptic cues, and intuitive control, thereby lowering barriers to adoption in industrial settings. These synergies illustrate why XR–robotic integration is not merely additive but fundamentally transformative for the food sector.
At the same time, the scope and methodology of this review entail inherent limitations. The literature search was restricted to publications indexed in Google Scholar and to empirical studies, which may have excluded relevant theoretical or conceptual contributions. Moreover, while the review classifies application areas across VR, AR, and MR, it does not equally cover all subfields—particularly those still underexplored, such as MR-enabled industrial training or robotics for precision gastronomy. These limitations call for cautious interpretation of the findings and highlight the need for broader evidence synthesis in future work.
From a practical perspective, XR–robotics integration offers transformative opportunities but faces significant hurdles. High hardware costs, device weight, and usability concerns limit accessibility in industrial contexts [121]. Additional challenges include the expense of producing high-quality XR content, persistent motion sickness, and fatigue, which can hinder long-term adoption. These barriers emphasize the necessity of continued refinement in hardware design, ergonomic considerations, and user-centered approaches to ensure scalability and inclusivity.
Future research must move beyond generic aspirations and prioritize actionable strategies. First, addressing technical bottlenecks—such as interoperability standards, sensor–system integration, and real-time data synchronization—remains critical for developing robust XR–robotic ecosystems. Second, cross-disciplinary collaboration between food scientists, engineers, and human–computer interaction experts should be systematically fostered to align technological innovation with industry-specific needs. Third, the establishment of shared data collection and validation frameworks will be essential to ensure reproducibility, reliability, and trust across both academic and industrial domains.
Taken together, the unique value of this review lies in reframing XR not merely as a set of digital tools but as a paradigm shift in the human–food–environment relationship. By integrating immersive experience, intelligent automation, and sustainability-oriented design, XR–robotic systems hold the potential to redefine food production, distribution, and consumption. Future research should pursue this integration with three guiding principles: safety as the priority, experience as the core, and sustainability as the compass. Only by aligning technological empowerment with human-centered values can the food industry fully realize the transformative promise of XR and robotics.

Author Contributions

Conceptualization, S.W., Y.K. and S.K.; methodology, S.K.; writing—original draft preparation, S.W. and Y.K.; writing—review and editing, S.K.; visualization, S.W. and Y.K.; supervision, S.K.; project administration, S.W.; funding acquisition, S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chai, J.J.; O’Sullivan, C.; Gowen, A.A.; Rooney, B.; Xu, J.L. Augmented/mixed reality technologies for food: A review. Trends Food Sci. Technol. 2022, 124, 182–194. [Google Scholar] [CrossRef]
  2. Shamshiri, R.R.; Weltzien, C.; Hameed, I.A.; Yule, I.J.; Grift, T.E.; Balasundram, S.K.; Chowdhary, G. Research and development in agricultural robotics: A perspective of digital farming. Int. J. Agric. Biol. Eng. 2018, 11, 1–14. [Google Scholar] [CrossRef]
  3. Kim, Y.; Kim, S. Automation and optimization of food process using CNN and six-axis robotic arm. Foods 2024, 13, 3826. [Google Scholar] [CrossRef]
  4. Hassoun, A.; Jagtap, S.; Trollman, H.; Garcia-Garcia, G.; Abdullah, N.A.; Goksen, G.; Lorenzo, J.M. Food processing 4.0: Current and future developments spurred by the fourth industrial revolution. Food Control 2023, 145, 109507. [Google Scholar]
  5. Grobbelaar, W.; Verma, A.; Shukla, V.K. Analyzing human robotic interaction in the food industry. J. Phys. Conf. Ser. 2021, 1714, 012032. [Google Scholar] [CrossRef]
  6. Mason, A.; Haidegger, T.; Alvseike, O. Time for Change: The Case of Robotic Food Processing [Industry Activities]. IEEE Robot. Autom. Mag. 2023, 30, 116–122. [Google Scholar] [CrossRef]
  7. Bader, F.; Rahimifard, S. A methodology for the selection of industrial robots in food handling. Innov. Food Sci. Emerg. Technol. 2020, 64, 102379. [Google Scholar] [CrossRef]
  8. Protogeros, G.; Protogerou, A.; Pachni-Tsitiridou, O.; Mifsud, R.G.; Fouskas, K.; Katsaros, G.; Valdramidis, V. Conceptualizing and advancing on extended reality applications in food science and technology. J. Food Eng. 2025, 396, 112557. [Google Scholar] [CrossRef]
  9. Andrews, C.; Southworth, M.K.; Silva, J.N.; Silva, J.R. Extended reality in medical practice. Curr. Treat. Options Cardiovasc. Med. 2019, 21, 18. [Google Scholar] [CrossRef]
  10. Çöltekin, A.; Lochhead, I.; Madden, M.; Christophe, S.; Devaux, A.; Pettit, C.; Hedley, N. Extended reality in spatial sciences: A review of research challenges and future directions. ISPRS Int. J. Geo-Inf. 2020, 9, 439. [Google Scholar] [CrossRef]
  11. Maples-Keller, J.L.; Bunnell, B.E.; Kim, S.J.; Rothbaum, B.O. The use of virtual reality technology in the treatment of anxiety and other psychiatric disorders. Harv. Rev. Psychiatry 2017, 25, 103–113. [Google Scholar] [CrossRef] [PubMed]
  12. Carmigniani, J.; Furht, B.; Anisetti, M.; Ceravolo, P.; Damiani, E.; Ivkovic, M. Augmented reality technologies, systems and applications. Multimed. Tools Appl. 2011, 51, 341–377. [Google Scholar] [CrossRef]
  13. Park, B.J.; Hunt, S.J.; Martin, C., III; Nadolski, G.J.; Wood, B.J.; Gade, T.P. Augmented and mixed reality: Technologies for enhancing the future of IR. J. Vasc. Interv. Radiol. 2020, 31, 1074–1082. [Google Scholar] [CrossRef] [PubMed]
  14. Reiners, D.; Davahli, M.R.; Karwowski, W.; Cruz-Neira, C. The combination of artificial intelligence and extended reality: A systematic review. Front. Virtual Real. 2021, 2, 721933. [Google Scholar] [CrossRef]
  15. Cárdenas-Robledo, L.A.; Hernández-Uribe, Ó.; Reta, C.; Cantoral-Ceballos, J.A. Extended reality applications in Industry 4.0—A systematic literature review. Telemat. Inform. 2022, 73, 101863. [Google Scholar] [CrossRef]
  16. Herur-Raman, A.; Almeida, N.D.; Greenleaf, W.; Williams, D.; Karshenas, A.; Sherman, J.H. Next-generation simulation—Integrating extended reality technology into medical education. Front. Virtual Real. 2021, 2, 693399. [Google Scholar] [CrossRef]
  17. Sharma, R. Extended reality: It’s impact on education. Int. J. Sci. Eng. Res. 2021, 12, 247–251. [Google Scholar]
  18. Xu, C.; Siegrist, M.; Hartmann, C. The application of virtual reality in food consumer behavior research: A systematic review. Trends Food Sci. Technol. 2021, 116, 533–544. [Google Scholar] [CrossRef]
  19. Styliaras, G.D. Augmented reality in food promotion and analysis: Review and potentials. Digital 2021, 1, 216–240. [Google Scholar] [CrossRef]
  20. Ahn, J.; Gaza, H.; Oh, J.; Fuchs, K.; Wu, J.; Mayer, S.; Byun, J. MR-FoodCoach: Enabling a convenience store on mixed reality space for healthier purchases. In Proceedings of the 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Singapore, 17–21 October 2022; pp. 891–892. [Google Scholar]
  21. Angelidis, A.; Plevritakis, E.; Vosniakos, G.C.; Matsas, E. An Open Extended Reality Platform Supporting Dynamic Robot Paths for Studying Human–Robot Collaboration in Manufacturing. Int. J. Adv. Manuf. Technol. 2025, 138, 3–15. [Google Scholar] [CrossRef]
  22. Su, Y.P.; Chen, X.Q.; Zhou, C.; Pearson, L.H.; Pretty, C.G.; Chase, J.G. Integrating Virtual, Mixed, and Augmented Reality into Remote Robotic Applications: A Brief Review of Extended Reality-Enhanced Robotic Systems for Intuitive Telemanipulation and Telemanufacturing Tasks in Hazardous Conditions. Appl. Sci. 2023, 13, 12129. [Google Scholar] [CrossRef]
  23. Akindele, N.; Taiwo, R.; Sarvari, H.; Oluleye, B.I.; Awodele, I.A.; Olaniran, T.O. A state-of-the-art analysis of virtual reality applications in construction health and safety. Results Eng. 2024, 23, 102382. [Google Scholar] [CrossRef]
  24. Oyman, M.; Bal, D.; Ozer, S. Extending the technology acceptance model to explain how perceived augmented reality affects consumers’ perceptions. Comput. Hum. Behav. 2022, 128, 107127. [Google Scholar] [CrossRef]
  25. Monterubbianesi, R.; Tosco, V.; Vitiello, F.; Orilisi, G.; Fraccastoro, F.; Putignano, A.; Orsini, G. Augmented, virtual and mixed reality in dentistry: A narrative review on the existing platforms and future challenges. Appl. Sci. 2022, 12, 877. [Google Scholar] [CrossRef]
  26. Arena, F.; Collotta, M.; Pau, G.; Termine, F. An overview of augmented reality. Computers 2022, 11, 28. [Google Scholar] [CrossRef]
  27. Devagiri, J.S.; Paheding, S.; Niyaz, Q.; Yang, X.; Smith, S. Augmented reality and artificial intelligence in industry: Trends, tools, and future challenges. Expert Syst. Appl. 2022, 207, 118002. [Google Scholar] [CrossRef]
  28. Rakkolainen, I.; Farooq, A.; Kangas, J.; Hakulinen, J.; Rantala, J.; Turunen, M.; Raisamo, R. Technologies for multimodal interaction in extended reality—A scoping review. Multimodal Technol. Interact. 2021, 5, 81. [Google Scholar] [CrossRef]
  29. Bondarenko, V.; Zhang, J.; Nguyen, G.T.; Fitzek, F.H. A universal method for performance assessment of Meta Quest XR devices. In Proceedings of the 2024 IEEE Gaming, Entertainment, and Media Conference (GEM), Stuttgart, Germany, 4–6 June 2024; pp. 1–6. [Google Scholar]
  30. Yoon, D.M.; Han, S.H.; Park, I.; Chung, T.S. Analyzing VR game user experience by genre: A text-mining approach on Meta Quest Store reviews. Electronics 2024, 13, 3913. [Google Scholar] [CrossRef]
  31. Aros, M.; Tyger, C.L.; Chaparro, B.S. Unraveling the Meta Quest 3: An out-of-box experience of the future of mixed reality headsets. In Proceedings of the International Conference on Human-Computer Interaction, Washington, DC, USA, 29 June–4 July 2024; Springer: Cham, Switzerland, 2024; pp. 3–8. [Google Scholar]
  32. Criollo-C, S.; Guerrero-Arias, A.; Samala, A.D.; Arif, Y.M.; Luján-Mora, S. Enhancing the educational model using mixed reality technologies with Meta Quest 3: A usability analysis using IBM-CSUQ. IEEE Access 2025, 13, 56930–56945. [Google Scholar] [CrossRef]
  33. Waisberg, E.; Ong, J.; Masalkhi, M.; Zaman, N.; Sarker, P.; Lee, A.G.; Tavakkoli, A. Apple Vision Pro and the advancement of medical education with extended reality. Can. Med. Educ. J. 2024, 15, 89–90. [Google Scholar] [CrossRef]
  34. Woodland, M.B.; Ong, J.; Zaman, N.; Hirzallah, M.; Waisberg, E.; Masalkhi, M.; Tavakkoli, A. Applications of extended reality in spaceflight for human health and performance. Acta Astronaut. 2024, 214, 748–756. [Google Scholar] [CrossRef]
  35. Waisberg, E.; Ong, J.; Masalkhi, M.; Zaman, N.; Sarker, P.; Lee, A.G.; Tavakkoli, A. The future of ophthalmology and vision science with the Apple Vision Pro. Eye 2024, 38, 242–243. [Google Scholar] [CrossRef] [PubMed]
  36. Park, S.; Bokijonov, S.; Choi, Y. Review of Microsoft HoloLens applications over the past five years. Appl. Sci. 2021, 11, 7259. [Google Scholar] [CrossRef]
  37. Long, Z.; Dong, H.; El Saddik, A. Interacting with New York City data by HoloLens through remote rendering. IEEE Consum. Electron. Mag. 2022, 11, 64–72. [Google Scholar] [CrossRef]
  38. Kumar, M.; Bhatia, R.; Rattan, D. A Survey of Web Crawlers for Information Retrieval. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2017, 7, e1218. [Google Scholar] [CrossRef]
  39. Uzun, E. A Novel Web Scraping Approach Using the Additional Information Obtained from Web Pages. IEEE Access 2020, 8, 61726–61740. [Google Scholar] [CrossRef]
  40. Slater, M. Place Illusion and Plausibility Can Lead to Realistic Behaviour in Immersive Virtual Environments. Philos. Trans. R. Soc. B Biol. Sci. 2009, 364, 3549–3557. [Google Scholar] [CrossRef]
  41. Siegrist, M.; Ung, C.Y.; Zank, M.; Marinello, M.; Kunz, A.; Hartmann, C.; Menozzi, M. Consumers’ food selection behaviors in three-dimensional (3D) virtual reality. Food Res. Int. 2019, 117, 50–59. [Google Scholar] [CrossRef]
  42. Cheah, C.S.; Barman, S.; Vu, K.T.; Jung, S.E.; Mandalapu, V.; Masterson, T.D.; Gong, J. Validation of a virtual reality buffet environment to assess food selection processes among emerging adults. Appetite 2020, 153, 104741. [Google Scholar] [CrossRef]
  43. Allman-Farinelli, M.; Ijaz, K.; Tran, H.; Pallotta, H.; Ramos, S.; Liu, J.; Calvo, R.A. A virtual reality food court to study meal choices in youth: Design and assessment of usability. JMIR Form. Res. 2019, 3, e12456. [Google Scholar] [CrossRef]
  44. Oliver, J.H.; Hollis, J.H. Virtual reality as a tool to study the influence of the eating environment on eating behavior: A feasibility study. Foods 2021, 10, 89. [Google Scholar] [CrossRef]
  45. Gouton, M.A.; Dacremont, C.; Trystram, G.; Blumenthal, D. Validation of food visual attribute perception in virtual reality. Food Qual. Prefer. 2021, 87, 104016. [Google Scholar] [CrossRef]
  46. Meijers, M.H.; Smit, E.S.; de Wildt, K.; Karvonen, S.G.; van der Plas, D.; van der Laan, L.N. Stimulating sustainable food choices using virtual reality: Taking an environmental vs health communication perspective on enhancing response efficacy beliefs. Environ. Commun. 2022, 16, 1–22. [Google Scholar] [CrossRef]
  47. Wan, X.; Qiu, L.; Wang, C. A virtual reality-based study of color contrast to encourage more sustainable food choices. Appl. Psychol. Health Well-Being 2022, 14, 591–605. [Google Scholar] [CrossRef]
  48. Ledoux, T.; Nguyen, A.S.; Bakos-Block, C.; Bordnick, P. Using virtual reality to study food cravings. Appetite 2013, 71, 396–402. [Google Scholar] [CrossRef]
  49. Schroeder, P.A.; Collantoni, E.; Lohmann, J.; Butz, M.V.; Plewnia, C. Virtual reality assessment of a high-calorie food bias: Replication and food-specificity in healthy participants. Behav. Brain Res. 2024, 471, 115096. [Google Scholar] [CrossRef] [PubMed]
  50. Gorman, D.; Hoermann, S.; Lindeman, R.W.; Shahri, B. Using virtual reality to enhance food technology education. Int. J. Technol. Des. Educ. 2022, 32, 1659–1677. [Google Scholar] [CrossRef] [PubMed]
  51. Plechatá, A.; Morton, T.; Perez-Cueto, F.J.; Makransky, G. Why just experience the future when you can change it: Virtual reality can increase pro-environmental food choices through self-efficacy. Technol. Mind Behav. 2022, 3, 11. [Google Scholar] [CrossRef]
  52. Harris, N.M.; Lindeman, R.W.; Bah, C.S.F.; Gerhard, D.; Hoermann, S. Eliciting real cravings with virtual food: Using immersive technologies to explore the effects of food stimuli in virtual reality. Front. Psychol. 2023, 14, 956585. [Google Scholar] [CrossRef]
  53. Ramousse, F.; Raimbaud, P.; Baert, P.; Helfenstein-Didier, C.; Gay, A.; Massoubre, C.; Lavoué, G. Does this virtual food make me hungry? Effects of visual quality and food type in virtual reality. Front. Virtual Real. 2023, 4, 1221651. [Google Scholar] [CrossRef]
  54. Ammann, J.; Hartmann, C.; Peterhans, V.; Ropelato, S.; Siegrist, M. The relationship between disgust sensitivity and behaviour: A virtual reality study on food disgust. Food Qual. Prefer. 2020, 80, 103833. [Google Scholar] [CrossRef]
  55. Bektas, S.; Natali, L.; Rowlands, K.; Valmaggia, L.; Di Pietro, J.; Mutwalli, H.; Cardi, V. Exploring correlations of food-specific disgust with eating disorder psychopathology and food interaction: A preliminary study using virtual reality. Nutrients 2023, 15, 4443. [Google Scholar] [CrossRef] [PubMed]
  56. Gu, C.; Huang, T.; Wei, W.; Yang, C.; Chen, J.; Miao, W.; Sun, J. The effect of using augmented reality technology in takeaway food packaging to improve young consumers’ negative evaluations. Agriculture 2023, 13, 335. [Google Scholar] [CrossRef]
  57. Dong, Y.; Sharma, C.; Mehta, A.; Torrico, D.D. Application of augmented reality in the sensory evaluation of yogurts. Fermentation 2021, 7, 147. [Google Scholar] [CrossRef]
  58. Fritz, W.; Hadi, R.; Stephen, A. From tablet to table: How augmented reality influences food desirability. J. Acad. Mark. Sci. 2023, 51, 503–529. [Google Scholar] [CrossRef]
  59. Honee, D.; Hurst, W.; Luttikhold, A.J. Harnessing augmented reality for increasing the awareness of food waste amongst Dutch consumers. Augment. Hum. Res. 2022, 7, 2. [Google Scholar] [CrossRef]
  60. Mellos, I.; Probst, Y. Evaluating augmented reality for ‘real life’ teaching of food portion concepts. J. Hum. Nutr. Diet. 2022, 35, 1245–1254. [Google Scholar] [CrossRef]
  61. Sonderegger, A.; Ribes, D.; Henchoz, N.; Groves, E. Food talks: Visual and interaction principles for representing environmental and nutritional food information in augmented reality. In Proceedings of the 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Beijing, China, 10–18 October 2019; pp. 98–103. [Google Scholar]
  62. Juan, M.C.; Charco, J.L.; García-García, I.; Mollá, R. An augmented reality app to learn to interpret the nutritional information on labels of real packaged foods. Front. Comput. Sci. 2019, 1, 1. [Google Scholar] [CrossRef]
  63. Capecchi, I.; Borghini, T.; Bellotti, M.; Bernetti, I. Enhancing education outcomes integrating augmented reality and artificial intelligence for education in nutrition and food sustainability. Sustainability 2025, 17, 2113. [Google Scholar] [CrossRef]
  64. Nakano, K.; Horita, D.; Sakata, N.; Kiyokawa, K.; Yanai, K.; Narumi, T. DeepTaste: Augmented reality gustatory manipulation with GAN-based real-time food-to-food translation. In Proceedings of the 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Beijing, China, 14–18 October 2019; pp. 212–223. [Google Scholar]
  65. Han, D.I.D.; Abreu e Silva, S.G.; Schröder, K.; Melissen, F.; Haggis-Burridge, M. Designing immersive sustainable food experiences in augmented reality: A consumer participatory co-creation approach. Foods 2022, 11, 3646. [Google Scholar] [CrossRef]
  66. Milgram, P.; Kishino, F. A Taxonomy of Mixed Reality Visual Displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329. [Google Scholar]
  67. Low, J.Y.; Lin, V.H.; Yeon, L.J.; Hort, J. Considering the application of a mixed reality context and consumer segmentation when evaluating emotional response to tea break snacks. Food Qual. Prefer. 2021, 88, 104113. [Google Scholar] [CrossRef]
  68. Fujii, A.; Kochigami, K.; Kitagawa, S.; Okada, K.; Inaba, M. Development and evaluation of mixed reality co-eating system: Sharing the behavior of eating food with a robot could improve our dining experience. In Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020; pp. 357–362. [Google Scholar]
  69. Nair, M.; Fernandez, R.E. Advancing mixed reality digital twins through 3D reconstruction of fresh produce. IEEE Access 2024, 12, 4315–4327. [Google Scholar] [CrossRef]
  70. Ghavamian, P.; Beyer, J.H.; Orth, S.; Zech, M.J.N.; Müller, F.; Matviienko, A. The Bitter Taste of Confidence: Exploring Audio-Visual Taste Modulation in Immersive Reality. In Proceedings of the 2025 ACM International Conference on Interactive Media Experiences, Stockholm, Sweden, 18–21 June 2025; ACM: New York, NY, USA, 2025; pp. 462–467. [Google Scholar]
  71. Long, J.W.; Masters, B.; Sajjadi, P.; Simons, C.; Masterson, T.D. The Development of an Immersive Mixed-Reality Application to Improve the Ecological Validity of Eating and Sensory Behavior Research. Front. Nutr. 2023, 10, 1170311. [Google Scholar] [CrossRef] [PubMed]
  72. Waterlander, W.E.; Jiang, Y.; Steenhuis, I.H.M.; Mhurchu, C.N. Using a 3D Virtual Supermarket to Measure Food Purchase Behavior: A Validation Study. J. Med. Internet Res. 2015, 17, e3774. [Google Scholar]
  73. Pini, V.; Orso, V.; Pluchino, P.; Gamberini, L. Augmented Grocery Shopping: Fostering Healthier Food Purchases through AR. Virtual Real. 2023, 27, 2117–2128. [Google Scholar] [CrossRef]
  74. Karkar, A.; Salahuddin, T.; Almaadeed, N.; Aljaam, J.M.; Halabi, O. A Virtual Reality Nutrition Awareness Learning System for Children. In Proceedings of the 2018 IEEE Conference on e-Learning, e-Management and e-Services (IC3e), Langkawi, Malaysia, 26–28 November 2018; pp. 97–102. [Google Scholar]
  75. Kalimuthu, I.; Karpudewan, M.; Baharudin, S.M. An Interdisciplinary and Immersive Real-Time Learning Experience in Adolescent Nutrition Education through Augmented Reality Integrated with Science, Technology, Engineering, and Mathematics. J. Nutr. Educ. Behav. 2023, 55, 914–923. [Google Scholar] [CrossRef]
  76. Kelmenson, S. Between the farm and the fork: Job quality in sustainable food systems. Agric. Hum. Values 2023, 40, 317–358. [Google Scholar]
  77. Gottlieb, N.; Jungwirth, I.; Glassner, M.; de Lange, T.; Mantu, S.; Forst, L. Immigrant workers in the meat industry during COVID-19: Comparing governmental protection in Germany, the Netherlands, and the USA. Glob. Health 2025, 21, 10. [Google Scholar] [CrossRef]
  78. Anderson, J.D.; Mitchell, J.L.; Maples, J.G. Invited review: Lessons from the COVID-19 pandemic for food supply chains. Appl. Anim. Sci. 2021, 37, 738–747. [Google Scholar] [CrossRef]
  79. Moerman, F.; Kastelein, J.; Rugh, T. Hygienic Design of Food Processing Equipment. In Food Safety Management; Elsevier: Amsterdam, The Netherlands, 2023; pp. 623–678. [Google Scholar]
  80. Rosati, G.; Oscari, F.; Barbazza, L.; Faccio, M. Throughput maximization and buffer design of robotized flexible production systems with feeder renewals and priority rules. Int. J. Adv. Manuf. Technol. 2016, 85, 891–907. [Google Scholar] [CrossRef]
  81. Wang, Z.; Hirai, S.; Kawamura, S. Challenges and opportunities in robotic food handling: A review. Front. Robot. AI 2022, 8, 789107. [Google Scholar] [CrossRef] [PubMed]
  82. Talpur, M.S.H.; Shaikh, M.H. Automation of mobile pick and place robotic system for small food industry. arXiv 2012, arXiv:1203.4475. [Google Scholar]
  83. Lyu, Y.; Wu, F.; Wang, Q.; Liu, G.; Zhang, Y.; Jiang, H.; Zhou, M. A review of robotic and automated systems in meat processing. Front. Robot. AI 2025, 12, 1578318. [Google Scholar] [CrossRef]
  84. McClintock, H.; Temel, F.Z.; Doshi, N.; Koh, J.S.; Wood, R.J. The milliDelta: A high-bandwidth, high-precision, millimeter-scale Delta robot. Sci. Robot. 2018, 3, eaar3018. [Google Scholar] [CrossRef]
  85. Mehmood, Y.; Cannella, F.; Cocuzza, S. Analytical modeling, virtual prototyping, and performance optimization of Cartesian robots: A comprehensive review. Robotics 2025, 14, 62. [Google Scholar] [CrossRef]
  86. Matheson, E.; Minto, R.; Zampieri, E.G.; Faccio, M.; Rosati, G. Human–robot collaboration in manufacturing applications: A review. Robotics 2019, 8, 100. [Google Scholar] [CrossRef]
  87. Ngui, I.; McBeth, C.; He, G.; Santos, A.C.; Soares, L.; Morales, M.; Amato, N.M. Extended Reality System for Robotic Learning from Human Demonstration. In Proceedings of the 2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Orlando, FL, USA, 22–26 March 2025; IEEE: Piscataway, NJ, USA, 2025; pp. 1304–1305. [Google Scholar]
  88. Zhang, Y.; Gao, P.; Wang, Z.; He, Q. Research on Status Monitoring and Positioning Compensation System for Digital Twin of Parallel Robots. Sci. Rep. 2025, 15, 7432. [Google Scholar] [CrossRef]
  89. Pai, Y.S.; Yap, H.J.; Md Dawal, S.Z.; Ramesh, S.; Phoon, S.Y. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment. Sci. Rep. 2016, 6, 27380. [Google Scholar]
  90. Badia, S.B.I.; Silva, P.A.; Branco, D.; Pinto, A.; Carvalho, C.; Menezes, P.; Rodrigues, L.; Almeida, S.F.; Pilacinski, A. Virtual Reality for Safe Testing and Development in Collaborative Robotics: Challenges and Perspectives. Electronics 2022, 11, 1726. [Google Scholar] [CrossRef]
  91. Caldwell, D.G. Robotics and Automation in the Food Industry: Current and Future Technologies; Elsevier: Amsterdam, The Netherlands, 2012. [Google Scholar]
  92. Blanes Campos, C.; Mellado Arteche, M.; Ortiz Sánchez, M.C.; Valera Fernández, Á. Technologies for Robot Grippers in Pick and Place Operations for Fresh Fruits and Vegetables. In Proceedings of the VII Congreso Ibérico de Agroingeniería, Logroño, Spain, 28–30 September 2011; pp. 1480–1487. [Google Scholar]
  93. Lien, T.K. Gripper Technologies for Food Industry Robots. In Robotics and Automation in the Food Industry; Caldwell, D.G., Ed.; Woodhead Publishing: Cambridge, UK, 2013; pp. 143–170. [Google Scholar]
  94. Salvietti, G.; Iqbal, Z.; Malvezzi, M.; Eslami, T.; Prattichizzo, D. Soft Hands with Embodied Constraints: The Soft ScoopGripper. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2758–2764. [Google Scholar]
  95. Gebbers, R.; Adamchuk, V.I. Precision Agriculture and Food Security. Science 2010, 327, 828–831. [Google Scholar] [CrossRef]
  96. Hillers, V.N.; Medeiros, L.; Kendall, P.; Chen, G.; Dimascola, S. Consumer Food-Handling Behaviors Associated with Prevention of Foodborne Illnesses. J. Food Prot. 2003, 66, 1893–1899. [Google Scholar] [CrossRef] [PubMed]
  97. Curi, P.R.; Pires, E.J.; Bornia, A.C. Challenges and Opportunities in the Adoption of Industry 4.0 by the Food and Beverage Sector. Procedia CIRP 2020, 93, 268–273. [Google Scholar]
  98. Masey, R.J.M.; Gray, J.O.; Dodd, T.J.; Caldwell, D.G. Guidelines for the Design of Low-Cost Robots for the Food Industry. Ind. Robot 2010, 37, 509–517. [Google Scholar] [CrossRef]
  99. Derossi, A.; Di Palma, E.; Moses, J.A.; Santhoshkumar, P.; Caporizzi, R.; Severini, C. Avenues for non-conventional robotics technology applications in the food industry. Food Res. Int. 2023, 173, 113265. [Google Scholar] [CrossRef]
  100. Deponte, H.; Tonda, A.; Gottschalk, N.; Bouvier, L.; Delaplace, G.; Augustin, W.; Scholl, S. Two complementary methods for the computational modeling of cleaning processes in food industry. Comput. Chem. Eng. 2020, 135, 106733. [Google Scholar] [CrossRef]
  101. Figgis, B.; Bermudez, V.; Garcia, J.L. PV module vibration by robotic cleaning. Sol. Energy 2023, 250, 168–172. [Google Scholar] [CrossRef]
  102. Tang, X.; Qiu, F.; Li, H.; Zhang, Q.; Quan, X.; Tao, C.; Liu, Z. Investigation of the intensified chaotic mixing and flow structures evolution mechanism in stirred reactor with torsional rigid-flexible impeller. Ind. Eng. Chem. Res. 2023, 62, 1984–1996. [Google Scholar] [CrossRef]
  103. Kalayci, O.; Pehlivan, I.; Akgul, A.; Coskun, S.; Kurt, E. A new chaotic mixer design based on the Delta robot and its experimental studies. Math. Probl. Eng. 2021, 2021, 6615856. [Google Scholar] [CrossRef]
  104. Mizrahi, M.; Golan, A.; Mizrahi, A.B.; Gruber, R.; Lachnise, A.Z.; Zoran, A. Digital gastronomy: Methods & recipes for hybrid cooking. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, Tokyo, Japan, 16–19 October 2016; pp. 541–552. [Google Scholar]
  105. Zoran, A.; Gonzalez, E.A.; Mizrahi, A.B. Cooking with computers: The vision of digital gastronomy. In Gastronomy and Food Science; Elsevier: Amsterdam, The Netherlands, 2021; pp. 35–53. [Google Scholar]
  106. Danno, D.; Hauser, S.; Iida, F. Robotic Cooking Through Pose Extraction from Human Natural Cooking Using OpenPose. In Intelligent Autonomous Systems 16; Ang, M.H., Jr., Asama, H., Lin, W., Foong, S., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2022; pp. 288–298. [Google Scholar]
  107. Liu, J.; Chen, Y.; Dong, Z.; Wang, S.; Calinon, S.; Li, M.; Chen, F. Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects. IEEE Robot. Autom. Lett. 2022, 7, 5159–5166. [Google Scholar] [CrossRef]
  108. Sochacki, G.; Hughes, J.; Hauser, S.; Iida, F. Closed-loop robotic cooking of scrambled eggs with a salinity-based ‘taste’ sensor. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 594–600. [Google Scholar]
  109. Abdurrahman, E.E.M.; Ferrari, G. Digital Twin applications in the food industry: A review. Front. Sustain. Food Syst. 2025, 9, 1538375. [Google Scholar] [CrossRef]
  110. Verboven, P.; Defraeye, T.; Datta, A.K.; Nicolai, B. Digital twins of food process operations: The next step for food process models? Curr. Opin. Food Sci. 2020, 35, 79–87. [Google Scholar] [CrossRef]
  111. Koulouris, A.; Misailidis, N.; Petrides, D. Applications of process and digital twin models for production simulation and scheduling in the manufacturing of food ingredients and products. Food Bioprod. Process. 2021, 126, 317–333. [Google Scholar] [CrossRef]
  112. Maheshwari, P.; Kamble, S.; Belhadi, A.; Mani, V.; Pundir, A. Digital twin implementation for performance improvement in process industries—A case study of food processing company. Int. J. Prod. Res. 2023, 61, 8343–8365. [Google Scholar] [CrossRef]
  113. Kaarlela, T.; Padrao, P.; Pitkäaho, T.; Pieskä, S.; Bobadilla, L. Digital twins utilizing XR-technology as robotic training tools. Machines 2022, 11, 13. [Google Scholar] [CrossRef]
  114. Miner, N.E.; Stansfield, S.A. An interactive virtual reality simulation system for robot control and operator training. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, San Diego, CA, USA, 8–13 May 1994; pp. 1428–1435. [Google Scholar]
  115. Burdea, G.C. Invited review: The synergy between virtual reality and robotics. IEEE Trans. Robot. Autom. 1999, 15, 400–410. [Google Scholar] [CrossRef]
  116. Crespo, R.; García, R.; Quiroz, S. Virtual reality application for simulation and off-line programming of the Mitsubishi MoveMaster RV-M1 robot integrated with the Oculus Rift to improve students training. Procedia Comput. Sci. 2015, 75, 107–112. [Google Scholar] [CrossRef]
  117. He, Z.; Liu, C.; Chu, X.; Negenborn, R.R.; Wu, Q. Dynamic Anti-Collision A-Star Algorithm for Multi-Ship Encounter Situations. Appl. Ocean Res. 2022, 118, 102995. [Google Scholar] [CrossRef]
  118. González de Cosío Barrón, A.; Gonzalez Almaguer, C.A.; Berglund, A.; Apraiz Iriarte, A.; Saavedra Gastelum, V.; Peñalva, J. Immersive learning in agriculture: XR design of robotic milk production processes. In Proceedings of the DS 131: International Conference on Engineering and Product Design Education (E&PDE 2024), Birmingham, UK, 5–6 September 2024; pp. 575–580. [Google Scholar]
  119. Tian, X.; Pan, B.; Bai, L.; Wang, G.; Mo, D. Fruit picking robot arm training solution based on reinforcement learning in digital twin. J. ICT Stand. 2023, 11, 261–282. [Google Scholar] [CrossRef]
  120. Singh, R.; Seneviratne, L.; Hussain, I. A Deep Learning-Based Approach to Strawberry Grasping Using a Telescopic-Link Differential Drive Mobile Robot in ROS-Gazebo for Greenhouse Digital Twin Environments. IEEE Access 2024, 13, 361–381. [Google Scholar] [CrossRef]
  121. Jin, S. A Study on Innovation Resistance and Adoption Regarding Extended Reality Devices. J. Korea Contents Assoc. 2021, 21, 918–940. [Google Scholar]
Figure 1. Positioning of AR, MR, and VR within the XR Spectrum.
Figure 2. Co-occurrence network of keywords associated with XR, reality, and food in article titles.
Figure 3. Yearly Frequency of XR–Related Research Publications.
Figure 4. Examples of XR Applications in Food Contexts: (a) VR Headset Usage, (b) Virtual Food Environment (VR), (c) Nutritional Overlay on Real Food (AR), and (d) Blended Cafe Scene Combining Virtual and Real Elements (MR).
Figure 5. The historical progression of industrial automation.
Figure 6. Different types of robotic end-effectors according to their handling positions at (A) top surface, (B) side surface, (C) bottom surface, (D) top and side surfaces, (E) side and bottom surfaces, and (F) top, side, and bottom surfaces. Red star marks indicate contact positions [81].
Figure 7. Integrated Decision Framework for Food Robotics Selection Based on FIRM Methodology.
Figure 8. Examples of programmable scoring paths of dough obtained by robotic technology (Adapted by author from two sources: Left image from YouTube video (https://www.youtube.com/watch?v=5Qry2UImb5k (accessed on 18 September 2025)), right image visually enhanced from news article (https://automatykaonline.pl/Artykuly/Robotyka/Szybkie-roboty-podajace.-TP80-Fast-Picker-firmy-STAeUBLI (accessed on 18 September 2025)) using Gemini AI. Both images used with permission.).
Figure 9. The architectural framework of an XR-integrated robotic digital twin system.
Figure 10. (a) Real world Mitsubishi Movemaster RV-M1; (b) 3D model (adapted by authors).
Table 1. Summary of VR Applications in Food-Related Research.

| Category | Technical Specifications (H/W, S/W, Key Feature) | Key Outcome (Quantitative) | Summary of Findings | Reference |
| SVC | H/W: HTC Vive; S/W: Unity; Key Feature: Eating pizza rolls while measuring heart rate, skin temperature, and mastication data. | The restaurant scene significantly increased presence scores (5.0 vs. 3.9, p < 0.006) and heart rate (83 vs. 79 bpm, p = 0.02) compared to a blank room, but did not significantly affect total food intake (p = 0.98). | The virtual eating environment altered participants' sense of presence and physiological arousal but did not significantly change their total food intake or sensory ratings. | Oliver & Hollis [44] |
| SVC | H/W: HTC Vive; S/W: Unity; Key Feature: Used photogrammetry to create highly realistic virtual cookie models. | Perceptual differences between cookie types were greater than the differences between real and virtual versions of the same cookie, with 33 of 40 descriptors discriminating products similarly. | The visual perception of virtual and real cookies was highly consistent, with only minor discrepancies in brightness and color contrast. | Gouton et al. [45] |
| EFPB | H/W: HTC Vive; S/W: Unity; Key Feature: Interactive pop-ups with impact information appeared on product pickup. | Impact pop-ups significantly increased pro-environmental food choices (F(4, 241) = 16.80, p < 0.001), an effect mediated by higher personal response efficacy. | VR pop-ups boosted sustainable choices by increasing personal efficacy, an effect consistent across different message types (health vs. environment, text vs. visual). | Meijers et al. [46] |
| EFPB | H/W: 17-in. computer monitor (Desktop VR); S/W: Vizard 4.0; Key Feature: Used background color (red vs. green) as a behavioral nudge for food choice. | A red (vs. green) table background significantly reduced meat-heavy meal choices (61.2% vs. 66.9%; p = 0.007). | A red table background acted as a nudge, reducing the visual appeal of meat and prompting more plant-based choices. | Wan et al. [47] |
| SSV | H/W: HMD; S/W: NeuroVR; Key Feature: Compared food craving levels induced by four different cues: neutral VR, food VR, food photos, and real food. | For primed participants, VR-induced cravings were significantly higher than neutral cues (p < 0.05), similar to food photos, but significantly lower than real food (p < 0.05). | VR food stimuli elicited craving levels comparable to food photographs, but significantly less than real food. | Ledoux et al. [48] |
| MBE | H/W: Oculus Rift DK2; S/W: Unity; Key Feature: Used hand-motion tracking to measure reaction times for grasping (approach) vs. pushing (avoidance) tasks. | Motion-tracking data revealed that while push responses were comparable, grasping and collecting high-calorie food was significantly faster than for low-calorie food (e.g., object contact time, p = 0.021; collection time, p = 0.018). | VR motion-tracking revealed a motor-based approach bias, with healthy participants grasping high-calorie foods faster than low-calorie or neutral items. | Schroeder et al. [49] |
Table 2. Summary of AR Applications in Food-Related Research.

| Category | Technical Specifications (H/W, S/W, Key Feature) | Key Outcome (Quantitative) | Summary of Findings | Reference |
| SCSA | H/W: Mobile devices; S/W: Custom mobile AR application; Key Feature: Used AR to superimpose food items into real-time environments and compared responses with non-AR formats. | In a field experiment at a restaurant (Study 1), diners who viewed desserts in AR were significantly more likely to purchase than those using a standard digital menu (41.2% vs. 18.0%; p = 0.01). | AR-based food visualizations boosted desirability and purchase intent by enhancing personal relevance and process-oriented mental simulation, consistently across food types and devices. | Fritz et al. [58] |
| ENSA | H/W: Mobile phone, optional Aryzon headset; S/W: Aryzon AR SDK, Unity; Key Feature: AR app that visualizes catering food waste by projecting 3D models into users' environments. | In a pilot evaluation (N = 19), 58% of participants rated the app as motivating for food waste reduction (4–5 on a 5-point scale), 60% agreed it improved their understanding of waste scale, and all participants reported the waste was larger than expected. | AR visualization of food waste data increased consumer awareness and comprehension of waste quantities, showing potential to incentivize reduction behaviors, though tested on a small sample. | Honee et al. [59] |
| ENSA | H/W: Smartphone; S/W: JavaScript libraries, Blender; Key Feature: Quasi-experimental study comparing an AR food portion app (1:1 scale) with an online tool and infographic control. | In a pre-test/post-test comparison of estimation accuracy, the AR tool group showed the highest improvement (+12.2%), outperforming both the online tool group (+11.6%) and the infographic control group, which showed a decrease (−1.7%). | The AR tool was the most effective method for improving the accuracy of nutrition students' food portion size estimations compared to an online tool and a traditional infographic. | Mellos & Probst [60] |
| DPAF | H/W: OnePlus 5T smartphone; S/W: Custom mobile AR application; Key Feature: Compared AR vs. static-page app for presenting environmental and nutritional food information. | Between-subjects study (N = 84): AR users learned significantly more than static users (F(1, 78) = 4.8, p < 0.05), while both versions scored highly on usability (mean SUS = 86.4). | AR enhanced user learning about food products without compromising usability or aesthetics, supporting its credibility as a medium for food information. | Sonderegger et al. [61] |
Table 3. Comparative Analysis of XR Applications Across Key Food-Related Domains.
Table 3. Comparative Analysis of XR Applications Across Key Food-Related Domains.

| Application Domain | XR Technology | Technical Specifications (H/W, S/W, Method) | Key Outcome (Quantitative) | Summary of Efficacy & Limitations | Reference |
|---|---|---|---|---|---|
| Research on Contextual Effects of the Eating Experience | VR | H/W: HTC Vive; S/W: Unity; Method: Consumed real food within a fully virtual environment. | Virtual restaurant increased presence (p < 0.006) and arousal (p = 0.02), but had no significant effect on total intake (p = 0.98) or sensory ratings. | Efficacy: Provides high experimental control for studying psychological/physiological responses. Limitation: Bulky HMD setup can disrupt natural eating behavior and may not affect key outcomes such as intake. | [44] |
| | AR | H/W: Meta Quest 3; S/W: Unity; Method: Drank sugar-water through a straw with AR visual filters and synchronized audio cues. | Sweet-associated pink filter reduced bitterness on its own, but paradoxically increased bitterness when combined with a sweet-associated audio cue (p = 0.044). | Efficacy: Enables natural interaction with real food and drinks while studying subtle crossmodal effects. Limitation: Restricted to simple chromatic overlays; cannot simulate richer environmental contexts. | [70] |
| | MR | H/W: Meta Quest Pro; S/W: Unity; Method: Consumed real food with hands and tabletop visible via passthrough, embedded in a virtual restaurant. | Experts rated MR as more ecologically valid than a lab booth but less than a real restaurant (mean 72.6/100). | Efficacy: Offers a methodological “middle ground,” merging VR’s immersion with AR’s realism to balance control and ecological validity. Limitation: Dependent on passthrough quality (resolution, latency) for a naturalistic experience. | [71] |
| Supermarket Food Choice Studies | VR | H/W: PC (keyboard/mouse); S/W: Unity; Method: Validated a desktop 3D virtual supermarket by comparing purchases with real grocery receipts. | Top four food groups matched real shopping; significant differences in 6 of 18 categories, notably dairy (+6.5%, p < 0.001). | Efficacy: Suitable for tracking overall purchasing patterns. Limitation: Less accurate for specific categories (e.g., fresh produce); lacks HMD immersion. | [72] |
| | AR | H/W: Microsoft HoloLens; S/W: Unity, HoloToolkit; Method: Compared an AR supermarket (3D models + nutritional overlays) with traditional packaging. | AR group more often chose high-nutrition products (p < 0.001), relied more on nutrition information (p = 0.034), and spent more time exploring (p = 0.02). | Efficacy: Effective at shifting attention to nutritional data and promoting healthier choices. Limitation: No real purchase context (no prices); HMD burden; limited student sample. | [73] |
| Nutrition Education | VR | H/W: Oculus DK2; S/W: Vizard; Method: Children prepared a virtual breakfast in immersive VR, compared with paper- and narrative-based learning. | VR group quiz score 87% (narrative 88%, paper 85%); task time longer in VR (112 s vs. 38 s). | Efficacy: Highly engaging and effective for immediate knowledge transfer. Limitation: Longer completion time; requires HMD hardware; cultural generalizability not tested. | [74] |
| | AR | H/W: Smartphones (Android/iOS); S/W: Vuforia Engine with Unity; Method: An 8-week AR nutrition curriculum (8 activities + 3 STEM projects) grounded in Kolb’s experiential learning theory. | Statistically significant improvements in adolescents’ knowledge (mean score +3.82), attitude (+1.88), and self-reported behavior (+1.04), with p < 0.001 for all changes. | Efficacy: Improved knowledge, attitudes, and behaviors; effective as a long-term, structured, and scalable curriculum within formal schools. Limitation: Effects reflect the whole curriculum rather than AR alone; tested in one school with a limited sample, limiting generalizability. | [75] |
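The p-values in Table 3 typically come from within-subject designs, in which each participant rates the same food or drink under two XR conditions. As a minimal sketch of this kind of paired comparison, the Python snippet below runs a paired t-test on fabricated placeholder bitterness ratings; the numbers are illustrative assumptions and are not data from [70] or any other cited study.

```python
# Minimal sketch of the within-subject comparison behind p-values like those
# in Table 3: paired ratings of one attribute (here, bitterness) with and
# without an AR color filter. All values below are fabricated placeholders.
import numpy as np
from scipy.stats import ttest_rel

# One entry per participant: bitterness rating on a 0-100 visual analogue scale.
baseline = np.array([62, 55, 70, 48, 66, 59, 73, 51])      # no overlay
with_filter = np.array([54, 50, 61, 47, 58, 55, 66, 49])   # sweet-associated filter

# Paired t-test: each participant serves as their own control.
t_stat, p_value = ttest_rel(baseline, with_filter)
print(f"mean change = {np.mean(with_filter - baseline):+.1f} points, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```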
Table 4. Characteristics and XR Integration Potential by Robot Type for the Food Industry.

| Robot Type | Key Characteristics | Food Industry Suitability | XR Integration Potential | Reference |
|---|---|---|---|---|
| Articulated Robot | Human-arm-like structure; high degrees of freedom; wide working range | Meat processing, packaging, palletizing | Intuitive control and simulation of complex movements in virtual environments | [87] |
| Parallel Robot | Multiple arms connected to a single platform; high-speed, high-precision motion within a limited workspace | Sorting and classification, packaging | Status monitoring and position compensation | [88] |
| Cartesian Robot | Linear motion along X-Y-Z axes; simple structure with high precision and load-bearing capacity | Packaging, palletizing, and other simple, repetitive downstream processes | Intuitive tuning of path and speed profiles and real-time collision verification (see the sketch below the table) | [89] |
| Collaborative Robot | Works alongside human operators without safety fencing; intuitive programming; high flexibility | Quality inspection, collaborative assembly | Safe simulation of hazardous scenarios | [90] |
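To make the Cartesian-robot entry concrete, the following hypothetical Python sketch samples a trapezoidal speed profile for a single linear axis and checks each waypoint against a keep-out zone, the kind of predictive collision verification an XR overlay could visualize before the physical robot moves. It is not drawn from any cited system; all class names and numeric parameters are illustrative assumptions.

```python
# Hypothetical sketch: trapezoidal speed profile for one Cartesian axis,
# validated against a keep-out zone before execution.
from dataclasses import dataclass

@dataclass
class KeepOutZone:
    """Axis-aligned box the tool must not enter (e.g., a mixer housing)."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    z_min: float
    z_max: float

    def contains(self, p: tuple) -> bool:
        x, y, z = p
        return (self.x_min <= x <= self.x_max and
                self.y_min <= y <= self.y_max and
                self.z_min <= z <= self.z_max)

def trapezoidal_profile(distance: float, v_max: float, accel: float, dt: float = 0.01):
    """Yield (time, travelled distance) samples for accelerate-cruise-decelerate motion."""
    t_acc = v_max / accel
    d_acc = 0.5 * accel * t_acc ** 2
    if 2 * d_acc > distance:                 # short move: triangular profile
        t_acc = (distance / accel) ** 0.5
        v_max = accel * t_acc
        d_acc = distance / 2
    t_cruise = (distance - 2 * d_acc) / v_max
    t_total = 2 * t_acc + t_cruise
    t = 0.0
    while t <= t_total:
        if t < t_acc:                        # acceleration phase
            s = 0.5 * accel * t ** 2
        elif t < t_acc + t_cruise:           # constant-speed cruise
            s = d_acc + v_max * (t - t_acc)
        else:                                # deceleration phase
            s = distance - 0.5 * accel * (t_total - t) ** 2
        yield t, s
        t += dt

# Validate a straight-line X-axis move (1.0 m) against the keep-out zone.
zone = KeepOutZone(0.40, 0.60, -0.10, 0.10, 0.00, 0.30)
start = (0.0, 0.0, 0.1)
for t, s in trapezoidal_profile(distance=1.0, v_max=0.5, accel=1.0):
    p = (start[0] + s, start[1], start[2])
    if zone.contains(p):
        print(f"collision predicted at t = {t:.2f} s, x = {p[0]:.3f} m")
        break
```

In a deployed XR workflow, the same samples could drive a virtual tool along the planned path so that an operator sees the predicted intrusion and retunes the profile interactively.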
Table 5. Types and Characteristics of Food Grippers [92,93,94].

| Gripper Type | Description | Food Texture Compatibility and Typical Applications | Hygiene Compliance | XR Integration Potential |
|---|---|---|---|---|
| Pinching | Mechanical gripping between two or more fingers; grips via friction between the fingers and the part, and releases by opening the fingers. | Rigid, semi-rigid non-deformable, deformable, non-sticky, slippery. Used for pick-and-place operations in baked goods production. | Food residue may become trapped in mechanical joints, making cleaning difficult. | Simulation that prevents product damage by adjusting grip force in a virtual environment |
| Enclosing | Claw/jaw-like attachments encompass the part to achieve a partial or full grip; release is achieved by opening the apparatus. | Rigid, semi-rigid non-deformable, deformable, non-sticky, slippery. Used for sorting, packaging, and palletizing fruits and vegetables. | Food residue may become trapped in mechanical joints, making cleaning difficult. | Simulating grip strategies and forces for complex food shapes |
| Pinning | Inserts one or more pins into the part for a surface or deep grip through penetration; releases by retracting the pins. | Rigid, semi-rigid non-deformable, slippery. Used for pick-and-place operations with meat, poultry, fish, and seafood. | May create microbial contamination pathways. | Simulation for penetration-depth optimization; virtual training system for penetration points on specific foods |
| Pneumatic | Grasps using air or pressurized gas through a vacuum; releases by removing the pressure. | Rigid, semi-rigid non-deformable, deformable; smooth-surfaced, non-sticky, slippery. Used for egg pick-and-place, packaging, and palletizing. | Food residue may accumulate. | Real-time AR overlays for monitoring vacuum pressure and seal integrity |
| Freezing | Forms ice instantaneously at the interface between the gripper and the food, then melts the ice instantly to release. | Rigid, non-rigid, semi-rigid non-deformable, deformable; smooth-surfaced. Used for pick-and-place of meat, poultry, fish, seafood, or frozen fruits and vegetables. | Risk of microbial growth during freezing and thawing. | Visualization of temperature gradients in virtual environments and optimization of freeze–thaw cycles |
| Levitating | Based on Bernoulli’s principle; lifts parts using differential air velocity and releases by blocking the airflow. | Rigid, non-rigid, semi-rigid non-deformable, deformable; smooth-surfaced. Used for pick-and-place of soft, light foods such as baked goods. | Theoretically non-contact and hygienically advantageous, though further research is needed. | Simulating airflow-based levitation for delicate handling of light foods (see the worked example below) |
| Scooping | Flat or parabolic gripper that picks up food with a sweeping motion and releases it by tilting. | Rigid, non-rigid, semi-rigid non-deformable, slippery, non-sticky. Used for sauces, powders, etc. | Food residue may accumulate. | Predicting material skew and spillage during scooping |
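The levitating gripper in Table 5 lends itself to a short worked example. Per Bernoulli’s principle, fast radial airflow between the gripper face and the product lowers the static pressure there by roughly ΔP ≈ ½ρv², and the gripper holds the product when the resulting lift F = ΔP·A exceeds its weight. The Python sketch below is a back-of-the-envelope feasibility check of the kind an XR simulation might surface; all numbers are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope check for a Bernoulli (levitating) gripper: the
# low-pressure region created by fast airflow must out-lift the product's
# weight. Numbers are illustrative, not from any cited gripper.
import math

RHO_AIR = 1.2   # air density, kg/m^3 (roughly sea level, ~20 degC)
G = 9.81        # gravitational acceleration, m/s^2

def bernoulli_lift(air_speed: float, face_diameter: float) -> float:
    """Approximate lift (N): dP = 0.5 * rho * v^2 acting over the face area.
    Ignores viscous losses and flow development, so treat it as an upper bound."""
    area = math.pi * (face_diameter / 2) ** 2
    delta_p = 0.5 * RHO_AIR * air_speed ** 2
    return delta_p * area

# Can a 40 mm gripper face with 60 m/s airflow hold a 30 g bread roll?
mass = 0.030                                   # kg
lift = bernoulli_lift(air_speed=60.0, face_diameter=0.040)
weight = mass * G
print(f"lift = {lift:.2f} N vs weight = {weight:.2f} N -> "
      f"{'holds' if lift > weight else 'too heavy'}")
```

Because the estimate is an upper bound, a virtual commissioning tool would still need empirical correction factors before such a gripper is trusted with delicate baked goods.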